BY JENEE GREGOR

April is designated as Earth Month, with a multitude of events that showcase ways to be more eco-aware and conscious in daily activities. An Earth Day event has been put on at WCC for the past 20 years, and this year Dale Petty, the head of planning from the Sustainability Council, along with Student Activities, showed the school's commitment to being more sustainable. "I like to think of it as Earth Month, and we do whatever we can," Petty said.

One of the graphic design classes created different posters for Earth Day that were put up all over the school, giving the students a sense of involvement and experience. More tables were set up than can be mentioned, and all the organizations were educating attendees about what they do for the earth and how others can get involved.

Wheels in Motion bike shop representative Matt Yost was showing off some of the shop's road bikes and talking about its "Go by Bike Top Ten List." Plus, the infinite gas mileage from riding a bike is a benefit, said Yost.

Friends of the B2B, or Border to Border bike trail, came to talk about their events and encourage people to do more biking. The B2B trail starts in Ypsilanti, heads through Ann Arbor, and will eventually go all the way to Dexter. "We are trying to get students and staff to use the trail that is right outside their door," said Bruce Geffen of Friends of the B2B. The group also has a Spring Ride, put on by Bike Ypsi, with 3-, 15-, 30- and 40-mile routes.

The Huron-Clinton Metroparks came out to hand out maps and give details about using the parks and what they are doing for the environment. Mark Irish, an interpreter with the naturalist program, offered this piece of advice for people looking to be more Earth-forward in the area: "Let your native plants grow." Native plants are those that grew naturally in this area before European settlement, and they are better for maintaining a balanced ecosystem.
Irish mentioned that in the parks they are removing invasive plants and doing more native plantings to maintain the ecosystem.

The Ann Arbor and Ypsilanti District Libraries had tables where representatives made crafts, including paper cutouts and buttons, and handed them out to passersby. The Ypsilanti District Library has a seed library, much like WCC's, and has an upcoming workshop on seed-saving practices, taught by Stefanie Stauffer, a WCC instructor.

Recycle Ann Arbor brought information about how to be more involved with recycling and what resources are available within Ann Arbor. "Recycle Ann Arbor is a non-profit contracted by the city, not part of the city system," said Lisa Perschke, a recycling program specialist, master composter and gardener. Recycle Ann Arbor attended to help people sign up for programs and to educate. Perschke mentioned that the organization runs the re-use center in Ann Arbor and helps people find a place for hard-to-recycle items: if something can possibly be recycled, they are the ones who make it happen.

The WCC Core Garden showcased some of its accomplishments and asked for volunteers to work in the hoop houses, or enclosed greenhouses, near campus. Most of the food they grow goes to Garrett's, the restaurant in the Student Center, creating a small food loop within the campus.

The City of Ann Arbor was educating people on what it is doing for the community, and it has lofty yet attainable goals. Josh MacDonald, an environmental affiliate with the city, shared that the city hopes to cut carbon emissions by 25 percent by 2025, and by 80 percent by 2050, by implementing more and more of the Climate Action Plan.

April is Earth Month, but being conscious shouldn't be relegated to April alone: take shorter showers, turn off some lights, and plant a tree at any time. Each small step taken now makes the next one easier to think of in the future.
EARTH DAY FUN FACTS:
- Earth Day originated in 1970, spurred by a massive 1969 oil spill in Santa Barbara, Calif.
- Earth Day is April 22, or the last Friday of April.
- Earth Day inspired support which led to the creation of the Environmental Protection Agency and contributed to the passage of the Clean Air Act.
Source: https://www.washtenawvoice.com/2016/04/18/earth-day-celebration-brings-light-earth-friendly-year/
January 10, 2014

Radiation levels around the boundary of the crippled Fukushima No. 1 nuclear plant have risen to eight times the government standard of 1 millisievert per year, Tokyo Electric Power Co. said. The Nuclear Regulation Authority is scheduled to hold a meeting Jan. 10 to discuss countermeasures for a southern area on the plant site that has long been a source of problems.

A level of 8 millisieverts per year was estimated as of December near an area with many storage tanks containing highly radioactive water, company officials said. After water leaks from underground tanks on the plant's premises were found last April, the utility transferred radioactive water to the aboveground storage tanks near the southern boundary, TEPCO officials said. The readings there were estimated at 7.8 millisieverts per year as of May.

This article was posted: Friday, January 10, 2014 at 11:28 am
Source: http://www.prisonplanet.com/radiation-levels-near-fukushima-plant-boundary-8-times-the-government-standard.html
The newborn is new to the world, and the little one's mind is still developing, so it is no surprise that babies struggle to follow a regular routine. A baby's only medium of communication is crying. You may have noticed your infant crying during sleep. It is nothing unusual, but quite natural. Still, you may be curious to know why babies cry in their sleep, and what to do when you notice your baby crying in its sleep. Read this article to find out.

Why Do Babies Cry In Their Sleep?

Babies rarely sleep without making any noise. At times, your baby may also cry during sleep. As a parent, you want to know the reason for this and to make sure that nothing is wrong with the baby. Your baby will grow out of the habit in time, but it is crucial to make sure the baby is healthy.

The neurological system of a newborn is still immature, yet it must control and regulate the body's systems, including the sleep-wake cycle. Sometimes babies pass through a semi-conscious, in-between state that makes it difficult to tell whether they are asleep or awake. This state gradually improves, and you should see improvement by 3 or 4 months of age.

Within the first year, it is vital to comfort the baby when it is upset. If the baby is grunting, consider sleeping in the baby's room so that you can cuddle and comfort the baby as soon as it feels uncomfortable. Apart from this, babies face several sleep problems. Babies do not sleep deeply; they often have an irregular pattern and short intervals of sleep. Unlike adults, babies have more active, lighter sleep.

Babies also cry for many other reasons when awake. Let us explore a few of them to get some insight and address the baby's concerns.

Why Do Babies Cry?
As a new parent, you may initially have some difficulty understanding the baby's language and fulfilling its needs. Babies may simply cry to seek your attention. However, there are a couple of common reasons that make a baby cry. Let us see some of these:

1. Diaper Change

Your baby may need a nappy change. The baby's diaper may be soiled or wet, and the baby is irritated; it may cry to protest against the condition and simply to feel better. Further, a baby's tender skin may be affected by a wet or soiled diaper.

2. Feeling Hungry

Feeling hungry is one of the most common reasons a baby cries. The younger the baby, the more frequently it feels hungry. Small ones have tiny stomachs and cannot hold much, so they feel hungry again soon. If your baby is breastfeeding, you will need to feed at short intervals. On the other hand, if you are formula feeding, the baby may not feel hungry for the next two hours after feeding.

3. Feeling Like Crying

Babies younger than five months may simply cry during the late afternoon or early evening. This behavior is not attributed to any specific reason. If you notice this, there is nothing to worry about and no need to go looking for reasons: it is quite natural, and there is nothing wrong with the baby. The period of crying may range from a few hours to a long, inconsolable session. The baby may also reject your efforts to comfort him or her. Additionally, babies may clench their fists, arch their backs and draw their knees up. Your baby may also seem frustrated and flushed while crying, and you may get upset when you fail to ease the baby's distress. Although you may face a lot of difficulty during this phase, rest assured that the baby will come out of it soon.

The condition of a baby crying persistently in spite of being healthy is termed colic. Most people think colic is associated with digestive and stomach problems.
The baby may be intolerant to some substance in formula or breast milk, but with better insight into babies' crying patterns, it has been deduced that this crying is not related to tummy problems.

4. Feeling Tired

Babies may feel tired and find it hard to sleep, particularly if they are excessively tired. As time passes, you will start reading the signs that your baby is sleepy: staring into empty space for a long time, staying quiet, and crying over small things. If the baby is over-stimulated, it may find it difficult to settle down and sleep. A baby may become over-stimulated when it is cuddled and receives attention from a lot of people. Taking your baby away from the crowd can help it calm down and sleep.

5. Need To Be Cuddled

Babies need a lot of pampering, physical contact, and cuddling. Hold the baby close and reassure it of your love and affection whenever it is in need. Carry the baby in a sling and sing nice songs to it. Do not worry about spoiling the baby by holding it too much; the first few months are exceptional, and by holding the baby close you will nurture a close relationship. Moreover, holding your baby against your chest helps it relax by listening to your heartbeat.

6. Make The Baby Feel Better

After a few months, you become aware of the physical changes taking place in the baby. If the baby is not keeping well, you may notice a change in the tone and the way the baby cries. The baby may cry at a high pitch, seeking your immediate attention, or a baby that usually cries loudly may remain unusually quiet. These signs point out that something is wrong; most probably the baby is sick. Remember, nobody knows the baby better than you. If you feel something is serious, call the midwife or visit your physician. Doctors always take a baby's problems seriously and will give you apt solutions.
Source: http://youcrazykids.com/for-a-parent-why-do-babies-cry-in-their-sleep/
Monitoring of wilding conifer infestations (their extent and severity, as well as changes in these descriptors with or without management) is required for more than the development of a national strategy. Many stakeholders and managers of wilding-infested or potentially threatened land need better data on the issue they are dealing with. Monitoring wilding conifers can help land managers better understand the dynamics and the impact of their management actions, so they can make improved management decisions where needed. Monitoring enables stakeholders to document the change that their management has made across their "patch" and link this to their overall goals and objectives. The ability to do this in a quantitative and thorough way can help managers gain the confidence of supporters and funding bodies and can be used as leverage for further funding. Furthermore, monitoring data allow risk analyses and cost calculations based on the trend data that monitoring can supply. We have been developing a ground plot method to monitor management success (currently primarily the reduction in wilding conifers), as well as a smartphone application that allows easy recording of quantitative data about wilding occurrence. The reports can be downloaded by clicking on the links below.
Source: http://www.wildingconifers.org.nz/index.php/research
(January 2014) Nearly half of the world's population, some 3 billion people, is under the age of 25. Because they are the largest generation of young people ever, investments in their health and well-being are crucial so they can make a positive transition into adulthood and fully contribute to the economic and social development of their families, communities and nations. But in order to develop strategies and mobilize financial resources to support adolescent and youth development, decisionmakers need reliable, up-to-date demographic, health, education and socioeconomic data about young people.

PRB, in collaboration with UNFPA, developed a specialized interactive map and publication about young people in sub-Saharan Africa. The interactive map and report present available data for 25 specific indicators, disaggregated by age and sex where possible. The map has been expanded to include trend data for 15 indicators, and graphs have been created for selected indicators in 10 countries. The publication provides an overview of key data findings about population, education, employment, sexual and reproductive health, HIV/AIDS, and gender and social protection issues, and includes 45 country profiles.

The available data for each indicator were ranked and sorted based on a three-tier system devised with technical guidance and input from UNFPA. The color red identifies countries that need to take immediate action to address a particular indicator. Yellow identifies countries that are making progress toward targets for a particular indicator but may need additional investments to see further improvement. And green distinguishes countries that are making exceptional progress toward achieving targets or goals related to a particular indicator.

The map is available in English and French (see the link for French at the bottom of the map). Please use the scroll bars to view all the indicators, data definitions, and share buttons.
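The three-tier ranking described above can be sketched as a simple classification function. This is an illustrative sketch only: the threshold values, indicator, and country names below are hypothetical, not PRB's or UNFPA's actual cutoffs, which are defined separately for each indicator.

```python
# Hypothetical sketch of a three-tier (red/yellow/green) indicator ranking.
# The thresholds are invented for illustration; real cutoffs vary by indicator.

def classify_indicator(value, red_below, green_at_or_above):
    """Assign a country's indicator value to a tier.

    Assumes higher values are better (e.g. a school enrollment percentage).
    """
    if value < red_below:
        return "red"      # immediate action needed
    if value >= green_at_or_above:
        return "green"    # exceptional progress
    return "yellow"       # progress made, but more investment needed

# Example: hypothetical enrollment rates for three made-up countries.
rates = {"Country A": 35.0, "Country B": 62.0, "Country C": 88.0}
tiers = {c: classify_indicator(v, red_below=50, green_at_or_above=80)
         for c, v in rates.items()}
print(tiers)  # {'Country A': 'red', 'Country B': 'yellow', 'Country C': 'green'}
```

Sorting countries by tier in this way makes it easy to see at a glance which ones need immediate action on a given indicator.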
Source: http://www.prb.org/Publications/Reports/2014/status-report-youth.aspx
The Austrian picture The Counterfeiters, which won this year’s Academy Award for best foreign film, dramatizes yet another little-known story of the Holocaust. In “Operation Bernhard” the Nazis assembled a select band of prisoners at Sachsenhausen concentration camp and put them to work producing counterfeit versions of the English pound note and the American dollar bill. The prisoners were rewarded with food and clothes and soft beds to sleep in. The operation was Heinrich Himmler’s idea for bringing down the British and American economies. According to Stefan Ruzowitzky’s movie, the scheme failed only because one of the prisoners, Adolf Burger (August Diehl)—whose memoir provided the source material for the movie—kept sabotaging it.
Source: http://www.christiancentury.org/reviews/2008-04/counterfeiters
Water is one of South Jersey's most important natural resources. Because of the important role it plays, the South Jersey Land & Water Trust considers watershed education, stream assessments, and waterway protection among its highest priorities. SJLWT conducts regular watershed assessments through vernal pool surveys, macroinvertebrate inventories, monitoring, and community cleanups of bodies of water, including Kirkwood Lake, the Cooper River, Oldmans Creek, and many small streams and tributaries. Learn more about watersheds and discover some ways you can help protect the waterways of South Jersey.

SJLWT advocates on behalf of many worthy environmental programs, including the preservation of farmland and open space and the passage of legislative bills designed to help the natural environment. Most recently, SJLWT has advocated for the approval of a Motorized Access Plan, or MAP, which would reduce the damage that off-road vehicles cause to unique Pinelands species in Wharton State Forest. Off-road vehicles have caused much damage, some of it irreparable, to the Pinelands in this area and to all the species the Pinelands habitat supports. Learn more about Wharton's MAP and our friends at the Pinelands Preservation Alliance.

Protecting habitats is another high priority of the South Jersey Land & Water Trust. This is done through ongoing programs such as community cleanups, the swamp pink fence project, and walks in the woods, which educate members about endangered habitats and introduce them to protected species.

Citizen Stream Science Program

Our citizen stream science monitoring program seeks to engage the public to apply their curiosity and talents to get to know the Delaware Watershed. We provide training in physical, biological and chemical stream testing, including habitat assessment, macroinvertebrate assessment, and chemical parameters such as dissolved oxygen and phosphorus.
Citizen scientists can provide information that would not otherwise be available due to time or resource limitations. This is a perfect program for school groups, college students, interns or anyone interested in learning about water quality and our beloved Delaware River! Children under 18 must be accompanied by an adult.

River-Friendly School Program

The River-Friendly School Certification Program is aligned with and complementary to the Watershed Institute's River-Friendly School Program. It is designed to provide Pre-K to 12 schools with the opportunity to implement projects that benefit water quality, promote water conservation, and improve wildlife habitat, while also creating real-life learning opportunities for students, staff, parents, and other visitors. SJLWT works with individual schools to help them reach the goals and requirements for certification in each of four areas: Water Quality Management, Water Conservation Techniques, Wildlife and Habitat Enhancement, and Education and Outreach.

The River-Friendly School Program is a tiered system in which schools can choose the level they would like to work toward. All schools are required to have at least one lesson in each category (Water Quality, Water Conservation, Wildlife and Habitat) and to share river-friendly information with parents and the community. The program provides ongoing technical information, support, and guidance for implementing environmental projects specific to the unique location, resources and needs of your facility.

To learn more about our current citizen stream science projects (or other programs), or for information on how to get your school river-friendly certified, contact Ashley Aversa, our Program Coordinator, at email@example.com
Source: https://sjlandwater.org/water-protection/
Introducing NoSQL and MongoDB

At the core of most large-scale applications and services is a high-performance data storage solution. The back-end data store is responsible for storing important data such as user account information, product data, accounting information, and blogs. Good applications require the capability to store and retrieve data with accuracy, speed, and reliability. Therefore, the data storage mechanism you choose must be capable of performing at a level that satisfies your application's demand.

Several data storage solutions are available to store and retrieve the data your applications need. The three most common are direct file system storage in files, relational databases, and NoSQL databases. The NoSQL data store chosen for this book is MongoDB because it is the most widely used and the most versatile. The following sections describe NoSQL and MongoDB and discuss the design considerations to review before deciding how to implement the structure of data and the database configuration. The sections cover the questions to ask and then address the mechanisms built into MongoDB that satisfy the resulting demands.

What Is NoSQL?

A common misconception is that the term NoSQL stands for "No SQL." NoSQL actually stands for "Not only SQL," to emphasize the fact that NoSQL databases are an alternative to SQL and can, in fact, apply SQL-like query concepts. NoSQL covers any database that is not a traditional relational database management system (RDBMS). The motivation behind NoSQL is mainly simplified design, horizontal scaling, and finer control over the availability of data. NoSQL databases are more specialized for certain types of data, which makes them more efficient and better performing than RDBMS servers in most instances. NoSQL seeks to break away from the traditional structure of relational databases, and enable developers to implement models in ways that more closely fit the data flow needs of their system.
This means that NoSQL databases can be implemented in ways that traditional relational databases could never be structured. Several different NoSQL technologies exist, including the HBase column structure, the Redis key/value structure, and the Virtuoso graph structure. However, this book uses MongoDB and the document model because of the great flexibility and scalability offered in implementing back-end storage for web applications and services. In addition, MongoDB is by far the most popular and well-supported NoSQL database currently available. The following sections describe some of the NoSQL database types.

Document Store Databases

Document store databases apply a document-oriented approach to storing data. The idea is that all the data for a single entity can be stored as a document, and documents can be stored together in collections. A document can contain all the necessary information to describe an entity, including the capability to have subdocuments, which in RDBMS are typically stored as an encoded string or in a separate table. Documents in the collection are accessed via a unique key.

Key-Value Store Databases

The simplest type of NoSQL database is the key-value store. These databases store data in a completely schema-less way, meaning that no defined structure governs what is being stored. A key can point to any type of data, from an object, to a string value, to a programming language function. The advantage of key-value stores is that they are easy to implement and add data to, which makes them great as simple storage for storing and retrieving data based on a key. The downside is that you cannot find elements based on the stored values.

Column Store Databases

Column store databases store data in columns within a key space. The key space is based on a unique name, value, and timestamp. This is similar to the key-value databases; however, column store databases are geared toward data that uses a timestamp to differentiate valid content from stale content. This provides the advantage of applying aging to the data stored in the database.

Graph Store Databases

Graph store databases are designed for data that can be easily represented as a graph. This means that elements are interconnected with an undetermined number of relations between them, as in examples such as family and social relations, airline route topology, or a standard road map.
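To make the document model concrete, here is a minimal sketch using plain Python dictionaries rather than an actual MongoDB driver. The blog-post entity and its field names are hypothetical, chosen only to show how an entity and its subdocuments live together in one document, accessed by a unique key.

```python
# A single entity (a blog post) stored as one document, with comments as
# embedded subdocuments. In an RDBMS, the comments would typically live in a
# separate table joined by a foreign key. All field names are illustrative.

post = {
    "_id": "post-001",               # unique key used to access the document
    "title": "Introducing NoSQL",
    "author": "jdoe",
    "tags": ["nosql", "mongodb"],
    "comments": [                    # subdocuments embedded directly
        {"user": "alice", "text": "Great overview!"},
        {"user": "bob", "text": "What about transactions?"},
    ],
}

# A "collection" is then just a set of such documents keyed by _id.
collection = {post["_id"]: post}
print(collection["post-001"]["comments"][0]["user"])  # alice
```

Because everything describing the entity travels in one document, a single key lookup retrieves the post and all of its comments together.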
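The key-value model can likewise be sketched in a few lines. This is an illustrative in-memory stand-in, not a real store such as Redis; it shows both the strength of the model (fast, schema-less lookup by key) and the downside noted above (finding entries by value requires scanning everything).

```python
# Minimal schema-less key-value store: any key maps to any value.
class KeyValueStore:
    def __init__(self):
        self._data = {}

    def put(self, key, value):
        # No schema: the value may be a dict, a string, even a function.
        self._data[key] = value

    def get(self, key, default=None):
        return self._data.get(key, default)

store = KeyValueStore()
store.put("session:42", {"user": "alice", "ttl": 3600})
store.put("greeting", "hello")
print(store.get("greeting"))       # hello

# The downside: finding entries BY VALUE requires a full scan of the store.
matches = [k for k, v in store._data.items() if v == "hello"]
print(matches)                     # ['greeting']
```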
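Finally, the graph model can be sketched as nodes connected by an arbitrary number of typed relations. The example below uses a tiny hand-built adjacency structure for an airline route topology, one of the examples named above; the airports and routes are invented for illustration, and a real graph database would also index and query these relations.

```python
# Tiny graph store: each node maps to a list of (relation, neighbor) edges.
# Airports and routes are hypothetical.
routes = {
    "LAX": [("flight", "JFK"), ("flight", "SEA")],
    "JFK": [("flight", "LHR")],
    "SEA": [("flight", "JFK")],
    "LHR": [],
}

def reachable(graph, start):
    """Return all nodes reachable from start, following edges breadth-first."""
    seen, queue = {start}, [start]
    while queue:
        node = queue.pop(0)
        for _relation, neighbor in graph[node]:
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(neighbor)
    return seen

print(sorted(reachable(routes, "SEA")))  # ['JFK', 'LHR', 'SEA']
```

Traversals like this, following chains of relations of unknown length, are exactly the workload graph store databases are built to make efficient.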
Source: https://www.informit.com/articles/article.aspx?p=2247310&seqNum=6
What does your lab study?

Researchers in our lab are studying empathy as a risk and protective factor for depression and anxiety in children, especially within the context of parent-child interactions. Our work aims to develop an empirical conceptualization of empathy and identify correlates of negative empathy and positive empathy across childhood. We use diverse methodologies to study empathy and its correlates across behavioral, physiological, and neural units of analysis.

What do children enjoy most about the research?

Families who participate in our studies enjoy exercises in which children and families reminisce about their shared emotional experiences. They enjoy learning how family members experienced difficult and happy family events.

What are things families might do at home to improve children's learning?

We don't study learning, but here are tips for things parents can do to help children be more empathic:
1. Behave in caring and affectionate ways toward their children, to provide examples of how to act empathically and to strengthen positive parent-child relationships so children are receptive to parents' examples.
2. Respond to their children's emotions with contingent responses, like comforting a child's crying and laughing with a child's laughter.
3. Explain what they feel during emotional parent-child interactions, and the impact of the child's behavior on others' feelings. It is important to be warm and accepting rather than harsh or dismissive during these discussions.
Source: https://gsuchildresearch.weebly.com/interviews/interview-with-dr-tully
The Hospital de la Caridad was founded in 1674 by Don Miguel de Mañara to care for the physically and mentally ill of Seville who were too poor to afford treatment. Don Miguel de Mañara was supposedly the inspiration for Byron's Don Juan, as he left a life of debauchery to found the hospital after having an intense religious vision in which he saw his own funeral procession. He subsequently built the hospital and adjoining church and dedicated his life to charity and to the religious order that runs the institution. The church and hospital are still operating, although the hospital now focuses on caring for the elderly of Seville. Link to the Hospital de la Caridad website.
Source: https://mindhacks.com/2006/08/28/hospital-de-la-caridad/
The Vietnam War Declassification Project In April 2000, the Gerald R. Ford Library released approximately 40,000 pages of classified documents concerning the Vietnam War. Many are from National Security Advisors Henry Kissinger and Brent Scowcroft and their staffs and deal with the decision to evacuate U.S. forces from Vietnam in April 1975. This site provides 15 samples of the newly declassified material, 27 additional documents related to the war already available, 17 photographs of Ford and his advisors during meetings, and finding aids for those planning to travel to the Ford Library in Ann Arbor, Michigan. The sample documents include important memos, letters, and cables regarding corruption in South Vietnam; "ominous developments" by the North Vietnamese reported to Kissinger in March 1975; the evacuation decision and its execution; the seizure of the U.S. merchant ship Mayaguez by a Cambodian gunboat crew in May 1975; the plight of Vietnamese refugees; "lessons of the war" imparted to Ford by Kissinger; and notes from Scowcroft to Ford on the then-ongoing reconstruction of Cambodian society by the Khmer Rouge. This site will be valuable for those teaching courses on the Vietnam War and its aftermath and the internal workings of the Ford Administration.
Source: http://teachinghistory.org/history-content/website-reviews/23375
The Delta Green Ground Beetle inhabits the margins of vernal pools in California's Central Valley. Vernal pools, depressions that hold water for a short time following winter rains, are among California's most unique and special habitats for both plants and animals. That this ground beetle was completely dependent on these ephemeral wetlands for survival wasn't realized until 1974, by which time most of the pools had been destroyed to make way for agriculture. At the time of its listing, the species was known to occur at only two pools in Solano County. The Delta Green, like most ground beetles, is a generalist predator in both its adult and larval stages. The beetles forage around pool edges for other small invertebrates, such as springtails, fly larvae, and other small beetles. When the pools dry up in the summer, the beetles go underground to await the next rainy season. Vernal pools have numerous endemic species, and the Delta Green Ground Beetle benefits from being part of a large cohort of ecologically interrelated species. Recent conservation plans have focused on improving protections for existing vernal pools, as well as recreating them in areas where they once occurred. Such habitat-focused efforts are clearly the best way forward for endangered species recovery.
Source: http://www.sbnature.org/collections/invert/entom/elaph_virid.php
Did you read the Preface? Thanks!

There is an unpublished corollary to Special Relativity which states that time spent tracking down, identifying, and fixing problems with computers and software is completely disconnected from normal reality. Everyone who has had to re-install an entire operating system because an application didn't work properly fully appreciates this fact. By way of contrast, our overall experience with the OpenLinux installation and operating environment is a positive one. This is true even though we try to do everything possible wrong to break things and poke holes in them, in order to tell you all about it.

In this chapter, we're going to address various challenges that a user might encounter on the path to successfully running an OpenLinux system. Material here is freely adapted from the Caldera Knowledge Base (located at http://support.calderasystems.com/caldera) as well as other parts of this book. Additionally, we have tapped our copious experience with doing silly things that break software, as this forces us to learn how to fix problems, again, so that we can write about it. Plus, as a hobby, it's slightly less painful than playing kickball on the freeway.

The basic structure that underlies effective troubleshooting is four-fold, involving step-wise progress, note-taking, concentration, and generalization. Step-wise progress is the foundation of this structure; its converse implies that making multiple simultaneous changes in a troubleshooting session leads quickly down the path of insanity. Making one change at a time allows results to be observed and documented. Taking effective notes on problems and potential solutions has two advantages: first, a written record helps commit the problem and, eventually, its solution to your own memory for easy access at a later date; second, a record written in the moment makes reporting a bug or a problem to a support organization much more likely to be successful. Concentration is necessary to prevent errors of the first two parts.
Generalization is the final key to success, as many challenges are very similar to others.

There are a few pitfalls in loading OpenLinux 2.4, but the problems faced are similar to those of recent versions, so this material should remain useful for at least the next couple of editions. These troubleshooting guides are organized in a chronological manner corresponding to the stages of OpenLinux installation. Each section begins with a brief listing of the problems contained therein. We open this chapter before installation, progress through the install process, and hit the home stretch with establishing a solidly booting system. At the end, we've collected a few tidbits that didn't neatly fall into any of these categories.

Bear in mind that (nearly) every system is a little different, and what we've found to be a problem may work fine for you. On the other hand, there are undoubtedly learning opportunities that each user comes across that we managed to miss completely. That said, we'll do our best to be good Sherpas and get you to the peak via the best route. Let's get to work.

The challenges discussed here are as follows:

Problem: I have heard reports of computer speakers being "blown out" by a blast of sound as the OpenLinux installer starts up. Is this true?

Solution: We have heard the same reports, although we were unable to reproduce the problem directly: systems with built-in speakers, like laptops, only displayed a moderate volume level for us when the installation splash music played. The concern is for speakers that do not have an independent volume knob, but that are run by a software control or from the sound card output alone. If you can't turn your speakers down, or off, for the beginning of the OpenLinux installation, you can temporarily unplug them from the system until the initial Lizard screen (Language Selection) is shown. Do have the speakers plugged in for sound output testing later in the install process, when an on-screen volume control is available.
When installing on a notebook, there are often BIOS-controlled function keys which can be used to enable or disable the speakers. An alternative is to plug in external speakers that can be turned down, usually disabling the internal speakers in the process. We installed OpenLinux on three laptops without taking any of these steps, and found that the introductory music was neither too loud, nor did it damage our speakers.

Problem: When starting an OpenLinux installation from within Windows, the boot process recognizes the CD-ROM drive, but skips the installation from CD-ROM and fails to install, saying:

Unable to install OpenLinux on this system. Please refer to the printed documentation. Press <return> for more information.

What's wrong? Why does it tell me that my hardware isn't supported by Linux when I press Return?

Solution: This problem occurs with the retail package only, when you start the installation with the "Installation from Windows and Commercial Packages" disk. You must change to the Binaries and Installation disk when you see the text screen prompt "To start the installation, please insert your OpenLinux Installation CD, and Press any key to continue . . ." The confusion arises because, unfortunately, both discs have the word "Installation" on them.

It is important to note that you can only begin the installation process from within Windows 95 or 98. For NT or Windows 2000, you must either boot from the Binaries and Installation CD-ROM (preferred and easier), or use boot floppies. The message that follows this failure implies that the most likely reason is a hardware incompatibility. This is usually untrue, unless you are using unusually esoteric or old hardware.

To change your default boot device, you need to make changes by using the BIOS Setup screens. When a computer is executing a POST (Power On Self-Test), there is a message printed somewhere on the screen that resembles "Press <DEL> to enter SETUP."
When you do so, the BIOS Setup screens are available for modification. It is important that any changes to BIOS parameters be documented, with before and after information written down. In this manner, changes that negatively impact the performance of the computer can be easily reversed. Figure 5-1 shows a representative BIOS setup screen; there are many different types.

Figure 5-1: A brand-name BIOS setup screen, shown modifying the boot device order.

If the system still doesn't boot into the installer properly, there are three other routes that are feasible: boot in Framebuffer or VESA mode, or start with either Lizard or LISA boot floppies (see Chapter 3 for details on installation options). The LISA installer is definitely the last option to exercise: it uses a much older Linux kernel (2.0.36), which needs to be upgraded after a successful setup. The VESA mode installation uses a VGA compatibility mode that is (theoretically) present on all video chips, even the newest ones that aren't yet recognized by OpenLinux. This mode from the Binaries and Installation CD-ROM boot is identical to the Framebuffer mode offered when the install is started from within Windows 95 or 98. Boot floppies (either Lizard or LISA) are recommended when the system can't boot from the CD-ROM device. Additional boot parameters for fine-tuning the installation boot to the system hardware are listed later, in Table 5-1.

Since we're installing and de-installing operating systems quite frequently, we set up our machines for a boot order of CD-ROM, removable devices (or floppy or A:), and hard drive (or C:). This allows us maximum flexibility. If your BIOS does not allow booting from the CD-ROM device, then a boot floppy is the usual fall-back position for installing OpenLinux.

Problem: Can PartitionMagic change the size of a partition on a second hard drive?

Solution: A full retail copy of PartitionMagic can handle that task without difficulty.
On the other hand, the PartitionMagic - Caldera Edition (PMCE), which is packaged with eDesktop, cannot. PMCE is designed to provide minimum functionality for resizing the system boot drive (the "C:" drive) only.

There is also a tool on the eDesktop Binaries and Installation CD-ROM called fips. Fips stands for the First (non-destructive) Interactive Partition Splitting program. We experimented, successfully, with fips. However, it is not for the technically faint-of-heart. If you want more control over your partitions than PMCE offers, and are feeling adventurous, then give fips a try. Like most normal filesystem modification programs, you run a better than normal chance of blowing away all the data on the drive, so make sure you have good backups before you begin any experimentation. Fips can be found at CD-ROM://col/tools/fips/*, DOS executables, documentation and all.

The last option involves blowing away your existing partitions altogether, reinstalling Windows using only part of the hard drive, and then restarting the OpenLinux install.

Problem: How difficult is it to dual-boot Windows with OpenLinux?

Solution: The answer to this question is rooted in the version of Windows that needs to co-exist with OpenLinux. For Windows 95, 98, and 2000, the answer is that the systems live together easily. Windows NT is a beast of a different breed, however.

For the Windows 95 and 98 systems (as well as Windows 2000 installed onto FAT-formatted partitions), simply install the bootloader to the Master Boot Record (MBR) and let Grub (the GRand Unified Bootloader) handle booting into one system or the other. If you want to change the boot order, or set a different default booting OS, then check out "The Post-Installation Blues" section, later in this chapter, for pointers on that very topic.

Windows NT and Win2K can be installed onto either FAT32, NTFS, or (in the case of the latter) NTFS5 formatted partitions.
By default, OpenLinux cannot recognize any but FAT-formatted partitions for use in dual-booting situations. Brian often expressly installs these MS operating systems onto FAT32 partitions, so that they can be mounted, read from, and written to while running Linux (there known as VFAT formatted volumes). On the other hand, Tom has had a lot of difficulty in making OpenLinux coexist nicely with these OS's. Take all of the advice herein with a grain of salt: Your mileage may vary.

There are two different routes to take with Windows NT and Windows 2000. First, if you are working with the retail package of OpenLinux 2.4 eDesktop, then you have a Caldera Edition of BootMagic which will allow booting from multiple OS's easily. If using BootMagic or some other third-party boot manager, then do not check the "Write master boot record" box in the Set Up Boot Loader screen during the install.

The challenging method of having NT and Linux live on the same hardware together involves using NT's bootloader as an entry point for Linux. Here are the key gory details, gleaned from the NT OS Loader + Linux mini-HOWTO, located at http://www.linuxdoc.org/HOWTO/mini/Linux+NT-Loader.html.

Start with a running OpenLinux installation where the bootloader is installed to the root partition (not the MBR). Then run

dd if=/dev/hda2 of=/bootsect.lnx bs=512 count=1

substituting your root partition name for /dev/hda2. This makes a copy of the boot sector of the root device. The output file, which we have named bootsect.lnx, can be called anything at all. We try to choose 8.3 (DOS-style) filenames that convey as much information as possible, and suggest that you do the same. Copy the file to a DOS floppy with

mcopy /bootsect.lnx a:

(mcopy is part of the DOS tools set that is packaged with OpenLinux, installed with every package grouping). Then boot into NT, copy bootsect.lnx from the floppy onto the C: drive, and run

attrib -s -r c:\boot.ini

so that the file is temporarily neither read-only nor a system file. Edit boot.ini to add an entry pointing at the copied boot sector, then restore the file's protections with

attrib +s +r c:\boot.ini
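The dd step in that recipe simply lifts the first sector (512 bytes) off the root partition. The same mechanics can be tried safely against an ordinary file instead of a real device; everything below (file names and sizes) is invented for the demonstration:

```shell
# Demonstrate the boot-sector copy with a scratch file instead of /dev/hda2.
# On a real system, the input would be your root partition device.
workdir=$(mktemp -d)

# Build a fake 4KB "partition" so dd has something to read from.
dd if=/dev/zero of="$workdir/fakepart" bs=512 count=8 2>/dev/null

# Copy exactly one 512-byte sector, just as the NT-loader recipe does.
dd if="$workdir/fakepart" of="$workdir/bootsect.lnx" bs=512 count=1 2>/dev/null

# The output file is exactly one sector long.
wc -c < "$workdir/bootsect.lnx"
```

The bs (block size) and count options together are what limit the copy to a single sector; without count=1, dd would copy the entire input.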
The difficulties addressed in this section are:

Problem: When running the installer to change over from another distro to OpenLinux 2.4, the custom (expert) partition screen doesn't show my partition table properly. If I continue with the install, can this impact partitions that I can't see in the installer?

Solution: Yes. Stop, and read the following carefully. There are two very different circumstances in which the Partition Manager portion of the Lizard installer can cause major problems.

The first circumstance is in using proprietary disk manager software, such as the recent software from Maxtor. These disk access programs provide BIOS extensions for computers that don't have the ability to recognize today's large hard disk drives. The software is not necessary for use in most computers produced in 1998 or later. Additionally, the software is not necessary for use with Linux at all. It does, however, interfere with some software reading the partition table properly. This is not a bug in the disk management software, but simply a case of conflicting requirements.

If you are running Linux only, then use the software tools that came with your hard drive to remove these management facilities from your hard drive. You may even need to get low-level disk formatting tools from the drive manufacturer to remove all traces of this from the drive. In this case, you must back everything up, as all data will be lost in the process of rebuilding the drive. For a dual-boot configuration, you would need to make use of the partition sizes that are available without the disk manager software, or use a separate second hard disk for the Linux installation, and not write to or modify the partitions on the original drive.

Low-level formatting of a hard disk is not recommended unless you know what you are doing. Used improperly, the formatting tools can render a disk useless.
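Since the warnings above keep coming back to backups, here is a minimal sketch of the back-everything-up idea using tar on a scratch directory. All paths here are invented for the demonstration; a real backup would cover your actual data and be written to separate media such as tape or another disk:

```shell
# Create some scratch "data" standing in for the files to protect.
work=$(mktemp -d)
mkdir -p "$work/data/docs"
echo "important notes" > "$work/data/docs/notes.txt"

# Archive the directory tree into a compressed tarball.
# -C changes directory first, so the archive holds relative paths.
tar czf "$work/backup.tar.gz" -C "$work" data

# Simulate the data loss, then restore from the archive.
rm -rf "$work/data"
tar xzf "$work/backup.tar.gz" -C "$work"

cat "$work/data/docs/notes.txt"
```

The same pattern (tar czf to back up, tar xzf to restore) is all that is needed before experimenting with fips or a low-level format, provided the tarball lives somewhere the repartitioning cannot touch.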
The second situation can arise when converting a pre-existing GNU/Linux installation to OpenLinux. There are some combinations of partition type, size, and layout that confuse the custom (expert) partitioning tool in the Lizard installer. The capable tech support crew at Caldera were able to replicate a problem we found with a nine-partition setup left behind by a Mandrake 6.2 installation. We note that the partitioning tools that are available once OpenLinux is running recognize these partition setups perfectly, and regard this as a bug in the Lizard installer.

This box is followed by directions on the use of the editor (shown on the same screen): Up Arrow and Down Arrow are used to select entries. Press 'b' to boot, 'e' to edit the selected command in the boot sequence, 'c' to get a command line, 'o' to open a new line after the current line ('O', uppercase, for a line above the current one), 'd' to remove the selected line, or press Esc to go back to the main menu. This last option takes you to a text-mode version of the boot menu previously seen within the Lizard partitioning tool.

The safe solution here is to back up all the data to another medium, then use the partition manager from the original distribution installer to delete all the existing partitions, leaving free space on the drive for OpenLinux to work with.

Problem: I am having problems starting the installer so that it doesn't choke on my hardware. Is there any way to set the boot parameters manually on the installer?

Solution: When the installer process begins, the OpenLinux boot screen is displayed with the first option, "Standard install mode (recommended)," selected. Before the countdown at the bottom of the screen hits zero, press 'e' (lowercase only, please).
This brings up the Grub boot editor, which has an edit screen that looks similar to the following:

GRUB version 0.5.94 (636K lower / 130048K upper memory)
__________________________________________________________
root (fd0)
kernel /vmlinuz vga=274 nosmp noapic debug=2 vga=274
initrd /initrd.gz
__________________________________________________________

(The bold text of the first line stands in place of an inverse text highlight bar across the top line in the graphical view.) To make modifications to the installer boot mode, the second line ("kernel...") is the one to edit. There are a variety of boot parameters that can be used; these are listed in Table 5-1.

Table 5-1: List of Installation Boot Parameters

er=cautious
    Cautious (less aggressive) probing of system hardware, recommended in the case of initial install failure. Sound, mouse, and network hardware are left to the user to specify.

er=demo
    Does a full mock install. Caldera specifically recommends this choice for use at trade shows. We have found it useful in walking people through the install without committing to disk.

er=expert
    Calls the installer with the -noskip option, to allow the user to see all screens during installation, even those that have had the associated hardware correctly identified and set up.

er=hwinfo
    After hardware probing, but prior to starting the Lizard installer, the system prompts you to insert a DOS-formatted floppy into the A: drive. The file hwinfo0.txt is written to the diskette. This file can be e-mailed to Caldera Technical Support (if you have that option through purchasing a retail version), to help them in identifying your hardware and your problem.

er=modules
    Forces the installer to request the modules floppy, and wait for it to be loaded before continuing.

er=noether
    Excludes all ethernet card probing from the install process. This is useful when the install is hanging up in that part of the probing process, or taking too long.

er=nofloppy
    Skip the modules floppy altogether.

er=noothercd
    Don't probe for proprietary CD-ROM drives.

er=nopcmcia
    Don't probe for PC card devices.

er=text
    Available only in eDesktop 2.4, this starts a text-mode, stripped-down, and modified version of the LISA installer, using the latest kernel, hardware probing, and other features borrowed from the Lizard. Screens from LISA that are not needed are never seen.

insmod=
    Often used in concert with er=cautious to provide a specific driver module for the kernel. Note the explicit use of the % character to stand in place of a space in the command.

lizard=xxxxx
    These are the unattended install modes, where the xxxxx stands in for auto, floppy, and so on.

lz=yyyyy
    A veritable cornucopia of Lizard installer parameters:
    display ip:display, redirect the output to the given display
    language language, set the default language
    noanim, do not show animation at startup
    anim, show animation
    mouseprobe, probe mouse
    nomouseprobe, do not probe mouse
    netprobe, probe network settings
    nonetprobe, do not probe network settings
    server server, the X-server to use
    noskip, do not skip autodetected pages
    skip, skip autodetected pages
    demo, fake installation
    sound, enable sound playback
    nosound, disable sound playback
    For example, lz=nosound,server=XF86_Mach64

vga=274
    VGA BIOS graphic mode of 640x480 in 16 colors.

vga=785
    VESA (Framebuffer) graphic mode of 640x480 in 16 colors. This is a slow, reliable video mode that should work on every video card. Recommended if the standard mode seems to break the display.

Alternatively, in OpenLinux 2.3 or eServer, many of these options may be entered at the Lilo boot prompt, consecutively. For example:

boot: install er=expert vga=785

Problem: My laptop has an ATI Rage Mobility (aka the ATI Mach64 Mobility) chipset in it.
After I've tested the video mode, the screen is garbled; I can't read it to proceed with the install.

Solution: Some laptop/chipset combinations demonstrate an incompatibility during mode-switching. This prevents the system from effectively returning to the 640x480 mode used by the Lizard after the video test completes. We were unable to replicate the problem with an Acer laptop that incorporated that particular chipset. Your experiences might differ. However, we were able to make the failure mode happen on another, older laptop.

The correct short answer is to refrain from testing the video mode. Continue on with the installation, and make changes to the display sub-system later, if necessary, by running XF86Setup as the root user from a console after the install is complete.

Problem: When booting the Installation CD from within Windows, sometimes the system hardware is detected properly, other times it isn't. What's happening?

Solution: Booting the installer from inside Windows is one of those good news/bad news propositions. Most of our experiences on this topic are on the bad news side of the coin. Here is what is happening: A running Windows installation pre-initializes some of the installed hardware, and that initialization confuses the Linux installer. For this type of problem, try starting the Binaries and Installation CD-ROM directly from a cold boot (power up), which ensures that all the hardware is reset properly.

This is also true for general booting of certain laptop models. A reboot from Windows that doesn't involve a power-down cycle can leave the pointing device (especially touch pads) in a state that can't be read by Linux on a [warm] reboot. The only cure for this problem is to shut down the computer fully, and restart.

The good news side of starting the installer from within Windows is that it can assist the Lizard installer in correctly identifying some types of hardware.
For example, a system using a Sound Blaster (SB) compatible soundcard might install better from Windows, since the card is pre-initialized in SB mode.

Problem: I have attempted to install OpenLinux on my laptop, but when it gets to the loading Lizard part, it plays some music, the screen goes blank, and nothing more happens.

Solution: Laptops have long been regarded as the bane of Linux computing. In years past, many hardware compromises were made to shoehorn enough functionality into the small packages. Recently, both laptop hardware and GNU/Linux capabilities have improved tremendously. OpenLinux has generally been a very laptop-friendly distribution, and we have successfully installed it on several laptops ranging in age from zero to about five years.

To address the specific question: Display problems are the most common primary failure mode for installation onto a laptop. The video hardware is frequently a sub-set of a known (and recognized) piece of hardware, limited for both power consumption and heat dissipation purposes. This causes the Lizard installer no end of grief. Our counsel is to use the VESA or the Non-Graphical install modes from the CD-ROM. Failing that, try working with the Lizard or LISA boot floppies. The trick is to get through the install, get a working system, then play with getting the video system running.

Check the "Linux On Laptops" Web site at http://www.cs.utexas.edu/users/kharker/linux-laptop/. Additionally, there is a Linux Laptop HOWTO located on the Linux Documentation Project site at http://www.linuxdoc.org/HOWTO/Laptop-HOWTO.html

The problems at hand in this section are:

Problem: When the computer is turned on, it boots into Linux by default. How can I get the Grub bootloader to offer Windows as the default OS in my dual-boot system?

Solution: The Grub bootloader process is controlled by the file called /boot/grub/menu.lst.
The following is a representative sample of a dual-boot system menu.lst file (line numbers added for convenience):

 1  #
 2  # /boot/grub/menu.lst - generated by Lizard
 3  #
 4
 5
 6  # options
 7
 8  timeout = 5
 9  splashscreen = (hd0,1)/boot/message.col24
10
11  default = 0
12
13  title = Linux
14  root = (hd0,1)
15  kernel = /boot/vmlinuz-pc97-2.2.14-modular vga=274 noapic nosmp debug=2
16  root=/dev/hda4
17  title = Windows
18
19  chainloader = (hd0,0)+1

The critical point for the default boot OS is line 11. The choices are numbered starting with 0 (zero). The first title line is Linux (on lucky number 13), which is the first (or zero) OS. The second title, for Windows, is seen on line 17. To tell the Grub bootloader to run Windows by default, change line 11 to read default = 1, using a text editor as root.

When modifying system files, first make a copy of the existing, working file. This is accomplished by the simple expedient of typing (for example) cp menu.lst menu.lst.orig from a command prompt prior to making your changes. That way, if the changes turn out to be a mistake, then a simple copy operation is all that's necessary to return to a known good point in the process.

There are usually a large number of questions about the Grub tool, as it is a fairly recent addition to the Linux toolbox. We have included larger sections on Grub and its older alternative, Lilo, in the section "Startup and Shutdown" in Chapter 19. If your goal is to remove Grub from the system and use Lilo instead, read on.

Problem: I want to use the Lilo bootloader instead of Grub. I don't need the special capabilities of Grub, and I know Lilo and its configuration. Are there any special problems in replacing Grub? I didn't see an option for Lilo in the installation routine.

Solution: While Grub offers both ease-of-use features and added functionality, clearly Lilo has its adherents. Lilo is among the default packages installed with OpenLinux 2.4.
Additionally, the /etc/lilo.conf file is pre-built to provide the same configuration as Grub. So using Lilo is a simple process (although our version varies from the Caldera Knowledge Base somewhat, and the comments before each command are for your benefit, not to type in):

Become the root user. Then change the current directory to /boot. (By the way, that's a really bad root password shown below. We recommend something more secure <g>)

[syroid@donovan syroid]$ su
Password: root_password
[root@donovan syroid]# cd /boot

Make a recursive backup copy of the Grub sub-directory into the temporary storage area of your filesystem. The recursion ensures that you get not only the sub-directory and its contents, but any existing structure below that point.

[root@donovan boot]# cp -R grub /tmp

Completely remove the Grub directory and all its contents. The "-rf" options to the rm command instruct that the removal be recursive (to include directories) and forced (no prompting). While this is a handy command, read it twice after you've typed it, then press Enter. If you make a typing mistake in an "rm -rf ..." (and we have), then who knows what you'll delete. It depends on the typo.

[root@donovan boot]# rm -rf grub

You should examine your Lilo configuration file to determine that it meets your needs. We use the cat utility, below. Immediately after installation, there should be no changes required. On the other hand, if you have made edits to the /boot/grub/menu.lst file to change boot parameters, and so on, then similar modifications need to be made to lilo.conf.
[root@donovan boot]# cat /etc/lilo.conf
#
# /etc/lilo.conf - generated by Lizard
#

# target
boot = /dev/hda
install = /boot/boot.b

# options
prompt
delay = 50
timeout = 50
message = /boot/message
default = linux

other = /dev/hda1
  label = Windows

image = /boot/vmlinuz-pc97-2.2.14-modular
  label = linux
  root = /dev/hda4
  vga = 274
  read-only
  append = "debug=2 noapic nosmp"

To change your default boot loader from Grub to Lilo, simply type:

[root@donovan boot]# lilo
Added Windows
Added linux *

The asterisk next to linux indicates the default OS. This is set up by the "default=..." line in /etc/lilo.conf. To see what the lilo command is going to do, without actually taking action, type lilo -v -t. This runs lilo in verbose (-v) test (-t) mode. For more details on the output from this, consult the lilo manual page.

[root@donovan boot]# reboot

There is only one way to test a Lilo configuration: that's to reboot your system. This process works for us. Caldera actually recommends simply removing the Grub files, then using RPM to delete the Grub package. We prefer to hold off on that measure on the off chance that we change our mind. The package can always be removed later.

Now for some other challenges related to the Grub bootloader.

Problem: I am using eDesktop 2.4 and I would like to see the normal text scroll by as the boot processes start up. I cannot seem to disable this anywhere. Is it possible to eliminate the graphical boot screen?

Solution: This is easy to specify with two simple changes to the /boot/grub/menu.lst file. The additional text in the file is marked in bold text. We first made a backup of the existing data. Note that we put a modification comment into the file near the top. This is always a good practice when editing configuration files.
 1  #
 2  # /boot/grub/menu.lst - generated by Lizard
 3  #
 4  # 07/28/2000 Modified by Bilbrey - no splash and vga to normal
 5
 6  # options
 7
 8  timeout = 5
 9  # splashscreen = (hd0,1)/boot/message.col24
10
11  default = 0
12
13  title = Linux
14  root = (hd0,1)
15  kernel = /boot/vmlinuz-pc97-2.2.14-modular vga=normal noapic nosmp debug=2
16  root=/dev/hda4
17  title = Windows
18
19  chainloader = (hd0,0)+1

By putting a hash mark (#) in front of the "splashscreen = ..." line, we prevent that line from being read by Grub altogether. Instead it is simply passed over as a comment. Replace the vga=274 option with vga=normal to disable the extended text mode of the default Grub boot sequence. Be careful not to introduce a newline (carriage return) into the "kernel =" line, as this usually causes the boot to fail, perhaps in a spectacular or confusing manner.

Problem: I have an old VGA monitor. It worked fine for me on Windows, but I can't find the name (or manufacturer) anywhere in the list on the Monitor Selection screen during an eDesktop install. What's the routine to configure OpenLinux for an unlisted monitor?

Solution: You can configure your monitor successfully as one of the generic models listed in the "typical" section during the installation. Login as the root user, and type lizardx at a command prompt. lizardx runs through the mouse, keyboard, video card, and display setup processes with exactly the same dialogs as in the installer. Figure 5-2 shows the Select Monitor screen from Lizardx.

Figure 5-2: Choosing a "typical" display from the Select Monitor screen in Lizardx.

Unfortunately, there are some configurations (for instance, Brian's laptop) where lizardx doesn't work unless a graphical display is already up and running in some form. In these cases, or if lizardx isn't installed, you will need to run XF86Setup (typed as shown, with upper- and lowercase letters) to configure the X Window server. Figure 5-3 shows XF86Setup, which starts by running in a basic (640x480) VGA mode.
Figure 5-3: The splash screen from XF86Setup shows the five areas of configuration.

Problem: When running in 640x480 or 800x600 video resolutions, some applications such as the KDE Control Center cannot be used properly: the dialog box is larger than the viewable screen, and critical buttons, such as "OK" or "Cancel," cannot be seen or used.

Solution: Either during installation, or by running either Lizardx or XF86Setup following installation, change your video mode to enable virtual desktops. This will turn your low-resolution screen into a scrollable "window" onto a higher-resolution desktop, letting you use all parts of a large window, such as the KDE Control Center. Figure 5-4 shows the specific screen from Lizardx, with a drop-down list box located at the lower left corner, which allows selection among the various higher "virtual" resolutions. The effect of a virtual desktop is to use the pointing device at the edges of the screen to "drag" a window the size of your actual physical resolution around on a larger virtual workspace.

Figure 5-4: Video Mode Selection from Lizardx, with virtual screen option at lower left.

Problem: I can't create a boot floppy during installation (my laptop didn't come with a floppy drive). Can I boot into my OpenLinux system with the Installation CD-ROM?

Solution: Yes. This is also a useful method when there wasn't a boot floppy created during the installation process. We describe this procedure for an eDesktop system.

Boot up your computer with your Binaries and Installation CD-ROM in the drive. Using the Arrow keys, highlight the Standard Install on the menu (or the cautious install, if you know that this is necessary for your hardware). Then type e. This displays three lines of parameters that you can edit. The first line should say "root (fd0)". The second line is a list of kernel parameters, and the third line specifies the initrd (Linux shorthand for INITial Ram Disk). Highlight the second (the "kernel=...") line, and type e to edit it.
At the end of this line, add the following text:

root=[boot_partition]

The [boot_partition] section should be replaced with your specific root device information. For example, to boot from /dev/hda2 you would type root=/dev/hda2 at the end of the kernel line. After changing the line, press Enter to leave the edit mode, and type b to boot. The most common error in this process is leaving edit mode by pressing Esc (which aborts the edit and abandons the changed text) instead of Enter. We've tested this with both ATAPI and SCSI devices.

Problem: I didn't make a rescue boot disk during installation. Can it be done now, or do I have to install the system again to make one?

Solution: While booting into your installed OpenLinux system with the Installation CD-ROM works, as noted in a prior solution, a backup boot floppy is always a good idea. In order to accomplish this procedure, you'll need an installation CD-ROM (either one from the retail package, or the downloadable ISO image CD-ROM), and a 1.44MB floppy disk that you can erase. Put your installation CD-ROM into the drive, then:

[syroid@donovan syroid]$ su
Password: your_root_password
[root@donovan syroid]# mount /mnt/cdrom
[root@donovan syroid]# cd /mnt/cdrom/col/launch/floppy

Make sure that the write protect tab is covering the slot, then insert the floppy. The following command copies the floppy image file onto the diskette:

[root@donovan syroid]# dd if=install.144 of=/dev/fd0

Once the floppy write process is done, mount the floppy in order to edit the syslinux.cfg file.

[root@donovan syroid]# mount /mnt/floppy
[root@donovan syroid]# vim /mnt/floppy/syslinux.cfg

This opens the vim editor (see Chapter 16 for details about vim and other editors) on the file syslinux.cfg, which controls the booting process. The changes are in boldface, and in the append line (at the bottom), substitute your root device for ours.
# default install
default boot
prompt 1
timeout 30
display boot.msg
F1 boot.msg
F0 syslinux.cfg
label install
kernel vmlinuz
append vga=274 debug=2 nosmp noapic BOOT_IMAGE=install initrd=initrd.gz local
label boot
kernel vmlinuz
append vga=274 debug=2 nosmp noapic BOOT_IMAGE=boot initrd=initrd.gz local root=/dev/hda2

Save the file. Then unmount the floppy. This is an important step and one that is often forgotten. Unlike Windows or DOS, Linux and its cousins don't always write directly to a device, but buffer the output so that the OS can continue. If you pop out the floppy without unmounting it first, it's likely to have corrupt data on it. Unmount the CD-ROM too, before you eject it from the drive.

[root@donovan syroid]# umount /mnt/floppy
[root@donovan syroid]# umount /mnt/cdrom

Now reboot your system to test the rescue floppy. Make sure that the BIOS is set to boot from the floppy drive (sometimes referred to in the BIOS setup as "Removable Drives") before the HDD. Problem: The drive access light on my system is constantly blinking. The system is running OpenLinux eDesktop. Can I (or should I) make it stop? Solution: There is a little Compact Disk icon in the KDE system tray (in the lower right corner of the desktop) from a program called kautorun. This program monitors the CD drive in order to auto-play audio disks or auto-start/mount data discs. Right-click on the icon, and select "Quit." As long as you log out of that session normally, kautorun won't return to bother you. If you log out of a later session with kautorun enabled, then it will return again at the next login. Problem: OpenLinux is installed in 15GB of space at the back of a large hard disk (more than 30GB). The install went fine and the first boot went just fine. Now, neither Windows nor eDesktop will boot. I thought Grub could handle big drives better than the old bootloader. Do I have to start over? Solution: There are several ways to approach this problem.
We'll give you the easy, and then the fast answer. The easy solution is to boot with a DOS or Windows boot diskette, and at the DOS prompt, type fdisk /mbr. This clears the non-functioning version of the bootloader out of the drive, and has always returned our Windows partitions to regularity. Once Windows is booting again, reinstall OpenLinux. Ensure that you put a checkmark in the "Write master boot record" box on the Set Up Bootloader screen during install. Then Grub will properly manage booting both Windows and Linux. There are no guarantees that come with our information. These sorts of solutions have worked well for us today and in the past. But programs like fdisk work on the hardware at low levels, and have the potential to kill as well as cure. Of course, if no operating system is booting, then there's very little to risk, is there? The much faster (but more challenging) method involves booting into OpenLinux using either the rescue floppy, or starting up by booting into the system using the Installation CD-ROM, as discussed previously in this section. Then, as root, edit /etc/grub.conf, to reflect the locations to install the stage 1 and stage 2 bootloaders. A typical grub.conf file follows, used to dual-boot one of our systems (lines 6 and 7 are continued for display purposes, but should be typed as a single line):

1 #
2 # /etc/grub.conf - generated by Lizard
3 #
4 #
5 root (hd0,1)
6 install /boot/grub/stage1 (hd0,1) /boot/grub/stage2 0x8000 (hd0,1)/boot/grub/menu.lst
7 install /boot/grub/stage1 d (hd0) /boot/grub/stage2 0x8000 (hd0,1)/boot/grub/menu.lst
8 quit

The configuration file is actually a list of commands to the grub utility, instructing it where to put various pieces of the bootloader. Our actual partition table for this system currently has Windows loaded on hda1, OpenLinux loaded on hda2, and a swap partition as hda3. Grub speaks a slightly different language.
Drives are numbered, rather than lettered (Linux hda is equivalent to Grub's hd0), and partitions are comma-separated from the drive ID, and are numbered, starting with zero. So Grub refers to hda2 as hd0,1. In the configuration file, the Stage 1 bootloader is put into both the hda2 partition, and into the MBR. If your system won't boot, line 7 is the one most likely to be missing. The key to writing the bootloader into the MBR (letting Grub handle boot management for you) is the "d (hd0)" part of line 7. You'll need to make the information specific to your system though. Pay special attention to the root drive specification. If your root partition is hdb7, then line 5 should read root (hd1,6). Once you've modified the grub.conf file to your system's requirements, you need to install the modified Grub system. Unlike modifications to the /boot/grub/menu.lst file, grub.conf changes must be explicitly installed.

[root@donovan syroid]# grep -v ^# /etc/grub.conf | grub --batch
Probe devices to guess BIOS drives. This may take a long time.

GRUB version 0.5.94 (640K lower / 3072K upper memory)

[ Minimal BASH-like line editing is supported. For the first word, TAB
lists possible command completions. Anywhere else TAB lists the possible
completions of a device/filename. ]

grub> root (hd0,1)
Filesystem type is ext2fs, partition type 0x83
grub> install /boot/grub/stage1 (hd0,1) /boot/grub/stage2 0x8000 (hd0,1)/boot/grub/menu.lst
grub> install /boot/grub/stage1 d (hd0) /boot/grub/stage2 0x8000 (hd0,1)/boot/grub/menu.lst
grub> quit

The command line at the top of the listing uses the grep utility to strip out the comments from the configuration file, then feeds what remains to the grub program, in batch mode instead of the default interactive style. If Grub runs with no error messages then all is well.
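If you want to preview exactly what will be fed to Grub, the comment-stripping half of that pipeline can be tried harmlessly on its own. A minimal sketch, run against a throwaway sample file rather than your real /etc/grub.conf:

```shell
# Demonstrate the comment-stripping step on a disposable file --
# this does NOT run grub or touch the real /etc/grub.conf.
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
#
# /etc/grub.conf - generated by Lizard
#
root (hd0,1)
quit
EOF
# grep -v '^#' drops every line beginning with '#', leaving only the
# commands that grub --batch would read on its standard input.
STRIPPED=$(grep -v '^#' "$CONF")
echo "$STRIPPED"
rm -f "$CONF"
```

The same grep invocation against /etc/grub.conf produces the command stream that grub --batch then executes.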
One caveat - if there's a blank line in the grub.conf file, then Grub will print a message like:

grub> Error: Unrecognized command

This error is not a real problem, as Grub simply doesn't like blank input and displays its displeasure. Now, eject the boot floppy or CD-ROM, reboot the system (usually by typing shutdown -r now, where the -r stands for "reboot"), and test your freshly dual-booting computer. In this section, we address various learning opportunities that aren't related directly to installation. However, these topics often provide an immediate target for that "urge to fix something broken" that overcomes us once in a while. In addition, there are a few problems mentioned in this section, like the upcoming Netscape issue, which are also security updates in disguise. Have a look around. If any of the items listed here are giving you more excitement than you bargained for, then the answers are here. Problem: Netscape crashes when I look hard at the screen! When I try to access a secured site, access my IMAP e-mail account, or use Webmin, Netscape 4.72 suddenly quits. How can I fix this? Solution: Netscape 4.72 is the version that ships with OpenLinux 2.4. The Netscape program and a couple of shared libraries that are part of the XFree group of packages need to be upgraded for more stable Netscape operation. Additionally, the 4.73 version provides fixes for recently uncovered vulnerabilities in the SSL transaction handling routines. This upgrade is strongly recommended. The most recent (current as of July 2000) versions of the required RPM packages are:

XFree86-libs-3.3.6-4.i386.rpm
communicator-4.73-2.i386.rpm
xswallow-1.0.18-2.i386.rpm

These files (or possibly even newer versions) are available from the site ftp://ftp.calderasystems.com/pub/eDesktop/updates/current.
Once you have downloaded them, install the RPM files as the root user, in the order shown below:

[root@gryphon RPMS]# rpm -Fvh XFree86-libs-3.3.6-4.i386.rpm
XFree86-libs #########################################################
[root@gryphon RPMS]# rpm -Fvh communicator-4.73-2.i386.rpm
communicator #########################################################
[root@gryphon RPMS]# rpm -Fvh xswallow-1.0.18-2.i386.rpm
xswallow #########################################################

The options passed to the rpm utility above, "-Fvh", instruct it to use the new files to "freshen" the existing installed software, be verbose (display messages about program activity), and print a list of hash marks (as seen previously) to visually indicate the installation of the requested package. By the way, the difference in RPM-speak between freshen, update, and install is that freshen only installs if an older package version is present, while a plain install adds the package with no reference to prior versions. Update installs regardless, but runs the installation scripts in such a way as to be extra careful of user-modified configuration files. A late security update just crossed our radar screens. We have seen warnings from several different Linux vendors that there is a buffer overflow exploit that affects all versions of Netscape between 3.0 and 4.73. At the time of this writing, there were no updates for this problem available from Caldera. However, we do anticipate a release soon. If you can't find it in the updates, then uninstall your Netscape RPMS, fetch the latest 4.75 from the Netscape site, and install it manually following the excellent instructions on that site. Problem: Does OpenLinux 2.4 come with any TrueType fonts? How do I use them? Solution: OpenLinux 2.4 comes with a large number of TrueType fonts. They are found in the larabie-fonts RPM on the Installation CD-ROM, and are only loaded by default with the All Packages installation.
Mount the disk and install the package as follows:

[root@gryphon RPMS]# mount /mnt/cdrom
[root@gryphon RPMS]# cd /mnt/cdrom/Packages/RPMS
[root@gryphon RPMS]# rpm -Uvh larabie-fonts-1.0.i386.rpm
larabie-fonts #########################################################
[root@gryphon RPMS]# cd ; umount /mnt/cdrom

Once you have installed larabie-fonts, restart the X Server. This can be accomplished by logging out of KDE, then using the "Restart X Server" option after selecting Shutdown from the Login dialog box, as shown in Figure 5-5. Restarting the X Server from the Login Dialog Box Another way to restart the X Server is by typing Ctrl+Alt+Backspace (after closing any open applications, since this method forcibly closes all running user processes, then kills and restarts X). If the fonts are still not available, be sure that the FontPath variable is defined in your /etc/XF86Config file for the TrueType fonts. There should be a line that reads: If this line is not present, then enter it on a separate line next to the rest of the FontPath statements in the Section "Files" of /etc/XF86Config. Restarting the X Server is also necessary after modifying the configuration file as discussed. Problem: One of the earliest oddities we noted when beginning this odyssey was an unexplained error message in the shutdown sequence, on a fresh installation. It is generally our experience that fresh installations are not supposed to have error messages, as this indicates a broken package. The error reads:

keyserver forget to set AF_INET in udp sendmsg. Fix it!

Solution: OpenLinux 2.4 comes with a "lite" version of the Caldera image management software, Cameleo. When installed, there are two services related to Cameleo that run by default - /opt/cameleo/bin/calserver and /opt/cameleo/bin/keyserver. There are some very interesting explanations for this behavior and the associated error message.
Our best detective work indicates that the error lies with the keyserver program, and that it is probably harmless, as the Caldera Knowledge Base claims. On the other hand, these services only need to be running if you are using Cameleo. To disable the Cameleo services, open the menu K --> Settings --> COAS --> System --> Daemons. From the displayed dialog box, uncheck the box next to Cameleo Servers. After one more system shutdown, the error message will trouble you no more (unless you re-enable the services). Problem: How can I mount, read, and write to my Windows (FAT32) partition as a normal user, rather than just as root? Solution: Edit the /etc/fstab file and change the "/dev/hda1..." (or whichever partition is appropriate) line to read as follows:

/dev/hda1 /mnt/hda1 auto defaults,user,suid,nodev 0 0

Unmount the partition as root and then log in as a regular user. You are then able to mount it without problems. Test the read/write by typing touch /mnt/hda1/testwrite. If the file is created, then everything is working fine (and the file can be deleted). Problem: I can't change my video modes - all I get is one resolution in X. I have read that I can use Ctrl+Alt+KP_Plus or KP_Minus (from the numeric keypad) to increase or decrease my screen resolution, but when I do this, nothing happens. Why? Solution: The installation process only allows the selection of a single video mode. Recall that a video mode is a combination of display resolution and color depth that your monitor can show, given its operating characteristics.
In the screen section of the configuration file /etc/XF86Config, you can see that for each color depth, one or more display resolutions can be listed, as the following listing shows:

Section "Screen"
   Driver "Accel"
   Device "ATI Mach64 Mobility"
   Monitor "LCD 1024x768"
   DefaultColorDepth 32
   BlankTime 0
   SuspendTime 0
   OffTime 0
   SubSection "Display"
      Depth 8
      Modes "1024x768" "640x480"
      Virtual 0 0
   EndSubSection
   SubSection "Display"
      Depth 15
      Modes "1024x768" "640x480"
      Virtual 0 0
   EndSubSection
   SubSection "Display"
      Depth 16
      Modes "1024x768" "640x480"
      Virtual 0 0
   EndSubSection
   SubSection "Display"
      Depth 24
      Modes "1024x768" "640x480"
      Virtual 0 0
   EndSubSection
   SubSection "Display"
      Depth 32
      Modes "1024x768" "640x480"
      Virtual 0 0
   EndSubSection
EndSection

There are a number of sections in an XF86Config file, which address mouse, keyboard, and a variety of screen server types, from standard VGA16 through the specialized accelerated X Servers, shown in the preceding excerpt. Ignoring many of the other features, notice that the color depth (DefaultColorDepth) is set to 32. This instructs us to look to the Display subsection for 32-bit color, where the listed modes are "1024x768" and "640x480". These modes are written as quoted and run-together strings because the data 1024 x 768 actually has no meaning for the X Server. These are just convenient human-readable labels which key into the complete mode lines that the X Server uses to set up the video hardware appropriately. Running the program XF86Setup generated this particular XF86Config file, which already has multiple modes available for each possible color depth. As addressed earlier in this chapter, XF86Setup has a control called "Mode Selection" that allows multiple modes (resolutions) to be chosen and written into the XF86Config file. Log in to X as the root user, and type XF86Setup in a terminal window. An initial text dialog box appears in the terminal window, ending with the question "Is this a reconfiguration?"
Press Tab until "< YES >" is highlighted (if it isn't already), then press Enter. XF86Setup then starts up in a new graphical window. Choose Modeselection from the buttons at the top of the window. This allows you to choose multiple resolutions and color depths using the buttons on the dialog box, as shown in Figure 5-6. Choosing screen resolutions in the "Modeselection" dialog box from XF86Setup. Depending on your hardware configuration, some options may be selected, yet not allowed. For instance, we can choose 1600x1200 for this laptop screen, but when we later examine the resulting XF86Config file, "1600x1200" is not one of the options available. This is because there is no way to display that resolution on this LCD, so there is no valid mode line with the "1600x1200" label, and it isn't accepted and written into the Screen section. Once you have made these modifications to the configuration file, restart X to incorporate the changes. Following that, you may use Ctrl+Alt+KP_Plus or Ctrl+Alt+KP_Minus to switch between valid resolutions without restarting the X Server. Some laptops (like Gryphon, the previously mentioned Acer Travelmate) have a number of keys that are additionally designated as function keys, and using Ctrl+Alt in combination with other keys often has unexpected results. This is especially true in the circumstances described here, since the numeric keypad is a special set of overlaid keys. You may not be able to do dynamic mode switching at all on a laptop computer. Problem: After installing Perfect Backup from the commercial software disk of my retail eDesktop version, a new user, pbadmin, appeared on the list of users at the login dialog box when X starts up. Is this right, and can I make the pbadmin user disappear without breaking Perfect Backup? Solution: The user pbadmin is automatically created as a part of the Perfect Backup installation process.
Do not delete pbadmin; this "user" is a necessary part of the Perfect Backup system, as it has the specific system permissions necessary to accomplish the required tasks. However, you can hide this user on the login screen dialog box. Log in as the root user, and use the menus to select K --> Settings --> Applications --> Login Manager. While we cover the Login Manager in Chapter 8, Figure 5-7 shows the Users tab page from the KDM Configuration dialog box, displayed when Login Manager is run. Moving users about in the KDM Configuration dialog (KDE Login Manager). To keep the user pbadmin from showing at the Login dialog box, highlight the user on the list at left, then select the lower >> button to move pbadmin into the No-show users list. Alternatively, move all the users back into the main list, then move the users you explicitly want into the Selected users list, and choose the "Show only selected users" radio button. Then, any additional users added during software installation are not shown by default. The drawback to this latter method is that each new, real user added to the system must also be added to the Selected users list by hand. Problem: I've tried OpenLinux, and for now I want to go back to running just Windows. I formatted the whole hard drive. Then, during the Windows installation process when I reboot, I keep seeing Grub. How can I erase Grub from the hard disk (and how did Grub survive formatting the disk)? Solution: Grub is written into the Master Boot Record of the booting hard disk, and is executed in order to load any operating system; formatting rewrites the partitions, not the MBR, which is how Grub survived. To remove Grub, boot the system using a DOS or Windows installation floppy and type fdisk /mbr. Then continue with the Windows installation. Problem: In previous versions of OpenLinux that used the Lilo bootloader, I was able to specify a system runlevel at the boot prompt. How can I do this with Grub?
Solution: Changing from the default runlevel (usually 5, which is defined in OpenLinux as both multi-user and GUI) is a little better hidden in Grub than it is in Lilo. However, making the change is quite easy. When the booting Linux version is highlighted on the Grub boot screen, type e to begin the edit process. Use the arrow keys to select the second line, beginning with "kernel." Type e again, to edit that line. The cursor is pre-positioned at the end of the line (which may be wrapped on your screen, depending upon line length). Type the number of the desired runlevel (usually between 1 and 3, since 4 is unused, 5 is the default, and 6 is reboot), and press Enter to close the edit. Then type b to continue booting, using the revised entry. Descriptions of the OpenLinux runlevels (different distributions may define runlevels differently) are shown in Table 5-2: List of OpenLinux 2.4 eDesktop Runlevels

0 - Complete system halt.
1 - Single user mode, frequently used for system maintenance purposes.
2 - Multi-user mode, without NFS.
3 - Full multi-user mode.
4 - Unused in the current scheme.
5 - Full multi-user mode plus launch GUI environment.
6 - System restart mode (aka reboot).

This solution makes a one-time-only change. To craft a more permanent change to the system, edit the file /boot/grub/menu.lst. For instance, if you frequently need to boot into single user mode (runlevel 1), then create a new boot stanza which is a copy of the current one, give it a new title like "Linux Single," and add a "1" to the end of the kernel line. The new runlevel option then appears on the boot screen thereafter. Problem: After running the Lizardx program from the console (to configure XFree86 after installation is complete), the console displays an error message:

Fatal IO Error: X Client Killed

Did the program crash (as it would appear), or did it work correctly? Solution: This is a completely normal occurrence.
From the perspective of the console process (where lizardx was started), the program dies inexplicably, thus the error message. As long as lizardx is completed by the user selecting the OK button, the program terminates properly. Problem: I can't connect with the Internet after installation. No problems with the modem - I am on a network. Is there a problem with the gateway or DNS data? Can I change the DNS server configuration on my computer? Solution: To check or change the DNS information for your system, use the taskbar menus: K --> Settings --> COAS --> Network --> TCP/IP --> Resolver. If you are the root user, the Name Resolver Setup dialog box is displayed (as shown in Figure 5-8). Regular users may run the COAS tools. However, most of the utilities are superuser tools. Prior to the requested dialog, a root password prompt dialog appears, and it must be correctly completed to run the desired utility. Making DNS changes with the Name Resolver Setup dialog box To change or add DNS servers, press the button next to the "DNS servers" text. The button title is either "none" to indicate that no servers have been defined or, as Figure 5-8 depicts, a list of the current DNS servers. This action brings up the DNS Name Servers dialog box, which allows you to edit, add, delete, or change the query order of the defined servers in an intuitive manner. Alternatively, for the command line aficionado, the file containing name resolution server addresses is /etc/resolv.conf. This file can be modified with any text editor. However, the most common problem with network connectivity is that the route to the gateway server is sometimes left unset coming out of installation. To rectify the oversight, use K --> Settings --> COAS --> Network --> Ethernet Interfaces. An example of the resulting dialog box, Ethernet Interface Configuration, is shown in Figure 5-9. Ethernet card and interface configuration dialog box, default route disabled. 
Press the Disabled button next to "Default route" to enable network routing, and then enter the correct information for your gateway server. If you are uncertain about the address of the gateway, then check with your ISP or System Administrator for the correct information. Close the dialog box by pressing OK. Problem: After switching my drive to the second IDE controller, Grub stops at stage1. How can I make the drive work in a new location? Solution: Mass storage devices are assigned in Linux according to their actual location on the IDE controllers, unlike the DOS/Windows environment, where the first hard drive is always drive C:, whether it is on the Primary or Secondary IDE controller. Table 5-3 shows the controller channels and device assignments:

IDE Channels vs. Linux Device Names

IDE Controller Channel    Linux Device Name
Primary Master            /dev/hda
Primary Slave             /dev/hdb
Secondary Master          /dev/hdc
Secondary Slave           /dev/hdd

Let's assume for the purposes of illustration that the system was originally set up on a single hard disk designated as Primary Master (thus /dev/hda). The drive is then moved to the Secondary Master position (/dev/hdc), which breaks Grub. Our hypothetical prior partition layout is as follows:

Windows   /dev/hda1
/         /dev/hda2
swap      /dev/hda3

After moving the boot drive to another channel, there are only a few steps necessary to clean up the mess. First, note that all the partitions are now in the same locations, except on /dev/hdc. So the OpenLinux root partition is on /dev/hdc2. After updating /etc/grub.conf for the new location, reinstall the bootloader as described earlier in this section:

grep -v ^# /etc/grub.conf | grub --batch

Then edit /etc/fstab, changing each /dev/hda reference to /dev/hdc. For reference, our file as it stood before editing:

1 # /etc/fstab on gryphon :: bilbrey
2 #
3 devpts /dev/pts devpts gid=5,mode=620 0 0
4 /proc /proc proc defaults 0 0
5 /dev/cdrom /mnt/cdrom iso9660 ro,user,noauto,exec 0 0
6 /dev/fd0 /mnt/floppy auto defaults,user,noauto 0 0
7 /dev/hda1 /mnt/hda1 vfat ro 0 0
8 /dev/hda2 / ext2 defaults 1 1
9 /dev/hda3 swap swap defaults 0 0

Finally, reboot the system (reboot, or shutdown -r now, if you prefer, as there's no functional difference).
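The fstab edit in that cleanup can be rehearsed without any risk to the system. This sketch applies the /dev/hda-to-/dev/hdc substitution to a disposable sample file (on a real system, back up /etc/fstab before editing it in place):

```shell
# Rewrite /dev/hda references to /dev/hdc -- demonstrated on a
# throwaway sample file, never on the real /etc/fstab.
FSTAB=$(mktemp)
cat > "$FSTAB" <<'EOF'
/dev/hda1 /mnt/hda1 vfat ro 0 0
/dev/hda2 / ext2 defaults 1 1
/dev/hda3 swap swap defaults 0 0
EOF
sed 's|/dev/hda|/dev/hdc|g' "$FSTAB" > "$FSTAB.new"
# The root filesystem entry (mount point "/") should now be /dev/hdc2,
# and no /dev/hda references should remain.
NEWROOT=$(awk '$2 == "/" { print $1 }' "$FSTAB.new")
LEFT=$(grep -c '/dev/hda' "$FSTAB.new" || true)
echo "new root device: $NEWROOT"
rm -f "$FSTAB" "$FSTAB.new"
```

Once the sed output looks right, the same substitution can be applied to the real file (after making a backup copy).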
If the boot does not go according to plan, watch the error messages carefully for hints to the underlying misconfiguration, and repeat this process, checking all of the changes and additions. The first time we tried this, a simple typographical error took about 20 minutes to find. We were so sure that we had entered all the changes correctly. Problem: How can I remove BootMagic from my hard disk? Solution: Boot into Windows, then run the BootMagic configuration routine. Uncheck the box marked "BootMagic Enabled," then uninstall BootMagic. If that does not completely clear BootMagic from your drive, then boot your system using a DOS or Windows boot floppy, and type fdisk /mbr at the DOS prompt, to clean out the vestiges of BootMagic. Problem: When I close my notebook and open it again, the system locks. I can't do anything but power-cycle the computer to reboot. Can I make Linux work properly with my notebook? Solution: The behavior of Linux on a notebook is strongly dependent on the built-in routines that the hardware and BIOS provide for the various functions specific to portable hardware. On Gryphon, the Acer Travelmate, closing the lid puts the machine to sleep automatically. Over approximately 25 cycles, only one ended with the system locked up solid. That's one too many, of course, as a hard boot is tough on the filesystem, and takes quite a bit of time (five to ten minutes) on reboot, to check and repair the partitions. The solution is apmd, the Advanced Power Management Daemon. Unlike many other distributions, Caldera neither packages nor supports apmd. The Caldera Knowledge Base does note that apmd can be acquired from the site http://www.worldvisions.ca/~apenwarr/apmd, and installed using directions from that location. Prior to installing apmd, you need to confirm that your kernel is compiled with Advanced Power Management BIOS Support enabled. By default, eDesktop does not have this feature enabled.
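A quick way to check whether a kernel configuration has that support turned on is to grep its .config file. The sketch below demonstrates against a throwaway sample; on a real system, point CONFIG at your kernel source's .config (commonly /usr/src/linux/.config, though that path is an assumption about your setup):

```shell
# Check a kernel .config for APM support -- demonstrated against a
# disposable sample file, so nothing on the system is touched.
CONFIG=$(mktemp)
cat > "$CONFIG" <<'EOF'
CONFIG_EXPERIMENTAL=y
# CONFIG_APM is not set
EOF
# A kernel built with APM support has the line CONFIG_APM=y.
if grep -q '^CONFIG_APM=y' "$CONFIG"; then
    APM_STATUS="enabled"
else
    APM_STATUS="missing"
fi
echo "APM support: $APM_STATUS"
rm -f "$CONFIG"
```

If the check reports "missing" on your own .config, the kernel needs to be rebuilt with the option enabled, as discussed next.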
Look to Chapter 7, Kernel Management, for details on compiling and installing a new kernel. The specific features to enable are located in the General setup section of the kernel configuration. There are a wide variety of other resources for working with and troubleshooting OpenLinux specifically. We catalog both print and online assets in the following lists. By the time this is published, there are going to be changes and additions - use online search tools to locate resources and check them out. Find what works for you and keep learning. Printed materials include: Online OpenLinux resources can be found at: Last, but by no means least, find and use your local Linux Users Group (LUG). We usually give readers GLUE (Groups of Linux Users Everywhere: http://noframes.linuxjournal.com/glue/) as a starting point in finding a local resource. Aside from usually regular meetings, LUG members often hold installfests. There the new user can bring in her hardware and GNU/Linux disks and get direct, hands-on assistance through installation. We strongly recommend this route for the utterly new user. To reiterate, there are four keys to successful troubleshooting: make one change at a time, document the process, concentrate on observing the results, and generalize from past successes and failures. The handful of challenges addressed in this chapter are very similar to some of the other troubles that may crop up. The goal isn't to show you how to fix specific difficulties (although that's a useful side-effect), but to provide you with insight into the types of problem/answer combinations that appear in a freshly-installed Linux environment. This chapter covered the following points:
The previous two chapters present differing views on the feasibility and the utility of calculating storage capacities. There is no doubt that exact calculations will always elude us, for lack of proper sources. In his contribution, Lars Blöck insists rightly on the modelling aspect of his calculations. Notwithstanding the justified scepticism of Javier Salido Domínguez, it remains true that even the maximum capacities estimated by Lars Blöck lead to a serious reappraisal of previously published estimates. However, Blöck himself stresses that his model is designed for the area corresponding to modern south-west Germany and that it should not be applied as it stands to other regions. The aim of this chapter is to present and study in greater detail the various parameters necessary for calculating storage capacities, in order to set out what could be considered an average situation, valid for the whole of the research area, and possibly beyond. In the north-western provinces, airtight storage in underground silos is not attested during the Roman period: it disappeared at the end of the Iron Age (La Tène D) until the Early Middle Ages (see Chapter 4 for the Iron Age, and various contributions in Vigil-Escalera, Bianchi, and Quirós Castillo 2013 for the Middle Ages). Calculating the storage capacity of an underground silo is fairly easy, since grain will be preserved only if the silo is completely filled before being sealed. Underground silos thus offer one of the rare cases when the total volume of a structure equals its operating volume. However, it should be noted that for Frédéric Gransar, Iron Age silos were used to store grain in spikes or spikelets, whereas for François Sigaut, silos are mainly used for grain that was already threshed and winnowed (Sigaut 1988; Gransar 2003). Sigaut does mention underground storage of spikelets, but as an uncommon phenomenon.
Sigaut used mainly historical and ethnographical documentation, whereas Gransar used archaeological evidence: this may well be the main reason for their differing interpretations. This matter is nonetheless worth exploring further, since chaff can represent 10 to 50% of the total weight of hulled grain (Ouzoulias 2006, 174. For emmer and spelt, see also Saunders 1904 who gives a lower proportion of chaff than Ouzoulias). As evidenced by clear archaeological data, the shape and size of granaries from the Roman period differed from their Iron Age predecessors (see Part II of this volume). In most regions, wooden granaries on posts were gradually replaced, from the first century CE onwards, by larger, stone-built structures. However, the underlying technical principles of storage remained unaltered: grain was still preserved through control of the atmosphere by ventilation (to use the categories devised by Sigaut 1988, 17). The following discussion therefore concerns all granaries, regardless of their architectural type and their chronology. There is likewise no technical difference between a small rural granary and a big urban or military structure. For this reason, it is legitimate, in my view, to use data from non-rural contexts to understand storage in the countryside and vice versa. However, due consideration must be given to at least two factors. First, the function of the storage structure under study in the chain between harvest and supply. This chapter will focus on storage in a rural context, in most cases on production sites. What is at stake is therefore likely to be mid- to long-term storage, either for on-site use and consumption, or for sale on the market at a later date, with good conservation as the main goal. 
Second, our reconstructions must naturally take into account all available archaeological data; primary evidence, however meagre, should guide our use of historical and ethnographical comparisons, and should take precedence over evocative but sometimes ill-founded parallels (see Halstead 1987, with examples from the Mediterranean). In trying to assess the storage capacities of granaries, scholars have taken two approaches. The older attempts took their cue from the buildings’ architecture, generally taking storage in bulk as a given. Thus Francis Haverfield and Robin Collingwood for wooden, and Anne Gentry for stone military granaries in Britain (Haverfield and Collingwood 1920; Gentry 1976). The first two authors thought that grain was stored in bulk, stacked in bins against the walls to an average height of 6 feet (ca. 1.80 m) on both sides of a central corridor. Gentry calculated that the walls of the Corbridge granary could stand the pressure of a grain heap of ca. 3 m (but this was not her favoured solution, as she thought grain was stored in sacks). More recently, Emanuele Papi and Francesco Martorella (2007) used the same method to reconstruct heaps as high as 4.90 m in a granary in Thamusida (in modern Morocco)! Without using such calculations, Gustav Hermansen (1982, 226–37) combined evidence from present-day Canada, where a storage height of 8 feet (ca. 2.45 m) is common, with the height of the storerooms in Ostia and Portus near Rome, to suggest 3 and 4 m heights for grain heaps in horrea from each site (he does acknowledge that these are maximum values, and that it is rather impractical to store grain in heaps higher than 2 m). This approach, which does not seem to have been applied to rural buildings, either does not give due consideration to the conditions necessary for grain preservation, or, in the case of Hermansen, rests rather uncritically on recent parallels.
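All of the architecture-based estimates above reduce to the same simple relation, which makes explicit how sensitive the results are to the assumed heap height. As a sketch (the floor area and density figures below are illustrative assumptions, not values taken from the studies cited):

```latex
C = A \times h \times \rho
```

where $C$ is the capacity by mass, $A$ the floor area actually covered by grain (less than the full footprint wherever corridors or bins are assumed), $h$ the height of the heap, and $\rho$ the bulk density of the stored produce. With a hypothetical $A = 50\ \mathrm{m^2}$, the 6-foot (ca. 1.80 m) heap of Haverfield and Collingwood, and an assumed $\rho \approx 750\ \mathrm{kg/m^3}$ for threshed wheat, $C \approx 50 \times 1.80 \times 750 = 67\,500\ \mathrm{kg}$; adopting the ca. 3 m heap Gentry tested instead raises the estimate by two-thirds, with no change in the building itself.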
Curiously, the thickness of granary walls does not seem to be interpreted in terms of thermal insulation, although this was well known to an eighteenth-century writer such as the abbot Tessier: “Les murs doivent être de bonne épaisseur, pour garantir les blés de l’humidité & de la chaleur” (“The walls must be of good thickness, to protect the grain from damp and heat”; Tessier 1793, 460; Plin. Nat. 18.300 is rather ambiguous but might be pointing to something similar). The second method starts from the nature of the stored produce rather than from the building. This appears wiser since, as Sigaut wrote (e.g. 1988, 18–19), storage structures are generally built according to the form of the produce. It is clear from various kinds of evidence that a single granary could host a variety of products, either different types of grain as for example in Amiens “ZAC Cathédrale” and Tiel “Passewaaij”, granary 47, or different types of produce, e.g. grain and legumes in Alle (Matterne, Yvinec, and Gemehl 1998, here cat. 250; Kooistra and Heeren 2007, here cat. 312; Demarez et al. 2010, here cat. 125 and 221). Furthermore, not everything might be for human consumption, although it can be difficult to distinguish between food and fodder (Jones 1996). However, much of the archaeological evidence does relate to the storage of grain, and in the following pages the focus will be on wheat and barley, the main crops for the period and region under study. The calculation of storage capacities needs to take four main parameters into account. The first one is obviously which crop is stored, since density varies from species to species. The second parameter is closely linked to the first: in what form is grain stored, i.e. how advanced is the processing of grain when it is brought into storage? Sigaut (1988, 6) distinguished among several forms of grain storage. Sheaves are notoriously hard to find in the archaeological record, all the more so because they can be stored in stacks and leave no tangible traces.
Some buildings have been identified as barns but only on typological grounds (Ferdière 1985). We do have hard evidence for storage of grain in bulk and grain in spikes or spikelets. The difference between naked or dehusked grain on the one hand, and hulled grain on the other, is key to understanding the nature of storage. Lars Blöck notes that finds of hulled grain appear more frequent in rural contexts, while dehusked grain is more common in urban and military settings (Blöck 2011–2012, 93–4). This is logical since grain stored hulled or mixed with chaff can be preserved for a longer period, as was already known to ancient authors (Plin. Nat. 18.306; see also Var. R. 1.72). Various estimates of grain density have been put forward; further research is needed but for the region under study, the data compiled by Pierre Ouzoulias will be used here, as they appear both consistent with ancient sources and with proposals by other authors (Ouzoulias 2006, 173–76. For emmer and spelt, Saunders 1904 and Stallknecht, Gilbertson, and Ranney 1996, both using North American data, give slightly lower values. But figures given in Kooistra 1996, 98 and 318, appear decidedly too low: see Chapter 6, this volume). As a general rule, we should assume, for our calculations, storage in spikelets or with chaff, rather than threshed and winnowed grain. Indeed, in the small urban granary of Amiens “ZAC Cathédrale”, the main crop, spelt wheat, was stored hulled (Matterne, Yvinec, and Gemehl 1998). If this building was the storage facility of a merchant, as hypothesized by the authors, this means he bought the grain unprocessed from the producer. The merchant would have had the grain threshed and winnowed later, before sale: it is clear from the Edict of Maximum Prices that selling fully processed barley was much more profitable (100 denarii per modius instead of 30 for husked grain: Edict. Diocl. 
1.7).4 However we should not completely rule out the possibility that grain was processed and sold directly by the producer (or the owner of the estate). The rare discovery of a second-century CE shipwreck with its cargo in Woerden (NL) confirms that grain could be shipped in bulk, threshed and winnowed (Pals and Hakbijl 1992; see Haalebos 1996 for the archaeological context). The grain appears to have been stored for some time, already threshed and winnowed, before shipment: there is no way to decide whether this storage took place on the production site or elsewhere. The third parameter to consider is the usable surface area of the granary. The need for free space is regularly mentioned in agronomic literature, at least since Olivier de Serres (1600, 135), but although Haverfield and Collingwood took it into account in 1920, it does not appear in all estimates of storage capacity. The necessary amount of free space is conditioned by the form of storage. There are three ways of storing grain in granaries: in bulk, in bins or in sacks (to my knowledge, storage in large jars, as known in the Mediterranean, is not attested in north-western Europe). Storage in bins is now rejected by all scholars for one good reason: there is no archaeological evidence for it, even in the best preserved granaries (Rickman 1971, 85; Gentry 1976, 18). Storage in bags is favoured by some, most notably Geoffrey Rickman and Anne Gentry, followed by Anne Johnson (1983) and more recently by Tobias Schubert (2016, 336–37) in his dissertation on rural granaries from the Lower Rhine.5 In all cases, the authors consider the grain naked or threshed and winnowed. The first two had a very rational approach, underlining the practicalities of storage in sacks for handling (Rickman 1971, 85–86; Gentry 1976, 18–22). 
But one can agree with Catherine Virlouvet and Nicolas Monteix when, discussing Rickman, they write that although his ideas look good on paper, he underplays the constant care needed by grain unless it is very dry (Virlouvet 2015, 680; Monteix in Boetto et al. 2016, 213–14). Furthermore, the presence of saccarii near horrea does not mean that goods were necessarily carried in bags, since the term apparently designated all kinds of porters (Virlouvet 2015, 676–77). Schubert (2016, 336) adds one piece of archaeological evidence in favour of storage in sacks. In 1818, a fourth-century CE burgus was excavated in Engers (near Koblenz, DE). The evidence points to a violent destruction by fire already in ancient times, which led to the charring of large quantities of grain, mixed with melted lead. The presence of this lead, as well as, according to Schubert, the mixture of various kinds of grain, is for him evidence that grain was stored in bags. It is worth citing here the original report as published in 1826: Ausser den römischen Gussmauern und sonstigen Baumaterialien gehören zu den umbezweifelt achten, hier vorgefundenen Anticaglien, die Scherben von terra cotta, Gefässstücke aus grobem Thon; so wie die sehr grosse Masse verbrannten Getraides, welches in Schichten von 3 bis 9 Zoll Dicke [ca. 8 to 24 cm], an einigen Stellen sogar mehrere Fuss hoch lag [in this case, 1 foot = ca. 29 cm]; es war Roggen [rye], Gerste [barley], meistens aber Weizen [wheat], wahrscheinlich also ein Getraide-Magazin gewesen. Die Menge grosser Stücke tropfsteinartig geschmolzenen Bleies, die sich in dem Getraide vorfanden, könnten zu der Vermuthung fuhren, der Thurm sei mit Bleiplatten gedeckt gewesen (Dorow 1826, 24). In translation: “Besides the Roman mortared walls and other building materials, the undoubtedly genuine antiquities found here include sherds of terracotta and vessel fragments of coarse clay, as well as the very large mass of burnt grain, which lay in layers 3 to 9 inches thick [ca. 8 to 24 cm], in some places even several feet high [in this case, 1 foot = ca. 29 cm]; it was rye, barley, but mostly wheat, so this had probably been a grain magazine. The quantity of large pieces of lead, melted into dripstone-like forms, found in the grain might lead to the supposition that the tower had been roofed with lead plates.” First of all, nowhere is it written that the various kinds of grain were mixed.
Second, although it is written that lead was indeed found in the grain, it is not clear why this should indicate storage in bags, all the more so since Schubert accepts Dorow’s interpretation that it came from the roof. Third, 8 to 24 cm layers are consistent with storage in bulk; heaps several feet high less so, but the original report seems to indicate that this was not the general situation (see below for a discussion of the height of the heap). Based on comparison with similar buildings, there was probably at least one upper floor: could the grain have fallen during or after the fire (see the better preserved burgus in Zullestein: Baatz in Reddé et al. 2006, 227–28, s.v. Biblis)? Most importantly, the type of site must be taken into consideration: Engers is a late-antique fortified post interpreted as a harbour or a landing station, probably built during Valentinian’s reorganization of the Rhine frontier. Can the storage method adopted in this context be applied to earlier civilian contexts? This does not mean that storage in bags did not exist; during the excavation of Amiens “ZAC Cathédrale”, some textile fibres were recovered among the grain, possibly indicating sacks (Matterne, Yvinec, and Gemehl 1998, 106). But it should be remembered that we are dealing here with hulled spelt, and not naked or dehusked grain, stored in a probable merchant warehouse, and not on a production site. As a general rule, sacks are only suited to short-term storage (Geraci and Marin 2016, 88). All in all, storage in bulk, whether the grain was fully processed or still in spikelets, is the most likely solution. For what this testimony is worth, it should be noted that storage of hulled grain on the floor is described in the peasant setting of the Moretum, a short poem formerly attributed to Vergil (Mor. 16).6 It is thus supported by archaeological and textual evidence, as well as historical and ethnographical parallels.
In military granaries, or more generally large rectangular granaries, scholars normally assume a central corridor at least as wide as the entrance door, with grain stacked in heaps against the walls. Haverfield and Collingwood, Gentry, Papi and Martorella, and Monteix concur on this point. Yet from the eighteenth century onwards, agronomists have insisted on the need to keep some free space between the walls and the heaps, varying from ca. 30 to 120 cm (1 to 4 feet) (Table 3.1). In addition to providing room to move around, this was to avoid both dust and pests falling from the walls, and grain falling in the gaps between the wall and the floor (particularly if it was made of boards).

Table 3.1
- Duhamel 1768: ca. 60 cm between wall and grain
- Duhamel 1768: in the Grenier d’Abondance in Lyon, ca. 120 cm between wall and grain
- Krünitz 1788: ca. 60 cm between wall and grain
- Diffloth 1907: 40 to 50 cm between wall and grain, and a central passage at least 100 cm wide
- Haverfield and Collingwood 1920: central corridor of ca. 90 cm (theoretical)
- Richmond and McIntyre 1938–1939: central corridor of ca. 300 cm (theoretical)
- Demarez et al. 2010: 100 cm around the heap of grain (theoretical)

To this should be added the space necessary for accessing the granary and walking around the heaps, whether a single central corridor or smaller passageways. Finally, extra space is needed if grain is stored threshed and winnowed, because it needs frequent shovelling. The operation consists in moving the grain from one place to another by tossing it in the air with a shovel (hence the name). It should be done very frequently in the first six months after the harvest, with decreasing frequency until the end of the first or second year (authors vary). The problem with shovelling in Antiquity is that ancient authors apparently do not mention it at all; the earliest textual evidence of the practice dates to the sixteenth century (Beutler 1981, 37–39).
This led Sigaut (1988, 8) to think that it had not become common practice until the eighteenth century. However, it is hard to conceive storage of threshed and winnowed grain in bulk (archaeologically attested in Roman times) without shovelling: this would have led to an incredibly high rate of wastage hardly compatible with the existence of surpluses needed to feed city-dwellers and soldiers (on the need for shovelling, similar view in Matterne 2001, 150; on wastage, see below). Shovelling is apparently not needed when grain is stored hulled or mixed with chaff (Beutler 1981, 36; in the second half of the eighteenth century, this storage method was apparently disappearing; see also Sigaut 1981, 165, and 1988, 13–14). Given these various factors and the absence of ancient documentation on the matter, not to mention the fact that the situation may have varied with time and space, it is very hard to give an estimate of free space. Recently, Jean‑Daniel Demarez, studying the granaries from Alle, put forward the hypothesis that storage in heaps would have left 70% of the granary unused (and “only” 40% if produce was stored in bins: Demarez et al. 2010, 393–94). However, this is nothing more than an educated guess and it seems safer to turn to historically attested examples. I found two, both from the eighteenth century and both for threshed and winnowed grain in bulk. Very recently (2016), Enrico Da Gai and Giulia Vertecchi published a 1788 document regarding the calculation of the storage capacities of the public granaries in Terra Nova in Venice (buildings now destroyed, but well documented). They arrive at a maximum usable surface area of ca. 85% of the total area. This is very close to the conclusions reached by Duhamel du Monceau in 1768 about the maximum usable surface area of the Grenier d’Abondance from Lyon. 
He calculated a maximum value of 81% of the total area, but noted himself that the available free space would be barely sufficient (Duhamel du Monceau 1753, 235–36, 1768, 249–51). Earlier in his book, making a theoretical calculation, he had come to the conclusion that 68.5% of the total area could be used for storing grain in heaps (Duhamel du Monceau 1753, 14–16, 1768, 13–16). Interestingly, the space needed for shovelling only accounts for less than 3% of the total area of the granary; but this is in stark contrast with the recommendations of Paul Diffloth in 1907, who wrote that each heap of grain should be matched by an equal surface of free space, in order to shovel the grain from one spot to another (Diffloth 1907, 360).7 All in all, it seems advisable to consider that when grain is stored in bulk, be it threshed and winnowed, with chaff or in spikelets, at least 1/3 of the granary needs to be kept free. Further research is needed to determine whether the examples quoted here are representative of ancient practices. The fourth parameter is closely linked to the previous one and concerns the height of the grain heap. Figures found in the literature vary from 20 to 490 cm. In Table 3.2, I have compiled as much data on the subject as was available to me (but with no claim to exhaustiveness). I have classified it according to the nature of the reasoning: unknown, based on the architecture of the storage building, based on agronomy and/or historical examples (it is often hard to distinguish between the two in older literature), based on archaeological evidence. Table 3.2 makes it very clear that a mean height of 20 to 40 cm is consistent with archaeological data, agronomic literature and historical cases from the seventeenth to the early twentieth centuries.
This margin, given by almost every author for fresh grain (Reneaume is an exception), can be applied with confidence to the ancient countryside: since harvests were annual, grain stored on rural sites was mostly fresh. The discovery of Amiens “ZAC Cathédrale” shows that these values apply both to hulled and naked or dehusked grain. Since archaeological finds consist mainly of charred grain, it is important to note that charring has a significant impact on the weight but not on the volume of the grain (Ferrio et al. 2004, in particular 1636, tab. 1 and 1638, tab. 2). It should be stressed that layers might not always be fully preserved, although it appears to be the case in Amiens. The maximum height of the heap for dry grain is much more variable. Sigaut gives 100 cm as a maximum for grain stored in bulk and shovelled; in the late eighteenth century, similar values are given by Krünitz and Tessier. But in the early twentieth century, Diffloth and the Larousse agricole give lower figures, as does the Venetian archival document from 1788 cited above. The finds from Ribchester (up to 60 cm) and Engers (several feet at some points), both military, may thus have consisted of dry grain. Of course, the grain heap cannot be entirely flat. Grain being a semi-fluid, the heap is conical in shape. It is stable when the angle is within the range of 25 to 30° (De Lucia and Assenato 1992 [fr] = 1994 [en], appendix 2; Brinkkemper 1993, 150; see also Monteix in Boetto et al. 2016, 215 with note 113. In this volume, Ferdière opts for an angle of 45° which may thus be too steep). In theory, then, the wider the heap, the higher it may be. Otto Brinkkemper (1993, 149–50), for instance, trying to work out the storage capacity of two granaries with a usable surface area of ca. 16 m², reconstructs a single heap reaching 115 cm, with a 30° slope, equivalent to an even layer of grain 30 cm thick. 
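Brinkkemper's equivalence can be verified with a little solid geometry; the sketch below uses only his published values (a usable floor of ca. 16 m², a heap 115 cm high, a 30° angle of repose) and confirms that such a cone holds the same volume as an even layer of roughly 30 cm:

```python
import math

# Check of Brinkkemper's (1993) equivalence: a conical heap 1.15 m high with a
# 30 degree angle of repose, on a usable floor of ca. 16 m2, corresponds to an
# even layer of about 30 cm.

def cone_volume_m3(height_m: float, slope_deg: float) -> float:
    """Volume of a cone whose side makes slope_deg with the horizontal."""
    radius = height_m / math.tan(math.radians(slope_deg))
    return math.pi * radius ** 2 * height_m / 3.0

floor_area = 16.0                  # m2, usable surface of the granary
heap = cone_volume_m3(1.15, 30.0)  # ~4.78 m3
layer = heap / floor_area          # ~0.30 m, the even-layer equivalent
print(round(heap, 2), round(layer, 2))
```

Note that the base diameter of such a cone is just under 4 m, so a single heap of this size would occupy virtually the whole of a 4 × 4 m floor.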
Although Brinkkemper deals with hulled crops, this raises questions about the preservation of the grain in the centre of the heap. Several small heaps seem more likely: such is the case in the photographs from a modern granary in Heilbronn republished by Schubert (2016, 337). Furthermore, when different crops are stored in the same room, a case often encountered in excavations, they are always stored separately. In our calculation, assuming an even layer with a mean thickness of 30 cm should therefore be a good approximation of real conditions in ancient times. In the light of the evidence presented in Table 3.2, figures above 100 cm for grain storage in bulk appear overestimated. Grain cannot be stored for any length of time in such conditions and too high a heap renders shovelling very impractical, if not impossible. If one accepts such values, one must also assume very high wastage. Estimating wastage during storage and calculating storage capacities are two different issues, but both must be tackled if we are to understand how storage facilities were managed and what proportion of agricultural surplus was actually available.

Table 3.2

Basis of the figure unknown:
- Haverfield and Collingwood 1920: ca. 180 cm (Great Britain). Theoretical value; storage in bins.
- Richmond and McIntyre 1938–1939: ca. 150 cm (Great Britain). Theoretical value; storage in bins.
- Herzig 1946: 300 cm (Windisch, CH). Theoretical value; storage in bins.
- Manning 1975: 150 cm (Great Britain). Theoretical value; storage in bins or bags.
- Galsterer 1990: 125 cm. Maximum value based on contemporary practices.
- Brinkkemper et al. 1995: 30 cm (Netherlands). Mean value for storage in a cone-shaped heap attaining 115 cm in height at the top.
- Bakels 1996: 100 cm.
- Mauné and Paillet 2003: 40 cm (Vareilles, FR).
- Demarez et al. 2010: 30 to 100 cm (Alle, CH). Values probably taken from agronomy treatises.

Architecture:
- Gentry 1976: up to 300 cm (Corbridge, GB). Calculation based on wall thickness; bulk storage. Note that Gentry thought that grain was stored in sacks and not in bulk.
- Papi and Martorella 2007: 300 to 500 cm (Thamusida, MA). Calculation based on wall thickness; bulk storage.

Architecture/agronomy:
- Hermansen 1982: 240 to 400 cm (Ostia and Portus, IT). Hermansen relies on parallels from present-day Canada and adapts the value according to the height of the rooms.
- Boetto et al. 2016: 160 cm (max. 220 cm) (Ostia, IT). The authors do not think that heaps reached 220 cm, which is given as a maximum value.

Agronomy/history:
- Archives, 1684: ca. 93 cm (Abtei Weingarten, DE). Maximum height. Hauptstaatsarchiv Stuttgart B 522 Bü 69 (reference supplied by Lars Blöck).
- Reneaume 1730: ca. 60 to 80 cm (France). For fresh (wet) grain; with time grain settles and the height decreases.
- Duhamel 1768: ca. 45 cm (France). Mean values observed in public granaries.
- Duhamel 1768: ca. 30 cm (France). Very wet grain, 1745 harvest.
- Galiani 1770: 60 to 75 cm (France?). 2.5 feet = maximum height.
- Krünitz 1788: 30 cm (Germany). For wet grain.
- Krünitz 1788: ca. 60 cm (Germany). For dry grain.
- Krünitz 1788: 90 to 120 cm (Germany). Maximum height for dry grain.
- Parmentier 1789: 30 to 45 cm (France).
- Tessier 1793: ca. 45 cm (France). Normal mean height.
- Tessier 1793: ca. 105 cm (France). 1-year-old grain.
- Tessier 1793: ca. 150 cm (France). 2-year-old grain or older.
- Da Gai and Vertecchi 2016: ca. 34 to 68 cm (Venice and Rome, IT). For bulk storage, depending on dryness; 18th c. archives.
- Mathieu de Dombasle 1862: 25 to 40 cm (France). Posthumous; the author died in 1843.
- Heuzé 1876: 60 cm (France). Seems to be a maximum height.
- Desmoulins 1896: 25 to 30 cm (France). For grain just harvested.
- Desmoulins 1896: 50 to 60 cm (France). Mean height, once the grain has dried.
- Diffloth 1907: 15 to 40 cm (France). For fresh grain, depending on dryness. Matterne et al. 1998, 107 cites the 4th ed. (1917) with the same figures.
- Diffloth 1907: up to 60 cm (France). For very dry grain in well-built granaries; mean values are lower (33 cm is given as an example).
- Ernest and Dumont 1921: 80 cm (France). Maximum height for bulk storage with shovelling.
- Sigaut 1981: 100 cm (general). Maximum height for bulk storage with shovelling.
- Malrain 2000: 40 cm (northern France). Discussion with an agronomist.
- Geraci 2015 = Geraci and Marin 2016: 30 and 70 cm (general). Maximum heights for fresh and dry grain; figures taken from mid-20th c. Italian agronomy treatises.

Archaeology:
- Knörzer 1970: 10 to 20 cm (Neuss, DE). 1st c. CE. Naked wheat.
- Gentry 1976: 10 to 60 cm (Ribchester, GB). Military (northern granary).
- Matterne, Yvinec, and Gemehl 1998: 20 to 40 cm (Amiens, FR). 2nd c. CE, urban context. Mainly spelt.
- Bouby 2001: 3 to 15 cm in one spot; 35 cm in another (Crest “Bourbousson”, FR). 3rd c. CE, rural context. Mainly naked wheat, barley in one spot (the publication does not specify if hulled or dehusked).
- Dorow 1826: 8 to 24 cm, but several feet in places (Engers, DE). 4th c. CE, military post. Mainly naked wheat, with rye and barley.
- Matterne 1997: 20 to 40 cm (Compiègne, FR). 10th c. CE. Mainly rye and naked wheat.
- Ruas et al. 2005: 5 cm (L’Isle-Jourdain “La Gravette”, FR). 11th c. CE. Mainly naked wheat; hulled grain stored in chaff.
- Ruas 2003: up to 10 cm (Durfort “le Castlar”, FR). 14th c. CE. Mainly rye and millet; hulled grain at least partially stored in spikelets.
- Borgongino 2006: 20 cm (Herculaneum “Villa dei Papiri”, IT). 1st c. CE. Barley.

André Tchernia noted that a 20–25% wastage rate was generally assumed for granaries of the Roman period, but that the figure was not supported by scientific studies (Tchernia 2011, 245–46 = 2016, 194; see Papi and Martorella 2007, 90 for a 25% wastage rate given without justification). Véronique Matterne arrives at a similar figure for Amiens “ZAC Cathédrale”, with 20% of the stock unfit for human consumption and 90% germinated (Matterne, Yvinec, and Gemehl 1998, 109–10. See also Smith and Kenward 2011 on pests, whose effects were overestimated by Buckland 1978). But we are dealing here with consumer sites, not production sites. Sigaut (1981, 165) wrote that on the whole peasants tend to take better care of their stocks and to reduce wastage; indeed, research on developing countries (mainly from the southern hemisphere) reports post-harvest losses of 10 to 15%, part of which occurs after rural storage (Postharvest Food Losses in Developing Countries 1978; De Lucia and Assenato 1992). Moreover, grain considered wasted by current standards might still have been consumed or used in other ways in Antiquity (Matterne, Yvinec, and Gemehl 1998, 110; Smith and Kenward 2012, 146–48). We are a long way from the 50 to 80% wastage rate considered possible by Thomas Gallant (1991, 97–98) in his study of ancient Greece, which seems completely implausible to me,8 as well as hardly compatible with the low yields he estimates (yields are discussed at greater length in Garnsey, Gallant, and Rathbone 1984. On this subject, see also Halstead 2014, 238–50. More generally, on the productivity of ancient peasants, see Kron 2008). Table 3.3 is an attempt to sum up the different topics discussed in this chapter, and to provide guidelines for calculating the storage capacities of rural granaries. This “model” is put to the test in Chapter 6 of this volume, where the reader will find a discussion of the historical implications of storage capacity estimates for the Rhine-delta area.
Table 3.3
- Type of cereal stored: see Ouzoulias 2006, 173–177 and other relevant literature for the various densities.
- Hulled or husked grain: see Ouzoulias 2006, 173–177 and other relevant literature for the various densities.
- Usable surface area: mean value of 70% of total floor area? To be investigated further.
- Thickness of layer: 20 to 40 cm (mean value of 30 cm).
- Loss during harvest and storage: mean value of 10%? To be investigated further.

As a general rule, it appears that, so far, estimates put forward by scholars for urban and military horrea are much too high. They should certainly not be transposed to rural granaries. Conversely, it must be stressed again that the tentative model presented here is meant for rural sites from the North-Western provinces, and that the focus is on production sites, where fresh grain would enter the granaries each year. This would not necessarily have been the case in urban and military contexts: the wheat found on the Woerden wreck had been stored for some time prior to its shipping. It would have been drier than freshly harvested grain and thus presumably storable in higher heaps. Heaps higher than 30 to 40 cm may also have been more common in granaries used for very short storage periods during transhipments: such may have been the case for the horrea in Portus. To conclude, the model presented here should be critically assessed before being used or adapted to other types of sites or to another region. This is also true of the historical and ethnographic comparanda we use to compensate for the silence of ancient sources. I have been careful to use here mainly French material referring to pre- or early industrial agriculture: although it appears consistent with similar material from Germany and Italy, its general validity for the Western Roman Empire still has to be assessed.
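The guidelines of Table 3.3 reduce to a simple chain of multiplications: net capacity = floor area × usable fraction × layer thickness × bulk density × (1 − loss). The sketch below applies this to a hypothetical granary; the 60 m² floor and the 400 kg/m³ bulk density are placeholder values for illustration only, since real densities must be taken from Ouzoulias 2006 for the actual crop and its form (threshed grain is far denser than grain in spikelets).

```python
# Sketch of the Table 3.3 guidelines for estimating net storage capacity.
# The floor area (60 m2) and bulk density (400 kg/m3) are placeholder values;
# real densities depend on the crop and on whether it is stored threshed,
# with chaff, or in spikelets (see Ouzoulias 2006, 173-177).

def storage_capacity_kg(floor_area_m2: float,
                        density_kg_m3: float,
                        usable_fraction: float = 0.70,  # ~70% of floor usable
                        layer_m: float = 0.30,          # mean layer of 30 cm
                        loss_fraction: float = 0.10) -> float:  # ~10% losses
    """Net grain weight (kg) available after harvest and storage losses."""
    gross_kg = floor_area_m2 * usable_fraction * layer_m * density_kg_m3
    return gross_kg * (1.0 - loss_fraction)

# A hypothetical 60 m2 granary floor:
print(round(storage_capacity_kg(60.0, 400.0)))  # 4536 kg net
```

Running the same function with the appropriate densities for spikelets versus threshed grain makes immediately clear how strongly the form of storage conditions any capacity estimate.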
Results of a new study by the University of Seattle and the Fred Hutchinson Cancer Research Center suggest that women retain and carry the living DNA of every man with whom they have had sexual intercourse. For some, that’s a lot of DNA. This startling information was discovered unintentionally whilst researchers were attempting to determine if women who had been pregnant with a son might be more predisposed to specific neurological diseases. The study took a turn when researchers began to realize the complexity of the female brain. According to the study, female brains have been found to harbor “male microchimerism”: the presence of male DNA that is genetically different from the rest of the cells that make up the woman. “63% of the females (37 of 59) tested harbored male microchimerism in the brain. Male microchimerism was present in multiple brain regions,” according to the study. So…where did these cells come from? Researchers first hypothesized the cells could stem from the woman’s father’s DNA, though this isn’t possible, as his DNA combines with the mother’s to create a child’s unique DNA. Their second hypothesis was that the DNA might be remnants of a pregnancy. This, however, was later ruled out by the presence of male DNA in the brain of a woman who had never been pregnant. In an effort to gain a better understanding of their results before releasing information to the public, scientists buried their findings in several sub-studies and articles, but an analytical review of their investigation clearly shows where the presence of male DNA cells in the female brain stems from. “CONCLUSIONS: Male microchimerism was not infrequent in women without sons. Besides known pregnancies, other possible sources of male microchimerism include unrecognized spontaneous abortion, vanished male twin, an older brother transferred by the maternal circulation, or SEXUAL INTERCOURSE.
Male microchimerism was significantly more frequent and levels were higher in women with induced abortion than in women with other pregnancy histories. Further studies are needed to determine specific origins of male microchimerism in women.” The following are possible sources of the male DNA living in women’s brains, according to the scientists:
- unknown abortions
- a male twin that vanishes
- an older brother transferred by the maternal circulation
- sexual intercourse
Given the fact that 63% of women have male DNA living in their brain, the first three possible sources would only apply to a small percentage of women and could not possibly make up the 63%. This leaves the fourth option, which is more common. Hence, we can statistically infer that the source of male DNA in the female brain is sexual intercourse. This means that a woman absorbs and retains spermatozoa from every male partner with whom she has sexual intercourse. Scientists supported this finding with the results of autopsies of elderly women, which revealed that some had been carrying male DNA in their brains for more than 50 years. Sperm are living cells. When a sperm cell is introduced into the body, it swims around until it crashes into a wall, where it attaches and burrows itself into the flesh. If it enters the mouth, it travels to the nasal passages, inner ear, and behind the eyes. Next, it enters the bloodstream and collects in the brain and spine. This sounds like something out of a sci-fi movie, as the sperm becomes a part of you and there’s no getting rid of it. This is just the beginning of truly understanding the full power and effects of sexual intercourse. Ladies, if you don’t want their DNA, don’t do it!
Concepts of dental operator seating have changed greatly since seated, 4-handed dentistry was first embraced by the profession in the 1960s. Early designs of dental stools were simple (often flat, round seat pans) with minimal adjustability and a one-size-fits-all mentality. Although this move from standing to sitting dentistry promised to reduce the high incidence of work-related pain among dentists, statistics indicate that this goal has not yet been realized.1-5 In an effort to improve working comfort and reduce musculoskeletal disorders among dental operators, manufacturers have promoted multiple design concepts to the marketplace. Although it must be understood that the prevention of work-related pain and musculoskeletal disorders (MSDs) is a multifactorial issue,6 the choice of proper seating is critical and can either improve or worsen a clinician’s comfort and level of musculoskeletal health. It is the opinion of the authors that no ideal chair design exists for dental operators. We have also observed multiple instances of operators (dentists, hygienists, and assistants) using otherwise well-designed ergonomic chairs and stools improperly. This is often due to lack of knowledge as to how to adjust ergonomically designed chairs correctly, combined with a lack of understanding as to how these adjustments impact the biomechanics of the operators and their musculoskeletal health. An important aspect of choosing and optimally adjusting chairs or stools for any individual operator is to understand the effects that a person’s height has on that choice. While it is obvious that people purchase shoes or clothing that fit properly, it seems to be far less so for dental professionals when choosing the chairs that they spend a significant portion of their careers upon. 
Although many manufacturers provide options to accommodate different operators’ heights and body sizes, many chairs are sold with standard features that have been designed around a statistically average-sized person (usually a man). These features may fit and support some operators well, but can cause multiple problems for others.

CHAIRS VERSUS STOOLS

Some confusion exists in the dental profession as to what differentiates a dental stool from a chair. Technically, a stool does not have a back or armrests, so the majority of seating available to dentists today consists of chairs. Perhaps in an effort to avoid confusion, dental manufacturers frequently refer to patient seating as chairs and doctor seating as stools. For the purposes of this article, we will distinguish between the two, considering both chair and stool designs for the dental operator.

SEAT TILT AND LOW-BACK PAIN

Figure 1. Sitting with thighs parallel to the floor promotes flattening of the lumbar spine.

Most dentists and hygienists were taught in school to sit with thighs parallel to the floor, or hips at a 90º angle. This paradigm for seated work has been widely accepted for generations and may be due in part to the design of early operator chairs, which featured flat, nonadjustable seats. The nature of dentistry makes intermittent forward leaning virtually unavoidable. This combination of thighs parallel to the floor with forward leaning causes the pelvis to roll backward, promoting flattening of the low-back curve7,8 (Figure 1). Research shows that this flattening of the lumbar curve has detrimental effects upon both the spinal musculature and discs. Muscular activity in the lower back increases, which can cause ischemia and painful trigger points. Pressure within the disc also increases, which can lead to premature disc degeneration.
The research therefore supports the concept of positioning the hips higher than the knees, allowing for an increased hip angle with lower associated low-back muscle activity and disc pressure.7,10

Figure 2. A tilting seat helps maintain the low back curve, reducing muscle and disc pressures.

Chairs with a tilting-seat feature, as well as saddle-style stools, enable the hip angle to open to greater than 100º, which helps maintain the low back curve, decreases disc pressure, enables closer positioning to the patient, and may help reduce low back pain7,10 (Figure 2). It is essential for operators to maintain, as best as possible, the normal curvature of the spine while working.

TALL AND SHORT OPERATORS

In general, tall operators with long trunks tend to have a higher incidence of low-back pain. This is partly due to gravitational forces acting on a longer lever arm when the operator assumes any degree of forward leaning. On the other hand, many dentists with shorter torsos tend to have neck and shoulder pain due to arm elevation when the patient is placed at lap level. In the past, operator chairs were often designed for the average man. This trend is changing in response to the evolving demographics of the dental profession, in which approximately 70% of operators (dentists and hygienists combined) are now women.11 It is common, however, for stools and chairs to be sold “as is,” offering standard features with little regard for the special needs of individual operators. Fitting a standard seat with a short or tall cylinder may allow operators to maximize the ergonomic benefits of the chair. Therefore, special considerations are needed for tall and short operators.

Considerations for Taller Operators (Generally 5’10” and Over)

Figure 3. Pivoting forward properly from the hips allows the operator to maintain the 3 primary spinal curves.

Tall dentists with long torsos are most prone to flattening the low back and should pay particular attention to strengthening the transverse abdominal muscles. These muscles can be used throughout the day to regularly stabilize the low back curve,12,13 especially with forward leaning. The following “operator pivot” exercise recruits these muscles to stabilize the spine (Figure 3). While this exercise is especially helpful for tall dentists with long torsos, all operators can benefit by incorporating it into their daily routine:
•Sit tall on the stool with a slight curve in the low back.
•Assume an operating position with the arms.
•Exhale, and actively (with your muscles) pull your navel toward your spine. Your transverse abdominal muscles are now helping maintain your low back curve. (One common mistake is to suck in one’s breath to pull the spine toward the navel. You should still be able to talk, breathe, and move while holding this contraction.)
•Using the hips as a fulcrum, pivot forward from the hips, maintaining the abdominal contraction throughout the exercise.
•Strive to make this exercise a habit throughout the workday anytime you must leave a balanced sitting posture.
•A tilted seat pan or saddle-stool design will facilitate pivoting from the hips, making this exercise easier.

Table. Chair Height and Adjustment (Figure 4). To determine if you need a tall or short cylinder, you must first adjust the chair: (1) Sit all the way back on the seat. (2) Adjust the height of the backrest to nestle in your low back curve. (3) Move the backrest away from your back. (4) Tilt the seat slightly forward. (5) Adjust the height with feet flat on the floor so your thighs slope slightly downward. (6) Sit upright with a slight curve in your low back. (7) Bring the backrest forward to contact the curve of your low back snugly.

Figure 4. Proper adjustment of an ergonomic chair. (Chair courtesy of Orascoptic Research.)

Figure 5. A tall cylinder helps taller operators position themselves correctly.

Assess the height of the chair (Table) to evaluate whether a tall cylinder is required (Figure 4). If, at the highest height adjustment, your thighs are still parallel to the floor or your lower legs must be tucked underneath the chair, consider requesting a taller cylinder from the manufacturer (Figure 5). Often, this is simply the cylinder from the same model’s assisting stool. Longer-legged operators should also assess the seat pan depths of various chair and stool designs, as operator seat pan depths may range from 14 to 17 inches. The thighs should be well supported with the feet flat on the floor. Taller operators should always raise the patient chair to a height that promotes their own best posture while conversing with the patient or consulting chairside. This allows a more neutral posture during these times. A few minutes here and there add up over the course of a 20- to 30-year career in dentistry and can exacerbate existing musculoskeletal problems. Magnifying scopes are an important ergonomic consideration for all dentists, but especially for taller operators. It is nearly impossible for a tall dentist with a long torso to deliver quality dentistry with the patient at lap level and still sit upright. Make sure the working distance is measured in your own operatory (eye to working surface) with arms relaxed at your sides.

Considerations for Shorter Operators (Generally 5’4” and Under)

It is common to see shorter dentists with legs positioned under the patient, perched on the edge of the chair, arms abducted away from the body, and neck twisted to gain better visibility. This is commonly due to a positioning challenge related to the operator’s smaller stature.
Even with the patient positioned correctly at lap level, the thickness of the headrest combined with the height of the patient’s head may cause some operators to elevate the arms, a contributing factor to neck pain. The challenge is to position the patient low enough to operate with the arms in a relaxed, neutral posture. Opening the hip angle, utilizing a shallower seat pan, and using a shorter cylinder are considerations for these dentists.

Figure 6. A saddle-style stool opens the hip angle, allowing lower positioning of the patient. (Bambach saddle stool courtesy of Hager Worldwide.)

The greatest hip angle may be obtained with the use of a saddle-style stool (Figure 6). This places the pelvis in a position that facilitates maintenance of the low back curve. By opening the hip angle to approximately 140º, the stool allows for lower positioning of the patient and closer positioning of the operator to the patient. The seat pan of a chair should be shallow (14 to 15 inches) and should not touch the backs of the knees when seated all the way back in the chair. Assess the height of the chair (Table) to see if a shorter cylinder is required. You may need to request a shorter cylinder if, at the lowest adjustment, any of the following occurs:
•You cannot sit all the way back on the seat while still easily fitting 2 to 3 fingers between the edge of the seat and the back of your knee.
•You do not feel weight evenly distributed through both legs and your buttocks.
•You feel you have to perch on the edge of the chair.
Armrests are helpful in reducing neck and low-back strain for operators of all heights.14,15 Operators may consider working with different styles of chairs and stools in different operatories, as well as intermittently standing, to spread the workload among different muscle groups throughout the day. Dental operators must become more educated regarding the impact of their seating choices in the operatory.
Over a 30-year career, dentists and hygienists may spend upwards of 60,000 hours chairside, or more than 1,800 days. Proper selection of chair/stool styles depends on many factors, and operators should try a stool or chair before purchasing it to assess how it fits their specific needs. However, even a well-designed chair or stool can detrimentally impact one’s musculoskeletal health if it is improperly equipped or adjusted for an individual’s body stature. The preceding guidelines should aid operators in their selection and adjustment of appropriate seating for the dental clinic.

1. Shugars D, Miller D, Williams D, et al. Musculoskeletal pain among general dentists. Gen Dent. 1987;35:272-276.
2. Rundcrantz BL, Johnsson B, Moritz U. Cervical pain and discomfort among dentists. Epidemiological, clinical and therapeutic aspects. Part 1. A survey of pain and discomfort. Swed Dent J. 1990;14:71-80.
3. Augustson TE, Morken T. Musculoskeletal problems among dental health personnel. A survey of the public dental health services in Hordaland. Tidsskr Nor Laegeforen. 1996;116:2776-2780.
4. Finsen L, Christensen H, Bakke M. Musculoskeletal disorders among dentists and variation in dental work. Appl Ergon. 1998;29:119-125.
5. Chowanadisai S, Kukiattrakoon B, Yapong B, et al. Occupational health problems of dentists in southern Thailand. Int Dent J. 2000;50:36-40.
6. Valachi B, Valachi K. Mechanisms leading to musculoskeletal disorders in dentistry. J Am Dent Assoc. 2003;134:1344-1350.
7. Harrison DD, Harrison SO, Croft AC, et al. Sitting biomechanics part 1: review of the literature. J Manipulative Physiol Ther. 1999;22(9):594-609.
8. Mandal AC. The Seated Man: Homo Sedens. 3rd ed. Klampenborg, Denmark: Dafnia Publications; 1985:28-29.
9. Karwowski W, Marras WS. The Occupational Ergonomics Handbook. Boca Raton, Fla: CRC Press; 1999:69-170,175,285,585-600,1134.
10. Hedman TP, Fernie GR. Mechanical response of the lumbar spine to seated postural loads. Spine. 1997;22:734-743.
11.
White SW. Ergonomics…How does dentistry fit you? Women’s Dent J. 2003;1:58-62.
12. Hodges PW, Richardson CA. Inefficient muscular stabilization of the lumbar spine associated with low back pain. A motor control evaluation of transversus abdominis. Spine. 1996;21:2640-2650.
13. Hides JA, Richardson CA, Jull GA. Multifidus muscle recovery is not automatic after resolution of acute, first-episode low back pain. Spine. 1996;21:2763-2769.
14. Parsell DE, Weber MD, Anderson BC, et al. Evaluation of ergonomic dental stools through clinical simulation. Gen Dent. 2000;48:440-444.
15. Sahrmann S. Diagnosis and Treatment of Movement Impairment Syndromes. Philadelphia, Pa: Mosby; 2001.
Presentation on theme: "Production and Operations Management Systems" — Presentation transcript:

1. Production and Operations Management Systems
Chapter 10: Long-Term Planning (Facilities, Location and Layout)
Sushil K. Gupta, Martin K. Starr, 2014

2. After reading this chapter, you should be able to:
Explain the four distinct parts of facilities planning.
Discuss who is responsible for doing facilities planning.
Describe the nature of facilities planning models.
Explain why planning for the design of facilities requires the systems approach.
Describe the application of the transportation model for location decisions.
Apply the transportation model to solve location decision problems.

3. After reading this chapter, you should be able to (continued):
Determine the relative advantages of renting, buying, or building.
Show how to use scoring models for facility selection decisions.
Describe what doing facility layout entails.
Explain how job design and workplace layout interact.
Evaluate the use of quantitative layout rules (algorithms).
Use load-distance matrices to design and evaluate layouts.
Discuss the use of heuristics to improve layouts of plants and offices.

4. Facilities Planning
Facilities are the plant and the office within which P/OM does its work. There are four main components of facilities planning, and they strongly interact with each other:
Location of the plant, branch, or warehouse.
Specific structure and site.
Layout.
Furniture, lighting, decorative features, and equipment.
In the global world of international production systems, international markets, and rapid technological transfers, facilities planning requires a team effort.
5. Location
Six factors that can affect location decisions are:
Process inputs.
Process outputs.
Process requirements.
Personal preferences.
Governmental issues.
Site and plant availabilities.
Service industries locate close to their customers. Extractors like to be close to their raw materials. Fabricators like to be close to their raw materials and customers. Assembly plants try to keep their component suppliers close by. The best location is related to the function of the facility and the characteristics of its products and services.

6. Models for Facility Location
Location decision models use measures of costs and preferences. These models include:
Transportation models.
Scoring models, which use combinations of costs and preferences.
Breakeven analysis.
Center of gravity model (not discussed in this presentation).
Columbus, Ohio, is a popular distribution center because a circle around it within a 600-mile radius encloses a large percentage of U.S. retail sales. 61% of the US population and 63% of US manufacturing facilities lie within 600 miles of Columbus, Ohio. Such a large market cannot be reached from any other state (www.conway.com/oh/distribution/ohiobody.htm).

7. Structure and Site Selection
Location, site, and structure decisions can be made both sequentially and simultaneously. Work configuration influences structure decisions:
Flow shop.
Job shop.
Service industries are often associated with particular kinds and shapes of structures. Airports, hospitals, theaters, and educational institutions typify the site-structure demands for service specifics.
8. Structure and Site Selection (continued)
The facility elements to be considered include:
Is there enough floor space?
Are the aisles wide enough?
How many stories are desirable?
Is the ceiling high enough?
Are skylights in the roof useful? Roof shapes permit a degree of control over illumination, temperature, and ventilation.
What are the maintenance requirements for roofs?
External appearance and internal appearance.
Company services should be listed: capacities of parking lots, cafeterias, medical emergency facilities, and male and female restrooms (in the right proportions) must be supplied.
Adequate fire and police protection must be defined.
Rail sidings, road access, and ship-docking facilities should be specified in the detailed facility-factor analysis.
Access to the Internet and various telecom services is no longer considered an extra advantage; it is a necessity in almost all cases.

9. Rent, Buy or Build
The costs of land, construction, rental rates, and existing structures have to be compared with a suitable model. Location, structure, and site come as a package of tangible and intangible conditions that carry the costs listed below:
The opportunity cost of not relocating.
The costs of location and relocation studies.
The costs of moving, which may include temporary production-stoppage costs.
The cost of land, which is often an investment. Renting, buying, or building has different tax consequences.
The costs of changing lead times for incoming materials and outgoing products as a result of different locations.
Power and water costs, which differ markedly according to location.
Value-added taxes (VAT), which are used in many European countries. VAT is proportional to the value of manufactured goods.
10. Rent, Buy or Build (continued)
Insurance rules and costs are location sensitive.
Labor scarcities can develop that carry intangible costs.
Union-management cooperation is an intangible cost factor.
The intangible cost of community discord (or the benefit of community harmony) can be significant.
Legal fees and other costs of specialists and consultants are location sensitive, especially for small- and medium-sized organizations.
Workmen’s compensation payments and unemployment insurance costs differ by location.
The costs of waste disposal, pollution and smoke control, noise abatement, and other nuisance-prevention regulations differ by location.

11. Rent, Buy or Build (continued)
Compliance with environmental protection rules differs by location (especially across different countries).
The costs of damage caused by natural phenomena are affected by location, as are the costs of reducing disaster probabilities, such as using raised construction to reduce flood-damage risk.
Normal weather conditions produce costs associated with location. Extreme weather conditions cause facilities to deteriorate faster than normal weather conditions do.
Scoring models provide a satisfying means for organizing and combining estimates and hard numbers.

12. Location – Scoring Models
Location Factors and Weights
The scoring model of facility location assigns a relative weight to each factor that affects the location decision. The objective is to choose the location considering all relevant factors. A six-step process for using the scoring model with the hypothetical data given in the table on the RHS is described below.
Step 1: List all factors that affect the location decision.
Step 2: Assign a weight to each factor.
Step 3: Identify alternative locations.
Chile, Mexico, Honduras and Brazil are identified as the potential locations for this problem.

Factor Name                           Weight
Labor Productivity                    0.15
Nearness to Markets                   0.18
Nearness to Sources of Raw Material   0.25
Infrastructure Facilities             0.12
Transportation Facilities             0.08
Power Availability                    0.08
Political Climate                     0.03
Labor Unions                          0.02
Labor Cost                            0.04
Material Cost                         0.05
Total                                 1.00

13. Location – Scoring Models (continued)
Step 4: Each alternative is evaluated and given a score on a ten-point scale (it could be a 100-point scale) for each factor.
Step 5: The total score for each location is calculated by multiplying the weight of each factor by the points it earned (weight x score) and then adding these numbers over all factors.
Step 6: The location with the highest total score is selected.
For this problem, the scores are: Chile (5.31), Mexico (5.51), Honduras (4.64) and Brazil (6.52). Therefore, Brazil is the most attractive location based on these factors.

14. Location – Scoring Models (continued)
Evaluation of Various Location Sites
The score for each location on each factor is given in the slide’s table. For example, for labor productivity, Chile scored 8 points, Mexico scored 7 points, Honduras scored 3 points and Brazil scored 6 points.
The calculation of the total score for Chile is illustrated below:
Score for Chile = 5.31 = (0.15 x 8) + (0.18 x 4) + (0.25 x 3) + (0.12 x 7) + (0.08 x 6) + (0.08 x 5) + (0.03 x 9) + (0.02 x 3) + (0.04 x 6) + (0.05 x 7).

15. Location - Transportation Model
Transportation costs include the combined costs of moving raw materials to the plant and of
transporting finished goods from the plant to one or more warehouses.
Example: A doll manufacturer has identified Missouri and Ohio as the potential states for locating its manufacturing plant. Several sites in the two regions have been identified, and two cities have been chosen as candidates: St. Louis, Missouri, and Columbus, Ohio. Real-estate costs are about equal in both. The problem is to select one of the two cities. The decision will be based on the shipping (transportation) costs.

16. Location - Transportation Model (continued)
Origins and Destinations
In transportation model terminology, shippers are called sources or origins; those receiving shipments are called destinations. Sources of components are the origins, and the two factories (located at Columbus and St. Louis) are the destinations. In turn, for finished products, the two factories are the origins and the market is the destination. The configuration of origins and destinations is shown in the figure below.

17. Location - Transportation Model (continued)
The average costs of shipping (also known as the cost of distribution or cost of transportation) are as follows:
From sources of the components to:
Columbus, Ohio: $6 per production unit.
St. Louis, Missouri: $3 per production unit.
From production plants to market:
Columbus to the market: $2 per unit.
St. Louis to the market: $4 per unit.
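Looking back at the scoring model of slides 12 to 14, the weighted-sum calculation (steps 4 to 6) can be sketched in a few lines of Python. The weights and Chile's per-factor scores are taken from the slide tables; the function and variable names are my own:

```python
# Weighted-factor scoring model for facility location (slides 12-14).
# Weights and Chile's ten-point scores come from the slide tables.

weights = {
    "Labor Productivity": 0.15,
    "Nearness to Markets": 0.18,
    "Nearness to Sources of Raw Material": 0.25,
    "Infrastructure Facilities": 0.12,
    "Transportation Facilities": 0.08,
    "Power Availability": 0.08,
    "Political Climate": 0.03,
    "Labor Unions": 0.02,
    "Labor Cost": 0.04,
    "Material Cost": 0.05,
}

chile_scores = {
    "Labor Productivity": 8,
    "Nearness to Markets": 4,
    "Nearness to Sources of Raw Material": 3,
    "Infrastructure Facilities": 7,
    "Transportation Facilities": 6,
    "Power Availability": 5,
    "Political Climate": 9,
    "Labor Unions": 3,
    "Labor Cost": 6,
    "Material Cost": 7,
}

def weighted_score(weights, scores):
    """Step 5: sum of (weight x score) over all factors."""
    return sum(weights[f] * scores[f] for f in weights)

print(round(weighted_score(weights, chile_scores), 2))  # 5.31
```

Applying the same function to the score columns of the other candidates reproduces the totals quoted on slide 13 (Mexico 5.51, Honduras 4.64, Brazil 6.52).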
19Rukna Auto PartsRukna Auto Parts manufacturing company has three plants located in Miami, Tempe and Columbus.There are four distributors MKG, Inc., ASN, Inc., GMZ, Inc. and Akla Inc.Due to expected increase in demand, Rukna wants to add one more plant.Two alternative locations under considerations are Dallas and Chicago.Table below shows the capacity of the existing and proposed plants, demands at the four distributors and the transportation cost per unit from a plant to a distributor.MKG, Inc.ASN, Inc.GMZ, Inc.Akla, Inc.CapacityMiami$1.00$3.00$3.50$1.5020,000Tempe$5.00$1.75$2.25$4.0040,000Columbus$2.5030,000Dallas (proposed)$3.7518,000Chicago (proposed)$3.20$2.0$4.35Demand with 20% increase16,20020,16041,640 20Rukna Auto Parts (continued) The objective is to minimize the total cost of transportation.The problem will be solved in two parts – once for each new site.The problems are defined as follows.Problem 1: Find the optimal distribution strategy and the corresponding cost for shipping auto parts from Miami, Tempe, Columbus and Chicago to the four distributors.Problem 2: Find the optimal distribution strategy and the corresponding cost for shipping auto parts from Miami, Tempe, Columbus and Dallas to the four distributors.The transportation model (TM) of linear programming can be used to solve this problem.The TM model finds the optimal distribution strategy that specifies the number of units to be shipped from each plant to each distributor so as to minimize the total cost of transportation. 
21Rukna Auto Parts (continued) The Table below shows the optimal solution where Chicago is chosen as the new plant’s site.This table shows the number of units to be shipped from each plant to each distributor.The total transportation cost is $254,830 which is obtained by first multiplying the total number of units shipped and the cost of transportation per unit for each combination of the plant and distributor.These costs are then added together to get the total cost.For example, the transportation cost between Tempe and Akla, Inc. is= $95,200 = 23,800 (units shipped) x $ 4.00 (unit transportation cost).Optimal Distribution Strategy – Chicago PlantMKG, Inc.ASN, IncGMZ, Inc.Akla, Inc.Total ShippedMiami2,160-17,84020,000Tempe16,20023,80040,000Columbus9,84020,16030,000Chicago18,000Total41,640108,000 22Rukna Auto Parts (continued) If Dallas is chosen as the new site, the optimal quantities to be shipped from each plant to each distributor are given in the table below.The total cost of distribution in this case $ 219,770.Since adding Dallas to the current set of existing plants gives a smaller total distribution cost (as compared to Chicago), Dallas is the chosen site.Optimal Distribution Strategy – DallasMKG, Inc.ASN, IncGMZ, Inc.Akla, Inc.Total ShippedMiami20,000-Tempe16,20023,80040,000Columbus9,84020,16030,000Dallas16017,84018,000Total41,640108,000 23Location Decisions Using Breakeven Models How many units of throughput need to be sold in order to recover costs (variable and fixed) and breakeven? Variable (Direct) Costs Variable (direct) costs per unit are the costs of input resources that tend to be fully chargeable and directly attributable to each unit of the product. Total variable costs, TVC = C*Q, where C is variable cost per unit and Q is the number of units produced. Fixed (indirect) Costs Fixed costs have to be paid, whether one unit is made or thousands. These costs are bundled together as overhead costs. 
Revenue Total revenue, TR = P*Q, is the volume Q multiplied by the price per unit P. 24Location Decisions Using Breakeven Models (continued) Example:Musuk Spices Company (MSC), Delhi, India, plans to set up a new plant at one of the following two locations: Bhopal and Agra in India.The fixed costs per year will be $ 450,000 and $ 300,000 per year for Bhopal and Agra respectively.The variable costs per pound are expected to be $ 10/lb. for Bhopal and $ 14/lb. for Agra respectively.The selling price is expected to be $ 30/lb.Breakeven Point (BEP) = Fixed Cost/(Selling Price – Variable Cost)BEP (Bhopal) = 450,000/(30-10) = 22,500 lbs.BEP (Agra) = 300,000/(30- 14) = 18,750 lbs. 25Location Decisions Using Breakeven Models (continued) Bhopal will generate profits only if the volume of demand is more than 22,500 lbs.Agra will start generating profits if the volume is more than 18,750.If demand is likely to be about 20,000, then Agra is chosen.Say demand will exceed 25,000, which plant should be selected?Find the point of indifference.Find revenue if Q lbs. of spices are produced and sold by each plant.Revenue (Bhopal) = Q (30-10) - 450,000 = 20 Q - 450,000Revenue (Agra) = Q (30-14) - 300,000 = 16 Q - 300,000Equate the two revenues to find the point of indifference – the value of Q.Q = (450,000 – 300,000)/ (20-16) = 150,000/4 = 37,500. 26Location Decisions Using Breakeven Models (continued) The indifference point Q = 37,500. Plant at Bhopal is more desirable if the expected volume of sales is more than 37,500. If the expected sales are less than 37,500 then Agra is more desirable. Either one of them can be chosen if the sales are exactly 37,500. Forecasts have an important role to play in this example. 
27Facilities LayoutLayout is the physical arrangement of facilities within a manufacturing plant or a service facility.The layout of a plant/service facility specifies where various machines, equipment and people will be placed.Layout affects the productivity and costs of transportation (materials handling) within the plant.Layout is an interior design problem that strongly interacts with structure and specific site selection and equipment choice. 28Facilities Layout (continued) Opportunity Costs for Layout Improvement Proper workplace layout design can provide:Product and process quality improvements (QI)Throughput and cost benefit productivity improvements (PI)Health benefits (HB) for employees.The layout design improvement should be made if:CPI < OC (QI + PI + HB), where CPI = Cost of layout plan improvement.OC = Opportunity costs incurred for not having used the best possible layout.OC(QI) = Opportunity costs for quality improvements.OC (PI) = Opportunity costs for improved productivityOC(HB) = Opportunity costs for health benefit savings 29Facilities Layout (continued) There are at least the following five basic types of layouts:Job shop process layouts.Product-oriented (Flow shop) layout. Cellular layout.Group technology layout. Hybrid (mixed layouts) - combinations of product and process layoutsChange in technology and even in purpose should lead to reexamination of layout decision. 30Layout Criteria Seven measures of layout effectiveness include the following:Capacity—throughput rate. Goal: Maximize total output volumes and throughput rates.Balance—for perfect balance, the throughput rates of consecutive operations are 100% aligned and synchronized. Goal: Perfect balance.Amount of investment and operating costs. Goal: Minimum expenditures.Flexibility to change layouts. Goal: Maximize ease of change.Amount of work in process (WIP). 
Goal: Minimize units of inventory.Distance that parts travel; saving an inch traveled thousands of times per day sums to sizeable savings. Goal: Minimize total travel time and distance.Storage for WIP and how much handling equipment is required to move parts from one place in the facility to another. Goal: Minimize space used for storage and minimize moving equipment. 31Layout ModelsFloor Plan Models (drawings) are graphic methods of trial and error that are used by interior decorators.Load-distance models examine alternative layout plans to minimize cost of moving material on the shop floor.Some considerations while designing layouts include:Job shops and the batch production environment are subject to major changes in the product mix.A few order types may dominate the job shop that may require a mix of product layouts (for dominating jobs) with process layouts for other jobs.If the character of the batch work changes a great deal over time, it is best to go for the most general form of process layout.Flexibility is desirable but expensive and disruptive to keep moving equipment around the plant.Modular office layouts have remarkable flexibility for making quick changes. 32Load-Distance ModelsLoad-distance models examine alternative layout plans in terms of the frequency with which certain paths are used.Usually, the highest frequency paths are assigned the shortest plant floor path distances to travel.The objective is to minimize the total unit distances traveled.The physical space is divided into areas that conform to floor layouts including different work areas, stairs, elevators, rest rooms, etc.Distance between departments is calculated and is included in a distance matrix.The amount of materials or the number of trips between the various work centers that different jobs entail over a period of time say monthly, quarterly etc. needs to be determined. 
33Load-Distance Models (continued) Example:The floor space locations are designated as A, B, C, D, and E in the following figure.There are five work centers designated as 1, 2, 3, 4, and 5.Problem: Determine which work center is to be assigned to which location so that the total distance traveled is minimized.One possible assignment is: A(1), B(2), C(3), D(4) and E(5) (See Layout 1 below).Layout-1 (with Areas A, B, C, D, and E, and Work Centers 1 through 5 assigned to the Areas) 34Distance MatrixThe average distance that must be traveled between areas A, B, C, D, and E are shown in the table below.This table is called a distance matrix.This matrix is symmetric so the distance from A to D (32 feet) is the same as the distance from D to A.However, symmetry does not always hold because of one-way passages, escalators, conveyor belts, and gravity feed delivery systems.Distance Matrix – Distances in Feet between LocationsTOABCDEFROM1020324016181215 35Load MatrixThe Table below gives the number of units that move between work centers. For example, 100 units move from work center 1 to work center 2 and 60 units move from work center 1 to work center 3.This matrix is not symmetrical.The word “Load” is a generic term. 
It may represent the number of units moved, the weight or volume of the product that is moved, or the number of trips that are made between departments. A discussion of finding the load matrix is included later in the presentation.

Load Matrix (number of units being moved between work centers):

FROM \ TO    1     2     3     4     5
1            0   100    60    80    20
2           40     0    50    10    90
3           80    90     0    60    30
4          120    10    40     0    70
5          110     5     5    30     0

36. Load-Distance Matrix – Layout 1

The load and distance matrices are combined into a load-distance matrix, given in the table below. In the load-distance matrix, the distance and the load for each combination of work center and location area are multiplied. The total number of unit-feet traveled in this layout is 23,025.

For example, 100 units move from work center 1 (located at A) to work center 2 (located at B) over a distance of 10 feet (the distance between locations A and B); look for the cell combination A(1) (the row) and B(2) (the column heading) in the table below. The total distance traveled by all items that move from A(1) to B(2) is therefore 1,000 unit-feet. Similarly, the total distances traveled from A(1) to C(3), A(1) to D(4), and A(1) to E(5) are 1,200, 2,560, and 800 unit-feet respectively. In this way the distance traveled between each combination of work centers can be calculated.

Load-Distance Matrix for Layout-1 (each cell is load x distance, in unit-feet):

FROM \ TO   A(1)               B(2)               C(3)             D(4)             E(5)
A(1)        0 = 0 x 0          1,000 = 100 x 10   1,200 = 60 x 20  2,560 = 80 x 32  800 = 20 x 40
B(2)        400 = 40 x 10      0 = 0 x 0          800 = 50 x 16    180 = 10 x 18    1,800 = 90 x 20
C(3)        1,600 = 80 x 20    1,440 = 90 x 16    0 = 0 x 0        720 = 60 x 12    450 = 30 x 15
D(4)        3,840 = 120 x 32   180 = 10 x 18      480 = 40 x 12    0 = 0 x 0        700 = 70 x 10
E(5)        4,400 = 110 x 40   100 = 5 x 20       75 = 5 x 15      300 = 30 x 10    0 = 0 x 0

37. Load-Distance Matrix – Layout 2

Is Layout-1 the best layout? There are 120 (5!) possible assignments of five work centers to five locations. The table below shows the load-distance matrix for the following assignment: A(1), B(4), C(5), D(3), and E(2).
Call it Layout-2. The total number of unit-feet traveled in this layout is 21,725, a decrease of 1,300 (a 6% improvement). Evaluating all possible layouts (120 in this case) is often not feasible, so heuristic rules are used to develop a reasonably good (not necessarily optimal) layout.

Load-Distance Matrix for Layout-2 (unit-feet):

FROM \ TO   A(1)    B(4)    C(5)    D(3)    E(2)
A(1)           0     800     400   1,920   4,000
B(4)       1,200       0   1,120     720     200
C(5)       2,200     480       0      60      75
D(3)       2,560   1,080     360       0     900
E(2)       1,600     200   1,350     500       0

38. Heuristic Rules

Heuristic methods are logical, sensible, and clever rules for finding good solutions to complex problems. Two heuristics (rules of thumb) that can be used for improving a layout are:

- Assign work centers with large unit flow rates between them to locations as close together as possible.
- Assign work centers with small unit flow rates between them to locations as distant as possible.

These rules were used in moving from the original layout to the revised layout (Layout-2). Trial and error with the improved matrix can be used to test further shifts.

39. Finding the Load Matrix

To find the load matrix, the processing sequence of each job and the number of units produced (the load) need to be specified.
See the table below. There are five jobs and six work centers (A, B, C, D, E, and F). For example, 200 units of Job 2 are to be produced, and these units move through the following sequence of work centers: C-A-B-D-B-E-F-D. Instead of the number of units, the load could have been the number of trips between the various work centers.

Load and Processing Sequence for a Five-Job Problem:

Job     Load   Processing sequence (operations 1 through 8)
Job 1   100    A-B-C-D-F-E
Job 2   200    C-A-B-D-B-E-F-D
Job 3   300
Job 4    50
Job 5   150

40. Finding the Load Matrix (continued)

The load matrix for this problem is given in the table below. Consider, for example, work centers A and B. Movement from A to B occurs for the following jobs:

- Job 1 (100 units) goes to work center B after its first operation, in work center A.
- Job 2 (200 units) goes to work center B after its second operation, in work center A.
- Job 3 (300 units) goes to work center B after its third operation, in work center A.
- Job 5 (150 units) goes to work center B after its first operation, in work center A.
- Job 4 never goes from work center A to B.

Therefore, the total number of units that go from work center A to work center B is 750 (= 100 + 200 + 300 + 150).

Load Matrix (number of units being moved between work centers A through F): the A-to-B cell is 750, as computed above; the remaining cells (150, 350, 400, 200, 300, 450, 700, and 50) are obtained in the same way from the other routings.
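The accumulation rule above is mechanical: walk each job's routing, and for every consecutive pair of work centers add that job's load to the corresponding cell. The sketch below (not from the slides) includes only the two routings spelled out in the text, Jobs 1 and 2, so its totals cover those jobs alone; feeding in the remaining jobs' routings would complete the matrix.

```python
# Building a load matrix from job routings. The routings and loads for
# Jobs 1 and 2 are taken from the slide's example; the other jobs'
# routings are not shown there, so only these two are accumulated.
from collections import defaultdict

jobs = {
    # job: (load in units, processing sequence through work centers)
    "Job 1": (100, ["A", "B", "C", "D", "F", "E"]),
    "Job 2": (200, ["C", "A", "B", "D", "B", "E", "F", "D"]),
}

load_matrix = defaultdict(int)
for load, seq in jobs.values():
    # every consecutive pair in the routing is one move of `load` units
    for src, dst in zip(seq, seq[1:]):
        load_matrix[(src, dst)] += load

print(load_matrix[("A", "B")])  # 100 + 200 = 300 from these two jobs alone
```

With all five routings supplied, the A-to-B cell would accumulate to the 750 units worked out in the text.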
In this lesson we will look at how to create, refine, and apply Lights luminosity masks. One of the main tasks that Lights masks help to solve is blending exposures. Even though the quality of images cameras can capture has significantly improved over the last few years, high-dynamic-range scenes still remain a challenge. Because our eye can very quickly adapt to a wide range of brightness levels, scenes with a bright, colorful sunset and a relatively dark foreground still look natural and nicely balanced to us. But not to the camera. One of the ways to trick the camera into shooting such a scene is to shoot several exposures. In the simplest case, two exposures are enough. One is taken so that the bright parts aren't white and overexposed, and another is taken for the shadows, making them bright enough to retain visible detail. The two exposures are then blended together. There are tools that combine such images automatically, and the technique itself is usually called HDR (High Dynamic Range). Unfortunately, the results are often far from perfect. Even when taking an HDR picture with a modern phone, you can notice that images with very high dynamic range look unnatural and show many issues when looked at closely. Blending images with masks, on the other hand, quite often yields much more accurate and natural results. Let's take two images as an example. One is brighter and the other is darker. They were shot that way because the scene is high-contrast - a bright sky and dark stones on the ground. We could have used a gradient filter here, but then it would apply not only to the sky, but also to the Lakhta Center skyscraper, situated in Saint Petersburg. Let's think a little and set our goals. We can achieve all of them by building a Lights mask from the first image and applying it to the second image. Here is how our layers look now. Let's hide the top (darker) layer and build a mask using the bright layer as the base.
First we select the source. We should pick the source in which the sky is brightest and has the most contrast against the other parts of the image. Comparing four different sources, it's logical that we will get the most contrast and the brightest sky on the blue channel. Of course, we could also settle for plain luminosity, but the better the source, the more accurate the resulting mask will be. As the next step we pick the mask itself, choosing from Lights to Lights 5. It looks like Lights 1 and Lights 2 would work best here. Lights 1 has a very bright sky. It's bright enough for our purpose, but this mask also keeps lots of detail in the shadows. Because a mask works where it's bright, that means some of the effect will also be applied in the shadows. Lights 2 has much darker shadows, which means they will be selected less, but its sky retains some detail - the clouds. So neither mask is perfect for our purpose. That is why we need to refine one of them. Let's work with Lights 2 and refine it with Levels: moving the whites point closer to the middle makes the lights and bright mids even brighter, without affecting the deep shadows. Now, making sure we have the top (darker) layer selected, we can apply our mask to it using the To Layer button on the panel. Here is our result! The image is now equally well exposed in the lights and the darks. On to the next lesson.
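As an aside, the idea behind this lesson can be written down in a few lines of array math. The sketch below is not ARCPanel's implementation; it is a rough NumPy illustration that builds a Lights-style mask from the bright exposure's blue channel (raising it to a power to approximate the narrower Lights 2-type selections) and uses it to blend the darker exposure over the brighter one. The two toy "images" are made-up pixel values, and the inputs are assumed to be aligned float RGB arrays in [0, 1].

```python
# Rough sketch of lights-mask exposure blending (illustrative only).
import numpy as np

def lights_mask(image, level=2):
    """Luminosity-style mask from the blue channel; a higher `level`
    restricts the selection to progressively brighter pixels."""
    blue = image[..., 2]
    return blue ** level

def blend(bright, dark, level=2):
    """Where the mask is bright (the sky), take the darker exposure."""
    mask = lights_mask(bright, level)[..., np.newaxis]
    return mask * dark + (1 - mask) * bright

# toy 1x2 "images": one sky pixel, one shadow pixel
bright = np.array([[[0.9, 0.9, 1.0], [0.2, 0.2, 0.2]]])
dark = np.array([[[0.5, 0.5, 0.6], [0.05, 0.05, 0.05]]])
result = blend(bright, dark)
```

The sky pixel (mask value 1.0) comes entirely from the dark exposure, while the shadow pixel (mask value 0.04) stays almost untouched, which is exactly the selective behavior the Levels refinement aims for.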
This program handles the p14 and p16 branches uniformly. Some new features:

Create bit fields. In some registers there are bits that belong together. For example: CVR0, CVR1, CVR2, CVR3. It can be useful to merge these under a common field name: CVR. The compiler helps handle these bit fields.

Create pseudo registers. Some registers come in pairs. For example: FSR0L and FSR0H. If the two are adjacent in the address space and the address of the register marked 'H' is the higher one, then it is worth creating a 16-bit pseudo-register. This pseudo-register is, of course, located at the lower address.
The long-standing philosophy and science behind the question Writer: Sermila Ispartalıgil Editor: Gracie Enticknap Artist: Sophie Maho Chan “With your feet on the air and your head on the ground/ Try this trick and spin it…/ Your head will collapse/ If there’s nothing in it / And you’ll ask yourself/ Where is my mind?” As the 1988 Pixies song suggests, the questions concerning the mind are endless, but where do they begin, and where are they now? Long before empirical fields emerged, philosophy was concerned with the nature of the mind, its features, mental states, relationship to the body, and learning processes. Explorations of the workings of the mind trace back at least to Ancient Greece. While Socrates focused on finding definitions of our concepts, Plato claimed that knowledge is about abstract ideas instead of things. Plato’s notion of Ideas and his suggestion that knowledge of them is innate have left a lasting influence in both philosophy and other fields in cognitive science. Aristotle, like current cognitive scientists, looked at the ways in which objects are represented in our thoughts, with his theory of perception in which the Form defining objects is transmitted to the perceiver’s consciousness. Although these ideas of the Ancient Greek philosophers are no longer accepted as they once were, they still influence research in cognitive science. While the rationalists relied on reason to understand the world, a stance adopted by many cognitive scientists, empiricists placed sensory perception over reason. John Locke’s ideas regarding how the mind works primarily through linking basic concepts by experience have formed the foundation for a long-standing tradition known as Associationism in cognitive science. Even though Kant would not have accepted cognitive science, his beliefs, which may be thought of as incorporating both empiricism and rationalism, are those most closely matched with the views presented in modern cognitive research.
Philosophy before Kant presumed that the world “exist[s] independently of us” and questioned the ways in which we might learn about it, whereas Kant argued that cognition was partially constitutive of the world around us. To him, uncategorized sensory experience and the objects it springs from (which he called things in themselves) cannot be known by us. His ideas can be seen as part of a turning point in philosophy: what we know about the world depends on how we construct it. Until the nineteenth century, when experimental psychology emerged, the study of the mind was confined to philosophy. Wilhelm Wundt and his students studied the mind more systematically in the laboratory, before behaviourism started to dominate experimental psychology. This was followed by the abandonment of the debates around consciousness and mental representations, but arguments against the premises of behaviourism still remained, notably against treating language as merely a learned habit, as criticised by Noam Chomsky. With pioneers such as the psychologist George Miller, Chomsky, and the computer scientists Allen Newell and Herbert Simon, the field of cognitive science started to emerge as an interdisciplinary endeavour, incorporating philosophy, psychology, artificial intelligence, neuroscience, linguistics, sociology, and anthropology. Philosophy remains deeply interconnected with both the theoretical and experimental paradigms within which cognitive science works. In his paper, Paul Thagard explores the ways in which the two fields can inform one another. One idea recurrent throughout the paper is that not taking philosophy into account would in fact simply mean adopting a certain philosophical view without ever questioning it.
He makes use of well-known phrases, adapting them to philosophy: “Those who ignore philosophy are condemned to repeat it… Those who believe themselves to be exempt from philosophical influence are usually the slaves of some defunct philosopher.” Seen in this light, avoiding philosophy seems impossible, as doing so leads to doing it implicitly and ineptly. Thagard focuses on two ways in which philosophy is relevant to cognitive science: generality and normativity. Generality aids research in areas like psychology, neuroscience, linguistics, anthropology, and artificial intelligence, which is often undertaken within a narrow framework or aims to address highly niche and specific questions. The generality of philosophy also works as a unifier in cognitive science, which is very multidisciplinary. Normativity, on the other hand, helps consider how things ought to be and not just how they are. While scientific research within cognitive science is mostly concerned with reaching descriptive claims, normative ones should also be in the picture to guide the ways in which those descriptive claims are reached. Similarly, philosophy also needs cognitive science. In order to support its theories and claims about the character and workings of the mind, knowledge, and morality, philosophy should utilize the discoveries of disciplines like psychology, neuroscience, and linguistics. While thought experiments can sometimes be helpful by themselves, they are most often not enough to support hypotheses without scientific evidence. Not a priori conjecture, but reflection on scientific advancements in domains like psychology, neurology, and computer science, will lead to metaphysical conclusions regarding the nature of thought. In parallel, epistemology is based on and benefits from scientific findings about mental structures and learning mechanisms, while ethics can make use of the psychology of moral thought.
To conclude his paper, Thagard makes use of an analogy to explain the nature of the relationship between philosophy of mind and cognitive science. “The men of experiment are like the ant; they only collect and use. The reasoners resemble spiders who make cobwebs out of their own substance. But the bee takes a middle course. It gathers its material from the flowers of the garden and of the field, but transforms and digests it by a power of its own.” What he hopes to see in philosophy of mind and cognitive science is that honey of the bee, which is the mixture of the experimental and the rational. Therefore, to answer the challenging questions posed by the lyrics of our favourite songs, we might find it most helpful to consult both philosophical theories and scientific findings, as the answers to where our mind is might be hidden somewhere underwater in philosophy of mind, cognitive science, or their interconnections.
Though the temperatures are dropping, winter still provides a whole new aspect of fun for kids. It’s easy to stay safe and still enjoy the cold weather. Certain cold-weather activities can lead to accidents, especially if kids haven’t been properly trained or are not supervised by adults. Make sure your kids have the right equipment specific to their activity (including helmets!). The most common injuries result from falls or collisions with others, so be sure to keep a close eye on them, especially during busy times. The majority of sledding, snow tubing, and tobogganing-related injuries occur to children age 14 and under. When a sled hits a fixed object such as a tree, rock, or fence, the rider is at risk for head and/or neck injuries. Do not lie down on the sled headfirst. Sit in a forward-facing position on sleds (not plastic sheets) that can be easily steered with the feet or a rope tied to the steering handles of the sled. Only let kids sled in approved locations, away from streets, fences, trees, posts, or other obstacles that could obstruct the sled path. Parents or adults must supervise children. Snowmobiling is becoming more and more popular and also comes with its own set of safety tips. All riders should wear goggles and helmets. No child under the age of 16 should operate a snowmobile, and children under six should not ride on them. Be cautious when snowmobiling near other riders, travel at safe speeds, and never snowmobile at night. Always remember to keep kids bundled up to avoid hypothermia, frostnip, and frostbite (take note that cotton clothing will not keep kids very warm). Be aware of the amount of time kids are spending in the cold, and don’t let them stay outdoors too long in extreme weather. Reference: American Academy of Pediatrics, “Winter Safety Tips,” January 2013.
http://www.aap.org/en-us/about-the-aap/aap-press-room/news-features-and-safety-tips/pages/Winter-Safety-Tips.aspx Reference: Kids Health, “Cold, Ice, and Snow Safety,” November 2012. http://kidshealth.org/parent/firstaid_safe/outdoor/winter_safety.html# Reference: American Academy of Orthopaedic Surgeons, “Sledding Injury Prevention,” September 2009. http://orthoinfo.aaos.org/topic.cfm?topic=A00306
KINESIOLOGY, MELISA TEST AND NEURAL THERAPY ACCORDING TO HUNEKE: NATURAL THERAPIES WHICH DO NOT OVERWHELM OUR BEING. Kinesiology is a methodology based on a simple but fundamental fact: “body language never lies”. The use of muscle testing as an indicator of body language enhances one’s ability to record and gain insight into all of the body’s functions. Muscle testing is the code used to talk to the body and receive feedback information. Therapeutic localization is the retrieval of feedback information from a change in the tested muscle’s strength after a pathological area has been touched. Neural therapy according to Huneke. Neural therapy is used to fight various pains and ailments. It was originally developed in Germany in 1925 by the brothers Ferdinand and Walter Huneke, who injected an antirheumatic drug into the vein of their sister, who was suffering from a severe migraine. Neural therapy uses small quantities of local anaesthetics: the infiltrations are made in about 200 areas, which correspond to the Chinese acupuncture points. Moreover, about 70 of these points are trunks of sensory and motor nerves, many of which are used by conventional medicine in local anaesthesia. Neural therapy works through the reflex action of the injected product, as is the case in acupuncture. Thus, effects can be obtained in organs and areas of the body other than those where the injection is made.
Monosomy is a term used to describe a condition in which one member of a pair of chromosomes is absent, so that the total number of chromosomes amounts to 45 instead of the normal 46. For instance, a child may be born with a single X chromosome instead of the usual pair of XX or XY; this child is said to have Monosomy X. Monosomy X is another name for Turner syndrome. Turner syndrome is a genetic condition that affects girls. The disorder causes affected girls to be shorter than normal and to fail to go through puberty as they mature into adults. The severity of the condition varies among individuals. Its features include heart defects, kidney problems, and skeletal abnormalities, among others. Most of the problems that come with Turner syndrome can be corrected or managed with appropriate medical treatment (Albertsson and Ranke, 2009). The name Turner syndrome is derived from Doctor Henry Turner, who investigated the condition in 1938. The condition affects an estimated 1 in every 2,500 newborn girls worldwide. It is most prominent in pregnancies that fail to survive to term (stillbirths and miscarriages). In normal reproduction, the sperm cell of the man and the egg cell of the woman each begin with the normal 46 chromosomes. The sperm and the egg go through cell division, and the 46 chromosomes are split in half so that the sperm and egg cells carry 23 chromosomes each. When an egg with 23 chromosomes is fertilized by a sperm with the same number of chromosomes, the resulting embryo is made up of a matching set of 46 chromosomes. However, an error sometimes occurs while a sperm or an egg cell is forming, causing it to lack a sex chromosome. Because this cell fails to donate a sufficient number of sex chromosomes, the result is Turner syndrome (Albertsson and Ranke, 2009). The missing chromosome can come from either the man’s sperm cell or the woman’s egg cell.
In most cases, this happens in the man’s sperm. It is not known whether the man failed to do something or did something that led to the error in his sperm. Therefore, contrary to common thought, Turner syndrome is not associated with advanced age of the mother. The features of this condition result from the missing X chromosome in the body’s cells. About half of Turner syndrome cases are caused by full Monosomy X. Other cases exhibit a mosaic pattern. A smaller portion of Turner syndrome cases are caused when part of the X chromosome is missing even though the normal 46 chromosomes are present (Rosenfeld and International Turner Syndrome Symposium, 2008). When only a portion of the X chromosome is missing (a deletion), affected girls exhibit mild symptoms of the condition. The symptoms of Turner syndrome depend on which portion of the X chromosome is missing. About half of girls with this condition exhibit puffy feet and hands when they are born, in addition to a wide, webbed neck. It is possible for the doctor to detect a “cystic hygroma” during pregnancy. A cystic hygroma is a fluid-filled sac located at the base of the neck. These sacs normally disappear before birth, but in some cases may persist into the newborn period. Typically, girls with this syndrome exhibit a low hairline at the rear of the neck, deep-set nails, widely spaced nipples, and minor differences in the position and shape of their ears. The most visible feature among individuals with Turner syndrome is short stature. The average height of an adult woman with Turner syndrome is 4 feet 9 inches. Most women born with this condition have poorly formed ovaries or may even lack them. Ovaries play a major role in a female’s body, including producing estrogen. Limited estrogen in the body leads to curtailed sexual development. Normal signs of puberty such as menstruation, breast development, and growth of axillary and pubic hair do not occur properly.
In most cases, the resulting infertility cannot be corrected (Rosenfeld and International Turner Syndrome Symposium, 2008). Kidney problems, heart problems, and thyroid disorders are also common among these individuals and should prompt early evaluation. About two in ten girls suffer from coarctation of the aorta. Other signs noted in people with Turner syndrome include middle-ear infections, feeding problems in infancy, and skeletal problems. The skeletal problems come in the form of “cubitus valgus”, meaning that the elbows turn slightly outward. Other reported features include high blood pressure, dry skin, diabetes, and a small jaw. Nevertheless, girls with this condition have normal intelligence; they perform better on verbal IQ than on nonverbal IQ.

Treatment and Management

Currently, Turner syndrome has no cure. However, most of the serious problems that come with it can be treated. Androgen and growth hormone therapy, for example, can be administered to promote normal body growth (Ranke and Rosenfeld, 2012). Additionally, hormone replacement therapy is an effective way of helping girls develop sexual characteristics such as breasts and pubic and axillary hair. Surgery can be applied in cases where the patient suffers from coarctation of the aorta. There is medicine capable of treating diabetes, high blood pressure, and thyroid problems. If they desire, women with Turner syndrome can rely on egg donation to give birth to children. It is important to administer treatment and management measures while the girl is still young; failure to do so raises the chance of poor results once she has grown. For example, estrogen replacement therapy should begin around age 12, when girls normally enter puberty (Ranke and Rosenfeld, 2012).

Social and Economic Implications

Ultimately, Turner syndrome holds major social and economic implications for the patient and her loved ones.
In terms of social relations, girls and women with this condition exhibit low self-esteem and limited sexual experience. Due to diminished growth and the other problems that come with the condition, affected girls show poor social adjustment. The main reason is that they are subjected to bullying and disdain by their peers. Additionally, some patients have a hard time adapting to new institutions, which can lead to inappropriate behavior. Tasks such as making new friends become difficult while learning how to act in a new social environment (Oxford Clinical Communications and Pharmacia Peptide Hormones, 2011). Depression and anxiety commonly develop. Successful treatment and management of this condition also require adequate funds. The girl needs to make frequent visits to the hospital for regular checkups, and the necessary therapy, medicine, and counseling are very expensive. This implies large expenses for the parents or guardians. In unfortunate cases, the girl may come from a poor background that cannot afford to pay for treatment and management; in such cases, her wellbeing is often compromised.

References

Albertsson-Wikland, K., & Ranke, M. B. (2009). Turner syndrome in a life span perspective: Research and clinical aspects. Proceedings of the 4th International Symposium on Turner Syndrome, Gothenburg, Sweden, 18-21 May 2009. Amsterdam: Elsevier.

Oxford Clinical Communications, & Pharmacia Peptide Hormones. (2011). GH and Turner syndrome. Oxford: Oxford Clinical Communications for Pharmacia Peptide Hormones.

Ranke, M. B., & Rosenfeld, R. G. (2012). Turner syndrome: Growth promoting therapies. Workshop on Turner syndrome: Papers. Amsterdam: Excerpta Medica.

Rosenfeld, R. G., & International Turner Syndrome Symposium. (2008). Turner syndrome. New York: M. Dekker.
As part of the ocean conveyor belt, warm water from the tropical Atlantic moves poleward near the surface, where it gives up some of its heat to the atmosphere. This process partially moderates the cold temperatures at higher latitudes. As the warm water gives up its heat, it becomes denser and sinks. The circulation loop is closed as the cooled water makes its way slowly back toward the tropics at greater depths in the ocean. If the poles warm, it is possible that meltwater from glaciers and the polar ice cap could shut off this circulation system. The meltwater is fresher, and hence less dense, than the ocean water it melts into, and thus it tends to accumulate near the surface. This layer of fresh water acts as an insulating barrier between the atmosphere and the normal ocean water: the water from the tropics cannot release its heat to the atmosphere, and the circulation loop is interrupted. The mechanism has positive-feedback potential in that if the ocean circulation slows, even less heat will make it to the higher latitudes, reinforcing an effect that cools the climate there. GCMD keywords can be found on the Internet with the following citation: Olsen, L.M., G. Major, K. Shein, J. Scialdone, S. Ritz, T. Stevens, M. Morahan, A. Aleman, R. Vogel, S. Leicester, H. Weir, M. Meaux, S. Grebas, C. Solomon, M. Holland, T. Northcutt, R. A. Restrepo, R. Bilodeau, 2013. NASA/Global Change Master Directory (GCMD) Earth Science Keywords. Version 184.108.40.206.0
Why do solutions of weak acids have higher pH values than solutions of strong acids at the same concentration? Is it because a higher pH indicates a less acidic solution, since the weak acid has not fully dissociated?

Like you said, weak acids do not dissociate 100%, so they will produce less H3O+, resulting in a higher pH value. The concentration of H3O+ will be lower than that of a stronger acid, since strong acids almost always dissociate completely.

If you are saying higher pH in terms of numbers closer to 7, then yes, you would be correct. If you think about a strong acid that has completely dissociated, the number of H+ ions in that solution will be greater than that of a weak acid.

I also wanted to add that pH is related to the concentration of hydronium ions (H3O+). For example, if you have a pH of 1, to convert that into a concentration you would take the antilog of -1. (The reason you take the antilog is that pH = -log[H3O+]; we know that the pH is 1, so we solve for the concentration of H3O+.) Converted into a concentration, it would be 10^-1. Compare that with, say, pH 7: the hydronium concentration would be approximately 10^-7 for pH 7, while the concentration for pH 1 would be approximately 10^-1. Comparing the two, we can see that 10^-1 is larger than 10^-7. Hope this helps and clarifies any confusion.
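The antilog relationship described in the thread is easy to check numerically. The short script below is just an editorial illustration of pH = -log10([H3O+]) and its inverse, not part of the original discussion.

```python
# Numeric check of the pH <-> hydronium-concentration relationship:
# pH = -log10([H3O+]), so [H3O+] = 10**(-pH).
import math

def h3o_concentration(ph):
    """Hydronium concentration (mol/L) for a given pH."""
    return 10 ** (-ph)

def ph_from_concentration(h3o):
    """pH for a given hydronium concentration (mol/L)."""
    return -math.log10(h3o)

# pH 1 (strong acid range) vs pH 7 (neutral): the hydronium
# concentrations differ by a factor of about one million.
ratio = h3o_concentration(1) / h3o_concentration(7)
print(ratio)  # ~1,000,000
```

Each unit of pH is a tenfold change in [H3O+], which is why a weak acid that only partially dissociates sits noticeably closer to 7 than a fully dissociated strong acid at the same nominal concentration.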
A cancer diagnosis often begins a frightening and uncertain time for many people. In some cases, however, developing certain cancers can serve as a warning sign of larger biological problems that may open the door to additional cancers. Also known as hereditary non-polyposis colorectal cancer, or HNPCC, Lynch Syndrome is a condition in which genetic defects result in an increased likelihood of the patient developing more than one type of cancer. The disorder often manifests itself during a colorectal cancer discovery, and signifies a risk of also developing a variety of other cancers, including stomach, breast and liver. Because Lynch is a genetic mutation, it is highly transferable from generation to generation, and affected parents have a 50/50 chance of passing it to their kids. According to Dr. Har Chi Lau of Hudson Valley Surgical Group, who regularly performs abdominal surgeries, colon cancer is usually found by GI doctors during routine colonoscopies. Once colon cancer is identified and treated, he recommends patients immediately undergo testing for Lynch Syndrome. "You should see if any family member has a history of Lynch-related types of cancers, because if you have Lynch, you're almost 80 to 90 percent guaranteed to have colorectal cancer," he said. Fortunately, Lynch Syndrome is not a common occurrence — the general population stands a 1 in 300 chance of developing it — but identified instances are on the rise due to increased awareness and testing by doctors. "Most people don't know they have Lynch until they have a cancer already," said Lau. "There are blood tests you can do, and if you have a parent with a certain type of cancer, you as a child should speak to a genetic counselor." In many cases, those with Lynch Syndrome simply opt to have their colon removed as a preventative measure. "In terms of surgical complexity, the [colon] removal is a bit on the higher end, but it's not a high risk or dangerous procedure," said Lau.
"[That's why] having the right surgeon is key."
Gather around, Shmoopers. Put on your thinking caps and buckle your seatbelts, because we’re going on a trip in the Wayback Machine. Time: 1945. Place: The US of A. But something is… different. Just like today, the United States of 1945 was a diverse land. But if you opened the newspaper, you'd see a different story. Politicians, journalists, models, even small business owners: they’re all a whiter shade of pale. A black president? Fat chance. There isn’t even a black Disney Princess (although it’s worth pointing out that the US got a black president before it got a black princess). So imagine how the public felt when Black Boy was released in 1945. Richard Wright’s story about a young boy from the South struggling to grow up and become a writer in a world that constantly tries to crush his dreams hit the literary world like the eighth, undiscovered Harry Potter. It spent four months at the top of the best-seller list, and was the fourth best-selling novel at the end of 1945. With a bump from the Oprah-like Book-of-the-Month Club, everyone was reading Wright’s words. Before the Book-of-the-Month Club agreed to give it that boost, though, Wright had to make some changes. The entire second section was nixed, along with some "obscene" parts from the first section. Mentions of the Communist Party, in which Wright was an active member? All gone. Suggestions that the North was not the Promised Land for black people? Also axed. The 1945 version of Black Boy was a much more cheerful book than Wright originally meant it to be. Even in its bowdlerized form, Black Boy was a favorite of both fancy literary critics and regular folk like us Shmoopers. Wright’s literary skills, as well as his honest portrayal of the life of black Americans, won him plenty of admirers—and plenty of money. When the complete manuscript was published in 1991 (thirty-one years after Wright died in 1960), readers were in for a nice surprise.
Turns out that Wright had mad philosophy skills in addition to impressive literary credentials. With its second half intact, the book is about much more than the personal experiences of one boy growing up in the racist South. It’s also about every individual’s struggle to find a meaningful life.
Why Should I Care?
In 2009, the US officially became "post-racial." We got a black president. We got a black Disney Princess. Just a few years later, Beyoncé was named People's Most Beautiful Woman. Racism is over, and you don’t need another dusty old book to tell you how bad it was. Right? Not so much. While we've made great strides when it comes to civil rights, there's still a long way to go. And the dream of a post-racial America isn't the only one that's still a work in progress. Have you ever had a dream that everyone said was impossible? It doesn’t have to be something major, like solving racism. It could be something like getting an A in a subject you’re bad at, or being the first person in your family to go to college, or keeping your room clean for a month. If you know what it is like for people to say your dream will never happen, then it doesn’t matter what your skin color is: Richard Wright is talking to you. Ignore the fancy literary criticism and the highbrow philosophy, and Black Boy is an open letter to everyone who said he couldn’t make it. It’s also a letter to all the people who feel just like he does. He’s urging you not to act like everyone expects you to act and not to do what everyone assumes you will. He wants you to find your own path. He wants you to clear your own way. He wants you to stand at his side shouting "Shmoop you!" right along with him. Do you know how he feels? If so, Comrade, welcome to the club.
When I took an undergraduate chemistry course a few years back, I loved lab, but I have to admit writing up the lab reports seemed like so much busy work. Each report had specified sections, and the lab manual offered advice on what to put in the sections, depending on the experiment. I remember trying to get them done in a hurry and thinking that I wasn’t learning much by doing them. They seemed more like something the instructor could use to make sure you (and your partners) had actually done the lab. That’s probably why I found an article identifying some of the problems with the typical lab report so interesting. Most lab reports follow the format of a scientific report with sections that generate the hypotheses, describe the methods, report the data, and discuss the results; but they end up being very superficial versions of true scientific inquiry. Moreover, lab reports are written for the teacher, not for a professional community, and rarely is there any revision component. “Because a lab report typically does not address a genuine question, it does not teach students how scientists find questions, construct hypotheses, design experiments, or make arguments supported by data from the experiment.” The net result is an assignment that “does not help a student learn to think like a chemist.” (p. 20) (The article is about an organic chemistry course.) Elaborating this critique still further, the authors note, “The problem with conventional lab reports is that they encourage students to think and behave like students rather than like professionals. Because students know (or think they know) the expected outcome of the ‘cookbook’ experiments, they chalk up any deviation from the expected outcome as ‘experimental error’ with little thoughtful explanation.” (p. 20) But that’s still not the worst problem with lab reports. They generate a single datum. “No scientist would follow such a process,” the authors assert.
It means that “the lab report develops habits that students must unlearn if they are going to think and write like professional chemists.” (p. 20) What’s the alternative? These chemists, with faculty colleagues knowledgeable about writing across the curriculum, “redesigned the sophomore organic experiments so that they promoted genuine inquiry resulting in enough data to be worth writing about; they designed sequences of writing assignments to teach the scientific paper over the course of a year; and they built in genuine writing instruction—employing well-designed assignments, examples, rubrics and peer review—to help students develop ‘writing process knowledge’.” (p. 20) That sounds much more worthwhile than the workbooky lab reports I completed without much thought or effort. Reference: Alaimo, P. J., Bean, J. C., Langenhan, J. M. and Nichols, L. (2009). Eliminating lab reports: A rhetorical approach for teaching the scientific paper in sophomore organic chemistry. The Writing Across the Curriculum Journal, 20, 17-32.
Carved from Greek marble in the third century, this sarcophagus was in the garden of Palazzo Riccardi at the beginning of the 18th century. On the left part of the main side we see Phaedra’s unhappy relationship with her stepson, Hippolytus. Ancient sculptures used as architectural and garden decoration require restoration from the natural elements to which they have been exposed, often for hundreds of years.
On the Vestibule…
When the Direction of the Gallery expressed the intention of integrally restoring the Lorraine staircase and the vestibule of the Museum and, as usual, invited the Association of the ‘Amici degli Uffizi’ to participate, it seemed to me that there was no better solution than to involve our partners from America, the ‘Friends of the Uffizi Gallery’, our affiliate within the United States. There was no doubt the effort would be an historic one, given that the intervention regarded ancient marbles of great importance, the majority of which were on exhibit in one of the noblest sections of the building, an area that, as part of the project, was to be restored to its original color scheme. Now that all is done, I can say that we made the right decision: the ancient sculptures truly enrich the landing overlooking the monumental staircase and, above all, the oval vestibule, safeguarded by the effigy of Peter Leopold, the man responsible for the creation of the Gallery’s magnificent new entryway.
Maria Vittoria Colonna Rimbotti
President of the Friends of the Uffizi Gallery
ASK A SCIENTIST Question: Does a Venus flytrap need water and does it make its own food? An interesting question, about a very interesting plant! The Venus flytrap is one of the carnivorous plants (sometimes called insectivorous plants). These plants are distinguished by their adaptations to trap and digest small animals, primarily, but not exclusively, insects. The Venus flytrap is native to bogs and wetlands in a small area in parts of North and South Carolina. Like all plants, it requires water, which it gets from its roots like most other plants. And like all other green plants, it carries out photosynthesis, producing sugars from carbon dioxide, water, and energy from sunlight. However, the acidic soil that the flytrap grows in is relatively poor in available nutrients, especially nitrogen, and this limits the growth of the plant. The Flytrap overcomes this limitation by capturing and digesting insects, spiders, etc. Its traps are the modified ends of its leaves, which also have glands that produce digestive enzymes. The digested prey supplies the plant with nitrogen, primarily in the form of amino acids, as well as some required minerals. These nutrients allow the flytrap to grow much more robustly than if it relied only on nutrients taken from the soil of the bog. The Venus flytrap is not the only type of carnivorous plant. Indeed, a number of other carnivorous plants are found in bogs in New York, including sundews, pitcher plants, and bladderworts. Each of these utilizes a different type of trap. For more information on carnivorous plants you can look on the Internet. Two good sites are: http://www.botany.org/Carnivorous_Plants/ and http://www.carnivorousplants.org/ . You can also see several different species of carnivorous plants, including the flytrap, and many other interesting plants, at the Binghamton University greenhouse (http://biogreenhouse.binghamton.edu/index.htm ). Ask a Scientist appears Thursdays. 
Questions are answered by faculty at Binghamton University. Teachers in the greater Binghamton area who wish to participate in the program are asked to write to Ask A Scientist, c/o Binghamton University, Office of Communications and Marketing, PO Box 6000, Binghamton, NY 13902-6000 or e-mail firstname.lastname@example.org. Check out the Ask a Scientist Web site at askascientist.binghamton.edu. To submit a question, download the submission form(.pdf, 460kb).
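The photosynthesis the answer describes, producing sugars from carbon dioxide, water, and light energy, can be summarized by the familiar overall equation:

```latex
6\,\mathrm{CO_2} + 6\,\mathrm{H_2O} \xrightarrow{\text{light}} \mathrm{C_6H_{12}O_6} + 6\,\mathrm{O_2}
```

The flytrap carries this out like any green plant; the captured prey supplies nitrogen and minerals, not energy.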
Collins German School Dictionary : Trusted Support for Learning Paperback A pocket-sized German reference for secondary school students looking for a dictionary that is modern, up-to-date, clear and easy to use. Developed for home study, and to be used in the classroom. * Especially designed for school students, and is ideal in the classroom, at home and during exams. * It contains all the words and phrases students will need, with key curriculum words highlighted, all essential phrases covered, and thousands of examples to show how German is really used. * The clear layout and alphabet tabs down the side of each page lead the students to the information they need quickly and without fuss. All main translations are underlined to help users go straight to the answer they are looking for. * The dictionary includes language tips and culture notes throughout the text. * Ich bin, du bist...German verbs made easy! Each verb on the German side of the dictionary is cross-referred to a comprehensive verb guide, with full conjugations and example phrases showing the verb used in context. Also available in French and Spanish. - Format: Paperback - Pages: 624 pages - Publisher: HarperCollins Publishers - Publication Date: 09/04/2015 - Category: Bilingual/multilingual dictionaries (Children's/YA) - ISBN: 9780007569342
The History of Rome
How was Rome founded? How did it become an Empire? What was the role of the Catholic Church throughout Rome’s history? Learn about the fascinating past of this striking city before you travel to Rome. The exact origins of the city of Rome are still somewhat of a mystery. There are several theories, all based on the writings of ancient authors and on archaeological discoveries. For this reason, the founding of Rome is based mainly on legend and myth, instead of solid facts and figures. The existence of a Roman Kingdom was even questioned for practically two centuries by expert historians. During the nineteenth and twentieth centuries, they dismissed the idea of the early kings of Rome (Romulus, Numa Pompilius, Tullus Hostilius) as well as the date of the founding of what would later become the capital of Italy, in 753 BC. This part of history was merely considered a legend and not taken seriously. It was only in the late twentieth century, thanks to the findings of numerous archaeological digs and other sciences, that the myths surrounding the establishment of the city and its first rulers were reconsidered. It is believed that the first inhabitants of Rome came from various parts of the region, and had neither the economic nor the cultural development of their northern neighbors, the Etruscans, nor of their southern neighbors, the Sabines and Latins. On the Palatine Hill archaeologists found the remains of a primitive settlement from the eighth century BC, with burials on the outskirts of the village. It is thought that as the population grew, the inhabitants settled on the slopes of the nearby hills, and during the next century they established themselves in the valley.
the act of supplying fresh air and getting rid of foul air
ventilation, ventilation system, ventilating system (noun): a mechanical system in a building that provides fresh air ("she was continually adjusting the ventilation")
public discussion, ventilation (noun): free and open discussion of (or debate on) some question of public interest ("such a proposal deserves thorough public discussion")
breathing, external respiration, respiration, ventilation (noun): the bodily process of inhalation and exhalation; the process of taking in oxygen from inhaled air and releasing carbon dioxide by exhalation
The replacement of stale or noxious air with fresh.
The mechanical system used to circulate and replace air.
An exchange of views during a discussion.
The bodily process of breathing; the inhalation of air to provide oxygen, and the exhalation of spent air to remove carbon dioxide.
the act of ventilating, or the state of being ventilated; the art or process of replacing foul air by that which is pure, in any inclosed place, as a house, a church, a mine, etc.; free exposure to air
the act of refrigerating, or cooling; refrigeration; as, ventilation of the blood
the act of fanning, or winnowing, for the purpose of separating chaff and dust from the grain
the act of sifting, and bringing out to view or examination; free discussion; public exposure
the act of giving vent or expression
Origin: [L. ventilatio: cf. F. ventilation.]
Ventilating is the process of "changing" or replacing air in any space to provide high indoor air quality. Ventilation is used to remove unpleasant smells and excessive moisture, introduce outside air, to keep interior building air circulating, and to prevent stagnation of the interior air. Ventilation includes both the exchange of air to the outside as well as circulation of air within the building. It is one of the most important factors for maintaining acceptable indoor air quality in buildings.
Methods for ventilating a building may be divided into mechanical/forced and natural types. "Mechanical" or "forced" ventilation is used to control indoor air quality. Excess humidity, odors, and contaminants can often be controlled via dilution or replacement with outside air. However, in humid climates much energy is required to remove excess moisture from ventilation air. Kitchens and bathrooms typically have mechanical exhaust to control odors and sometimes humidity. Kitchens have additional problems to deal with, such as smoke and grease. Factors in the design of such systems include the flow rate and noise level. If ducting for the fans traverses unheated space, the ducting should be insulated to prevent condensation on it. Direct-drive fans are available for many applications and can reduce maintenance needs.
U.S. National Library of Medicine
Supplying a building or house, their rooms and corridors, with fresh air. The controlling of the environment thus may be in public or domestic sites and in medical or non-medical locales. (From Dorland, 28th ed)
Sample Sentences & Example Usage
We try to do our best to limit that pressure, but to get adequate ventilation for stiff lungs, we have to use the respirator, and that can also cause the lungs to not develop properly.
Tyler is thriving. Every time I see him he’s breathing well without ventilation or oxygen, and his voice and speech have improved. The day Tyler’s trach was removed was the best day.
The death of Carol Glover was an unnecessary tragedy. So many things went wrong. Radios didn't work. Ventilation fans didn't get smoke out; the ventilation on the trains themselves sucked smoke into the trains.
If you look at the pens, they are just brick sheds with corrugated tin sheets on the roof. During times when the weather is okay, they can work.
But when the heat comes, there is hardly any room for ventilation and there is no money to invest to save the stock.
It's not unexpected that she's not ready to come off (the ventilator). It's just that in our best-case scenario (we thought) maybe we could've hoped to get her off (Sunday) afternoon. Whenever they put a patient on a ventilator they're looking at a 50-50 chance of getting them off breathing on their own. The smoke ventilation cases are often the hardest to manage off the ventilator.
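The flow-rate design factor mentioned under mechanical ventilation is often expressed as air changes per hour (ACH): how many times per hour a fan replaces the room's entire air volume. The fan size and room dimensions below are hypothetical, chosen only to illustrate the arithmetic:

```python
def air_changes_per_hour(flow_cfm, room_volume_ft3):
    """ACH = cubic feet of air moved per hour divided by room volume.

    flow_cfm is fan flow in cubic feet per minute, so multiply by 60
    to get cubic feet per hour.
    """
    return flow_cfm * 60.0 / room_volume_ft3

# A hypothetical 50 CFM bath fan in an 8 ft x 10 ft bathroom with 8 ft ceilings:
ach = air_changes_per_hour(50, 8 * 10 * 8)
print(round(ach, 2))  # 4.69
```

Codes and standards specify minimum ACH or CFM values per room type; the point here is only how flow rate and room volume combine.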
My 11-month-old son throws his food when in his highchair and screams. We have been practicing sign language for months and no matter what we do he still throws his food and screams. I would love some suggestions. Sign language is a wonderful tool for helping young children communicate their needs. I remember using both verbal and sign language to communicate my daily activities to my daughter and being so excited when she started using it on her own to communicate with me. Keep it up – before long he’ll likely be communicating with you in the ways you model communicating to him. Let’s talk about developmentally appropriate behaviors. It may be a relief to know that kids throw, spill, taste, swirl, flick and otherwise “play” with food and other objects as a means to understanding their world. Think of it as an experiment in science, physics, art, psychology, etc. all wrapped up in a single meal (and maybe in every meal and snack from birth to age 4)! Through sensory exploration children begin to learn the laws of physics (there goes the water out of the cup), emotional intelligence (mom sure looked over here quickly when I did that) and color hues (oh! green puree plus orange puree makes brown puree) simply from exploring their food. So, while I’m not saying you can’t guide him toward more socially acceptable behaviors, it may help to know that these behaviors are a normal part of your child’s learning process and not meant to be a direct assault against your sense of cleanliness or manners. So, how do we guide them toward more appropriate behaviors? First, children need to be able to explore their environment in sensory-stimulating ways in order to get their need for stimulating learning experiences met (check out Play At Home Mom for more ideas on fun and creative sensory exploration).
Once children have a regular outlet for sensory exploration they then become free to learn other uses for food (as adults we call this use of food “nourishment” and we use “manners” – sometimes I laugh at our ability to take the fun out of everything). This isn’t to say your son won’t still explore his food at the table, but he may be more likely to eat his food and perform more sensory explorations at other times – just keep in mind he’s still going to eat in developmentally appropriate ways, which aren’t as neat as most adults eat! LOL Once you’re meeting his need to learn about his environment you can guide him toward the socially appropriate behaviors at the table. So when he throws his food off the table you can say in a kind tone, “you want to throw the peas, but peas are for eating. You can eat the food with your fingers or use a spoon” or “I know it’s fun to throw the carrots, but carrots are for eating. When you’re done we can go throw a ball (or fill in your own activity).” He’ll need a number of gentle and consistent reminders throughout the course of the meal, and for several weeks…even months. Expect him to play with his food; encourage him to keep it on the table. I know a lot of parents embrace the throwing and then have their child help clean up. This is a great option because he is learning cause and effect while also having some time to connect with you and help, which builds a sense of capability. This brings us to the third part of your dilemma, so let’s talk about behaviors that are driven by unmet needs.
Rudolf Dreikurs says, “there’s no such thing as a misbehaving child, only a discouraged child.” Together with Alfred Adler, he often discussed the meaning behind children’s behaviors, which they called “Mistaken Goals of Misbehavior.” Jane Nelsen and Lynn Lott developed an approach to these behaviors called “Positive Discipline,” in which they help parents identify the 4 Mistaken Goals of Misbehavior. Once parents understand the goal of their children’s misbehavior they are then able to help children learn appropriate behaviors by first meeting their children’s unmet needs, then helping them learn to meet their needs in more socially appropriate ways. My guess is that he is either screaming to get your attention, screaming because he is excited about the science experiment of eating, or it’s possible he’s screaming because he really hates the high chair and would be more content in your lap. We’ll address the former because the latter will take care of itself when he has a chance to do those science experiments in other settings, as we discussed earlier. So here are some things you can try to help guide your child’s behavior. When he screams you can offer him your company or give him some alternative ways to get your attention:
- “Oh, you’d like to get out of your highchair and sit on my lap. You can say “lap.” Then let him sit on your lap and eat.
- “Oh, you’d like for me to sit with you and eat,” then join him.
- “You’d like my attention. You can get it by calling my name “mama.” Then join him at the table.
I know my daughter loves to sit and eat as a family. In fact, when I feed her by herself she is less likely to participate in the meal in ways that meet MY needs (sit quietly, eat neatly, finish your food). So I try to remember her needs when she spills her water, drops her strawberries and scrambles down to play tag with the dog. Discover the meaning behind your child’s behavior, the need that is driving them to scream, cry, whine, hit, run, etc.
Once you understand your child’s needs you will be more equipped to address the behavior and begin to guide them toward expressing those needs in positive ways instead.
“We are the music makers, and we are the dreamers of dreams.” Arthur O’Shaughnessy Students at Horizons learn to dream and expand through the study of music. Music serves to unify the wide range of ages at our K-8 school. At all grade-levels, students learn about the mechanics of music, but always consider music as a form of self-expression, as well as an opportunity to step outside ourselves and understand the wide world. As such, these studies complement classroom explorations of history, culture, art, science, and more. In the earlier grades, students explore foundations of music through song, storytelling, rhythmic instruments and movement. Third and fourth graders continue this exploration, while also learning to play bucket drums, recorders, and ukuleles. Fifth and sixth graders deepen their study of music through the ukulele, handbells, and keyboard. Finally, seventh and eighth graders take a project-based approach to ensemble-building and part writing. Extending beyond the regular school day, choir and orchestra are offered to Horizons students, third grade and up. Weekly rehearsals culminate in performance experiences for friends and families, at nearby retirement homes, and at community events.
The Women Empowerment activities initiated by Late Moolchand Meena T.T. College have had a profound and positive impact on the lives of women in the local community. Through various programs and initiatives, the college has worked tirelessly to uplift women, enhance their skills, and promote gender equality. Here are some ways in which these activities have impacted the lives of women:

Enhanced Education and Knowledge: The college's focus on women's education has enabled many girls and women to pursue higher education and gain knowledge in diverse fields. By providing scholarships and financial assistance, Late Moolchand Meena T.T. College has made education more accessible for women who may have otherwise faced barriers. As a result, women have been empowered with knowledge, enabling them to make informed decisions, expand their horizons, and pursue their dreams.

Increased Self-Confidence: Through workshops, seminars, and skill development programs, the college has played a significant role in boosting the self-confidence of women. These initiatives have provided women with opportunities to learn new skills, develop leadership qualities, and express themselves confidently. As a result, women have gained the courage to voice their opinions, assert their rights, and actively participate in various spheres of life.

Entrepreneurship and Economic Empowerment: Late Moolchand Meena T.T. College has actively promoted entrepreneurship among women by organizing workshops and training programs focused on business management, financial literacy, and marketing strategies. These initiatives have equipped women with the necessary skills and knowledge to start their own ventures, become financially independent, and contribute to the economic growth of their families and communities.

Awareness of Rights and Social Issues: The college's women empowerment activities have also played a crucial role in raising awareness about women's rights, gender equality, and social issues. Through seminars and discussions, women have been educated about their legal rights, domestic violence prevention, reproductive health, and other relevant topics. This increased awareness has empowered women to stand up against discrimination, break societal barriers, and advocate for gender equality.

Leadership Development: Late Moolchand Meena T.T. College has nurtured leadership qualities in women by providing opportunities for them to take on leadership roles in various programs and initiatives. This has resulted in the emergence of strong women leaders who can inspire and guide others in their communities. By developing leadership skills, women have become agents of change, actively participating in decision-making processes and working towards the betterment of society.

Role Models and Mentoring: The college's women empowerment activities have created a supportive environment where women can connect, share experiences, and receive guidance from successful women professionals. Through mentorship programs and networking opportunities, women have gained valuable insights, advice, and encouragement from established role models. This has instilled a sense of hope and motivation among women, inspiring them to overcome challenges and pursue their goals with determination.

The Women Empowerment activities of Late Moolchand Meena T.T. College have brought about significant transformations in the lives of thousands of women. Through education, skill development, entrepreneurship, awareness programs, and leadership opportunities, women have gained confidence, knowledge, economic independence, and a strong voice in society. The college's commitment to empowering women has created a ripple effect, inspiring and empowering generations of women to break barriers, achieve their full potential, and contribute meaningfully to their communities.
<urn:uuid:02592066-edd8-4237-b412-7a2dd84a684b>
CC-MAIN-2023-40
http://mcmttcollege.net/women-empowerment.html
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510671.0/warc/CC-MAIN-20230930082033-20230930112033-00336.warc.gz
en
0.964594
690
2.671875
3
Shikakai (Acacia concinna) is a small shrub-like tree which grows in the warm, dry plains of South Asian countries. For centuries the people who have had access to this tree have used its pod-like fruit to clean their hair. They collect, dry and grind this pod into a powder which is considered a superior cleanser for "lustrous long hair" and has been reported as "promoting hair growth and preventing dandruff". Because of these benefits, this powder was named "shikakai", which literally translates as "fruit for the hair". Shikakai is readily available and continues to be commonly used as a preferred shampoo. The dried, powdered fruit is sold in attractive packages that show women with long, beautiful, shiny hair. Many popular brands are sold throughout South Asian countries. Typically, shikakai is mixed with water to make a paste which is worked through the hair. It lathers moderately and cleans hair beautifully. It has a naturally low pH, is extremely mild, and doesn't strip hair of natural oils. Usually no rinse or conditioner is used, since shikakai also acts as a detangler. This ancient product is probably the world's original pH-balanced shampoo. The resulting shampoos are truly different: they are gentle, mild, naturally low in pH, and are genuine alternatives to all other shampoos found today. Shikakai is a natural, non-polluting product that does not harm our environment and is considered a renewable primary product. Shikakai trees absorb carbon dioxide and release oxygen, and they grow in large numbers in the hill areas of Bangladesh. Sindhiya Enterprise Bangladesh is the collecting source of shikakai, soapnut shells and other natural and eco-friendly products in Bangladesh, sourced directly from the people of the hill tribes, forests and surrounding areas. Poor women of the area in particular collect shikakai from the jungles; this is the main source of income for the common people of the area in the winter season, when the shikakai fruit appears on its small trees.
Our environment and the greenhouse balance are harmed by anti-nature products. Nature has already arranged many things for us to protect this balance, yet we continue to damage it. Now is the time to leave such products behind and join hands with all earth and nature lovers to save our Mother Nature for ourselves and for our children. Sindhiya Enterprise Bangladesh is the main source in Bangladesh for eco-friendly and natural products.
<urn:uuid:045dd560-72ee-4ea0-bec6-6246e928be82>
CC-MAIN-2014-15
http://naturalcosmeticsguide.com/shikakai-acacia-concinna-a-natural-shampoo-and-hair-tonic-the-shampoo-grows-on-tree/
s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00382-ip-10-147-4-33.ec2.internal.warc.gz
en
0.961123
541
2.546875
3
Charles Edison
- Portrait: Edison in 1945
- 42nd Governor of New Jersey: January 21, 1941 – January 18, 1944; preceded by A. Harry Moore, succeeded by Walter Evans Edge
- United States Secretary of the Navy: January 2, 1940 – June 24, 1940; preceded by Claude A. Swanson, succeeded by Frank Knox
- Born: August 3, 1890, West Orange, New Jersey
- Died: July 31, 1969, New York City
- Relations: Thomas Edison (father)

Charles Edison (August 3, 1890 – July 31, 1969) was a son of Thomas Edison and Mina Miller. He was a businessman who became Assistant and then United States Secretary of the Navy, and served as the 42nd Governor of New Jersey. Born at his parents' home, Glenmont, in West Orange, New Jersey, he attended the Hotchkiss School in Lakeville, Connecticut. In 1915-1916 he operated the 100-seat "Little Thimble Theater" with Guido Bruno. There they played the works of George Bernard Shaw and August Strindberg while Charles contributed verse to "Bruno's Weekly" under the pseudonym "Tom Sleeper". Late in 1915, he brought his players to Ellis Island to perform for Chief Clerk Augustus Sherman and more than four hundred detained immigrants. These avant-garde activities came to a halt when his father put him to work. He married Carolyn Hawkins on March 27, 1918. They had no children. For a number of years Charles Edison ran Edison Records. Charles became president of his father's company Thomas A. Edison, Inc. in 1927, and ran it until it was sold in 1957, when it merged with the McGraw Electric Company to form the McGraw-Edison Electric Company. Edison was board chairman of the merged company until he retired in 1961. On January 18, 1937, President Roosevelt appointed Charles Edison as Assistant Secretary of the Navy, then as Secretary on January 2, 1940, Claude A. Swanson having died several months previously. Edison himself only kept the job until June 24, resigning to run his gubernatorial campaign.
During his time in the Navy department, he advocated construction of the large Iowa-class battleships, and that one of them be built at the Philadelphia Navy Yard, which secured votes for Roosevelt in Pennsylvania and New Jersey in the 1940 presidential election; in return, Roosevelt had BB-62 named the USS New Jersey. In 1940, he won election as Governor of New Jersey, running in reaction to the political machine run by Frank Hague, but broke with family tradition by declaring himself a Democrat. As governor, he proposed updating the New Jersey State Constitution. Although it failed in a referendum and nothing was changed during his tenure, state legislators did reform the constitution later. In 1948, he established a charitable foundation, originally called "The Brook Foundation", now the Charles Edison Fund. Between 1951 and 1969, he lived in the Waldorf-Astoria Hotel, where he struck up a friendship with Herbert Hoover, who also lived there. In 1962, Edison was one of the founders of the Conservative Party of New York State. In 1967, Edison hosted a meeting at the Waldorf-Astoria in New York that led to the founding of the Charles Edison Youth Fund, later the Charles Edison Memorial Youth Fund. Attending the meeting were Rep. Walter Judd (R-MN), author William F. Buckley, organizer David R. Jones, and Edison's political advisor Marvin Liebman. The name of the organization was changed in 1985 to The Fund for American Studies, in keeping with Edison's request to drop his name after 20 years of use.
- "GEDIS.pdf" (PDF). Retrieved September 23, 2007.
- Secretaries of the Navy, Naval Historical Center. Accessed August 6, 2007.
- Comegno, Carol. "Historian details the role politics played in battleship's creation", Courier-Post, January 6, 2000. Accessed May 27, 2007.
"Professor Jeffery Dorwart, of Rutgers-Camden said the ship was named after the state by President Franklin Roosevelt to repay a political debt to Charles Edison, the son of inventor Thomas Edison."
- "Charles Edison". Retrieved September 23, 2007.
- John D. Venable, Out of the Shadow: the Story of Charles Edison (Charles Edison Fund, 1978), p. 271.
- Niels Bjerre-Poulsen, Right Face: Organizing the American Conservative Movement 1945-65 (Museum Tusculanum Press, 2002), p. 143. (ISBN 978-8772898094)
- History, The Fund for American Studies
- "Charles Edison, 78, Ex-Governor Of Jersey and U.S. Aide, Is Dead". New York Times. August 1, 1969. Retrieved 2007-07-21. "Charles Edison, former Governor of New Jersey, ... Mr. Edison, who had been admitted to the hospital on Wednesday, was 78 years ..."
- Richard J. Connors, State Constitutional Convention Studies, #4: The Process of Constitutional Revision in New Jersey: 1940-1947 (New York: National Municipal League, 1970). OCLC 118700
- Venable, John D. (1978). Out of the Shadow: The Story of Charles Edison: a Biography. Charles Edison Fund. OCLC 118700.
- Biography for Charles Edison (PDF), New Jersey State Library
- Charles Edison, Findagrave.com
- New Jersey Governor Charles Edison, National Governors Association
- Charles Edison Fund: Includes a picture of Charles Edison
- The Pragmatic Populism of a Non-Partisan Politician: An Analysis of the Political Philosophy of Charles Edison
- Fund for American Studies - History

Succession:
- Assistant Secretary of the Navy (January 18, 1937 – January 1, 1940): preceded by Henry L. Roosevelt
- United States Secretary of the Navy (January 2 – June 24, 1940): preceded by Claude Augustus Swanson
- Democratic nominee for Governor of New Jersey: preceded by A. Harry Moore, succeeded by Vincent J. Murphy
- Governor of New Jersey (January 21, 1941 – January 18, 1944): preceded by A. Harry Moore, succeeded by Walter Evans Edge
- President of the National Municipal League: preceded by John G. Winant
<urn:uuid:2923e3a3-ecd0-47f4-a601-9cf7a915a78b>
CC-MAIN-2014-23
http://en.wikipedia.org/wiki/Charles_Edison
s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510271862.24/warc/CC-MAIN-20140728011751-00317-ip-10-146-231-18.ec2.internal.warc.gz
en
0.923309
1,339
2.53125
3
It is a seldom-known fact that the Fluke 893 ac-dc differential voltmeter can be used for measuring extremely high resistances from 10 megohms to 10⁶ megohms with a typical accuracy of 5%. This measurement method, however, requires some basic calculations on your part. The obvious advantage of the differential voltmeter is its capability of measuring extremely high resistances. Consult the Fluke 893 technical manual for initial switch settings and a more detailed explanation of its operation. Capacitance is that property of a circuit that produces an electrostatic field when two conducting bodies separated by a dielectric material have a potential applied to them. Capacitors are made by compressing an insulating material (dielectric) between two conductors (plates). The farad is the basic unit of capacitance. It is dependent upon the area of the plates, the distance between the plates, and the type of dielectric used. Electrically, one farad stores 1 coulomb of charge when 1 volt is applied. A coulomb (the amount of charge that passes a given point of a circuit in 1 second when a current of 1 ampere is maintained) is a large charge. Most capacitors are measured in millionths of a farad (microfarad), expressed as µF, or in one-millionth of a microfarad (picofarad), expressed as pF. Capacitors incur various losses as a result of such factors as resistance in the conductors (plates) or leads, current leakage, and dielectric absorption, all of which affect the power factor of the capacitor. Theoretically, the power factor of an ideal capacitor should be zero; however, the losses listed above cause the power factors of practical capacitors to range from near 0 to a possible 100%. The average power factor for good capacitors, excluding electrolytics, is 2% to 3%. Current leakage, which is an inverse function of frequency, is important only at the lower frequencies and becomes negligible at higher frequencies.
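The farad relationships described above (capacitance set by plate area, spacing, and dielectric; one coulomb stored per volt per farad) can be sketched numerically. The parallel-plate formula C = ε₀εᵣA/d and the example values below are illustrative only, not taken from the Fluke manual:

```python
# Illustrative capacitance arithmetic (example values, not from the manual).
EPSILON_0 = 8.854e-12  # permittivity of free space, farads per metre

def parallel_plate_capacitance(area_m2, gap_m, relative_permittivity=1.0):
    # C = eps0 * eps_r * A / d: bigger plates and a thinner gap give more farads
    return EPSILON_0 * relative_permittivity * area_m2 / gap_m

def charge_coulombs(capacitance_f, volts):
    # Q = C * V: one farad charged to one volt holds one coulomb
    return capacitance_f * volts

c = parallel_plate_capacitance(0.01, 0.001)  # 0.01 m^2 plates, 1 mm air gap
print(c / 1e-12, "pF")                       # ~88.5 pF
print(charge_coulombs(10e-6, 5.0), "C")      # a 10 uF capacitor at 5 V stores ~5e-05 C
```

Note the scale: practical capacitances fall in the µF and pF ranges quoted above, many orders of magnitude below a full farad.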
Dielectric absorption (sometimes referred to as dielectric viscosity) results in losses that produce heat. The effect of this type of loss is the same as resistance in series with the capacitor. You have probably learned the hard way that some capacitors can retain a charge long after the voltage has been removed. The electrical charge retained by capacitors in de-energized electronic circuits is, in many cases, sufficient to cause a lethal shock. Be sure you and those working with you consider this hazard before performing any type of maintenance on any electrical or electronic circuit and before making connections to a seemingly dead circuit. Use extreme caution prior to working on or near de-energized circuits that employ large capacitors. Be safe: discharge and ground all high-voltage capacitors and exposed high-voltage terminal leads by using only an authorized shorting probe, as shown in figure 1-11. Repeat discharge operations several times to make sure that all high-voltage terminations are completely discharged. It is of the utmost importance that you use only an authorized safety shorting probe to discharge the circuits before performing any work on them. An authorized general-purpose safety shorting probe for naval service application may be requisitioned using the current stock number listed in the ELECTRONICS INSTALLATION AND MAINTENANCE BOOK (EIMB), General NAVSEA 0967-LP-000-0100, Section 3, Safety Equipment. Certain electronic equipment is provided with built-in, special-purpose safety shorting probes. These probes are not considered general purpose. Use them only with the equipment for which they are provided and only in a manner specified by the technical manuals for the equipment. It is considered to be poor practice to remove them for use elsewhere.
<urn:uuid:aa348e95-2ba9-461e-a41f-9907fb0d2434>
CC-MAIN-2017-47
http://electriciantraining.tpub.com/14193/css/Capacitor-Measurements-29.htm
s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806586.6/warc/CC-MAIN-20171122122605-20171122142605-00363.warc.gz
en
0.923221
823
3.9375
4
By Rubén Caballero Chief Wireless Strategist, Keyssa Formerly VP Engineering at Apple, Inc 5G is the fifth generation of mobile internet connectivity and the feature set is promising: faster data download and upload speeds, wider coverage, more stable connections and lower latency for mission-critical operations (think autonomous vehicles, robotics, etc.). Of course, faster speeds are always desirable, but lower latency may be of higher importance for mission-critical applications. The processing time required for both sender and receiver using traditional wireless technologies cannot meet the latency requirements of mission-critical use cases. It's all about making better use of the radio spectrum and enabling far more devices to access the mobile internet at the same time. How is it possible to achieve faster speeds/higher bandwidth along with lower latencies? The magic comes in using a different part of the radio spectrum, starting at the 28GHz portion of the spectrum and beyond, commonly referred to as millimeter wave (commonly written as mmWave) technology. mmWave is exactly what it sounds like; the wavelengths are between 3 and 12.5mm long. More importantly, what are the applications that will require mmWave technology? Devices are Now Just Interfaces It wasn't long ago that our phones and computers were insular pieces of hardware, fully functional without the need to be connected to other devices. But no longer. Today's hardware acts as an interface to our music, photos, files, applications, and each other. As a standalone device, compute products have limited functionality. In a world of connected devices, it is the connection that defines performance, and 5G is the next generation of connectivity. mmWave 5G Connectivity Requires a 5G Connector Faster speeds/higher bandwidth and lower latencies will challenge traditional wireless device-to-device technologies, especially at the point-to-point device connection level.
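The 3–12.5 mm figure quoted above follows directly from wavelength = c / f. A quick sketch (the 24 GHz and 100 GHz endpoints here are chosen to bracket the commonly cited mmWave range, an assumption rather than a Keyssa specification):

```python
# Convert carrier frequency to free-space wavelength in millimetres.
SPEED_OF_LIGHT = 299_792_458  # metres per second

def wavelength_mm(freq_hz):
    # lambda = c / f, scaled from metres to millimetres
    return SPEED_OF_LIGHT / freq_hz * 1000

print(wavelength_mm(28e9))   # ~10.7 mm at the 28 GHz band mentioned above
print(wavelength_mm(24e9))   # ~12.5 mm
print(wavelength_mm(100e9))  # ~3.0 mm
```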
For example, smart cities populated with modular hotspots, each with its own camera, snapping into place and connected to the underlying infrastructure with a high-bandwidth, low-latency 5G connector. Or closer to home, imagine a 5G-enabled portable hotspot that, when placed on a window in your home, drives a 5G signal into your home network through a 5G mmWave connector. mmWaves from source to destination will ensure a high-bandwidth, low-latency connection not only to the hotspot, camera or smartphone, but to the other downstream devices touched by those devices. Keyssa's 5G Connector Designed for High-bandwidth, Low-latency Device Connections Keyssa's family of KSS104M mmWave 5G connectors is designed to meet the requirements of next-generation device connectivity. High-bandwidth, low-power, low-latency, embedded solid-state connectors support both the high-performance promise of 5G connectivity as well as the consumer desire for full wireless device connectivity. About Rubén Caballero Rubén has recently joined Keyssa as Chief Wireless Strategist. Prior to joining Keyssa, Caballero served as VP engineering at Apple for 14 years and was one of the founding leaders of the iPhone hardware team. He later took on an expanded role that included the iPad, Apple Watch, Macintosh and all other hardware products. During this tenure, Rubén founded, scaled and oversaw the Wireless Design & Technology Group, a world-class team of over 1,000 engineers in 26 countries operating across all of Apple's products, ecosystems and disciplines, including antenna design, RF architecture, wireless validation, EMC, field engineering, certification and regulation, and production testing. Rubén holds a BSEE from the École Polytechnique de Montréal; MSEE from New Mexico State University, and Honorary Doctorate from the University of Montréal.
<urn:uuid:945cf553-3777-453d-9fbf-e682171e96bd>
CC-MAIN-2020-16
http://www.keyssa.com/5gconnector/
s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371803248.90/warc/CC-MAIN-20200407152449-20200407182949-00070.warc.gz
en
0.931118
796
2.609375
3
A Gazetteer of Illinois In Three Parts Containing a General View of the State, a General View of Each County, and a particular description of each town, settlement, stream, prairie, bottom, bluff, etc.; alphabetically arranged - J. M. Peck. Gazetteers are wonderful, compact encyclopedias that provide detailed information about a specific place at a specific time. For family history researchers, gazetteers can hold the key to an ancestral mystery in their wealth of material relating to dates, settlements, county boundaries, population, and more. First compiled in 1834 and revised in 1837, this gazetteer provides a topographical and historical picture of Illinois in the early days. The author enthusiastically proclaims, "No state in the 'Great West' has attracted so much attention, and elicited so many enquiries from those who desire to avail themselves of the advantages of a settlement in a new and rising country, as that of Illinois; and none is filling up so rapidly with an emigrating population from all parts of the United States, and several kingdoms of Europe." This second edition was considered necessary due to the rapid changes of the state in just three years, including the creation of ten new counties. A new index of places increases the usefulness of this book. (1834), 2007, 5½x8½, paper, index, 378 pp.
<urn:uuid:ce21a0e0-801d-4baf-b6e8-75a4cae4bcad>
CC-MAIN-2023-40
https://heritagebooks.com/products/101-p0782
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510924.74/warc/CC-MAIN-20231001173415-20231001203415-00104.warc.gz
en
0.926148
299
3.21875
3
1. Symbol for peta-; phosphorus; phosphate; proline; product; poise; power; frequently with subscripts indicating location and/or chemical species. 2. Followed by a subscript, refers to the plasma concentration of the substance indicated by the subscript; permeability constant. 3. A blood group designation. See P blood group, Blood Groups Appendix. 4. Symbol for probability; when followed by the sign for "less than" (<), this indicates that a test statistic, a chi-square (χ
Stedman's Medical Dictionary © Wolters Kluwer Health. All rights reserved.
<urn:uuid:81421486-5f18-4960-aba3-30bb290e44a5>
CC-MAIN-2017-47
http://www.medilexicon.com/dictionary/64300
s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934805761.46/warc/CC-MAIN-20171119191646-20171119211646-00005.warc.gz
en
0.818017
142
2.734375
3
Due to the bundles of energy they exude and their overall happiness around human beings, dogs have established and maintained their position as the most popular animal to own as a pet. From puppies to adult dogs, canines act as perfect companions in the lives of individuals and families and serve as a worthy addition to any environment or household. Although their natural tendencies make dogs lively and excited in any environment, it remains important for owners to ensure their pet stays happy through a number of methods and provisions. Providing a healthy, balanced diet on a daily basis is arguably the most important form of care any owner can provide, ensuring a dog receives the nutrients and vitamins required to build or maintain the strength of muscles and bones, in addition to improving its reactions, joints and cognitive functions. This is particularly essential for puppies, which may lack nutrition after being parted from their mother and, given their fragile nature, require adequate dog food to ensure they receive the intake of vitamins and nutrients needed to grow and prosper. Exercise is an equally important component, as all dogs have considerable levels of energy and stamina and require regular walks or ventures outside of the household in order to stretch their legs and exert their playful nature. Leads and soft toys are therefore important purchases to ensure owners have the required provisions to take a canine for regular ventures outdoors, preventing problems in the joints or muscles through a lack of exercise. Soft toys can also be utilised within the household in a safe and controlled manner via training, which can be used to establish or improve the behavioural instincts of a puppy or dog.
Integrating a reward system via treats, which usually come in the form of dry dog food in a chewable bone or bar, also enhances oral health and sharpness of teeth which, combined with a balanced diet and exercise, culminates in a healthy and happy dog to cherish and love.
<urn:uuid:820fb5ab-06a7-4630-b29e-c4d940287682>
CC-MAIN-2023-14
https://www.articlewebdirectory.com/article/36667-how-to-keep-a-dog-happy.html
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945323.37/warc/CC-MAIN-20230325095252-20230325125252-00495.warc.gz
en
0.956917
485
2.765625
3
Spring is here and that means time to get outside and enjoy California's beauty. This year people are out in record numbers to see wildflowers and experience all the recreational opportunities that parks offer. CNN reported triple the usual number of visitors to Anza-Borrego Desert State Park. Numbers of recreational visits have surged at National Parks across the country, with 330 million visitors recorded last year, during the 100th anniversary of the National Park Service. In a review of research on how recreation affects wildlife, more than 93 percent of the articles reviewed indicated at least one impact of recreation on animals, the majority of which (59 percent) were negative. Hiking, for example, a common form of outdoor recreation in protected areas, can create a negative impact by causing animals to flee, taking time away from feeding and expending valuable energy. Among the negative impacts observed were decreased species diversity; decreased survival, reproduction, or abundance; and behavioral or physiological disturbance (such as decreased foraging or increased stress). These types of negative effects were documented most frequently for reptiles, amphibians, and invertebrates. Park managers often struggle to balance the need to protect wildlife with the importance of accommodating visitors in support of the many essential benefits nature provides people and the importance of spreading conservation awareness. UC California Naturalists can often be found on the trail helping to interpret nature and focusing on leave-no-trace principles in an effort to ensure we don't love nature to death. When out enjoying nature please stay on the trail, respect seasonal closures, minimize noise, do not approach animals, and reduce your driving speed – all recommended steps to minimize the impacts of recreation on wildlife. A light touch now ensures wildlife viewing for many years to come.
<urn:uuid:e1fcc7cb-1692-40f0-902d-18cf5207b5d9>
CC-MAIN-2020-29
https://ucanr.edu/News/?routeName=newsstory&postnum=24115
s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655886706.29/warc/CC-MAIN-20200704201650-20200704231650-00220.warc.gz
en
0.952715
334
3.34375
3
Chandrayaan 2 Successfully Leaves Earth Orbit, Begins Journey To The Moon After revolving around the earth for nearly 23 days since its launch from the spaceport at Sriharikota, Chandrayaan-2 began its journey to the moon early on Wednesday morning. The Indian Space Research Organisation (ISRO) confirmed that the mission module successfully performed the crucial manoeuvre and left the earth's orbit. If all goes as planned, Chandrayaan-2 will reach the moon's orbit by August 20. The final orbit-raising manoeuvre was carried out at 2:21 am, when scientists fired the spacecraft's liquid engine for about 1,203 seconds and put it in the Lunar Transfer Trajectory, which will eventually cause it to arrive at the moon. All the systems on board the spacecraft are normal, confirmed the premier space agency, which has been monitoring the health of the module continuously from the ISRO Telemetry, Tracking and Command Network (ISTRAC) in Bangalore with support from the Indian Deep Space Network (IDSN) antennas at Byalalu, near Bangalore. India's most challenging mission to date, Chandrayaan-2 was launched from the Satish Dhawan Space Centre in Sriharikota on July 22 using the powerful Geosynchronous Satellite Launch Vehicle GSLV Mk-III M1. It carries an orbiter, the lander Vikram and the rover Pragyan. While the orbiter will revolve around the moon for a year, the lander Vikram and rover Pragyan have a lifespan of 14 days after starting operations on the lunar surface. The next manoeuvre will be performed on August 20, when the spacecraft approaches the moon and its liquid engine is fired again to insert the spacecraft into a lunar orbit. As per the plan, the module will be placed in an orbit which passes over the lunar poles at a distance of 100 km from the moon's surface. Four more orbit manoeuvres will need to be performed for the purpose.
If all goes as planned, the Vikram lander will separate from the orbiter on September 2, when ISRO will begin its powered descent to make a soft landing on the lunar surface on September 7. It will be the first time Indian scientists attempt a soft landing on the moon, a feat achieved by only three nations so far, namely the US, Russia and China.
<urn:uuid:a3726369-58e3-49f7-adcd-39258f74f962>
CC-MAIN-2023-40
https://www.indianpolitics.co.in/chandrayaan-2-successfully-leaves-earth-orbit-begins-journey-to-the-moon/
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510528.86/warc/CC-MAIN-20230929190403-20230929220403-00682.warc.gz
en
0.933149
505
2.71875
3
Kuwait has a desert climate and is hot and dry. The average daily maximum temperature is 34°C; the average daily minimum is 20°C. Precipitation averages just over 110 millimeters per year, and December and January receive the most rain. Sandstorms occur in the summer. Vaccinations Kuwait: common risks There are infectious diseases in Kuwait to which you are not resistant. The right vaccinations can make you and your child or baby resistant to them. Our health experts have identified the most important health risks. View the list below of the most common risks and diseases in Kuwait and read which specific vaccinations Kuwait requires of you. Dengue fever is prevalent in this country; good protection against mosquito bites in the daytime is necessary. Vaccination against hepatitis A is recommended for all travelers to this country. Vaccination against hepatitis B depends on your personal situation. Please contact KLM Health Services for personal advice.
<urn:uuid:576278b4-a590-4366-8a72-c1d868805bbf>
CC-MAIN-2023-23
https://www.klmhealthservices.com/en/inentingen/kuwait/
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224655143.72/warc/CC-MAIN-20230608204017-20230608234017-00399.warc.gz
en
0.930662
200
2.78125
3
What Supplements Can Help With Low Egg Quality? Concerned about your egg health? Wondering what you can be doing to improve egg quality? We’ve got you covered. Quality and quantity are the two defining characteristics of egg health. To determine fertility, experts need to find out how many eggs the ovaries contain, and most importantly, the quality of the eggs. While egg quantity can’t be changed, research suggests that egg quality has the potential to be improved. Difference between egg quality and egg quantity There are two things experts look at to determine fertility in women: quality and quantity. At birth, the ovary contains around one to two million egg cells, and no more will ever be created. We’re born with all the eggs we will ever have, and the number of eggs in the ovary starts to decline until menopause. This refers to a woman’s egg quantity.
<urn:uuid:2d0bd378-70c5-4deb-9473-b03b4bebbc6e>
CC-MAIN-2023-14
https://natalist.medium.com/what-supplements-can-help-with-low-egg-quality-68e328427295?source=post_internal_links---------2----------------------------
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945333.53/warc/CC-MAIN-20230325130029-20230325160029-00049.warc.gz
en
0.952519
192
3.078125
3
FOR IMMEDIATE RELEASE: April 12, 2021 NEW CURRICULUM LESSON CHALLENGES CATHOLIC SCHOOL STUDENTS TO EXTRACT THEIR OWN DNA The students at Our Lady of Perpetual Help Catholic Academy, located at 111-10 115th Street in the South Ozone Park section of Queens, will have a chance to extract their own DNA as part of a hands-on lesson and activity, this Wednesday, April 14, 2021, at 12:45 p.m. Students will use soap to help dissolve the cell membrane and use salt to break up protein chains that hold nucleic acids together, thus releasing the DNA strands. The challenge is for the students to extract their own DNA which will be floating on top of the solution. The students have already been introduced to Captain Barrington Irving’s Regenerative Medicine Expedition, to the Wake Forest Baptist Medical Center in Raleigh, North Carolina, where he learned about the role of DNA (deoxyribonucleic acid). Students of grades 3, 4, and 5 will be participating in the hands-on activity. The materials to be used as part of this experiment include dish detergent, table salt, rubbing alcohol, food coloring, and plastic cups, along with a number of tools including gloves, a magnifying glass, a wooden craft stick, and a measuring cup. The lesson is part of The Flying Classroom STEM supplemental curriculum recently incorporated in schools within the Diocese of Brooklyn. Captain Irving, the first Black person to pilot a plane around the world solo, and at the time the youngest, is the founder of The Flying Classroom program. Members of the media are invited to attend and asked to notify the Diocesan Press Office of their attendance.
With Azeezat Johnson

Hip hop is an art form rooted in resistance. It was created and remixed by young Black Americans who sought to take up space within the brutally anti-Black inner city of the Bronx, New York. From the aerosol spray cans used for graffiti, the mouth manipulations necessary for beatboxing, and the 'scratching' and remixing of a family member's record collection, hip hop was undeniably founded as a critical expression of inner-city Black experiences. Young people used whatever resources they could find to create their very own Black sound and culture. Even as corporations try to commercialise the hip hop 'sound', the culture is grounded in this history of claiming space (in spite of violent systemic inequalities). It continues to evolve alongside other forms of artistic expression in countries like Nigeria, Brazil, France and the UK. It has inspired a generation of Muslim artists and performers who use spoken word to challenge Islamophobia and social injustice (both within and outside of the Muslim community).

The Black Muslim hip hop duo Poetic Pilgrimage – comprising Muneera Rashida and Sukina Abdul Noor – have been trailblazers in the UK scene. Established in 2002, Poetic Pilgrimage demand to be heard on their own terms, and no one else's. They featured in an Al Jazeera documentary directed by Mette Reitzel in 2015, and continue to inspire young spoken word artists, rappers and hip hop enthusiasts to this day.

The duo as depicted by their friend: therapist and photographer Wasi Daniju.
COLORADO CITY, TX (FAYETTE COUNTY) COLORADO CITY, TEXAS (Fayette County). Colorado City, on the west bank of the Colorado River directly opposite La Grange in central Fayette County, never progressed beyond the plat stage. The town was designed in the late 1830s by John W. S. Dancy and associate promoters to rival the promotion of La Grange by John H. Moore. Elaborate plans called for the development of 5,000 acres with 156 blocks of residential and commercial property. The proposed city was unanimously selected by the Congress as the capital of the Republic of Texas, but President Sam Houston vetoed the proposal because he wanted the capital to remain in Houston. When Mirabeau B. Lamar succeeded Houston, he selected the site of what is now Austin as the capital, and the plan for Colorado City languished. One of the frequent floods along the Colorado River made the plan unfeasible, and most of the area was later included in the decentralized community of Bluff. The following, adapted from the Chicago Manual of Style, 15th edition, is the preferred citation for this article."COLORADO CITY, TX (FAYETTE COUNTY)," Handbook of Texas Online (http://www.tshaonline.org/handbook/online/articles/hvc65), accessed July 13, 2014. Uploaded on June 12, 2010. Published by the Texas State Historical Association.
Understanding appropriate use of media elements of text and graphics in multimedia design

In this workshop: We will examine the role the media elements of text and graphics play in a multimedia title. We will consider how these elements have been used in the work and how they enhance or detract from achieving the intended purpose for the intended audience. We will consider whether the media elements used are the most appropriate or whether other choices should have been made.

As part of assessment series I, you will begin working on assessment template 2.docx using your chosen multimedia work in this workshop. You will need to submit this template to your tutor at the beginning of the next workshop.

Your tutor will present a multimedia work to you in this workshop. You will be asked to consider the use of text and still images. For the chosen work you will:

1. State the URL/title of the product
2. State the genre, category and purpose of the application
3. State the intended audience
4. Identify some objectives
5. In critically analysing the work, what questions should you ask about the use of text in this work?
6. In critically analysing the work, what questions should you ask about the use of still images in this work?

After this initial exercise you will complete assessment template 2.docx for your chosen work.
Minorities during WWII?

What was the impact of WWII on minorities? By minorities I mean women, blacks, Mexican-Americans, and Japanese-Americans (I think that's most of the minorities).

Asked By: Bill N - 3/19/2008

Best Answer - Chosen by Asker
I wish you would not refer to women as a 'minority'; women make up 50% of the population, we are not a minority...
Answered By: Louise C - 3/20/2008

Additional Answers (3)

One thing you need to know is that American Indians represented a larger percentage of soldiers (as a percentage of their total population) than any other ethnic group.
Answered By: robe - 3/19/2008

I think the greatest impact was on the 'Old School' military leaders' way of thinking: first they said integration could not work. But after a few brave individual officers allowed minorities to take on greater roles and succeed, the minorities were able to push for a larger role in American society as a whole.
Answered By: wvpackerfan68 - 3/19/2008

During those years minorities were not treated well, especially blacks. They were segregated and few were promoted in the military. Their service to their country was taken for granted...
Answered By: bigcasino - 3/19/2008
Launch Team Focuses on Unique Needs of Curiosity

About the size of a small SUV and weighing as much as some cars, the Mars Science Laboratory "Curiosity" is being asked to conduct the most intensive examination of the surface of the red planet ever attempted. It carries cameras, a robotic arm, a drill and even a laser to vaporize bits of rock at a distance.

That's too much work for solar panels to power, so NASA is fueling the rover with a plutonium-powered battery of sorts called a multi-mission radioisotope thermoelectric generator, or MMRTG. Loaded with 10 pounds of the material, the power source is expected to generate electricity for a mission lasting at least two Earth years.

"It requires a fancy power supply in order to do the job," said Dr. Pam Conrad, deputy principal investigator for MSL. "This enables us to make measurements all day, every day, at night, in the winter."

Before researchers get a taste of groundbreaking research about Mars, the launch team at NASA's Kennedy Space Center in Florida is focusing on its responsibility to safely launch the spacecraft and its power source. Even if there were an accident and a release of plutonium, a possibility officials put at three-tenths of one percent, the material would most likely remain on federal property at either Kennedy or Cape Canaveral Air Force Station, the mission's launch site.

Preparation has nevertheless been the foremost thought for NASA officials, and the launch team will be reinforced by officials from the Lawrence Livermore National Laboratory in California, specifically the National Atmospheric Release Advisory Center, or NARAC, a team that models plumes to predict radiation hazards. Ron Baskett, an atmospheric scientist for NARAC, said a network will be in place on launch day to feed critical information to the Livermore lab to generate the models if there is a release of radioactive material.
Even the network to collect that data will be strengthened a bit over the usual launch day measures. The National Weather Service's Melbourne office will focus some of its instruments on Cape Canaveral Air Force Station on the day of launch, now targeted for Nov. 25. For its part, NASA and the Air Force's 45th Space Wing have 46 towers to collect wind data and two more detailed instruments that collect information about conditions more than 20 miles above Earth.

"We believe we have the right team put together, with the right people and all the control and functions that you might expect for this type of launch," Brisbin said.

Officials expect a normal launch day, culminating in an Atlas V lifting Curiosity off the Earth and onto a path to Mars.

"If you see a plume, it does not mean there's been an accident," said Dr. Frank Merceret, director of research for Kennedy's weather office. Most launches produce a plume of some sort, he said, and even an accident would not necessarily mean any radiation had leaked from the spacecraft.

NASA has used the power units 26 times in the past, including on the Apollo moon landings and the Viking landers on Mars. They have also powered the Pioneer and Voyager spacecraft, along with the Galileo mission to Jupiter. More recently, the Cassini mission to Saturn and the New Horizons mission to Pluto both run on RTGs. All were launched safely.

NASA's John F. Kennedy Space Center
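The article notes the MMRTG must power Curiosity for at least two Earth years, and plutonium-238's long half-life makes that comfortable. The sketch below models only the fuel's radioactive decay; the ~110 W beginning-of-mission electrical output is an assumed figure, and real units decline somewhat faster because the thermocouples also degrade over time.

```python
PU238_HALF_LIFE_YEARS = 87.7      # physical half-life of plutonium-238
INITIAL_ELECTRIC_WATTS = 110.0    # assumed beginning-of-mission MMRTG output

def mmrtg_power(years_after_launch: float) -> float:
    """Electrical output if it tracked the Pu-238 decay curve alone."""
    fraction_left = 0.5 ** (years_after_launch / PU238_HALF_LIFE_YEARS)
    return INITIAL_ELECTRIC_WATTS * fraction_left

for year in (0, 2, 14):  # launch, prime mission, a long extended mission
    print(f"year {year:2d}: {mmrtg_power(year):5.1f} W")
```

Even fourteen years in, fuel decay alone costs only about a tenth of the output, which is why RTG-powered missions such as Voyager have operated for decades.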
Kodansha's Furigana Japanese Dictionary is an invaluable tool for anyone with an interest in the Japanese language. It has been edited with the needs of English-speaking users in mind. This book is in the Textbook Reserves Collection and can be checked out for 4 hours at a time from the main Circulation Desk.

Gives the learner an understanding of how kanji are used in contemporary Japanese by providing instant access to a wealth of information on the meanings, readings and compounds for frequently used kanji. This book is in the Textbook Reserves Collection and can be checked out for 4 hours at a time from the main Circulation Desk.

This in-depth kanji dictionary contains about 5,000 characters and over 70,000 compound words. Due to this comprehensive coverage, it continues to be an excellent companion for reading and writing in Japanese.

Contains roughly 10,000 synonyms, 26,000 examples of usage, approximately 2,600 notes and columns that explain the differences and usages between synonyms, and an index that makes it possible to look up words in Japanese as well as English.

A six-volume survey of modern Asia. Presents more than 2,600 alphabetically arranged entries covering such subjects as countries, cities, regions, natural features, religions, social issues, languages, people, events, customs, politics, and economics.

A multidisciplinary collection of encyclopedias, dictionaries, handbooks, and other specialized reference sources. Includes the Encyclopedia of Modern China, Countries and Their Cultures, the Encyclopedia of Modern Asia, and more.

This work offers information on contemporary Japanese society, history, art, and culture and also contains data on the economy, government, and politics of Japan. In addition to the main entries and over 4,000 color illustrations, there are nearly 100 feature articles and pictorial essays. Please note this book can only be used in the library.
Offering extensive coverage, this Encyclopedia is a new reference that reflects the vibrant, diverse and evolving culture of modern Japan, spanning from the end of the Japanese Imperialist period in 1945 to the present day. Entries cover a broad spectrum of topics from literature and architecture to politics and technology. The Cambridge Encyclopedia of Japan is the essential reference to all facets of Japan past and present. Authoritative and wide-ranging in scope, the Encyclopedia is also filled with the facts, figures and general data on Japan. This is a complete and fascinating picture of Japan and its people: its rich inherited traditions and culture, its modern complexities and its tantalizing future role in the world. This indispensable tool enables scientists and translators with only a basic knowledge of Japanese to quickly locate and evaluate pertinent information, tapping the large body of chemical literature that at present is mainly inaccessible to non-Japanese readers. The dictionary is supplemented by valuable background information on the Japanese language, chemical industry and chemical literature. An Introduction to Japanese Society remains essential reading for students of Japanese society. This book explores the breadth and diversity of Japanese society, with chapters covering class, geographical and generational variation, work, education, gender, minorities, popular culture and the establishment. This Companion provides a comprehensive overview of the influences that have shaped modern-day Japan. Spanning one and a half centuries from the Meiji Restoration in 1868 to the beginning of the twenty-first century, this volume covers topics such as technology, food, nationalism and rise of anime and manga in the visual arts. The Cambridge History of Japanese Literature provides, for the first time, a history of Japanese literature with comprehensive coverage of the premodern and modern eras in a single volume. 
The book is arranged topically in a series of short, accessible chapters for easy access and reference, giving insight into both canonical texts and many lesser known, popular genres, including folk literature, popular literature, women's literature, manga, and everything in between. The Encyclopedia of Japanese Business and Management is the definitive reference source for the exploration of Japanese business and management. Reflecting the multidisciplinary nature of this field, the Encyclopedia consolidates and contextualizes the leading research and knowledge about the Japanese business system and Japanese management thought and practice.
active solar energy system

Both of these systems collect and absorb solar radiation, then transfer the solar heat directly to the interior space or to a storage system, from which the heat is distributed. If the system cannot provide adequate space heating, an auxiliary or back-up system provides the additional heat. Liquid systems are more often used when storage is included, and are well suited for radiant heating systems, boilers with hot water radiators, and even absorption heat pumps and coolers. Both air and liquid systems can supplement forced air systems. To learn more about these two types of active solar heating, see the following sections:

Economics and other benefits of active solar heating systems

Active solar heating systems are most cost-effective when they are used for most of the year, that is, in cold climates with good solar resources. They are most economical if they are displacing more expensive heating fuels, such as electricity, propane, and oil heat. Some states offer sales tax exemptions, income tax credits or deductions, and property tax exemptions or deductions for solar energy systems.

The cost of an active solar heating system will vary. Commercial systems range from $30 to $80 per square foot of collector area, installed. Usually, the larger the system, the less it costs per unit of collector area. Commercially available collectors come with warranties of 10 years or more, and should easily last decades longer. The economics of an active space heating system improve if it also heats domestic water, because an otherwise idle collector can heat water in the summer.

Heating your home with an active solar energy system can significantly reduce your fuel bills in the winter. A solar heating system will also reduce the amount of air pollution and greenhouse gases that result from your use of fossil fuels such as oil, propane, and natural gas for heating, or that may be used to generate the electricity that you use.
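The installed-cost figures quoted above ($30 to $80 per square foot of collector) translate into a quick band estimate; the 100-square-foot array below is a made-up example, not a recommendation.

```python
def installed_cost_range(collector_area_sq_ft: float,
                         low_usd_per_sq_ft: float = 30.0,
                         high_usd_per_sq_ft: float = 80.0) -> tuple[float, float]:
    """Rough installed-cost band from the per-square-foot figures above."""
    return (collector_area_sq_ft * low_usd_per_sq_ft,
            collector_area_sq_ft * high_usd_per_sq_ft)

low, high = installed_cost_range(100.0)  # hypothetical 100 sq ft collector array
print(f"estimated installed cost: ${low:,.0f} to ${high:,.0f}")
```

Keep in mind the caveat that larger systems cost less per unit of collector area, so the high end of the band is pessimistic for big arrays.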
Selecting and sizing a solar heating system

Selecting the appropriate solar energy system depends on factors such as the site, design, and heating needs of your house. Local covenants may restrict your options; for example, homeowner associations may not allow you to install solar collectors on certain parts of your house (although many homeowners have been successful in challenging such covenants).

The local climate, the type and efficiency of the collector(s), and the collector area determine how much heat a solar heating system can provide. It is usually most economical to design an active system to provide 40%–80% of the home's heating needs. Systems providing less than 40% of the heat needed for a home are rarely cost-effective, except when using solar air heater collectors that heat one or two rooms and require no heat storage. A well-designed and insulated home that incorporates passive solar heating techniques will require a smaller and less costly heating system of any type, and may need very little supplemental heat other than solar.

Besides the fact that designing an active system to supply enough heat 100% of the time is generally not practical or cost-effective, most building codes and mortgage lenders require a back-up heating system. Supplementary or back-up systems supply heat when the solar system cannot meet heating requirements. They can range from a wood stove to a conventional central heating system.

Controls for solar heating systems

Controls for solar heating systems are usually more complex than those of a conventional heating system, because they have to analyze more signals and control more devices (including the conventional, backup heating system). Solar controls use sensors, switches, and/or motors to operate the system. The system uses other controls to prevent freezing or extremely high temperatures in the collectors.
The heart of the control system is a differential thermostat, which measures the difference in temperature between the collectors and the storage unit. When the collectors are 10°–20°F (5.6°–11°C) warmer than the storage unit, the thermostat turns on a pump or fan to circulate water or air through the collector to heat the storage medium or the house.

The operation, performance, and cost of these controls vary. Some control systems monitor the temperature in different parts of the system to help determine how it is operating. The most sophisticated systems use microprocessors to control and optimize heat transfer and delivery to storage and zones of the house.

It is possible to use a solar panel to power low-voltage, direct current (DC) blowers (for air collectors) or pumps (for liquid collectors). The output of the solar panels matches available solar heat gain to the solar collector. With careful sizing, the blower or pump speed is optimized for efficient solar gain to the working fluid. During low sun conditions the blower or pump speed is slow, and during high solar gain, they run faster. When used with a room air collector, separate controls may not be necessary. This also ensures that the system will operate in the event of a utility power outage. A solar power system with battery storage can also provide power to operate a central heating system, though this is expensive for large systems.

Building codes, covenants, and regulations for solar heating systems

Before installing a solar energy system, you should investigate local building codes, zoning ordinances, and subdivision covenants, as well as any special regulations pertaining to the site. You will probably need a building permit to install a solar energy system onto an existing building. Not every community or municipality initially welcomes residential renewable energy installations.
Although this is often due to ignorance or the comparative novelty of renewable energy systems, you must comply with existing building and permit procedures to install your system. The matter of building code and zoning compliance for a solar system installation is typically a local issue. Even if a statewide building code is in effect, it's usually enforced locally by your city, county, or parish. Common problems homeowners have encountered with building codes include the following:

Installing and maintaining your solar heating system

How well an active solar energy system performs depends on effective siting, system design, and installation, and the quality and durability of the components. The collectors and controls now manufactured are of high quality. The biggest factor now is finding an experienced contractor who can properly design and install the system.

Once a system is in place, it has to be properly maintained to optimize its performance and avoid breakdowns. Different systems require different types of maintenance, but you should figure on 8–16 hours of maintenance annually. You should set up a calendar with a list of maintenance tasks that the component manufacturers and installer recommend.

Most solar water heaters are automatically covered under your homeowner's insurance policy. However, damage from freezing is generally not. Contact your insurance provider to find out what its policy is. Even if your provider will cover your system, it is best to inform them in writing that you own a new system.
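The differential-thermostat rule described in the controls section (start circulation once the collector runs 10°–20°F warmer than storage) is naturally expressed as a hysteresis function. This is a minimal sketch; the exact placement of the on and off thresholds within that band is an assumption, not a manufacturer specification.

```python
def pump_should_run(collector_temp_f: float,
                    storage_temp_f: float,
                    currently_on: bool = False,
                    on_delta_f: float = 20.0,
                    off_delta_f: float = 10.0) -> bool:
    """Differential thermostat with hysteresis.

    Start the pump/fan once the collector is on_delta_f warmer than
    storage; stop it only when the difference drops below off_delta_f.
    The gap between the two thresholds keeps the pump from cycling
    rapidly around a single set point.
    """
    delta = collector_temp_f - storage_temp_f
    if currently_on:
        return delta > off_delta_f
    return delta >= on_delta_f

print(pump_should_run(140.0, 115.0))                     # 25°F difference: start
print(pump_should_run(122.0, 115.0, currently_on=True))  # 7°F difference: stop
```

A real controller would add the freeze- and over-temperature protections mentioned above on top of this basic on/off decision.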
Medina During the Time of the Prophet

The preponderance of epithets and criticisms of Jews found in the Qur'an and the early literature relate to friction between three Jewish tribes and the Muslim community in Medina during the time of the Prophet (peace be upon him) and the three wars initiated by the Meccans against Medina's Muslim community.

Prior to the Hijrah, the Jewish tribes in Medina allowed the pagan Arab tribe of Banu Qaylah to settle on uncultivated land around the small desert community. The Banu Qaylah was divided into two major clans – the Aws and the Khazraj. Various developments in the latter part of the 6th century weakened the Jewish community's hold on Medina, and the Banu Qaylah tribe became dominant. However, hostility among the Arab clans resulted in continual fighting. The Arab clans had allies among the Jewish tribes, who aided them in their conflicts. In a decisive battle fought just before the Prophet's migration, the victorious Aws clan became the dominant authority in Medina.

Ten years after Muhammad's first revelation on Mount Hira', a delegation consisting of representatives from Medina's Arab clans, including the Aws and Khazraj, invited the Prophet to come to Medina. They pledged to protect Muhammad if he would come as a neutral outsider and serve as chief arbitrator for the tribal community, which had been fighting with each other for decades. A few months prior to the Prophet's migration, Jewish converts to Islam from Medina also invited Muhammad to Medina. It is estimated that seventy Jewish men and women from Medina accepted Islam while performing pilgrimage in Mecca. The significance of these events was the fact that Muhammad was esteemed and trusted by both Arabs and Jews, and that Islam's message was able to unite women and men from different regions, clans, social classes, and religious beliefs.
In 622 c.e., the Prophet migrated to Medina amid its contentious political environment, possessing a symbolic presence that rivaled Medina's embattled leaders. Accordingly, his arrival was perceived as a threat to those in power, as well as to those benefiting from the status quo. The Jewish tribes were concerned about the Prophet's intentions, and were divided in their recognition of the similarities of Islam's monotheistic message with their own scriptures. Some of Medina's Jewish leaders had spoken out before the Prophet's arrival opposing his claim to being the "final" Prophet, and questioning elements of the Qur'an which were thought to contradict elements of the Hebrew scriptures. The amalgam of political, social, and religious conflicts in Medina was destabilizing and fraught with danger.

Given the terms of his invitation, the Prophet proceeded to create peace in the community upon his arrival. His efforts resulted in a tripartite agreement between Medina's Muslim converts and those who had migrated from Mecca, the Arabs from the Khazraj and Aws clans, and Medina's Jewish tribes. The agreement, known as the Sahifah in Arabic, is better known today as the Constitution of Medina.(1)

The Sahifah was based on an inclusive conception of the rule of law, with two basic principles: the safeguarding of individual rights by impartial judicial authority, and the principle of equality before the law. The terms of the agreement recognized the diverse ethnic, religious and secular affiliations of the signatories (Jews, Muslims, Medina natives, Meccan immigrants, the Arab Aws and Khazraj clans) and did not demand conversion to Islam. The community created by the Sahifah became known as the ummah, a term describing the totality of individuals living in Medina who were bound to one another by the Sahifah. As a rudimentary basis of civil law, the primary purpose of the Sahifah was the resolution of conflicts without violence.
Accordingly, blood feuds were abolished, and all rights were given equally to Medina's citizens, regardless of religion, ethnicity or social position. The salient principles of the visionary Constitution of Medina included:

1. The signatories formed a common ummah, or nationality.
2. The signatories were to remain united in peace and in war.
3. If any of the parties were attacked by an enemy, the others would defend it with their combined forces.
4. None of the parties would give shelter to the Quraysh of Mecca, or make any secret treaty with them.
5. The various signatories were free to profess their own religion.
6. Bloodshed, murder and violence were forbidden.
7. The city of Medina was to be regarded as sacred, and any strangers who came under the protection of its citizens were to be treated the same as Medina's citizens.
8. All disputes were to be referred to Prophet Muhammad (PBUH) for arbitration and decision.

Many Medinans converted to the faith of the new Meccan immigrants, particularly amongst the pagan and polytheist tribes. However, the Jews were wary, although there were a few converts to Islam. The growing Muslim influence in Medina was not readily accepted by those among the Jewish tribes whose influence had waned in the face of the Prophet's growing authority. Their opposition was less about theological disagreement than political alliances and their attendant economic benefits. Many Jews in Medina had close links with the chief of the Khazraj clan, `Abd Allah ibn Ubayy ibn Salul. Ibn Salul was partial to the Jews, and would have been Medina's prince if not for Prophet Muhammad's arrival. A significant commercial dimension further contributed to Medina's turbulent socio-political scene. The Prophet's Muslim followers had established a tax-free marketplace that grew in competition with Medina's existing Arab-controlled market.
In Mecca, the ruling Quraysh tribe, who had clashed previously with the Prophet and his followers in Mecca, began to view Medina’s Muslims as a serious threat. Mecca was the epicenter of major trade routes crisscrossing the Hijaz, and also the central place of worship for Arabia’s pagan religious deities. Religion and trade were the sources of Quraysh power and tribal influence across the Hijaz, and they could not allow their dominant position to be undermined by Medina’s Muslims. In 624 c.e., the conflicts erupted into three wars between the Meccans and the Muslims. In the Battle of Badr, the small, ill-equipped Muslim army, led by Prophet Muhammad (PBUH), defeated the Quraysh army outside Medina. A second war, the Battle of Uhud, was fought in 625 c.e., and resulted in heavy Muslim losses, including injury to the Prophet. A third war, known as Khandaq or Trench, was fought in 627 c.e. when the Meccans, allied with Arab nomadic tribes and two exiled Jewish tribes, launched a final push to defeat the Muslims. After the first Battle of Badr, the Muslims’ relations with the Jewish tribes began to deteriorate. An isolated street fight between a few Muslims and Jews from the Banu Qaynuqa`, a Jewish tribe openly hostile toward Prophet Muhammad, escalated in a series of volatile confrontations. The Muslims marched towards the stronghold of the Banu Qaynuqa` and besieged them for a fortnight, whereupon they surrendered on condition that their lives and property be spared. The Prophet accepted the Banu Qaynuqa` terms, and expelled the tribe from Medina. The Khazraj clan chief `Abd Allah ibn Salul, whose power had diminished considerably with the rise of the Prophet’s influence, conspired with the exiled Jewish tribe to assist the Meccans in their second war with the Muslims, the Battle of Uhud fought in 625 c.e. Ibn Salul’s clan and the Jewish tribe had formed a coalition known as the Hypocrites. 
The conspiracy against the Muslims became evident when Ibn Salul deserted the Muslim army with 300 of his followers. The reduction in the Muslim army coalition was a severe setback, and they suffered defeat due to the Meccans’ superior tactics and the betrayal of the Khazraj clan and the Jewish tribe. This defeat emboldened Ibn Salul to conspire with another Jewish tribe, Banu al-Nadir, to kill the Prophet after he had rendered a decision against them in a dispute with another Jewish tribe, the Banu Qurayzhah. The Prophet escaped the attempt on his life, and ordered the Banu al-Nadir tribe to leave Medina. When the order was defied, the Muslims laid siege to their stronghold for two weeks, after which the Banu al-Nadir surrendered and were expelled from Medina. Although the Muslims suffered defeat at Uhud, their efforts to spread Islam were somewhat successful. With the growth of Islam and the success of Medina’s Muslim marketplace, the citizens of Medina began to enjoy status and prosperity rivaling Mecca. The Quraysh were losing revenue as desert trade caravans rerouted from Mecca to the Muslims’ tax-free marketplace in Medina, and they viewed the growth of Islam and its monotheistic message with increasing alarm. To counteract the growing influence of Medina’s Muslims, the Quraysh sought and received support from their nomadic tribal allies and the two Jewish tribes who had been expelled from Medina. The third war began in 627 c.e., designated as The Battle of the Confederates by the Arabs. Led by the Quraysh, the Confederates attacked Medina with a large army and laid siege to the city. The Prophet and his Muslim army defended the city with roughly three thousand men. By digging a strategic trench around the vulnerable parts of Medina, the Muslims were able to hold the Confederates at bay. 
In the midst of hostilities, which subsequently became known as the Trench War, the Quraysh sought an alliance with the Banu Qurayzhah, a Jewish tribe allied with the Prophet yet sympathetic with the Meccans. After being persuaded by the Meccans, the Banu Qurayzhah agreed to support the Quraysh. When the Meccan-Qurayzhah plot was discovered, the Prophet stationed people to guard against a surprise attack from the Qurayzhah. The Confederates were unable to continue their siege without the support of the Jewish tribe, and they withdrew. The Muslim army proceeded to lay siege to the Banu Qurayzhah fortress. The tribe offered to surrender on the condition that one of their former allies determine their fate according to Jewish law. The Prophet appointed Sa`d bin Mu`adh, the chief of the Aws clan, to be the arbiter. Sa`d judged the Banu Qurayzhah according to the Torah, for which the punishment for treason was death. The Aws chief’s decision called for the execution of all fighting-age male members of the Banu Qurayzhah tribe, and for their women, children and elders to be expelled from Medina. In passing the sentence, Sa`d reminded the Banu Qurayzhah that if they had succeeded with their conspiracy, all Muslims in Medina would have been killed, including Prophet Muhammad. As stories of the Trench War spread through the region, Prophet Muhammad came to be held personally responsible for the execution of the Jewish tribe’s male combatants. The tale, which became known as “the massacre of Medina”, has been employed to support the portrayal of Islam and the Prophet as anti-Semitic. The falsehood of the accusation is self-evident, given the fact that the executions were the result of sentencing by a non-Muslim tribal chief according to the Torah, and under terms of surrender requested by the Banu Qurayzhah themselves. 
In fact, given the severity of the circumstances, the way in which the Prophet dealt with this issue demonstrated his deep commitment to justice and fairness under law. The critical references to Jews in the Qur’an that are alleged to be anti-Semitic are embedded in secular commentaries pertaining to Muslim-Jewish relations in 7th century Medina. Although the Jewish tribes and Arab clans were signatories with the Muslims to the Sahifa, three of the Jewish tribes abandoned the Charter and engaged in conspiracies with the Meccans to defeat the Muslims and subvert the Prophet’s authority. In view of the fact that the Jewish tribes were comprised of Arabs who had converted to Judaism, and were not Israelites, their shifting allegiances conformed with Arab tribal tactics during wartime. The conflicts erupted into war for the same reasons that underlie all wars — money and power. The basic issues driving these conflicts were secular, involving commerce and political relations among the Muslim, Jewish and Arab tribes of the Hijaz. Still, these conflicts had religious connotations. Religious authority and political power were conjoined in virtually all ancient civilizations. The Meccans led the Arab pagan and polytheistic tribes in the Hijaz, and they viewed Muhammad and Islam’s monotheistic message as threats to their dominant religious position in the region. Medina’s Jews, on the other hand, did not recognize the prophethood of Muhammad, nor did they accept the Divine revelations received by Him, even though they were congruent with their own Hebrew scriptures. In summary, the conflicts with the errant Jewish tribes cannot be interpreted as evidence of religious persecution or anti-Semitism on the part of Prophet Muhammad and his Muslim followers, and any attempt to do so contravenes historical facts. 
The passages in the Qur’an critical of Jews and Medina’s Jewish tribes have been extracted out of context and manipulated to stereotype all Jews, and to characterize the Qur’an and Prophet Muhammad as anti-Semitic. However, as has been shown, the critical references cannot be understood out of their historical context, nor can they be employed to stereotype the contemporary Jewish diaspora or applied to current political events. To be continued… This series of articles is published with kind permission from the author.
Eco-schools scheme helps to 'green' schools
13th March, 2009
More and more educational establishments are becoming environmentally friendly, thanks to the Eco-Schools scheme. Is this our road map to sustainability? It’s not quite the environmental equivalent of the Normandy landings, but it might yet come to pass – if, that is, the children, teachers, NGOs and local authorities whose inspired action is currently transforming schools throughout the UK into models of sustainable best practice and learning are allowed to have their way. In the past two years, Eco-Schools – developed in 1994 by not-for-profit NGO the Foundation for Environmental Education and administered in Scotland by Keep Scotland Beautiful, in England by ENCAMS, which runs the Keep Britain Tidy campaign, in Wales by Keep Wales Tidy and in Northern Ireland by Tidy Northern Ireland – has seen a massive spike in schools signing up to use its roadmap to sustainability. It offers guidance on a raft of initiatives, funds and laws on nine key themes, ranging from litter and recycling to transport, healthy living, energy and water conservation. From 4,000 schools in 2006, some 11,000 are now enrolled in the scheme, more than half of UK schools. All over the country, schools are evaluating their carbon emissions, upgrading lighting, water taps and cisterns, changing computers from old cathode ray tubes to energy-efficient flat-screens, and, where possible, integrating the learning experience into the curriculum. While some schools have only just opened the hatch to their uninsulated lofts, or are stuck in areas still wedded to incineration, others are well on their way to becoming carbon neutral, with their own recycling and compost systems, transport plans, rainwater-harvesting systems and solar panels. 
If they successfully implement energy-reduction measures, most schools can save as much as 10 per cent on utility bills – water and heating – which, even for a small primary school, can run into £30,000 a year. With decreasing budgets and increasing costs, this is money they sorely need: UK schools spend approximately £450 million on energy each year, three times as much as they do on books, about 3.5 per cent of their budgets.
Grow and eat to sustainability
Naturally, too, given the huge cost both to the NHS in diet-related disease and the effect food production has on climate change, school dinners have become a big feature of many schools’ drive towards sustainability. At St Peter’s CE Primary in Wem, Shropshire, the school’s central quad has been given over to raised beds, from which herbs and vegetables are used in school dinners, along with fresh eggs from two chickens. The school has a cooking club, and pupils get to cook with and eat the produce grown in the school growing area. School meal take-up has gone up by 17 per cent, and inspired the local authority to roll out the Food for Life ‘silver menu’, which favours locally and organically sourced food, across the county. In London, Merton Parents for Better Food in Schools has set up a farm-twinning scheme with Rushall Farm in Berkshire, which will see reciprocal visits between the schools and farms to talk about growing, and give the children a chance to witness the harvest. ‘It is amazing what can be achieved when the whole community pulls together,’ says Jackie Schneider, chair of Merton Parents. 
‘It was the combination of parents, governors, catering staff, schools and local government working together that finally got 39 kitchens built in Merton primary schools and a new improved menu.’ In an attempt to encourage teenage boys to choose healthy food at the canteen, the canteen manager at Glyn Technology School, Surrey, initiated a system of ‘points’ that boys could collect towards rewards, including the top prize of a mountain bike. And, after sustained campaigning by the Soil Association, Focus on Food Campaign, Health Education Trust and many others, the Government has decided to help mend the national diet by putting cooking back in the core curriculum for 11- to 14-year-olds.
No stone left unturned
It seems that every area of school life is coming under scrutiny. Forward-thinking local authorities are ensuring that new school-builds don’t just meet building regulations, but exceed them. For example, when Hampshire County Council recently built Wellstead Primary School, it installed a horizontal, closed loop, ground-source heat pump below the school’s football pitch, providing 100 per cent of the school’s heating requirements with 50 per cent less CO2 emissions than a conventional gas-fired condensing boiler. There’s the Walk to School movement, whose annual Walk-to-School Month has inspired children and parents to promote healthier living and conserve the environment. Even the environmental impact of uniforms has been put under the spotlight. Clean Slate’s range of fairtrade and organic cotton school clothing is exposing the risk to children of PFOAs, a compound used to make Teflon, which is applied to mass-produced children’s garments for an ‘easy-care’ finish. 
Trailblazers, such as Brabin’s Endowed School in Chipping, Lancashire, which has managed to put all the pieces of the jigsaw together for a long time now – it has won four Eco-Schools Green Flags over eight years – are now busy forming a network with other schools in the area to share good practice in sustainable thinking.
Greenwash behind the gloss
It is notable that these inspired examples of highly motivated local champions and NGOs willing to drive change invariably do so in spite of government efforts, not because of them. For example, the eight ‘doorways’ of the Department for Children, Schools and Families (DCSF) Sustainable Schools strategy, which would like all schools to be sustainable by 2020 without setting a hard and fast target for carbon reductions, blatantly duplicates the nine ‘themes’ of ENCAMS’ Eco-Schools, and has caused confusion among some schools and local authorities seeking to make sense of the huge raft of initiatives, laws and funds for which they can apply. Not only that, but DCSF has refused to adopt Eco-Schools as a delivery mechanism for England and Wales. It was adopted by the Scottish Executive three years ago. In Scotland, sustainability for schools is a performance indicator for local authorities and, according to ENCAMS, has resulted in an increase in enrolment from 17 to 90 per cent. Just how deep engagement runs is also uncertain. One teacher said: ‘To be honest, it’s very, very hard to get these things started. It’s a new concept for teachers, governors and the pupils. Teachers are not facilities managers – we’re not taught about how to apply the principles of sustainability in a school setting, with relation to the running of the school. Then there’s the question of time. We are looking to incorporate more of the initiatives into the curriculum, but that too is labour-intensive; it requires a lot of planning. 
And, inevitably, things get put on hold during exams.’ With so much to do, the Government’s voluntary self-evaluation tool, s3, a wordy 70-page document that has already been rewritten once, is hardly a must-read. For the children, too, especially teenagers, it’s not just that going green might not be cool enough, but that there are other extracurricular choices they could be making, such as sport, art and drama – and that’s before schools consider how to make the project itself sustainable, once the post-Inconvenient Truth fervour dies down. Then there’s the £37 billion Building Schools for the Future rebuilding and refurbishment project for secondary schools, which, while well-intentioned, has a funding gap big enough to drive a Hummer through. While it promotes high standards of energy efficiency and renewable energy sources, it doesn’t make the highest standards statutory, and never provides 100 per cent of the funds required. Elements that might make buildings more sustainable (but which are inevitably more expensive) are left vulnerable to cuts by local authorities unable to bridge the funding gap but aware they can still meet building regulations. It’s true that the Government has used statutory muscle where it knows it can, like schools secretary Ed Balls’ ‘most robust nutrient standards for school lunches in the world’ (statutory in English primary schools), but not in other areas, such as energy-saving, where a more radical approach might offend Big Business.
Fuelling the addiction
Almost every week it seems a company is announcing how it is going to teach children to be more sustainable, in a move to fluff-up its green credentials. For example, with more than 8,000 schools signed up, British Gas’s Generation Green project is not quite the equivalent of soft-drinks companies sponsoring school vending machines, but one wonders just how sincere it really is. 
Not only does its starter pack of ‘climate-change-lite’ lesson plans include a range of downloadable posters and stickers emblazoned with the British Gas logo, but also the paltry prizes schools can win include ‘educational’ toy wind turbines instead of real ones, and, for a limited number of schools, some ‘valuable’ solar panels to help them reduce their footprints by another 100kg. Should we expect more from a utility that thinks a child’s sustainability learning objectives should include understanding ‘that energy is supplied to schools, homes and businesses by power companies, and that it is paid for in the form of an energy bill’? Energy bills that they will pay to utilities such as British Gas when they’re older? Not all initiatives are so stingy, but like petrol companies that have reduced prices at the pumps, the rationale seems to be that the utilities are happy for their customers, including future ones, to use less – about 10 per cent – as long as they keep on using fossil fuels. One school said its ideal would be to generate its own energy, or at least run its premises on entirely renewable energy, but even if it were successful in bidding for money through initiatives such as the Carbon Trust, it would never be enough for something like a thermal rod combined heat and power unit. The best it can hope for is to upgrade one of its nine highly inefficient boilers. Even Eco-Schools, in the absence of wholehearted Government backing, is about to sign a four-year partnership with energy giant EDF to help it with essential funding. 
It’s true that, from 2010, schools will join a scheme that will count the emissions of public sector buildings as part of the total carbon footprint of local authorities, forcing underperforming local authorities to purchase carbon credits from those who meet targets, but the reality today is still a far cry from Tony Blair’s 2004 vision that ‘sustainable development will not just be a subject in the classroom but… the way the school uses and even generates its own power’. We may feel a warm glow knowing that our children are doing their bit for the environment, but is this sustainable in and of itself? One can’t help thinking that a bigger opportunity has been missed to inspire and remodel schools completely. ‘Children’s enthusiasm for environmental issues can become absorbed in the technical details of achieving the next level of green credentials,’ says James Greyson, who runs global issues think-tank BlindSpot. ‘While this undoubtedly makes schools greener and children more informed, it also enrols them in incremental “do your bit” sustainable development, which has proven to be ineffectual against the scale of damage to a world that is fast becoming unfit to pass on to the next generation. ‘Today we have an education system that delivers a population with a dangerously stunted capacity for critical creative thinking and engagement. This underpins and perpetuates unsustainability, including climate change. So tackling climate change is not so much about changing boilers or adding to the curriculum, but about a radically different way of delivering a curriculum.’
A playground for curiosity
By example, Lewes New School, a small independent primary school in East Sussex, sees itself as a playground for curiosity. There, teachers claim, the learning is guided, not predetermined, and most of the day is devoted to ‘project time’, where groups of children pursue their expanding interests and ideas. 
The energy and time lost in other schools trying to coax children to ‘behave’ and to plod through imposed teaching exercises is freed up to do more. Such as the potential for schools to become hubs that can inspire the local community’s capacity for creating a sustainable world, led by the children’s own ideas? Lewes New School’s headteacher, Lizzie Overton, agrees. ‘The pace at which schools go green need not be governed by the spare time in overcrowded curricula and the spare money in over-tightened school budgets,’ she says. ‘Schools are ideally suited to become local demonstrations of creativity and sustainability. Getting to this vision means accepting that technical energy-use changes are pointless without considering how schools can grow society’s capacity to design, create and live in a low-carbon, sustainable society.’ This might only be possible if we offer children the chance to learn that understanding the world, including issues of sustainability, doesn’t come pre-packed, or that the only correct answers are the ones given to them by authority, whether that be government or business.
Nick Kettles is a freelance writer and consultant to small businesses seeking to express better their unique contribution to world peace and sustainability
3 steps towards creating a carbon neutral school
1 Identify someone within your school – a teacher or parent who is already sufficiently informed about sustainability to be able to see the wood for the trees – who is willing to give their time to start the project.
2 Use the support of existing NGOs and initiatives, whose case studies offer clear evidence of deep and lasting change.
3 From the outset, allow the children to be involved in every initiative, whether that be changing light bulbs, planting trees, recycling or creating a new school kitchen.
5 things you can do now without anyone’s help
1 Upgrade to energy-efficient computers and light bulbs.
2 Implement a recycling system in your ICT department. 
3 Reduce your water usage by placing suitable objects into all toilet cisterns to reduce water capacity.
4 Establish a relationship with a local farm that employs sustainable farming practices to allow the children to see firsthand where their food comes from.
5 Lobby your local authority to champion the purchase of renewable power through their joint buying consortia. If it refuses, opt out of the contract and purchase your power independently.
This article first appeared in the Ecologist March 2009
I. For purposes of analysis, news stories and other forms of communication can be divided into a number of elements. They include:
II. The characterizations can be considered forms of action that seek to enhance, defend or attack (that is, credit or discredit) people, ideas, experiences, events, institutions, places and objects.
III. These efforts to credit or discredit can include a number of different kinds of claims about people (and other things), including:
IV. Efforts to claim that oneself or someone else is moral or immoral, namely number 1 above, can involve any of a number of more specific claims, including:
V. These efforts to credit and discredit, described in II-IV above, can also be efforts to engage in other kinds of action. They can be efforts to:
VI. All of these forms of action may be efforts to:
As noted elsewhere, the effort to exert power (or the action of being subordinate) actually has two aspects, which need to be distinguished. In one, player A puts him or herself in a dominant position toward B, such that others see him as dominant. He puts B in the down position. Examples of this are put-downs; aggressive questioning that puts B on the defensive; and situations in which B treats A as more powerful and important than himself. In these situations, A seems to grow in symbolic size and B seems to shrink. This needs to be distinguished from situations in which people exert power by getting others to do what they want, in ways that don't involve an obvious display of weakness on the part of the subordinate one. For example, people who appear to have no power in a relationship may actually have the ability to influence the other parties to the relationship and control their behavior. At the other extreme, people who are aggressively dominant and who elicit expressions of weakness, defensiveness and submissive behavior from others, may, in fact, fail to get those others to do what they want in other ways. 
It should also be noted that we can exert power over, be subordinate, harm or help, oppose or cooperate with, ourselves, as well, when we divide up, as it were, and one part of us acts toward another part. This is all the more true since we harbor within us images of various aspects of self and significant others from childhood, and we constantly interact with the society around us in terms of the society inside our own personalities.
- The Japanese Psychological Association (Public Interest Incorporated Association)
- The Japanese Journal of Psychology (ISSN:00215236)
- pp.89.17317, (Released:2018-07-14)
Verbal descriptions of reinforcement contingencies (rules) often exert control over human behavior. The present study investigated how rules affected behaviors when two participants partially communicated with each other during an experiment. Mouse clicks by undergraduate students produced points depending on a multiple fixed-ratio 50 differential-reinforcement-of-low-rate 10-s schedule. During interruptions in the multiple schedules, participants were asked to describe the schedule contingencies, and then a speaker read the rules to one of a pair of participants (a listener). Discrimination ratios for the listeners were significantly higher than those for participants who were not asked to describe the rules or listened to others' rules. When both schedules changed to fixed-interval 10-s, all groups were sensitive to schedule changes. The results suggest that the acquisition of schedule-appropriate behavior was affected by instructions even though the instructions were given by individuals other than the experimenter and were imperfect. The results also suggest that the effects of rules and self-rules can be replicated in two-person experiments.
Hi guys and gals, Here's a simple circuit, mostly just an LED circuit with a Darlington pair. My question relates to the base current to the probe. If the probe is connected to an unearthed metal toaster (not plugged in), the LED glows briefly then goes out. If the probe is connected to the toaster (plugged in) the LED glows and stays lit. I have my ideas as to what's happening, but I'd really like to hear your explanations. Is this base current displacement current? Where is the base current loop? https://www.dropbox.com/s/vis4xeuss8u4sw7/Ground%20circuit2.png?dl=0 Sorry, I can't seem to post images. Can someone advise please?
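For scale, here is a back-of-envelope estimate of the effect being asked about. All values are assumed, not measured from the actual circuit: a coupling capacitance of ~100 pF between the live mains wiring and the plugged-in toaster's chassis, 230 V / 50 Hz mains, and a per-transistor current gain of ~100.

```python
import math

# Assumed values (not measured from the circuit in question):
C = 100e-12   # coupling capacitance, live wiring to chassis, ~100 pF
V = 230.0     # mains RMS voltage
f = 50.0      # mains frequency, Hz

# Displacement current through the coupling capacitance: I = 2*pi*f*C*V
i_base = 2 * math.pi * f * C * V
print(f"probe/base current ~ {i_base * 1e6:.1f} uA")

# A Darlington pair multiplies base current by roughly beta1 * beta2.
beta = 100 * 100
i_led = beta * i_base  # upper bound; in practice the series resistor limits it
print(f"available LED current ~ {i_led * 1e3:.0f} mA (before saturation)")
```

With numbers in this ballpark the pair saturates and the LED current is set by the series resistor, which would be consistent with the LED staying lit only when the toaster is plugged in: the AC base-current loop closes through the wiring-to-chassis capacitance rather than through a DC path.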
California's utilities are spending $548 million over seven years to subsidize consumer purchases of compact fluorescent lamps. But the benefits are turning out to be less than expected. One reason is that bulbs have gotten so cheap that Californians buy more than they need and sock them away for future use. Another reason is that the bulbs are burning out faster than expected. California's experience is notable because energy experts have placed high hopes on compact fluorescent lamps. Often spiral-shaped, they screw into existing light sockets and offer energy savings of about 75% over traditional incandescent light bulbs. Many nations are relying on them to help cut emissions from power plants and stretch electricity supplies further. The United Nations says 8% of global greenhouse-gas emissions are linked to lighting, and that adoption of compact fluorescent lights could cut pollution. The World Bank has helped dozens of mostly poor nations begin the switch to the bulbs to make electric lighting more affordable. Last June, for example, Bangladesh gave away five million of the bulbs in a single day. No state has done more to promote compact fluorescent lamps than California. On Jan. 1, the state began phasing out sales of incandescent bulbs, one year ahead of the rest of the nation. A federal law that takes effect in January 2012 requires a 28% improvement in lighting efficiency for conventional bulbs in standard wattages. Compact fluorescent lamps are the logical substitute for traditional incandescent light bulbs, which won't be available in stores after 2014. California utilities have used ratepayer funds to subsidize sales of more than 100 million of the bulbs since 2006, most of them made in China. It is part of a comprehensive state effort to use energy-efficiency techniques as a substitute for power production. Subsidized bulbs cost an average of $1.30 in California versus $4 for bulbs not carrying utility subsidies. 
Anxious to see what ratepayers got for their money, state utility regulators have devoted millions of dollars in the past three years for evaluation reports and field studies. What California has learned, in a nutshell, is that it is hard to accurately predict and tricky to measure energy savings. It is also difficult to design incentive plans that reward—but don't overly reward—utilities for their promotional efforts. When it set up its bulb program in 2006, PG&E Corp. thought its customers would buy 53 million compact fluorescent bulbs by 2008. It allotted $92 million for rebates, the most of any utility in the state. Researchers hired by the California Public Utilities Commission concluded earlier this year that fewer bulbs were sold, fewer were screwed in, and they saved less energy than PG&E anticipated. As a result of these and other adjustments, energy savings attributed to PG&E were pegged at 451.6 million kilowatt hours by regulators, or 73% less than the 1.7 billion kilowatt hours projected by PG&E for the 2006-2008 program. One hitch was the compact-fluorescent burnout rate. When PG&E began its 2006-2008 program, it figured the useful life of each bulb would be 9.4 years. Now, with experience, it has cut the estimate to 6.3 years, which limits the energy savings. Field tests show higher burnout rates in certain locations, such as bathrooms and in recessed lighting. Turning them on and off a lot also appears to impair longevity. 
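The 73% figure follows directly from the two totals the regulators quoted; a quick arithmetic check, using only numbers stated above:

```python
projected_kwh = 1.7e9    # PG&E's projected savings for the 2006-2008 program
credited_kwh = 451.6e6   # savings ultimately credited by regulators

# Fraction by which credited savings fell short of the projection
shortfall = 1 - credited_kwh / projected_kwh
print(f"shortfall vs. projection: {shortfall:.0%}")  # -> 73%
```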
California regulators have debated whether utilities should be held to the energy savings they promised in order to earn bonus pay. Staff of the state utilities commission said utilities missed their overall-energy savings targets, partly because of disappointing results from light bulbs. Utilities disagree with many of the staff's conclusions. Steve Malnight, vice president of energy-efficiency programs at PG&E, said the researchers "lost sight of the fact that utilities have produced tremendous value for our customers." Experts agree that compact fluorescent lights save energy over incandescent lights and typically burn longer. One complexity of California's incentive program is it seeks to reward utilities only for energy savings they directly cause. For example, utilities aren't supposed to get rewarded for bulbs purchased by people who say they would have bought them even without utility promotions. "We're not only trying to measure the technical side, but determine how much of a difference utilities have made in transforming the market," said Peter Miller, senior scientist at the Natural Resources Defense Council, an environmental group that supports the utility-lighting programs. For the 2006-2008 program, utilities said they achieved energy savings from all their energy efficiency programs that were 151% of the goal set by regulators. But the commission's staff, armed with exhaustive studies, said utilities saved only 62% of the goal amount, hurt by the bulbs. Nevertheless, anxious to move on to the current 2010-2012 program, the commission last month gave the utilities $68 million of rewards, on top of $143.7 million of incentive pay previously awarded. PG&E pocketed $104 million total. Dian Grueneich, one of two commission members who voted against the final incentive payment, said it rewarded utilities "for subpar performance." 
Commission President Michael Peevey, who favored the extra pay, said he didn't want to ding utilities for an incentive program that was "unworkable." Later this month, the commission will consider a proposal to simplify the incentive program. Utilities would be judged, henceforth, for technology installation rates, but not for the amount of energy actually saved by their efforts. Write to Rebecca Smith at firstname.lastname@example.org
Music is the art of arranging sounds in time using the components of harmony, melody, rhythm, and timing. It is a universal artistic element found in all human cultures. In a wide variety of genres, from classical to modern jazz, from pop to folk to the avant-garde, music continues to be a central part of our lives. Many years ago, most music was performed for entertainment, much as sports events were, but over time music has developed into an expressive form that often brings people together. Music conveys messages that are hard to express through other means, and its timeless quality enables it to connect generations and to influence attitudes and choices. Music is woven into the fabric of everyday life. Most of us hear music while driving, cooking, or cleaning. Some of us appreciate it as we travel from place to place, encountering new landscapes and cultures. Music can evoke emotion, stimulate our imagination, and challenge our perceptions. The history of music is intriguing and fascinating in its own right, and musicological study is growing in popularity. Students in schools across the country are studying how songs are learned and how they change over time. In recent years, there has been a revival of interest in classical forms of music, especially in the United States, and more students are enrolling in classical courses and taking music classes. Some of the most popular composers throughout history have been those whose new creations continue to influence contemporary musicians today. These include composers such as John Cage, Henry Louis Gates, Herman Melkner, Alexander Borodin, and Yo-Yo Ma. Composers have also written music for television, opera, commercials, marching bands, the New York Philharmonic, chamber music, and world literature.
Composers have also written powerful, memorable, and beloved songs that helped fuel the American Revolution, have been recorded on classic albums, and have become beloved by millions of fans around the world. Composers such as Yo-Yo Ma have created and taught entire musical movements, such as jazz, classical, gospel, rock, and soul music. Music theory is used in all types of music, and it is necessary for a composer to learn this important subject before beginning to write any music. The concepts of formal and non-formal music are discussed in depth in a music composition class. Such a course teaches students to develop an understanding of the relationships between notes, chords, and scales. Learning to play an instrument is not easy; it takes time, patience, and dedication to learn a musical instrument. There are many types of instruments, including piano, guitar, keyboard, harp, and many more. A student interested in learning about music should consider attending a music school to complete his or her education.
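One of the note-and-scale relationships a theory class covers can be illustrated with a small sketch. The helper below is hypothetical (not drawn from any curriculum named here) and builds a major scale from its fixed pattern of whole and half steps:

```python
# A minimal sketch of one relationship taught in basic music theory:
# a major scale follows a fixed pattern of whole (2) and half (1) steps.
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
MAJOR_STEPS = [2, 2, 1, 2, 2, 2, 1]  # W-W-H-W-W-W-H

def major_scale(root: str) -> list[str]:
    """Return the major scale starting on `root`, using sharp spellings."""
    idx = NOTES.index(root)
    scale = [root]
    for step in MAJOR_STEPS:
        idx = (idx + step) % 12  # wrap around the 12-note chromatic circle
        scale.append(NOTES[idx])
    return scale

print(major_scale("C"))  # ['C', 'D', 'E', 'F', 'G', 'A', 'B', 'C']
print(major_scale("G"))  # ['G', 'A', 'B', 'C', 'D', 'E', 'F#', 'G']
```

The same step-pattern idea extends to other scales and to chords built from scale degrees.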
Dr. David Bogler, USDA NRCS PLANTS Database Annuals, Terrestrial, not aquatic, Stem nodes swollen or brittle, Stems erect or ascending, Stems geniculate, decumbent, or lax, sometimes rooting at nodes, Stems caespitose, tufted, or clustered, Stems terete, round in cross section, or polygonal, Stem nodes bearded or hairy, Stem internodes hollow, Stems with inflorescence less than 1 m tall, Stems, culms, or scapes exceeding basal leaves, Leaves mostly cauline, Leaves conspicuously 2-ranked, distichous, Leaves sheathing at base, Leaf sheath mostly open, or loose, Leaf sheath smooth, glabrous, Leaf sheath hairy at summit, throat, or collar, Leaf sheath and blade differentiated, Leaf blades linear, Leaf blades 2-10 mm wide, Leaf blades 1-2 cm wide, Leaf blades mostly flat, Leaf blades mostly glabrous, Leaf blades more or less hairy, Leaf blades scabrous, roughened, or wrinkled, Ligule present, Ligule an unfringed eciliate membrane, Inflorescence terminal, Inflorescence an open panicle, openly paniculate, branches spreading, Inflorescence a contracted panicle, narrowly paniculate, branches appressed or ascending, Inflorescence solitary, with 1 spike, fascicle, glomerule, head, or cluster per stem or culm, Inflorescence lax, widely spreading, branches drooping, pendulous, Inflorescence curved, twisted or nodding, Inflorescence branches more than 10 to numerous, Flowers bisexual, Spikelets pedicellate, Spikelets dorsally compressed or terete, Spikelet less than 3 mm wide, Spikelets with 1 fertile floret, Spikelets with 2 florets, Spikelet with 1 fertile floret and 1-2 sterile florets, Spikelets solitary at rachis nodes, Spikelets all alike and fertile, Spikelets bisexual, Spikelets disarticulating below the glumes, Rachilla or pedicel glabrous, Glumes present, empty bracts, Glumes 2 clearly present, Glumes distinctly unequal, Glumes equal to or longer than adjacent lemma, Glume equal to or longer than spikelet, Glumes 4-7 nerved, Glumes 8-15 nerved, Lemma similar in
texture to glumes, Lemma 8-15 nerved, Lemma glabrous, Lemma apex truncate, rounded, or obtuse, Lemma awnless, Lemma margins inrolled, tightly covering palea and caryopsis, Lemma straight, Palea present, well developed, Palea about equal to lemma, Stamens 3, Styles 2-fid, deeply 2-branched, Stigmas 2, Fruit - caryopsis, Caryopsis ellipsoid, longitudinally grooved, hilum long-linear. Annual herb, tufted, 20 cm - 2 m tall. Leaves: numerous, alternate, two-ranked. Sheaths round in cross-section, densely soft-hairy with deciduous, bumpy-based hairs. Ligules membranous, fringed with hairs (hairs 1 - 3 mm long). Blades 15 - 40 cm long, 0.5 - 2.5 cm wide, parallel-veined. Inflorescence: a branched arrangement of spikelets (panicle), 6 - 20 cm long, 4 - 11 cm wide, with stiff, appressed to spreading branches. Fruit: a caryopsis, indehiscent, enclosed within the persistent lemma and palea. Culm: stout, 20 cm - 2 m long, round in cross-section, internodes often with bumpy-based hairs. Nodes minutely hairy. Spikelets: solitary, found in the upper portion of the inflorescence, 4 - 6 mm long, egg-shaped. Glumes: unequal, herbaceous. Lower glumes 3 - 3.5 mm long, one-half to three fourths as long as spikelets, gradually tapering to a slender point, five- to seven-veined, veins minutely rough towards the apex. Upper glumes 4 - 5 mm long, slightly longer than upper florets, eleven- to fifteen-veined, veins minutely rough towards the apex. Lemmas: Lower lemmas similar to upper glumes, 4 - 5 mm long, slightly longer than upper florets, nine- to thirteen-veined, veins minutely rough towards the apex. Upper lemmas shiny, with rolled-up margins on the upper surface. Paleas: Lower paleas 1 - 1.5 mm long, up to half as long as the upper florets, transparent. Upper paleas longitudinally lined. Florets: Lower florets sterile. Upper florets bisexual, straw-colored to orange or reddish brown or blackish, 3 - 4 mm long, 2 - 2.5 mm wide, pointed at the apex, more or less shiny. Anthers three.
Stigmas red. Similar species: No information at this time. Flowering: late June to mid-October Habitat and ecology: Introduced from Asia and cultivated as a forage crop. It occasionally escapes into disturbed ground and is frequent around grain elevators. Occurrence in the Chicago region: non-native Etymology: Panicum comes from the Latin word panis, meaning bread, or panus, meaning "ear of millet." Miliaceum means millet. Author: The Morton Arboretum FNA 2003, Gould 1980, Kearney and Peebles 1969 Common Name: proso millet Duration: Annual Nativity: Non-Native Lifeform: Graminoid General: Coarse annual, branching from lower nodes, stout stems 20-210 cm, hairy at lower nodes, sheaths commonly pilose or hirsute, internodes papillose based. Vegetative: Sheaths terete, densely pilose, with papillose based hairs, blades 15-40 cm long, 7-25 mm wide, pubescent or glabrous; ligule a short, ciliate, membranous collar, 1-3 mm. Inflorescence: Panicle with erect spreading branches 6-20 cm long, 4-11 cm wide, included or shortly exserted at maturity, dense; branches stiff, appressed to spreading, spikelets solitary, pedicels 1-9 mm, scabrous and sparsely pilose, spikelets 4-6 mm, ovoid, usually glabrous; lower glume one half to two thirds as long as the spikelets, 5-7 nerved, upper 11-13 veined, minutely roughened above, slightly exceeding upper florets, fertile lemma 9-13 veined; upper florets 3-4 mm long, 2-2.5 mm wide, smooth or striate, stramineous to orange, red-brown or blackish, disarticulating at maturity. Ecology: Found in disturbed areas, often near or closely associated with agriculture. Notes: Often found along the margins of old fields, fast germinating and short growing in late spring, with lower water requirements than any other cereal grain. It is generally a big component in bird seeds. Ethnobotany: Cultivated in Asia for thousands of years. Etymology: Panicum is a classical Latin name for millet, while miliaceum means millet-like.
Synonyms: None Editor: SBuckley, 2010 Stout annual 2-6(-10) dm; sheaths overlapping, densely hirsute; blades elongate, rounded at base, 10-20 mm wide; panicle included at base, pyramidal to cylindric, dense, 8-20 cm, often nodding at maturity; spikelets turgid, acute, 4.5-6 mm; first glume half as long, acute or acuminate, 5-veined; second glume and sterile lemma equal, distinctly 7- or 9- veined; fr stramineous to brown, 3-3.5 mm; 2n=36, 54, 72. Native of the Old World, occasionally cult. for forage and adventive along roadsides and in waste places. A wild-adapted type, widespread as a field-weed in the midwest (Wis., Minn., Ill., Io., N.D., S.D., Nebr., Kans., Colo.), has been named ssp. ruderale (Kit.) Tzvelev. It is larger, 7-20 dm, with open panicles 10-50 cm, and deciduous spikelets. Gleason, Henry A. & Cronquist, Arthur J. 1991. Manual of vascular plants of northeastern United States and adjacent Canada. lxxv + 910 pp. ©The New York Botanical Garden. All rights reserved. Used by permission. Citation: The vPlants Project. vPlants: A Virtual Herbarium of the Chicago Region. http://www.vplants.org Copyright © 2001–2009 The vPlants Project, All Rights Reserved.
Sustainable Use of Biodiversity in Drylands

Arid, semi-arid and sub-humid areas encompass a wide range of natural habitats, including grasslands, savanna, barren deserts, scrublands, woodlands and forests. These lands possess a rich array of biodiversity that is unique in its ability to survive in and adapt to dry conditions. Biodiversity is vital for maintaining the health of dryland ecosystems. The loss of a few species may reduce ecosystem resilience significantly, with dire consequences for human livelihoods. A diverse array of domesticated and wild plant species provides people with food, and livestock and wildlife with feed, as well as many other goods and services. Equally valuable is the rich genetic wealth tied up in domesticated animals. The estimated 4,000 animal breeds known around the world are gaining importance in the fight against poverty, hunger and disease. But some 30 percent of the world's domesticated animals, most of which are found in the tropics and have never been developed, now face the prospect of extinction. Pastoral livestock production can easily coexist with wildlife. But as crop production is extended into grasslands, wildlife populations decline dramatically. So, a major challenge in areas such as the Mara-Serengeti ecosystem in southwestern Kenya and northern Tanzania is to manage conflicts between cropping, pastoral livestock production and wildlife-based tourism and enhance the benefits from these competing activities. Biodiversity contributes vitally to agriculture in drylands, as it does in other ecosystems. The wild relatives of staple foods often contain genes that are useful for crop improvement. And the cover provided by diverse vegetation is critical for soil conservation and for the regulation of rainfall infiltration, surface runoff and local climates.
Through activities such as overgrazing, deforestation and the expansion of cultivated land, agriculture may contribute to biodiversity loss, which helps trigger desertification. Once this process is underway, it not only threatens dryland agriculture and pastoralism but also leads to further erosion of biodiversity. To promote the protection and sustainable use of this valuable resource, scientists have opted in recent years for a research approach that addresses the challenges of agriculture and the management of biodiversity and other natural resources in an integrated fashion. This approach aims to promote diversified rural livelihoods in drylands, based on sustainable management of genetic resources.

Selected Highlights from Research for Drylands

Seed fairs to promote plant genetic resources

A project carried out by the International Plant Genetic Resources Institute (IPGRI) with NGOs and other partners in Mali and Zimbabwe is demonstrating how dryland farmers manage and conserve their plant genetic resources and how to spread best practices. It is also showing how farmers can use these resources to cope with the effects of drought and desertification and improve their livelihoods. One valuable technique, pioneered in eastern Africa by NGOs working with several CGIAR Centers, involves "seed fairs" organized by rural communities. They offer farmers an opportunity to share information and experience and to display and swap samples of the crops and other plants they grow. The emphasis is on variety, not perfection or abundance, and prizes are awarded for the most diverse displays rather than for the finest crops. International aid agencies have created programs that provide farmers with vouchers for buying seed at these fairs. This measure reinforces local seed systems rather than undermining them, as did the previous practice of introducing imported seed in connection with disaster relief.

Managing trees in agricultural landscapes

Trees are scarce but important in drylands.
They capture and recycle nutrients and water, stabilize hillsides and sand dunes, and provide diverse products that are important for people and animals. The semi-arid Parklands of West Africa provide one example of a traditional system that integrates trees, crops and livestock. Spanning the whole of the Sahel, the Parklands sustain more than 40 million people but are believed to be undergoing rapid degradation in many areas. One promising approach for reversing the trend is agroforestry. This is the science of integrating trees into agricultural landscapes, where they can help strengthen food security, raise incomes, and restore soil fertility. For example, various live fence technologies developed by the World Agroforestry Centre have proven useful in Africa's Sahel region for protecting market gardens from livestock that roam and graze freely. Some live-fence species have further uses, providing fruit and other useful products, such as tanning chemicals and dyes. A particularly valuable species in Parklands is the Shea tree. It provides important environmental services, and its products contribute to human nutrition and income. One of these products, shea butter, is controlled mainly by women, and benefits from its sale are shared among all members of rural families. The World Agroforestry Centre and its partners are engaged in a project that seeks strategies by which women can improve the processing of shea butter and thus command higher prices for this value-added product.

Fodder banks are another promising option, developed by the International Livestock Research Institute (ILRI). Small areas are enclosed by a fence and sown with forage legumes. Farmers then use the fodder bank as they would a pantry, drawing on it when green grass is in short supply during the dry season. In addition to helping reduce livestock pressure on Parklands, the forage legumes offer the further advantage of nourishing the soil through the accumulation of nitrogen in their roots.
An ILRI study conducted in the late 1990s showed that this technology is spreading across West Africa and enhancing livestock productivity.

Arid forest lands resist the sands of desertification

CGIAR researchers are also working at the policy level to ensure that Africa's dry forests get the attention they deserve from land planners and other decision makers. Working with NGOs and research and development organizations at all levels of government, the Center for International Forestry Research (CIFOR) is building a scientific basis for policies that encourage such practices as opening dry forests to small-scale, sustainable use; marketing of dry forest products, like honey and charcoal; and development of
Ontario First to Test Automated Vehicles on Roads in Canada

Province Supports Innovation in Transportation Technology

Ontario is launching a new pilot to allow for the testing of automated vehicles on Ontario roads. Automated vehicles are driverless or self-driving vehicles that are capable of detecting the surrounding environment using artificial intelligence, sensors and global positioning system coordinates. Automated and connected vehicle technologies have the potential to help improve fuel efficiency as well as reduce traffic congestion, greenhouse gas emissions and driver distraction. Beginning on January 1, 2016, Ontario will lead Canada as the first province to test automated vehicles and related technology on-road. Currently there are nearly 100 companies and institutions involved in the connected vehicle and automated vehicle industry in the province. The pilot will enable those companies to conduct research and development in Ontario rather than in competing jurisdictions, as well as support opportunities to bring automated vehicles to market. The province is also pledging an additional $500,000 in funding to the Ontario Centres of Excellence Connected Vehicle/Automated Vehicle Program, in addition to the $2.45 million in funding recently provided. The program brings academic institutions and business together to promote and encourage innovative transportation technology. Ensuring Ontario's place as a world leader in the auto, transportation, information and communications technology sectors is part of the government's plan to build Ontario up. The four-part plan includes investing in people's talents and skills, making the largest investment in public infrastructure in Ontario's history, creating a dynamic, innovative environment where business thrives, and building a secure retirement savings plan. - Information about applying for the pilot will be available online from the Ministry of Transportation in late November.
- The Institute of Electrical and Electronics Engineers forecast that by 2040, autonomous vehicles will account for 75 per cent of all vehicles on the road. “In the world of transportation, Ontario has the opportunity to show leadership on automated technology. Today, Ontario is making its claim in the global marketplace by taking the next steps in automated vehicle innovation. The automated vehicle pilot will ensure that the province’s roads remain safe without creating burdens that stifle investment and innovation in Ontario’s dynamic business environment.” Steven Del Duca “Ontario is a global leader in developing and manufacturing the next generation of vehicles. This new pilot program will build on our success, and help Ontario lead the development of automated and connected car technologies. In this highly competitive global economy, investing in people’s talents and skills to create the next generation of innovative technologies is good for business, and can help lead to the easier movement of goods and services across the province.”
If everyone had to work an extra ten hours a week in a job he hated, the GDP would rise instead of reflecting a drop in well-being. Many economists are looking to measure the well-being of the citizens of a nation beyond a simple measure of wealth, and the GDP cannot provide this additional information. One tool that has been developed is the Human Development Index (HDI). Using the HDI system, the country in question in this paper, Australia, ranks third. However, in Happiness and the Human Development Index: The Paradox of Australia, authors Blanchflower and Oswald find that, using the International Social Survey Programme, Australians have low job satisfaction and are not very happy compared to the other English-speaking nations. Australia serves as their example to demonstrate the ineffectiveness of the HDI in measuring well-being. The authors argue that the HDI should not be rejected, but rather that more research is needed to better describe well-being. The United Nations publishes the Human Development Index (HDI) annually. The HDI uses lifespan, educational attainment, and adjusted real income to determine human welfare. For 2004, the top-ten countries are: 1) Norway 2) Sweden 3) Australia 4) Canada 5) Netherlands 6) Belgium 7) Iceland 8) USA 9) Japan 10) Ireland. The authors acknowledge that the HDI is a positive first step because it acknowledges "a broader conception of well-being than the height of a pile of dollars" (page 10). However, they believe that it does not capture the most essential feature: happiness. Unlike the other criteria, measuring the psychological state of individuals is more difficult because it is subjective. Happiness research is relatively new, but seven points have emerged: 1) Individuals become happier as they become wealthier. 2) However, whole countries do not become happier as they become wealthier. 3) Women are happier than men. Unemployment and divorce hurt well-being tremendously. Even when controlling for income, more educated people are happier.
4) The broad statistical patterns of happiness look the same in every country. However, this research is almost exclusively based on Western countries and therefore may not give a complete picture. 5) These patterns may also occur in panels of people (longitudinal data). For example, lottery winners might have similarities. 6) Good and bad events greatly affect people early on, but people at least partially adapt to them. 7) Relative events matter. Experimentally, people care how they are treated compared to others and will even pay to hurt others in order to restore fairness. The International Social Survey Programme (ISSP) surveys 50,000 individuals from 35 nations. It asks one to rate one's happiness, one's family life, and one's principal job on a seven-point scale. It also asks one to rate the stress of one's job on a five-point scale and how often one is too tired because of work to perform chores at home on a four-point scale. Because Australia ranks so well in the HDI, it is logical that it would perform well in the ISSP also. However, it is in the top fifty percent for happiness, family satisfaction, work stress, and lack of tiredness but towards the bottom for job satisfaction. One problem in doing surveys across cultures is that questions may not translate very well. Therefore, sampling countries with the same language minimizes biases, and here Australia compares even less favorably. One weakness in the English-language comparison is that many Australians are immigrants who do not have English as their first language, and the survey is unable to correct for that. Nonetheless, the difference between Australia's scores on the ISSP and HDI is striking. The conclusion by the authors is that the goal of economists is to improve upon the narrow focus of real income and growth, and the HDI currently is insufficient for doing so. Supplementing it with some subjective measure of happiness would improve its effectiveness markedly.
I was disappointed with this article. I agree that the goal of policy should go beyond a mere mathematical measurement of wealth, but the article provided little in explaining ways to do this. Furthermore, I am confused about why the fact that Australia does well on the HDI but poorly on the ISSP is a paradox. The HDI looks at income, lifespan, and education—not happiness. If the fact that Australia performs well on the HDI but not the ISSP is truly a paradox, then it was not explained well. This inadequacy even makes me question the validity of their statistics. The most interesting paradox to me was the summary of previous research on happiness. The authors state that individuals become happier as they become wealthier but countries do not become happier as they become wealthier. They cite a study that shows that people in the United States were happier in the 1970s than today despite the fact that GDP per person has risen. I believe the reason individuals become happier and countries do not is that happiness from wealth is relative to other people. This sentiment is echoed in the observation that people will pay (lower their happiness) in order for things to be fair (observation number 7). The article focuses on the fact that because happiness is subjective, it is difficult to measure. Given this assumption, I do not understand how research from the 1970s would be sophisticated enough to make the comparison. If countries truly do not become happier as they become wealthier, why isn't the question why countries focus so much on increasing production? We already know that at the very least there is a cost in raising GDP—pollution. I would really like to see the question of country versus individual happiness as wealth increases explored further.
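For concreteness, the HDI combination of lifespan, education, and income discussed above can be sketched in code. This is a simplified illustration only: the goalposts (25-85 years of life expectancy, $100-$40,000 income) and the arithmetic mean of the three component indices follow the pre-2010 UN methodology, and the UN's exact formula has since changed, so treat this as an assumption-laden sketch rather than the official calculation:

```python
import math

def component_index(value, lo, hi):
    """Normalize a raw value onto a 0-1 scale between assumed goalposts."""
    return (value - lo) / (hi - lo)

def hdi(life_expectancy_years, education_index, gdp_per_capita_usd):
    """Simplified HDI: arithmetic mean of three component indices."""
    life = component_index(life_expectancy_years, 25, 85)
    # Income enters through log10 to reflect diminishing returns to wealth.
    income = component_index(math.log10(gdp_per_capita_usd),
                             math.log10(100), math.log10(40_000))
    return (life + education_index + income) / 3

print(f"Illustrative HDI: {hdi(80, 0.99, 29_000):.3f}")  # ≈ 0.951
```

Nothing in this formula touches subjective well-being, which is exactly the gap between the HDI ranking and the ISSP results that the essay highlights.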
died Nov. 23, 1927, Mexico City. Mexican Jesuit priest martyred during anti-Roman Catholic persecutions of the 1920s in Mexico. The son of a socially prominent family, Pro entered the Jesuit novitiate in 1911. Because of government persecutions, he fled to California (1914–15) and then to Spain (1915–19) and taught in Nicaragua from 1919 to 1922. He returned to Spain and then studied in Enghien, Belg., where he was ordained in 1925. In 1926 he returned to Mexico, even though Roman Catholicism was virtually proscribed there. Militant Catholics had arisen in several states in the so-called Cristero Rebellion, attacking government buildings, burning schools, and assassinating officials. In reprisal, the government executed members of the clergy, burned churches, and massacred Cristeros and their sympathizers. Father Pro was shot by a firing squad after being suspected of involvement in an assassination attempt against former president Álvaro Obregón. (An automobile used in the plot was linked to Pro's brother.) Pro's execution was ordered, without trial or appeal, by the then president of Mexico, General Plutarco Elías Calles, the founder of what became the Institutional Revolutionary Party. Pro was beatified on Sept. 25, 1988.
The Crisis of Mixed Marriages. 1a When these matters had been concluded, the leaders approached me with this report: “Neither the Israelite laymen nor the priests nor the Levites have kept themselves separate from the peoples of the lands and their abominations—Canaanites, Hittites, Perizzites, Jebusites, Ammonites, Moabites, Egyptians, and Amorites— 2for they have taken some of their daughters as wives for themselves and their sons, thus intermingling the holy seed with the peoples of the lands. Furthermore, the leaders and rulers have taken a prominent part in this apostasy!” Ezra’s Reaction. 3b When I had heard this, I tore my cloak and my mantle, plucked hair from my head and beard, and sat there devastated. 4c Around me gathered all who were in dread of the sentence of the God of Israel* on the apostasy of the exiles, while I remained devastated until the evening sacrifice. 5Then, at the time of the evening sacrifice, I rose in my wretchedness, and with cloak and mantle torn I fell on my knees, stretching out my hands to the LORD, my God. A Penitential Prayer. 6* d I said: “My God, I am too ashamed and humiliated to raise my face to you, my God, for our wicked deeds are heaped up above our heads and our guilt reaches up to heaven. 7From the time of our ancestors even to this day our guilt has been great, and for our wicked deeds we have been delivered, we and our kings and our priests, into the hands of the kings of foreign lands, to the sword, to captivity, to pillage, and to disgrace, as is the case today. 8e “And now, only a short time ago, mercy came to us from the LORD, our God, who left us a remnant and gave us a stake in his holy place; thus our God has brightened our eyes and given us relief in our slavery. 9f For slaves we are, but in our slavery our God has not abandoned us; rather, he has turned the good will of the kings of Persia toward us.
Thus he has given us new life to raise again the house of our God and restore its ruins, and has granted us a protective wall in Judah and Jerusalem. 10But now, our God, what can we say after all this? For we have abandoned your commandments, 11g which you gave through your servants the prophets: The land which you are entering to take as your possession is a land unclean with the filth of the peoples of the lands, with the abominations with which they have filled it from one end to the other by their uncleanness. 12h Do not, then, give your daughters to their sons in marriage, and do not take their daughters for your sons. Never promote their welfare and prosperity; thus you will grow strong, enjoy the produce of the land, and leave it as an inheritance to your children forever. 13“After all that has come upon us for our evil deeds and our great guilt—though you, our God, have made less of our sinfulness than it deserved and have allowed us to survive as we do— 14shall we again violate your commandments by intermarrying with these abominable peoples? Would you not become so angered with us as to destroy us without remnant or survivor? 15LORD, God of Israel, you are just; yet we have been spared, the remnant we are today. Here we are before you in our sins. Because of all this, we can no longer stand in your presence.” * [9:4] All who were in dread…God of Israel: lit., “all who trembled”; these people are also mentioned at 10:3, and a similar designation occurs at Is 66:2, 5, a text more or less contemporary with this passage. The allusion may be to a distinct social group of rigorist tendencies who supported Ezra’s marriage reform. * [9:6–15] The prayer attributed to Ezra is a communal confession of sin, of a kind characteristic of the Second Temple period (cf. Neh 9:6–37; Dn 9:4–19; 1QS 1:4–2:1), but adapted to the present situation.
Imagine a high-voltage supergrid that would supply 48 states with a reliable stream of electricity. Right now, a high-voltage direct-current (HVDC) transmission network, or supergrid, is being proposed. It's known as the North American Supergrid (NAS or Supergrid). The NAS would be constructed almost entirely underground along already existing roadways or rights-of-way. Putting power lines underground means protection from downed lines and the outages happening more and more often in natural disasters (think devastating superstorms like Sandy in 2012 and Maria, with Puerto Rico still without power, as well as hurricanes, tornadoes, flooding, rain, snow, and ice storms). Also, our electric grid has increasingly become a vulnerable target for cyber and terrorist attacks, which would be less likely to succeed against an underground supergrid. A supergrid like this sounds pretty good, right? And odds are that it can become a reality. The public learned about just how it would work on November 29, 2017, at George Washington University's Jack Morton Auditorium. The symposium was hosted by John Topping, President and CEO of the Climate Institute, based in Washington, DC. Topping has been with the Climate Institute since its founding in 1986. The supergrid was initially promoted by Alexander MacDonald, one of the world's top weather scientists and the first presenter at the symposium. MacDonald is known for collaborating with other scientists in authoring the NAS concept published last year in Nature Climate Change. MacDonald explained that the wind is always blowing somewhere in the United States, and if it were tapped for energy on a national scale it would help the U.S. grid overcome intermittency problems. This works especially well because high-voltage direct-current (HVDC) transmission lines are known to suffer less energy loss than traditional alternating-current transmission lines.
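The advantage of lower HVDC line losses compounds over distance. A minimal sketch of the effect, using assumed round-number loss rates (roughly half as much loss for HVDC as for AC per 1,000 km — illustrative figures, not numbers from the NAS proposal itself):

```python
# Illustrative comparison of long-haul transmission losses.
# Loss rates per 1,000 km are ASSUMED for illustration only.

def delivered_fraction(loss_per_1000km: float, distance_km: float) -> float:
    """Fraction of sent power delivered, compounding the loss per 1,000 km."""
    return (1 - loss_per_1000km) ** (distance_km / 1000)

distance = 3000  # km, e.g. a cross-country link
hvdc = delivered_fraction(0.035, distance)  # assumed ~3.5% loss per 1,000 km
ac = delivered_fraction(0.070, distance)    # assumed ~7% loss per 1,000 km

print(f"HVDC delivers {hvdc:.1%} of sent power; AC delivers {ac:.1%}")
```

Over a 3,000 km link the assumed rates leave roughly 90 percent of the power deliverable by HVDC versus about 80 percent by AC, which is why a continent-scale grid favors direct current.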
Working on this premise, surplus power generated by renewables such as wind, solar, and hydroelectric, as well as by natural gas, would move around the nation via the proposed national supergrid network of 30,000 miles of HVDC electricity lines, displacing power from fossil fuel facilities. By producing power from sustainable energy sources, we would significantly lower the carbon emissions created by power made from fossil fuels. Studies of the NAS supergrid suggest it could cut electricity-generating emissions by as much as 80 percent by 2030, significantly reducing our carbon dioxide emissions and lessening the growing volume of greenhouse gases negatively impacting our climate. The supergrid would link into the existing grid and tap power not only from traditional power plants but also from energy generated by renewables, which would create a competitive energy market. The plan is to have approximately two-thirds of the HVDC cable placed underground, but where a link cannot be so aligned, construction of traditional aboveground transmission lines may be required. The proposed supergrid construction would create hundreds of thousands of jobs for several decades. Testing the idea with MacDonald was Christopher Clack of NOAA and the Cooperative Institute for Research in Environmental Sciences at the University of Colorado, Boulder, who with other colleagues built a computer model analyzing different configurations of a weblike network of interregional HVDC lines plus renewable energy installations. The model divides the United States into a grid of 152,000 squares that are assigned to regional grids. Our existing electrical grid is over 100 years old; 70 percent of the grid's transmission lines and power transformers are over 25 years old, and the average age of this country's power plants is over 30 years. Strengthening our aging and inefficient grid is becoming a crucial necessity; reliable electricity is now a life-sustaining requirement.
This NAS Supergrid would be one of America's great infrastructure programs. The proposal should have popular, bipartisan backing in Congress. Let's hope they move forward and make it happen.
Lottery is a form of gambling that allows multiple people to buy tickets for a small sum of money in order to have a chance at winning large amounts of money. These financial lotteries are typically run by state and federal governments and can be very lucrative. The origins of the lottery The first recorded lotteries to offer tickets for sale with prizes in the form of money were held in the Low Countries in the 15th century. They were used to raise funds for town fortifications and to help the poor. They were also used in many early American towns to finance roads, schools, libraries, churches, and colleges. They were especially important in the 17th century, when colonial America was a developing nation. There were several lottery advocates in early American history, including George Washington and Benjamin Franklin. They were able to use lottery proceeds to fund projects such as the construction of the Mountain Road in Virginia and the purchase of cannons for Philadelphia. However, despite their popularity in the early 18th century, they were often controversial because of their potential for corruption and deception. In fact, in the nineteenth century, many states outlawed them. Despite these drawbacks, lottery sales are still very popular in the United States. In fiscal year 2003, Americans spent $44 billion on lottery games. The odds of winning a major lottery are pretty slim. In a classic 6-of-49 lottery, for instance, you have to match all six numbers drawn to win the jackpot. The odds of that happening are 1 in 13,983,816 (the odds in multi-state games such as Mega Millions are far longer still). You may be thinking that you should play a smaller-sized lottery game or try to improve your odds with some strategies. But the truth is that the chances of winning a small-sized lottery are just as slim. If you do decide to play a small-sized lottery, you should do so only with a trusted friend or family member. They should be able to give you advice on how to increase your odds.
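The 1-in-13,983,816 figure quoted above is simply the number of distinct 6-number combinations that can be drawn from 49 balls, which can be checked directly:

```python
import math

# Count every possible 6-number ticket in a 49-ball draw: C(49, 6).
combinations = math.comb(49, 6)
print(combinations)  # 13983816

# Probability that a single ticket hits the jackpot.
p_win = 1 / combinations
```

Buying more distinct tickets raises the odds exactly in proportion, which is why no selection "strategy" changes the underlying probability.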
They should also be able to give you an honest assessment of whether or not the game is worth playing. This can help you make an informed decision and avoid becoming involved in a lottery scam. Getting involved in a lottery scam can cost you a fortune and may end up with you losing your money and your reputation. You should always seek out professional advice before participating in any type of a lottery scam. In addition, you should also look into the tax implications of your winnings. Most US lotteries take out 24 percent of the amount you win to pay federal taxes. And if you win a lot of money, you will have to pay state and local taxes as well. The average winner chooses a lump sum over annuity payments when they win a large amount of money. The lump sum payment will generally be larger than the annuity payments, but you’ll have to pay more in taxes.
Extreme rainfall events have caused major flooding issues in Minnesota, Wisconsin and Iowa this year, and it's part of a growing trend in the Upper Midwest. According to FiveThirtyEight, heavy rain events have been increasing over the last 100 years in Minnesota, including the number of "mega-rain" events, which are defined as storms that produce six inches of rain over at least 1,000 square miles – with at least eight inches of rain recorded in the center of the coverage area. According to the Minnesota Department of Natural Resources, Minnesota has documented 11 mega-rain events since 1973. It notes: "Of these 11 events, two were in the 1970s, one was in the 1980s, none were in the 1990s, but four occurred in both the 2000s, and the 2010s (still underway). Thus, the 18 years from 2000-2017 have seen nearly three times as many mega-rains as the 27 years spanning 1973-99." As it turned out, FiveThirtyEight's story was published on May 17, about a month before northeast Minnesota and northwest Wisconsin were hammered with 5-15 inches of rain that caused extreme flash flooding in area rivers, creeks and streams. The non-stop rain over the weekend of June 16-17 caused the Nemadji River south of Superior to rise 25 feet in just two days. Countless roads in the area were closed due to water covering roadways or completely washing sections of roads away. MPR meteorologist Paul Huttner said that the 15 inches of rain that fell in Drummond, Wisconsin were about two inches more than needed to qualify as a 1-in-1,000-year rain event. Then, this past weekend, extreme rainfall plagued northern Iowa, with a football field in Forest City showing the power of heavy rain when it started to bubble up, taking on a jello-like composition after eight inches of rain fell in about an hour. Not only are prolific rains becoming more common, they're changing the way engineers plan for the future. Read the full FiveThirtyEight article for more on that.
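The DNR's "nearly three times as many" claim follows directly from its decade counts:

```python
# Mega-rain counts by decade, as reported by the Minnesota DNR above.
mega_rains = {"1970s": 2, "1980s": 1, "1990s": 0, "2000s": 4, "2010s": 4}

recent = mega_rains["2000s"] + mega_rains["2010s"]                  # 2000-2017
earlier = sum(mega_rains[d] for d in ("1970s", "1980s", "1990s"))   # 1973-99

print(recent, earlier, round(recent / earlier, 2))  # 8 3 2.67
```

Eight events in 18 years versus three in 27 years is about a 2.7x jump in raw counts, and a roughly fourfold increase in the per-year rate.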
The DNR notes that climatologists believe the increase in mega-rain events is due to global warming, which provides storms with more moisture to work with.
Lately there have been strong demands by the people for creating solar cell manufacturing facilities in the country. What many don't know is the fact that these facilities existed in Pakistan in the 1980s. At that time, through the efforts of the then government, and the cooperation and financial grants of the UN, Pakistan established the first independent institute, called the National Institute of Silicon Technology (NIST), in Islamabad. In that institute, for the first time, facilities for all 12 stages of solar cell manufacturing were created. After a short while, Pakistan came into the limelight for commencing the manufacture of solar cells at a startling price that was 30 percent cheaper than the world market. Most of the starting machinery for this endeavour was successfully acquired, and the concerned scientists were only waiting for the government's permission to start this venture in Pakistan at a price so competitive that some countries even came forward in advance to place orders for the purchase of such solar cells made in Pakistan. At that time it was quite easy to grab this opportunity of becoming a pioneer of this manufacturing industry. The country had already attained a respectable reputation, and the facilities created at the National Institute of Silicon Technology could have helped the country accelerate the manufacturing process. However, the country missed this golden opportunity. The details of this now abandoned project would still be available in the records resting on the high shelves of government offices. These could even now be easily accessed for learning and getting to know more about this project. Since it already has a detailed plan, the government must try again to produce solar cells in the country. And this time, it must show more dedication. Dr Atique Mufti (Islamabad)
We ought to acquire some sense of the size of the British Empire during Kipling's day. During the late eighteenth century, at the time when the first authors we are considering in this course were at work, the British Empire already included England, Scotland, Wales, Ireland, and large parts of India and North America — including of course the thirteen American colonies which revolted in 1775, and, in a war lasting until 1783, eventually won their independence. Loyalists who emigrated from the new United States to Canada greatly increased the population there, however, and by 1788 the settlement of New South Wales in Australia by transported convicts (who could no longer be sent to the United States) was well underway. England's wars with the French under Napoleon (1799-1815) added more territory — most notably the Cape of Good Hope in Africa — to the Empire, and various treaties with France and with other nations in succeeding years resulted in the acquisition by the English of islands and territories in the Caribbean, the Mediterranean, and the Pacific: they gained, for example, control of Trinidad, Ceylon, Tobago, Mauritius, St. Lucia, Malta, Malacca, and Singapore, and the British East India Company, by hook and by crook (which is to say by conquest and by treaty), continued to gain control of more and more of India. New Zealand came under formal British control in 1840, though the native Maoris resisted — as the Boers did in South Africa in 1881 and 1899-1902, and as the Mahdi did in 1884 in the Sudan. Fiji was acquired in 1874, Papua in 1884, and Tonga in 1900. After the Great Mutiny in 1857, the authority of the East India Company was superseded by direct government control, which was extended over the Punjab, British Baluchistan, and Burma. British influence expanded in the Mediterranean, and among the sheikdoms of Arabia and the Persian Gulf.
Through Cyprus, Gibraltar, and Malta, Britain maintained a link through the Mediterranean with India. In the Far East the Empire developed the federated Malay states, established itself in Borneo, and acquired Hong Kong and trading rights in territories on the Chinese mainland and at Shanghai. In Africa after 1882 Britain ruled Egypt and controlled the Sudan and parts of Nigeria: various British companies operated in and effectively controlled what are now Kenya, Uganda, Zimbabwe, Zambia, and Malawi. By the end of the nineteenth century, then, during the period when Kipling became the great Imperial poet, propagandist, and apologist — or, as George Orwell put it, "the prophet of British Imperialism during its expansionist phase" — the Empire extended over nearly one quarter of the land surface of the world, and included rather more than a quarter of the world's population. Nearly all of the component parts of the Empire, however, either before or after the World Wars, would eventually acquire some degree of political autonomy (Canada, Australia, New Zealand, and South Africa, for example, became "dominions" rather than colonies) and, eventually, independence: after the end of the Second World War the Empire had effectively ceased to exist. Last modified 14 April 2012
By: J Martinez-Alier

Contents:
1. Introduction
2. Modern Agriculture: A Source of Energy?
3. The History of Agricultural Energetics: Podolinsky
4. Eduard Sacher's Formulation of Podolinsky's Principle
5. Rudolf Clausius: On the Energy Stocks in Nature
6. Patrick Geddes' Critique of Economics
7. The Carrying Capacity of the Earth, According to Pfaundler
8. Henry Adams' Law of Acceleration in the Use of Energy
9. Soddy's Critique of the Theory of Economic Growth
10. Lancelot Hogben vs. Hayek
11. Methodological Individualism and Intergenerational Allocation
12. Max Weber's Chrematistic Critique of Wilhelm Ostwald
13. Ecological Utopianism: Popper-Lynkeus and Ballod-Atlanticus
14. The History of the Future
15. Political Epilogue.
Where else can Einstein chat up Walt Disney or Pocahontas swap wilderness tips with Steve Irwin? Why, at Saint Edward's Second Grade Biography Day, of course! Second graders capped off their biography study with a presentation of their chosen historical figure. Mrs. Voyles and Mrs. Newman's second-grade students prepared for Biography Day as they do each year by each reading a biography of a famous person, then writing a report telling of the person's life and giving an opinion of the book. Students had the opportunity to dress as their chosen subject and learn about the lives of many famous people through their classmates' portrayals. Students learned about history and humanity and became inspired to make a difference in the world.
7. Geothermal energy (heat from the subsurface) Where does the heat come from that causes even rocks to melt deep underground? How could it be used? And where does near-surface ground heat get its energy from? Deep and near-surface geothermal energy are indeed very different. But they have one thing in common: whether they can be used safely and profitably depends on the geological nature of the ground.
This video discusses a minimum valid HTML page. - [Instructor] First off, you need to know how a web page works. D3 creates graphics within a web page, so you can't have D3 without HTML. An HTML page can be edited in a text editor. Here we've got a text editor on the left, and a browser on the right. If the contents of the HTML file conform to a set of rules, browser software can interpret the contents, and transform your HTML code into a web page with images, tables and text. The basic rules of an HTML page are: a DOCTYPE declaration, HTML tags, head tags, and body tags. A "tag" is a hidden keyword contained within angle brackets, which tells the browser what to display, or how to display it. There are opening tags, which have angle brackets, and closing tags, which have angle brackets and a forward slash. The closing tag tells the browser to end the element given by the opening tag. We also need a character set, and title, to make our page complete. Now technically in HTML5, you could omit some of these tags. But the structure is going to be useful for us within the course. So this is the smallest valid HTML file we're going to be working with. The head tags contain information about the page, and they're not shown in the browser. The body tags contain the bits we see, called elements. As our body tags on the left here have no content, the web page appears blank. When we add code in between the body tags, and view the file in a browser, the browser has to work out how to arrange the parts. It will draw the first element top left, and the browser will then draw following elements to the right and down. So web pages are always drawn by the browser from top left, to bottom right, following formatting rules that we supply, plus their own assumptions. Different browsers, such as Chrome and Firefox, often apply slightly different rules, and therefore display elements differently, much to the annoyance of web developers. Now we've added some content to our body tags.
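A minimal page with all of the pieces the narrator lists — a DOCTYPE, html tags, head tags (with a character set and title), and body tags — might look like this (the title text is just a placeholder):

```html
<!DOCTYPE html>
<html>
  <head>
    <meta charset="utf-8">
    <title>My page</title>
  </head>
  <body>
    <!-- visible elements go here; with an empty body the page is blank -->
  </body>
</html>
```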
We've got "p tags", which means paragraph. And then we've got some "div tags", which means division. HTML elements fall into two categories, block and inline. Some elements can take either form, but all elements default to one or the other. Block elements take up a rectangular area within the page, and include things like images, tables, paragraphs, and divs. Block elements always start on a new line, and default to taking up the full width available. Inline elements occur within a block of text, usually a paragraph, and include things like hyperlinks and spans. Inline elements don't start on a new line, and they only take up as much space as necessary. You can see here that we've placed p tags, a paragraph, at the top of our page, i.e., as the first set of tags within the body tags. Paragraph is a block element, so those four lines on the right, taken together, take up a rectangular space in the browser window. You see that the small red box is covering part of the text. Let's say we formatted our span element, so that it highlights a word in red. We don't need to give any position or size information to place this red box. The browser works it out when it types out the text. So, if we were to double the font size, the red shading would still appear around the correct word. This is true of all inline elements. Looking at the text editor again, you can see that we've placed a number of block elements under the paragraph. In this case, we have five divs. Each div is a block element. With no formatting, these divs have used default settings, and scaled to the height of their contents, and the full width of the page. We can change the width and position of block elements to produce any kind of layout. With formatting, not shown here, we use the same HTML to produce a different layout for our divs, making the blocks different sizes, and arranging them in different locations. These concepts will become important shortly, when we start looking at an HTML element called SVG.
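The layout the narrator describes — a paragraph containing an inline span highlighted in red, followed by several block-level divs — can be sketched like this (the inline style and the div text are illustrative, not from the course files):

```html
<body>
  <p>
    Some running text with one
    <span style="background: red;">highlighted</span>
    word; the span takes up only as much space as its contents.
  </p>
  <!-- each div is a block element: it starts on a new line and
       defaults to the full width available -->
  <div>First block</div>
  <div>Second block</div>
  <div>Third block</div>
  <div>Fourth block</div>
  <div>Fifth block</div>
</body>
```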
You can't create a D3 graphic without an SVG element.
Pacific Northwest farmers could someday be filling up their machinery's tanks with fuels produced from their own fields, according to ongoing research by U.S. Department of Agriculture (USDA) scientists. Since 2003, Agricultural Research Service (ARS) microbiologist Hal Collins and agronomist Rick Boydston have been studying safflower, camelina, soybeans, mustard, canola, wheat, corn and switchgrass to assess their potential for bioenergy production. ARS is USDA's chief intramural scientific research agency, and this research supports the USDA priority of developing new sources of bioenergy. Collins and Boydston both work at the ARS Vegetable and Forage Crops Research Laboratory in Prosser, Wash., where they've found that the oilseed crops in their studies could someday help supply Washington State with renewable fuels. For instance, they've found that canola, which is already established as a summer crop in the Pacific Northwest, can also be grown in the winter both as a cover crop and potentially as a biofeedstock crop, because its seeds are around 40 percent oil. Their results also suggest that it could take anywhere from 50 to 70 acres for a farmer with 1,000 acres and an onsite crusher and biodiesel facility to grow enough canola to produce the fuel needed to run on-farm operations. The team also found that in field trials, camelina plants produced an average of 2,000 pounds of seeds per acre in 80 days, which translates into 700 pounds of oil—and eventually 93 gallons of oil—per acre. Safflower plants, meanwhile, produced around 3,000 to 3,500 pounds of seeds per acre, and white mustard seed meal could also be used as an organic fertilizer after the seeds were crushed to extract the oil for fuel.
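The camelina figures quoted above are self-consistent: 2,000 pounds of seed yielding 700 pounds of oil implies roughly 35 percent oil content, and 700 pounds becoming about 93 gallons implies a density near 7.5 pounds per gallon. A quick check — note the 35 percent and 7.5 lb/gal figures are inferred from the article's numbers, not stated by ARS:

```python
seed_lb_per_acre = 2000   # camelina seed yield reported in the article
oil_fraction = 0.35       # inferred: 700 lb oil / 2,000 lb seed
oil_lb_per_gal = 7.5      # assumed density of the pressed oil

oil_lb = seed_lb_per_acre * oil_fraction   # 700 lb of oil per acre
gallons = oil_lb / oil_lb_per_gal          # ~93 gallons per acre

print(f"{oil_lb:.0f} lb of oil -> about {gallons:.0f} gallons per acre")
```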
Collins and Boydston also evaluated eleven switchgrass cultivars in their studies and found "Kanlow" to be the most promising cultivar for maximum production under sustainable irrigation strategies in the Pacific Northwest's Columbia Basin. Four years after the team planted the first crop, they measured yields of 14 dry tons per acre, which could translate into around 1,000 gallons of cellulosic ethanol per acre. Read more about this research in the February 2011 issue of Agricultural Research magazine.
The Health Insurance Portability and Accountability Act (HIPAA) protects the privacy of individually identifiable health information. Learn more by clicking on the links below.

What does the Privacy Rule require?

The Privacy Rule prohibits the use or disclosure of “protected health information,” or PHI, unless the patient has signed a specific authorization. PHI is defined in the Privacy Rule as any health information created or received by a health care provider that: (1) identifies an individual; and (2) relates to that individual’s past, present, or future physical or mental health condition or to payment for health care. Protected health information includes information in any form or medium, from a paper medical record to a fax authorization or referral to a conversation between colleagues consulting on the care of a patient.

An authorization is not required for the following, provided the patient has acknowledged receipt of a Notice of Privacy Practices:
- To treat the patient
- To get paid for services
- To conduct health care operations (for example, quality assurance, credentialing, audits, compliance monitoring)
- Patient information also can be given to patient caregivers (for example, family members), but only if the patient expressly or impliedly consents.
- Certain disclosures also can be made by a health care provider without patient authorization to accomplish public policy objectives (for example, to report child or elder abuse).

Any other disclosure (such as for research, fundraising or marketing) may only be made if the patient specifically authorizes the disclosure in writing. An authorization is a customized document that requests permission from the patient to use protected health information for specific purposes and for a specific time period. As a general rule, even if a disclosure is permitted under the Privacy Rule, it must be limited to the minimum amount of information necessary.
The HIPAA Privacy Rule also gives patients expanded rights to access their medical and billing records, request amendments to those records, and obtain an accounting of disclosures of protected health information.

HIPAA Privacy Education Completion Reports **Updated as of May 21, 2017**

USC HIPAA Guidance

Interested in learning more about the HIPAA Privacy Rule? Try these resources:
- What is a Business Associate?
- Notice of Privacy Practices - Why it is important
- Minimum Security Standards for Electronic PHI
- HIPAA Privacy Rule and Sharing Information Related to Mental Health - Guidance from the US Department of Health and Human Services, February 20th, 2014
- When Federal Privacy Rules and Fundraising Desires Meet - An advisory on the use of Protected Health Information in Fundraising Communications
What is CRI? CRI is a measure of a light source's ability to show objects' colors realistically or naturally, compared with their true color under sunlight. High-CRI lights also put out a professional level of color all the time, a quality that used to be reserved for professional photographers or art galleries. Whether you're an artistic person or not, our eyes are sensitive to light quality and color. A red shirt lit directly with noontime sunlight will render much differently than if lit under a fluorescent bathroom light. The high CRI of our LED will help you identify objects in better light.

Why is the Sun-In-One LED with a 95+ CRI important to you?
- It displays all colors in almost natural light, making it easier to distinguish slight shades of colors and tones
- It shows the true color of meats, fruits, and vegetables in the grocery display case
- It enhances skin tones, hues, and textures, making rooms and the products in them look their best, converting foot traffic into sales
- It reduces eye strain in the office, workplace, or at school, increasing productivity and safety
- It makes it easier to distinguish colors in security camera footage
- It produces less light distortion at night when driving, and the colors you see are closer to real colors, which could shorten your reaction time while driving

What is CRI, and why do you want an LED with a high CRI? Color Rendering Index, commonly referred to as CRI, is a method we can use to measure how color looks to the human eye under a given light source as compared to the sun. The CRI provides a scale of values up to 100, with 100 being the best color-rendering light quality and a value below zero representing very poor color rendering. When a light has a CRI of 100, it means that there is no difference in color rendition between the light and the reference light (the sun). Likewise, a CRI of 75 means that the light bulb renders a 75% replication of the visible colors that the sun shows, given that both lights have the same color temperature.
That is the CRI of the average streetlight. Our eyes are sensitive to light quality and color. The higher the CRI value, the more accurate the colors will be.
Lava is rock which has been heated to a temperature of roughly 1,300 - 2,200 °F (700 - 1,200 °C) and flows over a planet's surface after escaping the planetary crust. The most common locations of lava activity are volcanoes, which can force lava hundreds of feet into the air during a strong eruption. Lava can be composed of many different types of minerals, and can have different viscosities depending on the composition and temperature. The three basic types of lava are:

Silicic: This variety, also called Felsic, is very viscous and has a comparatively low temperature. Its usual mineral composition includes aluminium, silica, potassium, calcium, and sodium. This mineral balance causes its high viscosity, and keeps the temperature at around 1,200 - 1,400 °F, although unusually hot Silicic lava flows have occurred.

Andesitic: Andesitic, or Intermediate, lava is lower in aluminium and silica and richer in magnesium and iron. This allows temperatures of 1,400 - 1,800 °F and gives it lower viscosity.

Basaltic: Also called Mafic, this lava is usually 1,800 °F or above. It is even lower in aluminium and silica, and even higher in iron and magnesium, than Andesitic lava. Because of the mineral composition and the extreme heat, viscosity is low, allowing for extensive lava flows.

Lava In Games

Rivers of lava continue to be a staple environmental hazard throughout many video games. Often encountered in platforming titles, lava is usually a death-upon-contact hazard, as it is a commonly understood danger. It features prominently in "fire levels", another staple of platform and exploration games like the Super Mario Brothers and Legend of Zelda series. It is also often used to keep players contained to the intended gameplay area, such as in Quake and Unreal.
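The paired Fahrenheit and Celsius ranges quoted above can be sanity-checked with the standard conversion formula C = (F - 32) x 5/9:

```python
def f_to_c(f: float) -> float:
    """Convert degrees Fahrenheit to degrees Celsius."""
    return (f - 32) * 5 / 9

# The article's overall lava range: 1,300 - 2,200 °F
print(round(f_to_c(1300)))  # 704, matching the quoted "roughly 700 °C"
print(round(f_to_c(2200)))  # 1204, matching "roughly 1,200 °C"
```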
Lava can have other uses in games too, beyond simply being an insta-kill trap or impassable barrier:
- In Terraria, lava's liquid flowing properties can be exploited by a savvy player should they dig far enough down to encounter some. Normally, this involves digging tunnels under one's feet to funnel the lava to a less crucial area so that the player might pass. The player can also use flowing lava to eliminate some enemies (the enemies in the deeper Hell regions are immune to this tactic, however). Lava can also be combined with water to create Obsidian, a very durable resource that requires a high-level pickaxe to mine. Should enough water simply be poured over a large pool of lava, the entire surface of that pool becomes Obsidian and is therefore safe to walk over.
- In Dwarf Fortress, another game based on world-crafting, lava can be used as an effective siege weapon should the player manage to create a sluice system and position it over the entrance to their fortress. Since lava destroys enemies, dwarves, resources and greenery with equal prejudice, proper care must be taken to ensure this lava trap does not get out of hand once released. Such ingenuity is simply yet another way to wipe out the population of a dwarf fortress.
- In Populous and many other God Sim games where the player controls a deity with power over nature, lava and volcanoes can be summoned as a powerful destructive force to eliminate enemies. As with Dwarf Fortress, the sheer devastation left behind can make rebuilding the same area quite dangerous.
- From Dust uses the tropes of many God Sim games to create its puzzles. Along with many other types of land features, the player is able to use lava from any local volcano. Lava has the property of rapidly cooling into immutable stone once removed from its source, creating natural barricades to prevent tsunamis from washing away villages or people.
Dropping the lava ball in or near a village will destroy it, however, so precision is required when crafting these walls.
- In the Metroid series, lava is a natural trap until Samus receives the Varia Suit, which either eliminates or vastly reduces the harm lava and extremely high temperatures would normally cause to her. In this case, instead of being the usual permanent insta-kill barrier, it is simply another obstacle that can be overcome with the correct technology.
<urn:uuid:d51291e2-792d-4816-8f50-52ee53b0b136>
CC-MAIN-2023-23
https://www.giantbomb.com/lava/3055-220/
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224648858.14/warc/CC-MAIN-20230602204755-20230602234755-00156.warc.gz
en
0.94197
894
3.640625
4
Romulus And Remus
A selection of articles related to Romulus and Remus. Original articles from our library related to Romulus and Remus. See Table of Contents for further available material (downloadable resources) on Romulus and Remus.
- Our Pagan Village: The Importance and Pursuit of Honor - Candlelight flickers over the Beltaine revels. Food is laid out in the circle for the feast. Only one rule – no one can feed themselves. Each is dependent on friends and loved ones for sustenance, joy and delight. After an hour of laughter and revels and way... Paganism & Wicca >> Daily Life
- What is hypnotic trance? Does it provide unusual physical or mental capacities? - 2.1 'Trance;' descriptive or misleading? Most of the classical notions of hypnosis have long held that hypnosis was special in some way from other types of interpersonal communication and that an induction (preparatory process considered by some to be... Parapsychology >> Hypnosis
- Pagan Mythology - Is the traditional story presented as an historical event that serves to illustrate part of the world view of a people or explain a practice, belief, or natural phenomenon. The mythological beliefs a culture shares gives shape to its actions and choices.... Paganism & Wicca >> Holidays
- The Religious Experience: A Wiccan Viewpoint - What is religion? Religion is a set of beliefs which allow us to understand and categorize our world and our place in it. A set of beliefs which define our culture, our expectations, our views of people and behaviors we expect. I have found several different... Religion & Philosophy >> Religions
- Survivalists' Guide for the New Millennium: Chapter 6 - AS THE WORM TURNS Health and well being are part of the natural birthright of the human being. With all of its organs intact, the right diet, exercise and mental focus, a human body can overcome any disease. Even so, the effects of living in this... 
Philosophy >> Survivalists Guide for the New Millennium
- Bringing it Down to Earth: A Fractal Approach - 'Clouds are not spheres, mountains are not cones, coastlines are not circles, and bark is not smooth, nor does lightning travel in a straight line.' B. Mandelbrot We want to think about the future - it's our nature. Unlike other creatures, humans possess an... Mystic Sciences >> Astrology
Romulus And Remus is described in multiple online sources; in addition to our editors' articles, see the section below for printable documents, Romulus And Remus books and related discussion.
Suggested Pdf Resources
- Romulus and Remus Lesson Plan - To understand the difference between a legend and a true story.
- Livy and Ovid on Romulus and Remus: Fratricide Livy 1.6.3-1.7.3 - Livy and Ovid on Romulus and Remus: Fratricide. Livy 1.6.
- Romulus/Remus Data Sheet - Romulus/4 and Remus/2 are inexpensive PCI cards that per port.
- Facts about Rome - Romulus & Remus. 15. Romulus and Remus were legendary twins - who founded the city of Rome.
Suggested News Resources
- Charter government changed Phoenix - 20, 2011 12:43 PM Rome has its Romulus and Remus; Manhattan has its Peter Minuit and his $24 worth of beads. The first is the well-known story of Jack Swilling and Darrell Dupa - our Romulus and Remus - and the canals and our eponymous renascent bird.
- The Roma regime hopes for patience - In Serie A, Roma look like they are borrowing so heavily from the manual of the European champions they might as well replace Romulus and Remus on the club's emblem and put there instead something with blue and ruby stripes.
- Blood and Honor: 'Julius Caesar' at the Park Ave. Armory - (RSC) opens with a savage prequel: against a back projected image of the Capitoline Wolf (a much reproduced bronze sculpture of a wolf suckling Romulus and Remus) two feral young men gouge, maul, and bite one another until one finally succumbs. 
- Bridge Banter: Chelsea's revolving door stays unlocked - Romeu and young striker Romelu Lukaku were formally unveiled this week, with one Chelsea wag dubbing them Romulus and Remus, and predicting that the Blues are bound to meet Roma in the Champions League.
- Rough, visceral and passionate 'Julius Caesar' - It begins with a snarling battle to the death between the vicious, nearly naked Romulus and Remus, mythical wolf-raised brothers who allegedly founded Rome.
Suggested Web Resources
- Romulus and Remus - Wikipedia, the free encyclopedia - Romulus and Remus are Rome's twin founders in its traditional foundation myth, although the former is sometimes said to be the sole founder.
- Romulus and Remus - According to Roman mythology, the founders of Rome were Romulus and Remus.
- Romulus and Remus - What is the legend of Romulus and Remus? Romulus and Remus were twin brothers.
- Legend: Romulus & Remus - Ancient Rome for Kids - The ancient Romans loved to hear the story of Romulus and Remus.
- Romulus & Remus, The Story of - YouTube - Nov 18, 2008 Romulus killed Remus; that's why the city is called 'Rome'. If Romulus didn't kill Remus, Remus would've killed Romulus.
<urn:uuid:f0f9bb47-06b7-4a14-b578-54fd6831874d>
CC-MAIN-2014-15
http://www.realmagick.com/romulus-and-remus/
s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00453-ip-10-147-4-33.ec2.internal.warc.gz
en
0.905679
1,245
2.515625
3
Healthy air quality
What to look for in assessing air quality and how to achieve a healthy level.
Wed, Nov 09, 2011 at 01:56 PM
Your eyes are watering, your throat is dry and itchy, your head hurts and you're finding it hard to breathe. If these symptoms last longer than the typical cold — and you don't normally suffer from allergies — they may be signs of poor indoor air quality. Whether at home or at work, persistent exposure to pollutants in the air can have serious effects on your health. How do you achieve healthy air quality? Here's what to look for, and a few tips for cleaner, more breathable air.
Causes of poor indoor air quality
According to the United States Environmental Protection Agency, poor indoor air quality is associated with illnesses like asthma, hypersensitivity pneumonitis, and what's known as 'humidifier fever.' In addition to allergy-like symptoms, people who sit for hours in buildings with polluted air may experience unusual levels of fatigue, dizziness, nausea, irritability and forgetfulness. If symptoms of illness seem to abate when you leave your home or office, that's a strong sign pointing to air quality issues.
There are many factors that detract from healthy air quality indoors. In poorly ventilated structures, pollutants like asbestos, formaldehyde and other volatile organic compounds can build up in the air. These toxic compounds are emitted by products like cleaning supplies, air fresheners, insulation, carpeting, adhesives, office equipment and hobby products. Pollutants from combustion appliances like oil heaters, woodstoves and gas cookstoves can also be retained indoors. Improper ventilation not only prevents these pollutants from leaving the building, it can also introduce outdoor pollution like automobile exhaust, boiler emissions and fumes from dumpsters into the air inside due to poorly located air intake vents.
How to achieve healthy air quality
First and foremost, check your ventilation systems. 
Have a professional inspect and service your home's HVAC system on a regular basis as well as any ventilation associated with appliances, including your chimney. While having a tightly sealed home is great for conserving energy, you should ensure that the air within your home is refreshed on a regular basis. Use window or attic fans when weather permits, and install bathroom and kitchen ventilation fans to push potentially polluted air directly outdoors. Limit pollutants inside your home by storing items like pesticides, paints and thinners, adhesives and fuels in a shed or garage. Choose non-toxic cleaning products and household items with no- or low-VOCs including furniture, finishes, carpeting, bedding and drapery. It’s also a good idea to grow an indoor garden. House plants like ficus, bamboo palms, pothos and peace lilies actively work to strip pollutants out of the air. These plants will not only beautify your space and bring in a little of the outdoors, but act as a natural air filter. If you live in an apartment, take steps to temporarily increase the ventilation indoors. Avoid blocking air supply vents, and open the windows every now and then to let in fresh air. Speak to your building management about following the EPA's Building Air Quality guidelines. If you're concerned about the air quality in your workplace, talk to your co-workers, supervisors and union representatives to see if others are experiencing similar adverse health effects and discuss possible solutions with your employers. If your building managers refuse to address the problem, you can call the National Institute for Occupational Safety and Health (1-800-35NIOSH) to learn about obtaining a health hazard evaluation of your workplace. Have other thoughts on healthy air quality? Leave us a note in the comments below.
<urn:uuid:7f86bb53-1ee2-48da-9930-e1ef22e924e1>
CC-MAIN-2014-23
http://www.mnn.com/health/healthy-spaces/stories/healthy-air-quality?magic_tabs_comment_callback_tab=0&referer=node%2F118095&args=a%3A1%3A%7Bi%3A0%3Bs%3A6%3A%22118095%22%3B%7D
s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997859240.8/warc/CC-MAIN-20140722025739-00242-ip-10-33-131-23.ec2.internal.warc.gz
en
0.934107
773
2.953125
3
The Internal Conflict of Roderick Usher
In Poe's short story "The Fall of the House of Usher," Roderick Usher faces a very obvious internal conflict that is the result of the intimate relationship he and his sister shared. They are twins, and the last people in their family, which has been incestuous in the past. Roderick Usher and his sister rely on each other to keep the Usher family in continuation, and when Usher buries his sister before her obvious time, he feels guilt for the crime he committed, which adds to his personal conflict. "I had learned, too, the very remarkable fact, that the stem of the Usher race, all time-honored as it was, had put forth, at no period, any enduring branch; in other words, that the entire family lay in the direct line of descent, and had always, with very trifling and very temporary variation, so lain" (Poe 719). Roderick Usher and his sister were incestuous, and thus were the last two members of the family. Madeline is Usher's last relative on Earth, and Usher is upset knowing that his lifelong companion will soon be no more. When Madeline embraces him after clawing her way out of the tomb, she kills both herself and her brother. He goes into an obvious depression, which eventually turns into an outrage on the narrator. He admitted, however, although with hesitation, that much of the peculiar gloom which thus afflicted him could be traced to a more natural and far more palpable origin-to the severe and long-continued illness-indeed to the evidently approaching dissolution-of a tenderly beloved sister; his sole companion for long years-his last and only relative on earth. His internal conflict is directly fueled by the fact that his sister is very sick, and he knows she will die soon. He knows he's buried his sister alive, and when they hear a "scratching" at the door, they turn around to see Madeline, who obviously went through a struggle to escape. 
At this point, Madeline jumps to embrace her brother, and they both fall to the floor, dead. Throughout the short story, Roderick Usher is battling an intense internal conflict, which was fueled by his sister, their relationship with each other, and the dependency on each other to survive. When they both pass together, they're putting the other at ease, knowing that one half won't ever have to survive without the other again. Since their family has been known to be inbred, they have a sibling bond, but also have a deeper bond because they've relied on each other for survival for many years. Once Madeline is buried alive, Roderick Usher realizes that he's buried his one true love alive, and that she will soon be dead. This fuels his internal conflict because now he's not only struggling with their incestuous history, but her untimely death as well. Secretly, Usher knows that his sister isn't completely dead, but wants to bury her before her time so he doesn't have to witness her actual death.
<urn:uuid:ea3a56b5-4f75-4d37-bae0-6b7949a24b47>
CC-MAIN-2014-10
http://www.megaessays.com/viewpaper/97993.html
s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394021389272/warc/CC-MAIN-20140305120949-00092-ip-10-183-142-35.ec2.internal.warc.gz
en
0.987242
666
3.203125
3
2 Department for Transport's expenditure
Presentation and information
4. Government finances are notoriously complex, with distinctions between different forms of expenditure (for example, resource and capital spending; departmental expenditure limits and more volatile annually managed expenditure) and complicated adjustments to the figures to ensure that cash and resource budgets and outturns can be kept in line. In the past there were different bases for expenditure plans and outturn figures, which made it difficult to compare planned spending in one year with what actually happened. There has been some simplification in recent years, as a result of the Treasury's Alignment project. Last year we welcomed the intention of Rt Hon Philip Hammond MP, the then Secretary of State for Transport, to go beyond the requirements of the project to simplify the DfT's annual budget (its Main Estimate). We saw the first fruits of this work with the publication of the DfT's annual report and accounts for 2010-11, which includes a three-page table showing outturn and estimated expenditure for the period from 2005-06 to 2014-15, itemised by some 20 comprehensible categories. Figure 1 uses this information to show the department's overall spending profile for the decade. We commend the Department for Transport for simplifying the structure of its Main Estimate and publishing detailed information about spending for the 2005-15 period, which enables us to see more clearly where the department spends money and how spending trends over time.
Figure 1: DfT expenditure 2005-15. Notes: Annual Report and Accounts 2010-11, pp35-37. Figures are provided in cash terms (ie without accounting for inflation). The 2010-11 figures are estimated outturns. Figures for subsequent years are for planned expenditure.
5. Each year the DfT provides us with a memorandum to explain its annual Main Estimate; we publish the memorandum for 2011-12 with this report. 
These have become more useful documents, with clearer explanations of how the figures have changed. However, there remains a problem with inadequate explanation of in-year budget changes. Three recent examples stand out. Firstly, after the 2010 election the new Government reduced the DfT's budget by £683 million, as part of a wider programme of cuts in public expenditure ahead of the Spending Review. Information about where those cuts would be made was released in response to a parliamentary question, rather than proactively by the department. 6. An even more striking example occurred at the time of the 2011 Budget. £300 million of new DfT spending was announced, comprising £100 million in grants to local authorities for road maintenance and £200 million in rail projects. This was in addition to another £100 million in road maintenance grants announced in February 2011. The DfT attributed the new commitments to "savings" but no further explanation was offered. We wrote to Mr Hammond, the then Secretary of State, on 26 April 2011 to ask where and how these savings had arisen and had to wait until 21 July to receive a reply. This explained that the savings had arisen as:
- £336m from "successful
- £273m from efficiencies, such as reduced dependence on consultants
- £229m because buoyant rail demand had reduced subsidy payments to train operating companies
- £94m from other rail budgets
- £29m from the early sale of HS1
Mr Hammond concluded that: Overall the Department spent £1,029 million less than originally planned in 2010/11, of which £486 million was recycled into transport initiatives and £543 million surrendered to the Treasury. We return to the underspend issue below, but it is worth noting that without our intervention the DfT would not have been obliged to explain how it could make new spending commitments in March 2011. 7. 
Finally, a number of new transport infrastructure projects were announced by the Chancellor in his 2011 autumn statement, including 35 new road and rail schemes. In addition, it was announced that regulated rail fares would increase by RPI + 1% in January 2012, not RPI + 3% as had been intended at the time of the Spending Review. The Treasury's autumn statement document contained some useful information about the costs associated with these changes, but questions remain about whether the DfT had been given additional funding by the Treasury and about the profile of the spending to 2014-15 and beyond. 8. In oral evidence the Secretary of State said that the DfT had been awarded £1.5 billion additional funding over the spending review period. In addition, it was suggested that the DfT had again been able to recycle budgetary underspends to fund new initiatives. Numerous parliamentary questions have been tabled to find out more information about the spending profile for individual projects, with limited success. We wrote to the Secretary of State on 16 January to request further details about some of the announcements in the autumn statement and received her reply on 6 February, the day before this Report was agreed. 9. In our view, the DfT does not provide Parliament and the public with adequate information about in-year changes in its budget. Cuts have been announced without an explanation of where they would fall and new spending commitments have been made without proper explanation of how they have been funded. We recommend that when the DfT makes an announcement to Parliament about a change to its budget it should explain the effects of the change on specific budget lines and, where a new spending commitment is involved, provide an explanation of how the money has been found. 
If the details have not been finalised at the time of the headline announcement the department should indicate when it will be in a position to provide those details and make a written statement at a later date. 10. Returning to the 2010/11 underspend, we were surprised to learn that the Department had ended up in a position where it was required to return over £500 million to the Treasury. This is more than the estimated cost of the entire Northern Hub project and is also likely to have exceeded the total reduction in annual revenue for the English bus industry following the Spending Review. Put another way, the DfT accepted a cut to its in-year budget of £683 million and then underspent on its revised budget by over £1 billion, calling into question whether the in-year cut was necessary. 11. The Secretary of State said that in the event of an underspend she would "look at the scope for bringing forward projects" but would "not just ... spend money at year-end on projects that I do not think will add value". Her predecessor spoke of "lessons for the Department to learn" following the underspend. We also note that the Department's accounts were qualified by the Comptroller and Auditor General because more income was received from train operating companies than the limit for this set by Parliament. This was a technical error, but one which suggests that budgetary control at the Department has been slack. Money voted by Parliament for expenditure on transport should be spent on transport, not handed back to the Treasury. We will be watching the Department's performance in this area carefully to check that the lessons Mr Hammond referred to have been learnt. 12. The debate about how much DfT expenditure is incurred in London compared to the rest of the UK was further stoked by the publication in December 2011 of a report by IPPR North which claimed that 84% of planned new infrastructure spending was aimed at London and the south east, compared to just 6% in northern England. 
The average spend per head in London works out at £2,731 compared to a miserly £5 per head in north east England. IPPR North concluded that this analysis "betray[ed] the government's ongoing failure to take seriously the importance of spatial rebalancing". This report chimes with the conclusions of research by pteg which found that a total of £774 is spent on transport for every Londoner, compared to less than £300 spending per head in Yorkshire and Humberside, the West Midlands and north east England. 13. Responding in the House to the IPPR report Norman Baker MP, Parliamentary Under-Secretary of State at the DfT, said the report's analysis was "not complete; it did not, for example, include the December announcements on local major projects and did not take into account the further £1 billion from the regional growth fund. It is not a complete analysis". He later pointed out that "it can be difficult and misleading to assign spend to a particular region as the benefits of certain projects can be far more widespread" and published a breakdown by region of spending on schemes announced as part of the autumn statement and on local major transport schemes. In a welcome move, the DfT also now publishes a regional breakdown of its overall spending in its annual report. According to this, spending in 2009-10 in London and south east England accounted for 32% of total identifiable UK expenditure. Spending per head in London was £170 compared to £140 in north east England and £120 in south west England. 14. There remain concerns that DfT spending, particularly on infrastructure projects, is unduly focused on London and south east England. We acknowledge, however, that calculating how spending is distributed between regions is complex and some projects may well benefit the nation as a whole. We consider that the DfT could do more to ensure that its expenditure plans involve a fair allocation of resources across the nation. 
We recommend that the DfT's next annual report and accounts includes a more comprehensive analysis of regional spend, including a fuller explanation of how its figures (which are drawn from National Statistics) are arrived at. In addition, we recommend that major new spending announcements, such as the Spending Review or recent autumn statement, should be accompanied by a comprehensive analysis of their regional impact.
Regional Growth and Growing Places
15. The DfT contributes to two inter-departmental funding schemes announced since the Spending Review.
REGIONAL GROWTH FUND
16. The Regional Growth Fund is a £1.4 billion fund which operates over three years from 2011 to 2014 and aims to "stimulate private sector investment by providing support for projects that offer significant potential for long term economic growth and the creation of additional sustainable private sector jobs". The fund is administered by the Department for Business, Innovation and Skills: the DfT has contributed £500 million.
17. In November 2010 the then Secretary of State, Mr Hammond, said: We have made a sizeable contribution from the transport budget to the regional growth fund, and I will be very disappointed if we don't get at least our money back, and preferably a lot more, in terms of transport projects. In October, Lin Homer, the DfT's then Permanent Secretary, struck a different note, arguing that the Fund was "not ring-fenced to absolute proportionality ... the previous Secretary of State did not go into that looking to get his third specifically spent on transport". The DfT subsequently told us that 21 transport schemes and bids had been approved as part of the first and second Regional Growth Fund announcements, and sent us the list. What remains unclear is how much Government money will be spent on these schemes. 18. 
In our view, there are important reasons for the amount of money allocated to the Regional Growth Fund by the DfT to be at least broadly equivalent to the value of the transport schemes the Fund promotes. The principle of parliamentary control over Government spending would be undermined if money which Parliament agreed should be spent on transport was in fact spent on something else. We recommend that the DfT provide us with details of how much it has contributed to the Regional Growth Fund and how that money has been, or is planned to be, used on transport schemes.
GROWING PLACES FUND
19. The Growing Places Fund is a joint initiative of the DfT and the Department for Communities and Local Government. Intended to tackle short-term constraints to infrastructure investment, £500 million is available for allocation in 2011-12. The DfT told us that its contribution to the fund was £125 million and that it was likely to be disbursed to groups of Local Enterprise Partnerships. This new fund has the potential to ensure that departmental underspends are used creatively rather than handed back to the Treasury, but we are concerned about whether a fair proportion of the fund will be allocated to transport projects. We recommend that the DfT explain how the Growing Places Fund is disbursed and what arrangements are in place to ensure that transport projects benefit in proportion to the DfT's contribution to the fund.
<urn:uuid:f69566f7-e884-4f4a-83a3-cc219ef4ed5d>
CC-MAIN-2014-10
http://www.publications.parliament.uk/pa/cm201012/cmselect/cmtran/1560/156005.htm
s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1393999642170/warc/CC-MAIN-20140305060722-00026-ip-10-183-142-35.ec2.internal.warc.gz
en
0.936854
3,353
2.796875
3
Certificate / Diploma courses
A Certificate or Diploma usually refers to a short course of study that provides the student with formal recognition of career competency in a single subject. They vary in length (some last a few months, some up to 2 years) and are offered at both the undergraduate and graduate levels. In some countries, such as Australia, Pakistan and India, a diploma is a specific academic award of lower rank than a bachelor degree (and in some areas an Advanced Diploma falls in between as well). Courses at Diploma, Advanced Diploma and Associate degree level take between two and three years to complete. Diploma and Advanced Diploma are titles given to more practical courses, while Associate degree is given to more academic courses. These courses are usually delivered by universities, TAFE colleges, community education centers and private providers.
<urn:uuid:a7fb8b7e-366b-4a60-8a21-5b253d9d0a21>
CC-MAIN-2013-20
http://edufindme.com/register/user/student
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705956734/warc/CC-MAIN-20130516120556-00085-ip-10-60-113-184.ec2.internal.warc.gz
en
0.963993
170
2.546875
3
Mitochondrial DNA variation within natural populations of D. subobscura is heavily dominated by the two haplotype groups HI (blue) and HII (red) in all three disjunct parts of its global distribution (white). Apart from populations on the Canary Islands, where other haplotypes (green) are common, the mtDNA haplotype frequencies are similar across populations, with on average 37% of all individuals carrying HI and 58% carrying HII haplotypes. This observation alone suggests that balancing selection is contributing to the maintenance of genetic variation in mtDNA. Several experimental studies have shown that flies carrying HI and HII differ in key life history traits, such as metabolic rate, development time, adult longevity and desiccation resistance. Distribution map reproduced with permission from Rodríguez-Trelles et al.
We hold these Truths to be self-evident, that all Men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the Pursuit of Happiness...
The Declaration of Independence

If it be a fundamental principle of free government that the Legislative, Executive and Judiciary powers should be separately exercised, it is equally so that they be independently exercised.
Remarks to the Constitutional Convention

The American continents, by the free and independent condition which they have assumed and maintain, are henceforth not to be considered as subjects for future colonization by any European powers.
The Monroe Doctrine

The moment you come to the Declaration of Independence -- that every man has a right to life and liberty, an inalienable right -- this case is decided. I ask nothing more in behalf of these unfortunate men, than this Declaration.
John Quincy Adams, The Amistad Case

By virtue of the power and for the purpose aforesaid, I do order and declare that all persons held as slaves within said designated States and parts of States are, and henceforward shall be, free...
The Emancipation Proclamation

A general association of nations must be formed under specific covenants for the purpose of affording mutual guarantees of political independence and territorial integrity to great and small states alike.

Freedom means the supremacy of human rights everywhere. Our support goes to those who struggle to gain those rights or keep them. Our strength is our unity of purpose.
Franklin D. Roosevelt, The Four Freedoms

I, Harry S. Truman, President of the United States of America, do hereby proclaim and make public the said Charter of the United Nations, with the Statute of the International Court of Justice annexed thereto...
Proclamation of United Nations Charter

I have today issued an Executive Order directing the use of troops under Federal authority to aid in the execution of Federal law at Little Rock, Arkansas.
Dwight D. Eisenhower, Address to the Nation on the Little Rock School Desegregation Case

Next week I shall ask the Congress of the United States to act, to make a commitment it has not fully made in this century to the proposition that race has no place in American life or law.
John F. Kennedy, Report to the American People on Civil Rights

What happened in Selma is part of a far larger movement which reaches into every section and State of America. It is the effort of American Negroes to secure for themselves the full blessings of American life. Their cause must be our cause too. Because it's not just Negroes, but really it's all of us, who must overcome the crippling legacy of bigotry and injustice. And we shall overcome.
Lyndon Baines Johnson, Address to Congress on the Voting Rights Act

Because we are free we can never be indifferent to the fate of freedom elsewhere. Our moral sense dictates a clearcut preference for these societies which share with us an abiding respect for individual human rights.

With today's signing of the landmark Americans With Disabilities Act, every man, woman and child with a disability can now pass through once closed doors, into a bright new era of equality, independence and freedom.
George Herbert Walker Bush, Americans With Disabilities Act Signing Ceremony

No President has ever done more for human rights than I have.
George W. Bush, Interview with The New Yorker magazine
Braidy® in Pre-K Master Class: Story Language for Play, SEL, & Problem-Solving, a virtual workshop via Zoom Meeting! For the first time ever, we are offering a "Narrative Development with Braidy®" workshop SPECIFICALLY for people working with Pre-School Aged children. If you are a teacher, SLP, specialist, or daycare provider in a public or private Pre-K classroom, a Home-Based Child Care, an Early Childhood Center, a Head Start/Early Head Start, or a YMCA Preschool, this workshop is for YOU! Braidy® gives you a concrete way to help every child to tell their own story. The ability to communicate "what happened" and share their feelings, thoughts, and plans impacts children's social, emotional, and academic development, their access to equitable education, and their overall well-being. In Pre-K, exposure to Story Language is critical, since, as the grades progress, story language is embedded in all areas of speaking, listening, reading, and writing. Your modeling of narrative language within play scenarios, routines, and transitions provides a foundation for literacy, at the same time improving communication, thinking, the ability to see the big picture, executive functions, and perspective taking.
Date: Dates TBA
Location: Zoom Live Virtual Meeting
Included: Extensive E-Book with Templates, Progress Monitoring Forms, Lessons, and Activities; Certificate of Completion; 0.6 ASHA CEUs (if eligible; see below)
Cost: $249 per participant (contact firstname.lastname@example.org for discounted pricing for groups of 4+)
The Braidy the StoryBraid® Approach supports all areas of oral language development: Pragmatics, Sounds, Words and Vocabulary, Sentences and Questions, and provides a much-needed pathway to DISCOURSE (stories, information, and conversation).
Elements of story language also play an important role in developing the Social Emotional Learning core competencies as delineated by CASEL.org: self-awareness, self-management, social awareness, relationship skills, and responsible decision-making. Modeling story language with Braidy® during symbolic play, role-playing, and pretend play helps children take perspective, increases literate language, and builds vocabulary. Many children are coming to PRE-K with diagnosed or undiagnosed learning difficulties as well as having experienced trauma or ACEs, which can diminish concentration, memory, and the ability to organize and express their thoughts. Using Braidy® to talk about feelings, thoughts, and plans is integral to addressing these difficulties in real time in the classroom as issues arise. Braidy® assists young children in working toward understanding their own behaviors and resolving conflicts with peers. The presenters will demonstrate using Braidy® with children’s literature selections, dramatic play, centers, routines, transitions, circle time, setting expectations, problem-solving, and perspective-taking. Add this MASTER CLASS to your shopping cart and check out. You'll receive a confirmation receipt by email prompting you to register for the Zoom classes. Then you will receive a Zoom Link by separate email. SAVE that email for joining the classes later. All registrants will have access to the RECORDED VERSION of this Master Class until November 30, 2022. If you watch the recording, you will be eligible for a Certificate of Completion. You will be eligible for the 0.6 ASHA CEUs only when participating LIVE in the class (see ASHA CEU information below). Participants will be able to… Learn directly from the experts! Maryellen is the Founder and President of MindWing Concepts. Her 50-year professional career includes school-based SLP, Assistant Professor, Diagnostician, and Coordinator of Intervention Curriculum and Professional Development. 
She created the Story Grammar Marker and was awarded two U.S. patents. In 2011, Moreau received the Boise Peace Quilt Project Award for her work with children in the area of conflict resolution and social communication. In 2014, she received the Alice H. Garside Lifetime Achievement Award from the International Dyslexia Association, Massachusetts Branch (MABIDA) for exemplary leadership, service, or achievement in the area of helping children with dyslexia and language learning disabilities. Moreau is an internationally recognized presenter. Linda received her master's degree in Speech-Language Pathology in 1981 from the University of Maryland. In 2014 she received her CAGS designation from American International College as a Reading Specialist. Linda has practiced in many settings, including hospitals and public and private schools. She is the former Principal of the Curtis Blake Day School of the Children's Study Home, for students with dyslexia/LLD. Linda has presented at the Massachusetts Speech-Language-Hearing Association and the American Speech-Language-Hearing conventions. She also presented on the topic of "Narratives" at a learning disabilities conference in the United Arab Emirates. In her current role, Linda uses her 25+ years of experience with Story Grammar Marker®, ThemeMaker®, and MindWing's other methodologies as a trainer, presenter, and co-author on new publications. To earn ASHA CEU credit, you will need to provide your ASHA number and related information, be present, participative, and attentive for the entire course, and complete a course survey. A Certificate of Completion will be provided. ASHA CE Provider approval and use of the Brand Block does not imply endorsement of course content, specific products, or clinical procedures. Thank You! ASHA CEUs will be provided by Community Rehab Associates, Inc. (http://cratherapy.com/). We are very appreciative of our friends at Community Rehab Associates, Inc. 
for collaborating with us to provide our participants with ASHA CEUs. Statement of Learning: the Braidy® in Pre-K Master Class will be provided online, LIVE, by accessing a Zoom meeting link. The course will be recorded for future use by MindWing Concepts, Inc. The course content will include lectures, video(s), case study reviews, break-out group activities, group discussions, and question-and-answer sessions. Attendees' microphones will be muted during the workshop. Questions may be posed by listeners by "raising their hand" (live) or using the "chat" feature on Zoom. Questions posed using the "chat" feature will be shared by the moderator with the presenter and addressed. These tools will be used to assess the degree to which course participants achieved the course's/session's learning outcomes.
Owning a thermostat is not enough to save energy. Using it the right way is equally important. Most newbies are unaware of the proper usage of thermostats and fail to benefit from their various features. In order to make the thermostat save energy, beginners need to know some of the basic facts of running a programmable thermostat. Here's our guide to programmable thermostats for the newbie:
•You cannot buy just any thermostat based on its looks or even its features (that comes later). The first thing you need to do is make sure it is compatible with your central heating and air conditioning system.
•Programmable, electronic, and manual are different names for the same kind of thermostat.
•Make sure you read the manual provided with the thermostat word for word. It will get you on the right track immediately.
•Avoid using the override feature in your thermostat as much as you can. Thermostats are made to control the temperature of your HVAC system; the override feature defeats that purpose and reduces energy savings.
•The most commonly recommended thermostat settings are:
*Winter: 68 degrees Fahrenheit when you're at home and 5-8 degrees lower when you are away.
*Summer: 78 degrees Fahrenheit when you're at home and 5-8 degrees higher when you are away.
•Never skip changing the temperature of your HVAC system when you are away, even if it is only for 2 hours. This is when the thermostat cuts back on the energy bills.
•Make use of the scheduling feature you have in your thermostat. Set the temperature and time once and the thermostat will take care of it every day.
•If you want to change the temperature during the day, schedule the change a little ahead of time. For instance, if you go to bed at 10:00 every day and sleep more comfortably in a slightly cooler room, then schedule the change in temperature 15 minutes before you sleep.
•When going on vacation, use the vacation feature instead of turning the entire system off. Recommended temperatures are 55 degrees Fahrenheit during winter and 85 degrees Fahrenheit during summer.
Most of the time, heating and cooling problems in your house are a result of improper usage of the thermostat. Make sure you follow the aforementioned tips to use the thermostat to its best potential.
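The scheduling advice above can be sketched in code. This is a hypothetical illustration, not firmware for any real thermostat; the wake/away/home times are made up, while the 68°F winter setpoint, the 5-8 degree away setback, and the 15-minute pre-shift before bedtime follow the guide's recommendations.

```python
from datetime import time

HOME_WINTER_F = 68   # recommended occupied winter setpoint (from the guide)
AWAY_SETBACK_F = 7   # guide says 5-8 degrees lower when away; 7 chosen here

def setpoint(now, schedule):
    """Return the target temperature for the current time of day.

    `schedule` is a list of (start_time, temp_F) pairs sorted by time;
    the last entry whose start_time is <= now wins, and before the first
    entry we wrap around to the final (overnight) setting.
    """
    current = schedule[-1][1]  # overnight default until the first entry fires
    for start, temp in schedule:
        if start <= now:
            current = temp
    return current

# A sample weekday: note the 10:00 pm bedtime change is pre-shifted
# to 9:45 pm, as the guide suggests.
schedule = [
    (time(6, 30), HOME_WINTER_F),                   # wake up
    (time(8, 0), HOME_WINTER_F - AWAY_SETBACK_F),   # away at work
    (time(17, 30), HOME_WINTER_F),                  # back home
    (time(21, 45), HOME_WINTER_F - 3),              # slightly cooler for sleep
]

print(setpoint(time(12, 0), schedule))  # midday, away -> 61
```

The wrap-around default means the cooler sleep setting carries through the small hours until the wake-up entry fires, which matches how a real programmable thermostat holds its last scheduled setpoint.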
By Eric Berger Boeing's new spacecraft may look a lot like an Apollo-era capsule, but it nonetheless represents the future of spaceflight. On Monday, Boeing showcased its CST-100 capsule for the first time, offering a glimpse of a spacecraft that could begin flying NASA astronauts into orbit as early as 2017. Developed and designed in Houston by a workforce that, by and large, previously worked on NASA's space shuttle program, the spacecraft can carry up to seven astronauts. Former astronaut Chris Ferguson, who two years ago commanded the final space shuttle mission and then left NASA to become Boeing's director of crew and mission operations, showed off the vehicle to reporters.
Boeing's reusable spacecraft:
- Named the CST-100 - Crew Space Transportation - with the 100 standing for 100 km, the height of the boundary between Earth's atmosphere and space
- Developed and designed in Houston
- Carries up to seven astronauts
- Would launch from Kennedy Space Center in Florida and return to a landing site in the western United States on the ground, rather than into the ocean
"If you can't fly 'em, you might as well help build them," Ferguson explained. On Monday, several astronauts clambered into and out of a mock-up of the CST-100. Inside, it's a sardine-like fit for astronauts, with the commander and pilot lying down with their backs to the ground. Still, it's transportation into space. "It seems like a good vehicle," said Dr. Serena Auñón, a NASA astronaut who has yet to fly into space. "And it's American." Presently the space agency relies upon Russia and its Soyuz spacecraft to fly to the Space Station. 
Despite its retro design, Boeing's spacecraft represents the future because it and two other vehicles, NASA officials hope, will offer an alternative to Russian transport and free the space agency from focusing on transporting astronauts to and from low-Earth orbit. The CST-100, along with SpaceX's Dragon capsule and Sierra Nevada's space plane, is being funded in large part by NASA's commercial crew program. Since 2010, Boeing has received about $600 million from NASA to develop the spacecraft, which will be launched by commercially available rockets. With private providers likely beginning to fly astronauts to orbit by 2017, NASA can focus on developing vehicles to send astronauts deeper into the solar system, to the moon and beyond. "I'm looking forward to the days when I won't need a passport or visa to greet the crew when they return home from space," said Kathy Lueders, deputy manager for NASA's commercial crew program. The company plans a test flight in 2016, said John Mulholland, who oversees the CST-100 program for Boeing. The company could begin flying astronauts in 2017. The spacecraft will launch from Kennedy Space Center in Florida and return to a landing site in the western United States. Protected by parachutes and airbags, the spacecraft is designed to make a landing on the ground, rather than into the ocean. Mulholland said the CST-100 is being designed to be reusable. The company envisions providing transport into orbit not only to the International Space Station, for NASA, but also to other destinations, such as a space hotel planned by Bigelow Aerospace. The first parts of the test spacecraft will be delivered later this year. "It's really exciting to see this project coming together," Mulholland said.
The TOEFL test is the most widely respected English-language test in the world, recognized by more than 8,500 colleges, universities and agencies in more than 130 countries, including Australia, Canada, the U.K. and the United States. Wherever you want to study, the TOEFL test can help you get there. The TOEFL test, administered via the Internet, is an important part of your journey to study in an English-speaking country. In addition to the test, the ETS TOEFL Program provides tools and guides to help you prepare for the test and improve your English-language skills. The TOEFL test measures your ability to use and understand English at the university level, and it evaluates how well you combine your listening, reading, speaking and writing skills to perform academic tasks. There are two formats for the TOEFL test; the format you take depends on the location of your test center. The TOEFL test has more test dates (more than 50 per year) and locations (4,500 test centers in 165 countries) than any other English-language test in the world. You can retake the test as many times as you wish. The cost of the test can range from US$160 to US$250 and varies between countries. For information on registration, fees, test dates, locations and formats, visit the ETS TOEFL website. More than 27 million people from all over the world have taken the TOEFL test to demonstrate their English-language proficiency. The average English skill level ranges between Intermediate and Advanced.
Plumerias (commonly called frangipanis) are considered by many to be the most beautiful and fragrant trees in the plant world. They are all natives of the New World tropics. The word plumeria connotes perfume. The genus consists of 7 species and over 200 named varieties. From their native habitat in the American tropics, they have been introduced to all tropical areas of the world with great success. Plumeria flowers are what most leis are made of in Hawaii. Their growth habits, shape and leaf and floral beauty are unrivaled among trees. Depending on the species, their height may average from 12-15 feet up to 25-30 feet. There are also a few dwarf varieties which average only 6-8 feet. In general, the taller growing varieties require more sun and bloom fewer months of the year. Across the 200+ named varieties, the fragrances of plumeria flowers can range from coconut to jasmine, including citrus, rose, honeysuckle, raspberry, spice, apricot and peach…or even none at all. Flower color varies from pure white to deep red, with yellows, golds, oranges, roses, pinks and all combinations in between. In the collector's world, the most sought-after plumeria is a super dwarf, highly spiced and with a variegated pink and orange bloom. Plumerias are very easy to grow, with very few insect or disease problems. In tropical areas, such as the Yucatan, they can be planted permanently outside. They are cold hardy to 38°F, or zone 10. Below this, they will freeze and rot. They prefer full sun, but will tolerate moderate shade, and are willing to be understory trees. Plumerias are also tolerant of many soil types, including alkaline, coral-based soils, and they can tolerate highly mineral water. They will not, however, grow on the beach itself; in coastal environments, they require soil amendments during planting and protection from sea winds. One of the unique aspects of the plumeria is that it likes to go dormant for at least several weeks each year. 
In tropical climates, this usually occurs in mid to late January, when the leaves develop a yellow fungus, after which they drop. There is no need to spray the leaves; this is part of their cycle. The plumeria's need for dormancy makes it a very easy tropical plant to grow in colder climates. When night temperatures dip to 40 degrees F, simply defoliate the plant and place it in a temperature-protected space for the next several months. Garages work wonderfully for this. The plant can be either potted or bare-rooted. Finally, plumerias are wonderful to share with gardening friends. Simply cut off a non-blooming branch and defoliate it (note: the sap is sticky). Let the cut end dry a few days, and then plant or share! Diana L. Harris TCLP, TMGA, NTMN From the Gardens of La Sirena Condominios, Half Moon Bay, Akumal, North Yucatan, Mexico
Providing nourishment is an important mission. The information that follows provides key insights and facts about cereal, its ingredients and the roles they play in contributing to overall health. Ready-to-eat cereals account for a relatively small amount of a child's daily sugar intake. On average, cereals – including sweetened cereals – provide less than 4 percent of children's daily sugar intake [6]. Some cereals are low in sugar, and some are sweetened. Let's look at a couple of cereals side-by-side. Both cereals are lower in calories than most other breakfast options. Both are low in fat. Both deliver key vitamins and minerals. Both have at least 9 grams of whole grain per serving. Both products are good breakfast choices from a calorie and nutrition standpoint. Eating cereal, including sweetened cereal, is also associated with improved nutrient intake for children [54, 55]. And regardless of sweetness level, children who eat cereal have healthier body weights than those who don't eat cereal [54, 55]. Since 2007, we've lowered sugar levels in our Big G kid cereals by more than 14 percent, on average. In 2009, we strengthened our commitment by pledging to reduce all of our cereals advertised to children under 12 to single-digit grams of sugar per serving. Today, all of our Big G kid cereals are at 10 grams of sugar or less per serving. And we've reduced sugar in many of our other cereals as well. General Mills strives to be the health leader in every food category in which we compete – and we're committed to continuing to lead the cereal category. Our research teams are working hard to trim sugar in our cereals while maintaining great taste. All Big G cereals advertised to kids under age 12 now have 10 grams of sugar or less per serving.
The Go-Between 'The Go-Between' is a short story written by Ali Smith in 2009. It was written for a collection of short stories celebrating the 60th anniversary of the United Nations' adoption of the Universal Declaration of Human Rights, and is inspired by Article 13, which stands for the right to freedom of movement. In the story we read about a 33-year-old man whose name is not revealed. He is a former microbiologist and has worked in a university. In the text we follow this man, who gives the reader a direct insight into what it's like to be an African refugee trying to cross the border between Morocco and Spain. The narrator has tried several times to cross the border illegally without any luck; only a part of his finger and ear made it to Europe, due to the tall fence with barbed wire at the border. The narrator has for unknown reasons left his homeland Cameroon and writes in great detail about how he tried to enter Europe and how the government was treating him. The narrator lives in a Spanish town in Morocco in a small hotel room with 3 other people, where they plan their next attempt to cross the border. The narrator works as a go-between guide, referring to the title, for these people in the hotel. He is a guide because he speaks a lot of different languages. He contacts doctors in Europe who may be able to help those who need it when they are past the border. The story gives a good insight into how dreadfully the refugees from North Africa are being treated and how they are lacking human rights. The text 'The Go-Between' is indeed a story that is critical of society, and it raises some questions about human rights. The themes I found in the story are: being invisible, Africa vs. Europe, limited opportunities, refugees and human rights. 
The narrator of the story is a well-educated man. He is a microbiologist and has worked in a university: "I was a microbiologist, before. I worked in the university." (p. 3, ll. 32-33) He speaks a lot of different languages: "The French doctors can be Italian, Spanish, French, English, for instance. I speak these, and also some others." (p. 3, ll. 31-32) Even with all these qualifications it is not possible for the narrator to enter Europe. He is simply limited because of his origin and the color of his skin. The narrator feels that he and the other refugees are invisible and not wanted; it's almost as if he doesn't exist: "A flower will be planted for every single person in Tangier. But not us. Not me. I'm not here… I'm speaking to you and I'm not really here." (p. 5, ll. 116-118) The refugees wish so desperately to enter Europe that the narrator describes it as the Spanish blindness. From the coast of Morocco you are able to see the lights from Spain and the Spanish border, and all the refugees can think of is how much they want to leave Morocco and enter Europe. The refugees also pay all their money to a Network that promises them a boat, but the Network only takes advantage of the desperate refugees, so the boat never arrives: "All the men in this building suffer from it, Spanish Blindness. All you can see is Spain. All you can think is Spain tonight, Spain tonight. They have paid all their money to the Network, and the Network has promised a boat, maybe tonight. This boat never comes." (p. 4, ll. 73-75) The narrator claims that he does not have this Spanish blindness anymore; that belongs to the past, and he actually says that he wishes he could go back, but that's not a possibility, he has to move forward: "My blindness is for what's behind me. I would like to go back. But I have to go forward. I can't go back. Back's not possible for me." (p. 4, ll. 76-77) The narrator does not at any point tell exactly why he can't go back, but it's clearly because he's in danger if he returns to his homeland: "Nobody leaves home unless home is the mouth of a shark." (p. 4, l. 80) Another thing in the story I find quite ironic is that prostitutes have a better opportunity to get into Europe than a well-educated person like the narrator: "Girls get to Spain a lot easier. (If they're not pregnant and don't have TB.)" (p. 3, ll. 66-67) Is it easier to enter Europe as a girl if you are not pregnant or don't have tuberculosis? I interpret that as it being easier as a prostitute. One thing is sure: human rights have nothing to do with this, and it's actually a really depressing perspective. Human rights in general are a lacking subject in the text, especially when it comes to the government's treatment of refugees: "They were meant to process us, even if we didn't have the papers. They were meant to give us new expulsion papers… What they did instead was they chased us with dogs, sticks, electric shock sticks and guns…" (p. 2, ll. 21-24) Ali Smith uses a first-person narrator in 'The Go-Between'. This makes the story more intense and gives you the feeling that you are sitting in front of the narrator. Smith also uses a lot of short sentences, which can help make the story more exciting. The short sentences make the story less emotional despite its very serious topic. Smith also uses humor and irony to make the subject less depressing and easier to read, even though there is absolutely nothing to laugh about: "I landed in no man's land! Wise Professor Me." (p. 4, ll. 87-88) "The Cameroon swimmer. Philosophical Professor Me. Border Crosser Extraordinaire." (p. 5, ll. 123-124) There are a lot of personal experiences in the story, but Ali Smith does not use any names. I believe that Smith has chosen to do so because the persons in the story represent more than themselves. 
Ali Smith's intention with the short story is to highlight human rights and show the rest of the world that some people are invisible and 'don't exist'.
Another in the occasional series on "Far-Flung Lost London" … East Molesey was first recorded in an Anglo-Saxon charter in the seventh century as Muleseg, from the Old English personal name Mul and eg, meaning either an island or a peninsula (in a loop of a river). It was later recorded in the Norman "Domesday Book" of 1086 as Molesham, and as being held by Odard Balastarius, Richard FitzGilbert and Roger d'Abernon. The settlement continued to grow in the later Medieval and post-Medieval periods, and more markedly in the Victorian, after the arrival of the railway in 1849. East Molesey is now essentially a contiguous suburb of London, although technically it lies in the Borough of Elmbridge in the County of Surrey (south of the Thames). A number of late Medieval and post-Medieval buildings may still be seen here, including the mid fifteenth-century "Bell" public house (formerly known as the "Crooked House"), and the sixteenth-century "Quillets Royal".
Church of St Mary
The old parish church, at least for part of its history dedicated to St Lawrence, was probably originally built in timber in the seventh century (*), and subsequently rebuilt in stone in the twelfth, and repaired in the fourteenth, thereafter standing until the nineteenth, when it was damaged in a fire and had to be demolished. The present, new church, dedicated to St Mary, was built on the site of the old one in 1865.
(*) Possibly by Benedictine monks based at Chertsey Abbey (itself built in 666).
Simply put, solar energy is the most abundant energy source on Earth. About 173,000 terawatts of solar energy hit Earth at any given time, more than 10,000 times the world’s total energy needs. Solar panels are the tools that can transform the sun’s energy into electricity that our homes can use; yet, strictly speaking, solar panels themselves are not electrical devices. By capturing the sun’s energy and converting it into electricity for your home or business, solar energy is a key solution to combating the current climate crisis and reducing our dependence on fossil fuels. How does solar energy work? Our sun is a natural nuclear reactor. It emits small packets of energy known as photons, which travel 93 million miles from the sun to Earth in approximately 8 minutes 20 seconds. Every hour, enough photons hit our planet to generate enough solar energy to theoretically meet global energy needs for an entire year. Currently, PV power accounts for only five-tenths of one percent of the energy consumed in the United States. But solar technology is improving and the cost of solar power is falling rapidly, so our ability to harness the abundance of energy from the sun is increasing. In 2017, the International Energy Agency reported that solar power had become the fastest-growing energy source in the world, marking the first time that solar power growth outpaced that of all other fuels. Since then, solar energy has continued to grow and break records around the world. How does the weather affect solar energy? Weather conditions can affect the amount of electricity a solar system produces, but not exactly in the way you might think. The perfect conditions for producing solar power include, of course, a clear, sunny day. But like most electronics, solar panels (such as Canadian Solar panels) are more efficient in cold weather than in hot weather, which lets a panel generate more electricity in the same period.
As the temperature increases, the panel produces less voltage and therefore less electricity. But while solar panels are more efficient in cold climates, they don’t necessarily produce more electricity in the winter than in the summer. The sunniest weather often occurs in the hottest months of summer. In addition to fewer clouds, the sun usually shines for most of the day. So while your panels may be less efficient in hot climates, they will likely produce more electricity in the summer than in the winter. How do solar panels work? Photons hit a solar cell and knock electrons free from their atoms. If conductors are attached to a cell’s positive and negative sides, an electrical circuit is formed, and when electrons travel through that circuit, they produce electricity. So, while solar panels generate electricity, they are not electrical in the sense of operating on electricity. Multiple cells make up a solar panel, and multiple panels (modules) can be connected to form a solar array. The more panels you can install, the more energy you can expect to produce. What are solar panels made of? Solar photovoltaic (PV) panels are made up of many solar cells. Solar cells are composed of silicon, which acts as a semiconductor. They are created with a negative layer and a positive layer, which together create an electric field, much as in a battery. How do solar panels generate electricity? Photovoltaic solar panels generate direct current (DC) electricity. With DC electricity, electrons flow in one direction around a circuit. Think of a battery powering a light bulb: electrons go from the negative side of the battery, through the lamp, and back to the positive side of the battery. With AC (alternating current) electricity, electrons are alternately pushed and pulled, periodically reversing direction, like the pistons of a car engine. Generators produce AC electricity as a coil of wire is spun near a magnet.
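The temperature effect described above is usually expressed as a linear power coefficient relative to the panel’s rating at Standard Test Conditions (25 °C). As a rough sketch, assuming a coefficient of −0.4 %/°C, a typical value for crystalline silicon rather than a figure from this article:

```python
def panel_power(rated_watts, cell_temp_c, temp_coeff_per_c=-0.004,
                stc_temp_c=25.0):
    """Estimate panel output at a given cell temperature.

    Output changes linearly from the Standard Test Condition rating
    (at 25 degrees C) by temp_coeff_per_c per degree of difference.
    """
    delta = cell_temp_c - stc_temp_c
    return rated_watts * (1 + temp_coeff_per_c * delta)

# A 300 W panel on a hot roof (cells at 55 C) vs a cold, clear day (5 C):
hot = panel_power(300, 55)   # 300 * (1 - 0.004 * 30) = 264 W
cold = panel_power(300, 5)   # 300 * (1 + 0.004 * 20) = 324 W
```

This illustrates why a cold panel in full sun outperforms the same panel on a hot day, even though total summer production is usually higher thanks to longer, clearer days.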
Many different power sources can “turn the handle” on such a generator: gas or diesel fuel, hydroelectricity, nuclear, coal, wind, or solar. AC electricity was chosen for the US power grid, primarily because it is less expensive to transmit over long distances. However, solar panels create DC electricity. How do we get DC electricity onto the AC grid? We use an inverter. What does a solar inverter do? A solar inverter takes DC electricity from the solar array and uses it to create AC electricity. The inverter can be thought of as the brain of the system. In addition to converting power from DC to AC, inverters also provide ground fault protection and system statistics, including voltage and current on the AC and DC circuits, power output, and maximum power point tracking. Central inverters have dominated the solar industry from the beginning. The introduction of microinverters is one of the biggest technological changes in the photovoltaic industry. Microinverters are optimized for each individual solar panel, not for an entire solar system as central inverters are. This allows each solar panel to perform at its full potential: with a central inverter, a problem with one solar panel (perhaps it is shaded or dirty) can reduce the performance of the entire solar array. How does a solar panel system work? Here is an example of how a residential solar energy system operates. First, sunlight is absorbed by solar panels mounted on the roof. The panels convert the energy into DC electricity, which flows to an inverter. The inverter converts the DC electricity to AC, which you can then use to power your home. It’s wonderfully simple and clean, and it’s getting more efficient and affordable by the day. So although solar panels produce energy, we can safely say that solar panels are not themselves electrical.
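The panels-to-inverter-to-home chain just described can be sketched as a simple energy-flow calculation. This is only an illustration: the 96% inverter efficiency is an assumed typical value, not a figure from the article.

```python
def ac_energy_kwh(dc_kwh, inverter_efficiency=0.96):
    """AC energy available to the home after the inverter converts
    the array's DC output, minus conversion losses."""
    return dc_kwh * inverter_efficiency

# A rooftop array producing 20 kWh of DC over a sunny day:
usable = ac_energy_kwh(20.0)
print(f"{usable:.1f} kWh AC")  # 19.2 kWh AC
```

Real inverters quote efficiency curves rather than a single number, but a flat percentage is a reasonable first approximation for back-of-envelope sizing.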
During peak hours of the day, a typical grid-connected photovoltaic system often produces more power than the customer needs, and carries that power out over its cabling, so excess power is returned to the grid for use elsewhere. A customer who is eligible for net metering can receive credits for excess energy produced and can use those credits to draw from the grid at night or on cloudy days. A net meter registers the energy sent to the grid against the energy received from it.
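The net-metering arrangement above amounts to simple bookkeeping over a billing period. A minimal sketch, assuming a plain kWh-for-kWh credit scheme (actual tariffs vary by utility):

```python
def net_meter(generated_kwh, consumed_kwh):
    """Return (exported, imported) energy for a net-metered period.

    Excess generation flows to the grid as a credit; any shortfall is
    drawn from the grid, e.g. at night or on cloudy days.
    """
    net = generated_kwh - consumed_kwh
    exported = max(net, 0)
    imported = max(-net, 0)
    return exported, imported

# Midday surplus: generated 12 kWh, used 7 kWh -> 5 kWh exported
# Overnight: generated 0 kWh, used 6 kWh -> 6 kWh imported
day = net_meter(12, 7)
night = net_meter(0, 6)
```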
One of a medical student’s first encounters is with a deceased person, a cadaver. Because a thorough knowledge of human anatomy is fundamental to the practice of medicine, anatomy instruction and dissection of cadavers begin on day one. Medical schools, in recent decades, have been adequately supplied with bodies, partly because some people want to make a contribution to medical science. The fact that the schools bear the costs of disposal (burial or cremation) may also be an incentive. In 18th-century England, cadavers were difficult to obtain. Anatomists in London were limited by law to fewer than a dozen bodies a year – usually those of executed criminals. As a result, there were often pitched battles under the gallows between families and the anatomists for the body of a newly deceased. With no refrigeration, and embalming not yet developed – formaldehyde came into use in 1893 – there was strong demand for fresh bodies. Anatomists of that time earned respectable incomes by enrolling students for lecture classes and corpse dissection. The solution to the shortage of bodies was further criminal behavior, carried out by men then known as Resurrectionists. They kept a sharp eye out for funeral processions and awaited fresh burials. Working at night, for hard coin, they swiftly disinterred the dead and delivered them to an anatomist’s darkened rear door. However, bodies deteriorate at warm temperatures and quickly become “unsuitable.” Consequently, anatomists lectured and did their dissections in the winter months. Medical practice was mired in antiquity, with theories of disease and treatments dating back a millennium or more to ancient Greece. If illness didn’t kill one, treatment well might. Obsession with the four “humors” of the body – phlegm, blood, black bile and yellow bile – prevailed. One man, John Hunter (1728-93), the 10th child of a Scottish family, challenged his medical contemporaries and laid the foundations for modern surgery.
He dissected 2,000 bodies during a 12-year period. His lectures about human anatomy and his dissection rooms attracted more students than anyone else’s. By understanding normal anatomy, one can progress to the abnormal and begin to consider actual disease processes. Understanding disease becomes a step to rational treatments. Hunter documented placental circulation and the fact that it was independent of maternal circulation. He also examined fetuses and animal embryos: Accepted teaching was that every being was completely formed in the earliest embryonic stage and simply grew in size. Instead, the reality was an orderly, progressive development of limbs, organs and nervous system. Examining various species, he discovered commonalities of development, the early science of embryology. Increasingly, his dissections included plants, insects, animals, even several whales, and his massive collection of specimens formed the basis of the discipline of comparative anatomy. Rejecting much of the status quo, Hunter’s insights and discoveries are milestones of surgery. He recommended surgery only in the absolute necessity for it, and only if the surgeon would have it performed on himself in similar circumstances. The Royal College of Surgeons honored Hunter as “Founder of Scientific Surgery.” Read all about it in The Knife Man by Wendy Moore. www.alanfraserhouston.com. Dr. Fraser Houston is a retired emergency room physician who worked at area hospitals after moving to Southwest Colorado from New Hampshire in 1990.
SAFETY SOLUTIONS: My Back Hurts
National healthcare statistics: In the United States, back disorders account for over 24 percent of all occupational injuries and illnesses involving days away from work, according to the National Institute for Occupational Safety and Health’s (NIOSH) Worker Health data.
Personal impact: About 80 percent of back injuries are short in duration, and workers are able to get back to normal health. In the short term, they may experience pain and reduced functioning. For some, the pain and suffering is long-term. And for a small percentage of people, it is lifelong. For employees with long-term, disabling musculoskeletal injuries, lifetime earnings may drop significantly. These employees may also suffer a loss of independence and a diminished quality of life. Back pain is among mankind’s earliest and most enduring afflictions. It has been estimated that two-thirds of industrial workers, and more than half of all office workers, have suffered at least one back injury by age 65. About 85 percent of the patients one occupational health doctor sees for back problems have strained muscles in their "lumbar" region—the lower back. Lower back pain, he says, is usually set off by a specific movement at a specific moment in time. Lifting, falling or trying to catch or break the fall of an object are the most common actions that cause such an injury. At that instant, the person may feel a snap, a popping sensation, nothing at all or immediate agony. Being in a hurry is a major element in back injury cases, this occupational health expert has found. If a person would just take the time to get a forklift instead of trying to pick up a too-heavy object, or get the ladder instead of just reaching for something too high, a possible injury could probably be avoided. Understanding your spine can also help. Constructed of 24 connected segments of bone and cartilage called vertebrae, it provides structural stability for the body.
Spongy discs between the vertebrae cushion the bones while also bonding them together and providing the mobility that allows twisting, bending and flexing movements. Also holding the vertebrae together are muscles and ligaments. Within the bones and protected by them is the spinal cord, the control center of the nervous system. If the springy disc material between the bones of the spine loses some of its bounce—which can happen simply as part of the aging process—then the stress of some particular movement may cause the disc to bulge or even break, with spongy tissue spilling out. This "herniated" disc can press on an adjacent nerve, causing pain, numbness, tingling or painful muscle spasm. Conditioning exercise is also a part of good back pain prevention. Your goals are to improve flexibility of the back (swimming and walking are great for this) and to strengthen both back and stomach muscles, to provide proper back support. Here’s what doctors advise for those who do have an injury that results in acute back pain: Stop. Get into bed for the first terribly painful period. You may want to use ice to reduce swelling or heat to ease muscles. Anti-inflammatory medication or muscle relaxers given to you by your doctor will help muscle spasms, too. Add a board underneath a too-soft mattress. In from one to five days, you should be able to move again, although in easy ways. In fact, it’s important that you do begin to move at this point, to increase flexibility and strength. Allow discomfort and your good sense to tell you how far you should go. Here are some precautions that can help protect your back from injury: Follow the safe lifting practices we’ve stressed so often. Sit and stand upright without slouching. Minimize stress on the lower back by avoiding excess weight. Sleep on your back with a cushion under the knees, or on your side. Don’t maintain one position for a long time; take a break. Long-term recovery may depend on your physician’s help and on adhering to the preventive measures already mentioned. Doesn’t this emphasize how much smarter, and more comfortable, you’ll be by taking those preventive steps in the first place? (This article was based on information from the American Physical Therapy Association.)
This research briefing provides an overview of the current legislation and guidance relevant to the policing of protests, commentary on recent changes in this area, and a discussion of calls for further reform. This research briefing forms part of a series about police powers; the research briefing Introduction to police powers provides an overview of police powers and links to other relevant briefings. The right to peacefully protest is protected under the European Convention on Human Rights. Articles 10 and 11 of the Convention protect an individual’s right to freedom of expression and assembly. Together they safeguard the right to peaceful protest. However, these rights are not absolute, and the state can implement laws which restrict the right to protest in order to maintain public order. In the UK several pieces of legislation provide a framework for the policing of protests. The Public Order Act 1986 provides the police with powers to place restrictions on protests and, in some cases, prohibit those which threaten to cause serious disruption to public order. There is also an array of criminal offences which could apply to protestors, for example “aggravated trespass” or “obstruction of a highway”. In addition to the relevant criminal law there are civil remedies that can be used to disrupt protests. Provisions in the Protection from Harassment Act 1997 allow individuals and organisations to apply for civil injunctions to prevent protestors from demonstrating in a way which causes harm or harassment. Following criticism of the police’s approach at the G20 protests in 2009 there was reform of policing tactics at protests. The police were criticised for their use of force and for not facilitating constructive dialogue with the G20 protestors. Partly in response to this criticism, the current police guidance emphasises that officers should start from a presumption of peaceful protest.
It advocates the use of force only as a last resort and advises officers to maintain open communication with protestors before, during and after a demonstration. Recent protests have raised some questions about the current framework for policing demonstrations. Some have argued that police powers against protestors should be strengthened. Stronger legislation, it is argued, could enable the police to intervene more robustly against peaceful protests that cause lengthy and serious disruption. Others have questioned whether legislation which seeks to restrict harmful speech (harassment and offensive language) is strong enough. When and how the police should intervene against protestors who use offensive language has been controversial in the past. Many civil rights groups argue that the use of harassment legislation against protestors presents a risk to freedom of expression. Others argue that when protestors use offensive language, they can cause significant distress to their target and that civil and criminal action should be taken against them.
Commons Briefing papers SN05013. Author: Jennifer Brown