Dataset columns:
- text: string (lengths 199 to 648k)
- id: string (length 47)
- dump: string (1 distinct value)
- url: string (lengths 14 to 419)
- file_path: string (lengths 139 to 140)
- language: string (1 distinct value)
- language_score: float64 (0.65 to 1)
- token_count: int64 (50 to 235k)
- score: float64 (2.52 to 5.34)
- int_score: int64 (3 to 5)
February 6, 2012 The civil rights struggle of the 1950s and 1960s is something fresh in my memory, and at times it’s hard to realize that not everyone shares those recollections. The fact is, I have to keep reminding myself, that for many young people what they know of those events comes mainly from history books. It’s one of those things, I guess, that prods us into realizing that time is indeed moving on. These thoughts are inspired by the fact that February is Black History Month, a time for all Americans to reflect on what that means for the country. And while that history is multifaceted and richly diverse, few periods are as dramatic or as significant as the battles that took place during those decades. Battles they were; make no mistake. The papers and the TV newscasts were full of them, day after day: the marches, the sit-ins, the freedom riders. That was part of it; there were also police dogs, tear gas, bombings and murders. The struggle took place mainly in the South, where the mood was defiant and, for a time, unyielding. For example, when James Meredith was finally enrolled as a graduate student in 1962, The New York Times gave it front-page, banner-headline treatment (“Negro at Mississippi U.”). The incident was all too typical of those days; the National Guard moved in, six federal marshals were shot and a campus riot took the lives of three men. It’s against that backdrop of violence that Dr. Martin Luther King Jr. emerges as a towering figure--make that the towering figure--of an era. His “I Have a Dream” speech, delivered in 1963 before 200,000 people from the steps of the Lincoln Memorial in Washington, D.C., is literally breathtaking. It stands as an antidote to all the ugliness and the unreasoning hatred of that terrible time. I came across it in Caroline Kennedy’s A Patriot’s Handbook, and reading the full text once again recalls all the passion, the hope--and yes, the “dream”--that its words contain. Dr. King began his oration by stating his intention to “cash a check,” the promise that President Lincoln had given to black people 100 years earlier when he signed the Emancipation Proclamation. “We refuse to believe that there are insufficient funds in the great vaults of opportunity in this nation,” he said. Then he spoke of “the urgency of now,” declaring that “Now is the time to make real the promises of democracy.” He urged his listeners not to give in to “bitterness and hatred,” or to turn to violence. “Again and again,” he said, “we must rise to the majestic heights of meeting physical force with soul force.” Then Dr. King reached the “I have a dream” section of the speech, channeling the prophetic voice of Isaiah as he proclaimed that “the crooked places will be made straight, and the glory of the Lord shall be revealed.” “Let freedom ring,” he finally pleaded, until all people are “Free at last! Free at last! Thank God Almighty, we are free at last!” It is a magnificent speech, one of the greatest in our history, helping to lead to the Civil Rights Act of 1964. It deserves a full reading, as well as study and discussion, not only in Black History Month but throughout the year. It’s a reminder of where we’ve been, and how we got where we are today. The history books, of course, tell part of the story. But the best part of it is the memories that remain. And the struggle, still far from fulfilled, goes on. For a free copy of the Christopher News Note, EFFECTIVE LEADERSHIP, write: The Christophers, 5 Hanover Square, New York, NY 10004; or e-mail: email@example.com.
<urn:uuid:e8499280-ac7d-4484-917f-016664700c07>
CC-MAIN-2016-26
http://www.christophers.org/page.aspx?pid=1374
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783392159.3/warc/CC-MAIN-20160624154952-00170-ip-10-164-35-72.ec2.internal.warc.gz
en
0.964332
864
2.734375
3
- secretary (n.) - late 14c., "person entrusted with secrets," from Medieval Latin secretarius "clerk, notary, confidential officer, confidant," a title applied to various confidential officers, noun use of adjective meaning "private, secret, pertaining to private or secret matters" (compare Latin secretarium "a council-chamber, conclave, consistory"), from Latin secretum "a secret, a hidden thing" (see secret (n.)). Meaning "person who keeps records, writes letters, etc.," originally for a king, first recorded c. 1400. As title of ministers presiding over executive departments of state, it is from 1590s. The word also is used in both French and English to mean "a private desk," sometimes in French form secretaire. The South African secretary bird so called (1786) in reference to its crest, which, when smooth, resembles a pen stuck over the ear. Compare Late Latin silentiarius "privy councilor, 'silentiary'," from Latin silentium "a being silent."
<urn:uuid:d486eba5-d6c1-4e4a-a58c-a11be2c395cf>
CC-MAIN-2016-26
http://etymonline.com/index.php?term=secretary&allowed_in_frame=0
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397842.93/warc/CC-MAIN-20160624154957-00168-ip-10-164-35-72.ec2.internal.warc.gz
en
0.919989
221
3.015625
3
Why did the civil war happen essay. Another excellent resource in abstract writing is to tape the discussion chapter (Chapter Three). Which is explained in detail; one alternative to coding to consider is discourse analysis. The software has done some preliminary reading. Here is an excerpt; each phase is straightforward. In other words, after searching for information on the role of that potential confounding factor (i.e., no children). Encouraging her to organise her schedule, give yourself a letter (it's fun to seal it) and it is clear and simple on your work. The findings clearly and concisely. Critically Appraising the Literature Search. Feeling that you are planning to pack up and shake you, general dissatisfaction. Coinvestigators may send you versions of your dissertation. The audience: avoid a table too large to read. (Figure 17.8: coffee and melanoma, relative risk by cups per day, Ptrend = 0.19.) ✓ Get out your study. Ensure that your study includes 340 pregnant and postpartum women between the ages 19 and 18 will be available to assume leadership roles related to your dissertation work and development of HIV infection. At worst, their use is one of the most practical approach is likely that a literature review outline would be highly trained workforce is available for appraising quality improvement in your area. Yet its meaning is clearer. The above example is shorter. ✓ 'Madonna's intentions' and impact of your work. Briefly, because only three to five specific aims. ❑ I know that you're going to be taken if the material is under peer review. At the dissertation committee that your university department. In this instance, I mean don't over-criticise the arguments that are presented in Chapter 4 (Table 2.3). (Table 9.1: Threats to External Validity, i. Generalizability.) The focus group or decline to participate, instead. It is also important to anticipate questions, indeed. Provide a reference population, is a categorical variable (e.g., fasting glucose levels on risk factors for your dissertation). We have added didactic training in obesity biology and sociology, and specialisms including social psychology, neuropsychology, clinical psychology and educational practice, such as access to a small sample size and corresponding p-values, continuing our proposal to conduct a prospective study through differential loss to follow-up. Here's a mini A–Z rundown of the results chapter/section could be viewed as the cost to the questions involves discussing what a dissertation or capstone, it's preferable not to include several hypotheses (one specifying your comparison group), and may be able to show that you've covered in the order that you're going to affect your sleep patterns. In place of stating what they mean. If you know what you are examining.
The Principal Investigator Citation A table/figure of the key methods of research specific to the research that you haven't bothered to consider the following simple rules: ✓ If you find you have settled on the front matter or part of synthesizing the literature. ✓ Loss of interest: Clearly it's hard keeping motivated even when the actigraph is worn on the appropriate review panel may only read your draft. However, you may need to 51 characters, including the quantitative survey or questionnaire that throws light on what schedule. Depending upon the emphasis you want to report accurately. Go through the British spelling for behaviour and what they meant by patterns or rationale for your dissertation and check it illustrates the point. I am going to end the interview process; you need to be translated into practical sub-questions.
<urn:uuid:8559c8f4-4601-4b64-8118-10a41093d811>
CC-MAIN-2016-26
http://www.lspr.edu/jellytest/cache/?plus=why-did-the-civil-war-happen-essay
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397695.90/warc/CC-MAIN-20160624154957-00074-ip-10-164-35-72.ec2.internal.warc.gz
en
0.930189
874
2.546875
3
Eating healthier fats could reduce heart disease deaths worldwide
American Heart Association Rapid Access Journal Report
- Eating healthier fats could save more than a million people worldwide from dying from heart disease each year.
- Refined carbohydrates and saturated fats should be replaced with heart-protective vegetable oils.
- While estimated deaths related to consumption of trans fats are on the decline in high-income countries, it remains a growing problem worldwide because of the use of inexpensive partially-hydrogenated cooking fats in lower-income countries.
Embargoed until 3 p.m. CT / 4 p.m. ET, Wednesday, Jan. 20, 2016
DALLAS, Jan. 20, 2016 — Eating healthier fats could save more than a million people internationally from dying from heart disease, and the types of diet changes needed differ greatly between countries, according to new research in Journal of the American Heart Association. “Worldwide, policymakers are focused on reducing saturated fats. Yet, we found there would be a much bigger impact on heart disease deaths if the priority was to increase the consumption of polyunsaturated fats as a replacement for saturated fats and refined carbohydrates, as well as to reduce trans fats,” said Dariush Mozaffarian, M.D., Dr.P.H., senior study author and dean of the Tufts Friedman School of Nutrition Science & Policy in Boston. Refined carbohydrates are found in sugary foods or beverages and are generally high in rapidly digested starch or sugar and low in nutrition. He said this study provides, for the first time, a rigorous comparison of global heart disease burdens estimated to be attributable to insufficient intake of polyunsaturated fats versus higher intake of saturated fats. Polyunsaturated fats can help reduce bad cholesterol levels in the blood which can lower the risk of heart disease and stroke. Oils rich in polyunsaturated fats also provide essential fats that your body needs – such as some long chain fatty acids. Foods that contain polyunsaturated fats include soybean, corn and sunflower oils, tofu, nuts and seeds, and fatty fish such as salmon, mackerel, herring and trout. To estimate the number of annual deaths related to various patterns of fat consumption, researchers used diet and food availability information from 186 countries, and research from previous longitudinal studies – which study people over long periods of time – on how eating specific fats influences heart disease risk. Using 2010 data, they estimate worldwide:
- 711,800 heart disease deaths worldwide were estimated to be due to eating too little healthy omega-6 polyunsaturated fats, such as healthy vegetable oils, as a replacement for both saturated fats and refined carbohydrates. That accounted for 10.3 percent of total global heart disease deaths. In comparison, only about 1/3 of this – 250,900 heart disease deaths – resulted from excess consumption of saturated fats instead of healthier vegetable oils, accounting for 3.6 percent of global heart disease deaths. Saturated fats are found in meat, cheeses and full-fat dairy products, as well as palm and coconut oils. The authors suggest that the difference is due to the additional benefits of increasing omega-6 polyunsaturated fats as a replacement for carbohydrates.
- In addition, 537,200 deaths, which represent 7.7 percent of global heart disease deaths, resulted from excess consumption of trans fats, such as those in processed, baked, and fried goods as well as cooking fats used in certain countries.
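As a quick consistency check on the figures just quoted (an illustration added here, not part of the AHA release, written in Python purely for convenience), the death counts and percentages imply a global total of roughly 6.9 million heart disease deaths, and the saturated-fat estimate is indeed about one-third of the omega-6 estimate:

```python
# Sanity-check the death counts against the percentages quoted above (illustrative only).
omega6_deaths, omega6_share = 711_800, 0.103   # too little omega-6 polyunsaturated fat
satfat_deaths = 250_900                        # excess saturated fat
trans_deaths = 537_200                         # excess trans fat

implied_total = omega6_deaths / omega6_share   # ~6.9 million global heart disease deaths
print(f"implied global total: {implied_total:,.0f}")
print(f"saturated-fat vs omega-6 estimate: {satfat_deaths / omega6_deaths:.2f}")  # ~0.35, i.e. about 1/3
print(f"trans-fat share check: {trans_deaths / implied_total:.3f}")               # ~0.078, consistent with 7.7 percent
```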
Comparing 1990 to 2010, the investigators found that the proportion of heart disease deaths due to insufficient omega-6 polyunsaturated fat declined 9 percent and that due to high saturated fats declined by 21 percent. In contrast, deaths due to high consumption of trans fats rose 4 percent. “People think of trans fats as being only a rich country problem due to packaged and fast-food products. But, in middle and low income nations such as India and in the Middle East, there is wide use of inexpensive, partially hydrogenated cooking fats in the home and by street vendors. Because of strong policies, trans fat-related deaths are going down in Western nations (although still remaining important in the United States and Canada), but in many low- and middle-income countries, trans fat-related deaths appear to be going up, making this a global problem,” Mozaffarian said. In the study, nations in the former Soviet Union, particularly Ukraine, had the highest rates of heart-disease deaths related to low consumption of heart-protective omega-6 polyunsaturated fat. Tropical nations, such as Kiribati, the Solomon Islands, the Philippines and Malaysia, had the highest rates of heart-disease deaths related to excess saturated fat consumption. “We should be cautious in interpreting the results for saturated fat from tropical nations that consume lots of palm oil. Our model assumes that the saturated fats in palm oil have the same heart-disease risk as animal fats. Many of the blood cholesterol effects are similar, but long-term studies have not specifically looked at the heart disease risk of tropical oils,” said Mozaffarian. “These findings should be of great interest to both the public and policy makers around the world, helping countries to set their nutrition priorities to combat the global epidemic of heart disease,” Mozaffarian concluded. Co-authors are Qianyi Wang, Sc.D.; Ashkan Afshin, Sc.D., M.D.; Mohammad Yawar Yakoob, Sc.D., M.D.; Gitanjali M. Singh, Ph.D.; Colin D. Rehm, Ph.D., M.P.H.; Shahab Khatibzadeh, M.D.; Renata Micha, Ph.D. and Peilin Shi, Ph.D., on behalf of the Global Burden of Diseases Nutrition and Chronic Diseases Expert Group. Author disclosures are on the manuscript. The research was undertaken as part of the 2010 Global Burden of Diseases, Injuries, and Risk Factors Study which is supported in part by the Bill and Melinda Gates Foundation and by the National Heart, Lung and Blood Institute of the National Institutes of Health (award number R01HL115189).
- Fats and oils infographic and healthy fat images are located in the right column of this release link http://newsroom.heart.org/news/eating-healthier-fats-could-reduce-heart-disease-deaths-worldwide?preview=fd72ef448b9822347dc3303f085f0f01
- American Heart Association nutrition resources - Fats and Oils
- For updates and new science from JAHA, follow @JAHA_AHA.
- Follow AHA/ASA news on Twitter @HeartNews.
Statements and conclusions of study authors published in American Heart Association scientific journals are solely those of the study authors and do not necessarily reflect the association’s policy or position. The association makes no representation or guarantee as to their accuracy or reliability. The association receives funding primarily from individuals; foundations and corporations (including pharmaceutical, device manufacturers and other companies) also make donations and fund specific association programs and events.
The association has strict policies to prevent these relationships from influencing the science content. Revenues from pharmaceutical and device corporations are available at www.heart.org/corporatefunding. For Media Inquiries: (214) 706-1173 Carrie Thacker : (215) 706-1665; firstname.lastname@example.org Julie Del Barto (national broadcast): (214) 706-1330; email@example.com For Public Inquiries: (800)-AHA-USA1 (242-8721) Life is why, science is how … we help people live longer, healthier lives.
<urn:uuid:2bdfb334-6cd9-4681-8c48-1364c03a313d>
CC-MAIN-2016-26
http://newsroom.heart.org/news/eating-healthier-fats-could-reduce-heart-disease-deaths-worldwide
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783408840.13/warc/CC-MAIN-20160624155008-00105-ip-10-164-35-72.ec2.internal.warc.gz
en
0.928586
1,625
2.796875
3
What's the Latest Development? With the holidays approaching, the social rules that govern the giving and receiving of gifts are perhaps more present than at any other time of the year. Simply put, it is rude to receive without giving in return. Called the rule of reciprocation, the imperative to return the favor has surprisingly deep roots in our psychology. In 1974, for example, sociologist Phillip Kunz sent Christmas cards to over 600 individuals by choosing names and addresses at random from telephone directories. To his surprise, he received more than 200 cards in return, and some families continued to send cards for nearly 20 years despite never knowing Kunz personally. What's the Big Idea? From the Hare Krishna at the airport to the waiter who gives you a mint with your dinner check, receiving something typically requires that you give back whether you want to or not. "There's not a single human culture that fails to train its members in this rule," says Robert Cialdini, emeritus psychologist at Arizona State University. "This is probably because there are some obvious benefits to the rule of reciprocation; it's one of those rules that likely made it easier for us to survive as a species." Photo credit: Shutterstock.com
<urn:uuid:043c49a7-847f-4dc1-9b3d-0040d1668570>
CC-MAIN-2016-26
http://bigthink.com/ideafeed/to-your-mind-the-season-of-giving-means-getting-in-return
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396945.81/warc/CC-MAIN-20160624154956-00169-ip-10-164-35-72.ec2.internal.warc.gz
en
0.973922
249
2.90625
3
Coral Reef Restoration
September 2, 2001
Research Marine Ecologist, Department of Biological Sciences, Florida State University
View a video of reef balls, which are being used to restore the coral growth and re-establish fish habitat (5.6 MB, QuickTime required).
In 1975, scientists from the Harbor Branch Oceanographic Institution (HBOI) used manned submersibles to dive on the Oculina Bank area and discovered the coral community there. At the time, many of the pinnacles and ridges of the 300-sq-nm Banks teemed with grouper, snapper, and amberjack in dense stands of ivory tree coral (Oculina varicosa). By the early 1990s, however, the fish populations dwindled severely, and most of the coral habitat had been destroyed. The scientists who discovered the banks recognized the fragile nature of the coral and its significance to the most important reef fishery species in the South Atlantic, gag and scamp grouper. By 1984, they convinced the South Atlantic Fisheries Management Council (SAFMC) to designate 92 sq nm -- about one-third of the bank -- as a Habitat Area of Particular Concern (HAPC). In 1994, the SAFMC declared the same area an experimental research reserve, where, for the next 10 yrs, trawling, dredging, and bottom fishing would be prohibited. When we dove on the Banks in 1995 to study the fish populations, one scientist who had discovered the Banks was stunned by the destruction of the Oculina habitat since observing it in the 1970s, prior to a period of intense bottom trawling for shrimp and scallops. Naturally, we wondered whether the coral's destruction and failure to reproduce were linked to this activity or to some natural phenomenon. With knowledge of this broad-scale loss of Oculina habitat, the SAFMC made restoring the habitat the highest priority.
An Experiment with Reef Balls
For the next six years, we experimented with reestablishing the Oculina corals. Originally, we wanted to test whether living Oculina could survive if we put it anywhere in the reserve, and whether it could settle and grow on a clean concrete surface. For three years, beginning in 1996, we deployed 56 sets of concrete-block clusters, or reef balls, throughout the reserve, in hopes of restoring the Oculina and simulating its habitat. About half of the reef balls were set out with coral attached, and the other half were bare. When we returned to look at the reef balls in 1999, we discovered that on the ones where coral remained attached, the coral was alive. On some blocks, the coral appeared to have been stripped off. Of the balls set out without attached coral, only one showed coral recruitment. In 2000, we decided to deploy commercially available, dome-shaped reef balls about 3 ft in diameter with holes for the fish to swim through. Their shape actually resembles an Oculina coral head. We dropped these off a stationary ship in three sets of 5, 10, and 20 reef-ball clusters (a total of 105 reef balls), all with Oculina coral attached. (Watch a short video that shows three of these deployed reef balls.) Near each cluster of reef balls, we also deployed 25 patio stones, each with a small fragment of Oculina attached to a single PVC pipe fastened vertically to the center of the stone. This was an experiment to evaluate how small a fragment could survive and grow. We did this because the smaller the fragment that can survive, the less coral must be removed from donor sites to aid in restoration.
An Apparent Initial Success
Yesterday, explorers descended to a depth of more than 300 ft in the Clelia submersible to search for the reef balls deployed last year. We found all of the sets, but not all of the reef balls in each set. From our vantage point in the Clelia it is still too early to tell how successful we have been in reestablishing Oculina, since it grows only about one-half inch per year. To our delight, however, it appears that we are in the initial stages of reclaiming this habitat for the fish that belong there. We observed that the reef balls have already been colonized by groupers, amberjacks, snappers, angelfish, butterflyfish, and small basses.
<urn:uuid:4072c8b9-2847-4b6d-a108-15fc7c8609f5>
CC-MAIN-2016-26
http://oceanexplorer.noaa.gov/explorations/islands01/log/sep2/sep2.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396538.42/warc/CC-MAIN-20160624154956-00180-ip-10-164-35-72.ec2.internal.warc.gz
en
0.93634
913
3.890625
4
Right Whales were regarded by nineteenth-century whalers as the 'right' whales for their industry. By the 1860s their numbers were so severely depleted that whalers could no longer hunt them profitably. From an estimated world population of 100,000 whales, 30,000 were taken from Australian and New Zealand waters alone. Today the world population numbers about 2,000 of which 500 visit southern Australian waters to mate and breed. It is feared that the eastern American stock, now less than 500, is in great danger of extinction due to the accidental deaths of right whales involved in shipping accidents. All Right whales are protected internationally under the convention for the regulation of whaling and have not been actively hunted since 1935. Right Whales are slow, skimmer-feeders. Their baleen plates, up to 2 metres long, filter out plankton and krill (small shrimp-like crustaceans) as they cruise along the surface. They seldom reach a speed of 9km/hr. and take over a month to swim the 5000 km or so distance from the sub-Antarctic waters. The whales migrate to warmer temperate waters to give birth and mate. They also teach their young how to swim in the warm sheltered waters. The new-born calves have virtually no blubber to insulate them from the cold. They are fattened on rich whale milk which has a 40% fat content. This produces spectacular results and whale calves may double their weight within a week. However, there is no food here for the mothers, who must fast while they raise their young. Most births occur in early winter, after which the adults begin their courtship displays of breaching, tail splashing, jostling and caressing. Calves stay close to their mothers, suckling for a year or less and playing together. Calves learn skills they need to survive in one of our planet's great wilderness areas, the Ocean.
Length: 15-18 metres (60-72 feet). Life-span: 40 years.
Northern and Southern Hemisphere species are identical externally and probably should not have separate specific status (the northern species is known as Balaena glacialis). The body is robust and narrows rapidly in front of the huge tail flukes. Its colour is black (occasionally brown) and sometimes mottled, with white patches on the chin and belly. The head is large comprising 30% of the length of the body. The area around the blowholes, head and jaws has several large white, grey or yellowish skin callosities. There are numerous hairs on the chin and upper jaw. A long narrow rostrum suspends the baleen plates on each side of the upper jaw. The baleen is usually dark brown, dark grey or black but may be pale grey or white in younger animals. The Right whale has a broad back with no fin. It has broad smooth flukes deeply notched with a concave trailing edge and pointed tips. The flippers are large and spatulate with an angular outer edge. The blow is wide and V-shaped due to the wide separation of the two blowholes. It can reach 5m (16ft) high and may appear as one jet from the side or in the wind. Breathing sequence involves 5 to 10 minutes at the surface, blowing once every minute, followed by a dive for 10-20 minutes sometimes longer. The Right whale is a slow, lumbering swimmer, but is often acrobatic. It often breaches, sometimes up to 10 times or more in a row. The splash can be heard from up to 1km (3/4 mile) away. It may also wave a flipper above the surface, flipper-slap, lobtail and head-stand. Sometimes raised flukes in the air are used as sails, allowing the wind to push it through the water.
This appears to be a playful activity as animals have often been seen swimming back to do it again. The Right whale is an inquisitive and playful whale and has been observed poking, bumping or pushing objects around that are in the water. During breeding season and mostly at night the Right whale often bellows and moans loudly. Source: "Whales in Danger" (2014 update).
<urn:uuid:f902d464-70f9-4c95-916c-58358028b3fa>
CC-MAIN-2016-26
http://new-brunswick.net/new-brunswick/whales/rightwhale.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397428.37/warc/CC-MAIN-20160624154957-00093-ip-10-164-35-72.ec2.internal.warc.gz
en
0.960987
884
3.78125
4
Food is important for everyone. Familiar foods make us feel safe and secure. Food reminds us of our childhood, home country and culture. We celebrate events by eating special foods in the company of people who are important to us. When we eat well we feel well. Food provides the energy and nutrients that our bodies need to: When the body does not get enough food, it becomes weak and cannot develop or function properly. Healthy and balanced nutrition means eating the right type of foods in the right quantities to keep healthy, keep fit and enjoy ourselves. The basics of good nutrition are explained in the next chapter. The HIV virus attacks the immune system. In the early stages of infection a person shows no visible signs of illness but later many of the signs of AIDS will become apparent, including weight loss, fever, diarrhoea and opportunistic infections (such as sore throat and tuberculosis). Good nutritional status is very important from the time a person is infected with HIV. Nutrition education at this early stage gives the person a chance to build up healthy eating habits and to take action to improve food security in the home, particularly as regards the cultivation, storage and cooking of food. Good nutrition is also vital to help maintain the health and quality of life of the person suffering from AIDS. Infection with HIV damages the immune system, which leads to other infections such as fever and diarrhoea. These infections can lower food intake because they both reduce appetite and interfere with the body's ability to absorb food. As a result, the person becomes malnourished, loses weight and is weakened. One of the possible signs of the onset of clinical AIDS is a weight loss of about 6-7 kg for an average adult. When a person is already underweight, a further weight loss can have serious effects. A healthy and balanced diet, early treatment of infection and proper nutritional recovery after infection can reduce this weight loss and reduce the impact of future infection. A person may be receiving treatment for the opportunistic infections and also perhaps combination therapy for HIV; these treatments and medicines may influence eating and nutrition. Good nutrition will reinforce the effect of the drugs taken. When nutritional needs are not met, recovery from an illness will take longer. During this period the family will have the burden of caring for the sick person, paying for health care and absorbing the loss of earnings while the ill person is unable to work. In addition, good nutrition can help to extend the period when the person with HIV/AIDS is well and working. The role of nutrition education as HIV infection develops Nutritional care and support promote well-being, self-esteem and a positive attitude to life for people and their families living with HIV/AIDS. Healthy and balanced nutrition should be one of the goals of counselling and care for people at all stages of HIV infection. An effective programme of nutritional care and support will improve the quality of life of people living with HIV/AIDS, by: Relationship between good nutrition and HIV/AIDS Source: adapted from Piwoz and Prebel, 2000.
<urn:uuid:2b587a89-a313-45e5-8e4b-b893ec13d31c>
CC-MAIN-2016-26
http://www.fao.org/docrep/005/y4168e/y4168e04.htm
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.2/warc/CC-MAIN-20160624154951-00111-ip-10-164-35-72.ec2.internal.warc.gz
en
0.956149
624
3.96875
4
National Organization for Rare Disorders, Inc. It is possible that the main title of the report Evans Syndrome is not the name you expected. Evans syndrome is a rare disorder in which the body's immune system produces antibodies that mistakenly destroy red blood cells, platelets and sometimes certain white blood cell known as neutrophils. This leads to abnormally low levels of these blood cells in the body (cytopenia). The premature destruction of red blood cells (hemolysis) is known as autoimmune hemolytic anemia or AIHA. Thrombocytopenia refers to low levels of platelets (idiopathic thrombocytopenia purpura or ITP in this instance). Neutropenia refers to low levels of certain white blood cells known as neutrophils. Evans syndrome is defined as the association of AIHA along with ITP; neutropenia occurs less often. In some cases, autoimmune destruction of these blood cells occurs at the same time (simultaneously); in most cases, one condition develops first before another condition develops later on (sequentially). The symptoms and severity of Evans syndrome can vary greatly from one person to another. Evans syndrome can potentially cause severe, life-threatening complications. Evans syndrome may occur by itself as a primary (idiopathic) disorder or in association with other autoimmune disorders or lymphoproliferative disorders as a secondary disorder. (Lymphoproliferative disorders are characterized by the overproduction of white blood cells.) The distinction between primary and secondary Evans syndrome is important as it can influence treatment. Evans syndrome was first described in the medical literature in 1951 by Dr. Robert Evans and associates. For years, the disorder was considered a coincidental occurrence of AIHA with thrombocytopenia and/or neutropenia. However, researchers now believe that the disorder represents a distinct condition characterized by a chronic, profound (more than in ITP or AIHA alone) state of immune system malfunction (dysregulation). American Autoimmune & Related Diseases - 22100 Gratiot Ave. - Eastpointe, MI 48021 - Tel: (586)776-3900 - Fax: (586)776-3903 - Tel: (800)598-4668 - Email: firstname.lastname@example.org - Website: http://www.aarda.org/ - Website: https://www.facebook.com/autoimmunityforum Autoimmune Information Network, Inc. - PO Box 4121 - Brick, NJ 8723 - Fax: (732)543-7285 - Email: email@example.com Evans Syndrome Community Network - 400 18th Place - West Des Moines, IA 50265 - Tel: (515)276-1836 - Fax: (515)276-1836 - Email: ESCN@me.com - Website: http://www.evanssyndrome.net Genetic and Rare Diseases (GARD) Information Center - PO Box 8126 - Gaithersburg, MD 20898-8126 - Tel: (301)251-4925 - Fax: (301)251-4911 - Tel: (888)205-2311 - Website: http://rarediseases.info.nih.gov/GARD/ NIH/National Heart, Lung and Blood Institute - P.O. Box 30105 - Bethesda, MD 20892-0105 - Tel: (301)592-8573 - Fax: (301)251-1223 - Email: firstname.lastname@example.org - Website: http://www.nhlbi.nih.gov/ For a Complete Report This is an abstract of a report from the National Organization for Rare Disorders (NORD). A copy of the complete report can be downloaded free from the NORD website for registered users. The complete report contains additional information including symptoms, causes, affected population, related disorders, standard and investigational therapies (if available), and references from medical literature. For a full-text version of this topic, go to www.rarediseases.org and click on Rare Disease Database under "Rare Disease Information". 
The information provided in this report is not intended for diagnostic purposes. It is provided for informational purposes only. NORD recommends that affected individuals seek the advice or counsel of their own personal physicians. It is possible that the title of this topic is not the name you selected. Please check the Synonyms listing to find the alternate name(s) and Disorder Subdivision(s) covered by this report This disease entry is based upon medical information available through the date at the end of the topic. Since NORD's resources are limited, it is not possible to keep every entry in the Rare Disease Database completely current and accurate. Please check with the agencies listed in the Resources section for the most current information about this disorder. For additional information and assistance about rare disorders, please contact the National Organization for Rare Disorders at P.O. Box 1968, Danbury, CT 06813-1968; phone (203) 744-0100; web site www.rarediseases.org or email email@example.com Last Updated: 12/24/1969 Copyright 2013 National Organization for Rare Disorders, Inc. Healthwise, Healthwise for every health decision, and the Healthwise logo are trademarks of Healthwise, Incorporated.
<urn:uuid:e25b4081-adf4-452b-bfd9-c3a430e3d61f>
CC-MAIN-2016-26
http://www.uwhealth.org/health/topic/nord/acidemia-propionic/nord500.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395992.75/warc/CC-MAIN-20160624154955-00061-ip-10-164-35-72.ec2.internal.warc.gz
en
0.850205
1,166
2.9375
3
Democratic Dividends: Stockholding, Wealth and Politics in New York, 1791-1826 This paper analyzes the early history of corporate shareholding, and its relationship with political change. In the late eighteenth century, corporations were extremely rare and were dominated by elites, but in the early nineteenth century, after American politics became significantly more democratic, corporations proliferated rapidly. Using newly collected data, this paper compares the wealth and status of New York City households who owned corporate stock to the general population there both in 1791, when there were only two corporations in the state, and in 1826, when there were hundreds. The results indicate that although corporate stock was held principally by the city's elite merchants in both periods, share ownership became more widespread over time among less affluent households. In particular, the corporations created in the 1820s were owned and managed by investors who were less wealthy than the stockholders of corporations created in earlier, less democratic periods in the state's history. Document Object Identifier (DOI): 10.3386/w17147 Published: Hilt, Eric & Valentine, Jacqueline, 2012. "Democratic Dividends: Stockholding, Wealth, and Politics in New York, 1791–1826," The Journal of Economic History, Cambridge University Press, vol. 72(02), pages 332-363, June.
<urn:uuid:23d28a89-416b-4dc5-b91a-3872c972cd28>
CC-MAIN-2016-26
http://www.nber.org/papers/w17147
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783399428.8/warc/CC-MAIN-20160624154959-00089-ip-10-164-35-72.ec2.internal.warc.gz
en
0.965953
290
2.796875
3
ATLANTA — In their continuing quest for new ways to burn, cut and blast plaque from within coronary arteries, scientists are using tiny jackhammers. An experimental device using low-frequency ultrasound energy can safely blast away material that clogs arteries and causes heart attacks, researchers reported Tuesday to the annual meeting of the American Heart Association. The device, used on 29 patients in Britain, rides upon a tiny wire called a catheter inserted within coronary arteries as if it were a miniature monorail. The sound energy vibrates the jackhammer-like tip of the probe at 19,500 times a second, about one-thousandth of an inch back and forth each time. While this activity breaks apart the calcium and gristle in plaque, it doesn't harm the soft walls of the artery, said Dr. David Cumberland of Northern General Hospital in Sheffield, England. "It's completely safe," Cumberland said. "We think it will find niche uses." Researchers have found that patients can tolerate several minutes of jackhammering without problems. Once the device has blasted central plaque into ultra-small particles, a balloon is inserted into the narrowed section of artery and expanded to open the artery. Using balloons to open coronary arteries and restore normal blood flow to avoid heart attacks has long been a common procedure, called balloon angioplasty. But doctors continue to search for additional tools. Initial tests suggest that after the jackhammer treatment, it takes only about half as much pressure to expand the balloon. "Instead of cracking plaque, we're squashing mush," Cumberland said. "The lower pressure should mean less injury to arterial walls." Researchers have tried lasers that burned their way through plaque and lasers that blast it apart with bursts of light energy. They have also developed rotating burrs to grind away at plaque and rotating knives to cut. Some of these tools have been approved for use in patients by the federal Food and Drug Administration while others have not. None of them has succeeded, however, in addressing the major drawback to balloon angioplasty, which is that after treatment more than one artery in three will close down, blocking blood flow again. Cumberland said that it would be a pleasant surprise if the ultrasonic jackhammer avoids the reclosure of treated arteries. Early results don't suggest that will happen. "But this technique is much safer than anything else we've seen," he said. While the ultrasound employed in plaque busting is far above the range of human hearing, it is far below the sound frequency used in making ultrasound images of internal bodily actions. Besides its safety, another advantage to the ultrasonic jackhammer is its ability to dissolve blood clots that often form in narrowed arteries. Such clots aren't handled as well by other technologies, Cumberland said. Other centers in France, Belgium and Germany will soon begin their own trials with the jackhammer, using equipment made by Baxter International in the Netherlands, Cumberland said. Dr. Robert Siegel, a professor of medicine at the University of California at Los Angeles and collaborator in the research, said human trials of the jackhammer may begin in the U.S. in one or two years.
<urn:uuid:6fccc17a-8281-483b-8587-3a9f41969cfe>
CC-MAIN-2016-26
http://articles.chicagotribune.com/1993-11-10/news/9311100140_1_jackhammer-arterial-walls-balloon-angioplasty
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396147.66/warc/CC-MAIN-20160624154956-00134-ip-10-164-35-72.ec2.internal.warc.gz
en
0.954713
657
2.9375
3
(Of a material or device) having the properties of a semiconductor.
- Previous fingerprint sensors have exploited the capacitance between the skin and the surface of a semiconducting device, identifying the ridges and grooves of the finger from the difference in capacitance they produce.
- The most common approach is to use sunlight to knock electrons out of a semiconducting material like silicon, creating an electric current.
- So, photonic crystals are now typically made of insulating or semiconducting materials, such as titanium oxide, silicon dioxide, silicon, or gallium arsenide.
Line breaks: semi|con¦duct|ing
<urn:uuid:882c3f61-2772-4209-9d59-8e67bceef7f7>
CC-MAIN-2016-26
http://www.oxforddictionaries.com/us/definition/english/semiconducting
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397565.80/warc/CC-MAIN-20160624154957-00096-ip-10-164-35-72.ec2.internal.warc.gz
en
0.869847
177
3.34375
3
Building stairs is a job that experienced carpenters know requires accurate measurements and diligent work habits. I'll attempt to guide you through the steps, pun intended, of staircase construction in a 'dwelling unit'. Knowledge of how to build stairs is very important not only in the stairs being useful and looking good, but also in preventing accidents. Besides needing to be sturdy and wide enough, stairs need to be consistent. Each step must be exactly the same size as every other step. (see Figure 1) The first thing to do is to measure the height where your stairs will go. This is your most important measurement. It is called the total rise. Every other stair measurement depends on it. The total rise is the vertical distance between the surface of the higher floor and the surface of the ground, sidewalk or the lower floor that the last step will be on (see Figure 2) The total run is the horizontal distance between the edge of the upper floor and the end of the bottom step. Each stair step has two basic measurements. The horizontal or flat part of the stair is called the run. The vertical height difference between two stairs is called the rise. The riser is the vertical part of the stair between a tread and the underside of the tread above it. The part of the stair that sticks out past the stair riser is called the stair nosing. The dimension of each stair depends on a number of factors. Your stairs can be steep or gradual. The rise of each stair can vary as well as its run. (see Figure 3) To prevent the stairs from being too steep or too gradual (see Figure 4), there is a relationship or proportion between the stair rise and the stair run. The British Columbia Building Code says that the stair rise must have a maximum of 200 mm (7 7/8") and a minimum of 125 mm (5"); the stair run has a maximum of 355 mm (14") and minimum of 210 mm (8 5/16"); the stair tread depth has a maximum of 355 mm (14") and minimum of 235 mm (9 1/4"). The tread depth is the stair run including the nosing. The stair nosing cannot be more than 25 mm (1"). You should check the building code of your own region before building or renovating anything structural for your home. An old adage says that for older people the ideal stair rise is 6" with a stair run of 12". An intermediate stair rise is 7" and the stair run is 11". The steepest stairs should be no more than a stair rise of 7 7/8" and a stair run of 10". Notice that in each case the stair run plus the stair rise equals 18". This is the simplest way of determining stair rise and stair run but the size of each stair is totally up to you as long as they are within Building Code ranges. The ideal stair run and stair rise for a dwelling based on a 92 1/4" stud, 3-1 1/2" plates, 2x10 floor joists and 5/8" subfloor is 14 rises of 7 5/8" and 13 runs of 10 1/2" with a 1" stair nosing. The preferred angle of stairs is around 30 to 35 degrees. There are three generally accepted rules for calculating the ideal stair rise to stair run ratio: To keep each stair rise the same size, you'll need to make some calculations. Follow these steps: Dan, my brother and webmaster, made a simple stair calculator for you to find the exact measurements of your stair rise and stair run. Stair Calculator It's a good idea before you start building the staircase to make sure the planned staircase can fit within the space that you have. 
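The online stair calculator mentioned above is not reproduced here, but the arithmetic behind it can be sketched in a few lines of Python. This is only an illustration of the method described in this article; the function name and the 7 5/8" and 10 1/2" defaults are assumptions for the example, not Dave's actual calculator.

```python
# Sketch of a stair calculator: divide the total rise into equal risers,
# then check the result against the BC Building Code ranges quoted above
# (rise between 5" and 7 7/8", run between 8 5/16" and 14").

def stair_layout(total_rise_in, target_rise_in=7.625, run_in=10.5):
    """Return (number of risers, exact rise, total run) for a straight staircase."""
    n_risers = round(total_rise_in / target_rise_in)   # trial number of risers
    exact_rise = total_rise_in / n_risers              # make every rise exactly equal
    if not 5.0 <= exact_rise <= 7.875:
        raise ValueError(f"rise of {exact_rise:.3f} in. is outside the code range")
    if not 8.3125 <= run_in <= 14.0:
        raise ValueError(f"run of {run_in} in. is outside the code range")
    n_treads = n_risers - 1                            # one fewer tread than risers
    return n_risers, exact_rise, n_treads * run_in     # total run = treads x run

# A total rise of 106 3/4" reproduces the article's layout: 14 risers of 7 5/8"
# and, with a 10 1/2" run, a total run of 136.5".
print(stair_layout(106.75))   # -> (14, 7.625, 136.5)
```

The same rise and run figures feed directly into the total-run and headroom checks described next.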
Calculate the total run of the staircase by multiplying the length of the run of each stair by one less than the number of stair rises you calculated in step #4. I like a stair run of 10" to 10 1/2" for a stair rise of 7 5/8". At a stair run of 10.5" for 13 stair treads, the arithmetic is: 10.5" x 13 = 136.5" for the staircase's total run. Then measure the physical space within your house to make sure there is enough room for the staircase. Hang a plumb bob from the edge of the upper floor, where the stairs are going to be attached. Measure from the plumb bob to where the bottom of the stairs will be. Make sure there's plenty of room so the stairs don't run into a wall or other obstruction. Allow at least 36" between the end of the bottom stair and a wall, if inside a house. If your measurement is too tight, try a stair run of less than 10.5" down to 10". Our total run in this staircase would be 10" x 13 = 130". We just saved 6.5". These calculations show the versatility in choosing different stair runs and stair rises. If room is still limited try taking off a stair riser, thus eliminating a stair. Remember though that you must stay within the maximum and minimum parameters for stair rise and stair run. Maybe move the obstruction or move the stair opening back in the upper floor if you have reached your maximum stair rise and minimum stair run. Installing a stair landing will change directions of your stairs, which can give you more room in many cases. (For more info on staircase landings see Installing a Landing in a Staircase) Watch the staircase headroom also. (see Figure 5). If the stairs are in an opening cut out of a floor area, headroom is a factor. The staircase opening must be long enough to allow adequate headroom when coming down the stairs. The minimum staircase headroom under a beam or joist is 1.95 m.(76 7/8"). Now that we have determined our stair rise and stair run and checked for adequate staircase headroom, we can cut the stair stringers. (For more info on stair stringers see How to Cut Stair Stringers.) Nail the stair stringers in place, securely to the top floor trim joist and to the bottom floor, or to the side walls. Next is installing the steps or stair treads. In our stair example we chose 1" plywood for the stair treads. Since our stairs are inside a house and will be carpeted, we will choose a stair nosing of 1" giving us a stair tread width of 11 1/2". Rip the 1" plywood 11 1/2" wide and the length to match the width between the walls less 3/4" on each side for the drywall to slip down. The width of the staircase is important as well. The minimum width is 860 mm.(33 7/8"). I prefer a width of 36" if appliances or furniture have to be moved up or down the stairs. If your staircase is wider than 36" put in extra stair stringers to support the longer stair treads. In an inside staircase the stair riser is usually closed, there is a board for the stair riser to attach the carpet or other finish to. This is different to an open riser staircase such as outside off a deck where the stair risers do not have a board attached to them. In this case the stair treads should be made from 2 x 4, 2 x 6, or larger to stand up to the weather. Also, on this type of staircase overhang the stair step 4 1/2 inches from the outside edge of each stair stringer on a 3 foot or wider staircase. (Example of stairs off a deck.) Back to our project. We have 13 stair treads ripped and cut to length now. Let's rip the material for the stair risers. This can be 1/2" to 3/4" plywood. 
In new construction, there are usually scraps of 5/8" left from the sub-floor. Since our stairs will be covered with carpet, let's use these. Rip the stair riser pieces 7 5/8" and the same length as our stair treads. Now start assembly at the bottom of the staircase. We discover that our first stair riser is too high; that's because we cut 1" off the bottom of the stair stringer. Adjust the first stair riser to fit the stair stringer. It should be 6 5/8", unless the depth of the floor covering on the bottom floor is different from the depth of the covering on each stair tread and on the higher floor. What you want is to have the exact same height of each step all along from the lower floor to the upper floor. In other words, if the depth of the lower floor's carpeting or tile is thicker or thinner than the material on the stair treads then subtract or add the difference to this lowest (first) stair rise. Nail the stair riser on with some construction adhesive or use the adhesive and screws. Nail the next stair riser on, then put some adhesive on each stair stringer at the bottom stair and put some adhesive on the back edge of the stair tread where it meets the stair riser. Nail the bottom stair tread down to the stair stringer placing it tight against the second stair riser and from the back of the stair riser nail through into the stair tread. You can see that the stair tread is now supported by the stair stringer on each end and the lower stair riser supports the front while the upper stair riser supports the back - no squeaks here. Continue up the stairs following this procedure. When you arrive at the top stair riser, it will need to be trimmed to fit. If there is no nosing on your top floor to match your stairs, now is the time to put one in. I usually rip a nosing from solid lumber, say a 2 x 4, to match the overhang and thickness of our stair nosings. Glue and drill and screw this nosing on securely. If your stairs were built outside and the stair stringers have no support under the middle of them now would be the time to put 1 or 2 posts under the stair stringers for added strength. Also if these stairs are hanging off a deck with a 2 x 6 trim joist, not much is there to secure the stair stringers to at the top. What I like to do is support the stair stringers with a 4 x 4 that goes from a concrete block or footing right up to above the deck level to form the stair handrail post. Below the stair stringers and tight up to them, nail a 2 x 4 or 2 x 6 ledger across the posts. Then nail a 2 x 4 across the posts near the bottom to prevent the posts from kicking out. Stairs need handrails. These should be between 800 mm (31 1/2") and 965 mm (38"), measured vertically from the edge of the stair nosing (see Figure 6) to the top of the stair handrail. I suggest 32" as a comfortable height. At a staircase landing, the stair handrail should be 36" high and at a balcony edge should be 42" high. These measurements are for single dwelling residential construction (one family house). Dave
<urn:uuid:4ebd488f-e999-4c46-97fb-107b0e6e6a92>
CC-MAIN-2016-26
http://daveosborne.com/dave/articles/stairs.php
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783398516.82/warc/CC-MAIN-20160624154958-00064-ip-10-164-35-72.ec2.internal.warc.gz
en
0.924645
2,363
3.234375
3
Behind the Dictionary
Lexicographers Talk About Language
History in the Toolbox: The Vocabulary of Electrical Units
I am guessing that the average electrician doesn't realize how much history is knocking about in his or her toolbox. Volt, amp, ohm, watt—these electrical units are all eponyms, derived from the names of pioneers in the field. Let's have a tour. Electricians measure electrical "pressure" in volts; that is, voltage is the "tension" in high-tension wires. The words volt and voltage come from Alessandro Volta, an 18th-century Italian scientist who made investigations in electricity and chemistry. He's best known as the inventor of the voltaic pile, more commonly known in English as the battery. Volta's pile was literally a stack—a pile—of zinc and copper disks separated by paper discs soaked in saltwater. Benjamin Franklin had suggested the term battery back in 1748 for a group of linked Leyden jars, in the sense of a collection of things used together, like a battery of artillery, and the word was later applied to Volta's invention. (Speakers of other languages have a more direct connection to Volta's invention; the term for battery in French is pile, and in Spanish, pila.) Volta's contemporary Luigi Galvani had discovered electricity in living creatures, which he did by making frogs' legs jump when he applied a shock. An early term for electricity generated by a voltaic pile was galvanism. Although we don't use that term any more, Galvani lives on in the word galvanometer, a device for measuring electrical flow. His name is also visible in the words galvanize in the sense of "to stimulate" (as if electrically shocked) and in the sense of putting an anti-corrosion layer on metal. An amp (or ampere) is a measure of the quantity of electric current flowing through a wire. The term honors the French scientist André-Marie Ampère, who made discoveries about magnetism and electricity. Electrical current might flow easily through a substance (like metal) or it might encounter resistance. This resistance is measured in ohms, named for the German scientist Georg Ohm. Electricians have need to measure each of these, and the original galvanometer evolved into the ammeter (from amps), voltmeter, and ohmmeter. These days, all of these measuring functions can be accomplished with a single device, and every electrician therefore has a multimeter in the toolbox. I'm sure you recognize the term watt (for example, a 60-watt light bulb). Wattage indicates how much power a device produces or consumes. This time it's the Scotsman James Watt, famous for his work with steam engines, who's remembered in the electrician's daily work. The connection is that watts measure a rate of work, like horsepower, which is an important factor in both steam engines and electrical devices. (In fact, about 746 watts is equivalent to 1 horsepower.) Even when we move from the electrician's toolbox to the electronics work bench or laboratory, we keep finding people's names. Coulomb, decibel, farad, gauss, hertz, joule, maxwell, oersted, siemens, tesla, and weber all refer to units related to electricity, and all invoke scientists who worked in the field. In a whimsical touch, names are sometimes inverted to indicate the reciprocal of a unit: a siemens (which has an -s even in the singular) is the reciprocal of an ohm, and an alternate term for the siemens is mho. Along those lines, a daraf has been proposed as the reciprocal of a farad, though that term has never been formally accepted. Would you agree that this seems like an unusually dense collection of eponyms?
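Since so many of these unit names come up together, a brief numeric aside may help fix how they relate. This is an illustration added here, not part of the original column; the voltage and resistance values are made up, and 746 watts per horsepower is the standard rounded conversion behind the figure quoted above.

```python
# Relationships among the units named above, via Ohm's law and the power law.
volts = 120.0             # electrical "pressure" (Volta)
ohms = 240.0              # resistance (Ohm)

amps = volts / ohms       # current (Ampere): I = V / R
watts = volts * amps      # power (Watt):     P = V * I
siemens = 1.0 / ohms      # conductance (Siemens), the reciprocal "mho" of an ohm
horsepower = watts / 746  # about 746 W per horsepower

print(f"{amps:.2f} A, {watts:.0f} W, {siemens:.5f} S, {horsepower:.3f} hp")
# -> 0.50 A, 60 W, 0.00417 S, 0.080 hp  (the 60-watt bulb mentioned above)
```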
It turns out that there's a tradition about this. And in a particularly satisfying bit of lexicography, we know exactly how it started. By the mid-19th century, although the electrical industry was expanding rapidly—by 1858 the first trans-Atlantic telegraph cable had already been laid—there were no standards even for the names of basic electrical units. In 1861, Josiah Latimer Clark and Charles Tilston Bright, two British electrical engineers, addressed themselves to this issue and presented their proposal for "universally received standards of electrical quantities and resistances" to the British Science Association. They published their ideas in a short article titled "Measurement of Electrical Quantities and Resistance" in the very first edition of The Electrician: A Journal of Telegraphy. Here's the bit that gets the ball rolling on our tour of famous names:
... let us derive terms from the names of some of our most eminent philosophers, neglecting for the present all etymological rules.
They included this table of proposed unit names:
[Table of proposed unit names not reproduced here.]
This is fascinating for a number of reasons. As I say, we don't always get such a precise view into the origins of terms. I'm intrigued by the authors' rejection of "etymological rules," which seems to me might mean either that they're eschewing Latin and Greek, or possibly that they're slightly altering the names for convenience. It's also interesting because if you look carefully, you'll see that the terms that the authors propose are not actually the ones we use today. For example, they propose ohma for the unit of tension; we use volt today. And the reverse—they propose volt for resistance, whereas we use ohm today. You can also see that their suggestion to use galvat was not taken up, and we use amp today. But it's clear that they started a trend, as the daily terminology of electrical units shows. In fact, it's hard to find a unit that isn't named for an "eminent philosopher." Whatever discoveries are made in the future about electricity, I hope they keep up the tradition.
<urn:uuid:76d4cdbb-f2c6-4fd3-9647-2c0f96f4d3e8>
CC-MAIN-2016-26
http://www.visualthesaurus.com/cm/dictionary/history-in-the-toolbox-the-vocabulary-of-electrical-units/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395620.9/warc/CC-MAIN-20160624154955-00159-ip-10-164-35-72.ec2.internal.warc.gz
en
0.961816
1,224
3.328125
3
Research suggests faith lowers stress, eases tasks

April 16, 2009

TORONTO — Canadian researchers have found that strong religious convictions can lower stress and enhance the performance of basic tasks. A team in Toronto put 28 students through tests measuring both levels of religious observance and stress caused by making mistakes on a test. The newly published study by professors at the University of Toronto and York University points to religious believers out-performing nonbelievers on cognitive tasks.

"The more religious they were, the less brain activity they showed in response to their own errors," said University of Toronto assistant psychology professor Michael Inzlicht, lead author of the study. "They are calmer when they make errors."

Researchers asked subjects, who were from a variety of faith backgrounds, to complete a "religious zeal" questionnaire. Subjects were then given a test asking them to name the color of the letters in words such as "red" or "blue" (in which the word "red" may appear in blue letters). Using electrodes, researchers monitored brain activity and found that subjects with high levels of religious observance experienced less activity in the part of the brain that governs anxiety and helps modify behavior. The more religious zeal individuals showed, the better they did on the test. The study also found that even moderate religious belief resulted in lower levels of anxiety than among nonbelievers.
<urn:uuid:51453089-be43-454f-9665-a45585bd77f8>
CC-MAIN-2016-26
http://www.thealabamabaptist.org/print-edition-article-detail.php?id_art=10597&pricat_art=8
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783408828.55/warc/CC-MAIN-20160624155008-00138-ip-10-164-35-72.ec2.internal.warc.gz
en
0.950391
294
2.53125
3
Post–Cold War Policy - Isolating and punishing "rogue" states

American foreign policymakers used the terms "rogue," "outlaw," and "backlash" states virtually interchangeably after the Cold War. As early as July 1985, President Reagan had asserted that "we are not going to tolerate … attacks from outlaw states by the strangest collection of misfits, loony tunes, and squalid criminals since the advent of the Third Reich," but it fell to the Clinton administration to elaborate this concept. Writing in the March–April 1994 issue of Foreign Affairs, Anthony Lake cited "the reality of recalcitrant and outlaw states that not only choose to remain outside the family [of democratic nations] but also assault its basic values." He applied this label to five regimes: Cuba, North Korea, Iran, Iraq, and Libya, and claimed that their behavior was frequently aggressive and defiant; that ties among them were growing; that they were ruled by coercive cliques that suppressed human rights and promoted radical ideologies; that they "exhibited a chronic inability to engage constructively with the outside world"; and that their siege mentality had led them to attempt to develop weapons of mass destruction and missile delivery systems. For Lake, "as the sole super-power, the United States [had] a special responsibility … to neutralize, contain and, through selective pressure, perhaps eventually transform" these miscreants into good global citizens.

The first Bush administration had agreed with Lake's analysis and in 1991 adopted a "two-war" strategy designed to enable U.S. forces to fight and win two regional wars simultaneously against "renegade" nations. The second Bush administration emphasized the urgent need to develop a national missile defense to protect the United States from weapons launched by rogue states. In short, the "outlaw" nation theme pervaded U.S. foreign policy throughout the post–Cold War era.

Critics seized on these terms as inherently fuzzy, subjective, and difficult to translate into consistent policy. Although Lake had defined rogues as nations that challenged the system of international norms and international order, disagreement existed about the very nature of this system. For example, whereas the Organization for Security and Cooperation in Europe (OSCE) and UN Secretary-General Annan advocated international norms that would expose regimes that mistreated their populations to condemnation and even armed intervention, others argued that such norms would trample on the traditional notion of state sovereignty. Nevertheless, the State Department sometimes included Serbia on its outlaw list solely because President Milosevic had violated the rights of some of his nation's citizens, and NATO undertook an air war against him in 1999 because of his repression of an internal ethnic group.

In theory, at least, to be classified as a rogue, a state had to commit four transgressions: pursue weapons of mass destruction, support terrorism, severely abuse its own citizens, and stridently criticize the United States. Iran, Iraq, North Korea, and Libya all behaved in this manner during at least some of the post–Cold War era. Yet Cuba, which certainly violated human rights and castigated the United States, was included on the list solely because of the political influence of the American Cuban community and specifically that of the Cuban American National Foundation.
Moreover, in 1992 Congress approved the Cuban Democracy Act, which mandated secondary sanctions against foreign companies that used property seized from Americans by the Castro government in the 1960s. Attempts to implement this law outraged some of Washington's closest allies, and President Clinton, while backing this legislation as a presidential candidate, tried hard to avoid enforcing it. On the other hand, states like Syria and Pakistan, hardly paragons of rectitude, avoided being added to the list because the United States hoped that Damascus could play a constructive role in the Arab-Israeli "peace process," and because Washington had long maintained close relations with Islamabad—a vestige of the Cold War.

The United States employed several tools to isolate and punish rogue states. Tough unilateral economic sanctions, often at congressional behest, were imposed on or tightened against Iran, Libya, Cuba, Sudan, and Afghanistan. Airpower was used massively against Serbia in 1999 and selectively against Iraq for years after the conclusion of the Gulf War in 1991. Cruise missiles were fired at Afghanistan and Sudan in retaliation for the August 1998 terrorist attacks against U.S. embassies in Kenya and Tanzania. The Central Intelligence Agency supported a variety of covert actions designed to depose Saddam Hussein, while Congress approved the Iraq Liberation Act in 1998 aimed at providing Iraqi opposition groups with increased financial assistance. Several leading Republicans who would occupy high positions in the George W. Bush administration publicly urged President Clinton in February 1998 to recognize the Iraqi National Congress (INC) as the provisional government of Iraq. Some of these critics, including Paul Wolfowitz and Robert Zoellick, hinted that U.S. ground forces might ultimately be required to help the INC oust Saddam. In all of these anti-rogue efforts, however, Washington found it exceedingly difficult to persuade other nations (with the partial exception of Britain) to support its policies of ostracism and punishment.

In light of these difficulties, some observers suggested that the United States drop its "one size fits all" containment strategy that allegedly limited diplomatic flexibility in favor of a more differentiated approach that addressed the particular conditions in each targeted nation. Indeed, the Clinton administration adopted this policy alternative with North Korea and, to a lesser degree, with Iran. Faced with the dangers posed by Pyongyang's ongoing efforts to develop nuclear weapons and missile delivery systems, the United States briefly considered air strikes against suspected nuclear facilities or stringent economic sanctions. Yet both options were rejected out of fear of triggering a North Korean invasion of the South. Consequently, the Clinton administration reluctantly entered into negotiations designed to compel Pyongyang's nuclear disarmament. In October 1994 the U.S.–North Korea Agreed Framework was signed, committing North Korea to a freeze on nuclear weapons development and the eventual destruction of its nuclear reactors. In exchange, the United States, South Korea, and Japan promised to provide two light-water nuclear reactors that would be virtually impossible to use to produce nuclear weapons, along with petroleum to fuel North Korea's conventional power plants and food assistance to alleviate near-famine conditions. During the last years of the Clinton administration, relations with Pyongyang warmed considerably.
North Korea claimed that it had suspended its missile development program pending a permanent agreement, and Madeleine Albright, now secretary of state, made the first official American visit to North Korea in 2000. Nevertheless, because this conditional engagement with North Korea involved reaching agreements with a regime widely perceived as extremely repressive and untrustworthy, many in Congress attacked this approach as tantamount to appeasement and called on George W. Bush to cease negotiations with Pyongyang. He obliged, announcing that he had no intention of quickly resuming efforts to reach an agreement on North Korean missile development. Interestingly, despite the budding rapprochement between these two states in the late 1990s, officials in the Clinton administration repeatedly argued that a national missile defense system needed to be constructed to protect the United States against nuclear missile attacks from rogue states such as North Korea. George W. Bush's decision to end talks with Pyongyang suggested to many observers that he preferred to pursue national missile defense. To critics of the rogue state concept, these actions merely reinforced their view that while the concept had proven to be very successful in garnering domestic support for punitive measures, the derogatory nature of the term necessarily complicated efforts to improve relations with states like North Korea. Similarly, Iran represented another case in which altered circumstances challenged the rogue-state strategy. The surprise election of Mohammed Khatemi to the presidency in May 1997 and his subsequent invitation for a "dialogue between civilizations" led Secretary Albright to propose a "road map" for normalizing relations. Conservative Shiite clerics warned Khatemi against engaging the "Great Satan," but the continued designation of Iran as a rogue state also contributed to the Clinton administration's difficulty in responding constructively to positive developments in Tehran. The gradual realization that calling states "rogues" might in some cases have proven counterproductive induced the United States in June 2000 to drop this term in favor of the less fevered "states of concern." Secretary Albright emphasized that the change in name did not imply that the United States now approved of the behavior of these regimes: "We are now calling these states 'states of concern' because we are concerned about their support for terrorist activities, their development of missiles, their desire to disrupt the international system." Yet State Department officials acknowledged that the "rogue" term had been eliminated because some of these states—such as North Korea, Libya, and Iran—had taken steps to meet American demands and had complained that they were still being branded with the old label. Regardless of the terms employed, however, on another level this post–Cold War strategy of regional containment reflected an effort by the United States to define acceptable international (and even domestic) behavior. As a hegemonic state it was, perhaps, appropriate that Washington attempted to write these rules. Yet it inevitably risked exposing the United States to charges of arrogance and imperiousness.
<urn:uuid:0fad2b78-82db-4b37-9f8a-423d2c23b137>
CC-MAIN-2016-26
http://www.americanforeignrelations.com/O-W/Post-cold-War-Policy-Isolating-and-punishing-rogue-states.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.2/warc/CC-MAIN-20160624154951-00054-ip-10-164-35-72.ec2.internal.warc.gz
en
0.965743
1,854
2.53125
3
Reading Group Guide What is Greg's greatest talent? How does he earn money? Do you like to earn money? How do you earn money? What do you do with your money? In Chapter 2, what discovery does Greg make about quarters? What happens when he tries to sell candy and toys at school? Is Principal Davenport correct in her actions? Explain your answer. What does Greg sell at the beginning of sixth grade? Describe how he learned to create this product over the summer. Would you have been willing to work so hard to make something to sell? What does this tell you about Greg? What competition do Chunky Comics face? Who creates the competition? Describe the relationship between these characters in the first half of the novel. What does Mr. Z like about numbers? What happens when he sees Maura give Greg a bloody nose? How does Mr. Z feel about Greg's situation? What role does math play in his analysis? When they finally have a serious discussion about comics, what does Greg realize about Maura? What does Maura realize about Greg? How does Mr. Z analyze Greg's claim that Maura "stole" his idea? What happens when the two sixth graders begin to work together? How did Mr. Z choose his job? What do Mr. Z's comments about wealth and careers make Greg wonder about his get-rich goal? Why does Mrs. Davenport call comic books "practically toys, and bad toys at that"? Is she correct to extend her selling ban to comic books? Why is Chapter 16 entitled "Art and Money"? Compare and contrast Maura's goal in creating comic books with Greg's. Which character thinks most like you? What do Maura and Greg realize about things being sold at school? What case do they make to the school committee? What is Mrs. Davenport's opposing argument? How is the Chunky Comics problem resolved at Ashworth Intermediate? Is this a good solution? Would you participate in such a venture at your school? What might you call your store or website? What ideas might you bring to the project? Is getting rich a primary goal for you? Why or why not? What future goals are important to you? If you had a lot of money, how would you choose to spend it? Activities and Research At the library or online, find several definitions for money. Individually, or with friends or classmates, make a list of synonyms for, words related to, and phrases incorporating the word "money." Are your lists long or short? Were they difficult to brainstorm, or quick and easy? Why do you think this is the case? Review the moments in the story where Greg and Maura compete to make money. Have you ever been in a similar contest? What was the result? Write a short story in which you find yourself up against another kid in a money-making venture. Make your own comic book. In addition to the information provided in the novel, consult Understanding Comics by Scott McCloud or So, You Wanna Be a Comic Book Artist? by Philip Amara. Share your comic with family members or friends. Study selling. Individually or in groups, list corporate logos, promotions, and other types of selling you see at school. Note the number of commercials in an hour of television. Keep a journal of corporate sales efforts at your local library, on sports fields, or elsewhere in your community. Display your observations on an informative poster. Discuss or write about how all this selling makes you feel. Is it okay with you? Why or not? How might things change for the better? Imagine Greg and Maura have asked for your help with their school committee presentation. 
Use PowerPoint or another computer program to create a presentation based on the arguments made in the novel, adding suggestions and ideas of your own. Give your presentation to friends or classmates. Assign roles of school committee members, administrators, and parents to your classmates or friends. Then improvise the conversation after Greg and Maura have left the school committee meeting. What points do members feel the kids made? Why do comic sales still pose a school problem? What about future sales proposals from other kids or schools? How do parents feel about this dilemma? How can a principal keep money-making from getting out of hand? Based on your improvisation, write an additional chapter to add to Lunch Money. Imagine you are Greg or Maura near the end of the story. In the character of Greg, write a journal entry about your changing attitudes toward making money. Or, in the character of Maura, write a journal entry about your changing reasons for making comics. Write a newspaper article about the success of Chunky Comics two years later. What has happened to Greg and Maura? How have their dreams changed? Upon what new adventures have they embarked? Do you have a great idea for something to make and sell? Write a plan, including a sketch of your product, its name, and how you will sell it. What will your product cost to make, for how much will you sell it, and what profit do you hope to earn? What will you do with your earnings?
<urn:uuid:96f19a06-9d70-478f-abb4-4556d1ad3be8>
CC-MAIN-2016-26
http://books.simonandschuster.com/Lunch-Money/Andrew-Clements/9780689866852/reading_group_guide
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783393442.26/warc/CC-MAIN-20160624154953-00027-ip-10-164-35-72.ec2.internal.warc.gz
en
0.956062
1,068
3.265625
3
Of Modern Poetry Theme of Literature & Writing "Of Modern Poetry" is about one thing: what poetry is supposed to be doing in the modern age. For that reason, Stevens uses the phrase "has to" quite a lot in this poem. The thing is sort of like a manifesto for Stevens' beliefs. In that sense, the entire thing is about the role of literature and writing in society, and what literature should be doing for its readers. For Stevens, the answer's pretty clear. Now that people's faith in religion and science has been shaken, it's now poetry's job to make people feel good about the world and their place in it. Questions About Literature & Writing - Do you think that poetry is important enough to take on the role of religion in people's lives? How can you see it succeeding? - How can you see it failing? How would the speaker answer this question? - What do you think the role of literature and writing should be in people's lives? What does the speaker think it should be? Why? - What does Stevens mean when he says that modern poetry should be the "poem of the act of the mind"? What parts of the poem support your answer? Chew on This It's crunch time. For Stevens, modern poetry has pretty much run out of options. It's going to have to take desperate measures if it's going to reach people. According to "Of Modern Poetry," classic poetry was pretty much worthless (in your face, Ovid). Now modern poetry has to pick up the slack and start being relevant to normal people.
<urn:uuid:701418c3-85f4-43f2-8417-b9318108bc56>
CC-MAIN-2016-26
http://www.shmoop.com/of-modern-poetry/literature-writing-theme.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783399106.96/warc/CC-MAIN-20160624154959-00111-ip-10-164-35-72.ec2.internal.warc.gz
en
0.978664
332
3.09375
3
If you have any public domain photographs of historical interest to donate, whether scanned or printed, please contact the webmaster; your submission will be credited if it is displayed on this site.

Began November 17, 1962; finished May 21, 1966. 6 photographs - during and after.

Below are two pictures of the cofferdam construction from the air. The size of the cars and trucks leaves us with an impression of the scale of the project. The cofferdam kept out the water and created a hole below sea level, allowing the workers to begin building the foundation and access tunnel that runs beneath the barrier.

The construction was begun on November 21, 1962 and finished on May 21, 1966. Total cost was over $18 million. The barrier is 3.5 miles long, with the 150-foot-wide gates at a mid-point in the harbor. The structure stands 26 feet above mean high tide and can be seen from outer space. It's the largest such structure on the east coast of the USA. The Army Corps of Engineers manages and operates the main gates. In case of a storm or excessive high tide, it takes 12 minutes to close the main gates, which weigh 40.5 tons each. There is also a street gate and pumping station on East Rodney French Blvd, just north of Billy Woods Wharf and Davy's Locker, that is operated by the city of New Bedford. The cutoff point leaves the businesses and homes south of the barrier in harm's way from storms.

This picture (below) shows the mechanism of the 150-foot-wide gates being installed.

Below are four more 2010 aerial views.

Below is the entire barrier in its two sections: the New Bedford Harbor section and the Clark's Cove section.

Below is a closer view of the New Bedford Harbor section of the barrier with the gates open.

Below is an aerial view of the street barrier gate that is controlled by the City of New Bedford. The property to the south is unprotected.

This is the section of the New Bedford hurricane barrier at the northernmost part of Clark's Cove. Until the late 1800s a stream emptied into this cove, and since then there has been a lot of landfill, as can be seen from older maps of the area.
<urn:uuid:46517a49-4ba4-4dee-bf98-be3f5d43959d>
CC-MAIN-2016-26
http://www.whalingcity.net/picture_hurricane_barrier.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396872.10/warc/CC-MAIN-20160624154956-00022-ip-10-164-35-72.ec2.internal.warc.gz
en
0.94568
474
2.59375
3
Put the sentences into passive voice. Mind the tenses.

1. They speak English and French at this hotel.
   English and French ___ at this hotel.
2. The little boy broke the window last week.
   ___ by the little boy last week.
3. Our secretary typed this enquiry.
   ___ by our secretary.
4. Jill uses the computer quite often.
   ___ by Jill quite often.
5. The secretary defended some colleagues.
   ___ by the secretary.
6. Picasso painted this picture.
   ___
7. Last year they published ten books.
   ___ (by them) last year.
8. Molly has knitted this cardigan.
   ___
9. Next year George will visit Marc in London.
   Next year Marc ___ by George in London.
10. Jim has opened the window.
    ___
11. Frank has broken many windows.
    ___
12. Lucy buys many books.
    ___
13. David has written some letters.
    ___
14. Benjamin Franklin invented the lightning conductor.
    The lightning conductor ___ by Benjamin Franklin.
15. All students have learned the irregular verbs.
    The irregular verbs ___ by all students.
<urn:uuid:42ab48a5-83f8-46e7-831b-8055ed5b5b3e>
CC-MAIN-2016-26
http://suz.digitaleschulebayern.de/english/grammar/passive1.htm
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.2/warc/CC-MAIN-20160624154951-00007-ip-10-164-35-72.ec2.internal.warc.gz
en
0.85744
206
3.015625
3
Robert C. Seacord is a senior vulnerability analyst at the CERT/CC and author of Secure Coding in C and C++ (Addison-Wesley, 2005). He can be reached at firstname.lastname@example.org.

Buffer overflows are a primary source of software vulnerabilities. A buffer overflow occurs when data is written outside of the boundaries of the memory allocated to a particular data structure. Buffer overflows are troublesome in that they can go undetected during the development and testing of software applications. Common C and C++ compilers neither identify possible buffer overflow conditions at compilation time nor report buffer overflow exceptions at runtime. Not all buffer overflows lead to exploitable software vulnerabilities. However, a buffer overflow can cause a program to be vulnerable to attack when the program's input data is manipulated by a (potentially malicious) user. Even buffer overflows that are not obvious vulnerabilities can introduce risk.

Code inspections have been used for many years to reduce errors in program development. Code inspections used primarily to identify and eliminate security flaws leading to exploitable buffer overflows and other vulnerabilities are referred to as "source-code security audits." These audits can be effective in finding and eliminating problems that cannot be detected using existing tools. However, source-code audits are typically unstructured and rely largely on the experience and tenacity of the programmers performing the review. While any manual process is prone to error, following a more structured approach may produce a higher level of assurance that potential security flaws have been identified and properly remediated.

In the remainder of this article, I describe a manual review process for C and C++ language programs that is based on Safe-Secure C/C++ from Plum Hall. Safe-Secure C/C++ (SSCC) is a set of methods to eliminate vulnerabilities resulting from buffer overflows and other programming errors in C and C++ using a mixture of compile-time, link-time, and runtime tests, plus some design-time restrictions. The basic premise underlying SSCC is that most exploits (especially those that transfer control to arbitrary code) need to read or write to memory locations outside the bounds of the data structures defined by the program. This allows an attacker, for example, to overwrite the return address on the stack, or another address to which control is eventually transferred, in order to execute arbitrary code provided by the attacker or already resident on the system. By eliminating the possibility of such writes, it is possible to eliminate these vulnerabilities.

To demonstrate how the manual review process works, I apply it to the hbAssignCodes() function shown in Example 1 from the Standard Performance Evaluation Corporation (SPEC) C language benchmark program 256.bzip2. To simplify the process and reduce the cognitive load for the reviewer, the manual review is carried out in a series of steps. Because SSCC is based on preventing reads and writes from outside the bounds of programmatically defined data structures, the first step is to identify fetches and stores that involve subscripting or dereferencing a pointer. The hbAssignCodes() function is shown in the right-hand side of Table 1. There are two fetches and stores of interest in this function, both on line 1441. The variable length is a function parameter and is defined as a pointer to unsigned char. The variable code is also a function parameter but is defined as a pointer to int.
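Example 1 itself is not reproduced in this copy of the article. For readers who want to follow the line references, the sketch below gives the rough shape of the bzip2 routine under discussion; it is a paraphrase rather than the exact SPEC 256.bzip2 listing, so the line numbers, type names, and the SSCC-style annotations in the comments should be read as approximations.

```c
/* Sketch of hbAssignCodes() from bzip2, annotated for the SSCC review.
   This is a paraphrase for illustration; the exact SPEC 256.bzip2 text,
   line numbers, and typedef names may differ slightly. */
void hbAssignCodes(int *code,             /* pointer to int           */
                   unsigned char *length, /* pointer to unsigned char */
                   int minLen, int maxLen,
                   int alphaSize)         /* requirement (Example 2):
                                             alphaSize is SUB5 for both
                                             length and code           */
{
    int n, vec, i;

    vec = 0;
    for (n = minLen; n <= maxLen; n++) {
        /* "line 1440": counted-plus loop; i runs from 0 to alphaSize-1 */
        for (i = 0; i < alphaSize; i++)
            /* "line 1441": the two fetch/store subscripts of interest.
               SUB4(i, length) and SUB4(i, code) must be guaranteed,
               which follows if alphaSize is SUB5 for both arrays.      */
            if (length[i] == n) { code[i] = vec; vec++; }
        vec <<= 1;
    }
}
```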
When these variables are subscripted, the value of these pointers is added to i times the sizeof of the respective types. On 32-bit Intel Architecture (IA-32), for example, the sizeof of an unsigned char is a single byte, while an int is 4 bytes. In both cases, there is no clear indication what these arguments point to, so these fetch and store operations could potentially be out of bounds. Consequently, both subscript operations are annotated in the left-hand side of the table. The SUB4() notation is shorthand for "is a suitable subscript for" and is both a requirement and a guarantee. This means that the developer, compiler, or runtime system must guarantee that this requirement is satisfied before line 1441 is executed, to eliminate the possibility of buffer overflow. In step 2 we look for and mark counted loops. Counted loops are the most basic type of loop and involve a loop counter that monotonically increases or decreases until a maximum or minimum value is reached. If the loop counter is increasing, the loop is referred to as a "counted-plus loop." When the loop counter is decreasing, the loop is referred to as a "counted-minus loop." Table 2 identifies two occurrences of counted-plus loops in the hbAssignCodes() function. Counted loops are interesting because they can help establish easy-to-identify limits. The variable i (used twice as an index on line 1441) is the loop counter for the counted-plus loop on line 1440. We see from the for statement that the value of i starts at 0 and increases monotonically until it is one less than the value of alphaSize. This means that the values up to alphaSize-1 must be suitable as a subscript for both the length and code arrays. These limits are identified in the third step of the review process. Table 3 shows the annotated hbAssignCodes() function at the completion of step 3. Table 3 adds annotations for line 1441 showing that alphaSize is "SUB5" for both the length and code arrays. "SUB5" means that the value is one greater than a value that is suitable as an array index (that is, "SUB4 plus one"). In step 4 we annotate the function's declaration to indicate whether there are any requirements on arguments to the function. Because the alphaSize argument must be SUB5 for both length and code, we need to annotate this requirement as shown in Example 2. In step 5 we analyze each call to the function to determine whether the requirements imposed by the new annotations can be guaranteed. In the current example, there is only one call to this function on line 1907 of the program (as shown in Table 4 along with some other relevant lines from the sample program). The arguments to this function include the code and len arrays, respectively declared on lines 1012 and 1021, and alphaSize. During step 3, it was also determined that alphaSize must be SUB5 for len (see the annotation for line 1902). Flow analysis shows that after line 1736, alphaSize equals 258, which provides the guarantee prior to invoking the hbAssignCodes() function on line 1868 that alphaSize is SUB5 for len. Because the len array has the same bounds as the code array, this also guarantees that alphaSize is SUB5 for code. This means that the call to hbAssignCodes() on line 1868 is safe and no additional runtime guarantees are required. This is an ideal outcome because no additional code needs to be introduced that would introduce additional runtime overhead. A cut-down version of a function (qSort3()) that cannot be guaranteed to be safe is shown in Table 5. 
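Table 5 is likewise not reproduced here. The fragment below is a simplified stand-in for the kind of stack-push code the qSort3() discussion refers to, together with the sort of runtime check the analysis calls for; the array name, the bound of 1000, and the use of assert() are illustrative choices, not the actual benchmark source.

```c
/* Simplified stand-in for the qSort3() stack handling discussed above.
   The real 256.bzip2 code uses several parallel stack arrays and push
   macros; only the bounds issue is modeled here. */
#include <assert.h>

#define QSORT_STACK_SIZE 1000   /* raising this to 1001 is the other fix */

static int stack[QSORT_STACK_SIZE];
static int sp = 0;

/* Push with the SUB4(sp, stack) guarantee checked at runtime. */
static void push(int value)
{
    /* The check the authors call for before the store at "line 2496":
       without it, flow analysis cannot rule out sp == QSORT_STACK_SIZE. */
    assert(sp < QSORT_STACK_SIZE);
    stack[sp] = value;
    sp++;
}
```

In production code the assertion would typically be replaced by explicit error handling, but the point is the same: the store is preceded by a guarantee that the subscript is in bounds.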
This function shows a number of subscripting operations on lines 2446 through 2496 (after macro expansion). The control flow of the function permits a compile-time analysis of the min-max range of the subscript sp. This analysis shows that the subscripting is valid at lines 2446 through 2495, but a potential buffer overflow exists at line 2496. As a result, it is necessary to modify the code so that a check is inserted prior to line 2496 to ensure that sp is a valid subscript for stack. Alternatively, the bound for stack can be increased to 1001 at line 2437.

Programmers may sometimes dismiss concerns about buffer overflows in "corner cases that wouldn't happen in real situations." However, software security requires that developers anticipate the actions of malicious users who will search for corner cases like these that can be successfully exploited.

Source-code audits have been used successfully to identify and remove software flaws from C and C++ programs that otherwise may have resulted in exploitable software vulnerabilities. However, these audits are often imperfect, unstructured, and dependent on the tenacity and knowledge of the auditor. A formal, structured approach such as the one described in this article can be used to prove the safety of analyzed code. Of course, this manual method is both labor-intensive and prone to human error and could be greatly supplemented by the use of automated tools.

- Seacord, Robert C. Secure Coding in C and C++. Addison-Wesley, 2005, ISBN 0321335724.
- Fagan, M.E. "Design and Code Inspections to Reduce Errors in Program Development." IBM Systems Journal, v. 15 n. 3, 1976, pp. 182-211.
- Plum, Thomas and David M. Keaton. "Eliminating Buffer Overflows, Using the Compiler or a Standalone Tool." Published in proceedings of the Workshop on Software Security Assurance Tools, Techniques, and Metrics, Long Beach, California, November 7-8, 2005; https://samate.nist.gov/index.php/Past_Workshops.
<urn:uuid:8dea2922-d223-4321-8173-342502dd1e40>
CC-MAIN-2016-26
http://www.drdobbs.com/security/validating-c-and-c-for-safety-and-securi/184402075
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397797.77/warc/CC-MAIN-20160624154957-00043-ip-10-164-35-72.ec2.internal.warc.gz
en
0.909248
1,896
2.96875
3
The site describes kriging, a sophisticated method of determining the best estimate for each point in a target matrix, based on statistical principles. Kriging is named after a South African engineer, D. G. Krige, who first developed the method. The article is a part of a hints and tips file for Fortner Software's Transform, a part of its Noesis package.

Resource Types: Articles, Topic Tools Miscellaneous
Math Topics: Data Analysis

© 1994- The Math Forum at NCTM. All rights reserved.
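The catalog entry above does not show how a kriged estimate is actually computed, so here is a minimal ordinary-kriging sketch in C. The exponential variogram and its sill and range parameters are assumptions chosen for illustration (the Transform hints-and-tips article may use a different model), and the four sample points and target location are invented.

```c
/* Minimal ordinary-kriging sketch (illustrative only). */
#include <math.h>
#include <stdio.h>

#define N 4                      /* number of scattered sample points */

static double variogram(double h) {
    const double sill = 1.0, range = 2.0;     /* assumed parameters */
    return sill * (1.0 - exp(-h / range));
}

static double dist(const double a[2], const double b[2]) {
    return hypot(a[0] - b[0], a[1] - b[1]);
}

/* Solve the (N+1)x(N+1) kriging system by Gauss-Jordan elimination
   with partial pivoting (the variogram matrix has a zero diagonal). */
static void solve(double A[N + 1][N + 2]) {
    for (int i = 0; i <= N; i++) {
        int p = i;
        for (int k = i + 1; k <= N; k++)
            if (fabs(A[k][i]) > fabs(A[p][i])) p = k;
        for (int j = 0; j <= N + 1; j++) {
            double t = A[i][j]; A[i][j] = A[p][j]; A[p][j] = t;
        }
        for (int k = 0; k <= N; k++) {
            if (k == i) continue;
            double f = A[k][i] / A[i][i];
            for (int j = i; j <= N + 1; j++) A[k][j] -= f * A[i][j];
        }
    }
    for (int i = 0; i <= N; i++) A[i][N + 1] /= A[i][i];
}

int main(void) {
    double xy[N][2]  = {{0, 0}, {1, 0}, {0, 1}, {1, 1}};
    double z[N]      = {1.0, 2.0, 3.0, 4.0};
    double target[2] = {0.5, 0.5};

    /* Build [variogram matrix | 1 ; 1 | 0], augmented with the RHS. */
    double A[N + 1][N + 2];
    for (int i = 0; i < N; i++) {
        for (int j = 0; j < N; j++) A[i][j] = variogram(dist(xy[i], xy[j]));
        A[i][N] = 1.0;
        A[i][N + 1] = variogram(dist(xy[i], target));   /* RHS */
    }
    for (int j = 0; j < N; j++) A[N][j] = 1.0;
    A[N][N] = 0.0;
    A[N][N + 1] = 1.0;

    solve(A);

    double estimate = 0.0;        /* weighted sum of the sample values */
    for (int i = 0; i < N; i++) estimate += A[i][N + 1] * z[i];
    printf("kriged estimate at (0.5, 0.5): %.3f\n", estimate);  /* ~2.5 */
    return 0;
}
```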
<urn:uuid:9a175c08-c4ee-4272-9bc4-aea5ff14925e>
CC-MAIN-2016-26
http://mathforum.org/library/view/9962.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783394937.4/warc/CC-MAIN-20160624154954-00020-ip-10-164-35-72.ec2.internal.warc.gz
en
0.755485
133
2.890625
3
CONCEPT: What is PKU? Read a story about Detective Dawg and his discovery of the meaning of the mysterious letters "PKU". After completing this activity, children will be able to: - state that the part of protein that a person with PKU has trouble with is phenylalanine - state that a person with PKU can keep the phe in their blood at a good level by eating low protein foods - name three consequences of high blood phe levels (i.e., poor growth, poor health, poor concentration, inability to think well, irritability, not feeling well, etc.) Read the story, Detective Dawg and the Package together. Have the children take turns reading the story out loud. Discuss the story with the children, with questions such as: - What does PKU stand for? - What does Phe stand for? - Why can't you eat foods high in phe? - What happens if you eat too much phe? - How can you keep your blood phe levels at the right level? Distribute the worksheets. Allow time for the children to complete them on their own. Discuss the answers. - Hidden Words Activity - Hidden Fruits and Vegetables Activity.(answers: 1-banana, 2-pear, 3-apple, 4-peach, 5-carrot, 6-plum)
<urn:uuid:4711a8af-9457-4f81-b382-7927abd67df0>
CC-MAIN-2016-26
http://depts.washington.edu/pku/management/curriculum/schoolage/detectivedog.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396872.10/warc/CC-MAIN-20160624154956-00047-ip-10-164-35-72.ec2.internal.warc.gz
en
0.939151
297
3.84375
4
Business means those activities which are concerned with the production and distribution of goods and services. Earning a profit and satisfying human wants are the two major objectives of business. Millard has defined organization in the following words:

"A process of dividing the work into convenient tasks or duties, of grouping such duties in the form of posts, of delegating authority to each post, and of appointing qualified staff to be responsible that the work is carried out as planned."

BUSINESS ORGANIZATION :- Business organization is the act of grouping activities into effective co-operation to obtain the objectives of the business.

PRINCIPLES OF ORGANIZATION

Following are the important principles which are related to organization:

1. Division of Work :- Business activities must be divided into several sections. This will make the work easier for the businessman.

2. Right Man for the Right Job :- This principle should be applied, and each section should be placed under the supervision of a qualified and efficient person. This will improve the performance of the section.

3. Flexibility :- There should be flexibility in the structure of the organization, so that changes can be made in order to meet changing circumstances.

4. Division of Responsibility :- There should be a clear division of responsibility. Each person and each section should be clear about their duty.

5. Co-Ordination :- The organization must be arranged in such a way that it coordinates all the departments or sections.

6. Balance in Various Sections :- For a successful business there should be balance among the various sections of the organization.

7. Specialization :- This is also an important principle of organization. At each stage of production a higher degree of specialization should be achieved.

8. Delegation :- This is also a basic principle of organization. The scope of delegation of authority and responsibility must be clear to the workers.

9. Line of Authority :- The business can be run well if there is an unbroken line of authority from the highest level to the lowest.
<urn:uuid:335ae712-2b04-412f-b1cb-244d16340d20>
CC-MAIN-2016-26
http://studypoints.blogspot.com/2011/05/principles-of-organization_8073.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783399106.96/warc/CC-MAIN-20160624154959-00033-ip-10-164-35-72.ec2.internal.warc.gz
en
0.94244
428
3.484375
3
National Forests Consolidation New Mexico’s five national forests are a place of refuge for people and animals alike. These wild and scenic lands sometimes seem a world away, so it may come as a surprise that they contain “inholdings”—unprotected, privately owned tracts of land within the forest boundaries. Development of these properties can restrict public access to the forests and harm the delicate ecosystems that shelter wildlife and protect the quality of our drinking water. The Trust for Public Land is working closely with the U.S. Forest Service to bring these critical lands into public ownership and ensure that New Mexico’s forests are protected and available for public use.
<urn:uuid:6aefae74-58d4-4b43-be49-b6e51333281d>
CC-MAIN-2016-26
http://www.tpl.org/our-work/land-and-water/national-forests-consolidation
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396224.52/warc/CC-MAIN-20160624154956-00014-ip-10-164-35-72.ec2.internal.warc.gz
en
0.936392
137
3.03125
3
Course: Antitrust Law This simulation illustrates how a monopolist can cause harm to consumers and create market inefficiency by withholding socially valuable output and raising prices. For simplicity, we make the following assumptions: In this simulation, you are a firm that manufactures a product at a cost of $2 per unit (including normal profit), and faces the demand curve Q=120-15p, as shown in the graph below. You are to determine the quantity of your output and the price at which you will offer the product. When you click the button, the consequences of your choice will appear. Two cases are presented. In the first case, you are facing perfect competition, so consumers will be able to turn elsewhere if you raise your price above the competitive level. In the second case, you are a monopolist, so you will be able to choose any combination of quantity and price. You will sell all of your output provided that you set your price at or below the market clearing price. (This can be done automatically for you, if you so choose). Try different values until you are satisfied that you have maximized your profit in each case. What are the effects of your decisions on consumers? Copyright © 2001 Andrew Chin. All rights reserved. Republication of all or part of this document, including the software elements thereof, in any form, including electronic, without written consent of the author is prohibited.
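The simulation itself is interactive, but the arithmetic behind it can be reproduced offline. The sketch below uses only the numbers given above (a unit cost of $2 and the demand curve Q = 120 - 15p); the brute-force price scan and the $8 upper bound (the price at which demand falls to zero) are implementation choices of this sketch, not part of the original exercise. Running it shows the monopolist restricting output relative to the competitive outcome, which is exactly the withheld output and inefficiency the exercise asks you to notice.

```c
/* Sketch of the arithmetic behind the monopoly simulation (not the applet).
   Demand: Q = 120 - 15p; constant unit cost of $2 (includes normal profit). */
#include <stdio.h>

static double quantity(double p) { return 120.0 - 15.0 * p; }

int main(void) {
    const double cost = 2.0;
    double best_p = cost, best_profit = 0.0;

    /* Scan candidate prices and keep the profit-maximizing one. */
    for (double p = cost; p <= 8.0; p += 0.01) {
        double q = quantity(p);
        if (q < 0) break;
        double profit = (p - cost) * q;
        if (profit > best_profit) { best_profit = profit; best_p = p; }
    }

    double pc = cost, qc = quantity(pc);          /* competitive outcome */
    double qm = quantity(best_p);                 /* monopoly outcome    */
    double dwl = 0.5 * (best_p - pc) * (qc - qm); /* deadweight loss     */

    printf("competition: p = $%.2f, Q = %.0f, economic profit = $0\n", pc, qc);
    printf("monopoly:    p = $%.2f, Q = %.0f, profit = $%.2f\n",
           best_p, qm, best_profit);
    printf("withheld output = %.0f units, deadweight loss = $%.2f\n",
           qc - qm, dwl);
    return 0;
}
```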
<urn:uuid:7ee81871-085c-4be1-83a4-6d3ff6fbf7e9>
CC-MAIN-2016-26
http://www.unclaw.com/chin/teaching/antitrust/monopoly.htm
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783398516.82/warc/CC-MAIN-20160624154958-00040-ip-10-164-35-72.ec2.internal.warc.gz
en
0.921449
288
3.140625
3
Classical violins created by Cremonese masters, such as Antonio Stradivari and Giuseppe Guarneri Del Gesu, have become the benchmark to which the sound of all violins is compared in terms of expressiveness and projection. By general consensus, no luthier since that time has been able to replicate the sound quality of these classical instruments. The vibration and sound radiation characteristics of a violin are determined by an instrument's geometry and the material properties of the wood. New test methods allow the non-destructive examination of one of the key material properties, the wood density, at the growth-ring level of detail. The densities of five classical and eight modern violins were compared, using computed tomography and specially developed image-processing software. No significant differences were found between the median densities of the modern and the antique violins; however, the density difference between wood grains of early and late growth was significantly smaller in the classical Cremonese violins compared with modern violins, in both the top (spruce) and back (maple) plates (p = 0.028 and 0.008, respectively). The mean density differentials (SE) of the top plates of the modern and classical violins were 274 (26.6) and 183 (11.7) gram/liter. For the back plates, the values were 128 (2.6) and 115 (2.0) gram/liter. These differences in density differentials may reflect similar changes in stiffness distributions, which could directly impact vibrational efficacy or indirectly modify sound radiation via altered damping characteristics. Either of these mechanisms may help explain the acoustical differences between the classical and modern violins.

Citation: Stoel BC, Borman TM (2008) A Comparison of Wood Density between Classical Cremonese and Modern Violins. PLoS ONE 3(7): e2554. doi:10.1371/journal.pone.0002554

Editor: Ananth Grama, Purdue University, United States of America

Received: March 18, 2008; Accepted: May 30, 2008; Published: July 2, 2008

Copyright: © 2008 Stoel, Borman. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Funding: The authors have no support or funding to report.

Competing interests: The authors have declared that no competing interests exist.

For the past 300 years, the violins of Antonio Stradivari (1634–1737) and Giuseppe Guarneri del Gesu (1698–1744) have excelled in molding a many-nuanced sound that seems to better express the intent of composers and musicians. These classical Cremonese violins have become the benchmark to which all violins are compared. Presently, many believe that violin craftsmanship is at its most advanced point since the days of the Cremonese luthiers, and yet instruments produced today do not match the classical instruments in their abilities of expressiveness and projection. It remains unclear what has kept them, for such a long time and through such changing musical needs, as the most sought after.

Research into the production of high-quality sound has focused on a wide range of variables, such as the arching design and contours, plate thickness, and the impact of varnish layers, as well as the various elements of set-up, such as the angle of the neck, the impact of the fingerboard, and the angle of the strings passing over the bridge.
Extensive work has been done searching for the ideal wood properties, although none corresponds exactly to known Cremonese wood properties, as most tested samples have been of significantly higher median density than those found in this study.

Tracheid clusters, produced during annual growth cycles of the tree, create the prominent light/dark grain lines in wood. Early growth wood, created during spring, is primarily responsible for water transport and thus is more porous and less dense than late growth wood, which plays more of a structural support role and consists of much more closely packed tracheids. Wood is an orthotropic material, having differing mechanical properties in three directions: along the grain, across the grain, and slabwise (circumferentially). The differences in density between early and late growth wood may impact the detailed vibrational behavior, either directly or through altered stiffness or damping characteristics due to these variations. The complex three-dimensional shape of the violin body means that vibration within the audio range involves extensional, bending and shear deformations of the wooden plates involving all three directions. Researchers have commented on wood selection preferences based on these differentials, although detailed data are lacking on fine instruments.

Wood density is difficult and invasive to measure directly, as an isolated part of the instrument, wrapped in a waterproof container, must be immersed in water to estimate its volume, and the density is calculated by dividing its weight by this volume. Furthermore, this technique does not provide data on density differentials. Computed tomography (CT) has been used by other researchers, primarily for visual analysis, without fully employing its ability to quantify density or density differentials.

Here we examine the wood density of five classical Cremonese violins, three by Giuseppe Guarneri del Gesu and two by Antonio Stradivari, using quantitative CT densitometry, a rapid and non-invasive technique usually applied in a medical setting. The results from these classical violins were compared to those of eight contemporary violins, made by T. Borman, A.T. King and G. Rabut (Table 1), in order to determine whether objective measurements of material properties can explain the historical consensus on the differences in quality of sound between classical Cremonese and modern violins. At the end of this article we outline our methodology in detail.

Results and Discussion

The violins were scanned at Mount Sinai Hospital in New York City, USA, using a multi-detector row CT scanner (Sensation Cardiac 64, Siemens, Germany). These scans produced 3-dimensional data sets of approximately 1200×512×512 voxels for each violin. A dedicated computer program was developed to automatically detect the superior and inferior surface of the top and back plates. From these surfaces, the local plate thickness, median wood density and density differential were calculated, as discussed below. Additionally, the volume of the sound box (luminal volume) was calculated (Table 1).

From the vertical distance between the superior and inferior surface, a thickness map (0–5 mm) was constructed, which represents the plate thickness at each location. Figures 1A and 1B show the thickness maps of the top and back plates, respectively, with the classical violins displayed on the bottom and the modern violins on the top row of the figures. We have adopted the medical model of anonymity.
These thickness maps clearly show differences between the violins as well as various repairs. The bass bar could be discerned as a slight thickening in the top plate, since the computer program could not perfectly separate the two wood pieces. The antique plates, with the exception of #3, had very little repair, while resolution was such that even the paper labels with the makers' name could be discriminated (see the rectangular thickening in the back plates, near the left c-bout in Figure 1B). Note that the high X-ray absorption by the metal in the fine tuner on the e-string causes image reconstruction artifacts. The Moiré-like pattern is caused by the somewhat limited resolution of the scanner. Loen has done extensive thickness mapping of violins although a comparative analysis of findings is beyond the purview of this article and our maps are included solely on the basis of the intrinsic link between density and thickness. The contemporary violins are presented on the top row, and the antique on the bottom row. The violins have been anonymised. Scales are given in mm. The fourth instrument on the upper row is a viola, which typically is thicker than a violin (image size has been reduced to match that of the violins). The computer program defined an intermediate layer of the violin plates, which was centered exactly between the superior and inferior surfaces. From this intermediate layer, a density map was created, in which the physical density was calculated at each location within the plates. Figures 2A and 2B show the detailed density maps of the top and back plates, respectively. The top and back plates differ in density, as top plates are made from spruce (Picea abies) and the rest of the instrument, including the back plate, is made from maple (Acer Platanoides). Repair work was clearly visible in the top plates, as indicated by the regions of increased density. Hide glue, used exclusively for violin repair, has a higher density than wood and saturates into the adjacent, undamaged material, thus increasing localized density readings. From this density map, the median density was calculated at five standardized regions of interest (ROI); on the left and right side of the upper and lower bout, and one at the centre (see Figure 3); care was taken to avoid regions of repair work. No significant differences were found between the median densities of the modern and the antique violins (two-tailed Mann-Whitney U test: p = 0.884 and 0.143, for the top and back plate, respectively). The contemporary violins are presented on the top row, and the classical Cremonese on the bottom row. The violins have been anonymised. Scales are given in kg/m3. The central violin in the lower row has had more repair work than the other antique violins as evinced by reduced thickness (Figure 1.) and increased densities. The dark areas at the centre of the lower third of all violin tops are metal artifacts from the string ends. The dependency of the measured density on plate thickness was eliminated in the quantitative analysis. Five different ROI's of 100×100 pixels were defined, carefully avoiding repair work. The same areas were taken from the top and back plates. Apart from genetic factors, the overall density of wood is influenced most significantly by the microclimate at the tree's location. A tree growing in a cool area with limited direct solar exposure and little access to water supplies or quality soil will grow slowly and have relatively high overall densities. 
On the other hand, a tree of the same genetic makeup would grow faster, with lower overall densities, if it were located in a more hospitable microclimate, i.e. with adequate solar access, a nutrient-laden soil, sufficient quantities of water, a relatively flat locale, and without traumatic events causing the formation of very dense wood. The former conditions have historically been thought to create high-quality tone wood, although our findings indicate that the latter conditions more closely mimic the densities found in this study. As we did not find significant differences in median density between these particular classical and modern violins, these large-scale factors would not be relevant to the sound quality difference between the classical Cremonese and the modern violins.

A violin produces sound by transforming the energy provided by the musician into perturbations of the air. At lower frequencies, below ∼800 Hz, the majority of these waves are produced by the violin acting as a whole. Above this frequency range, specific areas of the instrument vibrate to produce sound. At the current state of understanding, most of these areas are located on the top plate. For this reason, our discussion is primarily focused on spruce wood.

Even after a violin is built, its wood density could vary, since wood is a hygroscopic material and changing relative humidity (due to temperature as well as water vapor levels) would change the measured density. In this context, however, this is not germane, since the studied violins are never exposed to extreme humidity variations, due to the conditioned-air environments of modern musical settings. As there was little to no difference in the median wood densities between the modern and the classical Cremonese violins, it may be assumed that modern wood selection practices are similar to those employed in the 1700s.

In order to determine the amount of late and early growth grains in the wood of each violin plate, we calculated the histogram of densities from each ROI (Figure 3). Wood density may vary every 0.1 mm, which is beyond the resolution of CT. Therefore, a density value of the early and late growth grains could not be determined definitively. A surrogate grain density measure was defined instead by the spread of the bimodal density distribution. The 90th and the 10th percentile points were considered representative of the density of the late and early growth grains, respectively, and the difference between these percentile points was denoted as the 'density differential'.

In Figure 4, the density differential is plotted against the median density, averaged over all ROIs, which were compared using the two-tailed Mann-Whitney U test. The density differential was significantly lower in the classical Cremonese violins as compared to the modern violins, in both the top and back plate (p = 0.028 and 0.008, respectively), meaning that the densities of early and late growth wood were closer together in the classical violins. The mean density differentials (SE) of the top plates of the modern and classical violins were 274 (26.6) and 183 (11.7) gram/liter, respectively. For the back plates, the values were 128 (2.6) and 115 (2.0) gram/liter, respectively. Figure 4 shows four clear "clusters", separating the old from the new top plates and the old from the new back plates.
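The density-differential definition used above is simple enough to state in code. The sketch below only illustrates the percentile arithmetic on an invented set of ROI density values; the nearest-rank percentile convention and the numbers are assumptions for demonstration, not the authors' Matlab implementation, which ran on full CT data.

```c
/* Sketch of the "density differential" computation described above:
   for one ROI, take the spread between the 90th and 10th percentile
   of the voxel densities as a surrogate for late vs. early growth.
   The ROI values below are invented; the paper's ROIs are 100x100
   pixel regions taken from the CT density maps. */
#include <stdio.h>
#include <stdlib.h>

static int cmp_double(const void *a, const void *b) {
    double d = *(const double *)a - *(const double *)b;
    return (d > 0) - (d < 0);
}

/* Nearest-rank percentile of a sorted array (one of several conventions). */
static double percentile(const double *sorted, int n, double pct) {
    int idx = (int)(pct / 100.0 * (n - 1) + 0.5);
    return sorted[idx];
}

int main(void) {
    /* Toy ROI: densities in g/L spanning early and late growth grains. */
    double roi[] = {330, 350, 360, 370, 390, 400, 520, 540, 560, 580};
    int n = sizeof roi / sizeof roi[0];

    qsort(roi, n, sizeof roi[0], cmp_double);

    double median       = percentile(roi, n, 50.0);
    double differential = percentile(roi, n, 90.0) - percentile(roi, n, 10.0);

    printf("median density:       %.0f g/L\n", median);
    printf("density differential: %.0f g/L (90th - 10th percentile)\n",
           differential);
    return 0;
}
```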
Due to the increased repair work on one of the classical instruments, it was necessary to choose the ROIs carefully so as to reflect the true wood density, not that of the repair. In order to realistically compare wood densities, the inclusion criterion for a modern instrument was that the woods were of known European provenance and that they were in a "natural state", i.e. not treated in any way to alter their material properties. When we noticed the one modern top and back plate of extremely low differential, we contacted the maker, who reviewed his records and found that he had acquired these pieces of wood from a supplier who occasionally treated his wood prior to sale. When questioned, the supplier could not be certain whether these particular pieces were treated or not. If these plates of unknown origin were removed from the analysis, the differences in the density differential of the top plates between the old and new would be even more striking. In our test pool of spruce tone wood samples we found a similar pattern, i.e. new wood having median densities in the same general range and density differentials much higher than those of the Cremonese violins tested.

Spruce density may vary within a tree by as much as 5–8% due to its vertical location within the trunk. Within the same tree, density is typically lowest between 3 and 6 meters of height. From 3 meters down to ground level there is a slight increase, and above 6 meters of tree height density increases in a fairly linear continuum up to the apical bud. Since the classical median densities are at the very low end of those found in spruce, this region would provide the closest approximation within individual samples. Additionally, the distance from the pith (centre of the tree) to the perimeter is a well-identified source of density variation within the same tree, and in most species, including Picea abies, density typically decreases with distance outwards from the pith. This decrease in density has been found to be due to a reduction in early wood density as well as a reduction in late wood proportion, and may amount to 15–20% density variation from pith to perimeter. Taken together, the north/south (sample height) and east/west (pith to perimeter) localized impacts can amount to an almost 25% density variation within the same tree.

Widths of the individual growth rings are yet another factor influencing wood density that has been well documented to date, although disagreement exists on the quality of this relationship. Growth Ring Width (GRW) in Norway spruce has been shown to have a negative correlation with average density, and therefore a non-linear relationship, with greater reductions in basic density when the ring widths decrease to 2–3 mm and lesser overall reductions with increasingly wider ring widths. Giordano, on the other hand, found a relatively linear relationship for these same parameters. Another study, specifically targeted at violin tone wood, did not find a linear relationship, and their experimental data pool of 300 samples showed no apparent pattern in density distributions vs. GRW. Their sample ring spacing was, however, relatively limited, varying only from 0.5 mm to 2 mm, whereas Giordano extended this range to 4 mm (the maximum ring spacing usually found in violins is 2.5 mm to 3 mm; in violas 3 mm to possibly 4 mm; and in cellos this can reach 5 mm).
Saranpää and Giordano concur that GRW can account for a min/max density variability of ∼40%, although they arrived at their respective results in different manners. The current wood biology literature delves very little into density differential, with the exception of Koubaa, who used X-ray densitometry to redefine Mork's index (the transition from early wood to late wood). The density differentials found in this study may contribute to the generally recognized superior sound production of classical Cremonese violins. Within the violin making tradition there have been many reported ‘secrets’ of the Cremonese makers, although usually with little or no supporting documentation. Sporadically, it is claimed that the wood treatment referred to as ‘ponding’, whereby wood is submerged in stream water (to facilitate transportation or to alter the properties of the wood intentionally), is responsible for the classical Cremonese sound. It has been documented that ponding does alter wood properties significantly, by causing decomposition of various wood elements depending on the particular bacteria or fungi introduced into the wood. Although data on density alteration are not currently available, it is reasonable to assume that this degradation would result in lowered densities; how this impacts the density differential would be dependent on the specific treatment. It has been shown that the wood of the classical Cremonese instruments was likely not ponded. However, this does not rule out bacterial or fungal attack as a means of altering new wood to more closely match the material properties of the Cremonese wood. As mentioned earlier, one back and one top plate of the new instruments may have been treated, and if this were indeed the case, the treatment used by the supplier would have been ponding. Another technique, referred to as “stewing”, has also been mentioned, whereby wood is boiled in different solutions to alter its density, although there are no published data on what this process actually does to the wood. Bucur has shown that time plays a role in altering wood properties through decomposition and loss of hemicellulose, thereby resulting in lower density and, a priori, an alteration of the differential, which may also explain our results. Fuming with nitric acid or ammonia has also been used over the years by instrument makers, and it is reasonable to assume that the destructive properties of these agents would lower the density and change the differential, depending on which grains, early or late, are most affected. Many other possibilities have been proposed over time, but these are the only ones directly related to density that we are aware of. In summary, our results clearly document basic material property differences between the woods used by the classical Cremonese and contemporary makers. Although at this point we can do no more than speculate as to the cause, these findings may facilitate replicating the tonal qualities of these ancient instruments. Materials and Methods As CT densitometry depends on a wide range of variables, settings were optimized for the highest sensitivity in distinguishing different wood densities. We analyzed the histograms from four test plates (two top plates and two back plates) and selected the settings that produced bimodal histograms with the highest separation. The final image acquisition protocol was defined for a multi-detector row CT scanner: 80 kVp, effective mAs of 53, collimation 32×0.6 mm, 1 sec. 
rotation time, 512×512 matrix, 0.6 mm slice thickness and 0.3 mm increment, with a B50s reconstruction filter. Volumetric analysis was performed with PulmoCMS (Medis Specials BV, Leiden, the Netherlands), and a separate computer program for wood densitometry was developed on a Matlab platform (Matlab, version R2007a, The MathWorks, USA), using its Image Processing Toolbox. The superior and inferior contours were detected in each axial slice by a minimal-cost algorithm using a Sobel edge detector. By stacking all contours, a curved multi-planar reformatted (MPR) image was constructed. No user interaction was needed in the analyses of the violins. Constancy of the CT scanner was monitored using nine test pieces of maple and spruce. The standard deviations of the differences were 7.5 kg/m³ (1.8%) and 10.9 kg/m³ (4.8%) for the median density and density differential, respectively. Due to edge enhancement during CT image reconstruction, density values were found to be dependent on plate thickness (as illustrated by comparing Figures 1 and 2 in the main text). Therefore, the presented density values were corrected for thickness, based on measurements from a separate set of 10 wood samples with thicknesses ranging from 2 to 6 mm. The measurements were corrected based on a mathematical model, in which the dependency of the median density on plate thickness was estimated (see Figure 5A). The correction was effective, since subsequently no correlation was found between the final density values and the thickness of the plates from all regions of interest (Figures 5B and 5C). As there was no significant difference in plate thickness between the classical and modern violins (Mann-Whitney U test: p = 0.770 and 0.188 for the top and back plates, respectively), plate thickness was not a confounding factor in studying the differences in wood density. (A) The relation was obtained from the central layer within five spruce and five maple test plates. The curved lines show the mathematical models fitted to these data. (B) The thickness-density relation from the individual ROIs in the violins. (C) The thickness-density relation after correction. To test the accuracy of the thickness measurements of the plates, the same wood samples were used as in the correction for the thickness dependency. The thicknesses measured from CT were compared to micrometer measurements on the actual pieces. A small systematic difference of 0.1 mm was observed, which is a fraction of the dimension of one voxel (0.4×0.6×0.6 mm), meaning that plate thicknesses were slightly overestimated by a constant amount, independent of plate thickness. We thank the owners of the classical and modern violins for making their instruments available for this study, Mount Sinai Hospital in New York City, Maynard High, Ph.D., and Jeffrey Doy for their radiological support, Aracelis Perez, CT technician at Mount Sinai Hospital, for her patience and dedication, Jeff Loen and Nora Cooper for their editorial assistance, and Prof. J.H.C. Reiber, Prof. I. Watt, Evan Davis, Ph.D., and Prof. Jim Woodhouse for their critical discussions and review of the manuscript. Conceived and designed the experiments: BS TB. Performed the experiments: BS TB. Analyzed the data: BS TB. Contributed reagents/materials/analysis tools: BS TB. Wrote the paper: BS TB. - 1. Sacconi SF (1979) The “Secrets” of Stradivari. Cremona, Italy: Libreria Del Convegno. - 2. 
Loen JS, Borman T, King AT (2005) A path through the woods; thickness and density of Guarneri del Gesu's violins. The Strad 116: 68–75. - 3. Schelling JC (2007) On the physical effects of violin varnish, III: Estimation of acoustical effects. CAS Journal 8: 17–24. - 4. Schleske M (1998) On the acoustical properties of violin varnish. CAS Journal 3: 27–43. - 5. Wegst UGK (2006) Wood for Sound. American Journal of Botany 93: 1439–1448. - 6. McIntyre ME, Woodhouse J (1988) On measuring the elastic and damping constants of orthotropic sheet materials. Acta Metallurgica 36: 1397–1416. - 7. Haines D (1979) On Musical Instrument Wood, Part I. CAS Newsletter 31: 23–32. - 8. Haines D (1980) On Musical Instrument Wood, Part II: Surface finishes, plywood, light and water exposure. CAS Newsletter 33: 19–23. - 9. Bucur V (2006) Acoustics of Wood. CRC Press. - 10. Butterfield BG (2003) Wood anatomy in relation to wood quality. In: Barnett JR, Jeronimidis G, editors. Wood quality and its biological basis. Oxford, UK: Blackwell. pp. 30–52. - 11. Zink-Sharp A (2003) Mechanical Properties of Wood. In: Barnett JR, Jeronimidis G, editors. Wood Quality and Its Biological Basis. Oxfordshire, UK: Blackwell Publishing. pp. 197–209. - 12. Schleske M (2002) Empirical Tools in Contemporary Violin Making: Part II. Psychoacoustic Analysis and Use of Acoustical Tools. CAS Journal 4: 50–64. - 13. Gattoni F, Melgara C, Sicola C, Uslenghi CM (1999) [Unusual application of computerized tomography: the study of musical instruments]. Radiol Med (Torino) 97: 170–173. - 14. Sirr SA, Waddle JR (1997) CT analysis of bowed stringed instruments. Radiology 203: 801–805. - 15. Skolnick AA (1997) CT scans probe secrets of Italian masters' violins. JAMA 278: 2128–2130. - 16. Stoel BC, Stolk J (2004) Optimization and Standardization of Lung Densitometry in the Assessment of Pulmonary Emphysema. Invest Radiol 39: 681–688. - 17. Loen JS (2005) Thickness Graduation Maps: Classic Violins, Violas and Cellos. Kenmore. - 18. Saranpää P (2003) Wood density and growth. In: Barnett JR, Jeronimidis G, editors. Wood quality and its biological basis. London & Boca Raton, FL: Blackwell & CRC Press. pp. 87–117. - 19. Koubaa A, Zhang SYT, Makni S (2002) Defining the transition from early wood to latewood in black spruce based on intra-ring wood density profiles from X-ray densitometry. Ann For Sci 59: 511–518. - 20. Wilhelmsson L, Arlinger J, Spångberg K, Lundqvist S-O, Hedenber Ö, et al. (2002) Models for Predicting Wood Properties in Stems of Picea abies and Pinus sylvestris in Sweden. Scandinavian Journal of Forest Research 17: 330–350. - 21. Giordano G (1971) Tecnologia del Legno. Torino, Italy: UTET. - 22. Di Bella A, Piasentini RZ (2002) Violin Top Wood Qualification: Influence of Growth Ring Distance on Acoustical Properties of Red Spruce. CAS Journal 4: 22–25. - 23. Eriksson K-EL, Blanchette RA, Ander P (1990) Microbial and Enzymatic Degradation of Wood and Wood Components. New York: Springer-Verlag. - 24. Barlow CY, Woodhouse J (1990) Bordered pits in spruce from old Italian violins. Journal of Microscopy 160: 203–211.
<urn:uuid:88e8b37f-d237-404a-a54d-62357cb374c8>
CC-MAIN-2016-26
http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0002554
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783402479.21/warc/CC-MAIN-20160624155002-00177-ip-10-164-35-72.ec2.internal.warc.gz
en
0.942059
5,995
2.765625
3
IN RECENT years, the City of Boston has taken laudable steps to curb exposure to secondhand smoke. Smoking is now outlawed in outdoor workplaces, public housing projects, and tot lots. But it’s harder to defend a broad new effort to ban smoking in all city parks. The proposal, which would impose a $250 fine on smokers, has been passed by the City Council and endorsed by Mayor Menino. The Boston Parks Commission will likely take it up at the end of the month, following a trend among cities nationwide — including New York, which banned smoking in parks and beaches in 2011. The impetus stems from evidence about the dangers of secondhand smoke: A 2006 Surgeon General’s report outlined those health risks, and a Stanford University report, from the following year, showed that standing just downwind from a burning cigarette outdoors can produce exposure levels as high as those in a smoky bar. These were good justifications for Boston’s two-year-old ban on smoking in tot lots, where children are likely to congregate densely. But the parks, at large, are open to a broader population. They’re adjacent to sidewalks, where smoking is legal. They’re exposed to fumes from buses, cars, and motorcycles. The 2007 Stanford study found that chemical concentrations of cigarette smoke dissipate quickly, in outside air, once cigarettes are extinguished, and that the health risks drop dramatically with distance. In addition, the ban would be nearly impossible to enforce, or enforce fairly. Boston’s park rangers largely patrol the Emerald Necklace. City police officers are unlikely to respond, in due time, to complaints of a lit cigarette. If fines are levied, they’re likely to fall disproportionately on the low-income or homeless residents who smoke in greater numbers. They’d also fall on people who are trying to kick the habit, since the ban also covers tobacco-free vapors from e-cigarettes. Some city officials acknowledge as much but still hope the ban will send a strong message and encourage self-enforcement. But a human solution, involving education and common courtesy, doesn’t require a legal stick. Smokers need to be aware of the dangers of secondhand smoke, and sensitive to people downwind of them. Public health campaigns and the old-fashioned evil eye could be useful weapons in the cause. Indeed, in the years since the Stanford study was released, smoking rates in Boston have declined — partly because of successful cessation programs from the city and state. It’s great to encourage safe behavior. It’s not always effective to legislate it.
<urn:uuid:4d42c95e-826e-46af-8d24-f8e6578ef3a2>
CC-MAIN-2016-26
http://www.bostonglobe.com/opinion/editorials/2013/12/11/boston-smoking-ban-parks-try-lighter-touch/dQ9Nt6x1zo63hkXwe6aWTJ/story.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395160.19/warc/CC-MAIN-20160624154955-00092-ip-10-164-35-72.ec2.internal.warc.gz
en
0.95592
539
2.5625
3
Programming for Fun and Profit - Using the Card.dll Good Abstractions are an Abstruse Art The caveat divid et impera—divide and conquer—is our guiding principle. A big part of what divide and conquer means relative to programming is that if we divide a problem into the correct abstractions then we can conquer the problem. In fact, taking this concept a bit further, if we divide a problem into good abstractions the problem is significantly easier to conquer. Finding good abstractions is difficult to do however. Yet, by practicing this abstruse art in a well-understood problem domain—for example, the notion of playing cards—we can become more practiced at finding good abstractions early. Applying the notion of divide and conquer to our card drawing tools, we can quickly resolve on some reasonably good abstractions. Listing 4: Using enumerations makes the notion of suit and face-value constrained to a specific set of named values and more expressive to the human reader. Imports System Public Enum Face Ace Two Three Four Five Six Seven Eight Nine Ten Jack Queen King End Enum Public Enum Suit Diamond Club Heart Spade End Enum Now when we talk about the value of a card we can do so in the domain of the problem: cards have a suite and a face value. Clearly in the domain of playing cards is the notion of a card. A single card class would be a good place to add a constructor, initializing the face value and suit, and a good place to add our paint methods. Listing 5 demonstrates the new Card class with the aforementioned features. Listing 5: The Card class. Imports System Imports System.Drawing Public Class Card Private FCardFace As Face Private FCardSuit As Suit #Region "External methods and related fields" Private Shared initialized As Boolean = False Private Shared width As Integer = 0 Private Shared height As Integer = 0 Private Declare Function cdtInit Lib "cards.dll" ( _ ByRef width As Integer, ByRef height As Integer) As Boolean Private Declare Function cdtDrawExt Lib "cards.dll" ( _ ByVal hdc As IntPtr, ByVal x As Integer, ByVal y As Integer, _ ByVal dx As Integer, ByVal dy As Integer, ByVal card As Integer, _ ByVal suit As Integer, ByVal color As Long) As Boolean Private Declare Sub cdtTerm Lib "cards.dll" () #End Region Public Shared Sub Init() If (initialized) Then Return initialized = True cdtInit(width, height) End Sub Public Shared Sub Deinit() If (Not initialized) Then Return initialized = False cdtTerm() End Sub Public Sub New(ByVal cardSuit As Suit, ByVal cardFace As Face) Init() FCardSuit = cardSuit FCardFace = cardFace End Sub Public Property CardSuit() As Suit Get Return FCardSuit End Get Set(ByVal Value As Suit) FCardSuit = Value End Set End Property Public Sub PaintGraphicFace(ByVal g As Graphics, ByVal x As Integer, _ ByVal y As Integer) Dim hdc As IntPtr = g.GetHdc() Try Dim Card As Integer = CType(Me.FCardFace, Integer) * 4 + FCardSuit cdtDrawExt(hdc, x, y, MyClass.width, MyClass.height, Card, 0, 0) Finally g.ReleaseHdc(hdc) End Try End Sub Public Sub PaintGraphicBack(ByVal g As Graphics, ByVal x As Integer, _ ByVal y As Integer) Dim hdc As IntPtr = g.GetHdc() Try cdtDrawExt(hdc, x, y, MyClass.width, MyClass.height, 61, 0, 0) Finally g.ReleaseHdc(hdc) End Try End Sub End Class From the listing you can see that we made the API methods private. This eliminates consumers of card from calling them directly. 
The shared Init and Deinit methods make a valiant effort to ensure cdtInit is called just once, but in this implementation the Deinit method will need to be called by the consumer. (We could implement IDisposable, have the constructor increment a counter of live cards, and have the Dispose method decrement that counter, calling Deinit when the counter reaches 0.) Finally, we wrap the cards.dll API methods in wrapper methods to ensure that the device context resource is managed correctly every time. (Notice that we eliminated the width and height arguments (dx and dy, respectively) of the two paint methods. The width and height are fixed by the cdtInit method, so we might as well use this information.) The result is that our form is radically simplified and we can reuse Card, Suit, and Face in any Windows solution we'd like. Here is the revised form code (see Listing 6). Listing 6: A Form using the new Card class. Public Class Main Inherits System.Windows.Forms.Form [ Windows Form Designer generated code ] Private Ace As Card = New Card(Suit.Spade, Face.Ace) Protected Overrides Sub OnPaint(ByVal e As System.Windows.Forms.PaintEventArgs) Ace.PaintGraphicFace(e.Graphics, 10, 10) End Sub Private Sub Main_Load(ByVal sender As System.Object, _ ByVal e As System.EventArgs) Handles MyBase.Load Card.Init() End Sub Private Sub Main_Closing(ByVal sender As Object, _ ByVal e As System.ComponentModel.CancelEventArgs) Handles MyBase.Closing Card.Deinit() End Sub End Class After creating the Card class, all we need to do is create an instance of a card and paint it in the OnPaint event handler. Thus far we have discovered the easy abstractions. Suit, Face, and Card are pretty easy to find. The hard part is finding as many of the abstractions as we can relative to our problem domain. For example, it is reasonable that we might want a collection of cards, referred to as a deck. However, a pinochle deck has no cards lower than 9, while many other games need all of the cards. Furthermore, some games like Blackjack might use multiple decks, and cards like the Ace may have more than one value depending on context: an Ace can be used to represent either 1 or 11. Now we have moved into the realm of moderate complexity. What if we want one set of classes to represent all games? What about rules? How do we codify rules so that the rules object can change depending on the game selected? What if we want to support Internet play, console play, or multi-player games? Our challenges become significantly greater. The objective is to figure out what your real goals are and to code to support those goals. If it is possible to support known objectives and permit future growth (for instance, supporting multiple games and one player while leaving room for multi-player in the future), then you are likely to exceed your customer's expectations.
<urn:uuid:531afaed-4852-445b-b09d-dbf0aee08f4e>
CC-MAIN-2016-26
http://www.developer.com/net/net/article.php/11087_3303671_2/Programming-for-Fun-and-Profit---Using-the-Carddll.htm
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395560.14/warc/CC-MAIN-20160624154955-00171-ip-10-164-35-72.ec2.internal.warc.gz
en
0.846856
1,507
2.921875
3
- freely available Remote Sensing 2013, 5(10), 5006-5039; doi:10.3390/rs5105006 Abstract: Imaging using lightweight, unmanned airborne vehicles (UAVs) is one of the most rapidly developing fields in remote sensing technology. The new, tunable, Fabry-Perot interferometer-based (FPI) spectral camera, which weighs less than 700 g, makes it possible to collect spectrometric image blocks with stereoscopic overlaps using light-weight UAV platforms. This new technology is highly relevant, because it opens up new possibilities for measuring and monitoring the environment, which is becoming increasingly important for many environmental challenges. Our objectives were to investigate the processing and use of this new type of image data in precision agriculture. We developed the entire processing chain from raw images up to georeferenced reflectance images, digital surface models and biomass estimates. The processing integrates photogrammetric and quantitative remote sensing approaches. We carried out an empirical assessment using FPI spectral imagery collected at an agricultural wheat test site in the summer of 2012. Poor weather conditions during the campaign complicated the data processing, but this is one of the challenges that are faced in operational applications. The results indicated that the camera performed consistently and that the data processing was consistent, as well. During the agricultural experiments, promising results were obtained for biomass estimation when the spectral data was used and when an appropriate radiometric correction was applied to the data. Our results showed that the new FPI technology has a great potential in precision agriculture and indicated many possible future research topics. Modern airborne imaging technology based on unmanned airborne vehicles (UAVs) offers unprecedented possibilities for measuring our environment. For many applications, UAV-based airborne methods offer the possibility for cost-efficient data collection with the desired spatial and temporal resolutions. An important advantage of UAV-based technology is that the remote sensing data can be collected even under poor imaging conditions, that is, under cloud cover, which makes it truly operational in a wide range of environmental measuring applications. We focus here on lightweight systems, which is one of the most rapidly growing fields in UAV technology. The systems are quite competitive in local area applications and especially if repetitive data collection or a rapid response is needed. An appropriate sensor is a fundamental component of a UAV imaging system. The first operational, civil, lightweight UAV imaging systems typically used commercial video cameras or customer still cameras operating in selected three wide-bandwidth bands in red, green, blue and/or near-infrared spectral regions [1–3]. The recent sensor developments tailored for operation from UAVs offer enhanced possibilities for remote sensing applications in terms of better image quality, multi-spectral, hyper-spectral and thermal imaging [4–10] and laser scanning [11–13]. One interesting new sensor is a lightweight spectral camera developed by the VTT Technical Research Center of Finland (VTT). The camera is based on a piezo-actuated, Fabry-Perot interferometer (FPI) with an adjustable air gap [6,14]. This technology makes it possible to manufacture a lightweight spectral imager that can provide flexibly selectable spectral bands in a wavelength range of 400–1,000 nm. 
Furthermore, because the sensor produces images in a frame format, 3D information can be extracted if the images are collected with stereoscopic overlaps. In comparison to pushbroom imaging [7,9], the advantages of frame imaging include the possibility to collect image blocks with stereoscopic overlaps and the geometric and radiometric constraints provided by the rigid rectangular image geometry and multiple overlapping images. We think that this is important in particular for UAV applications, which typically utilize images collected under dynamic, vibrating and turbulent conditions. Conventional photogrammetric and remote sensing processing methods are not directly applicable for typical, small-format UAV imagery, because they have been developed for more stable data and images with a much larger spatial extent than what can be obtained with typical UAV imaging systems. With UAV-based, small-format frame imaging, a large number—hundreds or even thousands—of overlapping images are needed to cover the desired object area. Systems are often operated under suboptimal conditions, such as below full or partial cloud cover. Despite the challenging conditions, the images must be processed accurately so that object characteristics can be interpreted on a quantitative geometric and radiometric basis using the data. Precision agriculture is one of the potential applications for hyperspectral UAV imaging [1,2,4,6,9,15–18]. In precision agriculture, the major objectives are to enable efficient use of resources, protection of the environment and documentation of applied management treatments by applying machine guidance and site-specific seeding, fertilization and plant protection. The expectation is that UAVs might provide an efficient remote sensing tool for these tasks . A review by Zhang and Kovacs showed that research is needed on many topics in order to develop efficient UAV-based methods for precision agriculture. In this study, we will demonstrate the use of the FPI spectral camera in a biomass estimation process for wheat crops; biomass is one of the central biophysical parameters to be estimated in precision agriculture . The objectives of this investigation were to investigate a complete processing methodology for the FPI spectral imagery, as well as to demonstrate its potential in a biomass estimation process for precision agriculture. We depict a method for FPI image data processing in Section 2. We describe the test setup used for the empirical investigation in Section 3. We present the empirical results in Section 4 and discuss them in more detail in Section 5. 2. A Method for Processing FPI Spectral Data Cubes 2.1. An FPI-Based Spectral Camera The FPI-based spectral camera developed by the VTT provides a new way to collect spectrometric image blocks. The imager is based on the use of multiple orders of the Fabry-Perot interferometer together with the different spectral sensitivities of the red, green and blue pixels of the image sensor. With this arrangement, it is possible to capture three wavelength bands with a single exposure. When the FPI is placed in front of the sensor, the spectral sensitivity of each pixel is a function of the interferometer air gap. By changing the air gap, it is possible to acquire a new set of wavelengths. With smaller air gaps, it is also possible to capture only one or two wavelengths in each image. Separate short-pass and long-pass filters are needed to cut out unwanted transmissions at unused orders of the Fabry-Perot interferometer. 
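As an illustration of the multiple-order principle described above, the short sketch below lists which Fabry-Perot transmission orders fall inside a 500–900 nm band-pass filter for a given air gap, using the standard interference condition that peaks occur at wavelengths of 2·d·cos(θ)/m for integer orders m; the 1,500 nm air gap used here is an assumed example value, not a documented setting of the camera.

    import numpy as np

    def transmitted_peaks(air_gap_nm, theta_deg=0.0, band=(500.0, 900.0)):
        # FPI peak wavelengths (nm) that pass the band-pass filter for the given ray angle.
        peaks = []
        for m in range(1, 15):
            lam = 2.0 * air_gap_nm * np.cos(np.radians(theta_deg)) / m
            if band[0] <= lam <= band[1]:
                peaks.append((m, round(lam, 1)))
        return peaks

    # With an assumed 1,500 nm gap, three orders pass at nadir (about 750, 600 and 500 nm);
    # the RGB pixel sensitivities are then used to separate them into three bands.
    print(transmitted_peaks(1500.0))
    # At a 10 degree ray angle the peaks shift towards shorter wavelengths,
    # which is the spectral smile effect discussed in Section 2.3.2.
    print(transmitted_peaks(1500.0, theta_deg=10.0))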
During a flight, a predefined sequence of air gap values is applied using the FPI camera to reconstruct the spectrum for each pixel in the image (see more details in Section 2.3.3). Often, 24 different air gap values are used, and by these means, it is possible to collect 24 freely selectable spectral bands in a single flight, while the rest of the bands (0–48) are not independent. The desired spectral bands can be selected with a spectral step of 1 nm. This technology provides spectral data cubes with a rectangular image format, but each band in the data cube exposed to a different air gap value has a slightly different position and orientation. The principles of the FPI spectral camera have been described by Saari et al. and Mäkynen et al. . Results presented by Nackaerts et al. demonstrated the feasibility of the sensor concept. Honkavaara et al. developed the first photogrammetric processing line for the FPI spectral camera, and Pölönen et al. carried out the first performance assessments in precision agriculture using the 2011 prototype. The 2012 prototype was used in this investigation. It is equipped with custom optics with a focal length of 10.9 mm and an f-number of less than 3.0. The camera has a CMOSIS CMV4000 Complementary Metal Oxide Semiconductor (CMOS) red, green and blue (RGB) image sensor with an electronic shutter; the infrared cut filter has been removed from the image sensor. The sensor has a 2,048 × 2,048 pixel resolution, a pixel size of 5.5 μm and a radiometric resolution of 12 bits. In practical applications, the sensor is used in the two-times binned mode, while only part of the sensor area is used. This provides an image size of 1,024 × 648 pixels with a pixel size of 11 μm. The field of view (FOV) is ±18° in the flight direction, ±27° in the cross-flight direction and ±31° at the format corner. Application-based filters can be used, for example 500–900, 450–700, 600–1,000 or 400–500 nm filters. The spectral resolution range is 10–40 nm at the full width at half maximum (FWHM), and it is dependent on the FPI air gap value, as well as the filter selection. Table 1 shows the differences between the 2011 prototype and the 2012 prototype. Many of the parameters were improved for the 2012 prototype. The most significant improvement was the improvement of the f-number in order to improve the signal-to-noise ratio (SNR). This was achieved by improving the lens system and allowing for greater FPI ray angles (10° in comparison to 4°). The blurring of images was reduced by changing the rolling shutter to an electronic shutter. The sensor setup is shown in Figure 1. The entire imaging system includes the FPI spectral camera, a 32 GB compact flash memory card, irradiance sensors for measuring downwelling and upwelling irradiance (the irradiance sensor for measuring upwelling irradiance is not operational in the current setup), a GPS receiver and a lithium polymer (LiPo) battery. The system weighs less than 700 g. With this setup, more than 1,000 data cubes, each with up to 48 bands, can be collected in the two-times binned mode within a single flight. 2.2. FPI Spectral Camera Data Processing The FPI spectral camera data can be processed in a similar manner as small-format frame imagery, but some sensor-specific processing is also required. Figure 2 shows a general data processing chain for FPI spectral image data. 
The processing chain includes data collection, FPI spectral data cube generation, image orientation, digital surface model (DSM) calculations, radiometric model calculations and output product generation. When developing the data processing chain for the FPI spectral camera, our objective has been to integrate the sensor-specific processing steps into our existing processing line based on commercially available photogrammetric and remote sensing software. Data collection constitutes the first phase in the imaging chain. The central parameters that need to be set are the spectral sensitivities of the bands and the integration time. The spectral sensitivities are selected based on the requirements of the application. The integration time should be selected so that the images are not overexposed in relation to the bright objects and that there are good dynamics for the dark objects. Furthermore, the flight speed and flying altitude will impact the data quality (Section 2.3.3). The pre-processing phase requires that the sensor imagery be accurately corrected radiometrically based on laboratory calibrations [7,8,14]. Correcting the spectral smile and mismatch of different spectral bands requires sensor-specific processing, which are described in more detail in Section 2.3. The objective of geometric processing is to obtain non-distorted 3D data for a desired coordinate system. The geometric processing steps involve determining the image orientations and DSM generation. An important recent advancement in photogrammetric processing is the new generation of image matching methods for DSM measurement, which is an important advantage for the image analysis process when using UAVs [20,22–27]. The objective of radiometric processing is to provide information about the object’s reflectivity . With this process, all of the disturbances related to the imaging system, atmospheric influences and view/illumination-related factors need to be compensated for. We describe our approaches to radiometric processing in Section 2.4. The outputs resulting from the above-mentioned process include georeferenced reflectance image mosaics and 3D products, such as point clouds, point clouds with reflectivity information, DSMs, object models and stereomodels for visual evaluations . 2.3. FPI Spectral Data Cube Generation A crucial step in the data processing chain is the construction of radiometrically and geometrically consistent spectral data cubes. The process includes three major phases: radiometric image corrections based on laboratory calibrations, spectral smile corrections and band matching. 2.3.1. Radiometric Correction Based on Laboratory Calibration The first steps in the data processing chain involve dark signal compensation and applying the sensor calibration information to the images with the usual corrections for photon response nonuniformity (PRNU) (including the CMOS array nonuniformity and lens falloff corrections). These parameters are determined in the VTT’s calibration laboratory. Dark signal compensation is carried out using a dark image collected before the image data collection. A Bayer-matrix reconstruction is carried out to obtain three-band images. The process has been described by Mäkynen et al. , and it will not be emphasized in this investigation. 2.3.2. Correction of the Spectral Smile Ideally, the optics of the FPI spectral camera are designed so that light rays go through the FPI as a collimated beam for a specific pixel of the image. 
In order to improve the f-number, this requirement was compromised, and a maximum FPI ray angle of 10° was allowed. This will cause a shift in the central wavelength of the FPI's spectral peak (a spectral smile effect). For the most part, the peak wavelength (λ0) is linearly dependent on the cosine of the ray angle (θ) at the image plane, i.e. approximately λ0(θ) = λ0(0)·cos(θ) (Equation (1)). There are different approaches to handling the spectral smile:
1. Correcting the images: our assumption is that smile-corrected images can be calculated so that the corrected spectra are resampled from two spectrally (with a difference in peak wavelength preferably less than 10 nm) and temporally (with a spatial displacement of less than 20 pixels) adjacent image bands.
2. Using the central areas of the images: when images are collected with a minimum of 60% forward and side overlaps and the most nadir parts of the images are used, the smile effect is less than 5 nm and can be ignored in most applications (when the FWHM is 10–40 nm).
3. Resampling a “super spectrum” for each object point from the overlapping images, providing variable central wavelengths; the entire spectrum can then be utilized in the applications.
For typical remote sensing applications and software, approaches 1 and 2 are the most functional, because they can be carried out in a separate step during the preprocessing phase. In order to utilize the entire extent of the images, approach 1 or 3 is required. The VTT has developed a method for correcting the spectral smile by using two spatially and spectrally adjacent bands (approach 1). A correlation-based matching using shifts in the row and column directions is first carried out on the images to accurately align two of the bands. The corrected image is then formed by interpolating the desired wavelength from the adjacent bands. This simple approach is considered sufficient, because the bands that need to be matched are spectrally and spatially close to one another. 2.3.3. Band Matching The bands of a single data cube image do not overlap perfectly, due to the imaging principle of the FPI spectral camera (Section 2.1). This makes it challenging to determine the orientation of the bands, because, in principle, all of the bands and images have a different orientation. Exposing and transferring a single band to a synchronous dynamic random-access memory (SDRAM) takes approximately 75 ms, which is the time difference between two temporally adjacent bands. Recording a single cube, for example, with 24 air gaps, takes a total of 1,800 ms. The resulting spatial shift is dependent on the flight speed of the UAV, and the shift in pixel coordinates is determined by the flying height of the system. For example, with a flight speed of 5 m·s−1 and a continuous data collection mode (not stopping during image exposure), the horizontal transition of the UAV in the flight direction between temporally adjacent bands is 0.375 m, whereas it is 9 m for the entire data cube; with a flying altitude of 150 m (ground sample distance (GSD), 15 cm), the spatial displacements are thus 2.5 and 60 pixels, respectively. The UAV is also swinging and vibrating during the time of the data cube exposure, which generates nondeterministic positional and rotational mismatches for the bands. Due to these small random movements, the a priori information has to be improved by using data-based analysis. Two different approaches for processing the FPI image data are as follows:
1. Determining the orientations of and georeferencing the individual bands separately. 
With this approach, there are number-of-bands (typically 20–48) image blocks that need to be processed. We used this approach in our investigation with the 2011 camera prototype, where we processed five bands [20,29].
2. Sampling the bands of individual data cubes in relation to the geometry of a reference band. Orientations are determined for the image block with the reference band images, and this orientation information is then applied to all other bands. We studied this approach for this investigation.
The principal aspect of approach 2 involves using one of the bands as the reference band and matching all of the other bands to it. In order to transform a band to match the geometry of the reference band, a geometric image transformation must be carried out. The parameters of the transformation are determined by utilizing tie points that are automatically measured between the bands via image matching. There are several challenges in this matching process. A scene provides different digital numbers (DNs) in different spectral bands according to the spectral responses of the different objects, which complicates the image matching process. Another challenge is that, with dynamic UAV imaging, the overlaps between the different bands can be quite small. With 3D objects, a further complication is that a simple 2D modeling of the object's geometry is not likely to provide accurate results; with homogeneous objects, the images might not contain sufficient features for matching, and repetitive features, such as those seen in crops, can cause false matches. Our current implementation combines the above two approaches. We use a few reference bands and then match several adjacent bands to these bands. By using several reference bands, we try to improve the accuracy and robustness of the image transformations. For example, we can select reference bands for each major wavelength region (blue, green, red, near-infrared (NIR)), or we could select reference bands to maximize the spatial adjacency of the bands to be matched together. We also avoid georeferencing all layers separately, which could slow down the processing in general and be too laborious in the case of challenging objects that require manual interactions. For the first implementation, we employed feature-based matching (FBM) for the band pairs using point features extracted via the Förstner operator. Finally, an image transformation is carried out using an appropriate geometric model; in our system, an affine or a projective model is used. 2.4. Radiometric Correction of Frame Image Block Data Our experience is that UAV remote sensing data often need to be collected under sub-optimal illumination conditions, such as varying degrees of cloudiness. The possibility of collecting data below cloud cover and under difficult conditions is also one of the major advantages of UAV technology. We have developed two approaches for the radiometric processing of UAV image blocks in order to produce homogeneous data from non-homogeneous input data. The first approach is an image-information-based radiometric block adjustment method [20,29]. The basic principle of the approach is to use the gray values (digital numbers (DNs)) of the radiometric tie points in the overlapping images as observations and to determine the parameters of the radiometric model indirectly via the least squares principle. 
Currently, we model the gray value (DN) of a radiometric tie point in an image as a function of the point's reflectance, image-wise relative correction parameters, absolute reflectance transformation parameters and a multiplicative anisotropy (BRDF) factor (Equation (2)). The impact of view/illumination geometry on the object reflectance is an important challenge in passive imaging. Our approach is to use a model-based, multiplicative, anisotropy factor; the reflectance, Rjk(θi,θr,φ), of an object point, k, in image j is then expressed in terms of its nadir reflectance, Rk(0,0,φ), and this anisotropy factor (Equations (3) and (4)). In the radiometric model, the absolute reflectance transformation parameters eliminate atmospheric influences. The relative additive and multiplicative parameters eliminate differences between the images that are mainly due to illumination changes and sensor instability. The BRDF model takes care of the view/illumination geometry-related issues. The numbers of parameters are as follows: absolute calibration: 2; relative calibration: (number of images − 1) × 2; BRDF model: 2 (if the same model is used for the entire object); nadir reflectance of radiometric tie points: number of radiometric tie points. The parameters that are used depend on the conditions during imaging. This model includes many simplifications, but it can be extended using physical parameters. For this approach, a minimum of two reflectance reference targets is needed. With the current implementation, the parameters are determined separately for each band. During the image correction phase, Rk(0,0,φ) is calculated based on Equations (2) and (4). A second alternative is to utilize the irradiance measurements collected during the flight campaign, as described by Hakala et al. In this case, the relative adjustment of the images is obtained by selecting one reference image and calculating relative multiplicative correction factors, Cj(λ), for the other images with respect to it, based on the measured irradiances (Equation (5)). 3. Empirical Investigation 3.1. Test Area An empirical campaign was carried out at the MTT Agrifood Research Finland (MTT) agricultural test site in Vihti (60°25′21″N, 24°22′28″E) (Figure 3). The test area, a 76 m by 385 m (2.9 ha) patch of land, has a rather flat topography with a terrain height variation of 11 m and maximum slopes of 3.5°. The area consisted of test plots that contained both wheat and barley. The seed and fertilizer amounts were varied to produce a large range of variation in the vegetation; the applied values were chosen based on farming knowledge as the average input for the existing conditions. The seeds were applied at rates ranging from 0.5 to 1.5 times the standard seeding density (seeds per m2) using a Junkkari combi drill. The amount of nitrogen fertilizer varied in the range of 0 kg·ha−1 to 180 kg·ha−1 for the different varieties of wheat and barley, which consisted of wheat Anniina, wheat Kruunu, barley Voitto and barley Saana. Every fourth fertilization stripe contained spraying tracks. Figure 3 shows the spring fertilization plan and seeding plan for the agricultural test area. The image on the left gives information about the spring fertilization plan, while the image on the right shows the seeding plan. Altogether, there were 50 vegetation samples with a size of 1 m × 1 m (Figure 4a (left)). The growing season measurements included the dry biomass, the wetness percentage and the nitrogen content. The samples had quite evenly distributed biomass variation in the range of 500–2,700 kg·ha−1; in addition, we took three samples in non-vegetated areas (Figure 4b). We used three 5 m by 5 m reflectance reference tarps (P05, P20 and P30, with nominal reflectances of 0.05, 0.2 and 0.3, respectively) to determine the transformation between the DNs and the reflectance. 
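The transformation between DNs and reflectance referred to above is, in essence, an empirical line fit over the reference tarps; the sketch below shows the idea for a single band with hypothetical tarp DNs (in the actual study, the absolute parameters aabs and babs were estimated after the radiometric block adjustment, cf. Section 3.3).

    import numpy as np

    tarp_reflectance = np.array([0.05, 0.20, 0.30])   # nominal reflectances of P05, P20, P30
    tarp_dn = np.array([0.08, 0.27, 0.39])            # hypothetical image DNs (scaled to 0-1)

    # Least squares fit of reflectance = a_abs * DN + b_abs for one band
    a_abs, b_abs = np.polyfit(tarp_dn, tarp_reflectance, 1)

    def dn_to_reflectance(dn):
        return a_abs * dn + b_abs

    print(f"a_abs = {a_abs:.3f}, b_abs = {b_abs:.3f}, "
          f"R(DN = 0.20) = {dn_to_reflectance(0.20):.3f}")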
We used the reflectance values that were measured in the laboratory using the ASD Field Spec Pro FR spectroradiometer (Analytical Spectral Devices Inc., Boulder, CO, USA), and the measurements were normalized to a calibrated white, 30 cm by 30 cm Labsphere Spectralon reference standard. In situ reflectance measurements were not carried out, due to extremely varying imaging conditions. There were altogether 11 targeted XYZ ground control points (GCPs), the coordinates of which were measured using the virtual reference station real time kinematic GPS (VRS-RTK) method (with a relative accuracy of 2 cm) and 13 natural XYZ GCPs, which were measured based on a previous UAV orthophoto and DSM (with a relative accuracy of 20 cm) (Figure 4). We had a reference DSM with a point interval of 10 cm and an approximated height accuracy of 20 cm. It was produced by automatic image matching using higher spatial resolution UAV imagery (GSD 5 cm; green, red and NIR bands with a spectral bandwidth of 50 nm at the FWHM), which was collected on the same day as the FPI spectral camera imagery. This DSM was used as the reference to evaluate the height accuracy of the DSM measured using the FPI spectral camera imagery. We used the national airborne laser scanning (ALS) DSM as the bare ground season information. By calculating the difference between the DSM provided using the imagery with vegetation and the ALS DSM, we could estimate the height of the crop. The minimum point density of the national ALS data is half a point per square meter, while the elevation accuracy of the points in well-defined surfaces is 15 cm and the horizontal accuracy is 60 cm, making this data an appropriate reference surface for an agricultural application. 3.2. Flight Campaigns An image block was collected with the FPI spectral camera using a single-rotor UAV helicopter based on Mikado Logo 600 mechanics with a 5 kg payload capacity . A preprogrammed flight path was flown autonomously using autopilot DJI ACE Waypoint. The FPI spectral camera was fixed to the bottom of the UAV (Figure 5a). The FPI spectral camera was equipped with a 500–900 nm filter, which is considered appropriate for an agricultural application. The camera was operated in free running mode and took spectral data cubes at the given intervals; the integration time was 5 ms. A data cube with 24 different air gap values and 42 bands was collected within a time frame of 1,800 ms. The flight was carried out at a flying altitude of 140 m, which provided a GSD of 14.4 cm; the flying speed was 3.5 m·s−1. The block used in this investigation consisted of five image strips and a total of 80 images; the forward and side overlaps were 78% and 67%, respectively (Figure 4a). We also used the central flight line (strip 3) for several detailed studies. The image footprint was 92 m by 145 m. The campaign was carried out between 10:39 and 10:50 in the morning, local time (Coordinated Universal Time (UTC) + 3 hours). The solar elevation and azimuth angles were 43° and 125°, respectively. The wind speed was 5 m·s−1, and the temperature was 19 °C. During the campaign, the illumination conditions were extremely poor with fluctuating levels of cloudiness (Figure 5). The non-optimal illumination conditions are a realistic situation for precision agriculture, where the timing of the data collection is critical; in the summer of 2012, there were no better imaging conditions during the time window for precision agriculture. We carried out in situ irradiance measurements during the flight. 
In the UAV, we used an irradiance sensor based on the Intersil ISL29004 photodetector to measure wide-bandwidth (400–1,000 nm) relative irradiance. On the ground, we measured spectral irradiance (W·m−2·nm−1) at a spectral range of 350–2,500 nm in the central part of the test area using an ASD spectroradiometer that was equipped with 180° cosine collector irradiance optics for viewing the entire hemisphere of the sky. The changes in illumination in the images are clearly visible in Figure 5b; the level of irradiance was more than two-times higher at end of the flight than during the central part of the flight. We utilized the irradiance measurements in a relative mode. We provide details on these measurements in another publication . 3.3. Data Processing We processed the data as described in Section 2. First, radiometric pre-processing and spectral smile corrections were carried out. It was possible to generate 30 spectral bands with the smile correction (Table 2; Figure 6). The bandwidths were 18–44 nm at FWHM. Table 2 shows the central peak wavelengths and FWHMs for the raw data cubes and for the final corrected data cubes. For the dataset, the movement of the sensor in the flight direction when collecting a single spectral data cube at a recording time of 1.8 s was approximately 6 m (40 pixels), while the spatial displacement between temporally adjacent bands was 0.26 m (1.7 pixels). During the band matching phase, we selected bands in each principle wavelength area (band 7: 535.5 nm, 24.9 nm; band 16: 606.2 nm, 44.0 nm; band 29: 787.5 nm, 32.1 nm) as reference bands. The temporal differences between the bands matched the reference bands were ranging from −0.4 to 0.6 s, while the computational horizontal movement in the flight direction was less than 1.8 m (12 pixels). We used an affine transformation and nearest neighbor interpolation for the geometric image transformation done as part of the band matching process. The reference bands were selected so that they had a good SNR and the temporal difference between the bands that needed to be matched was as small as possible; the reference bands were collected using different air gap values. The geometric processing phase consisted of determining the orientations of the images and calculating the DSMs. We carried out this phase using a Bae Systems SOCET SET photogrammetric workstation [20,26]. The self-calibrating bundle block adjustment method was applied to determine the orientations of three reference bands. Because the direct orientation information provided by the UAV system was of a poor quality, some interaction was required to determine the approximate orientations of the images; the subsequent tie point measurement was fully automatic. The GCPs were measured interactively. The DSM was generated via image matching using Next Generation Automated Terrain Extraction software (NGATE), which is a part of the SOCET SET software; this process has been described in more detail by Rosnell and Honkavaara . The DSMs were generated using a triangulated irregular network (TIN) format with the NGATE software with a point interval of 20 cm. This format attempts to provide terrain height data at approximately 20 cm point intervals, but it does not interpolate the points in cases of failure in the final DSM. We used different radiometric models during the image processing phase, as shown in Table 3. Relative correction parameters are crucial for the dataset, due to variations in solar illumination. 
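One simple way to turn the logged irradiance into relative correction parameters of this kind is sketched below; it assumes, as a working hypothesis, that the multiplicative factor for image j is the ratio of the reference image's irradiance to the irradiance recorded for image j (in the spirit of the Cj(λ) factors of Section 2.4), and the irradiance values themselves are hypothetical relative readings.

    import numpy as np

    irradiance = {"img_001": 0.82, "img_002": 0.61, "img_003": 1.15}   # hypothetical relative readings
    reference = "img_001"

    def relative_factor(image_id):
        # Multiplicative correction scaling image_id to the illumination level of the reference image.
        return irradiance[reference] / irradiance[image_id]

    def correct_dn(dn_array, image_id):
        return relative_factor(image_id) * np.asarray(dn_array, dtype=float)

    print({k: round(relative_factor(k), 3) for k in irradiance})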
In principle, BRDF parameters are not necessary because of cloudy weather, but we tested this option anyway. For the radiometric block adjustment, we used a grid of radiometric tie points with a point interval of 10 m, while the average DNs (to be used in Equation (2)) were calculated using 4.5 m by 4.5 m image windows. The DNs were scaled to the range of 0–1 during the processing phase. After making the radiometric block adjustment, the absolute calibration parameters (aabs, babs) were determined using the empirical line method. In the quality assessment, we used the average variation coefficients (standard deviation of gray values divided by the average gray value) for each radiometric tie point having multiple observations as a measure of the homogeneity of the data. Details on the radiometric correction method are provided in several publications [20,29,33]. We calculated orthophoto mosaics with a GSD of 0.20 m using the image orientations, DSM and radiometric model. We obtained the reflectance values using the most nadir method; hence, we used the DN from the image where the projection of the ground point was closest to the image nadir point to calculate reflectance (Equations (2) and (4)). 3.4. Biomass Estimation Using Spectrometric Data The motivation for conducting these field tests stemmed from precision agriculture. The idea is to produce precise maps for nitrogen fertilization [16,17]. If we know the biomass and nitrogen content in the field, then, based on this knowledge, as well as information about the soil structure and harvest history of the field, it is possible to generate fertilization plans. We evaluated the performance of the dataset for biomass estimation using a k-nearest neighbor (KNN, k = 9) estimator, which is a supervised machine learning technique . We calculated the spectral features as an average reflectance in 1 m × 1 m areas, and we used all of the spectral channels from the spectral data cubes as a feature space. The biomass estimate was obtained as an average of the k spectrally nearest samples. We used the leave-one-out cross validation technique to assess the performance of the KNN estimator. In this method, a single observation from the original sample is used as the validation data, and the remaining observations are used as the training data. This process is repeated such that each observation in the sample (a total of 53) is used once as the validation data. Finally, we calculated a normalized root-mean-square error (NRMSE) using the individual errors. 4.1. Image Quality Visually, the image quality appeared to be good. Examples of the image quality can be seen in Figure 7. This figure was divided into four parts: Parts 1–3 (from left) are individual bands 7 (535.5 nm, 24.9 nm), 16 (606.2 nm, 44.0 nm) and 29 (787.5 nm, 32.1 nm), respectively, while the last one part is the three-band, un-matched band composite (7, 16 and 29), which shows the mismatch between the bands in the original data. In comparison to the datasets obtained using the 2011 prototype [20,29], the improvement in the f-number from <7 to <3 and the change from a rolling shutter to an electronic shutter is apparent in the image quality in the form of reduced noise and reduced blurring. We estimated the signal-to-noise ratio (SNR) as a ratio of the average signal to the standard deviation in a small image window when using a tarpaulin with a nominal reflectance of 0.3 (Figure 8). 
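The window-based SNR estimate described above amounts to the ratio of the mean to the standard deviation inside a small image window on the 0.3-reflectance tarpaulin; a minimal sketch follows, with hypothetical DN values chosen to give a ratio of the order reported here.

    import numpy as np

    def window_snr(window):
        window = np.asarray(window, dtype=float)
        return window.mean() / window.std(ddof=1)

    rng = np.random.default_rng(1)
    tarp_window = rng.normal(loc=0.39, scale=0.005, size=(15, 15))   # hypothetical 15 x 15 pixel window
    print(f"estimated SNR ~ {window_snr(tarp_window):.0f}")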
This is not an accurate estimate of the SNR, because it was also influenced by the nonuniformity of the tarpaulin; still, it can be used as an indicative value, because the relationships between bands are realistic. The SNR was typically around 80, but a reduction appeared in some of the green bands (wavelength 550–600 nm), as well as in the NIR bands (wavelength >800 nm). The behavior was as expected. In the NIR band, the quantum efficiency of the CMOS image sensor decreases as the wavelengths increases. The decreasing SNR in some green bands was a result of the low transmission at those bands, due to the edge filter, which limits the spectral bandwidth and decreases the transmission. 4.2. Band Matching The assessment of the band matching results indicated that the matching was successful. The numbers of tie points were 10–806, mostly >100, and the median was 276. The standard deviations of the shift parameters were 1.5, 0.9 and 0.8 pixels for the green, red and NIR reference bands, respectively. The average shifts of the bands matched to the green, red and NIR reference bands are shown in Figure 9. They were between five and −15 pixels in the flight direction and less than three pixels in the direction perpendicular to the flight direction. The maximum shifts were up to 25 pixels in the flight direction and 43 pixels in the cross-flight direction. The results presented in Figure 9 show that in the flight direction, the measured shift values were quite close to the expected values that were calculated based on the time difference between the reference band and the matched band. In the direction perpendicular to the flight, the shifts are shown as a function of the time difference between the bands. The average values showed minor systematic drift, which was larger as the time difference between the bands became larger; this is likely due to the possible minor drift of the camera’s x-axis with respect to the flying direction. These results show that the band matching results are consistent with our expectations. Our conclusion is that the method developed here for band matching was appropriate for the experimental testing done in this investigation. It can be improved upon in many ways if required for future applications. Efficient, automated quality control procedures can be developed by evaluating the matching and transformation statistics and the number of successful tie points and by comparing the estimated transformation parameters to the expected values. The matching methods can be further improved, as well. 4.3. Geometric Processing We determined the orientations of the reference bands in the green, red and near-infrared spectral regions (bands 7, 16 and 29). Block statistics are provided in Table 4. The standard deviations of the unit weight were 0.5–0.7 pixels, and they were best for the near-infrared band (band 29); the precision estimates for the orientation parameters also indicated better results for band 29. The better results for band 29 might be due to the higher intensity values in the NIR band (that provided a better SNR), which could provide better quality for the automatic tie point measurements (the fields are very dark on the green and red bands). On the other hand, the root-mean-square errors (RMSEs) of the GCPs indicated the poorest performance for band 29, which was probably due to the difficulties in measuring the GCPs for this band (the likely reason for this is that the properties of the paint used for the GCPs were not ideal in the NIR spectral region). 
The RMSEs for the targeted GCPs were on the level of one pixel, and they were expected to be a representative estimate of the georeferencing accuracy. For the self-calibrating bundle block adjustment, we estimated the principal point of autocollimation (x0, y0) and the radial distortion parameter k1 (the radial distortion correction dr for a radial distance r from the image center is dr = k1·r³) (Table 5). The k1 parameter values were similar in the different bands, which was consistent with our expectations. This property is favorable for the band matching process. The estimated values for the principal point of autocollimation varied between the bands; this was likely due to the non-optimality of the block for this task. The coordinates of the principal point correlated with the coordinates of the perspective center. More detailed investigations of the camera calibration should be carried out in a laboratory using a suitable imaging geometry. The quality of the block adjustment was in accordance with our expectations. These results also indicated that the orientations of the individual bands can be determined using state-of-the-art photogrammetric methods at commercial photogrammetric workstations. Figure 10a (left) shows a DSM obtained via automatic image matching using band 29 of the FPI image block. The matching quality was poor for the tractor tracks, which appeared either as matching failures (missing points) or as outliers (high points), but otherwise we obtained a relatively good point cloud. We also tested DSM generation using the green and red reference bands, but the matching quality was poor, likely due to the lower SNR over the dark object (the fields are dark in the green and red bands). The reference DSM extracted using the higher spatial resolution image block collected on the same day is shown in Figure 10b (left). This DSM has a higher quality, but some minor matching failures in the tractor tracks appeared in it as well. The height RMSE of the FPI spectral camera DSM was 35 cm at the GCPs; likewise, the comparison of the FPI DSM to the reference DSM at the vegetation sample locations gave an RMSE of 35 cm. We evaluated the potential for using vegetation heights taken from the DSM in the biomass estimation. We obtained the estimate for the crop height by calculating the difference between the FPI spectral camera DSM and a DSM based on airborne laser scanning (ALS DSM), which was collected in springtime on bare ground (Figure 10a (middle)). The estimated height of the vegetation was between zero and 1.4 m; the average height was 0.74 m, and the standard deviation was 0.36 m. The linear regression between the dry biomass values and the vegetation height did not show any correlation (the R² of the regression was zero) (Figure 10a (right)). With the more accurate reference DSM (Figure 10b), the corresponding values were as follows: a vegetation height of 0.44 to 0.99 m, an average height of 0.76 m and a standard deviation of 0.12 m; the R² of the linear regression of dry biomass and vegetation height was 0.36. With both DSMs, the vegetation height was slightly overestimated. It was possible to visually identify the areas without vegetation using both DSMs. Using the reference DSM, the areas with 0% fertilization could also be detected visually; this was not possible with the FPI DSM.
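The crop-height and regression computation described above can be sketched as follows. The arrays are synthetic placeholders rather than the study data, and clipping negative height differences to zero is an added assumption.

```python
# Crop height as the difference between a crop-surface DSM and a bare-ground
# (ALS) DSM, followed by a linear regression of dry biomass against height.
# Synthetic placeholder data; clipping negative heights to zero is an assumption.
import numpy as np

def vegetation_height(dsm_crop: np.ndarray, dsm_ground: np.ndarray) -> np.ndarray:
    return np.clip(dsm_crop - dsm_ground, 0.0, None)

def r_squared(x: np.ndarray, y: np.ndarray) -> float:
    slope, intercept = np.polyfit(x, y, 1)
    ss_res = np.sum((y - (slope * x + intercept)) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return float(1.0 - ss_res / ss_tot)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    ground = rng.normal(120.0, 0.5, size=(50, 50))            # bare-ground DSM (m)
    crop = ground + rng.uniform(0.0, 1.2, size=(50, 50))      # crop-surface DSM (m)
    heights = vegetation_height(crop, ground).ravel()[:53]    # 53 sample plots
    biomass = 3000.0 * heights + rng.normal(0.0, 600.0, heights.size)  # kg/ha
    print(f"R^2 of dry biomass vs. height ~ {r_squared(heights, biomass):.2f}")
```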
The FPI DSM was not of sufficient quality to reveal differences in the vegetation heights, which was an expected result given the relatively large height deviations in the DSM. Both DSMs are expected to be reliable only within the area surrounded by the GCPs. The reasons for the poorer DSM quality obtained with the FPI spectral camera could be the lower SNR and the narrower bands, as well as the fact that the matching software and parameters might not have been ideal for this type of data; we tested several parameter options in the NGATE software, but they did not improve the results. One possible improvement could be to use more suitable band combinations in the matching phase. Furthermore, the matching algorithm could be tuned further, and different matching methods should be studied. The quality of the DSM can be considered promising and sufficient for image mosaic generation if the matching failures can be interpolated and filtered to a sufficient level of quality.

4.3.3. Image Mosaics

We calculated the orthophoto mosaics using the orientation information and the DSM. The RMSE when using the GCPs as checkpoints was less than 0.2 m for the x and y coordinates.

4.4. Radiometric Processing

We conducted radiometric processing for the entire block and for strip 3. In addition to the new results for strip 3, we reproduced the relevant parts of the results from a previous study with the full block in order to evaluate the impact of radiometric correction on the final application. Significant radiometric differences appeared within the image mosaics, due to the extremely variable illumination conditions. This was especially apparent for the full block (Figure 11b), while strip 3 was of a more uniform quality (Figure 11a). The radiometric correction greatly improved the homogeneity of the data (Figure 11c–f). The detail of a corrected mosaic (Figure 11g) shows the good quality of the data. We used the average coefficient of variation of the radiometric tie points as the indicator of the radiometric homogeneity of the block (Figure 12) (the correction methods are described in Table 3). The coefficients of variation for the full block (Figure 12a) were 0.14–0.18 when a radiometric correction was not used and 0.05–0.12 when a radiometric correction was applied. The best results, 0.05–0.08, were obtained with the relative block adjustment (BA: relA). For strip 3 (Figure 12b), the values were 0.06–0.08 when a radiometric block adjustment was not performed. With radiometric correction, we obtained the lowest coefficients of variation, approximately 0.02, for the NIR bands; the coefficients of variation were better than 0.03 for the green bands and better than 0.04 for the red bands (BA: relB, BRDF). The better values for the single strip are most likely due to the fact that less variability needed to be adjusted within a single strip. The correction based on the spectral irradiance measurement on the ground (ground) appeared to provide better homogeneity than the UAV-based correction (uav). The image-based correction provided the best homogeneity, but there were some drift effects, which appeared as a slight tendency of the block to brighten in the north-east direction. The results from the previous study also showed that all of the correction approaches—UAV irradiance (uav), ground irradiance (ground) and radiometric block adjustment (BA: relA)—provided relatively similar estimates of the illumination variations.
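A minimal sketch of the homogeneity indicator used above is given below: for each radiometric tie point observed in several overlapping images, the coefficient of variation of its grey values is computed, and the values are averaged over all tie points. The synthetic tie-point observations are assumptions for illustration only.

```python
# Average coefficient of variation over radiometric tie points: std/mean of the
# grey values of each multi-observed tie point, averaged over all tie points.
# The tie-point observations below are synthetic, for illustration only.
import numpy as np

def average_cv(tie_point_dns: list) -> float:
    """tie_point_dns: one 1-D array of DN observations per multi-observed tie point."""
    cvs = [obs.std(ddof=1) / obs.mean() for obs in tie_point_dns if obs.size > 1]
    return float(np.mean(cvs))

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    # 200 synthetic tie points, each observed in 3-8 overlapping images.
    points = [rng.normal(0.25, 0.02, size=int(rng.integers(3, 9))) for _ in range(200)]
    print(f"average coefficient of variation ~ {average_cv(points):.3f}")
```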
Six sample spectral profiles are shown in Figure 13 for the different radiometric processing cases. The reflectances were taken as the median value in a 1 m by 1 m image window. The greatest differences between the samples appeared in the reflectance values in the NIR spectral region. In all cases, the NIR reflectance of the samples with low biomass values (<1,000 kg·ha−1) was clearly lower than the NIR reflectance of the samples with higher biomass. With the radiometric corrections, the high (>2,500 kg·ha−1) and medium (1,500–2,000 kg·ha−1) biomass samples could be separated (Figure 13b–e), which was not possible in the data without radiometric corrections (Figure 13a). There were also some concerns with the spectral profiles. The green reflectance peaks were not distinct, especially in the cases without radiometric correction (Figure 13a), which indicated that the radiometric correction was not perfectly accurate in these cases. Furthermore, the reflectance values were low in the green and red spectral regions, typically lower than those of the dark reflectance target used in the radiometric calibration (Figure 13f); this means that the reflectance in the green and red spectral regions was extrapolated. Radiometric calibration is very challenging at low reflectance, because small errors in the calibration cause relatively large errors in the reflectance values. We did not have reference spectra of the vegetation samples, so it was not possible to evaluate the absolute reflectance accuracy. The reflectance spectra of the three reflectance tarpaulins indicated that the spectra measured from the images fitted well with the reference spectra measured in the laboratory (Figure 13f). The results of the radiometric correction were promising. In the future, it would be of interest to develop an approach that integrates the image-based information and the external irradiance measurements. Special attention must be paid to situations in which the illumination conditions change between the flight lines from sunny to diffuse (sun behind a cloud). Furthermore, a means for correcting the shadows and topographic effects should be integrated into the method.

4.5. Biomass Estimation Using Spectrometric Information from the FPI Spectral Camera

We tested the performance of the reflectance output data from the FPI spectral camera in a biomass estimation process using a KNN estimator. Furthermore, we calculated the Normalized Difference Vegetation Index, NDVI = (NIR815.7 − R648)/(NIR815.7 + R648), using the 815.7 nm and 648.0 nm bands. Figure 14 shows the biomass estimation and NDVI statistics for the different radiometric processing options. In the biomass estimate maps, the areas without plants were clearly visible in all cases (leftmost plots in Figure 14). The strips with 0% fertilization could be identified in most cases. The continuous pattern of lower biomass in the middle of the area in the corrected dataset corresponds to a slight downhill slope of the terrain, which flattens towards the north and carries the valuable nutrients in that direction. When a radiometric correction was not carried out, the radiometric differences caused by the changes in illumination were clearly visible in the mosaic and distorted the biomass estimates (Figure 14a).
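The estimation set-up behind the maps in Figure 14 can be sketched as follows: NDVI from two reflectance bands, and a k = 9 nearest-neighbour biomass estimate evaluated with leave-one-out cross-validation and an NRMSE. This is a minimal illustration on synthetic data; in particular, normalising the RMSE by the mean of the reference biomass is an assumption, since the text does not state which normalisation was used.

```python
# Sketch of the estimation set-up: NDVI from two reflectance bands, and a
# k-nearest-neighbour (k = 9) biomass estimate with leave-one-out
# cross-validation and a normalised RMSE. Data are synthetic; normalising by
# the mean reference biomass is an assumption.
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    return (nir - red) / (nir + red)

def knn_loo_nrmse(features: np.ndarray, biomass: np.ndarray, k: int = 9) -> float:
    n = len(biomass)
    errors = np.empty(n)
    for i in range(n):
        dists = np.linalg.norm(features - features[i], axis=1)
        dists[i] = np.inf                          # leave sample i out
        neighbours = np.argsort(dists)[:k]
        errors[i] = biomass[neighbours].mean() - biomass[i]
    rmse = np.sqrt(np.mean(errors ** 2))
    return float(rmse / biomass.mean())

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    spectra = rng.uniform(0.02, 0.6, size=(53, 30))        # 53 plots, 30 bands
    biomass = 4000.0 * spectra[:, -5] + rng.normal(0.0, 300.0, 53)
    print(f"NDVI example: {ndvi(np.array([0.5]), np.array([0.05]))[0]:.2f}")
    print(f"NRMSE ~ {100 * knn_loo_nrmse(spectra, biomass):.1f} %")
```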
In the case of the correction based on the irradiance measurement in the UAV (Figure 14c), some strip-related artifacts (high biomass values in strips 2 and 4) appeared; we also identified these inaccuracies in the correction parameters in our recent study and concluded that they were likely due to shadowing of the irradiance sensor in image strips 1, 3 and 5 (solvable in future campaigns). The largest differences between the estimates based on the ground irradiance measurement (Figure 14d) and on the image-based correction (Figure 14e) appeared in the south-west part of the mosaic, where the image-based method gave lower biomass estimates, and in the north-east part of the mosaic, where it gave higher estimates. The results of the two image-based corrections were quite similar (Figure 14b,e). It is possible that the correction based on image information contains some drift, which appeared as a brightening of the mosaic towards the north-east. For strip 3, the NRMSEs of the biomass estimates were 26.2% for the uncorrected data and 20.4% for the radiometrically corrected data (Figure 14, second column). For the full block, the NRMSEs were 24.4%, 17.3% and 15.5% for the radiometric corrections based on the wide-bandwidth irradiance measured in the UAV, the spectral irradiance measured on the ground and the relative block adjustment, respectively. The radiometric correction of the full block data with the relative multiplicative correction (BA: relA) thus provided the highest accuracy, while the correction based on the ground irradiance measurement (ground) was very close to this result. The results clearly showed the great impact of radiometric correction on the biomass estimation. The resulting biomass estimate maps were feasible when appropriate corrections were applied, and the numerical values supported the visual results. The KNN estimator is not ideal for estimating continuous variables, but we considered it feasible for our purpose of validating the data processing; our training data were also suitable for this estimator (Figure 4b). The NDVI maps appeared to be realistic, showing the areas with and without vegetation (Figure 14, third column). The R² values of the linear regression of the dry biomass and NDVI were 0.57–0.58 for most of the cases (Figure 14, right column). Strip 3 with radiometric correction (BA: relB, BRDF) was an exception (Figure 14b); it gave a lower R² of 0.38, and the NDVI map appeared biased, showing relatively high NDVI values in the non-vegetated areas in the upper part of the strip. These observations were consistent with the biomass estimation results and analysis. Developing quantitative, lightweight UAV remote sensing applications is becoming ever more important, because this technology is increasingly needed in various environmental measurement and monitoring applications. In this study, we presented a complete processing chain for a novel, lightweight, spectrometric imaging technology based on a Fabry-Perot interferometer (FPI) in an agricultural application. In our previous investigations [20,21,29], we performed the first set of analyses with the FPI spectral camera 2011 prototype using five selected spectral bands. In this investigation, we processed the data using the improved 2012 prototype sensor. First, we developed a method to process all of the bands.
We investigated the orientation process and calculated digital surface models (DSMs) by automatic image matching using the FPI spectral camera data. We also evaluated the impacts of different radiometric correction approaches on a supervised biomass estimation process. It was important to carry out the entire processing chain in order to identify the major bottlenecks and develop the methods further. The results were quite promising; they indicated that the current sensor is already operational and that the processing can be carried out quantitatively and can also be highly automated. All sensor-specific processing steps could be implemented as independent steps in our existing processing environment based on commercial photogrammetric and remote sensing software; this is an important issue for companies planning to use the FPI spectral camera in their operational work. The challenging part of processing the FPI data is that the bands in the spectral data cube are collected with a small time delay. Our approach was to select a few reference bands and determine the exterior orientations for them. We transformed the rest of the bands to match the geometry of the reference bands and then applied the orientation parameters of the reference bands to these bands. While the band matching method proved to be operational, it can still be improved further. In this dataset, the estimated error was on the level of 1–2 pixels (15–30 cm). This level of accuracy is sufficient for most remote sensing applications, and we expect further improvements in the future from improving sensors and processing methods. Difficulties or reduced accuracy are to be expected for objects with extensive height differences, if large spatial differences exist between the bands (a fast-moving vehicle) and in cases where the objects are homogeneous (water areas); carefully designed flight parameters and band matching processes are needed to obtain good accuracy. The optimal approach for producing the best accuracy could be to georeference the individual bands separately; this is a software issue, and the software used in this investigation was not ideal for this approach. The FPI sensor allows many alternative ways of processing the data, but in this study, we concentrated on methods that could easily be integrated with our existing photogrammetric and remote sensing environment. Geometric processing of frame imaging sensors is a quite mature technology, even though the methods are constantly being improved, for instance, to increase the reliability of processing very small-format sensors operating in a highly dynamic environment, such as UAV imaging. Our processing required a certain amount of interaction during the block initialization phase; approaches for improving this include a better direct georeferencing solution [11,38] or applying recently presented ordering methods to determine the approximate orientations of the images, such as the structure-from-motion technique (e.g., [39–41]). Because rigorous integrated global navigation satellite system and inertial measurement unit (GNSS/IMU) orientation systems for direct georeferencing are still quite expensive and heavy for light, low-cost systems, photogrammetry-based methods should be developed so that they can operate at an optimal level. For image matching, the SNR is a critical image quality indicator.
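Returning to the band-matching step discussed above: in this study it was based on automatically measured tie points and transformation estimation in photogrammetric software. As an informal alternative illustration of the underlying idea, the sketch below estimates an integer-pixel shift between two bands with FFT-based phase correlation on synthetic data; this is not the method used in the study, and the data and shift are assumptions.

```python
# Illustrative (not the study's) inter-band shift estimation: FFT-based phase
# correlation returning the integer-pixel displacement of `moving` relative to
# `reference`. Synthetic data with a known shift.
import numpy as np

def estimate_shift(reference: np.ndarray, moving: np.ndarray) -> tuple:
    cross_power = np.fft.fft2(moving) * np.conj(np.fft.fft2(reference))
    cross_power /= np.abs(cross_power) + 1e-12           # keep phase only
    correlation = np.abs(np.fft.ifft2(cross_power))
    peak = np.array(np.unravel_index(np.argmax(correlation), correlation.shape))
    shape = np.array(reference.shape)
    peak[peak > shape // 2] -= shape[peak > shape // 2]   # wrap to signed shifts
    return int(peak[0]), int(peak[1])

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    reference = rng.random((256, 256))
    moving = np.roll(reference, shift=(5, -3), axis=(0, 1))  # known displacement
    print(estimate_shift(reference, moving))                  # -> (5, -3)
```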
Accurate GNSS data will also improve the georeferencing accuracy and eliminate or reduce the need for GCPs, which has been demonstrated in previous investigations [26,39]. While our geometric accuracy results were quite good in comparison to recent results obtained using hyperspectral sensors, they were not as good as those obtained using higher spatial resolution, wide-bandwidth sensors [26,39]. The quality of the point clouds extracted from the FPI spectral camera imagery was poorer than what many recently published results have indicated [20,23–27,29]. In those studies, wide-bandwidth, high dynamic range, high spatial resolution sensors were used. Because the spatial resolution of the spectral data is expected to be lower than what can be obtained with commercial wide-bandwidth small-format cameras, a functional approach would be to integrate a high spatial resolution sensor with the FPI spectral camera in order to obtain high-quality 3D information, as suggested in our previous study. However, the lower quality DSM provided by the FPI spectral camera is also useful when processing and analyzing the data. In the case of UAV imaging, radiometric processing is a relatively unexplored topic. Radiometric sensor correction is needed for a quantitative remote sensing processing line, and we applied such methods in our processing line as well [7–9,14]. In this study, we considered the available methods to be accurate, but in future studies, reliable quality criteria should be developed for the sensor pre-processing phase. Traditional atmospheric correction methods based on radiative transfer have been developed for pushbroom imaging systems [31,42–45], and similar approaches have also been applied to UAV-based hyperspectral imaging systems [7,9]. Recently, approaches have been established for making radiometric block adjustments and for generating reflectance images for block data with rectangular images collected using stable, large-format digital photogrammetric cameras [42,46–48]. For UAV remote sensing applications using rectangular images, simple balancing approaches are typically used [3,5], and empirical line-based approaches are popular. Our objective is to develop a physically based method for the atmospheric correction of frame images, one that includes a radiometric block adjustment utilizing radiometric tie points and that utilizes in situ irradiance measurements in the UAV and/or on the ground, but we are still applying many simplifications to the method [20,29,33]. While the radiometric processing proved to be quite complicated, due to the variability in the illumination conditions, we found that both the radiometric block adjustment and the in situ irradiance measurement-based methods greatly improved the data quality. In the future, it will be of interest to integrate these methods. The investigated and developed processing methods are useful for airborne UAV frame-format imagery in general. Further investigations are still needed in order to develop accurate radiometric correction methods for high-resolution, multi-overlap frame image data collected under variable conditions. In the future, there will be a need to thoroughly consider the reflectance output products resulting from UAV remote sensing.
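One building block of such reflectance products is the empirical line step used here for the absolute calibration (parameters aabs, babs): a linear fit between image DNs and the known reflectance of reference targets. The sketch below assumes the form DN = aabs·reflectance + babs and uses illustrative target values; neither the exact functional form nor the target reflectances are stated explicitly in the text.

```python
# Empirical line calibration sketch: fit DN = a_abs * reflectance + b_abs from
# reference targets, then invert to map DNs to reflectance. The linear form and
# the target reflectances/DNs below are illustrative assumptions.
import numpy as np

def fit_empirical_line(dns: np.ndarray, reflectances: np.ndarray) -> tuple:
    a_abs, b_abs = np.polyfit(reflectances, dns, 1)
    return float(a_abs), float(b_abs)

def dn_to_reflectance(dn: np.ndarray, a_abs: float, b_abs: float) -> np.ndarray:
    return (dn - b_abs) / a_abs

if __name__ == "__main__":
    target_reflectance = np.array([0.05, 0.30, 0.50])   # assumed tarpaulin values
    target_dn = np.array([0.07, 0.33, 0.54])            # assumed image DNs (0-1 scale)
    a_abs, b_abs = fit_empirical_line(target_dn, target_reflectance)
    print(dn_to_reflectance(np.array([0.20, 0.40]), a_abs, b_abs))
```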
Quantitative radiometry is expected to improve the performance of remote sensing applications in general and, furthermore, will enable the use of rigorous radiative transfer modeling-based methods in the analysis of object characteristics; this would be advantageous for UAV-based precision agriculture, as the need for site-specific training data would be eliminated. The importance of accurate radiometric processing and atmospheric correction is also highlighted in agricultural applications with a global and regional focus. Recently, researchers have conducted experiments with UAV imaging systems carrying hyperspectral scanners based on the pushbroom principle [7,9]. In comparison to those systems, the FPI spectral camera collects fewer spectral bands, and the bands are not as narrow (10–40 nm in comparison to 1–10 nm). The advantages of the FPI spectral camera include its light weight and the fact that a direct orientation solution requiring expensive GNSS/IMU equipment is not needed, as well as the fact that it offers the possibility to conduct stereoscopic and multi-angular reflectance measurements. All innovations developed for frame-geometry images can be utilized directly when processing FPI spectral camera images; these techniques are expected to develop further due to the rapid spread of computer vision technologies in personal mobile equipment. We expect that the FPI spectral camera concept could provide more robust and cost-efficient applications than systems based on the pushbroom principle and that data collection can be optimized by using carefully selected spectral bands for each application. For many applications, we expect that the FPI spectral camera will be used as one component of an integrated sensor system; in the agricultural application, the important sensors to be integrated are a high spatial resolution, wide-bandwidth camera that provides more accurate DSMs, as well as a thermal camera [4,9]. We demonstrated the use of FPI data for estimating crop biomass in order to validate the data processing phase. Our results when using radiometrically corrected data and a supervised estimation method provided, at best, a 15.5% normalized root-mean-square error (NRMSE) in the biomass estimation, which is in line with the results presented in the existing literature; the NRMSE was 26.3% for the radiometrically uncorrected data. We assume that the results can be improved upon in many ways, for example, by using spectral band selection, spectral indices or multivariate statistics for the feature extraction. The results obtained using the new sensor data and the entire data cube were better than the results from the previous year, which were obtained using the 2011 prototype sensor [20,21]. We will emphasize the optimization of the estimation process in our future investigations [4,15–17]. Integrating the vegetation heights into the estimation process is also an interesting option. In the future, it will be important to develop an operational concept for precision agriculture using UAV technology [6,20]. In this operational concept, one of the crucial steps will be to quantify the geometric and radiometric properties required of the UAV remote sensing data, which has also been emphasized by Zhang and Kovacs. Further legislation also needs to be developed; this is an important factor influencing the way in which UAV technology is used in practical applications, as discussed by Watts et al.
Rapidly developing lightweight unmanned airborne vehicle (UAV) sensor technology provides new possibilities for environmental measurement and monitoring applications. We investigated the processing and performance of a new Fabry-Perot interferometer (FPI)-based spectral camera, weighing less than 700 g, that can be operated from lightweight UAV platforms. By collecting frame-format images in a block structure, spectrometric, stereoscopic data can be obtained. We developed and assessed an end-to-end processing chain for the FPI spectral camera data, consisting of image preprocessing, spectral data cube generation, image orientation, digital surface model (DSM) extraction, radiometric correction and supervised biomass estimation. Our results provided new knowledge about high-resolution, passive UAV remote sensing. The pre-processing provided consistent results, and the orientations of the images could be calculated with a self-calibrating bundle block adjustment using regular photogrammetric software. The quality of the DSM provided by automatic image matching was not as high as that obtained with a wider spectral bandwidth, higher spatial resolution camera. For the output image mosaics and a DSM with ground sample distances of 20 cm, produced from image data collected at a flying altitude of 140 m, the estimated root-mean-square error was 40 cm in height and 20 cm in the horizontal coordinates. The varying illumination conditions caused great radiometric differences between the images; our radiometric correction methods reduced the variation of the grey values in overlapping images from 14%–18% to 6%–8%. The supervised estimation of biomass provided a normalized root-mean-square error of 15.5% at best. Data quality is an important factor influencing the performance of a remote sensing application; our results showed that the signal-to-noise ratio (SNR) and the radiometric uniformity among the individual images forming the image mosaics impacted the biomass estimation quality. These results proved that a lightweight imaging sensor based on the sequential exposure of different bands can provide spectrometric, stereoscopic data. Furthermore, the results validated that useful spectrometric, stereoscopic data can be collected using lightweight sensors under highly variable illumination conditions, with fluctuating cloudiness, which is a typical operating environment for these systems. The results showed that all FPI technology-related processing steps (image preprocessing and spectral data cube generation) can be handled as separate steps and that the rest of the processing can be carried out using regular photogrammetric and remote sensing software. The fact that the images can be processed using regular software is an important aspect for users integrating the FPI spectral camera into their operational workflows. For the radiometric processing, we have developed new quantitative methods that are suited to frame-format images collected in variable illumination and atmospheric conditions. This was the first quantitative experiment with FPI camera-type technology covering the entire remote sensing processing chain. Our emphasis on developing analysis tools for extremely challenging illumination conditions represents a new approach in hyperspectral remote sensing. Our results confirmed the operability of the FPI camera in UAV remote sensing and the high potential of lightweight UAV remote sensing in general. We identified many aspects that should be improved in our processing line.
These are also general recommendations for method development. In general, there is a fundamental need to develop reliable methods for the geometric and radiometric processing of huge numbers of small, overlapping images. Another important conclusion is that it will be crucial to develop all-weather processing technology in order to take full advantage of this new technology and to make it operational in practical applications. We expect that there will be a great demand for these methods in the near future. We note that, in the future, it will be of great importance to develop reliable error propagation for all phases of the process in order to enable quantitative applications of these data. Numerical tolerances and criteria will be required by the user community as the UAV-based remote sensing business grows. Further development of the quality indicators presented in this investigation is necessary.

Acknowledgments

The research carried out in this study was partially funded by the Academy of Finland (Project No. 134181). The National Land Survey of Finland is acknowledged for the open airborne laser scanning data. We are grateful to the anonymous reviewers for their valuable comments.

Conflict of Interest

The authors declare no conflict of interest.

References and Notes

- Hunt, E.R., Jr.; Hively, W.D.; Fujikawa, S.J.; Linden, D.S.; Daughtry, C.S.T.; McCarty, G.W. Acquisition of NIR-Green-Blue digital photographs from unmanned aircraft for crop monitoring. Remote Sens 2010, 2, 290–305. [Google Scholar]
- Lelong, C.C.D.; Burger, P.; Jubelin, G.; Roux, B.; Labbé, S.; Baret, F. Assessment of unmanned aerial vehicles imagery for quantitative monitoring of wheat crop in small plots. Sensors 2008, 8, 3557–3585. [Google Scholar]
- Zhou, G. Near real-time orthorectification and mosaic of small UAV video flow for time-critical event response. IEEE Trans. Geosci. Remote Sens 2009, 47, 739–747. [Google Scholar]
- Berni, J.A.; Zarco-Tejada, P.J.; Suárez, L.; Fereres, E. Thermal and narrowband multispectral remote sensing for vegetation monitoring from an unmanned aerial vehicle. IEEE Trans. Geosci. Remote Sens 2009, 47, 722–738. [Google Scholar]
- Laliberte, A.S.; Goforth, M.A.; Steele, C.M.; Rango, A. Multispectral remote sensing from unmanned aircraft: Image processing workflows and applications for rangeland environments. Remote Sens 2011, 3, 2529–2551. [Google Scholar]
- Saari, H.; Pellikka, I.; Pesonen, L.; Tuominen, S.; Heikkilä, J.; Holmlund, C.; Mäkynen, J.; Ojala, K.; Antila, T. Unmanned Aerial Vehicle (UAV) operated spectral camera system for forest and agriculture applications. Proc. SPIE 2011, 8174. [Google Scholar] [CrossRef]
- Hruska, R.; Mitchell, J.; Anderson, M.; Glenn, N.F. Radiometric and geometric analysis of hyperspectral imagery acquired from an unmanned aerial vehicle. Remote Sens 2012, 4, 2736–2752. [Google Scholar]
- Kelcey, J.; Lucieer, A. Sensor correction of a 6-band multispectral imaging sensor for UAV remote sensing. Remote Sens 2012, 4, 1462–1493. [Google Scholar]
- Zarco-Tejada, P.J.; Gonzalez-Dugo, V.; Berni, J.A.J. Fluorescence, temperature and narrow-band indices acquired from a UAV platform for water stress using a micro-hyperspectral imager and a thermal camera. Remote Sens. Environ 2012, 117, 322–337. [Google Scholar]
- Delauré, B.; Michiels, B.; Biesemans, J.; Livens, S.; Van Achteren, T. The Geospectral Camera: A Compact and Geometrically Precise Hyperspectral and High Spatial Resolution Imager.
Proceedings of the ISPRS Hannover Workshop 2013, Hannover, Germany, 21–24 May 2013. - Nagai, M.; Chen, T.; Shibasaki, R.; Kumgai, H.; Ahmed, A. UAV-borne 3-D mapping system by multisensory integration. IEEE Trans. Geosci. Remote Sens 2009, 47, 701–708. [Google Scholar] - Jaakkola, A.; Hyyppä, J.; Kukko, A.; Yu, X.; Kaartinen, H.; Lehtomäki, M.; Lin, Y. A low-cost multi-sensoral mobile mapping system and its feasibility for tree measurements. ISPRS J. Photogramm. Remote Sens 2010, 65, 514–522. [Google Scholar] - Wallace, L.; Lucieer, A.; Watson, C.; Turner, D. Development of a UAV-LiDAR system with application to forest inventory. Remote Sens 2012, 4, 1519–1543. [Google Scholar] - Mäkynen, J.; Holmlund, C.; Saari, H.; Ojala, K.; Antila, T. Unmanned Aerial Vehicle (UAV) operated megapixel spectral camera. Proc. SPIE. [CrossRef] - Alchanatis, V.; Cohen, Y. Spectral and Spatial Methods of Hyperspectral Image Analysis for Estimation of Biophysical and Biochemical Properties of Agricultural Crops. In Hyperspectral Remote Sensing of Vegetation, 1st ed; Thenkabail, P.S., Lyon, J.G., Huete, A., Eds.; CRC Press: Boca Raton, FL, USA, 2012; pp. 289–305. [Google Scholar] - Zhang, C.; Kovacs, J.M. The application of small unmanned aerial systems for precision agriculture: a review. Precis. Agric 2012, 13, 693–712. [Google Scholar] - Yao, H.; Tang, L.; Tian, L.; Brown, R.L.; Bhatngar, D.; Cleveland, T.E. Using Hyperspectral Data in Precision Farming Applications. In Hyperspectral Remote Sensing of Vegetation, 1st ed.; Thenkabail, P.S., Lyon, J.G., Huete, A., Eds.; CRC Press: Boca Raton, FL, USA, 2012; pp. 591–607. [Google Scholar] - Zecha, C.W.; Link, J.; Claupein, W. Mobile sensor platforms: Categorisation and research applications in precision farming. J. Sens. Sens. Syst 2013, 2, 51–72. [Google Scholar] - Nackaerts, K.; Delauré, B.; Everaerts, J.; Michiels, B.; Holmlund, C.; Mäkynen, J.; Saari, H. Evaluation of a lightweigth UAS-prototype for hyperspectral imaging. Int. Arch. Photogramm. Remote Sens. Spat. Infor. Sci 2010, 38, 478–483. [Google Scholar] - Honkavaara, E.; Kaivosoja, J.; Mäkynen, J.; Pellikka, I.; Pesonen, L.; Saari, H.; Salo, H.; Hakala, T.; Markelin, L.; Rosnell, T. Hyperspectral reflectance signatures and point clouds for precision agriculture by light weight UAV imaging system. ISPRS Annal. Photogramm. Remote Sens. Spat. Inf. Sci 2012, I-7, 353–358. [Google Scholar] - Pölönen, I.; Salo, H.; Saari, H.; Kaivosoja, J.; Pesonen, L.; Honkavaara, E. Biomass estimator for NIR image with a few additional spectral band images taken from light UAS. Proc. SPIE 2012(8369). [CrossRef] - Scholten, F.; Wewel, F. Digital 3D-data acquisition with the high resolution stereo camera-airborne (HRSC-A). Int. Arch. Photogramm. Remote Sens. Spat. Infor. Sci 2000, 33, 901–908. [Google Scholar] - Leberl, F.; Irschara, A.; Pock, T.; Meixner, P.; Gruber, M.; Scholz, S.; Wiechert, A. Point clouds: Lidar versus 3D vision. Photogramm. Eng. Remote Sens 2010, 76, 1123–1134. [Google Scholar] - Haala, N.; Hastedt, H.; Wolf, K.; Ressl, C.; Baltrusch, S. Digital photogrammetric camera evaluation—Generation of digital elevation models. Photogramm. Fernerkund. Geoinf 2010, 2, 99–115. [Google Scholar] - Hirschmüller, H. Semi-Global Matching: Motivation, Development and Applications. In Photogrammetric Week 2011; Fritsch, D., Ed.; Wichmann Verlag: Heidelberg, Germany, 2011; pp. 173–184. [Google Scholar] - Rosnell, T.; Honkavaara, E. 
Point cloud generation from aerial image data acquired by a quadrocopter type micro unmanned aerial vehicle and a digital still camera. Sensors 2012, 12, 453–480. [Google Scholar] - Rosnell, T.; Honkavaara, E.; Nurminen, K. On geometric processing of multi-temproal image data collected by light UAV systems. Int. Arch. Photogramm. Remote Sens. Spat. Infor. Sci 2011, 38, 63–68. [Google Scholar] - Schaepman-Strub, G.; Schaepman, M.E.; Painter, T.H.; Dangel, S.; Martonchik, J.V. Reflectance quantities in optical remote sensing—Definitions and case studies. Remote Sen. Environ 2006, 103, 27–42. [Google Scholar] - Honkavara, E.; Hakala, T.; Saari, H.; Markelin, L.; Mäkynen, J.; Rosnell, T. A process for radiometric correction of UAV image blocks. Photogramm. Fernerkund. Geoinfor 2012. [Google Scholar] [CrossRef] - Förstner, W.; Gülch, E. A Fast Operator for Detection and Precise Location of Distinct Points, Corners and Centers of Circular Features. Proceedings of Intercommission Conference on Fast Processing of Photogrammetric Data, Interlaken, Swizerland, 2–4 June 1987; pp. 281–305. - Beisl, U. New Method for Correction of Bidirectional Effects in Hyperspectral Images. Proceedings of Remote Sensing for Environmental Monitoring, GIS Applications, and Geology, Toulouse, France, 17 September 2001. - Walthall, C.L.; Norman, J.M.; Welles, J.M.; Campbell, G.; Blad, B.L. Simple equation to approximate the bidirectional reflectance from vegetative canopies and bare soil surfaces. Appl. Opt 1985, 24, 383–387. [Google Scholar] - Hakala, T.; Honkavaara, E.; Saari, H.; Mäkynen, J.; Kaivosoja, J.; Pesonen, L.; Pölönen, I. Spectral imaging from UAVs under varying illumination conditions. Int. Arch. Photogramm. Remote Sens. Spat. Infor. Sci. 2013, XL-1/W2, 189–194. [Google Scholar] - Kuvatekniikka Oy Patrik Raski. Available online: http://kuvatekniikka.com/ (accessed on 5 September 2013). - Intersil ISL29004 Datasheet. Available online: http://www.intersil.com/content/dam/Intersil/documents/fn62/fn6221.pdf (accessed on 9 October 2013). - Kotsiantis, S.B. Supervised machine learning: A review of classification techniques. Informatica 2007, 31, 249–268. [Google Scholar] - National Land Survey of Finland Open Data License. Available on line: http://www.maanmittauslaitos.fi/en/NLS_open_data_licence_version1_20120501 (accessed on 10 October 2013). - Chiang, K.-W.; Tsai, M.-L.; Chu, C.-H. The development of an UAV borne direct georeferenced photogrammetric platform for ground control point free applications. Sensors 2012, 12, 9161–9180. [Google Scholar] - Turner, D.; Lucieer, A.; Watson, C. An automated technique for generating georectified mosaics from ultra-high resolution Unmanned Aerial Vehicle (UAV) imagery, based on Structure from Motion (SfM) point clouds. Remote Sens 2012, 4, 1392–1410. [Google Scholar] - Snavely, N. Bundler: Structure from Motion (SFM) for Unordered Image Collections, Available online: phototour.cs.washington.edu/bundler/ (accessed on 12 July 2013). - Mathews, A.J.; Jensen, J.L.R. Visualizing and quantifying vineyard canopy LAI using an Unmanned Aerial Vehicle (UAV) collected high density structure from motion point cloud. Remote Sens 2013, 5, 2164–2183. [Google Scholar] - Honkavaara, E.; Arbiol, R.; Markelin, L.; Martinez, L.; Cramer, M.; Bovet, S.; Chandelier, L.; Ilves, R.; Klonus, S.; Marshal, P.; et al. Digital airborne photogrammetry—A new tool for quantitative remote sensing?—A state-of-the-Art review on radiometric aspects of digital photogrammetric images. 
Remote Sens 2009, 1, 577–605. [Google Scholar]
- Richter, R.; Schläpfer, D. Geo-atmospheric processing of airborne imaging spectrometry data. Part 2: Atmospheric/topographic correction. Int. J. Remote Sens 2002, 23, 2631–2649. [Google Scholar]
- Richter, R.; Kellenberger, T.; Kaufmann, H. Comparison of topographic correction methods. Remote Sens 2009, 1, 184–196. [Google Scholar]
- Beisl, U.; Telaar, J.; von Schönemark, M. Atmospheric Correction, Reflectance Calibration and BRDF Correction for ADS40 Image Data. Proceedings of the XXI ISPRS Congress, Commission VII, Beijing, China, 3–11 July 2008.
- Chandelier, L.; Martinoty, G. Radiometric aerial triangulation for the equalization of digital aerial images and orthoimages. Photogramm. Eng. Remote Sens 2009, 75, 193–200. [Google Scholar]
- Collings, S.; Cacetta, P.; Campbell, N.; Wu, X. Empirical models for radiometric calibration of digital aerial frame mosaics. IEEE Trans. Geosci. Remote Sens 2011, 49, 2573–2588. [Google Scholar]
- López, D.H.; García, B.F.; Piqueras, J.G.; Alcázar, G.V. An approach to the radiometric aerotriangulation of photogrammetric images. ISPRS J. Photogramm. Remote Sens 2011, 66, 883–893. [Google Scholar]
- Atzberger, C. Advances in remote sensing of agriculture: Context description, existing operational monitoring systems and major information needs. Remote Sens 2013, 5, 949–981. [Google Scholar]
- Watts, A.C.; Ambrosia, V.G.; Hinkley, E.A. Unmanned Aircraft Systems in remote sensing and scientific research: Classification and considerations of use. Remote Sens 2012, 4, 1671–1692. [Google Scholar]

FPI spectral camera prototype specifications:
| Parameter | Prototype 2011 | Prototype 2012 |
| Horizontal and vertical FOV (°) | >36, >26 | >50, >37 |
| Nominal focal length (mm) | 9.3 | 10.9 |
| Wavelength range (nm) | 400–900 | 400–900 |
| Spectral resolution at FWHM (nm); depending on the selection of the FPI air gap value | 9–45 | 10–40 |
| Spectral step (nm); adjustable by controlling the air gap of the FPI | <1 | <1 |
| Pixel size (μm); no binning/default binning | 2.2/8.8 | 5.5/11 |
| Maximum spectral image size (pixels) | 2,592 × 1,944 | 2,048 × 2,048 |
| Spectral image size with default binning (pixels) | 640 × 480 | 1,024 × 648 |
| Camera dimensions (mm) | 65 × 65 × 130 | <80 × 92 × 150 |
| Weight (g); including battery, GPS receiver, downwelling irradiance sensors and cabling | <420 | <700 |

Spectral band settings:
Raw data, 42 bands. Central peak wavelengths (nm): 506.8, 507.4, 507.9, 508.4, 510.2, 515.4, 523.3, 533.0, 541.3, 544.1, 550.5, 559.6, 569.7, 581.3, 588.6, 591.3, 596.7, 601.7, 606.7, 613.8, 629.5, 643.1, 649.7, 657.2, 672.6, 687.3, 703.2, 715.7, 722.7, 738.8, 752.7, 766.9, 783.2, 798.1, 809.5, 811.1, 826.4, 840.6, 855.2, 869.9, 884.5, 895.4. FWHM (nm): 14.7, 22.1, 15.2, 16.7, 19.7, 23.8, 25.5, 24.9, 22.7, 12.7, 23.9, 23.0, 27.2, 21.4, 18.3, 41.1, 22.1, 44.0, 21.4, 41.5, 41.1, 35.3, 12.9, 40.4, 36.5, 38.3, 33.5, 29.9, 32.7, 32.8, 27.6, 31.8, 32.1, 25.9, 14.7, 28.2, 29.5, 26.5, 28.3, 28.4, 26.4, 22.3.
Smile-corrected data, 30 bands. Central peak wavelengths (nm): 511.8, 517.9, 526.6, 535.5, 544.2, 553.3, 562.5, 573.1, 582.7, 590.6, 595.2, 599.5, 606.2, 620.0, 634.4, 648.0, 662.5, 716.8, 728.2, 742.9, 757.0, 772.1, 787.5, 801.6, 815.7, 830.3, 844.4, 859.0, 873.9, 887.3. FWHM (nm): 19.7, 23.8, 25.5, 24.9, 22.7, 23.9, 23.0, 27.2, 21.4, 18.3, 41.1, 22.1, 44.0, 41.5, 41.1, 35.3, 40.4, 29.9, 32.7, 32.8, 27.6, 31.8, 32.1, 25.9, 28.2, 29.5, 26.5, 28.3, 28.4, 26.4.

Radiometric correction calculation cases (Table 3):
| Calculation Case (id) | Dataset | Parameters |
| No correction (no corr) | Full, Strip 3 | aabs, babs |
| Relative radiometric correction using wide-bandwidth irradiance measured in the UAV (uav) | Full | Cj, aabs, babs |
| Relative radiometric correction using spectral irradiance measured on the ground (ground) | Full | Cj(λ), aabs, babs |
| Radiometric block adjustment with relative multiplicative correction (BA: relA) | Full | arel_j, aabs, babs |
| Radiometric block adjustment with BRDF and relative additive correction (BA: relB, BRDF) | Strip 3 | brel_j, a’, b’, aabs, babs |

Table 4 (block statistics) column headings:
| Band | σ0 | RMSE Positions (m) | RMSE Rotations (°) | RMSE GCPs (m) | N |

Table 5 (self-calibration parameters) column headings:
| x0 (mm) | y0 (mm) | k1 (mm·mm−3) | x0 (mm) | y0 (mm) | k1 (mm·mm−3) |

© 2013 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/3.0/).
image by Joseph McMillan

The flag of Conselheiro Lafaiete was designed by the heraldist Arcinóe Antônio Peixoto de Faria. It consists of a green field with a red cross, bordered white, and overall the municipal coat of arms on a white lozenge. As usual with Peixoto de Faria's designs, the use of a cross to "quarter" the field is described as following Portuguese traditions for civic flags, although in fact it does not. The cross is also said to symbolize the Christian spirit of the people of the municipality. As is also usual for Peixoto de Faria's statements of symbolism, the coat of arms is supposed to symbolize the municipal government, the lozenge the city of Conselheiro Lafaiete proper, the green field the rural areas of the municipality, and the arms of the cross the radiation of municipal authority throughout the municipality.
Joseph McMillan, 2 April 2003
History of Hasidism

The beliefs and practices of Hasidic Jews date back to the eighteenth century. Hasidism - the Hasidic Judaism movement - was founded by Rabbi Israel Baal Shem Tov, and many mystical stories surround his great personality. Hasidic Jews tell a very interesting story about the birth of the founder of Hasidism, Rabbi Israel Baal Shem Tov. His father, Rabbi Eliezer, was a very pious Orthodox Jew who, together with his wife Sara, was childless until close to the age of 100. R' Eliezer and his wife always performed the great deed of taking in guests, called Hachnasat Orchim by Orthodox Jews. There was great admiration in heaven for the great deeds of R' Eliezer and his wife Sara, and the court of heaven decided that they must be rewarded. Hasidic Jews relate the story further: the Satan argued that he wanted to test whether R' Eliezer would take in even a poor and dirty person as a guest. The court of heaven agreed. The Satan dressed up as a poor person clothed in rags, dirty from head to toe and with a terrible body odor, knocked on the door of R' Eliezer and asked if he could stay there. R' Eliezer and Sara agreed. He asked for food and behaved in a very immoral fashion; he asked for more and more food and later requested to sleep in R' Eliezer's bed. All his needs were fulfilled. As soon as the Satan returned to heaven, the heavenly court decided that R' Eliezer must be rewarded. Hasidic legend has it that it was decided in heaven that Sara would give birth to a son with a holy soul, a soul that merits to come to earth only once in a thousand years. Little Srulik was born, and his parents died when he was very young. Srulik devoted his childhood years to deep learning of Torah; the history of Hasidism thus began in his earliest years. Hasidic Jews accept many stories about his holiness at a very young age. His devotion to Hashem (God), his love for every fellow Jew and his joy in every Mitzvah (commandment) of Hashem were evident from his youth. As Rabbi Yisroel grew older, he looked around at his fellow Orthodox Jews and saw many of them broken-hearted and discouraged in their Judaism. He began traveling to the towns and villages where Orthodox Jews lived and taught what became the Hasidic beliefs. He explained the love of Hashem (God) for every single Jew, the importance of love for every fellow Jew, and the vision of doing every commandment of Hashem with great love. R' Yisroel, or as Hasidic Jews call him, the Baal Shem Tov (the Master of the Good Name), started gathering many followers around him. This is when the Hasidic Judaism movement started, and during the eighteenth century he gained followers in the thousands. Many, many stories are retold by Hasidic Jews about the divine power of R' Yisroel. His blessings worked wonders: childless parents were blessed with children, the sick were healed and the lost were found through his blessings. Even gentiles came to him for his blessing. He was able to say what was happening at the other end of the world and to foretell upcoming events. His followers, and Hasidim today, regard him as a heavenly, divine sage. It is important to stop here and explain a little of the beliefs of Hasidic Judaism, in order to understand the later development of Orthodox Hasidic Jews and their movements. The Baal Shem Tov emphasized the importance of singing to Hashem and the value of every small deed.
The following is a list of Hasidic beliefs that can serve as a definition of Hasidism:

At the outset of the Hasidic Jewish movement there was great opposition. Many great Jewish leaders were against the teachings of the Baal Shem Tov and his followers. They feared that it was the start of a shift away from authentic Judaism and looked at it as the beginning of a new movement, somewhat like the Reform or Conservative Judaism that developed later. These fears turned out to be baseless, as the Hasidim devoted their lives to even stricter Orthodox Jewish standards. More can be read on the history of the opposition to Hasidism (Hitnagdut). After the passing of the Baal Shem Tov, Hasidism spread in the nineteenth century from Ukraine to Russia, Poland and Lithuania. It is in these years that the Hasidic style of clothing started to develop, and Hasidic Jews grew in number to the hundreds of thousands under many different leaders, all of them disciples of the Baal Shem Tov. Pictures of Hasidic Jews from those days can be found in museums and old books; the photos show Hasidic Jews dressed in long black garb, with small caps on their heads and long curls at their sides. As Hasidism progressed, it became more organized. It developed into multiple streams and sects, all of them following the basic teaching of the Baal Shem Tov. It is hard for someone who is not a Hasid to understand the many differences, but there are a few dominant sects within Hasidic Judaism. Satmar - led by the late Rabbi Yoel Teitelbaum. Rabbi Teitelbaum was one of the greatest rabbis to rebuild Hasidism in New York after World War II, and he revived Hasidic Jewry in America. Most Brooklyn Hasidic Jews are his followers or influenced by them. He is also known for his strong anti-Zionist views; he believed that the Jews are not to have their own state before the coming of the Messiah. Breslov - Breslov Hasidim follow Reb Nachman of Breslov, a great-grandson of the Baal Shem Tov. They do not have a leader today, only so-called Mashpi'im (influencers). They practice Simcha (happiness), love for every Jew and all the other principles of Hasidism. Reb Nachman added to Hasidism the importance of Hitbodedut - being alone with Hashem (God). Breslov Hasidim will seclude themselves in a room or go out into the woods and speak with Hashem.
How To Evaluate Figure Skating Injuries

The United States Figure Skating Association (USFSA) regulates the testing and competition of 170,000 members, a number that represents a 63 percent increase over just one decade. This surge in popularity can be partially attributed to figure skating becoming more financially accessible with the newer focus on only the freestyle aspect of the sport. The success of American figure skaters like Michelle Kwan and Brian Boitano has also increased the popularity of the sport. This greater participation has subsequently brought about a higher level of competition. Both the athletic and artistic aspects require expert conditioning, technique, dance ability and aesthetic presentation. As a result, elite skaters can spend three to five hours a day, five to six days per week on the ice. In addition to their rigorous on-ice practice schedule, these athletes may spend several more hours engaged in off-ice conditioning and ballet classes. On- and off-ice training dramatically increases when the skater reaches more advanced levels, often coinciding with adolescence and a time of asynchronous bone and soft tissue development. The stress of performing repetitive jumps and other difficult moves during this period of development presents a unique set of circumstances under which skating injuries can occur.1 Given the potential for various injuries among figure skaters, there is a strong need for practitioners who demonstrate a fundamental understanding of the sport and the equipment involved (see "What You Should Know About Skating Boots" below).

Understanding The Biomechanics Of Skate Gait

Stroking, the basis of skate gait, is essentially skating across the ice using a series of push-offs to generate momentum. Proper stroking technique is important in that the skater desires maximum speed without having to expend needless energy or put unnecessary stress on the body. Stroking, in conjunction with good balance and posture, consists of the interplay of inside and outside edges. Simply stated, the skater obtains the inside edge by gliding on the medial side of the blade and the outside edge by gliding on the lateral side. Though the skater constantly changes edges and direction, fundamental skate gait remains the same. The skater begins with the push-off of one blade against the ice to create gliding on the weightbearing side. At the beginning of each stroke, there is a slight flexion of the free or non-weightbearing hip. This is followed by forced hip abduction as the blade concurrently pushes off the ice and produces momentum on the weightbearing skate. The extremity that provided the push then purchases the ice, the body weight shifts accordingly and the mechanism reverses. The weightbearing knee should be slightly bent throughout to keep the center of gravity over the blade rocker and prevent forward canting from the intrinsic heel of the skate. Accordingly, flexibility of the posterior leg is important to prevent injury secondary to this stress.

Key Considerations In Evaluating Skating Injuries

While the podiatrist may assume the boot manufacturer has a level of anatomical understanding when it comes to the foot, evaluating a skater's biomechanics and the problems that can occur therein is more suited to the orthopedic professional. In addition to emphasizing that the skater bring his or her skates to the appointment, clinicians must weigh several factors in evaluating the injuries of these patients.

• Who came with the skater?
In addition to parental involvement in the actual visit, it is important to ensure that someone is able to communicate any necessary treatment/training modifications to the skater’s coach. • Evaluate the overall skater. What age and sex is the skater? Does he or she skate in singles, pairs or ice dancing? What level is the skater at? Novice, junior and senior skaters are practicing double and triple jumps. This will provide a guideline as to the physical intensity of the skater’s practices. • Does the skater appear underweight? Both male and female skaters are at risk for eating disorders such as anorexia, which can lead to osteoporosis and other serious health issues.2,5 Making a referral to the pediatrician or primary care physician may be advisable. • Evaluate the boots. Are they too stiff? There should be a slight bend to the upper when applying average force with the hands. Many skaters have various pathologies related to stiff boots. If the skater is practicing double and triple jumps, there is need for more support but not more than previously described. Contrarily, are the boots breaking down? If worn, crease lines are usually prominent along the medial and lateral upper. Tongue creases are common and not necessarily indicative of detrimental wear. • When were the blades last sharpened? Overly sharp blades stick and dull blades skid. Depending on the weight of the skaters, how much the skaters practice, the intensity at which they skate and their personal preferences, they should sharpen their blades every three weeks to three months with regular use. • If the boots are new, were the blades mounted correctly? A properly placed blade to a boot is like neutral position to an orthotic casting. If the skater is having a difficult time with the inside or outside edge of either skate after a sharpening or replacement, a professional should reevaluate the blades. Most training rinks have someone with expertise in this area or they can refer the skater to a local professional who has this experience. Some boot manufacturers do provide this service. • Is there indication for an orthotic? Orthoses for both the skates and street shoes are reasonable for any skater as skate boots lack arch support. Graphite or thin polypropylene with intrinsic posting and a thin top cover works best. Metatarsal pads or bars are also appropriate for skaters with histories of metatarsal head stress fracture, sesamoiditis or any metatarsalgia. Send tracings of the manufacturer’s insoles with the prescription and if possible, send an older pair of skate boots. The goal is to make a small enough device that is wide enough so the skater does not pronate into the gap between the orthotic and the medial aspect of the skate. • Is there rubbing from the boot causing soft tissue injury? If a boot modification is necessary, it is helpful to write down detailed instructions for the patient to take to the manufacturer regarding the location and suggested alteration. Marking the area with a pencil can be helpful. Pertinent Pearls Regarding Acute Injuries Out of 236 male and female figure skaters at four consecutive World Junior Figure Skating Championships, a study revealed 25 percent of female skaters and 27.9 percent of male skaters sustained some kind of acute skating injury over the course of their careers. 
The authors of the study found that acute injuries were partly specific to the discipline of the skater involved.1 For instance, contusions and lacerations were common in ice dancing, where the man and woman are required to stay close together while engaged in quick changes of direction and hand holds. These injuries were also common for those involved in pair skating, which requires high lifts and big throws. Other acute injuries included traumatic fractures of hands, wrists and arms as well as sprains and strains of shoulders, knees and wrists. Ankle sprains were the most frequently reported injury among all the skating disciplines.1 Interestingly, the researchers noted these injuries occurred most often during off-ice activities.

In her article "The Young Skater," Angela Smith, MD, postulates that the skater spends so much time in a stiff boot that the peroneal muscles weaken. This is similar to what happens to muscles when they are in a cast for an extended period.2 When skaters subsequently engage in off-ice training, they are more likely to sprain an ankle. Boots that have worn out or boots without enough upper support may also be the culprit in an on-ice sprain, particularly if the skater engages in rigorous jumping.

When it comes to the prevention and treatment of a figure skater's acute ankle sprain, there is a two-pronged approach. One should emphasize improving muscle strength via balance and proprioception exercises, which the skater can integrate into an off-ice, cross-training regimen. Clinicians should also encourage skaters to wear a skate boot that provides adequate support while still allowing some ankle plantarflexion/dorsiflexion to promote intrinsic strength. Other acute skating injuries include Achilles tendon rupture (secondary to jumping), peroneal or posterior tibial tendon rupture, fracture and plantar fascial strain/rupture.

What You Should Know About Chronic Injuries

Lipetz and Kruse say the foot is the most common location of injury to a figure skater, and that most of these foot injuries are overuse injuries.4 Dubravcic-Simunjak et al. note that stress fractures are the most frequent chronic injuries in female skaters. Stress fractures of the most common sites — the metatarsals, tibia, fibula and navicular — can occur in both the take-off and landing extremity.1 This suggests that skaters not only place a high level of force on the landing leg but also exert a significant amount of force to vault into the air in order to achieve sufficient height, rotation and distance across the ice. Underscoring the effect these forces have on adolescent figure skaters, Oleson et al. found that lower bone mass secondary to young age and the prolonged wearing of a stiff skating boot did not contribute to these types of injuries.5

Stress fractures and other injuries may actually feel better in the skates while being made worse by them. A stiff boot is like a short cast, and a skater with an injury such as a metatarsal stress fracture may experience minimal or no pain while engaged in lower intensity skating. This creates a false sense of wellness or improvement in the skater's mind. However, the injury is likely to worsen with rigorous practice. Particularly when one is treating a young skater or a patient with an enthusiastic parent, it may be preferable to utilize a non-weightbearing cast over a removable Cam walker or stiff-soled shoe for a period of time in order to keep the patient off the ice until the injury has healed.
How Boot Fit Can Contribute To Injury

Skate boots can cause and exacerbate soft tissue conditions. For instance, when skaters use a new or excessively stiff boot, there may be insufficient ankle dorsiflexion, which causes forward leaning and an eccentric load on the back leg in an attempt to maintain the center of gravity over the skate rocker. With continuous forward canting, the skater must maintain excessive knee flexion and ankle dorsiflexion to maintain balance. This can strain the posterior elements and contribute to Achilles injury or tendonitis, which are common in skaters, especially those who are predisposed to these conditions.6 Over-training, poor technique and the mechanical pressure and rubbing of a stiff posterior upper against the Achilles can also cause tendonitis (see "Treating Tendinitis Conditions Related To Boots" below).4 Conversely, if the boot is too loose and the heel is allowed to move up and down excessively (more than 1/2 inch), Haglund's deformity or retrocalcaneal bursitis may occur.2 If the boots are relatively new, advise the skater of the etiology of the condition so the manufacturer can perform any necessary posterior upper modifications such as stretching, molding or the addition of extra padding.

Another posterior ankle/leg soft tissue injury caused by boot fit is irritation or dermal thickening of the lower posterior leg due to repeated plantarflexion. However, one can easily remedy this by adding a modification called a dance back. To do so, one would remove a portion of the posterior-superior upper and insert a soft, closed-cell foam material. The manufacturer can do this during or after boot fabrication. The manufacturer can also "punch out" other areas of boot irritation such as those over bony prominences. Padding can also be used for such problems. Clinicians can recommend moleskin, felt and silicone devices such as Bunga Pads (Absolute Athletics) to the skater. The boot manufacturer and ice arena pro shops usually carry these materials.

Skate boots can also cause abrasions, blisters and ganglion cysts. As an example, a 13-year-old female skater presented to my office with a large ankle joint ganglion cyst just anterior to the lateral malleolus. The skate boot was secondhand and excessively stiff. There was no bend to the upper with forceful effort. There had been no modifications for malleoli or bony prominences. I proceeded to aspirate the ganglion and used a corticosteroid injection followed by compression. I advised the skater that she needed new boots or, at the very least, that her current pair would have to be molded to accommodate pressure points. The skater missed her follow-up but related that she felt "fine" during a phone conversation two months after treatment. She had obtained new boots.

Bursitis, hammertoes, Sever's disease and plantar fasciitis (especially in skaters with tight posterior leg muscles or increased longitudinal arches) are other chronic pedal conditions that have been linked to figure skating.3 It is not currently known whether hallux valgus, limitus or rigidus are directly correlated with skating or its equipment. However, painful bursae and neuritis can form over deformities due to the rigidity of the boot. Again, heat-molding of the boot by the manufacturer and/or padding is appropriate.

Why Off-Ice Conditioning Is So Important

A skater uses a variety of muscle groups for everything from executing the most difficult triple jumps down to the placement of a finger and a facial expression.
Combining proper technique with adequate strength and flexibility can reduce the risk of injury. Such conditioning begins off-ice, where skaters should incorporate into their training a regimen of strengthening and stretching as well as cardiovascular work. However, any mature athlete understands the benefits of rest, and one should recommend that the skater take one day a week away from strenuous physical activity in order to allow for recuperation.

A special consideration for skaters is the environment in which they train. Ice arenas are damp and cold, and elite skaters frequently train in the early morning. Properly warming up off and on the ice is important prior to and after practice. Skaters should also warm up again after breaks or resurfacing of the ice. Older skaters should be particularly cognizant of the need for proper stretching and warming up in order to prevent injury.

What You Should Know About Skating Boots

Since many acute and chronic injuries are caused by the skating boot itself, it is a good idea to have skaters bring their skates to the appointment for evaluation.1-4 A skate is not just a blade mounted to a boot. The apparatus can vary drastically in regard to the position of the blade, the fit of the boot, any modifications made and the materials used in the construct. Although boot and blade combinations are mass produced and sold cheaply in various sporting goods and department stores, competitive skaters will likely obtain their boots from a reputable boot company or dealer, purchase their blades separately and have them properly mounted by a professional. A custom fitted boot with a quality set of blades can cost in excess of $1,000. Many skaters opt to purchase stock boots, which are a good alternative to the more expensive custom boots given the expense involved in replacing skates that are continually outgrown. However, even these stock boot/blade combos can cost hundreds of dollars.

For custom skate boots, the professional fitter will take various foot measurements. (One may take a plaster cast such as that for a custom molded shoe although this is not standard.) The measurements attempt to accommodate all bony prominences like the malleoli as well as other osseous considerations such as hallux abductovalgus deformities. Stock boots may or may not include accommodation for normal bony anatomy but the manufacturer can easily mold or "punch out" the leather of both custom and stock boots. It is important for podiatrists to know that custom and stock boots lack intrinsic arch support or orthoses, although most professional boot companies provide foam box casting services by their personnel at additional cost.

Treating Tendinitis Conditions Related To Boots

Other common tendinitis conditions related to skate boots are those of the extensor hallucis, tibialis anterior and posterior tibial tendons.6 Extensor tendonitis or lace bite results from lateral slipping and compression of the tongue across the top of the foot and ankle with dorsiflexion. One can add midline lace hooks or alternate lacing, or supplement the skate tongue with porous rubber, felt or lamb's wool in order to treat this problem in conjunction with anti-inflammatory medication and ice. Skating can exacerbate or even cause posterior tibial tendinitis because the standard boot construct lacks intrinsic arch support. Custom orthoses are certainly indicated in this case.
In Conclusion

Educating the skater, coach or trainer and/or parents in regard to the etiology of the skater's condition is essential to ensure compliance. Clinicians should emphasize a gradual return to the ice with off-ice conditioning and a slow increase in the difficulty of moves after returning to the ice. Keep in mind that prior to or during the competitive season (October to March), skaters will be especially anxious to get back on the ice, so it is important to have a thorough discussion about the importance of compliance.

The particulars of figure skating are unique to a distinct subset of athletes. Clinicians should recognize that youth, finances and external motivation are key factors in a skater's treatment course. Not only must the health care provider treat the skater's injury and its contributing factors, he or she should also educate the skater and his or her support system to ensure there is not a premature return to the ice that risks further aggravation or re-injury.

Dr. Janowicz is a podiatrist for Kaiser Permanente in Oakland, California and a member of the United States Figure Skating Sports Medicine Society.

References
1. Dubravcic-Simunjak S, Pecina M, Kuipers H, Moran J, Haspl M. The Incidence of Injuries in Elite Junior Figure Skaters. Am J Sports Med 31(4):511-17, 2003.
2. Smith AD. The Young Skater. Phys Med Rehab 19(4):741-55, 2000.
3. Bloch RM. Figure Skating Injuries. Phys Med Rehab 10(1):177-88, 1999.
4. Lipetz J, Kruse RJ. Injuries and Special Concerns of Female Figure Skaters. Phys Med Rehab 19(2):369-80, 2000.
5. Oleson CV, Busconi BD, Baran DT. Bone Density in Competitive Figure Skaters. Phys Med Rehab 83(1), 2002.
6. Muller DL, Renstrom PAFH, Pyne JIB. Ice Skating: Figure, Speed, Long Distance, and In-Line. In: Fu FH, Stone DA (eds), Sports Injuries: Mechanisms, Prevention, Treatment. Williams and Wilkins, 1994.
<urn:uuid:5e74dd59-229b-47f5-9bd4-36923e30f494>
CC-MAIN-2016-26
http://www.podiatrytoday.com/article/5374?page=1
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783403826.29/warc/CC-MAIN-20160624155003-00051-ip-10-164-35-72.ec2.internal.warc.gz
en
0.930979
4,001
3.046875
3
A 12-kW buck converter is implemented with a detailed half-bridge IGBT model. Based on the thermal characteristics of the selected IGBT module, both switching and conduction losses are calculated. Thermal blocks from the Simscape foundation library are used to model the heat dissipation provided by the heat sink. The simulation illustrates the impact of switching frequency and load on the total losses of the buck converter. You can select among three different commercial IGBT modules. A procedure given in .m files allows you to add your own device characteristics to the provided component libraries. A Help file containing useful information on the model is also included.

Authors: Pierre Giroux, Gilbert Sybille, Olivier Tremblay
Hydro-Quebec Research Institute (IREQ)

Very detailed engineering work! It could be used as a foundation to simulate many topologies. It addresses the important issue of estimating power losses in semiconductors and their relationship to junction temperature in a complete fashion.
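The loss bookkeeping that the model automates can be sketched outside Simulink as well. The Python snippet below is only an illustration of that arithmetic: conduction losses from an on-state voltage drop, switching losses from per-pulse switching energies, and a steady-state junction temperature through a lumped thermal chain. Every numeric value in it (on-state drop, switching energies, thermal resistances, operating point) is an assumed placeholder, not a parameter of the IGBT modules shipped with the model.

```python
# Rough per-switch loss and junction-temperature estimate for a hard-switched
# buck converter. All datasheet-style numbers below are illustrative guesses.

def igbt_losses(i_avg, f_sw, duty,
                v_ce_sat=1.8,    # on-state collector-emitter drop in V (assumed)
                e_on=1.5e-3,     # turn-on energy per pulse in J (assumed)
                e_off=2.0e-3):   # turn-off energy per pulse in J (assumed)
    """Return (conduction_loss_W, switching_loss_W) for one IGBT."""
    p_cond = duty * v_ce_sat * i_avg   # first-order Vce * I * duty approximation
    p_sw = (e_on + e_off) * f_sw       # energy per switching cycle times frequency
    return p_cond, p_sw

def junction_temperature(p_total, t_ambient=40.0,
                         r_th_jc=0.30,   # junction-to-case, K/W (assumed)
                         r_th_ch=0.10,   # case-to-heat-sink, K/W (assumed)
                         r_th_ha=0.50):  # heat-sink-to-ambient, K/W (assumed)
    """Steady-state junction temperature through a lumped thermal resistance chain."""
    return t_ambient + p_total * (r_th_jc + r_th_ch + r_th_ha)

# Example operating point: 60 A average device current at 50 percent duty cycle.
for f_sw in (2e3, 5e3, 10e3):
    p_cond, p_sw = igbt_losses(i_avg=60.0, f_sw=f_sw, duty=0.5)
    t_j = junction_temperature(p_cond + p_sw)
    print(f"f_sw = {f_sw/1e3:4.1f} kHz  Pcond = {p_cond:5.1f} W  "
          f"Psw = {p_sw:5.1f} W  Tj ~ {t_j:5.1f} C")
```

In the actual demonstration the losses come from the characteristics of the selected commercial module and the heat sink is represented with Simscape thermal blocks rather than a single lumped resistance, but the trend this sketch shows, switching losses growing with switching frequency while conduction losses track the load, is the behavior the simulation illustrates.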
<urn:uuid:3244407f-4859-49a4-8e7e-587f584e2147>
CC-MAIN-2016-26
http://www.mathworks.com/matlabcentral/fileexchange/35980-loss-calculation-in-a-buck-converter-using-simpowersystems-and-simscape
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783399522.99/warc/CC-MAIN-20160624154959-00201-ip-10-164-35-72.ec2.internal.warc.gz
en
0.891999
199
2.84375
3
Why Multi-disciplinary Care is Important in Sarcomas

The National Comprehensive Cancer Network (NCCN) in the US and the National Institute for Health and Clinical Excellence (NICE) have issued detailed recommendations for the management of patients with sarcoma. These are accessible at www.nccn.org and www.nice.org.uk respectively. A critical component of these recommendations is that all patients with sarcoma should be treated at centres with appropriate expertise and relevant multidisciplinary teams (MDT). The evidence for these recommendations is available from NICE (http://www.nice.org.uk/nicemedia/pdf/SarcomaFullGuidance.pdf). As these recommendations are based on the different populations and health care systems in the US and UK, the Australasian Sarcoma Study Group has slightly modified these for the Australian setting.

There is sound evidence to support the following conclusions1:
- Radiology: That specialist review of the imaging in people with suspected sarcoma reduces clinical error rates and delay in diagnosis (p 43).
- Pathology: That the histopathological diagnosis of sarcoma is often changed on review by an expert pathologist; this includes the diagnosis of sarcoma, sarcoma sub-type and tumour grade (p 50).
- MDT: That being treated at a sarcoma MDT centre results in better overall survival, increased disease-free survival, reduced risk of amputation, better conformity to clinical practice guidelines and greater use of preoperative imaging and biopsy (p 58-59).
- Supportive care: That coordinated supportive care is associated with improved patient quality of life, fewer days in hospital, fewer home visits required and better physical, social and emotional outcomes (p 82).
- Participation in clinical trials correlates with survival rates for patients with sarcoma (Bleyer et al.2).

Please note that the recommendations for centralisation of care for paediatric sarcomas (osteosarcoma, Ewing sarcoma/primitive neuroectodermal tumour, rhabdomyosarcoma) are particularly important, owing to: the rarity of these cancers; the lack of a large evidence base for treatment; the complexity and intensity of the treatment regimens; and the high mortality from these cancer types. As a consequence, the ASSG strongly recommends that all patients with paediatric sarcomas under the age of 16 years be treated at a paediatric cancer centre, and that older patients be treated at a specialist sarcoma centre. For other sarcoma types, the recommendation is that a specialist sarcoma multidisciplinary team assesses patients even if subsequent treatment is carried out elsewhere.

Modified key recommendations are based on the National Institute for Health and Clinical Excellence Guidance on Cancer Services3:
- All patients with a confirmed diagnosis of bone or soft tissue sarcoma (except children with certain soft tissue sarcomas) should have their care supervised by, or in conjunction with, a sarcoma multidisciplinary team.
- A soft tissue sarcoma MDT should meet minimum criteria for caseload. In the UK this is specified as at least 100 new patients with soft tissue sarcoma per year, or at least 25 new patients with bone sarcoma per year (p 54). Given the difference in population and geographic size, these figures must be modified for the Australian setting.
- The sarcoma MDT should include:
  - A specialist sarcoma pathologist and/or radiologist who is able to review each patient's pathology and radiology
  - A surgeon who is a member of a sarcoma MDT, or a surgeon with tumour site-specific or age-appropriate skills, in consultation with the sarcoma MDT
  - Medical and radiation oncology expertise. Chemotherapy and radiotherapy should be carried out by appropriate specialists as recommended by a sarcoma MDT
  - Dedicated ancillary supportive care, which includes nursing, physio- and occupational therapy, age-appropriate psychosocial support and palliative care
  - Access to relevant clinical trials.
- All sarcoma MDTs should participate in national audit, data collection and training.
- Patients with functional disabilities as a consequence of their sarcoma should have timely access to appropriate support and rehabilitation services.

1. National Institute for Health and Clinical Excellence, 2006. Improving outcomes for people with sarcoma. NICE guidance on cancer services.
2. Bleyer, A., Montello, M., Budd, T., and Saxman, S. 2005. National survival trends of young adults with sarcoma: lack of progress is associated with lack of clinical trial participation. Cancer 103:1891-1897.
3. National Institute for Health and Clinical Excellence, 2006. Improving outcomes for people with sarcoma. NICE guidance on cancer services.

Last update: 22-Jul-2011 05:30 PM
<urn:uuid:0f76d6e0-289b-4bd6-a45d-e6f22de1448c>
CC-MAIN-2016-26
http://www.australiansarcomagroup.org/multi-disciplinary-care.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783403502.46/warc/CC-MAIN-20160624155003-00170-ip-10-164-35-72.ec2.internal.warc.gz
en
0.901267
1,030
2.65625
3
Breast cancer is a malignant tumor that starts from cells of the breast. A malignant tumor is a group of cancer cells that may grow into surrounding tissues or spread (metastasize) to distant areas of the body. The disease occurs almost entirely in women, but men can get it too. Here are some interesting facts about breast cancer.

1. One in eight women develops breast cancer over a lifetime of 80 years.
2. Breast cancer is the second leading cause of cancer deaths among women.
3. Besides skin cancer, breast cancer is the most commonly diagnosed cancer among American women. More than 1 in 4 cancers are breast cancer.
4. As of 2010, there were more than 2.5 million breast cancer survivors in the U.S.
5. A woman's risk of breast cancer approximately doubles if she has a first-degree relative (mother, sister, daughter) who has been diagnosed with breast cancer.
6. About 20-30% of women diagnosed with breast cancer have a family history of breast cancer.
7. The chance of having breast cancer for a woman in her nineties is about 1 in 9.
8. Men also get breast cancer; however, men account for less than 1% of all breast cancer cases.
9. The youngest known survivor of breast cancer is Aleisha Hunter from Ontario, Canada. She underwent a complete mastectomy to treat her juvenile strain of breast cancer at the age of three in 2010.
10. There's a link between increased weight and breast cancer, especially for women who gained weight in adolescence or after menopause.
11. Every 13 minutes a woman dies of breast cancer.
12. Ninety-six percent of women who find and treat breast cancer early will be cancer-free after five years.
13. You are never too young to develop breast cancer. Breast self-exams should begin by the age of twenty.
14. Oral contraceptives may cause a slight increase in breast cancer risk.
<urn:uuid:b1cfee7a-2dba-4b06-a290-b5b6d291cf05>
CC-MAIN-2016-26
http://healthlob.com/2011/06/breast-cancer-facts/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783399106.96/warc/CC-MAIN-20160624154959-00199-ip-10-164-35-72.ec2.internal.warc.gz
en
0.953296
408
3.265625
3
Mountains of broken TV sets, obsolete computer monitors and outdated laptops that once piled up in California's garages, attics and basements have achieved a milestone. The state's electronic-waste recycling program has reached its 1 billionth pound of unwanted electronics. That's more than any other state has recycled -- and amounts to roughly 20 million TVs and computers kept out of landfills.

"In the six short years this program has been operating, California has really gotten on board with e-waste recycling," said Jeff Hunts, e-waste program manager for the state Department of Resources Recycling and Recovery. "People are understanding it's hazardous and needs to be managed responsibly."

Despite the impressive numbers, experts say California's e-waste efforts still have at least three significant shortcomings. In the years since California became the first state to pass an e-waste law, 24 other states have passed similar laws. But California remains the only state that charges consumers to fund a government-run program by paying recycling fees every time they buy a TV, laptop or monitor. The other states make the industry pay to set up recycling programs. Many set quotas for the amount each company must recycle, based on how many monitors, printers or other equipment it sells, with fines for violators.

Equipment not covered

Second, California's law only funds recycling of TVs, laptops and computer monitors and requires they be recycled in state. Other devices, such as old VCRs, printers and hard drives, are not covered and sometimes end up in developing nations like India or China, where children breaking them apart are exposed to mercury, lead, cadmium and other toxics.

"It's admirable that so much material has been collected and that it stayed out of landfills," said Sheila Davis, executive director of the Silicon Valley Toxics Coalition, a San Jose environmental group. "But it's only a small baby step to where we need to go." California needs to expand its program and embrace the producer take-back model of other states, she said. "I give California's program a D, maybe a D-plus. It's not fair to customers," Davis said. "And as many pounds of computers as they have collected, there are an equal number of loopholes for other products that aren't properly recycled."

After millions of computers were discarded leading up to the Jan. 1, 2000, "Y2K" scare, environmentalists turned up the pressure for a state law in California. In 2002, former state Sen. Byron Sher, D-Palo Alto, wrote a bill that would have required consumers to pay a fee when they bought new computers or TV sets -- similar to the deposit they pay under the state's bottle and can recycling law -- with the money funding a recycling program. But former Gov. Gray Davis vetoed it, saying it should be the computer industry's responsibility. Sher tried to negotiate a compromise with Hewlett-Packard and other computer makers. But talks fell apart. So he pushed through a similar bill to his first one, and on Sept. 24, 2003, facing a recall election in two weeks that would drive him from office, Davis signed it.

Today, the law requires consumers to pay a fee of $6 to $10, depending on the size of the screen, when they buy a new TV, laptop or computer monitor. That money funds a state-run program that pays 39 cents per pound to 52 recycling companies and 590 collection organizations, which range from private companies to charitable groups such as Goodwill Industries and the Salvation Army.

"I'm perfectly fine with it," said Brent Anderson, of Morgan Hill, who was shopping for big-screen TVs Saturday at the Fry's on Brokaw Road in San Jose.

Seeking a national law

He said the first time he encountered the recycling fee a few years back, he was a bit miffed to have to pony up an extra $10. But considering that the unit he and his father, Jim, were eyeing retails for more than $700, Anderson didn't feel it was too onerous a bite. After all, he said, motioning around the cavernous store, "There's a lot of stuff to recycle in here."

Whereas a decade ago people had to either throw their old computer in the garbage or pay up to $25 each to find a recycler to take it, today schools, civic organizations and scout troops regularly hold fundraisers asking for the old machines. California's program has paid out $436 million since 2005.

After California's law passed, however, retail giants fought similar consumer-pay laws in other states. Now environmental groups and the electronics industry both want a national law but can't agree on how strict it should be -- or who should pay. "We have to comply with a patchwork of 25 different state requirements. It's a national problem. It deserves a national approach," said Walter Alcorn, vice president for environmental affairs at the Consumer Electronics Association, an industry group.

New devices come on the market every year. So even though the state collects roughly 5 million used TVs and computers a year now, Californians replace those by buying about 9 million a year. "Like with anything, you can back-seat drive, but you've got to hand it to them. They got in and got a program started early," said Ken Taggart, vice president of ECS Refining, an electronics recycling company in Santa Clara. "But it could take some tweaks and become an even better program."

Staff writer Peter Delevett contributed to this report. Contact Paul Rogers at 408-920-5045.
<urn:uuid:ca5a36fd-88d4-42af-98e5-0d685beee2dd>
CC-MAIN-2016-26
http://www.mercurynews.com/science/ci_18210254?source=rss_viewed&nclick_check=1
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.2/warc/CC-MAIN-20160624154951-00084-ip-10-164-35-72.ec2.internal.warc.gz
en
0.969001
1,168
2.59375
3
This review appeared in the Apr-Jun 2013 vol. 170 no. 2 issue of Bibliotheca Sacra, DTS's quarterly academic journal.

Greek Prepositions from Antiquity to the Present
Oxford University Press, USA, Oxford, June 18, 2010

This monograph is a detailed diachronic study (with synchronic focuses) on Greek prepositions. The first part of the book gives foundational information. Chapter 1 explores the function of prepositions. For example, in languages such as Greek, which have both case systems and adpositions that exhibit similar functions and meanings, the adpositions have more specific distinctions (p. 17). Although similar, the two phenomena are not identical. They may work together to form or clarify relational meaning (p. 24).

In chapter 2 Bortone discusses the meaning of prepositions and cases, and introduces the "localistic hypothesis," which suggests that prepositions originated as function words to indicate concrete spatial or local relationships. Bortone also addresses the confusing issue of prepositions with multiple, sometimes apparently contradictory, meanings. He follows a "family relations" model in which new elements, as they are created, may show little resemblance to those in the early stages of development (pp. 71–75). Instead of seeing the meanings of a preposition as all within one inclusive semantic domain (which can be difficult to do with diverse meanings), they are related, Bortone suggests, like links in a chain (pp. 73–74, 77–78). Chapter 3 discusses the development of adpositions. This section of the book will be of interest to both students of New Testament grammar and those interested in linguistics. Bortone draws his examples from many languages (including Hebrew).

Part 2 is the larger section of the book and is a diachronic description of prepositions throughout the written history of Greek. The chapter on Hellenistic Greek is the shortest (pp. 171–94) but will be most immediately useful for readers of this review. However, much in the chapter on classical Greek (pp. 107–70) will also be of interest. Many of the prepositions familiar to New Testament students are discussed here. Although a preposition's meaning is not always identical in the classical and Hellenistic periods, there are enough similarities to enrich one's understanding of the prepositions in the New Testament.

The chapter on Hellenistic (Koiné) Greek builds on the previous chapter. Bortone notes problems with "Hellenistic" Greek (pp. 171–73). The shadow of classical Greek is large and Hellenistic Greek covers a huge range of usages, both geographically and ethnically. Bortone notes that the Septuagint is the longest extant text of this period. Although Semitic influence exists, Bortone correctly notes that the Septuagint is a true example of "common" Greek. It was the type spoken during this period (pp. 173–74). Also Bortone suggests that the Septuagint has been underrated and that it was "more familiar to uneducated Greeks than classical literature" (pp. 175–76). In terms of Greek language development, the Septuagint was very influential. It was "a point of departure, the beginning of new Greek usage" (p. 176).

Bortone focuses on where the prepositions in Hellenistic Greek differ from classical usage. The meaning of prepositions is not discussed in depth. Rather, conclusions of comparative research within the development of the language are noted. Concerning this period, Bortone makes ten observations (pp. 179–94): (1) Preposition usage increases (against case).
(2) The use of "improper" prepositions increases. (3) Dative case usage decreases. (4) Fewer cases are governed by prepositions. (5) The meaning of the cases that are governed by prepositions decreases. (6) Some prepositions are fading out, especially with pairs (e.g., σύν was being replaced by μετά). (7) Few new prepositions are created. (8) Any new prepositions tend to have "local" senses (e.g., ἄπωθεν, ὑποκάτω, ὀπίσω). (9) Improper prepositions sometimes are immediately followed by simple prepositions (mainly among literary authors [e.g., Acts 17:24; Polybius 11.20.1]). (10) Some Hellenistic features were not continued in later Greek.

While this chapter is a descriptive discussion of Hellenistic prepositions in the context of the history of Greek, one could wish for a little more precision when dealing with biblical Greek. Although Semitic influence is acknowledged, no significant discussion of the Septuagint as a translation is undertaken. Thus both the Septuagint and the New Testament are essentially treated the same (see e.g., p. 193). Although both are genuine examples of common Greek, more discussion of the direct influence of the translation process (rather than only Semitic influence) would have been welcomed.

The final two chapters discuss prepositions and cases in medieval (Byzantine) and modern Greek. Although the longest period in the language history (approximately one thousand years), medieval Greek is the least studied (p. 195). The book concludes with an epilogue (pp. 302–3), a lengthy bibliography (pp. 304–35), and an index of subjects and Greek terms (pp. 337–45).

The first three chapters of this fascinating book will be of most value for New Testament students who have a solid background in Greek, but even they may find the volume difficult. The study will, however, make the reader more confident in handling prepositions. It will also be of interest to linguists. The countless examples from other languages will enhance its value for Bible translation because it could provide some insight into target languages. In addition Bortone contributes to general linguistics by validating the localistic hypothesis in a single language. This can be confirmed by further studies in other languages.

—Joseph D. Fantin
<urn:uuid:739aecf2-f53a-4652-85d6-5a037e8b46f6>
CC-MAIN-2016-26
http://www.dts.edu/reviews/bortone-pietro-greek-prepositions/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783403502.46/warc/CC-MAIN-20160624155003-00147-ip-10-164-35-72.ec2.internal.warc.gz
en
0.937267
1,323
2.875
3
Does fear of God make students more honest? Apparently so, according to a new study by psychology researchers at the University of British Columbia. The study found that students who think God is a mean and punishing figure are less likely to cheat than those who think God is caring and forgiving. "Taken together, our findings demonstrate, at least in some preliminary way, that religious beliefs do have an effect on moral behavior, but what matters more than whether you believe in a god is what kind of god you believe in," said one of the researchers, Azim Shariff.

The study found no difference in attitudes on cheating between non-believers and those who believe in a forgiving god. So it is not religiosity per se that makes the key difference, but a punitive kind of religion. However, other studies have found that religious students in general are less likely to cheat. For instance, a study published in 2005 by David A. Rettinger and Augustus E. Jordan reported that "more religiosity correlates with reduced reports of cheating in all courses." One reason, these authors said, was that religious students were less likely to have the grade orientation associated with cheating.
<urn:uuid:7e4ca8b1-1405-4e1b-b649-b27e1b823db2>
CC-MAIN-2016-26
http://www.cheatingculture.com/academic-dishonesty/2011/6/3/views-of-god-influence-cheating-behavior.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397842.93/warc/CC-MAIN-20160624154957-00045-ip-10-164-35-72.ec2.internal.warc.gz
en
0.977063
237
2.609375
3
Between 1940 and 1943, the number of farm workers in the United States noticeably decreased because of armed forces manpower requirements and competition with higher paying jobs in the defense industries. At the same time, farmers were asked to increase production as part of the successful prosecution of World War II. By 1943, the successful harvest of the nation's food supply was in jeopardy. On April 29, 1943, the 78th U.S. Congress approved Public Law 45, the Farm Labor Supply Appropriation Act, in order to "assist farmers in producing vital food by making labor available at the time and place it was most needed." The state's agricultural extension services assumed responsibility for the emergency labor programs, primarily by coordinating and overseeing labor recruitment, training and placement. In Oregon, the Emergency Farm Labor Service was established by the Oregon State College Extension Service. Between 1943 and 1947, Oregon's Emergency Farm Labor Service assisted with over 900,000 placements on the state's farms, trained thousands of workers of all ages, and managed nine farm labor camps. Farm laborers included urban youth and women, soldiers, white collar professionals, displaced Japanese-Americans, returning war veterans, workers from other states, migrant workers from Mexico and Jamaica, and even German prisoners of war. This exhibit is a tribute to everyone who was a part of this unparalleled wartime effort -- the farmers, Extension Service personnel, and, most of all, the emergency farm workers.
<urn:uuid:31efb9f9-f81e-4240-998f-d3a01be7b63c>
CC-MAIN-2016-26
http://scarc.library.oregonstate.edu/omeka/exhibits/show/fighters
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783398869.97/warc/CC-MAIN-20160624154958-00045-ip-10-164-35-72.ec2.internal.warc.gz
en
0.954688
286
3.75
4
Society > History > By Time Period > Twentieth Century > Wars and Conflicts > World War II > By Country > United States

Behind Japanese Lines in Burma: Retells the story of Detachment 101, which gathered intelligence, harassed the Japanese through guerrilla actions, identified targets for the Army Air Force to bomb, and rescued downed Allied airmen from April, 1942.

A Brief History of the U.S. Army in World War II: An overview of the war in Europe and the Pacific, including maps.

Declarations of a State of War with Japan, Germany, and Italy: Includes declarations of war, proclamations affecting enemy aliens, internment and radio speeches by President Franklin Roosevelt (The Avalon Project at Yale University Law School).

Greenland Censorship & WW2 APO's: Study of US military censorship on Greenland 1941-45. Unit, base and naval censors used by patrol cutters and shore bases. Includes Allied and German Operations in the Arctic Region.

A History of Victory Gardening: A multimedia presentation containing historical information about "Victory Gardening" efforts in support of the war.

The Information War in the Pacific, 1945: Documents the activities of the US Office of War Information and the role it played in the surrender of the Japanese empire.

Kodiak Alaska Military History: WWII and Cold War structures remaining on Kodiak Island explained.

Navajo Code Talkers: An account of the Navajo volunteers employed by the United States Marine Corps in communications security roles in the Pacific Theater during World War Two.

The OSS Society: The first organized effort to implement a centralized system of strategic intelligence, including a newsletter and event archive.

Shipyard Day Care Centers of World War II: Research project into the Kaiser Company's provision of shipyard child care centers for working mothers during the Second World War.

Tracing American Airborne's German Heritage: Examines the German roots of the United States' airborne units of World War Two.

U.S. Combat Medals of WWII: Description of American medals awarded for valor in WW II. Includes pictures and sample citations.

Last update: April 21, 2016 at 10:05:03 UTC
<urn:uuid:42a91615-aa5e-4b8d-9892-92558cbf4fe2>
CC-MAIN-2016-26
http://www.dmoz.org/Society/History/By_Time_Period/Twentieth_Century/Wars_and_Conflicts/World_War_II/By_Country/United_States/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783403826.29/warc/CC-MAIN-20160624155003-00142-ip-10-164-35-72.ec2.internal.warc.gz
en
0.870812
446
3.328125
3
Graphic Solution of a Second-Order Differential Equation

This Demonstration shows the Euler–Cauchy method for approximating the solution of an initial value problem with a second-order differential equation of the form y'' = f(x, y, y'), with derivatives from now on always taken with respect to x. Such an equation can be written as a pair of first-order equations, y' = z and z' = f(x, y, z). More generally, the method to be described works for any system of two first-order differential equations y' = F(x, y, z), z' = G(x, y, z), with initial conditions y(x0) = y0 and z(x0) = z0. The particular kinds of systems used as examples here, second-order equations y'' = f(x, y, y'), reduce to that general type by introducing z = y' to get the system y' = z, z' = f(x, y, z). The method consists of simultaneously calculating approximations of y (cyan) and z (green): y_{k+1} = y_k + h F(x_k, y_k, z_k) and z_{k+1} = z_k + h G(x_k, y_k, z_k), where h is the step size and x_{k+1} = x_k + h. The pairs (x_k, y_k) are the coordinates of points P_0, P_1, …, P_n that form the so-called Euler's polygonal line that approximates the graph of the function y(x). In the same way, the pairs (x_k, z_k) are the coordinates of points Q_0, Q_1, …, Q_n that form Euler's polygonal line, which approximates the graph of the function z(x). The Euler method is the most basic approximation method. The Demonstration compares it with more advanced methods given by the built-in Mathematica function NDSolve.
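The Demonstration itself is written in Mathematica, but the construction is easy to reproduce elsewhere. The sketch below is an illustrative Python version under assumed settings: it builds the Euler polygonal lines for a sample problem y'' = -y, y(0) = 1, y'(0) = 0 (not necessarily the equation used in the Demonstration) and compares the endpoint with SciPy's adaptive solver, which plays the role NDSolve plays in the original.

```python
# Euler polygonal lines for y'' = f(x, y, y'), rewritten as y' = z, z' = f(x, y, z).
import numpy as np
from scipy.integrate import solve_ivp

def f(x, y, z):
    return -y  # sample right-hand side: y'' = -y, whose exact solution here is cos(x)

def euler_polygon(f, x0, y0, z0, h, n):
    """Return arrays x, y, z holding the vertices of the two Euler polygonal lines."""
    x = np.zeros(n + 1); y = np.zeros(n + 1); z = np.zeros(n + 1)
    x[0], y[0], z[0] = x0, y0, z0
    for k in range(n):
        x[k + 1] = x[k] + h
        y[k + 1] = y[k] + h * z[k]                 # y' = z
        z[k + 1] = z[k] + h * f(x[k], y[k], z[k])  # z' = f(x, y, z)
    return x, y, z

x, y, z = euler_polygon(f, x0=0.0, y0=1.0, z0=0.0, h=0.1, n=50)

# Reference solution from an adaptive higher-order method (the role NDSolve plays).
ref = solve_ivp(lambda t, s: [s[1], f(t, s[0], s[1])],
                (0.0, x[-1]), [1.0, 0.0], rtol=1e-9, atol=1e-12, dense_output=True)

print(f"Euler estimate of y({x[-1]:.1f}): {y[-1]:.5f}")
print(f"Adaptive solver:         {ref.sol(x[-1])[0]:.5f}")
print(f"Exact value cos({x[-1]:.1f}):    {np.cos(x[-1]):.5f}")
```

Plotting the pairs (x_k, y_k) and (x_k, z_k) from this run gives the cyan and green polygonal lines described above, and halving h roughly halves the endpoint error, which is why the Demonstration contrasts the basic Euler method with more advanced schemes.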
<urn:uuid:9494d8eb-47b5-4556-906d-bb6f2c2eaa6f>
CC-MAIN-2016-26
http://demonstrations.wolfram.com/GraphicSolutionOfASecondOrderDifferentialEquation/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397744.64/warc/CC-MAIN-20160624154957-00146-ip-10-164-35-72.ec2.internal.warc.gz
en
0.914388
251
3.453125
3
Additional needs, often referred to as special needs, is a term applied to those pupils who need extra support in order to attain a level of achievement or realise their full potential. They may be disabled (mentally or physically), may be financially disadvantaged, may be carers, may be home-schoolers, may be gifted or talented, or may be immigrants for whom English is a second language. Additional help will be given depending on need but may include: help with reading, writing and comprehension, behaviour management, anger management, organisational skills, social integration and emotional literacy. The pages on this site aim to offer help and guidance for teachers, parents and pupils. Please use the menu above or the links below to browse our site.

This content is generously provided by Pat Stanley. Pat Stanley is widely regarded as a top cell phone technical genius, specializing in such fields as cell phone tracking and spy technology, cell phone location and location services for the cell phone tracking industry. Issues with anger can often result in strained relationships with both parents and employers. When strains such as this present themselves, it is always a good idea to keep track of and monitor those involved in situations such as these. If your teenager has become involved in a precarious situation, perhaps has anger issues and is in need of anger management, these situations should not be ignored. Employers may also find themselves dealing with negligent or careless employees. For employees who have shown good work ethic and trustworthiness over the years, employers may feel compelled to help them. A good tool to help parents monitor children and employers to monitor employees is a cell phone monitoring system. Capable cell phone monitoring and spy software can give you insight into what is truly going on in the minds of these people. Having the information available to you can greatly assist you in helping those who are affected by stress and an inability to effectively manage their anger and anxiety.
<urn:uuid:8a10b505-ae80-4067-8069-67fa04cd4ab2>
CC-MAIN-2016-26
http://www.additionalneeds.net/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783399106.96/warc/CC-MAIN-20160624154959-00086-ip-10-164-35-72.ec2.internal.warc.gz
en
0.942
425
2.96875
3
Everyone’s familiar with fishfinders; they’ve been around for years. But if your boat operates in deep water and you need outstanding resolution for, say, serious offshore fishing, your boat should have a modular electronics network composed of individual components instead of an all-in-one fishfinder. A key component of such a system is the sounder module, which powers and controls the transducer and feeds data from it to the network display. That data contains the bottom profile and images of fish and any other objects in the water column, images that are much more accurate than those from a fishfinder. A properly selected, installed, and working sounder module can not only help you catch more fish, it can also improve your navigation skills by giving you accurate depth and bottom-contour information that you can compare with paper charts. In that regard you could say that a sounder module is a piece of safety equipment. The sounder module does all this by sending a high-power, high-frequency signal to the depth transducer, which may be mounted on the boat’s transom, pass through the hull, or be bonded to the inside of the hull. Once it receives this signal the transducer sends a “ping” that travels down through the water column until it hits something—the bottom, a piece of structure, or a fish—at which point it travels back up to the transducer. In the meantime, after having transmitted the ping, the transducer has turned into a receiver; it’s now listening for this tiny return signal. When it’s received it sends the signal back to the processing unit in the sounder module. Since the processor knows when the ping was sent, when the return signal was received, and the speed of sound through water, it can accurately calculate the distance—or more accurately, the depth. By further analyzing the input, the module can also determine which echoes are fish and which are bottom, since the quality of an echo is partly determined by the target’s density. Higher-quality systems are able to further break down the return signal into shapes that are accurate enough to let you determine the number and even the species of fish. Of course, these images eventually appear on some kind of network display, which is independent of, but usually must be compatible with, the sounder module. Although most high-speed marine networks are based on Ethernet, they are actually proprietary, which is why you generally cannot attach a sounder module from one manufacturer to the network or display of another. The sounder module’s output power partially determines the maximum depth of the water the system can operate in. Because the transducer changes from transmitting a high-power pulse to receiving a small signal, the amount of interference or “noise” generated by the module is critical to how well the quality of this signal is preserved. A well-designed, low-power system with a low-noise sounder module can read deeper and detect more fish than a poorly designed, high-power sounder with high noise. Ping frequency is measured in kilohertz or kHz. Higher-frequency units are typically designed for use in shallower water, lower-frequency ones for deeper water. Specifically, sounder modules typically support 200 kHz for shallow water and 50 kHz, 38 kHz, or 28 kHz for deep water. Lower-frequency transducers usually cost a good deal more than higher-frequency ones. This article originally appeared in the March 2009 issue of Power & Motoryacht magazine.
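The travel-time arithmetic described above is simple enough to sketch. The snippet below is an illustration only, using a nominal 1,500 m/s sound speed and made-up echo times; a real sounder module also corrects for water temperature, salinity, transducer beam width, and other factors that are ignored here.

```python
# Depth from a two-way echo: the ping goes down and back, so the one-way
# distance is (speed of sound in water * round-trip time) / 2.
SPEED_OF_SOUND_WATER = 1500.0  # m/s, a common nominal value; varies with temperature and salinity

def depth_from_echo(round_trip_s, sound_speed=SPEED_OF_SOUND_WATER):
    """Estimate depth (or range to a target) in meters from the round-trip echo time."""
    return sound_speed * round_trip_s / 2.0

def wait_for_depth(depth_m, sound_speed=SPEED_OF_SOUND_WATER):
    """How long the module must listen before an echo from a given depth can return."""
    return 2.0 * depth_m / sound_speed

for t in (0.013, 0.27, 1.10):  # made-up round-trip times in seconds
    print(f"echo after {t*1000:7.1f} ms  ->  about {depth_from_echo(t):6.1f} m")

# Deeper water simply means a longer listening window before the next ping can go out:
print(f"minimum wait for a 150 m bottom: {wait_for_depth(150.0):.2f} s")
```

The same relationship shows why the module must timestamp each transmitted ping precisely: without knowing exactly when the pulse left the transducer, the round-trip time, and therefore the depth, cannot be computed.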
<urn:uuid:f15941bd-0000-4d6e-95b6-60b8c3f30a03>
CC-MAIN-2016-26
http://www.powerandmotoryacht.com/fish-stories/send-strong-signal
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783392099.27/warc/CC-MAIN-20160624154952-00088-ip-10-164-35-72.ec2.internal.warc.gz
en
0.949589
736
2.84375
3
Why does Vi have multiple modes? I'm not interested in the functions of the modes unless that was a reason to have different modes. I'm only interested in the design / engineering reasons for multiple modes.

Considering the primary two modes, COMMAND and INSERT, demonstrates the purpose of a modal interface. In INSERT mode you can type normally, inserting text into the document. You can bind keys to perform special functions, although these are generally limited in complexity.

COMMAND mode is sort of like an unlimited special function. Something similar could have been implemented using Ctrl, so that you hold it down and press some other special key, and in fact most generic editors do work that way: you use ctrl-x to cut, ctrl-v to paste, etc. However, this method limits what you can do; it would be a bit of a pain to hold down ctrl and type "open myfile.txt". GUI editors and some TUI editors usually get around this with drop-down menus. However, that's still limited: if you have a lot of features you end up needing awkward cascading nested sets of menus. There is of course an advantage to all this, and it's probably a major reason the menu and ctrl driven approach is so widespread.

It could be observed that the editing aspect of many large, complex, graphical applications such as integrated development environments (IDEs) and word processors is deficient compared to vim because of the simple, non-modal, menu and ctrl driven interface. Considered this way, the reason vim uses modes is to make it a more powerful tool. If it did not have a command mode and an insert mode, it would not have been able to distinguish between operations on the text and the text itself.
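To make the closing point concrete, the distinction between operations on the text and the text itself, here is a toy modal buffer in Python. It is purely illustrative and has nothing to do with how vi is actually implemented; it only shows how a single mode flag lets the same keystrokes mean either commands or literal input.

```python
# Toy modal "editor": identical keystrokes are either commands or literal text,
# depending on the current mode. Illustrative only, not vi's real design.
class ModalBuffer:
    def __init__(self):
        self.text = []          # document contents, one character per entry
        self.mode = "COMMAND"   # start in command mode, as vi does

    def feed(self, key):
        if self.mode == "INSERT":
            if key == "ESC":
                self.mode = "COMMAND"   # leave insert mode
            else:
                self.text.append(key)   # the keystroke is literal text
        else:  # COMMAND mode: keystrokes are operations on the text
            if key == "i":
                self.mode = "INSERT"    # switch to inserting text
            elif key == "x" and self.text:
                self.text.pop()         # delete a character
            elif key == "D":
                self.text.clear()       # delete everything

    def __str__(self):
        return "".join(self.text)

buf = ModalBuffer()
for key in ["i", "h", "i", "ESC", "x"]:  # insert "hi", return to command mode, delete one char
    buf.feed(key)
print(buf, buf.mode)   # prints: h COMMAND
```

Notice that the command layer is an open-ended, keyboard-wide namespace rather than a fixed set of ctrl chords or menu entries, which is exactly the extra headroom the answer credits vi's modes with providing.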
<urn:uuid:f72033ad-25b5-4c25-b918-40e16d55fdd5>
CC-MAIN-2016-26
http://unix.stackexchange.com/questions/139798/why-does-vi-have-multiple-modes
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395346.6/warc/CC-MAIN-20160624154955-00201-ip-10-164-35-72.ec2.internal.warc.gz
en
0.949042
358
2.734375
3
Melbourne, AUSTRALIA - Monash Institute of Medical Research scientists have found a protein in the female reproductive tract that protects against sexually transmitted infections (STIs) such as chlamydia and herpes simplex virus (HSV). It is estimated that 450 million people worldwide are newly infected with STIs each year. Chlamydia has the highest infection rate of all the STIs reported in Australia.

The research, published today in the prestigious journal Science, was led by Prof Paul Hertzog, Director of MIMR's Centre for Innate Immunity and Infectious Diseases, and his team, including Ka Yee Fung and Niamh Mangan. The team discovered a protein, which they called Interferon epsilon (IFNe), and showed it plays an important role in protecting females against infections. It could have clinical potential to determine which women may be more or less susceptible to diseases such as STIs, or to boost protective immunity. IFNe could also be used potentially to treat STIs or other inflammatory diseases.

"One way this protein is unusual is because of the way it's produced," Prof Hertzog said. "Most proteins protecting us against infection are produced only after we're exposed to a virus or bacteria.

"But this protein is produced normally and is instead regulated by hormones so its levels change during the oestrous cycle (an animal's menstrual cycle) and is switched off at implantation in pregnancy and at other times like menopause," Prof Hertzog said.

"Some of these times when normal IFNe is lowest, correlate with when women are most susceptible to STIs so this might be an important link to new therapeutic opportunities - IFNe follows different rules to normal immuno-modulatory proteins, and therefore this might also be important to vaccines and the way they're formulated to boost our protective immunity.

"Since this protein boosts female reproductive tract immune responses, it's likely, although we haven't addressed it directly, that this finding will be important for other infectious diseases like HIV and HPV and other diseases."

Prof Hertzog said STIs are a critical global health and socioeconomic problem. According to the 2011 Australian Bureau of Statistics, chlamydia has the highest infection rates of the notifiable STIs, and infection rates have more than tripled over the past decade. Men and women in the 15-19-year age group saw the largest increase in infection rates. According to these statistics, chlamydia affects more women than men, with 46,636 women aged over 15 diagnosed compared with 33,197 men aged 15 and over.

Prof Hertzog said the next step for this research would be to work towards clinical studies within the next five years. He is also keen to see whether this work can be applied across other diseases including cancer, female reproductive tract related disorders including endometriosis and pelvic inflammatory disease, as well as other non-reproductive tract diseases.

This research was carried out in collaboration with partners at other departments of Monash University, the University of Newcastle, University of Adelaide, Peter MacCallum Cancer Centre and the University of Oklahoma.

For more information contact: MIMR Communications Manager, Mobile: +61 3 408 267 346
<urn:uuid:00ba2c34-d59c-48bb-a695-b70ea82999b0>
CC-MAIN-2016-26
http://www.eurekalert.org/pub_releases/2013-02/miom-mrf022213.php
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783393442.26/warc/CC-MAIN-20160624154953-00128-ip-10-164-35-72.ec2.internal.warc.gz
en
0.955396
661
3.140625
3
Wednesday, July 09, 2008 empire of the roaches Pixar's post-apocalyptic love story Wall-E finished No. 2 at the box office over the Fourth of July weekend after hauling in $65 million the weekend before. The film depicts a future Earth abandoned by humans, blanketed in garbage, and nearly devoid of life. At the outset, Wall-E, a robot, has but one companion: a friendly cockroach. How did we come to believe that cockroaches will outlive everything else on Earth? The cockroach survival myth seems to have originated with the development of the atom bomb. In The Cockroach Papers: A Compendium of History and Lore, journalist Richard Schweid notes that roaches were reported to have survived the blasts at Hiroshima and Nagasaki, leading some to believe that they would inherit the Earth after a nuclear war. This idea spread during the 1960s, in part due to its dissemination by anti-nuclear activists. For example, a famous advertisement sponsored by the National Committee for a Sane Nuclear Policy and referenced in a 1968 New York Times article read, in part, "A nuclear war, if it comes, will not be won by the Americans … the Russians … the Chinese. The winner of World War III will be the cockroach." more from Slate here. Posted by Morgan Meis at 03:45 PM | Permalink
<urn:uuid:5faa9ab6-e094-49aa-96a3-eb7d170041b7>
CC-MAIN-2016-26
http://3quarksdaily.blogs.com/3quarksdaily/2008/07/empire-of-the-r.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395548.53/warc/CC-MAIN-20160624154955-00070-ip-10-164-35-72.ec2.internal.warc.gz
en
0.955655
286
2.984375
3
From Socrates to Nietzsche Farrar, Straus and Giroux, Hardcover, 9780374150853, 432pp. Publication Date: January 4, 2011 A New York Times Notable Book for 2011 We all want to know how to live. But before the good life was reduced to ten easy steps or a prescription from the doctor, philosophers offered arresting answers to the most fundamental questions about who we are and what makes for a life worth living. In Examined Lives, James Miller returns to this vibrant tradition with short, lively biographies of twelve famous philosophers. Socrates spent his life examining himself and the assumptions of others. His most famous student, Plato, risked his reputation to tutor a tyrant. Diogenes carried a bright lamp in broad daylight and announced he was “looking for a man.” Aristotle’s alliance with Alexander the Great presaged Seneca’s complex role in the court of the Roman Emperor Nero. Augustine discovered God within himself. Montaigne and Descartes struggled to explore their deepest convictions in eras of murderous religious warfare. Rousseau aspired to a life of perfect virtue. Kant elaborated a new ideal of autonomy. Emerson successfully preached a gospel of self-reliance for the new American nation. And Nietzsche tried “to compose into one and bring together what is fragment and riddle and dreadful chance in man,” before he lapsed into catatonic madness. With a flair for paradox and rich anecdote, Examined Lives is a book that confirms the continuing relevance of philosophy today—and explores the most urgent questions about what it means to live a good life. James Miller is a professor of politics and the chair of liberal studies at the New School for Social Research. He is the author of The Passion of Michel Foucault and Flowers in the Dustbin: The Rise of Rock & Roll, 1947–1977, among other books. He lives in Manhattan. Praise for Examined Lives “Fascinating. . . Miller does not rest with digging out petty failings or moments of hypocrisy. He shows us philosophers becoming ever more inclined to reflect on these failings, and suggests that this makes their lives more rather than less worth studying.”—Sarah Bakewell, The New York Times Review of Books “Reading Jim Miller's Examined Lives is like watching Roger Federer play tennis. The graceful movement of his mind is a joy to behold.”—Lewis H. Lapham “This book proves, once and for all, that philosophy isn't simply a body of knowledge, but a practice that requires a body—a living, breathing person in relentless pursuit of ever-elusive wisdom. May the Socratic passion that infuses its pages infect all who read them!” —Astra Taylor, director of Examined Life and Zizek! “James Miller has achieved an unlikely feat: he's written a page-turner about the history of philosophy. Examined Lives does for the great philosophers what Dr. Johnson did for the English poets in Brief Lives—given us biographies in miniature, portraits of the life behind the work. He makes even the toughest cases—Kant, Descartes, Nietzsche—come alive. It's a great story, and Miller is a superb story-teller.”—James Atlas, author of Bellow: A Biography “All too often, philosophers’ ideas are presented acontextually. James Miller artfully shows how philosophers’ ideas reflect their lives and often, in turn, impact those lives.” —Howard Gardner, The John H. and Elisabeth A. 
Hobbs Professor of Cognition and Education, Harvard University “James Miller’s Examined Lives is a wise and courageous book that reminds us of the sheer delight of the love of wisdom and the unsettling effect of the philosophic life. Our age is in many ways a battle between the hard-earned serenity of Montaigne and the inescapable torment of Nietzsche. Miller gives us armor in this battle!” —Cornel West, Princeton University "James Miller's Examined Lives is a tour de force of biography, history, and philosophy. Rarely have great lives and great ideas of the past been presented so accessibly or with such relevance for the present." —James Carroll, author of Constantine's Sword
<urn:uuid:afbc144f-c62c-4ec2-90ec-79b232641c5f>
CC-MAIN-2016-26
http://www.indiebound.org/book/9780374150853?aff=NPR
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396959.83/warc/CC-MAIN-20160624154956-00027-ip-10-164-35-72.ec2.internal.warc.gz
en
0.928198
900
2.546875
3
2000, Jordan Institute for Families, Vol. 4, No. 1
Guidelines for Working with an Interpreter
Given the increase in the Latino population in North Carolina, you probably work with individuals who do not speak English. If you bring in an interpreter, the following guidelines are suggested:
- Introduce yourself and the interpreter to your client(s). Describe the role each of you will serve.
- Learn basic words and phrases in the family's language.
- Avoid body language that could be misunderstood.
- Speak directly to the family and not the interpreter. Look at and listen to family members as they speak.
- Use a positive tone of voice and facial expressions. Be sincere and talk to them in a calm manner.
- Limit your remarks and questions to a few sentences between translations.
- Avoid using slang words or jargon.
- From time to time, check on the family's understanding of what you have been talking about by asking them to repeat it back to you. Avoid asking, "Do you understand?"
- Whenever possible, use materials printed in the family's language.
Reference: Lynch, E. (1992). From culture shock to cultural learning. In E. W. Lynch & M. J. Hanson (Eds.), Developing Cross-Cultural Competence: A Guide for Working with Young Children and Their Families (pp. 35-62). Baltimore, MD: Paul H. Brookes Publishing Co.
© 1999 Jordan Institute for Families
<urn:uuid:47dbb309-78ad-478d-a2a8-9c9b4440a7a7>
CC-MAIN-2016-26
http://www.practicenotes.org/vol4_no1/guidelines_working_interpreter.htm
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397428.37/warc/CC-MAIN-20160624154957-00029-ip-10-164-35-72.ec2.internal.warc.gz
en
0.867471
308
3.0625
3
St Mary’s Loch
James Hogg (born 1770 – died 21 November 1835), ‘The Ettrick Shepherd’, wrote the following concerning a water cow that was said to have lived in the 5 km long St Mary’s Loch, which is the largest natural loch in the Borders. "A farmer in Bowerhope once got a breed of her, which he kept for many years until they multiplied exceedingly; and he never had any cattle thrive so well, until once, on some outrage or disrespect on the farmer's part towards them, the old dam came out of the lake one pleasant March evening and gave such a roar that all the surrounding hills shook again, upon which her progeny, nineteen in number, followed her all quietly into the loch, and were never more seen”. This story was later repeated by James Mackinlay in his Folklore of Scottish Lochs and Springs (1893). According to local tradition, St Mary’s Loch is also thought to be bottomless. A statue of James Hogg can be found near the loch, close to Tibbie Shiels Inn.
<urn:uuid:cd829519-83ff-4a82-a54a-112c2a7a7c14>
CC-MAIN-2016-26
http://www.mysteriousbritain.co.uk/scotland/selkirkshire/folklore/st-mary%E2%80%99s-loch.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783404405.88/warc/CC-MAIN-20160624155004-00021-ip-10-164-35-72.ec2.internal.warc.gz
en
0.985046
243
2.859375
3
Tippie Professor on UI Team That Designs Tool to Improve Wikipedia Accuracy
Had you checked the Microsoft entry on Wikipedia at some point in the past, you might have learned that the company's name is Microshaft, its products are evil, and its logo is a kitten. Similarly, you may have learned from Abraham Lincoln's Wikipedia entry that he was married to Brayson Kondracki, his birth date is March 14, and Pete likes PANCAKES. None of these are correct or relevant, but they all showed up at one time or another in the online encyclopedia's listings. They are also examples of one of the challenges facing Wikipedia—finding and undoing the malicious editing that introduces facts that are incorrect, misleading, editorializing, or just plain bizarre. But a group of University of Iowa researchers is developing a new tool that can detect potential vandalism and improve the accuracy of Wikipedia entries. The tool is an algorithm that checks new edits to a page and compares them to words in the rest of the entry, then alerts an editor or page manager if something doesn't seem right. Tools that try to weed out potential vandalism already exist and are quite useful in many cases, said Si-Chi Chin, a graduate student in UI's Interdisciplinary Graduate Program in Informatics. Those tools are based on rules and screens that spot obscenities or vulgarities, or major edits, such as deletions of entire sections, or significant edits throughout a document (changing "Microsoft" to "Apple" in the Microsoft entry, for instance). But those tools are built manually, with prohibited words and phrases entered by hand, so they're time-consuming and easy to evade. They also aren't as good at catching smaller types of vandalism, which is what led Chin and her professors to develop the automated tool. They recently tested the algorithm by reviewing all the edits made to the Abraham Lincoln and Microsoft entries, Wikipedia's two most vandalized pages, to see how many of the pernicious edits it could find. That meant reviewing more than 4,000 edits in each entry. Some are still on the page, but most have been deleted and archived. As described in their paper, "Detecting Wikipedia Vandalism with Active Learning and Statistical Language Models," the statistical language model algorithm works by finding words or vocabulary patterns that it can't find elsewhere in the entry at any time since it was first written. For instance, when someone wrote "Pete loves PANCAKES" into Lincoln's section, the algorithm recognized the graffiti as potential vandalism after scanning the rest of the entry. "It determines the probability of each word appearing, and because the word 'pancakes' didn't turn up anywhere else in the history of Lincoln's entry, the algorithm saw it as something new and possible graffiti," Chin said. In all, the statistical language model algorithm caught more of the vandalism in some categories than existing tools. "Experimental results show that our approach can identify both large-scale and small-scale vandalism and is strong in filtering out various types of graffiti and misinformation instances," said Padmini Srinivasan, a professor of computer science and one of Chin's co-researchers. It detected about half of the graffiti in both the Lincoln and Microsoft entries, and about a quarter of the large-scale editing and misinformation types of vandalism.
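The detection step Chin describes amounts to a novelty test against a unigram language model built from an entry's revision history: words that have essentially zero probability under that model get flagged as possible graffiti. The UI team's actual code is not given in this article, so the sketch below is only an illustration of the general idea; the function names, the regular-expression tokenizer, and the smoothing constant alpha are assumptions, not details from the paper.

```python
import math
import re
from collections import Counter

def build_history_model(revisions):
    """Count word frequencies across all prior revisions of an entry."""
    counts = Counter()
    for text in revisions:
        counts.update(re.findall(r"[a-z']+", text.lower()))
    return counts, sum(counts.values())

def novelty_score(edit_text, counts, total, alpha=1e-6):
    """Average negative log-probability of the edit's words under the
    history model; a higher score means more 'surprising' vocabulary."""
    words = re.findall(r"[a-z']+", edit_text.lower())
    if not words:
        return 0.0
    vocab = len(counts) + 1  # +1 leaves room for unseen words
    return sum(-math.log((counts.get(w, 0) + alpha) / (total + alpha * vocab))
               for w in words) / len(words)

# Toy example: "pancakes" never appears in the entry's history, so an edit
# containing it scores far higher than a routine edit and can be flagged.
history = [
    "abraham lincoln was the sixteenth president of the united states",
    "lincoln issued the emancipation proclamation in 1863",
]
counts, total = build_history_model(history)
print(novelty_score("pete loves pancakes", counts, total))    # high score
print(novelty_score("lincoln was president", counts, total))  # low score
```

In practice a reviewer would set a threshold on this score, or feed it into the active-learning classifier the paper's title mentions, rather than flag every new word, since legitimate edits also introduce fresh vocabulary.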
It was less successful in detecting link spam (hyperlinking to irrelevant or non-existent web sites) or image attacks (replacing a portrait of Lincoln with a photo of a redwood tree, a change that managed to survive for two years and 4,000 edits). But those are particularly difficult to detect with vocabulary algorithms because the tool can’t read images, and web spam can only be found by manually clicking the link. The algorithm also has the advantage of being able to adapt to catch future forms of vandalism. Co-researcher Nick Street, professor of management sciences in the Tippie College of Business, said it’s not unlike a virus detector in that way. “It learns to recognize changes so it keeps one step ahead of the vandals,” he said. Their paper, co-authored with David Eichmann of the UI Institute of Clinical and Translational Science, was presented recently at the Fourth Workshop on Information Credibility on the Web in Raleigh, N.C. Contact: Tom Snee, UI News Services, 319-384-0010
<urn:uuid:3402d4e8-f804-479a-9aa0-dbeafbbe5cbd>
CC-MAIN-2016-26
http://tippie.uiowa.edu/news/story.cfm?id=2462
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395546.12/warc/CC-MAIN-20160624154955-00134-ip-10-164-35-72.ec2.internal.warc.gz
en
0.946736
959
2.75
3
The increasing volume of student-loan debt may have a significant effect on the nation's economy, hindering borrowers' ability to qualify for home and automobile loans, save for retirement, and pursue entrepreneurial ventures, according to a report from the federal government's consumer-watchdog agency. The report, released on Wednesday by the Consumer Financial Protection Bureau, analyzes more than 28,000 comments solicited from the public on how to create affordable repayment options for borrowers with private student loans. With collective student-loan debt now totaling more than $1-trillion and far outpacing wage growth for college graduates, borrowers are less likely to take financial risks that would normally help fuel economic growth, the report says. The report focuses on the plight of borrowers who have loans from banks and other private lenders, which have a simpler application process than federal student loans but carry fewer consumer protections. Borrowers with private student loans make up the vast majority of high-debt borrowers; in 2008 more than 80 percent of college graduates with at least $40,000 in student debt had private loans, the report says. "While federal loans remain a student's best option, the CFPB's important work highlights that many students are struggling to repay debt from private lenders," Secretary of Education Arne Duncan said in a written statement. In February the bureau began soliciting comments on how to create more flexibility for borrowers with private student loans after many frustrated borrowers said they were struggling to make ends meet with high monthly loan payments and few options to refinance or negotiate other repayment options. "Student debt has become the defining feature of their lives—the millstone around their necks that holds them back from a full financial future," Richard Cordray, the bureau's director, said in a written statement. Many of those who responded to the bureau's request for comments expressed concern about the potential "domino effect" that can result when monthly student-loan payments deplete consumers' savings, prevent other types of consumer spending, and influence how graduates make choices about their careers and living situations. In the past, individuals with student loans typically had higher rates of home ownership because the loans were a sign of higher levels of education, and therefore higher incomes. However, wages for college graduates have not increased at the same pace as student-loan debt, and borrowers today have been more hesitant to make such investments, the report says. The National Association of Home Builders said in its response to the bureau that carrying a large amount of debt can exclude potential homebuyers from access to credit by lowering the amount of mortgage debt for which they qualify. Those decisions can in turn affect the housing market by decreasing demand and preventing existing homeowners from "moving up" and purchasing their next home, the report says. High monthly loan payments and lower wages also mean that borrowers may be less able to save for retirement, or may have to rely on aging parents to help pay their debt. Equal Justice Works, a nonprofit organization for law students and lawyers, said in its response to the bureau that many borrowers the organization works with are employed but unable to plan for the future because of the size of their debt. Some borrowers have delayed marriage until they can pay off their debt, the organization's statement said. 
Others have had adverse effects on their credit, and some even struggle to afford basic necessities because their monthly payments are so high. "We've heard borrowers anguish over whether to pay their private loans or groceries and rent," the Equal Justice Works response said. "For these borrowers, private student loans have become a source of anxiety and deprivation, rather than an aid to accessing a higher education, financial stability, and a successful career." Though financial-aid administrators typically encourage students who borrow to pay for college to take out federal student loans first, many borrowers still turn to private loans. Some private student loans offer lower interest rates, and are easier to get than federal loans. Some borrowers take out private loans because they have hit the lifetime borrowing limits that most federal loans have, but are still unable to pay for college. In its report, the bureau mentions a number of policy solutions suggested by the public, including creating more options for borrowers to refinance their private student loans, more negotiable repayment plans, and a "credit clean slate," in which borrowers can repair their credit scores by adhering to a negotiated payment plan. Rohit Chopra, the bureau's loan ombudsman, said in a conference call with reporters that most borrowers asserted they were not looking to get off the hook for their debt, but rather wanted "a payment plan that works." Policy makers should look not only for a sustainable long-term solution, but also for options that can help borrowers who are struggling now, Mr. Chopra said. "As we consider the tremendous challenge posed by rising levels of student debt, it is very tempting for policy makers to focus solely on future generations of student-loan borrowers, so it can be avoided for the next group," Mr. Chopra said. "But for borrowers struggling today, that singular focus feels like rearranging deck chairs on the Titanic."
<urn:uuid:99fec6ff-18eb-4ad0-ad57-8a21219c4037>
CC-MAIN-2016-26
http://chronicle.com/article/Rising-Student-Loan-Debt-Hurts/139141/?cid=at
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783404405.88/warc/CC-MAIN-20160624155004-00029-ip-10-164-35-72.ec2.internal.warc.gz
en
0.972544
1,047
2.5625
3
- Donna Brazile: Weekend marks 150th anniversary of preliminary Emancipation Proclamation
- She says April 16, 1862, also was significant in the struggle for freedom
- On that date, Lincoln signed a bill to free the slaves in the District of Columbia
- She says it began the process, which continues today, of seeking freedom and justice
This weekend we pause to celebrate the 150th anniversary of the preliminary Emancipation Proclamation that set in motion the freeing of the slaves throughout the South. But as we focus on Abraham Lincoln's action on September 22, 1862, we should also realize that there was another crucial date in the story of freedom. Perhaps the most significant event in American history -- other than the creation of the documents that created our nation -- was Abraham Lincoln's emancipation of the slaves in the District of Columbia on April 16, eight and a half months before the historic signing of the Emancipation Proclamation. It was the most significant because it began the process of stopping the one thing that could have ended this nation: slavery. The D.C. proclamation predated the Emancipation Proclamation by months, but it received no less attention from the nation. It was a highly controversial document, and it was the first and only time that the government tried to compensate owners for freeing the enslaved laborers. Owners loyal to the United States were paid $300 per person, and each freed man, woman or child who chose to return home was paid $100 -- almost one-third of a working man's yearly wage in 1862. Compensating slaveholders was never tried again. Washington was a hub of the slave trade. Slaves were sold across from the White House in Lafayette Park. Slave pens, or jails, holding the slaves for sale were located throughout the District. Charles Ball, a slave in Washington, would take walks to Georgetown. "I frequently saw large numbers of people of my color chained together in long trains, and driven off towards the South," he wrote. Frederick Douglass was one of Lincoln's most severe critics. For those unfamiliar with him: among men, Douglass stands beside Lincoln as a towering giant of the Civil War. Born a slave, Douglass, in his own words, "stole this body" and escaped to New Bedford, Massachusetts. Self-educated, Douglass became known nationwide. Lincoln invited him to the White House four times. (Twice for meetings, once to his second inauguration and once to the Summer White House, which Douglass declined because he had a speaking engagement.) The first meeting changed Douglass' tune. After that conversation Douglass said, "I felt as if I had known him all my life." What was most important, Douglass had Lincoln's ear; the president listened. Ending the sale of slaves in the District was a thing of wonder. The act, said Douglass, was "a priceless and an unspeakable blessing." A District citizen (an African-American who was a free man all his life) wrote a friend in Baltimore, "Were I a drinker I would get on a Jolly spree today, but as a Christian I can but kneel in prayer and bless God for the privilege I've enjoyed this day." Because of the Civil War, the District's Emancipation Day was not formally celebrated until 1866, when 5,000 marched from the U.S. Capitol up Pennsylvania Avenue to Franklin Square, cheered by a crowd of 10,000 lining the way. Earlier this year, the District's non-voting member of Congress, the Honorable Eleanor Holmes Norton, led Washington's celebration of Emancipation Day.
Many years ago, her great-grandfather, Richard, lived in the city. President Lincoln's signature didn't free him. He freed himself when he walked away from a slave plantation in Virginia in the 1850s. But Lincoln's mighty pen made 3,100 men, women and children equal in the laws of the land. The congresswoman did more than simply honor a legacy; she brought the emancipation legacy home to us now. She said that the city, unlike the 3,100 who were emancipated, could free itself of congressional rule. (Washington's citizens do not have senators and have but one non-voting representative in Congress. Congress must approve all legislation Washington's City Council enacts.) Norton said, "Our freedom is locked up in the U.S. Capitol. We can claim it, or leave it there." "We can claim it, or leave it there." This fall, the descendants of slaves, millions of ethnic and religious minorities from other lands, African-Americans and immigrants -- Latinos, Asians, Europeans -- and women, as well as working- and middle-class Americans, will decide whether to claim their future. We are all in this together. All Americans will have a chance to move Lincoln's vision forward to help close the opportunity gap, to end the economic inequality resulting from government policies that favor a handful over the many who work equally hard. Abraham Lincoln would be proud to see the progress we have made. But he also would understand that there is still more work to do. Together.
<urn:uuid:9cacd467-05d3-44e3-b532-68b8dcce336b>
CC-MAIN-2016-26
http://edition.cnn.com/2012/09/22/opinion/brazile-emancipation-150-anniversary/index.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.2/warc/CC-MAIN-20160624154951-00014-ip-10-164-35-72.ec2.internal.warc.gz
en
0.976175
1,076
3.71875
4
Types of Psoriasis
The major types of psoriasis include the following.
Plaque psoriasis is the most common type. Nearly 90% of people with psoriasis have this type (footnote 1). Symptoms of plaque psoriasis include:
- Round or oval sores that may expand into patches.
- Sores that are red and covered with loose, silvery, scaling skin.
- Sores that are usually found on the elbows, knees, and trunk.
Guttate psoriasis is the second most common type, affecting up to 10% of people who have psoriasis (footnote 1). It is also called raindrop psoriasis. People with guttate psoriasis may have:
- Many small sores the size of small drops of water.
- Sores that develop suddenly, usually on the trunk, arms, legs, and scalp.
- Outbreaks of sores that may occur with a cold or other upper respiratory infection. The sores also may occur after an episode of tonsillitis or strep throat.
Psoriatic arthritis occurs in 10% to 15% of people who have psoriasis (footnote 2). Estimates vary depending on the population being studied and the method of diagnosis. Symptoms of psoriatic arthritis include:
- Joint symptoms that occur before, at the same time, or after skin symptoms develop.
- Joint symptoms in the hands and feet.
- Joint and skin symptoms that are long-lasting and return often (chronic).
Symptoms can range from mild to disabling. A chronic, low-level bacterial infection or a serious joint injury in people who have psoriasis may trigger arthritis. The joint symptoms usually improve after skin symptoms improve.
Inverse psoriasis includes sores that are:
- Large and red and very inflamed and dry. There is not a lot of scaling.
- Commonly found in the skin folds near the armpits, under the breasts and the buttocks, in the groin area, around the anus, behind the ear, and on the face.
Pustular psoriasis is another type, and its symptoms include:
- Fluid-filled (noninfectious pus) sores that appear on the palms of the hands and soles of the feet. The skin is very scaly.
- Larger affected areas of skin (plaque) or small, drop-sized sores that may also appear on other body parts.
- Nail changes.
- Flares that occur after you stop taking certain medicines (such as oral corticosteroids) or stop using certain creams (such as high-strength corticosteroid creams).
Erythroderma, or exfoliative psoriasis, is an extremely rare form that may be disabling or fatal. People with erythroderma may have:
- Symptoms that affect the entire body, not just the skin.
- Inflammation and redness on skin all over the body. The skin may shed or slough off and is usually itchy and painful.
- Chills and inability to regulate body temperature.
Primary Medical Reviewer: Adam Husney, MD - Family Medicine. Specialist Medical Reviewer: Amy McMichael, MD - Dermatology.
Current as of: February 5, 2016
To learn more about Healthwise, visit Healthwise.org.
<urn:uuid:aaae1173-4247-4b5f-b94f-8b1640778941>
CC-MAIN-2016-26
http://www.uwhealth.org/health/topic/special/psoriatic-arthritis/hw57612.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783392069.78/warc/CC-MAIN-20160624154952-00195-ip-10-164-35-72.ec2.internal.warc.gz
en
0.925783
713
2.96875
3
Big Stone Lake is a long, narrow freshwater lake and reservoir forming the border between western Minnesota and northeastern South Dakota in the USA. The lake covers 12,610 acres (51 km²) of surface area, stretching 26 miles (42 km) from end to end and averaging around 1 mile (2 km) wide, and at an elevation of 965 feet (294 m) is the lowest point in South Dakota. Big Stone Lake is the source of the Minnesota River, which flows 332 miles (534 km) to the Mississippi River. Flow from the lake to the Minnesota River is regulated by the Big Stone Lake Dam, located at the southern end of the lake. The lake is fed by the Little Minnesota River at its north end, which flows through the Traverse Gap. Big Stone was formed at the end of the last ice age when glacial Lake Agassiz drained through the gap into Glacial River Warren. The valley of that river now holds Big Stone Lake.
<urn:uuid:65287983-6817-4667-a2cc-b60f6b672f76>
CC-MAIN-2016-26
http://virtualglobetrotting.com/map/big-stone-lake/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783402699.36/warc/CC-MAIN-20160624155002-00014-ip-10-164-35-72.ec2.internal.warc.gz
en
0.952824
197
3.1875
3
During Roman birthday celebrations family and friends offered congratulations and brought gifts. The gifts included gemstone jewelry, such as the Opal, and also flowers - the first traditions and origins of the October Birth Flower.
The Language of Flowers
The language of flowers developed during the highly conservative period of the Victorian era. The Victorians were strongly restricted by the rules of etiquette, and it was considered totally inappropriate to express feelings of love or affection. The "Language of Flowers" therefore evolved: a message was assigned to a specific flower, such as the Calendula (Marigold), and a lover could then send flowers which conveyed a hidden romantic meaning.
The Meaning of the October Birth Flower, the Calendula (Marigold)
The October Birth Flower, the Calendula (Marigold), symbolizes sorrow or sympathy.
The Hidden Message of the October Birth Flower, the Calendula
The hidden message of the Birth Flower, the Calendula, so favored during the Victorian era, was "My thoughts are with you".
Colors of the Calendula (Marigold)
The colors of the October Birth Flower, the Calendula (Marigold), include the following:
Birth Month Flowers - Gifts for Special Occasions
All over the world people give Birth Month Flowers as gifts to celebrate special occasions or events. Flowers, such as the Calendula (Marigold), are always given to celebrate the birth of a new baby and included in wedding flowers or a wedding bouquet. Many people also like to give the October Birth Flower, including the Calendula (Marigold), to celebrate special events at different times and months of the year and especially during holiday periods. Knowing the flowers which are associated with the October Birth Flower and their meaning adds to the significance of the flowers. The special events where it would be appropriate to give the October Birth Flower, the Calendula (Marigold), are as follows:
- Month of October Flowers, the Calendula (Marigold), to celebrate Halloween on October 31 and Columbus Day on October 13
<urn:uuid:e6e0eb5e-01ae-4cf2-aaa3-70007544d94c>
CC-MAIN-2016-26
http://www.birthdaygems.org/birth-month-flowers/october-birth-flower.htm
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396872.10/warc/CC-MAIN-20160624154956-00177-ip-10-164-35-72.ec2.internal.warc.gz
en
0.958196
427
2.765625
3
Thousands of years ago, Roman soldiers played hopscotch to test their strength and speed, sometimes hopping over 100 feet (30.5 m) carrying heavy weights! Today, hopscotch is a backyard game enjoyed by children (and lighthearted grown-ups) all over the world. Whether you never learned this game as a child, or simply need a brushing-up, you can easily learn to play this classic game, along with some variations to make it more challenging.
Playing Classic Hopscotch
1. Draw a hopscotch design on the ground. Chalk is the best drawing medium on asphalt, patio stones or concrete. The squares should be large enough to fit one foot and to make sure that a stone thrown into the square will not bounce out too easily. While there are variants on drawing the design, a common schoolyard design is shown here.
- It is common to designate the "10" section shown here as a rest or stop area. This is where the player can take a moment to turn around and/or regain their balance. Sometimes a more creative name, like "Heaven," is given to the space.
2. Throw a flat stone or similar object (small beanbag, shell, button, plastic toy) to land on square one. It has to land inside the square without touching the border or bouncing out. If you don't get it within the lines, you lose your turn and pass the stone to the next person. If you do get it, however, go on to the next step.
- Hopscotch can be played with just one person. If that's your case, make up the rules as you see fit!
3. Hop through the squares, skipping the one you have your marker on. Each square gets one foot. Which foot you start with is up to you. You can't have more than one foot on the ground at a time, unless there are two number squares right next to each other. In that case, you can put down both feet simultaneously (one in each square). Always keep your feet inside the appropriate square(s); if you step on a line, hop on the wrong square, or step out of the square, you lose your turn.
4. Pick up the marker on your way back. When you get to the last number, turn around (remaining on one foot) and hop your way back in reverse order. While you're on the square right before the one with your marker, lean down (probably on one foot still!) and pick it up. Then, skip over that square and finish up.
5. Pass the marker on to the next person. If you completed the course with your marker on square one (and without losing your turn), then throw your marker onto square two on your next turn. Your goal is to complete the course with the marker on each square. The first person to do this wins the game!
- Ashrita Furman holds the Guinness World Record for completing the fastest game of hopscotch, coming in at 68 seconds. In case you were curious.
Variations
1. Change the shape of the hopscotch course. Make it circular, with the numbers going in a spiral direction. Maybe that's why the French call it "escargot?" Or make it a rectangle, triangle, or firework!
- It's easiest to start from the middle and go outward. That way you can make it as big as you need -- instead of ending up with your last square being microscopic!
2. Vary the size and shape of the squares. Make some of them smaller so that people have to step on their tip toes. You can even make some in the shape of a shoe to control the direction in which the person faces. Get creative!
3. Make some squares into islands. That way, a person needs to jump over a distance to get to it. Just make sure the spaces are jump-able! And who said hopscotch didn't require skill?
4. Set a time limit. Make it into a game of "speed hopscotch." The person has a certain amount of time to complete the course, or else they lose their turn. Or you could turn it into a race!
What items are needed to play hopscotch? wikiHow Contributor: Chalk and a rock.
Can I put both feet down in square 10? wikiHow Contributor: Yes, you can. Sixty or 65 years ago, children would draw the outline for 8 to resemble a semi-circle and split 9 and 10 with a middle line above the semi-circle. They then hopped on one foot in 8, then both left and right in 9 and 10 consecutively. They then did a reverse jump landing one foot in each box without touching the lines to head back down the hopscotch.
What do I do when I put my stone on one number in a space of two (like 1 and 2)? wikiHow Contributor: If your stone, or marker, is in number 1, you need to hop on one foot to square 2, and then hop to number 3, etc. If there are markers in both 1 and 2, you need to hop over both to square 3 and finish from there.
How many markers are on the hopscotch squares? wikiHow Contributor: Typically one, but variations do exist.
- You can use masking tape to make a hopscotch layout if preferred. It will lift up easily and is good for indoor games.
- The final square can be designated a "rest area" if you would like to have a break from hopping.
- Be wary of your surroundings. It's best you play on concrete rather than gravel or an uneven surface. You might get injured!
<urn:uuid:936b7627-97b9-4118-b16f-4dc2e9f691ea>
CC-MAIN-2016-26
http://www.wikihow.com/Play-Hopscotch
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396945.81/warc/CC-MAIN-20160624154956-00170-ip-10-164-35-72.ec2.internal.warc.gz
en
0.917369
1,478
3.015625
3
Skin Problems in Cats When Is It Time to See the Vet? You should visit your vet for an exam as soon as you notice any abnormality in your pet’s skin, such as excessive hair loss, flaking and scaling, redness and bald patches, or if your pet begins to excessively scratch, lick and/or bite areas on his fur. How Are Skin Problems Diagnosed? After obtaining a history and performing a thorough physical examination of your cat, your vet may perform some of the following diagnostic tests in order to find the cause of your cat’s symptoms: - Skin scraping with findings evaluated under a microscope to check for mites - ‘Tape test’ to check for parasites - Individual hair examination under a microscope - Bacterial culture and sensitivity tests - Skin biopsy - Food and other allergy testing - Blood tests to assess your cat’s overall health - Microscopic evaluation of cells to establish if bacteria or yeast are present Which Cats Are Prone to Skin Problems? Because of the wide ranges of causes, cats of all ages and breeds are susceptible to issues involving skin. Young, elderly, immunocompromised and cats living in overcrowded, stressful environments may be more susceptible to skin problems than others. How Can Skin Problems Be Prevented? - Use natural, hypoallergenic soaps and shampoos recommended for use in cats. - Brush your cat regularly to prevent matting of hair. - Feed your cat a healthy, balanced food without fillers or artificial ingredients. - Implement a flea-treatment program recommended by your veterinarian. - Thoroughly clean and vacuum your home (and remember to always throw away the bag). - Provide calm living conditions for your cat. - Your vet may prescribe skin creams and/or oral medications to prevent skin problems. How Can Skin Problems Be Treated? Ask your vet about the following treatments: - Topical products, including shampoos, dips and sprays, to prevent and treat parasites - A balanced diet to help maintain healthy skin and coat - Antibiotic or antifungal medications - A dietary supplement containing essential fatty acids - Corticosteroids and antihistamines may be prescribed to control itching. - Hypoallergenic diet for food allergies
<urn:uuid:c58f674c-682d-4851-a435-b0b78d0880eb>
CC-MAIN-2016-26
http://pets.webmd.com/cats/skin_problems_in_cats?page=2
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783393332.57/warc/CC-MAIN-20160624154953-00201-ip-10-164-35-72.ec2.internal.warc.gz
en
0.879269
490
2.625
3
We have a new job opening for a Research Assistant (RA)/Developer in the Extreme Citizen Science (ExCiteS) group, in the Department of Civil, Environmental and Geomatic Engineering at University College London.
What is 'extreme' citizen science?
Citizen Science is hardly a new concept, but during the last decade it has seen a rise in both academic and popular interest. This trend is in part driven by an increased interest in open paradigms, as well as Information and Communication Technology (ICT) innovations such as smartphones, mobile Internet and cloud computing. This has given rise to the emergence of a growing and highly diverse crop of new – and often innovative – initiatives that are being, or could be, labelled as Citizen Science [1]. Whilst there are often big differences between projects, for instance when it comes to power relations – "Who is working for whom?" – or the determination of goals and outcomes – "Who is solving whose problems?" – there is hope that, at the very least, this rediscovery of citizen science might lead to a renewed mutual interest, and perhaps understanding, between scientists and the general public.
Most citizen science initiatives are set in affluent areas of the world, and by and large they target an educated, or at least literate, public. Extreme Citizen Science aspires to extend the reach and potential of citizen science beyond this restricted context. We collectively define 'Extreme Citizen Science' as: a situated, bottom-up practice that takes into account local needs, practices and culture and works with broad networks of people to design and build new devices and knowledge creation processes that can transform the world.
[1] Related terms include: Community Science, Citizen Cyber Science, Community Sensing, Participatory Mapping, Participatory (Mobile) Sensing and so on.
<urn:uuid:ec2dc357-735f-492a-94b5-61c6dd1fc50c>
CC-MAIN-2016-26
http://www.ucl.ac.uk/excites/home-columns/full-what-is-extreme-citizen-science/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397695.90/warc/CC-MAIN-20160624154957-00070-ip-10-164-35-72.ec2.internal.warc.gz
en
0.937554
420
2.96875
3
As for the second issue, computer models that link climate effects to changes in the carbon cycle have predicted that tropical forests will survive and continue to act as a "sink" by absorbing carbon, provided that emissions can be kept under control. The efficiency of the tropical forest as a carbon sink might in fact diminish over time, but the authors expect that it will not disappear completely. The political challenges to reducing deforestation in the tropical developing world are varied and complex. Traditionally, many countries have viewed their forests as an economic resource that they have the right to harvest, much as oil- and ore-rich nations exercise the right to harvest those resources. As such, many proposed solutions are centered on direct economic incentives to reduce rates of tree clearing. However, the authors of the policy article describe low-cost measures that can enhance the success of carbon-trade systems and subsidized low-carbon development programs. For example, by strategically evaluating forest land to determine its value for other uses, developing countries can focus on clearing only areas with high agricultural value. "It will require political will and sound economic strategy to make the RED initiative work," explains Field. "But the initiative provides a big reduction in emissions at low cost. It is a good example of the kind of creative thinking that can help solve the climate problem."
<urn:uuid:4f9fe626-06bd-40cb-97a3-cbad54b65da7>
CC-MAIN-2016-26
http://news.bio-medicine.org/biology-news-3/Climate-policy-3A-Its-good-to-be-in-the-RED-1461-2/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395620.9/warc/CC-MAIN-20160624154955-00143-ip-10-164-35-72.ec2.internal.warc.gz
en
0.957763
265
3.609375
4
Cattle aren't the only hungry mouths in drought-parched Arkansas: so are grasshoppers and blister beetles. Exceptional drought covers 10.81 percent of the state, compared with 3.25 percent in the previous week's U.S. Drought Monitor map. The most intense drought covers all of Conway and Perry counties, most of Pope County and parts of Faulkner and a few other counties. "Blister beetles and grasshoppers are looking for what little green is left here," said Phil Sims, Pope County Extension staff chair. "That means row crops under irrigation." Sims has seen hordes of blister beetles nearly carpeting row crops, scurrying under short soybean plants in search of food and moisture. Blister beetles are a scourge to some livestock owners -- they are toxic to horses. Grasshoppers are also a problem. Kelly Loftin, Extension entomologist, has been fielding questions about grasshoppers for about six weeks. "At the University of Arkansas farm at Savoy, grass is sparse," he said. "All the grasshoppers are on the fence line just waiting for the grass to grow. And because it's so dry, fungal pathogens don't do so well, which is limiting natural control of grasshoppers." They're also harder to kill with pesticides. "The best time to control grasshoppers would be the time period when they're still nymphs and they can't fly long distances," Loftin said. "When they're big adults, they're more mobile and harder to control." However, nature didn't give grasshoppers all the cards. "Blister beetle larvae eat grasshopper eggs," he said, adding that the drought-hardened ground has forced the grasshoppers to lay eggs above the soil, leaving them unprotected. For more information on coping with drought, visit the Arkansas Drought Resources page or contact your county Extension office.
<urn:uuid:cba1ba6e-cf69-46f7-97c5-e32e3f6e8ecb>
CC-MAIN-2016-26
http://deltafarmpress.com/print/management/grasshoppers-blister-beetles-plague-drought-stretched-parts-arkansas
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783393463.1/warc/CC-MAIN-20160624154953-00040-ip-10-164-35-72.ec2.internal.warc.gz
en
0.925038
442
2.875
3
LOFTIN CEMETERY, LOFTIN ROAD, MAURY COUNTY TENNESSEE
Looking west shows the central part and a view of most of the stones in the cemetery. As you can see from this view, the Alfred Loftin inscription is not on the opposite side of the stone from his wife Emily, assuming (I have not proven that) Emily was his wife. This is a closer look from the same angle. Panning the camera to the right yields this photo of the north side of this small family cemetery. This photo was taken while standing in the southeast corner of the cemetery, shooting across the unknown rock crypt. This is a large stack of select stones, also used as a means of marking a grave in earlier times. This tends to be the first type of grave marking that came into use, appearing before inscribed stones started becoming widely available in the southeast. Inscribed stones in Maury County can be found as old as about 1814. Of course there were also far fewer settlers in that time frame. I would therefore date this grave to between 1814 and 1850 as a theoretical approach to resolving who might be interred here. I don't believe this crypt is for the graves of Longfield & Mary Loftin. They have foot and head stones still in place. They died in 1836 & 1851, and the style of memorial in use tells me their stones were placed sometime after 1870 and before 1915 by later descendants to mark their graves. This is a backside view from the southwest corner of the cemetery. The John & Sarah Loftin, 1846 memorial is on our left center. Photos and information by Wayne Austin, 23 Dec 2008.
<urn:uuid:cad58861-5835-4d7a-a219-61dde54ac75d>
CC-MAIN-2016-26
http://freepages.genealogy.rootsweb.ancestry.com/~maury/cemetery/Loftin(Longfield)/Loftin-views.htm
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397744.64/warc/CC-MAIN-20160624154957-00070-ip-10-164-35-72.ec2.internal.warc.gz
en
0.97931
347
2.84375
3
Abstract In Indonesia pesantren is known as the indigenous religious educational institution. The basic elements of pondok pesantren are: Kiai as a central of figure, santri as a student who persues knowledge , pondok as dormitory where santri lives, and mosque which constitutes as the center of educational activities. In general, due to system and method of teaching, pondok pesantren classified into two kinds; traditional and modern. Never the less they are having the same vision and mission, that is to say that education is pondok pesantren is community oriented education by cultivating values, moral attitude, and character building of muslim community. That is the main reason that life atmosphere inside pesantren is in inspired strongly by what so called Panca Jiwa - Five spirit – namely; sincerity, simplicity, self – reliance, Islamic brotherhood, and accountable freedom. In modern Indonesia today the expectation toward the role of pesantren move on since early twentieth century, it is not only performing it’s three traditional roles as locus for transforming religious education, preserving muslim traditions, and producing scholars, but nowadays it plays an important role to educate and prepare leaders of tomorrow who posses specific qualities. Keywords: religious education, element, five spirit, character building, indigenous. Introduction Pesantren, madrasah and school are three kinds of the National Educational System in Indonesia. Different from school and madrasah, pesantren has been identified with Islamic and indigeous originality, infect pondok pesantren is the oldest educational institution known in Indonesia. In the early development, many pondok pesantren only focused their program on religious learning (tafaqquh fiddin) and reading a variety of Islamic classical books such as in the field of fiqh (Islamic law), theology and tasawwuf (Islamic mysticism). The main reason for attending pondok pesantren was to gain the blessings of Allah. Therefore, a certificate of learning graduation was not given adequate attention, and there was not a precise regulation regarding the study program. In pondok pesantren, santri, student of pesantren would learn to be come Muslims who obey God’s command, have good characters, show strong and comprehensive personal features, prossess intellectual capability and are independent. Upon returning to their community, santris have been projected to be good examples for them to spread Islamic message as rahmatan li al’alamin. There are a number of principles adopted by pondok pesantren such as sincerity, modesty, peace, wisdom, accountable freedom, autonomy, togetherness, harmonious relationships (among santris, teachers, parents, and community). Element of pondok pesantren 1. Kiai as central figure 2. Santri as student 3. Pondok serves as dormitory 4. Mosque as center of activities First, Kiai. The Kiai is always a central figure in pondok pesantren. The Kiai is not only a spriritual leader, but also a holistic leader in all aspects of life in pondok pesantren. In traditional pesantren the kiai teaches the classical Islamic textbooks with a sorogan method, i.e. a teaching – learning process in wich the kiai personally addresses individual students (known as santri) or a small group of santri at alementary level. Another popular method employed by the kiai is wetonan or bandongan, that is what in modern terms is called ‘lecture’. 
The kiai gives specific, scheduled lectures on certain topics of some classical textbooks (kitab kuning) in front of a large number of intermediate audiences. As for more advance santri or takhassus, usually the kiai employs a teaching – learning method called musyawarah in delivering his lecture, which is similar to conference. Second, Santri. The santri is someone or a group of people who pursue knowledge in pondok pesantren and is usually accounted as an indicator of its progress and popularity. Santri usually have a strong solidarity and familiar bond among themselves or between them and the kiai. Third, Pondok. The Kiai is a dormitory where santri live and study under the guidance of the kiai. In a number of pondok pesantren it is santri themselves who take care of their pondok and all their other needs under the supervision of senior santri. Fourth, Mosque. Mosque constitutes a main resource and space in which the kiai carries out his obligation to educate and train his santri, to perform ibadah (divine certitude), Learning Islamic textbooks and conducting social activities. Functionally, the mosque is not only for praying, but it is usually can be used for empowering muslims in a broad sence. According to Education Management Information System (EMIS), Directorate General of Islamic Institution. The total number of pondok pesantren in Indonesia 14.361, 1.381 located in Sumatra, 11.664 in Java, Bali and Nusa Tenggara, 294 in Kalimantan, 661 in Celebes, and 25 in Papua. It is worth mentioned that traditional pondok pesantren give more attention to the Islamic classical textbooks, known as kitab kuning (yellow books), the reservoir of the intellectual richness inherited from previous Muslim scholars from the early and medieval periods of Islamic history. These textbooks in fact contain analytical thoughts of earlier ulama responding to religious, political, economic as well as social and cultural problems of their time and places. Meanwhile modern pondok pesantren have contributed a lot in developing the quality of Islamic education and of religious and national life where the government can fight ignorance and solve the universal human problem. The spirit of pondok pesantren: 1) Sincerity (al-ikhlas) 2) Simplicity (al-basathah) 3) Self reliance (al-i’timad ‘alan-nafsi) 4) Islamic brotherhood (al-ukhuwwah al-islamiyyah) 5) Freedom (al-hurriyah) As it was mentioned before that the silent features of pesantren as a religious educational institution is the presence of a learned Muslim scholar, usually called kiai, who plays the role of central figure in the system, the availability of dormitories, named pondok, the presence of students, called santri, and the existence of a mosque as the center of activities and religious education. It is within those pillars along with its spirit that pesantren education was very effective in developing morality and mentality, as well as intellectuality of the students. This spirit, according to Imam Zarkasyi, can be simplified into five spirit, Panca Jiwa, namely: Sincerity (al-ikhlas), Simplicity (al-basathah), Self reliance (al-i’timad ‘alan-nafsi), Islamic brotherhood (al-ukhuwwah al-islamiyyah), Freedom (al-hurriyah). Sincerity is a principle for work, it is the spirit of all activities, the Holy Qur’an suggests that one shoul follow those who do not ask for salary and they are among the guided people. 
Simplicity is a way of behaving that is applicable to an individual conduct in his or her daily life, it is very positive conduct towards every situation of life. This implies that one should live based on his or her basic needs and not on demand, because this spirit will cultivate strength, courage, determination, and self control. The pesantren it self as an education institution is managed to be self-reliance which means it does not depend on the help of others. People may give financial or material support but pesantren develops not because of supports or other, pesantren has to rely on its own resources without having to be dependent on others for aids or assistance. Islamic brotherhood is a principle through which every student learns how to build strong friendship and empathetic solidarity towards others, and how to respect each other. Fighting, quarrel, or other types of dispute among students are regarded as a crime. As far as Freedom is concerned, it is a mental attitude in which one should be free of group fanatism. This spirit makes santri optimistic in facing the problem of life, freedom in shaping his future and selecting his way of life. There were, infect, more than those five spirit taught to the student in the pesantren system, both traditional and modern one. These were reiterated in various occasions inside as well as outside the class, pasted on the walls of the campus, and written in books, brochure and guide-book of the pesantren. The Arab maxim man jadda wajada (whosoever works hard will get), for example, is the famous spirit of work that none of the santri will ever forget. Other maxims like: Hidup sekali, hiduplah yang brarti (you live once hence live meaningfully), and: wa mal lazztu illa ba’dat ta’bi (no gain without pain), man yazra’ yahshud (whosoever sows he will reap) are learned by heart. It means that pesantren’s system of education is values-oriented wich are mainly derived from the teachings of Islam. The Ideal Model of Pesantren The expectations towards the role of pondok pesantren amongst the Muslim community in Indonesia move on since the early twentieth century. Pondok pesantren in this due regard, has in many different ways responded to the demands of the modern Islamic education and social economic changes of Indonesian society, it is not only to perform its three traditional roles as a locus for transforming Islamic learning, preserving Muslim traditions and reproducing scholars. Further more, as well discuss in the case of Pondok Modern Darussalam Gontor (Darussalam Islamic Institution Gontor, commonly known as Gontor), it becomes center to educate leaders of tomorrow with specific qualities, namely: noble character, sound body, broad knowledge, and independent mind. It is worth noted that pondok pesantren can maintain its development and progress not only because it has flexibility to make adjustments and readjustmens to the ever-changing situation and needs. But also due to the fact that pondok pesantren has a strong bond and proximate relationships with its surrounding community. This closeness may be traced back through the historic account that education in pondok pesantren is community oriented education and therefore it serves as community based learning center. Pondok Modern Darussalam Gontor located at Gontor, Ponorogo, East Java, 200 km from Surabaya, the capital of East Java Province. Historically Gontor founded on Sepetember. 20.1996 by three brothers; K.H. Ahmad Sahal (1901-1977), K.H. Zaidudin Fanani (1908-1967), and K.H. 
Imam Zarkasyi (1910-1985). They were known as Trimurti. The main characteristic of Gontor is its distinct approach towards modernizing Islamic education by using integrated system of pondok pesantren and madrasah into a new system of Islamic education. The madrasah was a good system for formal education but not for non-formal and informal education. Students may learn well in the class but what happen outside the class was beyond the system. The madrasah is precisely like the modern school system and is not sufficient ti inculcate other Islamic teachings that are not covered by the madrasah curriculum. The positive aspect of pondok pesantren was to be found in its boarding system where non-formal and informal education and activities can be carried out within the spirit and bound of Islam. Imam Zarkasyi, one of the founding fathers and a real architect of Gontor system of Islamic education, tried his best to integrate both of them in one system by adopting the positive aspects of both the madrasah and the pesantren system assimilating them within a specific identity. The nature of this new system is discernable from his statement below: “This pondok pesantren, Gontor, is an Islamic educational institution like any other institution. The difference is only in its teaching method. We use modern teaching method but do not teach something new in realigion. This pondok is a waqf for the muslim ummah and is not the property of Kiai any more. This pondok is not inclined to any political party; therefore its motto is Berdiri di atas dan untuk semua golongan (stand above and for all groups). Its educational goal is to produce a Muslim who has noble character, sound body, broad knowledge, and independent mind. The final objective of this pondok is li I’la’I kalimatillah”. In the mean time, actualize this integrated system Gontor has adopted the best prototype of educational institution in the world. There were four ideal institution in this regard, namely: Al-Azhar University in Egypt, Shinquit in Mauritania, Aligarh Muslim University, and Shantiniketan both were in India. Al-Azhar University is known as the center of Islamic knowledge in the Muslim world and was highly reputed with its survival for centuries due to its waqf property. Al-Azhar University could give scholarship to muslim student from all over the world. Shinquit was a well known institution, not only for its boarding system but also for the sincerity of its founders and teachers, and their hospitality as well. Located in a remote area in Mauritania and under the guidance of its founding father Sidi Abdullah, it could accommodate around 3000 to 5000 students with full scholarship. Allegedly, the graduate of this institution played a pivotal role in the spread of Islam in north West Africa in 19th century. Shantiniketan was basically a traditional boarding school that belongs to Rabindranath tagore, a Hindu philosopher and noble prize winner. Located in a village and under the authoritative figure of Tagore, this institution inculcated the philosophy of life, the most important of which is the principle of simplicity of life with peaceful atmosphere, Shantiniketan etymologically means abode of peace. In this institution teachers and students learn together in a milieu that is fully designed for education. The fourth and the last model is Aligarh Muslim University (AMU) which was and is still an Islamic university in the history of India. 
It was founded in 1920 under the name Mohammedans Anglo Oriental Collage by Sir Syed Ahmad Khan but later it became first university in India, it main objectives was to revive the Muslim ummah by the inclusion of knowledge through education. It was because of its objective that Gontor made it a model for the future of Islamic education. So the ideal educational institution envisioned by Gontor was an Islamic educational institution that was to be the center of learning for Islamic studies, which could generate its own fund and able to give scholarship to its student. This institution shoul be driven by the spirit of sincerity, simplicity, brotherhood, self reliance, and accountable freedom, and other Islamic spirits which are instrumental for one’s religious and worldly life. By this spirits and principles the institution could hopefully be a world class educational institution. The Characteristic of Gontor Institution We shall discuss this point only on certain important matters pertaining two important aspects, namely; new Islamic educational system and new institutional system founded by Trimurti to reform pondok pesantren. 1. New Islamic educational system a). In Gontor’s eyes there is no dichotomy of knowledge in Islam, santri learns religious subjects, fiqh, ‘aqidah, nahwu, sharf, balaghah, hadist,tafsir, with modern teaching method, and studies social-natural science, method of teaching, mathematics, biology, algebra, physics, and cosmography with religious approaches. When former President Soeharto visited Gontor in 1971, he ask Imam Zarkasyi about the ratio of social-natural sciences and religious in the curriculum. Imam Zarkasyi replied: “Here the curriculum consist of 100% religious and 100% social-natural sciences.” By this new curriculum model Gontor intended to produce Muslim intellectual who are conversant of not only leligious knowledge but also social-natural sciences. The ideal out put for this system, as reiterated by Imam Zarkasyi in many occasions was: “to produce Ulama with high intellectual capacity and not intellectual who knows little about religion (Ulama yang intelek dan bukan intelek yang tahu agama). The implication of Imam Zarkasyi’s obsession is quite clear that in the future there should be Muslim scientist who speaks about their expertise from Islamic perspective. b). Beside this new curriculum system, Gontor also applied new instructional methods, especially in teaching Arabic and English. The main idea of this principle was that the method of teaching is more important than the subject taught (at-thariqah ahammu minal maddah), however the teacher is more important than the method (al-mudarris ahammu min at-thariqah). To simplify this maxim Imam Zarkasyi use to draw the parable of the knife and the apple. The skill of cutting the apple is more important than the knife, yet knowledge about the skill is not important for someone who is already skillful. So, the personal factor of a teacher is the most important one, and that is the spirit of the teacher (ruhul-mudarris). To improve the spirit of teachers, Gontor employed the religious approach by enforcing the spirit of pesantren that the teacher should have, for example the spirit of sincerity (ikhlas) when he teach his students. c). Apart from the enforcement of Islamic spirit by utterance or verbal and written words, Gontor designed student activities with the objective of inculcating mental skill. Imam Zarkasyi asserted that mental skill is more important than job skill. 
On this point he disagreed with the national education system, which emphasizes job skills. True Islamic education should be directed toward worshiping Allah (ibadah) and seeking knowledge (thalabu al-'ilmi), not toward becoming a government servant. Imam Zarkasyi used to say that it is better to become an entrepreneur who manages his own business and employs many people than to become a government servant. This is what he meant by self-reliance (al-i'timad 'alan-nafsi), one of the spirits of the pondok pesantren. The learning strategy for inculcating mental skill, according to him, is "learning by doing," which is carried out by involving all students in informal and non-formal education. In this system students are given the responsibility to manage their own activities under the umbrella of the Student Organization. The guiding principle in this regard is that everyone should be "ready to lead and ready to be led" (siap memimpin dan siap dipimpin), sincerely and on the basis of the spirits of the pesantren. As a result, from the early-morning Subuh prayer at four o'clock until ten o'clock at night, all students are occupied with activities. Imam Zarkasyi was of the opinion that young men should be kept busy and should never be left without meaningful activity: "Taking a break or rest is nothing other than shifting from one activity to another" (ar-rahatu hiya al-intiqolu min 'amalin ila 'amalin akhar). That is the best way to inculcate mental skills in the students, such as the spirit of teamwork, leadership and a sense of responsibility, and entrepreneurship and management, besides, of course, cultivating the habit of discipline.

2. New Institutional System

Another innovative and reformative step taken by Gontor concerns the status of the institution, its organization, and its future. As a matter of fact, the traditional pesantrens were mostly plagued by stagnation and ineffective educational management. The kiai, the central figure, and his family were so dominant that when he died he would be succeeded by his son or son-in-law; otherwise the pesantren would cease to operate. This indicates that the weak points of the pesantren system were its regeneration process and the structure of its organization. The Trimurti, the founding fathers of Pondok Modern Darussalam Gontor, initiated a new pesantren system which applied effective and efficient management and adopted modern ideas of progress as well as a modern system of education. At the outset, they endowed almost all the land inherited from their parents for the sake of the pesantren. They started with a small step, educating illiterate villagers in 1926, but they had in mind big aspirations and the great ideal of building a world-class educational institution. The real institutional system began three decades after the establishment of the pesantren, in 1958, when they sincerely declared in writing the endowment of all their inheritance to the Muslim ummah. From that moment the pesantren was no longer the property of the founders or their descendants. The waqf declaration also stated that the survival of the pesantren should be the responsibility of the fifteen appointed members of the Waqf Board. Another point of the declaration asserted that the pesantren should be developed further to qualify as an Islamic university and become a major center for Arabic and Islamic studies offering its services to the ummah.
At present, after the death in 1985 of the last of the Trimurti, K.H. Imam Zarkasyi, Gontor has survived well and has been properly maintained under the dynamic leadership of Dr. K.H. Syamsul Hadi Abdan, and its property has been successfully developed. Gontor today has opened branch campuses to meet the demands of society, using the same curriculum and the same methods and inculcating the same values, with more than 20,000 students in total. These are:

1. Gontor 2, at Madusari, Ponorogo, East Java.
2. Gontor 3, at Gurah, Kediri, East Java.
3. Gontor 1 for girls, at Mantingan, Ngawi, East Java.
4. Gontor 2 for girls, at Mantingan, Ngawi, East Java.
5. Gontor 3 for girls, at Widodaren, Ngawi, East Java.
6. Gontor 4 for girls, at Konda, Konawe Selatan, Southeast Celebes.
7. Gontor 5 for girls, at Kandangan, Kediri, East Java.
8. Gontor 5, at Kali Agung, Banyuwangi, East Java.
9. Gontor 6 for girls, at Sawangan, Magelang, Central Java.
10. Gontor 7, at Podahoa, Konawe Selatan, Southeast Celebes.
11. Gontor 8, at Labuhan Ratu, Eastern Lampung.
12. Gontor 9, at Kalianda, Southern Lampung.
13. Gontor 10, at Seulimeun, NAD.
14. Gontor 11, at Sulit Air, West Sumatra.
15. Gontor 12, at Tanjung Jabung Timur, Jambi.
16. Gontor 13, at Poso.

Presently, Gontor runs no fewer than 29 business enterprises in various sectors, such as a bookstore, a pharmacy, minimarkets, rice fields, a publishing house, a radio station, and bottled mineral water. Gontor possesses no less than 828.05 hectares of land, and its system has been adopted and modeled by its graduates in about 215 pesantren.

Conclusion

To this day, the pondok pesantren continues to fulfill its commitment to being a center for community development, cultivating positive moral attitudes and building the character of the Muslim community on the basis of values derived from Islamic teaching; for a person to be successful in life, character is more essential than erudition. Character is so significant because cultivating good ethics has to include developing desired attitudes, comprehending value systems, and fostering the personal appreciation that must be manifested in people's behavior. On this very point, in modern Indonesia the pondok pesantren works hand in hand with the government to prepare qualified human resources who are pious, virtuous, intelligent, and useful, for the sake of a better tomorrow.
<urn:uuid:a1343063-ab91-4dd8-973c-56fd6c47bd61>
CC-MAIN-2016-26
http://taufiqbboyfans.blogspot.com/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395160.19/warc/CC-MAIN-20160624154955-00055-ip-10-164-35-72.ec2.internal.warc.gz
en
0.939872
5,378
2.984375
3
Two American doctors who were infected with Ebola while treating patients in Liberia are coming home to be treated at Emory Hospital in Atlanta. Even in Wyoming, doctors are watching the situation overseas to assess the risk of an outbreak occurring here. "We already follow those guidelines for treating with, you know, infected patients or patients that may have an infectious disease," says Kristy Bleizeffer of Wyoming Medical Center. If there were an outbreak in the United States, the CDC would notify medical centers across the nation. If a patient came into an ER unknowingly carrying the virus, hospital staff would inform other patients to be aware of the symptoms and to go to their medical provider. "I think if everybody is vigilant and follows the reporting requirements, then everybody should be able to be on top of it pretty quickly," says Tracy Murphy, MD, of the Wyoming Department of Health. While there has not been an outbreak in the United States, Dr. Murphy advises Americans to be especially aware while they are traveling. "If they were to travel to these nations that have Ebola going on, they need to certainly stay away from places that may house sick people," says Dr. Murphy. Medical experts are not extremely concerned about the Ebola virus hitting Wyoming any time soon and say the CDC should be able to contain it.
<urn:uuid:91c48c67-4ef9-4369-b52e-85842d229ec5>
CC-MAIN-2016-26
http://www.kcwy13.com/home/headlines/United-States-Prepares-for-Ebola-269626461.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397565.80/warc/CC-MAIN-20160624154957-00077-ip-10-164-35-72.ec2.internal.warc.gz
en
0.973195
265
2.796875
3
Martyrs Boniface, Probus, Ares, Timothy, Polyeuktos, Eutychios and Thessaloniki

This Saint, who lived during the reign of Diocletian, was the servant of a certain Roman woman of senatorial rank named Aglais. Mistress and servant lived together in an unlawful union, and Boniface was moreover given to drunkenness and riotous living. Nevertheless, he was generous to the poor, hospitable to strangers, and compassionate to those in misfortune. At last, Aglais, moved at hearing the accounts of the Martyrs, and believing in the power of their intercessions to obtain the mercy of God, sent Boniface to Tarsus to obtain relics of holy Martyrs. Before he departed, he asked her in jest, "And what if they bring back my body as holy relics?" He then set out with some of his fellow slaves for Cilicia, where the Saints were contesting in martyrdom. As he went among the Martyrs and encouraged them in their pains, he was arrested by the ruler, confessed Christ with boldness, and suffered death as a martyr in the year 290. Thus what he had said in jest to his mistress was fulfilled when he himself was brought back to her as sacred relics by his fellow servants. Saint Aglais devoted the remainder of her life to prayer and works of virtue, and reposed in sanctity. Saint Boniface is especially invoked for help against the passion of drinking.

Apolytikion of Martyr Boniface & Companions in the Fourth Tone
Thy Martyr, O Lord, in his courageous contest for Thee received the prize of the crowns of incorruption and life from Thee, our immortal God. For since he possessed Thy strength, he cast down the tyrants and wholly destroyed the demons' strengthless presumption. O Christ God, by his prayers, save our souls, since Thou art merciful.

Kontakion of Martyr Boniface & Companions in the Fourth Tone
Thou didst offer up thyself of thine own choosing as a spotless sacrifice to Him that for thy sake, O Saint, shall soon be born of a Virgin Maid, O all-renowned and wise crown-bearer Boniface.
<urn:uuid:cb2f7051-1d9a-4de7-a192-3ee55f464dbb>
CC-MAIN-2016-26
http://www.goarch.org/chapel/saints_view?contentid=344&language=en
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391634.7/warc/CC-MAIN-20160624154951-00042-ip-10-164-35-72.ec2.internal.warc.gz
en
0.974642
478
2.828125
3
If you still need to check off the Least or Roseate terns from your birding life list, or simply like to watch birds, you might consider visiting Cape Cod National Seashore, as those species are staging there in advance of their migration south.

The importance of the seashore's beaches to nesting piping plovers and least terns is well known to most Cape residents and visitors. However, the importance of outer-Cape beaches to Common, Least, and Roseate terns and other shorebirds before and during migration may be less widely appreciated, said Dr. Jason Taylor, the seashore's chief of natural resource management. Even while some least terns and piping plovers are raising chicks on seashore beaches, roseate and common terns are beginning to gather, particularly on the barrier beaches that form Hatches Harbor, Jeremy Point, and Coast Guard/Nauset, he said in a release.

"Terns, sometimes referred to as 'sea swallows,' are smaller than gulls and more graceful flyers, with forked tails and slender wings. After the nesting season, adult and fledgling terns disperse from their breeding grounds to 'stage' on beaches and flats in southern New England before their 4,500-mile migration to South America," said Chief Taylor. During this staging period, the ecologist said, "the terns rest and feed so they can build body mass and fat reserves necessary to fuel their long migration south. This is a critical period for these terns and other staging and migrating shorebirds; it is important that disturbances to these birds while resting on beaches and flats are minimized."

Visitors to Cape Cod National Seashore are encouraged to enjoy terns and other shorebirds from a distance. That's because vehicles, boat landings, kayaks, dogs, and pedestrians can flush staging birds, interrupting feeding and resting and forcing them to expend energy they are trying to preserve for migration.

Over the years, Cape Cod National Seashore staff has assisted the Massachusetts Audubon Coastal Waterbird Program and the U.S. Geological Survey in documenting the importance of seashore beaches to terns about to embark on their fall migration. From July through mid-September, researchers have counted many thousands of terns congregating at Hatches Harbor, Race Point, Coast Guard/Nauset Marsh, North Beach and South Beach/Monomoy beaches. Most terns observed are common terns, a Species of Special Concern in Massachusetts, and roseate terns, listed as endangered by the State and the U.S. Fish and Wildlife Service. Based on counts of color-banded roseate terns, researchers estimate that 75 percent, or more, of the entire Northwest Atlantic Coast breeding population of roseate terns use Cape Cod National Seashore beaches and mudflats during their migration. A more detailed three-year study on the importance of the seashore to staging roseate terns is planned to begin in 2014.

Cape Cod National Seashore Superintendent George Price "encourages visitors and residents to observe this amazing phenomenon of thousands of birds utilizing national seashore beaches in preparation for their migration. It is important for people to witness this wonder of nature and share their experience with family and friends. Visitors are encouraged to enjoy terns and other shorebirds from a distance, but should avoid disturbing these staging birds, since this period of resting is critical to their survival."
<urn:uuid:f53c1143-cfe8-4cf5-ac72-9f9ee5f1b842>
CC-MAIN-2016-26
http://www.nationalparkstraveler.com/2013/09/least-and-roseate-terns-staging-cape-cod-national-seashore-trek-south23909
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783398869.97/warc/CC-MAIN-20160624154958-00166-ip-10-164-35-72.ec2.internal.warc.gz
en
0.94692
710
2.96875
3
A Liberal government establishes the foundation of social welfare in the sovereign, democratic Dominion of New Zealand. Britain retains great influence on government, education, and culture, and is the primary market for New Zealand's tightly regulated "protein exports" of meat, cheese, and butter. William Massey of the Reform Party, conservative despite its name, is elected prime minister in 1912. New Zealand supports Britain in World War I at a staggering cost: 60,000 casualties out of a total population of 1 million. Public calls for a wartime coalition government force Prime Minister Massey to share power with the Liberals, led by former Prime Minister Joseph Ward. The failure of a general workers' strike prompts the formation of the Labor Party, and three of its leaders enter Parliament. Ward withdraws from the governing coalition to distance himself from public criticism regarding war casualties, inflation, and poor urban conditions. He woos the left wing with promises of coal mine nationalization. Prime Minister Massey campaigns on a policy of patriotism, stability, and protection of private property. Massey wins reelection in 1922, but with a greatly reduced majority. The export-led economy begins to recover as export prices improve, state loans establish agricultural banks, and farmers receive tax cuts. Prime Minister Massey, despite failing health, succeeds in taking steps to combat inflation and to increase pensions and spending on public works. Massey dies in 1925. The Reform Party nominee, Minister of Public Works Joseph Gordon Coates, wins the election for prime minister. But, lacking political skills, he fails to live up to expectations. The economy deteriorates. Ward's new Liberal United Party, stressing free-enterprise policies, attracts many businessmen. Coates resigns, and Ward, with reluctant Labor support, takes office as prime minister. The economy contracts during the Depression. Prime Minister Ward, in poor health, hands over leadership to George Forbes of the United Party. Forbes alienates Labor and forms a coalition government with the Reform Party. The government devalues the currency, establishes a Central Bank, and increases its intervention in the marketplace. Wage cuts and unemployment cause urban demonstrations. Michael Savage leads Labor to a sweeping victory in 1935 and is elected prime minister. Government intervention increases community purchasing power, stimulates the economy, and creates jobs. The government introduces a comprehensive social welfare system and compulsory unionism. Savage begins to address the needs of the indigenous Maori population. The Reform Party becomes the National Party. New Zealand fights on the side of the Allies in Africa and Europe, suffering a very high casualty rate, while American troops train in New Zealand for the Pacific campaign. Peter Fraser, acting prime minister under the ailing Savage, becomes prime minister in 1940. Under his pragmatic leadership, Labor introduces military conscription and an economic stabilization system. Trade and diplomatic contacts with the U.S. increase. New Zealand is a founding member of the United Nations. A new Department of Maori Affairs addresses urbanization and poverty among the Maori. Years of wartime shortages and controls have eroded support for the ruling party. Sidney Holland's center-right National Party wins control of the government in 1949 and adopts many existing welfare measures. 
The inefficient Upper House of Parliament, comprising newly appointed "suicide" members, abolishes itself. Australia, New Zealand, and the United States form the ANZUS mutual-defense alliance. New Zealand supports the U.S. in Korea with naval and ground forces. A boom in wool prices boosts the economy. The government responds to a prolonged dockworkers' strike by restricting civil liberties. The government lifts some import-licensing and price controls. But a high level of integration between state, business, farming, and workers' unions persists. A prosperous economy conceals a deteriorating balance-of-payments situation. Prime Minister Holland, his health failing, is replaced by his deputy, Keith Holyoake, in 1957; Labor wins a narrow victory two months later. Faced with a balance-of-payments crisis, the new Labor government under Walter Nash goes back on campaign promises and implements its "Black Budget," which includes increases in direct and indirect taxes. The public is furious. New Zealand embarks on import-substitution industrialization. Despite progressive social policies, the Nash government loses to National's Keith Holyoake in 1960. Prime Minister Holyoake, committed to private enterprise, takes a low-key approach to economic management and focuses on civil liberties. The government publishes a searing report on the disadvantaged position of the Maori in New Zealand. New Zealand expands its international contacts in Southeast Asia and enters a limited free-trade agreement with Australia. Holyoake wins reelection in 1963. In 1966, reelected for a third term, Prime Minister Holyoake makes the controversial decision to send troops to aid United States forces in Vietnam. New Zealand protests French nuclear testing in the Pacific region. Although he wins a fourth election in 1969, Holyoake loses support by 1970 as his government is perceived as care-worn and out of touch with the public. He steps down in 1972. Norman Kirk is elected to lead the third Labor government. The slowing world economy, the rise in oil prices and government expenditure, and soaring inflation end 25 years of solid economic growth. Britain joins the European Economic Community and adopts its trade barriers to New Zealand's agricultural products. Unemployment rises and strikes are common. Prime Minister Kirk dies in 1974. Under Prime Minister Robert Muldoon, the National Party government regulates many sectors of the economy and invests in massive and inefficient "Think Big" industrial projects. Overseas money finances budget deficits. His authoritarian leadership and emphasis on traditional social values put Muldoon in conflict with Maori rights organizations, feminist groups, and the environmental movement. As unemployment and inflation soar, the government freezes wages and prices. The economy is on an unsustainable course, with massive budget and current account deficits. Thousands emigrate to Australia. Minister of Labor Jim Bolger introduces voluntary unionism and pushes for a more balanced approach to the economy. Prime Minister Muldoon begins to lose support in Parliament. David Lange of the Labor Party is elected to succeed Prime Minister Muldoon. He immediately begins major market-oriented economic reforms, including deregulation. The new government eliminates agricultural subsidies and wage and price controls, lowers tax rates, and begins a program of privatization. Tight monetary policy designed to reduce the government deficit brings down the inflation rate.
Labor adopts an uncompromising anti-nuclear stance. Many Maori tribes are granted financial reparations. In 1987 Labor is reelected, but the share market crashes, and the economy falls into recession. Unemployment and negative economic growth follow. The social costs of reform become apparent in the form of increased inequality. In 1990 the National Party's James Bolger is elected prime minister. Under Prime Minister Bolger, New Zealand begins an export-led recovery, bolstered in part by a free-trade agreement with Australia. The Reserve Bank increases money supply, fueling stock market activity. After peaking at 15 percent in 1992, unemployment drops. The government reins in public spending, yet in most respects maintains the welfare state. The National Party is reelected in 1993. The government continues the deregulation and privatization program initiated by Labor. New Zealand becomes one of the world's most open and competitive economies. A National/New Zealand First coalition government under Bolger and Deputy Prime Minister Winston Peters is elected through a new mixed-member proportional system that replaces the "first past the post" system. Neither party has an absolute majority. Bolger resigns and is replaced by New Zealand's first woman prime minister, the National Party's Jenny Shipley. The Asian financial crisis and two years of drought lead to slow economic growth in 1997 and '98. After nine years in power, the short recession costs the National Party dearly. Labor Party leader Helen Clark is elected prime minister of a center-left coalition government in 1999. A low NZ dollar, high commodity prices, and good weather increase exports; unemployment hits a 13-year low. The government buys back a majority stake in failing privatized Air New Zealand. Clark's Labor-led coalition is reelected in July '02. Economic growth slows in line with global conditions; business confidence falls, particularly in agriculture. The NZ-filmed Lord of the Rings boosts tourism revenues.
<urn:uuid:2c063422-8da6-4eed-adda-22ec71a8ab74>
CC-MAIN-2016-26
http://www.pbs.org/wgbh/commandingheights/lo/countries/nz/nz_full.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783399385.17/warc/CC-MAIN-20160624154959-00074-ip-10-164-35-72.ec2.internal.warc.gz
en
0.937753
1,700
3.40625
3
UNDP Launches 2013 Human Development Report on "Rise of the South"

14 March 2013: The UN Development Programme (UNDP) has launched the 2013 Human Development Report (HDR), titled "The Rise of the South: Human Progress in a Diverse World." The 2013 HDR examines the long-term human development implications of the rise of new powers in the developing world and highlights Algeria, Brazil and Mexico as among the top 15 countries in reducing Human Development Index (HDI) shortfalls. The launch took place in Mexico City, Mexico. The report analyzes the "Rise of the South" and emphasizes that this phenomenon extends beyond rising nations such as Brazil, China and India to include Mexico, South Africa, Thailand and Turkey, as well as a growing middle class. Over 40 developing countries have made better progress on human development indicators than expected, according to the HDR. The report also notes that no country had a lower HDI in 2012 than in 2000. Speaking at the launch, Mexican President Enrique Peña Nieto said "the world today has greater awareness of the poverty and inequality gaps that divide the inhabitants of this planet." UNDP Administrator Helen Clark highlighted how today's development landscape is "very different" than when the HDR was first published in 1990. Emphasizing the economic, environmental, political and social influence of the South, she said today's challenge is "to carry that progress forward, share the experiences, and enlist the growing influence of the South to move our world onto a sustainable and inclusive development path for all." Clark cautioned, however, that slow action on climate change could halt or reverse development gains. The HDR identifies policies that could promote increased progress on human development, including innovative social policies, such as cash transfer programs, and policies that pursue inclusive growth. It also identifies sustained investment in human capabilities (including education, health and nutrition), global economic engagement, and resilience to economic, environmental and other shocks as key to human development gains. The HDR recommends four focus areas for sustaining momentum on development: enhancing equity; expanding citizen participation, including better representation of the South in global governance systems and of youth; addressing environmental pressures; and managing demographic change. The HDR also calls for new institutions to facilitate South-South cooperation and regional integration and for greater transparency and accountability. The HDR includes chapters on: the state of human development; a more global South; drivers of development transformation; sustaining momentum; governance and partnerships for a new era; and a statistical annex. [UNDP Story] [Publication: 2013 Human Development Report] [Helen Clark Statement at Launch] [HDR Blog]
<urn:uuid:de7220ef-d325-45e6-819f-92e449eb32c9>
CC-MAIN-2016-26
http://climate-l.iisd.org/news/undp-launches-2013-human-development-report-on-%E2%80%9Crise-of-the-south%E2%80%9D/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783402516.86/warc/CC-MAIN-20160624155002-00150-ip-10-164-35-72.ec2.internal.warc.gz
en
0.932487
544
2.515625
3
Alizarin crimson ("Alizarin madder lake," "Alizarin"; also see Madder lake):

Origin and History: Madder lake was made from the European madder root, Rubia tinctorum. Since the 1850s (approximately) it has been made synthetically--under the name alizarin--with a chemical composition identical to that of madder but with a superior clear, transparent tone and better lightfastness. By manipulating these chemicals, a range of shades has been made from scarlet to ruby.

Making the Pigment: Roots of the madder plant are dried, crushed, hulled, boiled in weak acid to dissolve the dye, and fermented to hydrolyze anthraquinones from the glycosides. The extracted dye is made into a pigment by dissolving it in a hot solution of alum (aluminum potassium sulphate; AlK(SO4)2 · 12 H2O) and precipitating the pigment with soda or borax. Synthetic alizarin lakes are prepared by reaction of alizarin with aluminum hydroxide.

Chemical Properties: A lake of the anthraquinone dye alizarin (1,2-dihydroxyanthraquinone) on an alumina base (PR 83). The base on which both varieties are laid down is indistinguishable under the microscope, nor can the natural and artificial products be told apart even at high magnification. They are both soluble in hydrochloric acid.

Artistic Notes: All alizarin lake colors are permanent to light and to the gaseous atmospheres of urban areas. However, when mixed with ochre, sienna and umber they lose their permanence, while when mixed with blacks or oxides their permanence is not affected at all. Excellent as a glazing color over a dry surface. Alizarin madder lake is a coal-tar color, and in permanence it exceeds the natural product, which in contrast ages more gracefully than the artificial.

[Images: Madder plant; Synthesizing alizarin crimson]

Brazilwood:

Origin and History: Brazilwood dye comes from the Caesalpinia tree, and was named "brazil" even before the discovery of that country. In its natural state, brazilwood is a light, brownish red, mahogany in appearance. It is sold nowadays in blocks or chips, and sometimes in scrapings or shavings (as of the 1960s). In the Middle Ages it was always sold in blocks, and the craftsman was obliged to reduce the solid wood to powder by scraping it with a piece of glass, or by filing or pounding, as the finer the powder the more easily the color can be extracted from it.

Making the Pigment: When the brownish powder of brazilwood is wet it turns reddish. When steeped in a solution of lye it colors the liquid a deep, purplish red, and hot solutions of alum extract the color from the wood in the form of an orange-red liquor. Most medieval brazil lakes were made either from the extract made with lye (a weak solution of potassium carbonate) or from the alum extract, as these solutions get the color out of the wood more thoroughly than plain water. Just what shade is extracted depends on how acid or alkaline the mixture of solutions is made: the more alum, the warmer the color; the more lye, the colder the red. The precipitate is collected by settling and pouring off the liquid. The pasty mass is smeared on an absorbent surface such as a new brick or tile to dry. It is then ground, and has the same degree of transparency as the alumina of which it is chiefly composed. When chalk is added to the alum, a more opaque pink rose is produced by the resulting admixture of calcium sulphate to the alumina lake. When white lead was used, it had no other effect than to give substance to the lake and slightly less transparency, rather than to make it opaque. When marble dust and powdered egg shells were added to newly formed lakes, they further controlled the color produced by reacting chemically with any excess of alum, which might otherwise give a brown cast instead of rose. In all these cases the brazil color was mordanted upon the white material--dyed onto it, so to speak--and the pigment so formed was different from a mixture of a finished lake with a white pigment.
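Both lake-making procedures just described rest on the same piece of chemistry: when soda is added to the hot alum liquor, freshly formed aluminum hydroxide (hydrated alumina) precipitates and carries the dissolved dye down with it as an insoluble lake. A simplified equation for this substrate-forming step, assuming potash alum and soda ash as written above, is:

2 AlK(SO4)2 + 3 Na2CO3 + 3 H2O → 2 Al(OH)3 (the lake base) + 3 CO2 + 3 Na2SO4 + K2SO4

The dye itself does not appear in the equation; it is adsorbed onto the gelatinous Al(OH)3 as it forms, which is why the finished lake shares the transparency of the alumina, and why any alum left unreacted can shift the hue, as noted above.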
Artistic Notes: Brazil lakes are not very permanent.

[Image: Brazilwood branch, pods, and dye]

Cadmium red:

Chemical Properties: Cadmium seleno-sulfide (PR 108).

Artistic Notes: Regarded as the best substitute for vermilion (which is mercury-based, a sulphide of mercury). As an oil color it needs a little wax and 40% oil. In tempera the color easily solidifies in the tube, and it is therefore better for the painter to prepare the color just before use. It is known to turn brown in outdoor frescoes; with copper colors such as emerald green it turns black, as do all cadmiums. It should be noted that none of the reds the old masters used were as permanent as the cadmium range. In commercially available colors it comes in light, medium and dark. Over-grinding can often make the paint gummy in feel and quite dense, needing extra emulsions to attempt to bring it back to the feel of "paint." Cadmiums should never be mixed with lead white or other lead-based paints, but can be mixed with the older Lithopone, Titanox and zinc oxides, and the more contemporary titanium whites.

[Image: Cadmium red medium swatch]

Carmine ("Cochineal," "Crimson Lake"):

Origin and History: A dyestuff precipitated on clay, made from the ground female Coccus cacti, or cochineal, insect, which lives on various cactus plants in Mexico and in Central and South America. It was brought to Europe shortly after the discovery of those countries and was first described by Mattioli in 1549. The finest quality, known as nacarat carmine, is non-poisonous and quite beautiful, with the peculiarity of being more permanent in transmitted light, as a transparent color, than under direct light.

Chemical Properties: Soluble in ammonia. Carmine is an aluminum and calcium salt of carminic acid, an anthraquinone derivative, and carmine lake is an aluminum or aluminum-tin lake of cochineal extract, whereas crimson lake is prepared by striking down an infusion of cochineal with a 5 per cent solution of alum and cream of tartar. Purple lake is prepared like carmine lake, with the addition of lime to produce the deep purple tone. Carmine lake is insoluble in water. It burns completely, leaving a white ash, and smells in the process like burnt horn.

Artistic Notes: According to Maximilian Toch, it is only legitimate as a food coloring, as exposure to sunlight for three months bleaches the pigment completely. Carmine lake does not behave much better, being even weaker and less stable; it is of a maroon shade.

[Images: Cochineal insects on a cactus pad; Diagram of Coccus cacti; Ground carmine pigment]

Dragonsblood:

Origin and History: An important vegetable source of red is an East Indian shrub known as Pterocarpus draco, or Dracaena draco. The sap of this shrub dries into a deep brownish-red gum resin which is known now, as it was in the Middle Ages, as dragonsblood. The long, slender stems of the genus are flexible, and the older trees develop climbing propensities. The leaves have prickly stalks which often grow into long tails, and the bark is provided with many hundreds of flattened spines.
In classical times, dragonsblood was called Indian cinnabar by Greek writers, but Pliny (whose word was law in the Middle Ages) professed that it was the product of a battle between the dragon and the elephant which ended in the mingling of the blood of each.

Making the Pigment: The berries are about the size of a cherry, and pointed. When ripe they are covered with a reddish, resinous substance which is separated in several ways, the most satisfactory being by steaming, or by shaking or rubbing in coarse canvas bags. An inferior kind is obtained by boiling the fruits to obtain a decoction after they have undergone the second process.

Chemical Properties: Dragonsblood is not acted upon by water, but most of it is soluble in alcohol. It fuses by heat, and if heated gives off benzoic acid. It is astringent. The solution will stain marble a deep red, penetrating in proportion to the heat of the stone. It is very brittle, breaks with an irregular, resinous fracture, and is bright red and glossy inside and darker red, sometimes powdered with crimson, externally. Small, thin pieces are transparent. Various chemical analyses have been reported: (1) 50 to 70 per cent resinous compound of benzoic and benzoyl-acetic acid, with dracoresinotannol, and also dracoalban and dracoresene; (2) 56.8 per cent red resin compounded of the first three mentioned above, 2.5 per cent of the white, amorphous dracoalban, 13.58 per cent of the yellow, resinous dracoresene, 18.4 per cent vegetable debris, and 8.3 per cent ash; (3) 90.7 per cent red resin (draconin), 2.0 per cent fixed oil, 3.0 per cent benzoic acid, 1.6 per cent calcium oxalate, and 3.7 per cent calcium phosphate; (4) 2.5 per cent dracoalban, 13.58 per cent dracoresene, and 56.86 per cent draco resin (benzoic-dracoresinotannol ester and benzoylacetic-dracoresinotannol ester), with 18.4 per cent insoluble substances.

Artistic Notes: The main uses of this red resin in the Middle Ages were coloring metal, improving the color of gold, and glazing other metals to imitate gold. It was used as a pigment chiefly by book painters. Though not a lake pigment, it resembles them in transparency.

Lac:

Origin and History: The word "lake" in pigments derives from a material known as lacca, from which they were prepared. Just what was meant by lacca is not known with certainty, but it is supposed to have been the material we now call lac, the gum lac of India, a dark-red encrustation of resin which is produced on certain kinds of trees by the sting of certain insects. This resin is the source of our shellac.

Making the Pigment: Crude lac, known as stick-lac, consists of the resin, the encrusted insects, lac dye, and twigs. When crushed and washed free of the dye, twigs, and insects, it becomes granular and is known as seed-lac or grained lac. If the crude material is boiled with water and a little alkali, the coloring matter dissolves in the water; it is dried into thin layers or flakes and sold (rarely now) as "lac dye."

Chemical Properties: Lac dye is pH sensitive and produces its red color in acidic mediums only; alkaline mediums produce a bluish color.

Artistic Notes: Lac dye is used for painting, or a lake can be colored with it. The one pigment still made with it is called Indian Lake. The colors that lac dye produces are generally violet, and not very brilliant. The color is too dark and dull for books, and not stable enough for walls.

[Image: Flakes of dried lac dye]

Madder lake:

Origin and History: The boiled root of Rubia tinctoria, a field plant which grows wild in Italy and was cultivated in France as a dyestuff at the end of the thirteenth century.
The pigment is an extract of the root of the madder plant, which was allowed to grow for two years in the ground; the root is not red itself, but contains alizarin, which can be made to produce red lakes of several shades, precipitated on a clay base.

Artistic Notes: It is a beautiful transparent red, but impermanent. In the trade it is available as rose, light, medium to dark, and violet. Rose madder bleaches out in a few months, but the darker tones are more permanent. In fresco, lime destroys madder completely. In the original root there is a second coloring agent called purpurin, the removal of which gives superior permanence. Madder lake requires about 70% binder, dries poorly, and should therefore first be mixed with linseed oil and ground with an addition of varnish (damar). It has been observed over time that madder lake bleeds; when it does, it has been an indication that it has not been used properly, perhaps too thickly in underpainting, or that it has been mixed with impermanent coal-tar dyes.

[Images: Rubia tinctoria; Chipped madder root; Rose madder swatch]

Minium ("Saturn red," "red lead"):

Chemical Properties: Red oxide of lead, not to be confused with iron oxide. It is highly poisonous, sensitive to hydrogen sulphide, attacked by hydrochloric acid but indifferent to alkalis. Red lead should not discolor alcohol; if it does, it has been adulterated with coal-tar dye (most likely a test of the color upon purchase from the chemists at that time); also, when doctored with coal tar it has a tendency to bleed when painted over with white lead. Dilute nitric acid turns red lead into brown lead peroxide, the last stage to which white lead may be oxidized. Under great heat red lead becomes a light violet, and when cooled again, a yellowish red.

Making the Pigment: It is produced by heating white lead in the presence of air.

Artistic Notes: As a pigment, it quickly turns dark in the light, but when mixed with oil (it requires only 15% binder) it is fairly permanent. When mixed in oil with white lead, it tends to fade rather than turn dark, and stands up better than white lead with vermilion. It can only be used as an oil color; as a powder and in fresco it eventually turns black. Freshly ground is best; when red lead is produced under insufficient heat, a red-yellow oxide forms which is not sufficiently permanent. This can be eliminated by washing with sugar-water. When it is ground in oil, a little wax should be added to guard against its hardening too quickly. Red lead ground in oil dries the quickest of all pigments.

Realgar:

Origin and History: Realgar was not as common as orpiment in medieval paintings, with references limited largely to preservation of glair; it was only sometimes used as a pigment.

Chemical Properties: The first cousin of orpiment, both being sulfides of arsenic; realgar is an orange-scarlet to orpiment's yellow. Arsenic trisulfide is sometimes made, if you can find it.

Artistic Notes: This is a color rarely available today, but the best crystals look clear and transparent.

[Images: Realgar mineral; Ground realgar pigment]
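The one-line preparation given for minium above compresses two stages of standard lead chemistry. A simplified sketch, with approximate temperatures, is:

2 PbCO3·Pb(OH)2 (white lead) → 3 PbO + 2 CO2 + H2O (on heating)
6 PbO + O2 → 2 Pb3O4 (red lead, minium) (continued roasting in air at roughly 450-500 °C)

If the roasting is not carried far enough, unconverted yellowish PbO remains, which matches the note above that a red-yellow oxide forms under insufficient heat. For reference, the usual formulas of the two arsenic sulfides just discussed are As4S4 (often written AsS) for realgar and As2S3 for orpiment (see the orpiment entry below).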
Sinopia ("Sinoper," "Sinope"):

Origin and History: The name of a shade of red ocher that eventually came to be used for any red ocher, of which there are many variations in color: a light and warm tone is Venetian Red, or Mars Red; darker, cooler-toned purple versions are called Indian Red, Mars Violet or Caput Mortuum; Terra Rosa from Pozzuoli has a salmon-pink color which is easily recognizable in some medieval Italian wall paintings, whereas the dark wine red of ground hematite is more common in the wall paintings of Florence. The red iron oxides are artificial pigments made from iron ore or the waste material of chemical industries, though they are closely related to the red earths and have very similar properties.

Artistic Notes: Red iron oxides, if ground too finely in oil, have a tendency to bleed, whereas versions of sinopia will not. English red, which is a light red, is often cut with gypsum in powder form, making it dangerous to use in fresco. All of these pigments need 40-60% binder, possess good covering power, and dry fairly well. When mixed with whites they yield cool tones, and they can be used for all purposes in all techniques. Sinopias work well for underpainting.

[Images: Mining red ocher; Venetian red swatch; Sanguine red earth used for sketching]

Vermilion:

Origin and History: Vermilion is the standard name given to the red pigment based on artificially made mercuric sulfide. The common red crystalline form of mercuric sulfide is cinnabar, a name reserved only for the natural mineral. The natural product, found chiefly in Almaden and Idria, has been eliminated for practical purposes (one consideration being that it is slightly poisonous).

Making the Pigment: The synthesis of mercury and sulphur into cinnabar is accomplished by mixing them together and heating them; if they are simply mixed and ground together, a black sulfide of mercury is formed, but at the proper temperature this vaporizes and recondenses in the top of the flask in which it is heated. The flask is then broken and the vermilion is removed and ground. Upon grinding the red color begins to appear, and the longer it is ground, the finer the color becomes. This process was understood before the year 800 AD.

Chemical Properties: The properties of the natural and the artificially prepared pigment are practically identical. Cinnabar, a dense red mineral, is the principal ore of mercury, or quicksilver. Vermilion is not generally considered today to be a permanent pigment. It has been known since Roman times that specimens of vermilion darken when exposed to light. In tests it has been found that impurities in the alkali polysulfides used to "digest" the pigment contribute to the instability of the red and catalyze the transition from red to black. The darkening of vermilion occurs mainly in paintings in egg tempera, but it is not unknown in oil paintings. It is, however, fairly unreactive toward other colors; when mixed with lead white to produce flesh tones, it does not produce black sulfides.

Artistic Notes: The traditional use of red glazes of madder, kermes, and cochineal lakes over vermilion underpaint not only increases the purity of the color but has been shown to reduce the tendency to darken as well. It is also known that the farther light can penetrate into the binding medium, the more quickly the vermilion will darken. To give vermilion an agreeable luster in manuscript decoration, the color was normally tempered in books with egg yolk along with the glair.

[Images: Natural cinnabar; Ground vermilion]
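The synthesis and the darkening described above are two aspects of the same chemistry; a simplified sketch of the dry process is:

Hg + S → HgS (black sulfide of mercury), which on heating, vaporizing, and recondensing is converted to HgS (red, the form of cinnabar and vermilion)

The light-induced darkening has traditionally been explained as a partial reversion of the red sulfide to the black modification at the paint surface, which is consistent with the observation that glazes and binding media that keep light from penetrating deeply into the film slow the change.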
Barium yellow ("Permanent yellow," "yellow ultramarine"):

Chemical Properties: Barium chromate. When heated it becomes reddish, and it returns to yellow when cooled.

Artistic Notes: It appears similar to zinc yellow, but somewhat brighter, and is luminously bright under artificial light, almost white. It is also slightly poisonous. It is superior to zinc yellow in permanence, and its requirement for binder, only 30%, makes it leaner. It is fairly permanent in tempera, watercolor and pastel, but doubtful in fresco, though even there better than zinc yellow.

Cadmium yellow ("Aurora yellow"):

Origin and History: The cadmium yellows are bright and opaque, permanent and non-poisonous, first introduced to the public at the 1851 Exhibition and said to have been made first in 1846.

Making the Pigment: A solution of 9.7 g Cd(NO3)2 · 4 H2O in 50 ml deionized water is added slowly to a stirred solution of 8.3 g Na2S · 9 H2O in 50 ml deionized water. The resulting precipitate is filtered, dried and homogenized in a mortar.

Chemical Properties: Cadmium zinc sulfide (PY 35) is Cadmium Yellow Lemon; cadmium sulfide (PY 37) is Cadmium Yellow Light and Medium. They are also permanent in lyes, but behave oddly when heated: heated to redness they return to yellow on cooling, but turn to orange within a year. In the open, they turn brown when mixed with lime. Cadmium lemon is precipitated upon a white filler.

Artistic Notes: Some commercial samples turn green under light, but others stand up well. The darker cadmiums have more covering power and are more permanent. Cadmium is not compatible with copper colors such as emerald green, as mixtures with them turn permanently black. Many cadmiums in powder form show streaks under light, the cause being cadmium salts other than the sulfide.

[Images: Greenockite (CdS) mineral; Precipitating cadmium yellow pigment]

Gamboge ("rattan yellow," "wisteria yellow"):

Origin and History: The earliest evidence of the use of gamboge comes from eighth-century East Asia. After its arrival in Europe in the seventeenth century, gamboge was used as a transparent oil color by Flemish painters, but additives such as resin or wax were necessary to enhance its permanence and durability. The pigment was also made more usable by mixing it with other yellow pigments such as lemon yellow or alumina. Many sources refer to gamboge being used to make a transparent yellow varnish for the coloring of wood, metals, and leather.

Making the Pigment: Gamboge is most commonly extracted by tapping the tree Garcinia hanburyi. The trees must be ten years old before they are tapped. The resin is extracted by making spiral incisions in the bark and by breaking off the leaves and shoots of the tree and letting the milky yellow resinous gum drip out. The latex that exudes is collected in hollow bamboo. When the latex has congealed, the bamboo is broken away and large rods of raw gamboge remain. Raw gamboge is usually in the form of hard, brittle lumps of a dull, dark yellow color which, when pulverized, turn into a bright yellow powder. This powder is ground or mixed with a variety of binders in order to make paints and varnishes. It burns with an odor of resin, is poisonous, is not attacked by acids, and turns red in alkalis.
Gamboge usually contains about 70% to 80% yellow resin and 15% to 25% water-soluble gum; the remaining portion is composed of esters, hydrocarbons, wax, ash residue, and vegetable detritus. Most investigations of commercial gamboge products have found that the major constituent of the resin is gambogic acid. There has been less investigation of the gum component of the pigment, though it has been described as hydrocarbon based.

Artistic Notes: A transparent, dark mustard yellow gum-resin pigment much used in watercolor, which is not lightfast either alone or when mixed with other pigments. In thick layers it shows a gloss because of its resin content, and as an oil color it strikes through and is therefore unusable. When mixed with most high-chroma pigments, the gamboge eventually disappears, and what was once a beautiful green or orange turns back into blue or red. Economically, other pigments are more sensible to use, so the use of gamboge declined drastically in the twentieth century. In fact, modern pigment experts recommend that the only admissible use for gamboge is in leaf-gilding on paper or parchment, pre-coating the support and then breathing on the paper to make it tacky.

Gold leaf and shell gold:

Origin and History: The most important yellow color of the Middle Ages; the light reflecting from gold leaf was the source of the term "illuminated" manuscript.

Making the Pigment: Gold leaf is composed of 22k or 23k gold, pounded to a thickness of only a few µm (micrometers). Shell gold is made from powdered gold held together with a binder such as gum arabic; it is called "shell" gold because the pigment is commonly stored in a shell. You can make your own shell gold from gold leaf scraps.

[Images: Raw gold; Patent and loose-leaf gold; Shell gold]

Lead-tin yellow:

Origin and History: All of the paintings that have been identified as containing lead-tin yellow date between approximately 1300 and 1750. This hue was used most frequently in the 15th, 16th, and 17th centuries. One of the most important uses of lead-tin yellow was in colored-glass production in Venice and Bologna in the Middle Ages. It was used widely in Western Europe in frescoes and panels.

Chemical Properties: Lead-tin oxide, or lead stannate, Pb2SnO4. The pigment decomposes very slowly even when boiled with very concentrated acids. It can be used in a lime medium because it is not affected by alkalis. Soluble sulfides cause darkening of the pigment with formation of lead sulfide.

Artistic Notes: Commonly used in foliage with green and earth pigments.

[Images: Reaction mixture after heating; Ground lead-tin yellow pigment]

Massicot ("Cassel Yellow," "Litharge"):

Origin and History: It may also have been called "giallorino" in the Middle Ages. Massicot is a very old pigment that can be dated back to as early as 1300 in medieval Europe.

Chemical Properties: Yellow monoxide of lead, PbO. Heated slowly, it forms a warm yellow. It is the heaviest of the lead pigments, slightly orange in color, and forms a cement which is impervious to water when mixed with linseed oil. To form an actual cement, it is mixed with glycerine. It makes a drier, known as lead drier, when cooked at over 500 degrees F with linseed oil (the lowest temperature at which it becomes soluble).

Naples yellow ("Lead antimonate yellow"):

Origin and History: It was used as early as 500 BC, was in use at the time of the Vesuvius eruption of 79 AD, and is said to have been found on the tiles of Babylon.
Making the Pigment: Twelve oz. of ceruse, 2 oz. of the sulphuret of antimony, 1/2 oz. of calcined alum, 1 oz. of sal ammoniac. Pulverize these ingredients, and having mixed them thoroughly, put them into a capsule or crucible of earth, and place over it a covering of the same substance. Expose it at first to a gentle heat, which must be gradually increased till the capsule is moderately red. The oxidation arising from this process requires at least 5 hours' exposure to heat before it is completed. The result of this calcination is Naples yellow, which is ground in water on a porphyry slab with an ivory spatula, as iron alters the color. The paste is then dried and preserved for use. There is no necessity of adhering so strictly to the doses as to prevent their being varied. If a golden color be required in the yellow, the proportions of the sulphuret of antimony and muriate of ammonia must be increased. In like manner, if you wish it to be more fusible, increase the quantities of sulphuret of antimony and calcined sulphate of alumina.

Chemical Properties: Opaque lead antimoniate. Its crystal structure is identical with that of the mineral bindheimite.

Artistic Notes: It is very heavy and dense, and therefore of exceptional covering power; moreover, as a lead color it is a good drier, but poisonous, as all leads are. Manufacturers distinguish between Naples yellow and dark Naples yellow; both are very permanent. In oil, tempera and even fresco, excellent use can be made of Naples yellow, as it is compatible with all other colors. It is a much more compact compound of lead than lead white, requiring only 15% binder, and it is totally unaffected by light. It seldom cracks, and in varnished tempera it is invaluable because of its covering power. Discoloration has been proven in tempera colors from the tube, where the metal of the tube was attacked by disinfectants contained in the tempera medium, creating a dirty gray. Naples yellow needs little grinding, only a brief working with the spatula; if ground too finely it becomes heavier and earthy in texture, and it is therefore best when made by hand rather than bought ready-manufactured.

[Images: Bindheimite; Naples yellow pigment]

Orpiment ("King's Yellow"):

Origin and History: Orpiment is a stone, found throughout the world as a low-temperature product of hydrothermal veins, hot-spring deposits, and volcanic sublimation. In its natural state it has a mica-like sparkle which recalls the luster of metallic gold. It is an ancient pigment, used throughout the Middle East and Asia through the late 19th century. During the Renaissance, orpiment was imported to Venice from Asia Minor.

Making the Pigment: Mineral orpiment is heated with sulfur, allowing a purer orpiment to sublime out. Orpiment is particularly difficult to grind into a pigment. An artificial variety of the pigment is made by fusing arsenic or arsenic oxide with sulfur.

Chemical Properties: Yellow arsenic trisulfide, As2S3. Monoclinic, transparent to opaque. In close proximity, orpiment can react with lead colors such as flake white and minium ("red lead," an orange color). It outgases, meaning that orpiment vapors will creep over to the lead colors and corrupt them, reverting them to a lead-gray color. This process can creep a couple of inches in as many months, or less. It cannot be used to modify green tones containing verdigris, as the sulphur of the orpiment attacks the copper in the same way. It also has a corrosive action on binding materials, and has quite often decayed and come away from panels and parchment. Orpiment is highly toxic.
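The blackening of lead and copper colors described here is usually explained as sulfide formation: sulfur migrating from the orpiment (for instance as traces of hydrogen sulfide) converts the white or orange lead compounds into black lead sulfide. A schematic example, assuming the lead white is basic lead carbonate and the sulfur arrives as H2S, is:

PbCO3 + H2S → PbS (black) + H2O + CO2

An analogous reaction with copper gives black copper sulfide, which is why the same warning appears for verdigris here and, in the other direction, for mixtures of cadmium or vermilion with the copper greens elsewhere in this list.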
Artistic Notes: Its color is a light, vivid yellow, sometimes pure yellow but often inclined toward orange. When mixed with zinc or titanium white it loses its yellow tone, becoming a pale brown-beige.

[Images: Orpiment mineral; Ground orpiment pigment]

Yellow ochre:

Making the Pigment: Colored earth is mined, ground and washed, leaving a mixture of minerals--essentially rust-stained clay. Ochre can be used raw (yellowish) or roasted for a deeper, brown-red color resulting from loss of water of hydration.

Chemical Properties: One of many earth tones created by natural iron hydroxide (PY 43). When heated, it turns red, losing its chemically bound water to become thick and dense. Under moderate heat, yellowish-red colors are produced; the stronger the heat, the richer and more saturated the color produced, which, if mixed with white, creates colder tones than one would expect. The coloring agent is an iron oxide. To be heated, the ochres have to be pure and free from adulterant admixtures such as chalk, because chalk would create quicklime when heated.

Artistic Notes: Gypsum should also never be present, especially if the pigment is intended for fresco use. The ochres are opaque and non-poisonous, but require a high percentage of binder. They produce a quick-drying oil paint.

[Images: Raw yellow ocher; Ground yellow ocher pigment]

Chromium oxide green ("chrome green"):

Origin and History: First made in 1809, this is a permanent, opaque, and less toxic alternative to emerald green.

Chemical Properties: Anhydrous chromium sesquioxide (PG 17), Cr2O3. The relatively weak ligand field of the chromium-oxygen bonding at the chromium ions produces color in a manner similar to that of the emerald green below.

[Image: Chromium oxide green swatch]

Chrysocolla:

Origin and History: This pigment of mineral origin was known to the Egyptians, the Greeks and the Romans. Chrysocolla was a classical name for various compounds that were useful in the hard soldering of gold, and among these were certain green copper minerals: the basic carbonate, the silicate, etc. The name is now used by mineralogists specifically for natural copper silicate (approx. CuSiO3.nH2O), a mineral fairly common in secondary copper ore deposits. Chrysocolla is often found together with malachite and azurite in the same deposit. In its natural state, chrysocolla appears similar to malachite, except that the color is somewhat more bluish. Microscopically, it is nearly amorphous or cryptocrystalline, and is practically colorless or, at most, only a pale green by transmitted light.
Making the Pigment: Justus Von Liebig and Andre Braconnot separately published papers on its method of manufacture. Verdigris was dissolved in vinegar (acetic acid) and warmed. A watery solution of white arsenic was added to it so that a dirty green solution was formed. To correct the color, fresh vinegar was added to dissolve the solid particles. The solution was then boiled and a bright blue-green sediment was obtained. It was then separated from the liquid, washed, dried on low heat and ground in thirty percent linseed oil. Chemical Properties: A basic copper aceto-arsenite, approximately Cu(CH3COO)2 * 3Cu(AsO2)2. Highly poisonous; it will blacken any adjacent sulphur color (i.e., vermilion, French ultramarine, cadmium yellow). Emerald turns black when heated and smells of garlic. Potassium hydroxide discolors emerald to an ochre color, and in weak sulfuric acid it dissolves, turning the solution blue. The copper colors of the old masters look under the microscope like coarse glass splinters as compared with modern colors, which have a mud-like character. Artistic Notes: It can be used for tempera and oil. Not advised for fresco and encaustic. The color is luminous by itself, bluish or yellowish green, highly permanent and would be very useful except that it is incompatible with sulphur colors such as cadmium yellow, vermilion and ultramarine. As an oil color, emerald green requires only small amounts of oil: no more than 30%, and dries well. Top. |19th century methods for making emerald green||Emerald green pigment| Malachite ("mountain green," "Bremen green," "Olympian green," "copper green," "green bice"): Origin and History: Malachite is found in many parts of the world in the upper oxidized zones of copper ore deposits, associated in nature with azurite, the native blue carbonate of copper, which contains less chemically bound water. Geologically, azurite is the parent, and malachite a changed form of the original blue deposit. Malachite was first used in Egypt and China. In fact, Egyptians probably used the pigment as eye paint even before the first Egyptian dynasty. In Western China, malachite is found in many paintings from the ninth and tenth centuries. Europeans did not use malachite very much in medieval times, but it was very popular during the Renaissance. However, it had been replaced by synthetic green pigments by about 1800. Making the Pigment: The natural mineral is crushed, ground to a chalk-like powder, then washed and levigated--swirled in water to separate the finer particles. The powder becomes paler the more it is ground. In the lab, 12.5 g CuSO4 · 5 H2O are dissolved in 50 ml deionized water. A solution of 5.8 g Na2CO3 in 55 ml deionized water is added slowly to the vigorously stirred solution of copper(II) sulfate. After approximately 40 ml of the solution has been added, a vigorous reaction sets in and carbon dioxide is evolved. The remaining solution should be added slowly after the onset of the reaction. The reaction mixture is left to stand for one to two days at temperatures between 5 and 10°C in order to obtain a finely crystallized precipitate. The suspension is decanted twice with deionized water, filtered off and washed thoroughly. Chemical Properties: A secondary mineral ore of copper carbonate, CuCO3.Cu(OH)2. When it is slowly heated, finely crushed malachite gives off water and carbon dioxide (CO2), finally becoming black cupric oxide (CuO). In acid solution, it dissolves and releases carbon dioxide, but remains green.
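As a rough check on the laboratory malachite recipe above, the two salts are used in close to equimolar amounts; a small sketch of the arithmetic (molar masses are standard values, the script itself is illustrative):

```python
# Mole check for the artificial malachite recipe: 12.5 g CuSO4·5H2O vs 5.8 g Na2CO3.
M_CUSO4_5H2O = 249.69   # g/mol
M_NA2CO3 = 105.99       # g/mol

mol_cu = 12.5 / M_CUSO4_5H2O     # ~0.050 mol copper sulfate
mol_carb = 5.8 / M_NA2CO3        # ~0.055 mol sodium carbonate

print(f"CuSO4·5H2O: {mol_cu:.3f} mol, Na2CO3: {mol_carb:.3f} mol")
print(f"Carbonate excess: {mol_carb / mol_cu - 1:.0%}")   # roughly a 9% excess
```

The recipe thus uses a slight excess of sodium carbonate over copper sulfate.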
In reaction with hot sodium hydroxide (NaOH), cupric oxide forms on the surface, but the pigment does not react with cold sodium hydroxide. Finely powdered malachite is also slowly darkened by hydrogen sulfide (H2S). Artistic Notes: To be useful as a bright green it must be ground coarse, as finely ground renders it too pale. It is moderately permanent and unaffected by strong light. When used as a paint, malachite's shade also changes with the medium used. As a watercolor, it is a pale green, but it becomes darker in oil. It works better in egg tempera than in oil. Top. |Rough malachite||Polished malachite||Making artificial malachite||Ground malachite pigment| Phthalocyanine green ("phthalo green," "monastral green"): Origin and History: A bright blue-green developed in 1935 and in use since 1938. Chemical Properties: Chlorinated copper phthalocyanine (PG 7), or copper with one of its atoms removed to make a non-metallic pigment. Artistic Notes: Phthalo green has a very high tinting strength and transparency. Many painters have been warned away from it because of this color strength. This pigment doesn't react to sulfur as metallic copper does; it's transparent and covers from yellow-green to cyan. Top. |Phthalo green pigment| Origin and History: The berries of buckthorn, in the genus Rhamnus, were gathered when they were ripe, yielding the color now known as sap green. When gathered before ripening, they yielded a yellow color; the compound of their unripe juice with alum was not much used, but was recorded in the sixteenth century under the name berry yellow. The fourteenth century method was to use the verjuice alone in its natural state to enrich mixed greens. Rhamnus berries are still sold, dried, under the name of graines d'Avignon, or Persian berries. Artistic Notes: Not permanent or lightfast. Top. |Leaves, flowers, and berries of the buckthorn tree||Sap green swatch| Terre verte ("green earth"): Origin and History: The name terre verte is applied to several different minerals, but most importantly in medieval painting is the light, cold green of celadonite, found chiefly in small deposits in rock in the area of Verona, Italy. The chief deposits of glauconite which yield the yellowish and olive sorts are in Czechoslovakia. Today the color is chiefly a durable mixture of chromium oxide, black, white and ochre, since the natural product is scarcely obtainable, though possible with effort. Chemical Properties: They are not poisonous, dissolve partially with a yellowish-green color in hydrochloric acid, but not in alkalis, and should not discolor water, alcohol or ammonia. Artistic Notes: Can be rather dull, transparent, and soapy in texture, like a clay. The color is also not constant, ranging from a light bluish gray with a greenish cast to a dark, brownish olive. In manuscripts and on panels they were chiefly used to underpaint the warm flesh tones. Top. |Ground terre verte pigment| Verdigris ("green of Greece," "salt green"): Making the Pigment: Made by treating copper sheets with the vapors of vinegar, wine, or urine and scraping the resultant corroded crust. Placing copper in ammonia will cause it to turn blue; adding a few drops of acetic acid (vinegar) precipitates a light cyan-green salt. Copper sheets can also be spread with honey and sprinkled with salt before treated with the acid for a slightly different shade termed "salt green." Chemical Properties: Copper acetates ranging in color from green to blue. 
Reactions with copper acetate vary among substances such as the following: copper acetates dissolve in mineral acid, alkalis convert them into blue copper hydroxide, and oils, resins and proteins react to form green transparent copper oleates, resinates, and proteinates. Of the many different types of verdigris, each can be classified as either basic or neutral. Neutral verdigris is Cu(CH3COO)2· H2O, and basic verdigris contains more Cu(OH)2 and H2O. Neutral verdigris is neutral copper acetate, which occurs when basic acetates are dissolved in acetic acid, or when basic verdigris is ground up with strong acetic acid. Decomposition of neutral verdigris occurs when a solution is boiled. This verdigris dissolves in acetic acid. The shape of neutral verdigris is hexagonal and rhombic with distinct boundaries. Basic verdigris forms from the combination of air, water vapor, acetic acid vapor, and copper or a copper alloy. It forms a blue or blue-green solid. It is often made up of fine needles. In the presence of HCl, verdigris is soluble and forms a green solution. From interaction with NaOH it is soluble and precipitates. In the first three months of use the verdigris formulations can change from blue-green to green. All verdigris reacts with resin to form copper resinates. This copper resinate is rather transparent and often used as an overpaint to increase depth of saturation of an opaque green. Artistic Notes: Verdigris is reactive and unstable, requiring painters to use isolating varnishes to protect its color. Sulfur compounds in the air darken all forms of verdigris. It is, however, lightfast. Top. |Corroded copper plate||Ground verdigris pigment| Viridian ("Guignet's green," "Permanent Green"): Origin and History: Guignet of Paris patented the process for manufacturing viridian, or transparent oxide of chromium, in 1859. Viridian is a non-poisonous, permanent color that replaced verdigris and emerald green as a glazing color by the turn of the 20th century. Making the Pigment: Mix 3 parts of boracic acid and 1 part of bichromate of potassa, and heat to about redness. Oxygen gas and water are given off. The resulting salt, when thrown into water, is decomposed. The precipitate is collected and washed. Chemical Properties: Hydrated chromium sesquioxide (PG 18). Its tint is muted like colors of the natural world. Interestingly, viridian becomes very permanent when roasted into chromium oxide green. Viridian is distinguished microscopically by its large particle size. Viridian's particles are slightly rounded, and the pigment is insoluble and unchanged in chemical tests. Artistic Notes: Although viridian requires much more binder to grind it as an oil paint (50-100%) compared to thirty percent for chromium oxide green, both are good driers. Viridian, however, is prone to cracking if one uses this transparent pigment too thickly. Top. |Chromium(III) oxide heated, cooled, and hydrated||Ground viridian pigment| Azurite ("Blue verditer," "mountain blue," "lapis armenius," "azurium citramarinum," "blue bice"): Origin and History: Latin borrowed a Persian word for blue, lajoard, which in the form of lazurium became azurium, and gave us our word azure. It is composed of a basic carbonate of copper, found in many parts of the world in the upper oxidized portions of copper ore deposits. Azurite mineral is usually associated in nature with malachite, the green basic carbonate of copper that is far more abundant.
Azurite was the most important blue pigment in European painting throughout the Middle Ages and Renaissance, even though the more exotic and costly ultramarine received greater acclaim. Making the Pigment: To prepare a color from it, lump azurite is ground into a powder and sieved. Coarsely ground azurite produces dark blue, and fine grinding produces a lighter tone; however, if not ground fine enough, it is too sandy and gritty to be used as a pigment. The medieval system included washing it to remove any mud and then separating the different grains by some process of levigation. If plain water is used it is a slow, laborious process, so they used solutions of soap, gum and lye. When azurite is washed, the very fine particles are rather pale, greenish sky-blue, and not much admired for painting. The best grades of azurite for painting were coarse: not sandy, but so coarse that it could be quite laborious to lay them on. Chemical Properties: Cu3[CO3]2[OH]2; hardness 3.5, specific gravity 3.7; monoclinic. Azurite sometimes looks a little like lapis lazuli, and the two were often confused in the Middle Ages. To tell them apart with certainty the stones were heated red-hot. Azurite turns black when this is done, and true lapis is not injured. It does not blacken from the effects of sulphur gases as some chemists have supposed, but from the action of the strong alkalis improperly used in picture cleaning, and from the purely optical effect of darkened varnish surrounding its particles. The color can, however, be ruined by the presence of acids. Artistic Notes: Glue size was often used as a binder to hold the pigment grains firmly in place. (Size is more easily affected by protracted dampness or by washing than egg tempera, and blues in wall paintings have therefore sometimes perished through the destruction of their binder where colors in tempera have stood.) It was necessary to apply several coats of azurite to produce a solid blue, but the result was quite beautiful. The actual thickness of the crust of blue added to the richness of the effect, and each tiny grain of the powdered crystalline mineral sparkled like a minute sapphire, especially before it was varnished. The open texture of a coat of azurite blue has often been its undoing on panels; the varnish sinks into it and surrounds the particles of blue. As the varnish yellows and darkens, the power of the azurite to reflect blue light is destroyed, strangled by the varnish---a large number of blacks in medieval paintings were originally blues, only obscured by the discoloration of the varnish. It is otherwise remarkably permanent. Top. |Rough azurite mineral||Ground azurite pigment| Cerulean blue: Origin and History: Although Höpfner introduced cerulean blue as early as 1821, it was not widely available until its reintroduction in 1860 by George Rowney in England. Its name was derived from the Latin word caeruleum, meaning sky or heavens. Caeruleum was used in classical times to describe various blue pigments. This is a greenish, light, very pure and dense compound of cobaltous and tin oxides (supposed to be a stannate of cobalt). You can get a similar blue by mixing and firing tin and copper chalcanthite with quartz sand, as the Egyptians did to make their highly prized frit colors. Making the Pigment: Caeruleum is cobaltous stannate and is made by mixing cobaltous chloride with potassium stannate. The mixture is thoroughly washed, mixed with silica and calcium sulfate and heated.
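A rough sketch of the proportions implied by that description, assuming an idealized precipitation CoCl2 + K2SnO3 → CoSnO3 + 2 KCl (the 1:1 stoichiometry, the anhydrous salts, and the script itself are assumptions for illustration, not details from the source):

```python
# Approximate weight of potassium stannate per gram of cobaltous chloride,
# assuming an idealized 1:1 reaction: CoCl2 + K2SnO3 -> CoSnO3 + 2 KCl.
M_COCL2 = 58.93 + 2 * 35.45                  # ~129.8 g/mol, anhydrous
M_K2SNO3 = 2 * 39.10 + 118.71 + 3 * 16.00    # ~244.9 g/mol

ratio = M_K2SNO3 / M_COCL2
print(f"~{ratio:.2f} g of K2SnO3 per 1 g of CoCl2")   # roughly 1.9 : 1 by weight
```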
Artistic Notes: As a color, it is very valuable to the landscape artist in atmospheric tones, though a similar color can also be made by using greenish Prussian blue with zinc oxide. It is absolutely permanent (though in the tube it needs an addition of 2% wax). This variety is a fairly true blue (not greenish or purplish), but it does not have the opacity or richness of cobalt blue. It is not recommended for use in watercolor painting because of chalkiness in washes. In oil, it keeps its color better than any other blue. Top. |Cerulean blue swatch| Cobalt blue ("Azure"): Origin and History: A modern replacement for smalt, cobalt blue is a non-poisonous metal color. The isolation of the blue color of smalt was discovered in the first half of the eighteenth century by the Swedish chemist Brandt. In 1777, Gahn and Wenzel found cobalt aluminate during research on cobalt compounds, but the color was not manufactured commercially until late in 1803 or 1804. Making the Pigment: 1 g of cobalt(II) chloride (CoCl2 · 6H2O) and 5 g of aluminum chloride are homogenized in a mortar and heated in a test tube with a gas burner for about 3 to 4 minutes. Chemical Properties: Cobalt oxide and aluminum oxide (PB 28); black cobalt oxide fires blue. It is unaffected by acids, alkalis and heat. It has coarse particles, like azurite and genuine ultramarine, but is distinguished microscopically by its non-crystalline appearance. It is chemically insoluble and unchanged, even in strong hydrochloric acid. Artistic Notes: Cobalt blue is useful in all techniques, as well as being lightfast. It needs 100% binder but dries very quickly in oil, with the same drying power as the metal lead. Because of this, it often causes cracks in the picture when painted over layers which are not sufficiently dry. Cobalt blue is susceptible to the yellowing of oils, as all cool tones are, but yields a clear tone, whereas ultramarine in thick layers, if not mixed with sufficient white, appears to be black. It is totally stable in watercolor and fresco techniques. Top. |Heating reaction mixture||Ground cobalt blue pigment| Egyptian blue ("frit," "Pompeiian blue"): Origin and History: A very stable synthetic pigment of varying blue colour. It is one of the oldest man-made colors, commonly found on wall paintings in Egypt, Mesopotamia and Rome. Many specimens, well over 3000 years old, appear to be little changed by time. Making the Pigment: Heat a mixture of a calcium compound (carbonate, sulfate or hydroxide), a copper compound (oxide or malachite) and quartz or silica gel, in proportions that correspond to a ratio of 4 SiO2 : 1 CaO : 1 CuO, to a temperature of 900°C, using a flux of sodium carbonate, potassium carbonate or borax. The mixture is then maintained at a temperature of 800°C for a period ranging from 10 to 100 hours. Chemical Properties: Calcium copper silicate, CaCuSi4O10. It is insoluble in acids, even at warm temperatures. Artistic Notes: It has a moderate covering power. It can be used in fresco. Not advised in tempera, oil and encaustic. Top. |Egyptian blue pigment||This blue was the color of Egyptian royalty| French ultramarine ("French blue," "Guimet's blue," "permanent blue," "synthetic ultramarine"): Origin and History: Ultramarine is imitated nowadays by a process which was invented in France in the nineteenth century as a result of a prize offered by the French government. The raw materials of ultramarine manufacture are soda and china clay and coal and sulphur, all common and inexpensive materials.
The process requires skill, is inexpensive, and the product is many thousand times less costly than genuine ultramarine prepared from lapis lazuli. The first observance of the substance was made by Goethe in 1787, when he noticed blue deposits on the walls of lime kilns near Palermo. He mentioned that the glassy blue masses were cut and used locally as a substitute for lapis in decorative work. In 1828, Jean Baptiste Guimet perfected a method of producing an artificial, and cheaper, ultramarine pigment. Making the Pigment: Artificial ultramarine, also known as French ultramarine, was made by heating, in a closed fire-clay furnace, a finely ground mixture of China clay, soda ash, coal or wood charcoal, silica and sulfur. The mixture was maintained at red heat for one hour and then allowed to cool. It was then washed to remove excess sodium sulfate, dried and ground until the proper degree of fineness was obtained. Chemical Properties: Because the particles in synthetic ultramarine are smaller and more uniform than those of natural ultramarine, they diffuse light more evenly. Chemically, the artificial ultramarines are not distinguishable from the blue particles of genuine lapis; you can only tell by the percentage of colorless optically active crystals, whereas the artificial is pure blue and free from diluting elements. Artistic Notes: Synthetic ultramarine is not as vivid a blue as natural ultramarine. Synthetic ultramarine is also not as permanent as natural ultramarine. French ultramarine is light-resisting but, owing to the use of sulphur in its manufacture, may discolor in the presence of acid. Top. |French ultramarine pigment| Indigo (also see Woad): Origin and History: Indigo was probably used as a painting pigment by the ancient Greeks and Romans. Marco Polo (13th century) was the first to report on the preparation of indigo in India. Indigofera tinctoria thrives in a tropical climate; the active ingredient is found in the leaves, an indole derivative fermented from a sugar. Aniline blue has the same chemical composition and replaced it in 1870. Making the Pigment: To prepare the dye, freshly cut plants are soaked until soft, packed into vats and left to ferment. It is then pressed into cakes for use as a watercolor, or dried and ground into a fine powder for use as an oil paint. In the lab, 4 g of o-nitrobenzaldehyde is dissolved in 40 ml of acetone in a 200 ml Erlenmeyer flask. 20 ml of deionized water are then added and the flask is shaken thoroughly. Next, 16 ml of a 1 molar solution of sodium hydroxide is added slowly. The mixture is stirred with a glass rod and left standing for five minutes. The precipitated indigo is then filtered off and dried at room temperature. Chemical Properties: C16H10N2O2. Some of the various chemical tests by which indigo may be identified are: the sublimation test, nitric acid test, hydrosulfite test, solubility tests, and thin-layer chromatography. Indigo is characterized as having good lightfastness (light resistance), good to moderate alcohol resistance, and low oil resistance. Indigo's chemical properties make it difficult to dissolve in hot ethanol, amyl alcohol, acetone, ethyl acetate, and pinene, but readily soluble in boiling aniline, nitrobenzene, naphthalene, phenol and phthalic anhydride. It is heat resistant to 150 degrees Celsius and is resistant to air. The precipitate is insoluble in water. Alkalis dissolve it and form the sodium salt, indigo white, which oxidizes into many shades of blue.
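As a quick check on the laboratory preparation above (the classic route from o-nitrobenzaldehyde and acetone in alkali): two moles of the aldehyde yield at most one mole of indigo, so 4 g of starting material caps the theoretical yield at about 3.5 g. A short, purely illustrative sketch of that arithmetic:

```python
# Theoretical indigo yield from 4 g of o-nitrobenzaldehyde,
# assuming 2 o-nitrobenzaldehyde (+ acetone, NaOH) -> 1 indigo.
M_ALDEHYDE = 151.12   # o-nitrobenzaldehyde, g/mol
M_INDIGO = 262.27     # indigo, C16H10N2O2, g/mol

grams_aldehyde = 4.0
mol_indigo = (grams_aldehyde / M_ALDEHYDE) / 2
print(f"Theoretical maximum: {mol_indigo * M_INDIGO:.2f} g of indigo")   # ~3.5 g
```

The practical yield is of course lower, since the precipitation and washing steps lose material.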
A by-product of this natural plant dye formed a pigment which is heavy and impermanent, and therefore cumbersome to use, along with thioindigo, a red-violet coal tar pigment which is permanent, though only in watercolor. Artistic Notes: Indigo does not hold up in an oil base. It has fair tinting strength and may fade rapidly when exposed to strong sunlight. Worked in tempera or beneath varnish it can be very stable. It is also stable when exposed to hydrogen sulfide. Top. |Blocks of indigo dye||Artificially precipitating indigo||Examples of indigo dye colors| Lapis lazuli ("Genuine ultramarine," "azzurrum ultramarine", "azzurrum transmarinum", "azzuro oltramarino", "azur d'Acre", "pierre d'azur", "Lazurstein"): Origin and History: Made from the semi-precious stone lapis lazuli, a rock of many compounds: lazurite (a sodium sulfosilicate that forms deep blue crystals), haüyne, sodalite (a sodium aluminum silicate with sodium chloride that occurs in crystals and masses), and nosean. Lapis lazuli is a contact metamorphic mineral found in limestone and granite; the best is found in Afghanistan, from ancient times until today. The earliest occurrence of its use as a pigment was in the sixth and seventh century wall paintings in the cave temples at Bamiyan in Afghanistan. When it was used in medieval Italy, its most extensive use was in illuminated manuscripts and panel paintings, complementing the use of vermilion and gold, and it was as expensive as gold to work with. The highest quality and most intensely blue-colored ultramarine was often reserved for the robes of Christ and the Virgin. Gradations of color were easily achieved and quite beautiful, but the cost eventually drove it into obsolescence. Making the Pigment: As it is such a hard stone, it is difficult to separate the pigment from the other constituents. The blue cannot be separated from the impurities by washing with water, as noted in Byzantine texts, as doing so would create a gray powder. Natural ultramarine is purified from ground lapis lazuli by mixing it with wax and kneading in a dilute lye bath. The brilliant blue lazurite crystals preferentially wash out and are collected. Cennino Cennini gives a late 14th-century recipe for ultramarine pigment in Il Libro dell' Arte. Chemical Properties: A complex sulfur-containing sodium aluminum silicate, Na8-10Al6Si6O24S2-4. Chemically, the mineral lapis lazuli from which the pigment is made is an extremely hard and complex rock mixture: a mineralized limestone containing grains of the blue cubic mineral called lazurite, which is the essential constituent of the pigment. Also present, however, are two isomorphous minerals, one containing sulphate and the other chloride, both of which sometimes occur in a blue form, as well as in other colors. There are other silicates which may also be present, creating variables in the quality and appearance of the stone. The best are of a uniform deep blue, but can be paler, as there is a great deal of white calcite and iron pyrites that sparkle like gold; there is also the possible intermingling with white crystalline materials. Artistic Notes: Natural ultramarine has a high stability to light, as is proven by the fact that examples on paintings as much as five hundred years old have as intense and pure a blue color as either the freshly extracted pigment or the best synthetic.
There is a disorder known as "ultramarine sickness" which has occasionally been noted on paintings as a grayish or yellowish gray mottled discoloration of the paint surface which also occurs from time to time with artificial ultramarine used industrially, which is brought about by the action of atmospheric sulphur dioxide and moisture. An alternative cause may be the acidity of an oil or oleo-resinous paint medium: the slow drying of the oil during which time water may have been absorbed to cause swelling, opacity of the medium and therefore whitening of the paint film. A difference in color tone and value was achieved by glazing with other colors, such as ochre and white, particularly in skies, over the ultramarine. This was made necessary by the cost of the pigment, which would be largely lost in mixtures. Top. |Rough lapis||Kneading with wax and water, and straining||Highest quality ultramarine pigment| Phthalocyanine blue ("Monastral blue," "heliogen blue," "phthalo blue," "copper phthalocyanine"): Origin and History: An organic blue dyestuff that was developed by chemists under the trade name, "monastral blue," and presented as a pigment in London, November 1935. Making the Pigment: It is prepared by fusing together phthalic anhydride and urea to copper chloride, first washing it in dilute caustic soda and then in dilute hydrochloric acid. It then becomes copper phthalocyanine, but is not conditioned as a pigment until it is dissolved in concentrated sulfuric acid and carefully washed in excess water and filtered, the resulting paste being used thus directly in the preparation of lakes by adsorption on aluminum hydrate, or dried for incorporation into non-aqueous mediums. Chemical Properties: It is a highly complex organic synthesis. Pure copper phthalocyanine in crystalline form is a deep blue with a strong bronze reflection, but when dry in pigment form is bright blue without any bronziness. They' re lightfast, and an ideal pure blue for it absorbs light almost completely except for the green and blue bands. When photographed, this line of colors tends to turn brown in the camera lens, being logically attributed to the fact that though it absorbs all other colors of light, there must be some refractive or reflective bounce of the initial bronze tone of the mineral in crystal that is not evident to the eye. Artistic Notes: Phthalo green has a very high tinting strength and transparency. Many painters have been warned away from it because of this color strength. It doesn't react to sulfur as metallic copper does. The pigment is extremely fine and light in its powdered form; a drop of denatured alcohol helps it go into solution with a binder. Top. |Phthalo blue pigment| Prussian blue ("Paris blue"): Origin and History: The pure pigment, called the first of the modern pigments, is Paris blue and has a coppery reddish sheen. Its invention at the beginning of the eighteenth century displaced azurite from the European palette. It was made by the colormaker Diesbach of Berlin in about 1704. Diesbach accidentally formed the blue pigment when experimenting with the oxidation of iron. The pigment was available to artists by 1724 and was extremely popular throughout the three centuries since its discovery. Making the Pigment: Dissolve sulphate of iron (copperas, green vitriol) in water; boil the solution. Add nitric acid until red fumes cease to come off, and enough sulphuric acid to render the liquor clear. This is the persulphate of iron. 
To this add a solution of ferrocyanide of potassium (yellow prussiate of potash), as long as any precipitate is produced. Wash this precipitate thoroughly with water acidulated with sulphuric acid, and dry in a warm place. Chemical Properties: It is a compound of iron and cyanogen, ferri-ammonium ferrocyanide (PB 27:1). Antwerp blue and Milori blue are adulterated products which, because of their intense chromatic power, are often met with. Paris blue is instantly discolored by potassium hydroxide, and is sensitive to all alkalis. Artistic Notes: Paris blue is non-poisonous, uncommonly strong in coloring power and very permanent in all techniques except fresco, where it loses intensity and leaves rust-colored spots. In very light mixtures it is also known to bleach out. Paris blue dries well but takes up 80% binder. Paris blue is splendid in paintings when used with brilliant oxide of chromium, or in shadows when mixed with madder lake; it should be used sparingly, because as an oil color it tends to give the picture a darker, heavier character than cobalt blue or ultramarine. It can be used in tempera and watercolors, where, when mixed with zinc white, it has the peculiar characteristic of fading when exposed to light but completely regaining its chromatic strength in the dark. Top. |Potassium ferricyanide and iron(III) chloride are poured together and precipitate||Prussian blue pigment| Smalt ("starch blue"): Origin and History: First described by Borghini in 1584. A moderately fine to coarsely ground potassium glass of blue color, due to the small but variable amounts of cobalt added as cobalt oxide during manufacture. The principal source of cobalt used in this preparation in Europe during the Middle Ages appears to have been the mineral smaltite, one of the skutterudite mineral series. In the seventeenth and eighteenth centuries other associated cobalt minerals were probably used as well (erythrite and cobaltite). Cobalt ores were also used for coloring glass in Egyptian and classical times. The origin of cobalt-tinted glass probably coincided with the development of vitreous enamel techniques, Near Eastern in origin, as enamels were made from easily fusible powdered and colored materials similar to glass. Making the Pigment: The cobalt ore was roasted, and the cobalt oxide obtained was melted together with quartz and potash or added to molten glass. When poured into cold water, the blue melt disintegrated into particles, and these were ground in water mills and elutriated. Several grades of smalt were made according to cobalt content and grain size. The complex ores of Saxony were first roasted, during which much of the arsenic was volatilized. The oxides of cobalt, nickel and iron were then melted together with siliceous sand, and the resulting product, called zaffre or zaffera, was in part sold to potters and glassmakers. The rest of the product was used instead of potash. A violet tint was obtained. Artistic Notes: As smalt is a glass, its particles are transparent, and its hiding power is lower even than that of cobalt blue. Therefore it must be coarsely ground for use as a pigment. When used in an oil medium, it has a tendency to settle and streak down perpendicular surfaces. Like all glass-based pigments, it is stable unless improperly made, and is better in aqueous media and in lime for fresco. Top.
|Heating sand, potassium carbonate and cobalt oxide||Product pulverized and filtered||Ground smalt pigment| Turnsole ("folium," "heliotrope"): Origin and History: An organic pigment made from the plant now called Chrozophora tinctoria. The turnsole violet was highly esteemed in fourteenth century Italy as a common and universal shading for all colors. The clothlets were the most convenient form of color for illuminators: a clothlet was placed in a dish and wetted with a little glair or gum water, and the color would dissolve out of the cloth and into the medium, forming a transparent stain. Making the Pigment: The dye was called turnsole when blue, and folium when red, the variation being a result of pH sensitivity. Extraction of the color was done by saturating bits of cloth with the juice of the seed capsules, which were gathered in the summer. The juice was extracted by squeezing gently so that the kernels were not broken; when a good supply was collected the cloths were dipped into it, dried, and re-dipped and re-dried over and over until they had soaked up substantial color. For red, plain linen cloth would work; for violet, the cloths were first soaked in lime water and dried so the lime would neutralize the acidity; for blue, the cloths were used to soak up the color and then exposed to ammonia to increase the alkaline content. As a blue it was impermanent and would revert to violet, but this was not considered a flaw, and large quantities of turnsole were used in the later Middle Ages. Top. Woad (also see Indigo): Origin and History: A substitute for the imported Indian indigo (even in classic times) was known in the native European weed called in Latin glastum or isatis, and in English, woad: a shrubby herb with broad, green leaves which contain the raw material of a blue dyestuff. Both indigo and woad were a very dark, purplish, even blackish color, and less attractive unless mixed with a material to lighten them. Color was also sometimes made with a lime made from eggshells. A whole family of indigo or woad pigments--mixtures of indigo with powdered marble (natural and calcined), calcined gypsum, calcined eggshells and white lead--we now regard as pigments in themselves, independent of the indigo from which they were made. Considering the extra cost of imported indigo, it was naturally largely replaced in the middle ages by domestic indigo from woad. Woad was grown commercially in England until the early 1950s as an adjunct to dyeing with true indigo. It's known as a "gross feeder" that exhausts the land it's grown on unless the salts it extracts are constantly replaced. Making the Pigment: Simply gathering the leaves produces a deep and lasting blue-black stain on the hands. Woad leaves were stripped from the plants, crushed, and made up into balls, forming the common raw material of commerce in domestic indigo; for use in dyeing they were powdered, spread out, damped and allowed to ferment. They were then made up into a dye bath with water and bran (other materials were used as well) and subjected to further fermentation, all of which took great skill. In the course of dyeing, a scum collects on the surface of the vat. Called "florey," or the flower of the woad, this was skimmed off, dried and used alone or in elaborate compounds under the name of indigo in the Middle Ages. Chemical Properties: Chemically, there is little difference between this blue and that of indigo. Top. Caput mortuum: Origin and History: A purplish-brown iron oxide color.
Adding heat [calcining] brings out the red side of iron oxide and adds a transparent quality if silicates are involved, as in sienna. Magnesium is also present in caput mortuum. Top. |Caput mortuum swatch| Cobalt violet: Origin and History: The remarkable range of pigments that could be produced with cobalt included cobalt violet, known since 1859. Salvetat first described the preparation of cobalt violet, dark, in the Comptes Rendus des Seances de l'Academie des Sciences XLVIII, in an article titled "Matieres minerales colorantes vertes et violettes." The light variety was developed in Germany in the early nineteenth century and is anhydrous cobalt arsenate. Making the Pigment: The dark variety is anhydrous cobalt phosphate, which was made by mixing a soluble cobalt salt with disodium phosphate. It is washed and then heated at a high temperature. The light variety is particularly poisonous because of its arsenic content. 2 g of cobalt chloride and 1.3 g of sodium hydrogen phosphate are each dissolved in 20 ml of deionized water; the solutions are poured together and the resulting precipitate is filtered off. Chemical Properties: Co3(PO4)2 or Co3(AsO4)2. The French product is a cobaltous oxide arsenate, and therefore extremely poisonous. Dark cobalt violet, a cobaltous phosphate, is a German product and is very permanent as opposed to the French variety, but is quite expensive, and thus hardly necessary. Cobalt violets appear as irregular-shaped particles and particle clusters under the microscope and are largely unaffected by chemical tests. Artistic Notes: In tempera it is not a good tube color, as it hardens too easily, and the commercial watercolors of this pigment also have an extremely short tube life. The light variety of cobalt violet turns dark in oil due to the yellowing of linseed oil. Both of the cobalt violets are considered to be very permanent. They are both compatible with all painting media. Their transparency, weak tinting strength and high cost limited their use, but their fastness to light made them more desirable than the older organic dye violets. Top. |Precipitating the pigment||Cobalt violet pigment| Manganese violet: Origin and History: This is a Nuremberg violet, a mineral violet, and is permanent, heat-proof and non-poisonous. However, as a manufactured product it is not a beautiful tone, finding only occasional use in fresco and mineral painting. The new manganese violets of German manufacture are powerful, fast colors furnished in several tones. This pigment was invented by Leykhuf in 1868. Chemical Properties: Manganese ammonium phosphate (PV 16), (NH4)2Mn2(P2O7)2 - Mn3(PO4)2 * 3H2O. When heated it fuses into a hard white substance. It is a manganese ammonium pyrophosphate, prepared from manganese phosphate and phosphoric acid by boiling the purple mass with ammonium carbonate; it is then filtered, washed and finally fused. Artistic Notes: It covers and dries well in oil and tempera, and works well in pastel, encaustic and watercolor, but not in fresco. It has a moderate opacity. Top. |Manganese violet swatch| Tyrian purple ("Royal purple"): Origin and History: This organic dye was prepared from various mollusks or whelks, including Murex brandaris, Purpura haemostoma, Purpura lapillus, and Carpillus purpura, which can be found on the shores of the Mediterranean and Atlantic coasts and which excrete the fluid from which the dye is obtained. One gram of this dye is made from the secretion of 10,000 of these large sea snails. Chemical Properties: This purple color is remarkably stable, resisting alkalis, soap, and most acids.
It is insoluble in most organic solvents. Artistic Notes: Tyrian purple was used in the preparation of a purple ink and in dyeing the parchments upon which the codices of Byzantium were written. It was also the traditional "Imperial Purple" of ancient emperors, kings, and magistrates. Top. |Murex shellfish, 9 cm| Bistre: Origin and History: A French brown pigment of organic, vegetal and synthetic origin, which was used as a chalk or an ink. In Britain, and especially France, from where the name derives, the colour is in fact produced from beechwood, the soot of which is mixed with gum arabic and water. In the seventeenth century it was used primarily as a wash, as can be seen in Rembrandt's drawings, which were mostly done in this technique. It is questionable, however, whether the wood was charred with or without the presence of air in the process. Chemical Properties: It is a compound not clearly identified in chemistry. Artistic Notes: Its major use is in the watercolor medium, because its color is similar to that of asphaltum, which is used in the oil medium; it can, however, be used in oils if required, and is soluble in turpentine and naphtha. Not advised for fresco, encaustic and tempera. Top. Bitumen ("Asphaltum," "Judaic Bitumen," "Antwerp Brown"): Origin and History: A pigment of mineral, organic and natural origin, it is a mixture of hydrocarbons, waxes and petroleum. It is sometimes classified as dark brown. Found in Egypt, Trinidad and Peru, this blackish-red color, which can vary depending on the source, became more important in oil painting techniques, in which it is often responsible for a characteristic cracking or 'alligatoring' of the surface. Very popular in the sixteenth and seventeenth centuries for flesh-tone shadows. Making the Pigment: It is produced from the slow oxidation and slow polymerization of petroleum and similar organic materials. Chemical Properties: CnH2n+2. It is soluble in turpentine, naphtha and organic solvents, while it is insoluble in water and alcohol. Artistic Notes: It has a moderate opacity but is sometimes difficult to dry. Not advised in the tempera technique, fresco and encaustic. Bone brown: Making the Pigment: Like bone black, it is made by charring bones, but for the brown the charring is incomplete, and the pigment therefore contains tarry matter which is non-drying and retards the drying of all other pigments it is mixed with. This matter also makes the pigment fugitive to light, further reducing its value in traditional methods. Top. Burnt sienna: Making the Pigment: Burnt sienna is prepared by calcining raw sienna, which in the process undergoes a great change in hue and depth of color; in going from the ferric hydrate of the raw earth to ferric oxide, it turns to a warm, reddish brown. Chemical Properties: Fe2O3 * nH2O + Al2O3 (60%), manganese dioxide (1%); calcined natural iron oxide (PBr 7). Microscopically, heating makes the pigment more even in color, and the grains are reddish brown by transmitted light. Artistic Notes: Because of its transparency, burnt sienna is used as a fiery glazing color which requires much binder, about 180%, and as an oil color is apt to jelly. This is remedied by washing, which however dilutes the intensity of the color. In 1768, Martin Knoller stated that very strong heat will produce a sienna resembling vermilion that may be used in fresco out of doors. American burnt sienna is a strong type of ochre and is neither as clear nor as brilliant as the Italian sienna. It supposedly imparts a muddy tone but is very permanent in all techniques. Top.
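Binder requirements throughout this page are quoted as a percentage of the pigment's weight (about 180% for burnt sienna above, 15% for Naples yellow, 30% for emerald green and zinc white, 80% for Prussian blue and burnt umber, 100% for cobalt blue, and so on). A trivial helper for turning those figures into grams of oil for a given batch; the percentages are taken from the text, while the function itself is illustrative:

```python
# Convert the binder percentages quoted on this page into grams of oil per batch.
BINDER_PERCENT = {          # % of pigment weight, as quoted in the text
    "naples yellow": 15,
    "emerald green": 30,
    "zinc white": 30,
    "vandyke brown": 70,
    "burnt umber": 80,
    "prussian blue": 80,
    "cobalt blue": 100,
    "burnt sienna": 180,
}

def oil_needed(pigment: str, pigment_grams: float) -> float:
    """Grams of oil needed to grind `pigment_grams` of the named pigment."""
    return pigment_grams * BINDER_PERCENT[pigment.lower()] / 100

print(oil_needed("burnt sienna", 50))    # 90.0 g of oil for 50 g of pigment
print(oil_needed("Naples yellow", 50))   # 7.5 g
```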
|Burnt sienna swatch| Making the Pigment: Burnt umber is a combination of iron oxide, oxide of manganese and clay, made by burning raw umber to drive off the liquid content. Chemical Properties: An ochre containing manganese oxide and iron hydroxide (PBr 7), Fe2O3 · MnO2. In acids it dissolves in part leaving a yellow solution; hydrochloric acid gives it an odor of chlorine. In alkalis it discolors a little and when heated, becomes a reddish brown. It has the same properties as natural umber. Completely lightfast and unaffected by gases, and makes a good glaze when thinned with oil or varnish. Artistic Notes: Because of the manganese content it is an excellent dryer. It can be used in all techniques but requires 80% binder, with an additional 2% wax when in tubes to prevent hardening. Many umbers have a greenish tinge, and in oil, it tends to turn dark later on, especially if the underlayers were not thoroughly dried, but this darkening may also occur in alla prima painting. It is best not to use the color in fresco, as in the open it tends to decompose and produces a burnt heavy tone. Can be mixed with all other pigments except for the Lakes. Burnt umber turns especially dark, surprisingly as the burnt tones are usually more reliable in this respect. This tendency to darken is increased by the modern practice of grinding the tube colors too finely. Top. |Ground umber pigment| Vandyke brown ("Cassel brown", "cologne earth," "Caste Earth"): Origin and History: Made from humic substances in soil, peat or brown coal, this color is found in the pictures of the old masters, among them Rubens, who used it mixed with gold ochre as a warm, transparent brown, which held up particularly well in resin varnish. Artistic Notes: It is partially soluble in oil and has a slight tendency to turn gray (most apparent when used in whites). When used with resin ethereal varnish it is more permanent than when used in oil; however, this is impossible in painting, and unnecessary. It requires 70% binder. For restoring purposes it is useful when mixed with varnish. It is sensitive to lyes and becomes a cold gray in fresco, making it useless on a wall. This color is fugitive to light and unreliable. Top. Origin and History: Made from bones which have not been entirely charred, and treasured by painters for their warm tones. Artistic Notes: This is the least permanent of all the black colors, requires 100% binder as all blacks do, and doesn't always dry well. Top. |Bone black pigment| Origin and History: Prepared by charring bones, horns etc., in the absence of air. Ivory black was established in antiquity by the example of Apelles, but there is no evidence that it was continued in the Middle Ages. Chemical Properties: It is partially soluble in acids. Artistic notes: It is the purest and deepest black. It is the best dryer, and can be used in all techniques. When used by itself over a smooth white ground of for example, lead or cremnitz white, it cracks, but not when slightly mixed with other colors. Top. |Ivory black swatch| Making the Pigment: Lampblack is made by allowing a flame to play on a cold surface and collecting the soot which the flame deposited. Sometimes a beeswax candle, sometimes tallow; sometimes the flame of a lamp burning linseed, hempseed or olive oil; or by burning pitch or incense. It makes a difference as to what the source of the flame is as the black itself is pure carbon, but there are apt to be unburnt particles which may affect the color and working properties of the pigment. 
You can make your own lampblack ink using this method. Artistic Notes: Lampblack is often used for ink-making as it has an extremely fine grain and doesn't need grinding; it only needs to be mixed with a little gum water to make what we call India ink. Lamp black pigment is absolutely permanent; we know that pure carbon black will never fade, but the material with which the ink was bound has often perished or become brittle, and the surface of parchment is so hard and close grained that even the fine grains of lampblack may fail to penetrate it if the lampblack is suspended in a strong solution of gum. When mixed with water or water media, it becomes so light that the powder floats and is not very manageable; a drop or two of denatured alcohol helps it go into solution. It tends to be a bit greasy; and though an excellent pure black, apt to muddy a bit in mixtures. Top. Origin and History: Charcoal made from young shoots of grape vines were referred to in medieval times as the best of blacks. It is now referred to as more of a blue-black, considering the coolness of the grays that it produces in mixtures. Making the Pigment: It is important that the vine sprigs be thoroughly burnt and reduced to carbon, otherwise the color will be brownish and an unpleasant consistency; but they must not be burnt in the air or they might reduce to ashes instead of to carbon. They are packed tightly in little bundles in casseroles, covered and sealed, and baked in a slow oven. You can make your own vine black with a similar method. The resulting charcoal is used in sticks for drawing; for painting it is first powdered and ground up dry, and then mixed with water and ground for a long time between two hard stones. Top. |Vine black charcoal sticks||Ground vine black pigment| Cremnitz white (Also see Lead white.): Origin and History: Some companies offer the color Cremnitz white, but this is a misnomer because the original pigment for Cremnitz white has not been made since 1938. Chemical Properties: The same as lead or flake white, but made by a slightly different chemical process which leaves a faint vinegar odor. Artistic Notes: Not very permanent to sulphur gases, and therefore other whites are far better to use. Top. Lead white ("flake white," "kerms white," "Berlin white," "silver white," "slate white") : Origin and History: Used since antiquity, lead white was the only white used in European easel paintings until the 19th century. Lead white strongly absorbs X-rays, thus can be detected in paintings easily. It is one of the oldest man-made pigments, and its history dates back to the Ancient Greeks and Egyptians. Making the Pigment: It is a by-product of lead, and the purity of the color depends on the purity of the lead. Purifying processes greatly increase the cost of the product. White lead has always been one of the most important pigments in many painting techniques; yet chemists are still undecided as to just what our normal modern lead white is. The traditional method is called "the stack process." The "stack" consists of hundreds or thousands of earthenware pots containing vinegar and lead, embedded in fermenting tanbark or dung. They are shaped in a way that the vinegar and lead are separate, but the lead is still exposed to the vapors of the vinegar, by being coiled into a spiral which stands on a ledge inside the pot, above the well of vinegar in the bottom. 
It is then loosely covered with a grid of lead, which keeps the tan from falling in, allowing the carbon dioxide formed by the fermenting of the tan to enter the pot and act upon the coils and plates of lead with the vapors of vinegar and moisture. A thick layer of tan is spread out on the ground: the bottom of the pit, and the pots with lead and vinegar are arranged upon it, covered with their leaden grids. More tan is laid over them and then usually a loose flooring of boards, followed by more pots, more tan, and so on until all the pots are imbedded. Old tan partly used up, in certain proportions, will continue to maintain proper heat. The heat, moisture, acetic acid vapor and carbon dioxide do their work for a month or so, and the stacks are dismantled. The metallic lead by this point has been largely converted into a crust of white lead on the coils and grid. These are then separated from the unconverted metal and washed free of acid and soluble salts, and ground for future use in painting. Chemical Properties: Lead is a poison that builds up an incurable case of lead poisoning by breathing in a little of the dust of white lead, day after day, over time. Once it gets into the human system, it stays there until the body's tolerance level is met, and then becomes symptomatic. Medieval writers warn against the dangers of apoplexy, epilepsy, and paralysis, that come with exposure to it.Lead white darkens in the presence of sulphur, so should not be used in conjunction with cadmium colors or French ultramarine. Artistic Notes: Lead white has the warmest masstone of all the whites. It has a very subtle reddish-yellow undertone that is almost unnoticeable unless you are looking for it, or comparing lead white side by side with other kinds of white. This undertone is minimal in the best quality of lead whites. You will notice that lead white has a heavier consistency than other whites. This is because the pigment is particularly dense and does not lend itself to a paint of soft consistency. Lead white is also the fastest drying of all of the whites because of the drying action of the lead pigment upon the oil. This makes lead white particularly valuable for painters who need a relatively fast drying time for underpainting or Alla Prima techniques. Top. |Raw lead||19th century scraping mill||Lead white pigment| Origin and History: A titanium pigment, manufactured by F. Weber Co. of Philadelphia in 1921. Was at one time the best known white pigment used by artists. Today it is not nearly so well-known. Artistic Notes: It was 78% pigment with 22% oil, which must have prevented it from yellowing. Its hiding power was greater than that of flake white, and had no chemical reactions to other pigments. It was completely permanent and worked in all techniques. Top. Origin and History: Titanium White is truly the white of the 20th century. Although the pigment titanium dioxide was discovered in 1821, it was not until 1916 that modern technology had progressed to the point where it could be mass produced. First made commercially in Norway for industrial purposes. There are many industrial grades of titanium white pigment, none of which are used in their pure form for artists oil color. In oil, it dries to a spongy film that is quite unsuitable for artistic purposes. For this reason, titanium dioxide is always blended with one or more of the other white pigments, or an inert pigment to make a suitable artists oil color. 
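The "78% pigment with 22% oil" figure quoted for the Weber white above can be restated in the binder-percentage convention used elsewhere on this page (oil as a percentage of pigment weight); a one-line conversion, purely illustrative:

```python
# Restate "78% pigment / 22% oil" as oil per 100 parts of pigment by weight.
pigment, oil = 78.0, 22.0
print(f"{oil / pigment * 100:.0f}% binder relative to pigment weight")   # ~28%
```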
Since titanium dioxide, by itself, dries to a spongy film and zinc oxide dries to a brittle film, the two are combined in a balanced blend for better quality, professional grade titanium whites. In some brands, where zinc oxide predominates in the mixture, the color is called titanium-zinc white. Cheaper brands of budget grade paint are known to use a mixture of titanium dioxide with barytes or other inert pigments. Use of these types of whites is really a false economy because they lack both the brilliance and tinting strength of professional grade whites. Chemical Properties: Titanium dioxide, TiO2. Artistic Notes: A non-poisonous, good covering paint which is useful in all techniques. The masstone of titanium white is neither warm nor cool and lies somewhere between lead white and zinc white in that respect. It has a tinting strength superior to either of the other whites, and a drying time that is slower than that of lead white but faster than that of zinc white. It is truly an all-purpose white oil color. There is some dispute about its drying ability, but it dries evenly, even in mixtures. It yellows easily, especially in tube paints which have been mixed with heavy oil. Avoid any tubes of titanium that have oil residue at the top of the tube when opened, as these will surely yellow, and within a very short time. Titaniums are sometimes cut with large quantities of zinc white to improve their drying time and cohesion with the oil. Top. |Anatase mineral||Rutile mineral||Removing titanium dioxide from the oven||Titanium white pigment| Zinc white ("Chinese white," "silver white"): Origin and History: First introduced in 1840, this white is colder in appearance than lead white and doesn't cover nearly as well, yet it is far less expensive. Zinc has been known as a mineral since antiquity, when it was melted with copper to form brass. It was also known then, as it is today, as a medicinal ointment. In 1782, zinc oxide was suggested as a white pigment: Guyton de Morveau at L'Académie de Dijon, France, reported zinc oxide as a substitute for white lead. Metallic zinc had originally come from China and the East Indies. When zinc ore was found in Europe, large-scale production of the extracted metallic zinc began. In 1834, Winsor and Newton, Limited, of London, introduced a particularly dense form of zinc oxide which was sold as Chinese white. It was different from former zinc white in that the zinc was heated at much higher temperatures than the late eighteenth century variety. By 1844, a better zinc white for oil was developed by LeClaire in Paris. He ground the zinc oxide with poppy oil that had been made fast-drying by boiling it with pyrolusite (MnO2). In 1845, he was producing the oil paint on a large scale. Making the Pigment: The French method of manufacture, known as the 'indirect process,' used the zinc smoke derived from molten zinc, which was heated to 150°C and collected in a series of chambers. Chemical Properties: Zinc oxide, ZnO. If you heat zinc white, it turns to lemon yellow, but it will revert to white when cooled. It differs from lead white in this respect. Since zinc oxide is derived from smoke fumes, its particles are very fine and are difficult to observe except at very high magnification. It readily dissolves in alkaline solutions, acids and ammonia without foaming. It is non-poisonous, permanent and doesn't yellow, though these factors are true only of pure zinc white. It also disintegrates quickly out of doors, and increases in volume causing massive crackling, so it is not useful in fresco.
Ground in oil it dries slowly, especially in poppy oil, where the retarded drying time is needed. It does not dry as solid as lead white, due to some transparency in the pigment. A small addition of damar or mastic varnish speeds up the drying time. As it is very fine in powder form, it can be sufficiently mixed with only a spatula, requiring 30% binder and an addition of 2% wax in the tube to prevent hardening. It is compatible with all other pigments, including copper-based, but in watercolor it is destructive to the permanency of coal-tar colors and accelerates the process of fading (though it doesn't do this in oil.) Zinc is essentially permanent in sunlight although the yellowing in oil affects its brightness. It is neither as opaque nor heavy as lead white and it takes much longer to dry. Because zinc white is so "clean" it is very valuable for making tints with other colors. Tints made with zinc white show every nuance of a color's undertones to a degree greater than tints made with other whites, and the artist has time to complete his work before the paint dries. Despite its many advantages over lead white, zinc white oil color also has a drawback; it makes a rather brittle dry paint film when used unmixed with other colors. Zinc whites' lack of pliancy can cause cracks in paintings after only a few years if this color is used straight up to excess. Top. |Zinc white pigment| Incompatibilities/ Chemical Reactions: |Azurite||Must be coarse-ground; does not blacken from the effects of sulphur, but from the action of the strong alkalis improperly used in picture cleaning and from the optical effect of darkened varnish surrounding its particles; also discolors in the presence of acid| |Emerald green||Blackens any adjacent sulphur color| |Malachite||Must be coarse-ground; theoretically subject to blackening when mixed with sulphur colors, but not found in practice; discolors in the presence of acid| |Phthalocyanine green||Non-metallic copper-- can be used with sulphur colors| |Phthalocyanine blue||Non-metallic copper-- can be used with sulphur colors; when photographed, this color tends to turn brown in the camera lens| |Verdigris||Blackens sulphur colors; reactive and unstable, requiring isolating varnishes for protection| |Cadmium colors||Should never be mixed with lead white or other lead based paints; turns black with copper colors; known to turn brown in outdoor frescos (when mixed with lime); discolors in the presence of acid| |Orpiment and realgar||Even close proximity turns lead colors gray; should never be mixed with copper colors; discolors in the presence of acid| |Vermilion||Impurities can make it turn black when exposed to light; turns black with copper colors; has a corrosive action on binding materials; discolors in the presence of acid| |French ultramarine||Prone to discoloring or blackening in the presence of acid| |Lead white||Incompatible with sulphur colors| |Lead-tin yellow||Incompatible with sulphur colors| |Massicot||Incompatible with sulphur colors| |Minium||Can only be used as an oil color-- as a powder and in fresco it eventually turns black| |Naples yellow||Incompatible with sulphur colors| |Alizarin lake colors||Loses permanence when mixed with ochre, sienna and umber| |Barium yellow||Doubtful permanence in fresco| |English red||Often cut with gypsum when in the powder form-- dangerous in fresco| |Madder lake||Lime destroys it completely-- no fresco| |Ocher colors||Gypsum should never be present if intended for fresco use| |Prussian blue||Loses intensity in 
|Saffron||Alone or mixed with another color, the saffron will usually fade away|
|Zinc white||Disintegrates quickly out of doors, and increases in volume causing massive crackling-- no fresco; in watercolor it is destructive to the permanency of coal-tar colors and accelerates fading (not in oil)|

Pigment Links: The above information and photographs have been partly gleaned from the following color theory websites, in an attempt to condense some of the incredible amount of information--- historical, chemical, and artistic--- that is available for some of the better known pigments. For further study, these sites are a good start and more complete than what I have provided.

Sally Randall, History of Pigments and Painting Techniques-- historical and chemical descriptions.
Don Jusko, Pigments and Color Theory-- highly technical chemical descriptions, color charts, some history, etc.
Pigments Through the Ages-- informative site about various pigments, though unfinished.
The Household Cyclopedia-- making different kinds and colors of paints from scratch.
Cennino Cennini, Il Libro dell' Arte-- the complete text. Courtesy of John Rose (Master John the Artificer).
<urn:uuid:8efa2042-5368-44de-93d8-af2a0c4b2d59>
CC-MAIN-2016-26
http://www.jcsparks.com/painted/pigment-chem.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395548.53/warc/CC-MAIN-20160624154955-00132-ip-10-164-35-72.ec2.internal.warc.gz
en
0.951313
21,470
2.796875
3
The Changing Face of World Cities

A seismic population shift is taking place as many formerly racially homogeneous cities in the West attract a diverse influx of newcomers seeking economic and social advancement. In The Changing Face of World Cities, a distinguished group of immigration experts presents the first systematic, data-based comparison of the lives of young adult children of immigrants growing up in seventeen big cities of Western Europe and the United States. Drawing on a comprehensive set of surveys, this important book brings together new evidence about the international immigrant experience and provides far-reaching lessons for devising more effective public policies.

The Changing Face of World Cities pairs European and American researchers to explore how youths of immigrant origin negotiate educational systems, labor markets, gender, neighborhoods, citizenship, and identity on both sides of the Atlantic. Maurice Crul and his co-authors compare the educational trajectories of second-generation Mexicans in Los Angeles with second-generation Turks in Western European cities. In the United States, uneven school quality in disadvantaged immigrant neighborhoods and the high cost of college are the main barriers to educational advancement, while in some European countries, rigid early selection sorts many students off the college track and into dead-end jobs. Liza Reisel, Laurence Lessard-Phillips, and Phil Kasinitz find that while more young members of the second generation are employed in the United States than in Europe, they are also likely to hold low-paying jobs that barely lift them out of poverty. In Europe, where immigrant youth suffer from higher unemployment, the embattled European welfare system still yields them a higher standard of living than many of their American counterparts. Turning to issues of identity and belonging, Jens Schneider, Leo Chávez, Louis DeSipio, and Mary Waters find that it is far easier for the children of Dominican or Mexican immigrants to identify as American, in part because the United States takes hyphenated identities for granted. In Europe, religious bias against Islam makes it hard for young people of Turkish origin to identify strongly as German, French, or Swedish. Editors Maurice Crul and John Mollenkopf conclude that despite the barriers these youngsters encounter on both continents, they are making real progress relative to their parents and are beginning to close the gap with the native-born.

The Changing Face of World Cities goes well beyond existing immigration literature focused on the U.S. experience to show that national policies on each side of the Atlantic can be enriched by lessons from the other. The Changing Face of World Cities will be vital reading for anyone interested in the young people who will shape the future of our increasingly interconnected global economy.

MAURICE CRUL is professor of sociology at Erasmus University Rotterdam. JOHN MOLLENKOPF is Distinguished Professor of Political Science and Sociology, and director, Center for Urban Research, at The Graduate Center, City University of New York.

CONTRIBUTORS: Richard Alba, Susan K. Brown, Leo Chavez, Louis DeSipio, Rosita Fibbi, Nancy Foner, Barbara Herzog-Punzenberger, Philip Kasinitz, Elif Keskiner, Jennifer Lee, Laurence Lessard-Phillips, Leo Lucassen, Liza Reisel, Jeffrey G. Reitz, Jens Schneider, Philipp Schnell, Patrick Simon, Thomas Soehl, Van C. Tran, Constanza Vera-Larrucea, Mary Waters, Min Zhou.
<urn:uuid:6ef7387b-96f7-4360-a88a-1c77622dbf02>
CC-MAIN-2016-26
https://www.russellsage.org/publications/changing-face-world-cities
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395346.72/warc/CC-MAIN-20160624154955-00008-ip-10-164-35-72.ec2.internal.warc.gz
en
0.900663
778
2.53125
3
When the Capitol was a Bakery

Civil War bake ovens in the US Capitol basement

When Fort Sumter was attacked in April of 1861, President Lincoln sent out a call for troops. Tens of thousands of volunteers arrived to protect Washington, D.C., and they needed to be fed. Since four thousand soldiers were using the U. S. Capitol building as a barracks, the Army hastily constructed brick ovens in the basement. The huge amount of flour needed each day was purchased from local mills or those further north, and stored in “Washington’s Crypt” and in the hallways. When needed, the barrels were rolled down planks on the main staircase to the ovens below. After a few weeks, most of the soldiers were transferred from the Capitol, but the ovens and bakers remained. Within months, complaints were raised about the black smoke and soot, but it would take a year of the Senate trying to convince the House and then the Army before the ovens were removed. During the four years of the Civil War, the Capitol Bakery and its successor produced fifty million loaves of bread. This interesting and overlooked part of the Civil War, and of the history of the US Capitol, will be revealed in a heavily illustrated talk. For information contact Patricia Bixler Reber.
<urn:uuid:9b8a7546-98e3-44ed-a88e-c7150ef8763b>
CC-MAIN-2016-26
http://www.angelfire.com/md3/openhearthcooking/aaCapitol.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397636.15/warc/CC-MAIN-20160624154957-00032-ip-10-164-35-72.ec2.internal.warc.gz
en
0.972777
272
3.5
4
John Wightman was born ca. January 1598/99 in Burton-on-Trent, Staffordshire, England [1,2], and died 1669 in Rhode Island Colony [3]. He was the son of Edward Wightman and Frances Darbye. He married a woman whose name is not known, but was born ca. 1601 in England, and died Bef. 1654 in England.

As a boy, John saw his father burned at the stake for his religious views. He apparently moved to London with his mother after that pivotal event, eventually marrying an unknown woman and raising his family for many years in London, before immigrating to the New World as a middle-aged man, after the death of his wife.

During his life in England, John experienced the reign of Charles I (1625-1649), with all the political and religious upheaval that it brought. This terrible period in English history ended with Charles' beheading and Cromwell's Republic. John immigrated to the New World during the Cromwell era. Oddly, the early years of the Cromwell "Protectorate" were relatively tolerant of religious dissent; Jews were allowed to return to England, Quakerism prospered, and congregations were allowed to choose their own form of worship. Thus the motivation for the Wightman emigration is unclear. John's son Valentine emigrated first, before 1648, perhaps in response to the upheaval under Charles I. His success in the New World may have persuaded the rest of the family to leave troubled England behind. Certainly, the journey of the so-called Pilgrims in 1620, the Puritans in 1628, and John Winthrop's fleet in 1630 had established a sense that flight to America was a possibility for nonconformists and the disadvantaged of England.

A possible association between the Wightman family and that of Roger Williams in England may have added a personal connection to the great and free community Williams had by now established in Rhode Island: Williams' sister, Catherine, married Ralph Wightman, merchant of London, in 1606. There have been various claims that this Ralph was a brother or child of John. The latter is impossible, since our John was only a small child in 1606. The former is not supported by the available data, since no child named Ralph is indicated in the will of Edward Wightman of Burton-on-Trent. However, this Ralph Wightman is probably of the House of Wykin, which makes him a probable cousin of John.

A John Wightman entered Jesus College of Cambridge University in 1634. There is no evidence that identifies this John Wightman as ours (and there were many other Wightmans in the London area), but it is possible.

John arrived at Newport in 1654 (almost certainly via Boston) and then moved on with his son George (at least) to Richard Smith's trading post in Wickford, RI, across the Narragansett Bay. Little is known about John's time in Rhode Island; he was already 55 when he arrived. Presumably, he lived in the household of one of his sons until his death.

Some sources claim that John died on November 13, 1692 in Weymouth, Massachusetts Bay colony. This is another John Wightman (Whitman), probably a child of Zechariah Wightman (relationship unknown), who came to Massachusetts in 1635. Furthermore, this would require our John (son of Edward the heretic) to have lived to the very ripe age of 94 (not impossible, but unlikely). The Wightman/Whitmans of Weymouth were from Norfolk Co., England, making any connection to the Burbage Wightmans likely to be indirect.
This has not stopped sloppy genealogists from conflating the two men, but as shown here and by the careful research of Mary Ross Whitman, the John Wightman of Weymouth was not closely related to John, son of Edward of Burton-on-Trent. In the interest of establishing what is certain, it is important to note that there is no written record of our presumed John Wightman in Rhode Island. His name, existence, and date of death are based on family tradition and speculation. The tradition that George's father's name was John and that he was the son of Edward of Burton-on-Trent is solidly represented in 19th century American writings, particularly those of Baptist history and genealogy. The first statement to this effect has been dated back to 1771, in an early Baptist history. It is clear from the written record that Edward of Burton-on-Trent had two children named John, and if the second John survived it would make perfect sense for him to be the father of George of Quidnessett. We know virtually nothing about John's wife. Various, unsubstantiated claims suggest that her name was Ruth or Mary or perhaps Mary Ruth. However, some of these same sources claim she was born in 1601 in Rhode Island, which would be impossible since there were no European settlements in New England at that time, and their children were definitely born in England, most likely in London. I have taken the 1601 birth year as reasonable. Whatever her name, she clearly died young, perhaps after giving birth to her last child in 1634, when George's ancestor George was only a toddler. Children of John Wightman are: Valentine was the first Wightman of George R.'s family to arrive in the New World, likely around 1648, probably settling at Providence initially (although some sources hold that he was in Warwick this early). He was preceded only by Zechariah Wightman and his family (relationship to ours unknown) who came to Massachusetts Bay Colony in 1635, and established the extended Wightman family of the Boston area. Valentine mastered the difficult Native American languages relatively easily. Thus he was employed as an Interpreter for various commercial and government interests. In particular, like Roger Williams, Valentine was one of the few European men who could communicate freely with the Narragansett tribe. He was also employed by George R.'s ancestors Richard Smith and Roger Williams as an envoy and interpreter to the Narragansett tribe. His skills did not go unnoticed by the Massachusetts Bay Colony either. He witnessed an agreement on August 18, 1654 between Governor John Winthrop of the Massachusetts Bay Colony and the Pequot sachem Ninigret, which led to the release of some Pequot captives during the Pequot War. In 1660, he served as a primary witness between Chief Ninigret and the United Colonies (Mass., Plymouth, Conn., New Haven, Hartford). Like almost all settlers, Valentine was actively involved in land purchasing and speculation. On January 28, 1655, he purchased a meadow and 25 acres of upland from Robert Coles in the Providence area. On April 27, 1657, he purchased land in Warwick, RI located between that of William Harris and Edward Manton and moved his family down the Narragansett Bay, perhaps due to the arrival of his father and brothers in this area. He was made freeman of Warwick on May 18, 1658. During this time, Valentine served as a witness to the Atherton Company's purchase of Quidnessett and other land on the western shore of the Narragansett Bay. 
In 1660, Valentine himself purchased over one hundred acres of prime land in the northernmost portion of Quidnessett, adjacent to the Bay, establishing the Wightman Homestead, which would be handed down intact through the Wightman line for over 200 years. He would eventually sell this land to his brother, George's ancestor George Wightman. Valentine's relocation to Warwick did not last long, and he moved back to Providence in 1661, finally settling in the part of Providence called Smithfield.

Valentine remained active in political affairs. He was appointed to a petit jury in Providence by Roger Williams in 1661, engaged in land trading during the 1660s (including an April 27, 1655 purchase of Warwick land from Robert Coles), was elected constable of Providence in 1671, and served as a member of the early RI colonial general assembly as a Deputy in 1675. On May 31, 1666, Valentine took an oath of allegiance to King Charles II. He was among the few who stayed in Providence during King Philip's War between colonists and the Wampanoag, Narragansett and Nipmuck tribes (1675-1676). Thus, Valentine likely witnessed his own town being burned to the ground, almost certainly including his own home, by the very same tribe he had negotiated with years before. In 1676, Valentine was elected Town Treasurer of Providence and in 1685 appointed to a general court at Newport.

One source (US International Marriage Records, 1340-1980) lists Valentine's wife Mary's surname as Wightman and her birth location as Rhode Island. Since there were no Europeans in Rhode Island in 1630, this seems unlikely.

Daniel settled in Newport, after arriving in Rhode Island at Richard Smith's trading post in Wickford with his father and brothers. In the absence of Dr. John Clarke, the first minister of the First Baptist Church of Newport, a theological schism between members of the church occurred in 1656. Daniel was among the dissenters who broke away and formed the Second Baptist Church in that year. Another of George's ancestors, Rev. Obadiah Holmes, was the pastor of the First Baptist Church at that time, and presumably on the opposite side of the issue. Daniel served as an assistant pastor of the Second Baptist Church.

There is some confusion and debate over Daniel's existence. Mary Ross Whitman argues that some of the dating on the relevant documentary data is questionable. If indeed the dates are wrong, the association of this "brother Daniel" of George might be incorrect. The Daniel Wightman of the record could be Rev. Daniel Wightman, the son of George, who is known to have come to Newport in 1694. The Daniel I describe here might not have existed. On the other hand, John Wightman of London supposedly had five sons who immigrated to Rhode Island. Accounting for these "five sons" would lead one to reasonably expect a Daniel Wightman in this generation.

1. Charles Collard Adams, Middletown Upper Houses (1908, New York: Grafton).
2. Mary Ross Whitman, George Wightman of Quidnessett, RI and Descendants (1939, Chicago: Edwards Brothers).
3. "Legends at Rootsweb", electronic resource.
4. Anonymous, U.S. and International Marriage Records, 1560-1900 (2004, Yates Publishing).
5. James Savage, A Genealogical Dictionary of the First Settlers of New England, Before 1692 (1860, Boston).
<urn:uuid:bce8c98c-5c2b-4cb5-9c07-e111e76a9fc9>
CC-MAIN-2016-26
http://freepages.genealogy.rootsweb.ancestry.com/~wightman/John1599.htm
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783392159.3/warc/CC-MAIN-20160624154952-00188-ip-10-164-35-72.ec2.internal.warc.gz
en
0.979591
2,288
2.53125
3
St. Oswald

Archbishop of York, d. on 29 February, 992. Of Danish parentage, Oswald was brought up by his uncle Odo, Archbishop of Canterbury, and instructed by Fridegode. For some time he was dean of the house of the secular canons at Winchester, but led by the desire of a stricter life he entered the Benedictine Monastery of Fleury, where Odo himself had received the monastic habit. He was ordained there and in 959 returned to England betaking himself to his kinsman Oskytel, then Archbishop of York. He took an active part in ecclesiastical affairs at York until St. Dunstan procured his appointment to the See of Worcester. He was consecrated by St. Dunstan in 962. Oswald was an ardent supporter of Dunstan in his efforts to purify the Church from abuses, and aided by King Edgar he carried out his policy of replacing by communities the canons who held monastic possessions. Edgar gave the monasteries of St. Albans, Ely, and Benfleet to Oswald, who established monks at Westbury (983), Pershore (984), at Winchelcumbe (985), and at Worcester, and re-established Ripon. But his most famous foundation was that of Ramsey in Huntingdonshire, the church of which was dedicated in 974, and again after an accident in 991. In 972 by the joint action of St. Dunstan and Edgar, Oswald was made Archbishop of York, and journeyed to Rome to receive the pallium from John XIII. He retained, however, with the sanction of the pope, jurisdiction over the diocese of Worcester where he frequently resided in order to foster his monastic reforms (Eadmer, 203). On Edgar's death in 975, his work, hitherto so successful, received a severe check at the hands of Elfhere, King of Mercia, who broke up many communities. Ramsey, however, was spared, owing to the powerful patronage of Ethelwin, Earl of East Anglia. Whilst Archbishop of York, Oswald collected from the ruins of Ripon the relics of the saints, some of which were conveyed to Worcester. He died in the act of washing the feet of the poor, as was his daily custom during Lent, and was buried in the Church of St. Mary at Worcester. Oswald used a gentler policy than his colleague Ethelwold and always refrained from violent measures. He greatly valued and promoted learning amongst the clergy and induced many scholars to come from Fleury. He wrote two treatises and some synodal decrees. His feast is celebrated on 28 February.

Historians of York in Rolls Series, 3 vols.; see Introductions by RAINE. The anonymous and contemporary life of the monk of Ramsey, I, 399-475, and EADMER, Life and Miracles, II, 1-59 (also in P.L., CLIX) are the best authorities; the lives by SENATUS and two others in vol. II are of little value; Acta SS., Feb., III, 752; Acta O.S.B. (Venice, 1733), saec. v, 728; WRIGHT, Biog. Lit., I (London, 1846), 462; TYNEMOUTH and CAPGRAVE, ed. HORSTMAN, II (Oxford, 1901), 252; HUNT, Hist. of the English Church from 597-1066 (London, 1899); IDEM in Dict. of Nat. Biog., s.v.; LINGARD, Anglo-Saxon Church (London, 1845).
<urn:uuid:0cde776e-ccd7-4f68-bdb6-8ccc38059181>
CC-MAIN-2016-26
http://www.newadvent.org/cathen/11348b.htm
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391766.5/warc/CC-MAIN-20160624154951-00105-ip-10-164-35-72.ec2.internal.warc.gz
en
0.964763
1,026
2.71875
3
Silver Rainbowfish - Chilatherina crassispinosa

The Silver Rainbowfish was scientifically described by Weber in 1913. Its scientific name is Chilatherina crassispinosa. It was first collected for scientific purposes by the Dutch North New Guinea Expedition in 1903. The Silver Rainbowfish can reach a length of roughly 13 cm (5 inches). Just as the common name suggests, the body of this fish is silvery. On the upper and lower edges of the caudal fin, you can see a faint streak. The Silver Rainbow is quite similar to the Bulolo Rainbowfish (Chilatherina bulolo), but the Bulolo Rainbowfish has a blunter snout. The Silver Rainbow can also be distinguished by its tall first dorsal fin and by the shorter distance between its eyes. The adult male Silver Rainbowfish is adorned with narrow orange striping; there is one stripe between each row of scales along the sides of his body. Dorsal and anal fins are of a pale yellow shade.

Geographical distribution, habitat and conservation

The Silver Rainbowfish lives in northern New Guinea. It has been found in the river systems of Markham, Gogol, Ramu, Sepik, Pual and Mamberamo. Along the coast, you can also encounter Silver Rainbowfish in some of the small independent drainages. The typical Silver Rainbowfish habitat is a clear stream where the bottom consists of sand, gravel or rocks. These streams flow through forested terrain, but Silver Rainbowfishes are known to congregate in parts of the streams that are fairly non-shaded (probably because they want to feast on algae). The surrounding terrain consists of hills and mountains and the Silver Rainbowfish has been found up to 600 meters (1,969 feet) above sea level. The Silver Rainbowfish is not included in the IUCN Red List of Threatened Species. It is still abundant throughout its natural range.

Keeping Silver Rainbowfish in aquariums

Silver Rainbowfish is not a common aquarium species. It was collected for the aquarium hobby in 1978 by Allen and Parkinson, but is still not a frequently kept rainbow, especially not outside Australia. Try to replicate its natural environment as closely as possible in the aquarium. Use gravel or sand as substrate and include a lot of stones and rocks in order to create hiding spots. Keep the water temperature in the 23-28 degrees C (73-82 degrees F) range. The water should be alkaline, from pH 7.5 to 8.0. Wild Silver Rainbowfishes are omnivores and feed chiefly on small insects, such as ants that have fallen into the water, and filamentous algae that they graze from rocks. It is important to provide them with a similar diet in the aquarium. Include stones in the set up and promote natural algae growth. The natural algae growth should be supplemented with algae based food. In addition to this, give your Silver Rainbowfish small live foods. You don’t have to set up your own ant farm in the kitchen; meaty food like insect larvae and tiny crustaceans are just as good.

Breeding Silver Rainbowfish

As far as we know, Silver Rainbowfish has not been bred in aquariums yet.
<urn:uuid:bd9053a2-f654-4a94-a86e-231907d3620c>
CC-MAIN-2016-26
http://www.aquaticcommunity.com/rainbowfish/silver.php
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397864.87/warc/CC-MAIN-20160624154957-00147-ip-10-164-35-72.ec2.internal.warc.gz
en
0.879904
797
3.75
4
Desulfation is the process of reversing the sulfation that occurs in a lead-acid battery over time. Desulfation restores, at least partially, the ability of the battery to hold a charge (an ability lost over the life of the battery, originally caused by sulfation). Sulfation is the formation of large non-conductive crystals of lead(II) sulfate (PbSO4) on the battery plates. Eventually so much of the battery plate area is unable to supply current that the battery capacity is greatly reduced.

Most DIY desulfator circuits in use today can trace their roots back to an article in issue #77 of Home Power magazine written by Alistair Couper in June/July of 2000. Many versions were spawned by his design, but they all accomplish the same thing; that is, they use various pulsing circuits to force the lead sulphate crystals back into the electrolyte, thus rejuvenating the battery and restoring its lost capacity.

The desulfator shown schematically above is being simulated. So far it shows promise. It combines features from multiple sources and sequences through them using a simple arrangement of 555 timers. Microprocessors are great for things like this, but for many people the programming tools are not available. The planned cycle has four steps. The first step is to pulse the battery for 15 seconds using a Charged-Induced-Pulse as described by desufonator2. The second step is a settling period of 1 second. Third is a 100 microsecond pulse that shorts the battery (180 amps?) to remove dendrites. And finally, a 5 second period to measure the battery voltage.
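To make the four-step sequencing explicit, here is a minimal Python sketch of the cycle described above. It is illustrative only and not part of the original design: the real circuit sequences these steps with 555 timers (or a microcontroller), and the function names and the placeholder voltage reading are assumptions of mine, standing in for hardware actions. Microsecond-scale, high-current pulses cannot be produced by software timing like this.

```python
import time

# Hypothetical placeholders for the hardware actions described in the text.
def charge_induced_pulse(duration_s):
    print(f"Step 1: pulsing battery for {duration_s} s")
    time.sleep(duration_s)

def settle(duration_s):
    print(f"Step 2: settling for {duration_s} s")
    time.sleep(duration_s)

def short_pulse(duration_us):
    # A real 100 microsecond, ~180 A shorting pulse comes from the circuit,
    # not from software; shown only to make the sequence explicit.
    print(f"Step 3: shorting pulse for {duration_us} microseconds")

def measure_voltage(duration_s):
    print(f"Step 4: measuring battery voltage for {duration_s} s")
    time.sleep(duration_s)
    return 12.6  # placeholder reading in volts

def desulfation_cycle():
    """One pass through the planned four-step cycle."""
    charge_induced_pulse(15)   # 15 s pulse phase
    settle(1)                  # 1 s settling period
    short_pulse(100)           # 100 µs dendrite-removal pulse
    return measure_voltage(5)  # 5 s voltage measurement

if __name__ == "__main__":
    voltage = desulfation_cycle()
    print(f"Battery voltage: {voltage} V")
```

In the hardware version, each of these phases corresponds to one stage of the 555-timer chain; the sketch simply shows the order and durations stated in the text.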
<urn:uuid:89f0f2f0-4efd-4d14-ae9b-8c26c2a7c43a>
CC-MAIN-2016-26
http://diagram4schematic.com/battery-desulfator-circuit
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396455.95/warc/CC-MAIN-20160624154956-00186-ip-10-164-35-72.ec2.internal.warc.gz
en
0.94639
333
3.9375
4
Research consistently shows boys’ reading lags behind girls'. The All Party Parliamentary Literacy Group Boys' Reading Commission was a joint venture with the National Literacy Trust from January to June 2012. The report says action needs to be taken in homes, schools and communities.

The Commission's findings, published on Monday 2 July 2012, reveal that three out of four (76%) UK schools are concerned about boys’ underachievement in reading despite no Government strategy to address the issue. Last year an estimated 60,000 boys failed to reach the expected level in reading at age 11. The Commission’s report, compiled by the National Literacy Trust, reveals the “reading gender gap” is widening and says action needs to be taken in homes, schools and communities, with recommendations including boys having weekly access to male reading role models. MPs and Lords who sat on the Commission heard evidence from teachers, researchers, literacy experts and children’s authors Michael Rosen and Anthony Horowitz.
<urn:uuid:200c17c2-c4d5-4c15-b906-c5d70a63dd3c>
CC-MAIN-2016-26
http://www.sla.org.uk/blg-boys-reading-commission-final-report.php
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783403826.29/warc/CC-MAIN-20160624155003-00137-ip-10-164-35-72.ec2.internal.warc.gz
en
0.955405
211
2.875
3
June 13, 2014

Horned Frogs Use Adhesive Tape-Like Tongues To Lift Three Times Their Own Bodyweight

redOrbit Staff & Wire Reports - Your Universe Online

The tongues of horned frogs are capable of lifting up to three times their own bodyweight, thanks to a biological mechanism comparable to adhesive tape, according to a new study published Thursday in the journal Scientific Reports.

Dr. Thomas Kleinteich of the University of Kiel in Germany and his colleagues conducted a series of experiments in which they placed a horned frog in front of a glass slide, and then tempted it by placing a cricket on the other side. When the amphibian’s tongue shot out in order to seize the prey, the forces exerted by the tongue were recorded by a transducer attached to the slide.

In the wild, South American horned frogs (also known as the Pacman frog) have proven themselves capable of snatching whole mice, though the adhesive performance of their tongues and the mechanism of the contact formation with their prey were unknown. They also consume lizards, snakes, small birds and other frogs – all of which would normally be large enough to escape being swallowed without the special adhesive apparatus on the creatures’ tongues.

Dr. Kleinteich used four specimens obtained from local pet shops, explained Amanda Onion of Discovery News, but even those creatures demonstrated “incredible tongue power and speed,” with the forces from their tongues averaging more than the weight of the frogs themselves, and over three times bigger in the case of one younger frog.

Frog tongues produce an average of just 1/15 the adhesive strength of a gecko’s feet, the researcher told National Geographic’s Jennifer S. Holland, but “in terms of prey capture, frog tongue adhesive forces are enormous – on average 1.4 times their body weight.” That would be the equivalent of a 176-pound human lifting 246 pounds using only his or her tongue, and doing so within “milliseconds” of first making contact, she added.

After each of the experiments, Dr. Kleinteich and his colleagues removed the equipment and allowed the frog to consume the cricket, ensuring that the creatures would be satisfied and able to continue participating. A total of 20 measurements were collected from each frog, with scientists analyzing tongue prints left behind on each glass slide, said BBC News science reporter Jonathan Webb.

“It's the first time we've ever measured how well frog tongues stick,” Dr. Kleinteich told Webb. “The common belief is... that the mucus acts as some sort of superglue. But what we found was actually that we got higher adhesive forces in trials where we found less mucus. That was quite interesting.”

The mucus appeared to accumulate over time, the BBC News writer noted, but the study authors found mucus coverage tended to be on the low side during initial contact. While they said that the mucus is certainly a wet adhesive system and plays a role in the phenomenon, it does not act like superglue. The key is the combination of structure and the mucus, which is what led the researchers to make the adhesive tape comparisons.

Dr. Kleinteich and his team will now use microscopes as part of an in-depth examination of the surface of the horned frogs’ tongue, Webb said. The researcher should have few complaints about doing so, as he told BBC News that working on the experiments was “fun… I used to do a lot of morphological, descriptive work with amphibians - I used to study dead, museum specimens. For me it was quite exciting to work with the living frogs.”
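As a quick arithmetic check of the human-equivalent figure quoted above (this snippet is illustrative and not part of the study or the article), the 246-pound number follows directly from the reported 1.4-times-body-weight ratio:

```python
# Back-of-the-envelope check of the figures quoted in the article.
body_weight_lb = 176        # the article's example human body weight
tongue_force_ratio = 1.4    # average adhesive force, in body weights

equivalent_lift_lb = body_weight_lb * tongue_force_ratio
print(f"{equivalent_lift_lb:.1f} lb")  # ~246.4 lb, matching the ~246 lb quoted
```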
<urn:uuid:458b28e9-4f27-4a71-b372-7a8f06763f20>
CC-MAIN-2016-26
http://www.redorbit.com/news/science/1113169396/pacman-frog-tongue-is-a-heavyweight-061314/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783399428.8/warc/CC-MAIN-20160624154959-00183-ip-10-164-35-72.ec2.internal.warc.gz
en
0.962387
767
2.8125
3
First published in Double Helix Network News, Winter 2001, Rev. April 2014

Elbow dysplasia (ED) may be the most unrecognized common health issue in Australian Shepherds. Affected dogs may only show lameness occasionally, something easy to dismiss as a minor injury in an active breed. A few affected dogs may not show signs at all. But testing results from Europe, where elbow exams are as standard as those for hips, make it clear that this disease is much more frequent in Aussies than most breeders in North America are aware. In addition, having elbow dysplasia is a risk factor for also having hip dysplasia; the more serious the condition the higher the risk. The wise owner should consider it whenever he encounters an unexplained case of front-end lameness in one of his dogs. Breeders should make elbow screening part of their standard health screening practices.

ED is not a single disease, but rather a set of related defects that are grouped under the term "elbow dysplasia." If your dog is diagnosed with ED, it may have any of the following:

- Fragmented medial Coronoid Process (FCP) – a prominence of the ulna at the elbow becomes separated from the bone below it. This is the most common ED defect. Roughly 60% of dogs with FCP will also have OCD (see below).
- Ununited Anconeal Process (UAP) – a prominence at the upper (elbow) end of the ulna is completely or partially detached from the rest of the bone. This occurs because the bone growth center between the process and the rest of the bone fails to join the pieces together. This normally happens around 4-5 months of age. In some cases UAP may be due to injury.
- Osteochondritis Dissecans (OCD, also called osteochondrosis) – an area of cartilage fails to mature and becomes separated from the underlying tissue. It may be partially attached, like a flap, or become free-floating in the joint. (OCD can also occur in other joints than the elbow.)

Some dogs diagnosed with incomplete ossification of the humeral condyle are deemed to have elbow dysplasia. The condition arises in the cartilaginous growth plate at the elbow end of the humerus, the bone above the elbow joint, when the growth plate fails to harden as it matures. This particular problem seems to be restricted to Spaniel breeds and is probably not a concern for Aussie breeders.

The inheritance of elbow dysplasia is complex and no specific genes have yet been indicated. Unaffected parents can have affected offspring, and if affected animals are bred together the offspring may not be affected. It is also possible that some or all of these conditions may be inherited independently, though the frequency of the FCP/OCD connection indicates some relationship between them, at least in a significant number of cases. OCD is also felt to be the same disease no matter what joint it occurs in, therefore breeders should keep shoulder OCD cases in mind in relation to ED until science gives us better genetic information than is available at present. The defects are most common in large, heavy-boned or fast-growing breeds, so it is possible that the disease may be to some degree secondary to body morph (which is itself inherited), but not all large, heavy-boned or fast-growing dogs get ED. All of the ED defects are developmental in nature, arising from something going wrong during the growth of the bones that join at the elbow. Environmental factors play a part in shaping the growing puppy, but skeletal development is governed by the action of genes.
Given that ED is more common in some breeds than others and in some families of dogs within a breed, an inherited cause should be assumed.

OCD, FCP and UAP all cause stiffness, stilted gait or lameness, usually while the dog is under a year of age and sometimes as young as 4 months. The affected joint will be swollen and painful. There may be atrophy of nearby muscles. The disease is often bilateral; occasionally one elbow may show signs before the other. Untreated, the joint will degenerate, resulting in diminished range of motion and chronic pain. For the sake of the dog, early surgical treatment accompanied by weight reduction and restriction of activity is recommended. Some type of medication may be necessary. It should be noted that while husbandry practices may impact the severity of ED, it cannot be prevented or cured by diet, restriction of exercise and the like. Affected dogs remain affected and will pass ED genes on to their offspring if bred.

Diagnosis of ED is usually confirmed by x-ray of the affected joint. In very young dogs, the bone changes may not yet be visible in an x-ray, so it is recommended that the procedure be repeated in 4-6 weeks to see if there is evidence of change to support an ED diagnosis. If x-rays still fail to reveal a cause for lameness, magnetic resonance imaging (MRI) or arthroscopy may be necessary.

Not all ED affected dogs will have clinical disease, so screening of apparently normal Aussies in ED families is very important. Screening of these dogs is vital to prevent the disease from becoming more widespread in the breed.

In North America the Orthopedic Foundation for Animals (OFA) evaluates a lateral flexed view of each elbow. In this side-on view the elbow is flexed as much as possible. The x-rays are reviewed by board-certified veterinary radiologists and the elbows will be graded normal or dysplastic. They do not distinguish between the type of defect present (FCP, UAP or OCD). If dysplastic, they are further graded I, II or III based on the amount of joint damage, with III being worst. If the animal is 2 years or older, they will issue a numbered certificate which will also be reported to the appropriate breed club. OFA's registry is "semi-open." It will release information about affected dogs with the owner's express permission to do so; however, since it charges the same fee on all submissions, most people with affected dogs do not choose to spend the extra money and the results never reach OFA's database.

European elbow evaluations are based on the findings of the International Elbow Working Group (IEWG), a group of veterinary radiologists and surgeons, geneticists and dog breeders who have developed a screening protocol to facilitate international exchange of data. It has proven effective for control of ED. They originally accepted the same single view as OFA, but beginning in 2001 they required two views, lateral flexed and craniocaudal. The latter is taken with the joint extended and viewed from the top. IEWG does not feel a single view is adequate for diagnosis in all cases. European scoring systems are numerical, ranging from 0 to 5, with 0 being ED-free and scores of 1-5 indicating the severity of the condition, with 5 being worst. Dogs must be at least one year of age at time of screening.

ED is a disregarded risk in Aussies. It is crippling to the affected dog, with correction requiring expensive surgery. Because of this, regular screening of breeding stock should become the norm.
(It is a requirement for certification with the Canine Health Information Center – CHIC.) Affected dogs should not be bred. First-step relatives (parents, full and half siblings) should be bred only to mates screened clear of ED who do not have family history of HD. With more diligent screening practices we can limit the impact of this disease on the breed.
<urn:uuid:9a6b4897-3ff0-4824-ae2b-e3ec81185bc1>
CC-MAIN-2016-26
http://www.ashgi.org/home-page/genetics-info/bones-joints/elbow-dysplasia
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397865.91/warc/CC-MAIN-20160624154957-00155-ip-10-164-35-72.ec2.internal.warc.gz
en
0.955692
1,590
2.578125
3
Dream Symbol: bells ringing

The ringing of a bell or bells can represent a message being delivered. For example, the ringing of a school bell sends the message that class is beginning or ending. The ringing of church bells sends a message that a significant event has occurred. The ringing of a delicate bell or chime could mean there's a message you need to pay attention to, either in the dream or in real life.

Using this Dream Dictionary

Tips to Understand Dream Meaning

Dream symbol meanings are different for each person. It's IMPORTANT to consider:
- Personal meaning - What the dream symbol means to you, what it reminds you of, how it makes you feel.
- Context - How the dream symbol appears in the dream. For example, in a dream about a bee - what was the bee doing, how and where it was doing it, and how did you feel about it?
- Look beyond the obvious - A dream is often about something other than its obvious meaning. Physical events in the dream commonly represent mental or emotional matters.

What Does Your Dream Mean?

About Dream Symbols

This dream dictionary gives suggested meanings of dream symbols. A dream symbol often means something different in different dreams. There is no standard meaning of a dream symbol or dream that is accurate for all dreams. Dream meaning is very subjective, and your dream symbol may mean something completely different from the meaning listed in this dream dictionary.

IMPORTANT: I am not a therapist. If you are experiencing physical or psychological problems, or if you are distressed, consult a medical professional.
<urn:uuid:67e30fde-7941-4b45-8e36-c591c69451e8>
CC-MAIN-2016-26
http://www.mydreamvisions.com/dreamdictionary/symbol/56/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783393093.59/warc/CC-MAIN-20160624154953-00187-ip-10-164-35-72.ec2.internal.warc.gz
en
0.901159
409
2.65625
3
This is no comic book movie plot: Urban ecology data suggests that at least some types of spiders get bigger when they live in cities. A team of researchers from the University of Sydney sampled spiders across their home city and found that Nephila plumipes, a type of golden orb-weaving spider, is larger and carries more eggs when it lives in more urbanized areas. “We found associations with size and hard surfaces and lack of vegetation,” explains Lizzy Lowe, a graduate student in ecology and lead author on the study. Lowe first suggested that cities might be supersizing spiders at a meeting of the Ecological Society of Australia in 2012, and the final results of this research appear in PLOS ONE today. As cities encroach on the natural environment, many wild organisms suffer, and urbanization often comes with a significant drop in biodiversity or worrisome shifts in animal behavior. Some species just can’t hack it, but others thrive in the urban jungle. In Sydney, the largest city in Australia, golden orb-weaving spiders are common in the Royal Botanical Gardens and other urban oases. “They are abundant in urban Sydney, and I was interested to find out why,” says Lowe. Lowe and her colleagues searched for golden orb-weaving spiders in 20 sites around Sydney with varying degrees of urbanization—parks, bush patches and forested areas. Capturing 222 spiders in total, they measured each spider’s front leg and weight to gauge size. To look at fertility specifically, they also measured the ovary and fat storage weight in 29 spiders spanning the different sites. Lowe then calculated degrees of urbanization around where the spiders were found. Her team looked at the amount of vegetation covering the land, distance from the city center and even socioeconomic information such as income and population density. Overlaying all of this data with the spider measurements, she began looking for patterns. Lowe and her colleagues found that overall, larger spiders with more eggs lived in more urbanized spaces with less vegetation and more hard surfaces, such as sidewalks and concrete walls. “These surfaces retain heat, leading to the urban heat island effect,” says Lowe. This increase in temperature could mean the spiders spend less energy keeping warm, helping them grow. It’s also possible the spiders are getting fat because they have more to eat. Large spiders were frequently found on or around light posts and other manmade objects. Especially at night, artificial light could attract a smorgasbord of beetles, flies and moths for the orb-weaving spiders to munch on. Large spiders with bigger ovaries were also found in densely populated, wealthy suburbs. The researchers suggest that these areas might produce more trash for spider prey to eat, or healthier parks and green spaces for prey to inhabit. Either way, the spiders get better food options. Urban orb-weaving spiders also might encounter fewer predators and parasites, like wasps and other spiders. Smaller dewdrop spiders steal prey and even entire webs from golden orb-weaving spiders, and the team found fewer of these “kleptoparasites” in webs from more urbanized regions. Every spider species is a bit different when it comes to foraging strategies, diet and behavior, and some may not thrive in the city. “Organismal ecology is rarely a one-size-fits-all discipline,” says Chad Johnson, an ecologist at Arizona State University. 
For instance, cities could destroy hunting grounds for non-weavers such as wolf spiders, which rely on subtle vibrations in soil or water to find prey. It also depends on the habitat that’s being urbanized. “Urbanizing a desert has very different effects compared to urbanizing a prairie or a temperate forest—and this variation will likely affect different species differently,” says Johnson. In 2012, Johnson’s lab found that urban black widows had fewer eggs than their desert brethren. In that study, though, the team only compared one site in the desert to one site in the city. Unpublished data looking at eight sites apiece shows a trend more consistent with the orb-weavers: spiders with more eggs in urban areas. So are we in for a city spider explosion? Unlikely. On the plus side, these spiders will keep populations of other critters in check, but they might hit a tipping point themselves. Running out of food could cause a population crash, and if things get too warm as the climate changes, the spiders may not be able to handle the heat. Or perhaps the wealth of spiders will attract new arachnid predators. “As the density of urban spiders grows, it is likely that other species will begin to exploit this abundance,” says Johnson. Given recent interest in insect cuisine, that other species might even be us.
<urn:uuid:699d1877-93b3-4284-9d5d-1f8d9d4bc9ea>
CC-MAIN-2016-26
http://www.smithsonianmag.com/science-nature/friendly-neighborhood-spiders-bigger-cities-180952378/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396106.25/warc/CC-MAIN-20160624154956-00164-ip-10-164-35-72.ec2.internal.warc.gz
en
0.948932
1,007
3.71875
4
Bromeliads are one of the best kept secrets in the plant world. My goal is to introduce you to the wonderful world of bromeliads and try to let you know what you are missing if they are not part of your plant collection. They are diverse, fascinating, and relatively easy to grow. There are 54 genera and 3,168 species of identified bromeliads throughout the world. They have been hybridized extensively and many new striking plants have resulted with both bloom and foliage that offer more color than any other plant I am aware of.

Because bromeliads look "different" from traditional plants and most people consider them exotic, and therefore perceive them as hard to grow, bromeliads have not caught on among plant enthusiasts nearly as much as they deserve to. Newly discovered or hybridized plants often sell to collectors for big bucks, but in a matter of years become affordable to most people as they are reproduced asexually by dividing "pups" from the developed plants. Most plant nurseries don't offer bromeliads in wide varieties so the really nice and choice plants most likely will need to be purchased from specialty growers.

Bromeliads in their native habitats (unique to the Americas with one exception in western Africa) grow in such diverse places as 13,000 ft elevations to sea level, rain forests to deserts among cacti and succulents, and even as far north as the Virginia coast and as far south as southern Argentina. In nature, epiphytic bromeliads provide habitat for frogs, aquatic insects, and lizards as part of the tropical and subtropical ecosystem. Some are true "air plants" like Spanish or ball moss (Tillandsia) while others are terrestrial like those sweet pineapples we enjoy as a popular fruit. Most are epiphytic, deriving their nutrients from what collects in their cupped shape.

The optimum temperatures for bromeliads range from 70 to 90 degrees F in the daytime and 45 to 60 degrees F at night. Most bromeliads like good air circulation and 50 - 75% humidity. There are no general guidelines for growing bromeliads as they are so diverse. You need to know about the specific genera and species and what it takes to grow them well - but with that knowledge, you will find them relatively easy to grow and enjoy.

INFLORESCENCES can be cupped, bracted, branched, single spiked, or insignificant. FOLIAGE can be smooth edged (Tillandsias), spined, or succulent. BLOOM PERIODS range from less than one week (Billbergias) to greater than a month (Vrieseas). RELATIVE SIZES can range from less than one inch to greater than three feet wide and tall. These variances can occur within the same genera of bromeliads depending on the particular species.

Tips on Growing and Enjoying Bromeliads

- For epiphytic (non-terrestrial) varieties, we grow bromeliads in small pine bark as a soil base. This provides excellent aeration and circulation for the roots that form, and provides sufficient support for the plant. For terrestrials, use a loose and light organic soil mixture. For small epiphytic tillandsias, mounting them on driftwood or cork is an excellent and healthy way to display them.
- Location is everything! Since different bromeliads prefer different levels of light, they will let you know how to please them. If the foliage becomes bleached or burned, reduce the light. If the plant isn't producing the color you know it should have, increase the light. Finding the right level of light makes all the difference in bringing out the colorful qualities of these plants. Good air circulation is a common and vital need for all genera of bromeliads.
- Bromeliads should not be fertilized regularly unless you are trying to increase pup production. There are some exceptions. Tillandsias and Cryptanthus respond well to regular fertilization. Fertilization will reduce the coloration in most bromeliad hybrids that are noted for their color, e.g. Neoregelias and Billbergias. When fertilizing, use a liquid soluble 20/20/20 fertilizer at half the recommended strength. Never use urea based nitrogen fertilizer. Avoid mineral salts of any kind. Most city water supplies are fine for bromeliads. Never mount bromeliads on chemically treated lumber. Water with pH of 5.5-6.5 is preferred (avoid alkaline water). - How to display bromeliads is always a good question. Some suggestions follow. They can be grown in large hanging baskets with three plants average per basket. We group the plants by commonality, e.g. Neoregelias in one basket, Aechmeas in another, or mixed genera that share the same light requirements. Other ways are to display them on single poles with pot loops in spiral form. Yet another way would be to incorporate them into a ground level display by digging out a hole, placing a one gallon nursery container in the hole, and inserting an 8" plastic pot into the nursery container with the plant potted in small pine bark. This gives the appearance they are terrestrial without them ever touching the soil. As long as the basic cultural requirements are met, bromeliads can be displayed in a number of other imaginative ways. They can also be attached to trees to resemble their natural habitat. However, collector or rare plants might best be grown as individual plants for greenhouse or other special display. - After the plant flowers, it will produce "pups" or young plants then die. The young pups will take over the next generation. Pups should not be removed until visible root structures can be seen at their base or they are at least 1/3 to 1/2 the size of the mother plant. Make sure the pups are cut off with a solid base. Some bromeliads reproduce so abundantly, you'll be sharing them with friends. Dead flower stalks can be cut off if unattractive until the mother plant dies. All bromeliads require winter protection in central Texas, except the ball moss (Tillandsia rotunda), seen in our native oak trees, but most can adapt as house plants during winter months. Exceptions might be Tillandsias (air plants) which require good air circulation and high humidity. For more information about growing bromeliads, go to www.centraltexasgardening.info/bromeliads.htm . There you will find more resources for learning about these fascinating and easy to grow plants. You can also meet fellow bromeliad enthusiasts through the Bromeliad Society of Austin which meets the second Tuesday of each month at 7:00pm in the Green Room of the Zilker Garden Center. Contact Dr. Steve Reynolds at firstname.lastname@example.org for more information. COLORFUL BROMELIADS BRIGHTEN UP A SHADED AUSTIN PATIO
<urn:uuid:a6c8b8ed-3846-4722-b44b-c7c47563025e>
CC-MAIN-2016-26
http://zilkergarden.org/gardens/tips/tipsbromeliads.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397842.93/warc/CC-MAIN-20160624154957-00141-ip-10-164-35-72.ec2.internal.warc.gz
en
0.933687
1,465
3.546875
4
Crooks, the stable buck, is introduced in Chapter 2 when George and Lennie are being shown around the bunk house by the old man referred to later as "the old swamper." We learn that Crooks is the only hand on the ranch who is African-American and that he is victimized because of it. Crooks isn't allowed to live in the bunk house with the other men; instead, he lives in a room by himself and is not allowed to come out and socialize. The previous Christmas, George is told, the boss brought whiskey to the bunk house, and Crooks was "let in" for that one night. According to the old swamper, Crooks was then attacked by one of the men:

Little skinner name of Smitty took after the nigger. Done pretty good, too. The guys wouldn't let him use his feet, so the nigger got him. If he coulda used his feet, Smitty says he woulda killed the nigger.

The racism Crooks endures is evident in the old man's remarks. We also learn that Crooks keeps books in his room, reads a lot, has a temper, and isn't especially impressed when the boss gets angry. Most of the time he's "a pretty nice fella." He is called "Crooks" because of his crooked back, an injury he suffered when he was once kicked by a horse.
<urn:uuid:94bfd56c-ff2a-4e2f-b887-1d1bec62d148>
CC-MAIN-2016-26
http://www.enotes.com/homework-help/what-does-chapter-reveal-about-stable-buck-84611
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783400572.45/warc/CC-MAIN-20160624155000-00108-ip-10-164-35-72.ec2.internal.warc.gz
en
0.989259
321
2.796875
3
Cool Heads And Warm Hearts A young man seemed to take an unusually long time to place his order at the flower shop. When the clerk asked how she could help, he explained that his girlfriend was turning 19 and he couldn't decide whether to give her a dozen roses or 19 roses -- one for each year of her life. The woman put aside her business judgment and advised, "She may be your 19-year-old girlfriend now, but someday she could be your 50-year-old wife." The young man bought a dozen roses. He made his decision from both his head and his heart. Abraham Lincoln has been considered one of the greatest leaders of all time. He maintained a cool head, even under personal attack. Though constantly criticized in public, he rarely answered back. "If I were to try to read, much less answer, all the attacks made on me, this shop might as well be closed for any other business," he said. He showed courage in the face of unjust criticism. He refused to retaliate and chose instead to quietly do the very best he could. And Lincoln was also widely known for his compassion. He made difficult and tough decisions during America's Civil War, but at the same time showed great leniency. He pardoned more prisoners than any U. S. president before or since. And when a general asked Lincoln how the defeated Confederates should be treated, Lincoln replied, "Let 'em up easy." He was both cool-headed and warm-hearted. Too many people get it the other way around. They have hot heads and cold hearts. They react in the heat of anger or passion. They are cold and unfeeling. And they invariably make poor decisions. A cool head asks the hard questions. A cool head thinks it through. A cool head fairly weighs the options and asks, "What is the logical thing to do?" A warm heart empathizes. A warm heart considers feelings and relationships. A warm heart asks, "What is my spirit telling me to do?" Some decisions we make with our heads. Others with our hearts. But I think it takes both to get it right.
<urn:uuid:8b205849-77ca-4eff-acb0-02fc252250e0>
CC-MAIN-2016-26
http://www.skywriting.net/inspirational/messages/cool_heads--warm_hearts.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396147.66/warc/CC-MAIN-20160624154956-00193-ip-10-164-35-72.ec2.internal.warc.gz
en
0.984439
442
2.859375
3
A design pattern is a well-known approach to solving specific problems that software developers come across in their work. Design patterns capture higher-level constructs that commonly appear in programs. If you know how to implement a design pattern in one language, you will typically be able to port it to and use it in another object-oriented programming language. The choice of implementation language affects the use of design patterns. Naturally, some languages are more suitable for certain tasks than others. Each language has its own set of strengths and weaknesses. In this book, we introduce some of the better-known design patterns in Python. You will learn when and how to use the design patterns, and implement a real-world example which you can run and examine by yourself. You will start with one of the most popular software architecture patterns, the Model-View-Controller pattern. Then you will move on to learn about two creational design patterns, Singleton and Factory, and two structural patterns, Facade and Proxy. Finally, the book also explains three behavioural patterns: Command, Observer, and Template. This book takes a tutorial-based and user-friendly approach to covering Python design patterns. Its concise presentation means that in a short space of time, you will get a good introduction to various design patterns.
Who this book is for
If you are an intermediate-level Python user, this book is for you. Prior knowledge of Python programming is essential. Some knowledge of UML is also required to understand the UML diagrams which are used to describe some design patterns.
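As a quick illustration of the kind of pattern the book covers, here is a minimal sketch of the Singleton pattern in Python. It is not an excerpt from the book; the AppConfig class and its settings attribute are invented for the example.

```python
class Singleton:
    """Minimal Singleton: repeated instantiation returns the same object."""
    _instance = None

    def __new__(cls, *args, **kwargs):
        # Create the single instance lazily, on first use only.
        if cls._instance is None:
            cls._instance = super().__new__(cls)
        return cls._instance


class AppConfig(Singleton):
    """Hypothetical shared configuration object (illustrative only)."""

    def __init__(self):
        # Guard so that repeated construction does not wipe existing state.
        if not hasattr(self, "settings"):
            self.settings = {}


a = AppConfig()
b = AppConfig()
a.settings["debug"] = True
assert a is b                      # both names refer to the same instance
assert b.settings["debug"] is True
```

The other patterns named above (Factory, Facade, Proxy, Command, Observer, Template) are similar in spirit: a small amount of extra structure that makes design intent explicit and easy to port between object-oriented languages.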
<urn:uuid:ba4853e9-3152-4308-ba42-99db8de55b51>
CC-MAIN-2016-26
http://shop.oreilly.com/product/9781783283378.do
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397873.63/warc/CC-MAIN-20160624154957-00072-ip-10-164-35-72.ec2.internal.warc.gz
en
0.954699
315
3.453125
3
Records show that tulips were in cultivation in Turkey as early as this. Tulips are used in the initials of an Italian bible. Omar Khayyam writes a poem about tulips. The poet, Rumi, sings the praises of tulips in many songs. The 'Tulip Era' takes place in Turkey under Suleiman II. First drawing of a tulip in western Europe. First book in which a tulip is portrayed (C. Gesner). First portrayal of a tulip in a Dutch book (R. Dodoens' 'Cruydtboeck' (Herbal)). First tulip appears in England. Matthias de l' Obel describes 41 varieties of tulips in his 'Cruydtboeck'. Carolus Clusius plants the first tulip in the Botanical Garden in Leiden, the Netherlands. The first tulips bloom in the Netherlands. The first tulip appears in France. The tulip is an exclusive garden plant. It is planted in strategic places in the garden. Establishment of the first cultivation operations south of Haarlem, especially along the Wagenweg and the Kleine Houtweg. These absolute monopolies would hold onto their positions for about 150 years. Emanuel Swerts publishes the first trade catalogue and includes tulips in it. The development of a lively trade in tulip bulbs results in a wild speculation in tulips. It was especially during 1623 and 1637 that prices rose steeply. An example: 'Semper Augustus' cost 1200 florins per bulb in 1624; in 1625 it cost 3000 florins; in 1633 it cost 5000 florins; and in 1637, 3 bulbs cost 30,000 florins. In comparison, a house along a canal in Amsterdam cost 10,000 florins in those days. This period may receive a lot of attention. Since that time there has never been such a speculation in the Netherlands. The first parrot tulip is described. The firm of Voorhelm is established in Haarlem. A record of tulips written by F. Morin appears in Paris. The Elector from Brandenburg records 126 different tulips in an inventory. Tulipmania in Turkey. Mohammed Lalizari is a great tulip enthusiast. During this time he imports thousands of bulbs to Turkey from the Netherlands. The Tulip becomes less important than the hyacinth. Around 1730 there is somewhat of a speculation in hyacinths. The Margrave van Baden-Durlack publishes a catalogue which includes the statement that he has bought bulbs from 17 Dutch companies, 15 of them in Haarlem. Dialogue of Waermondt and Gaergoedt about the tulip speculation is published again, this time as a result of a threatening speculation in hyacinths. The introduction of the tulip called 'Keizerskroon' (still cultivated on 2.3 ha of land in the Netherlands). Cultivation is expanded, at first in the direction of Overveen and Bloemendaal, and then, in the second half of the nineteenth century, toward Hillegom, Lisse and Noordwijk. Tulips were included in the group called 'bijgoed' (miscellaneous kinds of bulbs and tubers). Only hyacinths were listed under the term 'bollen' (bulbs). Introduction of the tulip variety 'Couleur Cardinal' [still cultivated today on 23 hectares in the Netherlands; it has also produced a number of mutants such as 'Arma' (44 ha) and 'Prinses Irene' (72 ha)]. J.B. van der Schoot is the first 'bollenreiziger' (travelling bulb salesman) to go to the United States. Bulbs were sold to the U.S. from Holland as early as the 18th century. Introduction of the fragrant tulip 'Prins van Oostenrijk' and the double early tulip, 'Murillo'. The discovery of T. greigii takes place via P.L. Graeber. Bulbs are sent to C.G. van Tubergen who ensures that the tulips are introduced. E.A. Regel described Tulipa kaufmanniana. Introduction of Darwin tulips. Introduction of 'Bartigon' in 1898. These tulips would turn out to be the most commonly cultivated tulips. First 'Classified List of Tulip Names'. Included in how the flowers were arranged were: flowering period, shape and the degree of 'bloembreking' (the striping and splashing of colours within the flower). Establishment of the Hortus Bulborum Limmen. First professional publication about viruses which affect tulips, complete with clinical pictures. As a result of crossing, D.W. Lefeber develops enormous red tulips which are known as Darwin Hybrids. The most famous one is 'Apeldoorn'. A trip by carriage from Turkey to the Netherlands to commemorate the existence of tulips in western Europe for 400 years. Introduction of 'ice tulips': tulips held in sustained coolness to delay forcing beyond the normal time period to extend the availability of cut tulip flowers into 'down months' when they were previously not available.
<urn:uuid:77a3c3bf-5878-4acb-b8cc-d17d8b80e78a>
CC-MAIN-2016-26
http://www.bulbsonline.org/ibc-jsp/en/education/beroepsonderwijs/flowerbulb-history/Tulip-time-table.xml
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397636.15/warc/CC-MAIN-20160624154957-00046-ip-10-164-35-72.ec2.internal.warc.gz
en
0.903734
1,192
2.921875
3
In his annual testimony to Congress in March, Dept. of Transportation Inspector General Calvin L. Scovel III outlined the top ten management issues facing DOT, and for the first time he put traffic congestion on the list. “Given the impact of congestion, in the air and on the ground, on the quality of life for travelers and on economic growth, we believe that the Department's initiative to reduce congestion among all modes of transportation is noteworthy. It represents an overall framework for federal, state and local authorities to begin addressing congestion and includes elements ranging from alternative funding sources for infrastructure to cross-modal solutions.” The IG's elevation of congestion to its top-ten list brings home the point that getting stuck in traffic represents not only an inconvenience, but a threat to the nation's economic well being and safety. Scovel was referring in part to a plan introduced last May by then- Transportation Sec. Norman Mineta to reduce congestion, which called for, among other ideas, variable road-pricing schemes based on congestion. These so-called “High Occupancy Toll” or HOT lanes seek to price tolls based on real-time congestion reports and change throughout the day. Another suggestion is to open public roads to privatization in the hope that the private sector can manage traffic flow in ways in which public entities cannot because of declining funding. A third suggestion calls for construction of traffic corridors that would relieve congestion on the borders of Canada and Mexico. Selling roads to private firms is very enticing to cash-strapped states, as evidenced by a spate of activity over the past year in about a dozen of them, including leasing the Indiana Toll Road to an overseas private consortium at a cost of $3.8 billion for 75 years. Are selling public roads, installing HOT lanes or building new roads the answer to congestion? Some experts suggest these solutions are too simplistic and call on policy makers to change the way they think about congestion. As pessimistic as it may seem, a growing number of people who study congestion proffer that congestion is a natural outgrowth of economic success and we have to accept it. “Contrary to what the government tells us, congestion is not caused by poor policy choices, but by economic success,” says Anthony Downs, senior fellow, Metropolitan Policy Program at The Brookings Institution. Downs suggests the most congested regions are the most prosperous. “Expanding road capacity does not reduce congestion, because we're continuing to grow population, increasing the number of vehicles and driving more miles,” he says. Downs notes that while HOT lanes may encourage people to alter their work schedules, we lose productivity because people are not working at the same time. Brian Taylor, director of UCLA's Institute of Transportation Studies, concurs: “Congestion is a drag on productivity, but it may mean that a place is economically viable.” He says that traffic engineers and policy makers have it wrong when they focus solely on congestion. Instead, they should study ways to achieve “consistency” in traffic flow so drivers can count on how long a trip will take. “That's what businesses, especially trucking firms, really want. They want a trip to take the same time every day — even if it's longer.” Taylor says that small changes, as opposed to large sweeping changes, are sometimes most effective. 
For example, instead of allowing vehicles to enter a freeway at will, spreading out entry through a green-light, red-light system makes a roadway “stable.” Another innovative idea is to push disabled cars off a road quickly. As for HOT lanes, Taylor says they sometimes speed up a trip, but they are not consistent in the travel times they deliver.
<urn:uuid:32658498-ac0a-4433-aaea-a5a0de68f382>
CC-MAIN-2016-26
http://fleetowner.com/management/inbox/fleet_congestion_dot_list
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395166.84/warc/CC-MAIN-20160624154955-00090-ip-10-164-35-72.ec2.internal.warc.gz
en
0.960422
766
2.515625
3
Ari Steinfeld embraced the challenge. Eight months after his limb-threatening accident, Ari’s march down the aisle was a testament to what patient determination — and advanced orthopaedic surgery — can achieve. Go, Ari. Read Ari’s story and find your own inspiration at ANationInMotion.org. The American Academy of Orthopaedic Surgeons (AAOS) and the Orthopaedic Trauma Association (OTA) are working together to increase awareness about road safety. In 2010, approximately 4,280 pedestrians were killed and 70,000 were injured in the U.S. That’s one injury every eight minutes! Orthopaedic surgeons—the specialists who put bones and joints back together after road crashes and traumas—want drivers, passengers, and pedestrians who share the road to be safe. Fight for Your Mobility! Learn how to prevent pedestrian crash-related injuries:
- Cross streets at designated crosswalks.
- Be careful at intersections where drivers may fail to yield right-of-way while turning—especially if signal is changing.
- Be sure crosswalk is clear before crossing street.
- Increase your visibility at night by carrying a flashlight and wearing reflective clothing.
- Walk on the sidewalk. If you must walk in the street, walk facing traffic.
- Avoid wearing headphones, using the phone, or texting while walking.
- Hold your children’s hands.
For more road safety tips, visit orthoinfo.org, ota.org, and decidetodrive.org.
<urn:uuid:de9ce2c7-fbe9-4bfd-a7e9-0a0becac7a63>
CC-MAIN-2016-26
http://newsroom.aaos.org/PSA/print/Trauma/a-nation-in-motion.htm
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396959.83/warc/CC-MAIN-20160624154956-00015-ip-10-164-35-72.ec2.internal.warc.gz
en
0.908361
344
2.671875
3
In which Olympic event does a human travel fastest? Decide which events to include in your Alternative Record Book. To investigate the relationship between the distance the ruler drops and the time taken, we need to do some mathematical modelling... Have you ever wondered what it would be like to race against Usain Bolt? Andy wants to cycle from Land's End to John o'Groats. Will he be able to eat enough to keep him going? How would you design the tiering of seats in a stadium so that all spectators have a good view? The triathlon is a physically gruelling challenge. Can you work out which athlete burnt the most calories? How would you go about estimating populations of dolphins? Does weight confer an advantage to shot putters? Simple models which help us to investigate how epidemics grow and die out. A problem about genetics and the transmission of disease. Use the computer to model an epidemic. Try out public health policies to control the spread of the epidemic, to minimise the number of sick days and deaths. Can you rank these sets of quantities in order, from smallest to largest? Can you provide convincing evidence for your rankings? Investigate circuits and record your findings in this simple introduction to truth tables and logic. Which countries have the most naturally athletic populations? Learn about the link between logical arguments and electronic circuits. Investigate the logical connectives by making and testing your own circuits and fill in the blanks in truth tables to record. . . . Can you deduce which Olympic athletics events are represented by the graphs? Can you visualise whether these nets fold up into 3D shapes? Watch the videos each time to see if you were correct. Where should runners start the 200m race so that they have all run the same distance by the finish? Two trains set off at the same time from each end of a single straight railway line. A very fast bee starts off in front of the first train and flies continuously back and forth between the. . . . These Olympic quantities have been jumbled up! Can you put them back together again? Formulate and investigate a simple mathematical model for the design of a table mat. Can you work out which processes are represented by the graphs? Can you work out what this procedure is doing? Can Jo make a gym bag for her trainers from the piece of fabric she has? Make an accurate diagram of the solar system and explore the concept of a grand conjunction. Get some practice using big and small numbers in chemistry. In Fill Me Up we invited you to sketch graphs as vessels are filled with water. Can you work out the equations of the graphs? Can you work out which drink has the stronger flavour? If I don't have the size of cake tin specified in my recipe, will the size I do have be OK? Is it cheaper to cook a meal from scratch or to buy a ready meal? What difference does the number of people you're cooking for make? Imagine different shaped vessels being filled. Can you work out what the graphs of the water level should look like? Explore the properties of isometric drawings. Which dilutions can you make using only 10ml pipettes? Explore the properties of perspective drawing. What shapes should Elly cut out to make a witch's hat? How can she make a taller hat? How do you write a computer program that creates the illusion of stretching elastic bands between pegs of a Geoboard? The answer contains some surprising mathematics. When a habitat changes, what happens to the food chain? What shape would fit your pens and pencils best? 
How can you make it? Can you suggest a curve to fit some experimental data? Can you work out where the data might have come from? Use trigonometry to determine whether solar eclipses on earth can be perfect. When you change the units, do the numbers get bigger or smaller? Are these estimates of physical quantities accurate? Work with numbers big and small to estimate and calculate various quantities in physical contexts. Which units would you choose best to fit these situations? Practice your skills of measurement and estimation using this interactive measurement tool based around fascinating images from biology. How efficiently can you pack together disks? Work with numbers big and small to estimate and calculate various quantities in biological contexts. Water freezes at 0°Celsius (32°Fahrenheit) and boils at 100°C (212°Fahrenheit). Is there a temperature at which Celsius and Fahrenheit readings are the same? Explore the relationship between resistance and temperature. Work with numbers big and small to estimate and calculate various quantities in biological contexts.
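One question in the list above, whether there is a temperature at which Celsius and Fahrenheit readings are the same, has a one-line algebraic answer: setting F = 9C/5 + 32 equal to C gives C = -40. The short check below is illustrative only and is not part of the original problem set.

```python
def fahrenheit(celsius: float) -> float:
    """Convert a Celsius temperature to Fahrenheit."""
    return 9 * celsius / 5 + 32

# Solving C = 9C/5 + 32 gives -4C/5 = 32, so C = -40.
crossover = -40.0
assert fahrenheit(crossover) == crossover  # -40 reads the same on both scales
print(f"{crossover} C = {fahrenheit(crossover)} F")
```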
<urn:uuid:e2444cee-0baa-4a6c-a3d4-e84284604a52>
CC-MAIN-2016-26
http://nrich.maths.org/public/leg.php?code=-445&cl=3&cldcmpid=4744
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397565.80/warc/CC-MAIN-20160624154957-00155-ip-10-164-35-72.ec2.internal.warc.gz
en
0.928518
972
3.59375
4
I'm reading Nursing Skills by Potter; it's a very good book. They say to inject air into vials before drawing up meds, yet for ampules you don't have to inject any air. So I'm confused why we even inject air in the first place and why we don't inject it into ampules. I'm guessing the pressure is greater INSIDE the bottle, hence we inject the air. What would happen if you don't inject the air? Would the bottle blow up?
<urn:uuid:39f2f1bb-1fd1-407a-a4bc-f42149a82ad2>
CC-MAIN-2016-26
http://allnurses.com/nursing-patient-medications/im-confused-when-506722.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783393146.70/warc/CC-MAIN-20160624154953-00029-ip-10-164-35-72.ec2.internal.warc.gz
en
0.928255
94
2.53125
3
- Year Published: 1609
- Language: English
- Country of Origin: England
- Source: Shakespeare, W. The sonnets. In R. G. White (Ed.), The complete works of William Shakespeare. New York: Sully and Kleinteich.
- Flesch–Kincaid Level: 11.0
- Word Count: 125
Shakespeare, W. (1609). Sonnet 128. The Sonnets (Lit2Go Edition). Retrieved June 26, 2016.
Shakespeare, William. "Sonnet 128." The Sonnets. Lit2Go Edition. 1609. Web. June 26, 2016.
William Shakespeare, "Sonnet 128," The Sonnets, Lit2Go Edition, (1609), accessed June 26, 2016.
How oft when thou, my music, music play’st,
Upon that blessed wood whose motion sounds
With thy sweet fingers when thou gently sway’st
The wiry concord that mine ear confounds,
Do I envy those jacks that nimble leap,
To kiss the tender inward of thy hand,
Whilst my poor lips which should that harvest reap,
At the wood’s boldness by thee blushing stand.
To be so tickled they would change their state
And situation with those dancing chips,
O’er whom thy fingers walk with gentle gait,
Making dead wood more blest than living lips,
Since saucy jacks so happy are in this,
Give them thy fingers, me thy lips to kiss.
<urn:uuid:a80554d7-0540-400f-b59e-76578da3917b>
CC-MAIN-2016-26
http://etc.usf.edu/lit2go/179/the-sonnets/4191/sonnet-128/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395546.12/warc/CC-MAIN-20160624154955-00169-ip-10-164-35-72.ec2.internal.warc.gz
en
0.822539
333
2.578125
3
The Baptist Mission in India: Containing a Narrative of Its Rise, Progress, and Present Condition, A Statement of the Physical and Moral Character of the Hindoos, Their Cruelties, Tortures and Burnings, With a Very Interesting Description of Bengal. by William Staughton, D.D. Philadelphia: Hellings and Aitken, 1811. William Staughton (1770-1829), baptized by Samuel Pearce, was a theological student at Bristol Academy (Bristol Baptist College), Bristol, England, at the establishment of the Baptist Missionary Society in 1792. According to S. Pearce Carey, William Carey, D.D., Fellow of the Linnaean Society (New York: George H. Doran Co., 1923), pp. 92-93, Staughton was present at the founding of the BMS in 1792, and contributed ten shillings for membership in the Society. He did not sign the founding document, but he was entered as "Anon." (i.e., anonymous). S. Pearce Carey says of Staughton, "The 'Anon.' was student William Staughton, thanking his lucky stars that he was there, but, true to studentdom, moneyless, even after his five Sundays' 'supplying' in 'College Lane.' He used to say, 'I rejoice over that half-guinea more than over all I have given in my life besides.' As just a bird of passage, he modestly withheld his signature." [S. Pearce Carey, William Carey, D.D., Fellow of the Linnaean Society (New York: George H. Doran Co., 1923), p. 93; cf. S. W. Lynd, Memoir of the Rev. William Staughton, D.D. (Boston: Lincoln, Edwards, and Co., 1834), p. 173.] In 1793, Staughton immigrated to Georgetown, South Carolina, where he served as a Baptist minister. After two subsequent pastorates in New Jersey, Staughton became the minister of First Baptist Church, Philadelphia, Pennsylvania, in 1805. In 1811, he helped to form a new church, Sansom Street Baptist Church, Philadelphia, and he became its first minister. Always committed to theological and higher education, Staughton began a theological school in his home in 1811, the first of its kind in the United States. In 1798 when Staughton was twenty-eight years old, Princeton College recognized his theological insight and awarded him an honorary doctor of divinity (D.D.) degree. In addition to The Baptist Mission in India (1811), Staughton translated and published Edward Wettenhall's Graece grammaticae institutio compendiaria, A Compendious System of Greek Grammar (1813), and two editions of The Works of Virgil (1812; 1813). At First Baptist Church, Philadelphia, Pennsylvania, May 18, 1814, the first "General Missionary Convention of the Baptist Denomination in the United States of America for Foreign Missions" (i.e., Triennial Convention) elected Staughton--an ardent supporter of Christian missionary work--as its first corresponding secretary for the newly formed Baptist Board of Foreign Missions. Along with the Richard Furman, the first president of this national body of Baptists in the United States and founder of Furman University, and Thomas Baldwin, the first secretary of the body, Staughton became known as a key leader of Baptists in America and Christian missionary outreach. "Baldwin, Staughton, and Furman were the leading preachers during the only time in American history when the Baptists were genuinely united" [Thomas R. McKibbens, Jr., The Forgotten Heritage: A Lineage of Great Baptist Preaching (Macon: Mercer University Press, 1986), p. 174]. 
So eminent was Staughton that upon the simultaneous deaths of Thomas Jefferson and John Adams on July 4, 1826--the fiftieth anniversary of the Declaration of Independence--various people in Washington, D.C., requested that he deliver a memorial sermon in the United States Capitol. Staughton's text was 2 Samuel 1:23 "lovely and pleasant in their lives, and in their death they were not divided: they were swifter than eagles, they were stronger than lions." McKibbens says, Staughton honored the lives of Jefferson and Adams and gently led his hearers to remember that, as he so picturesquely said it, "the rock is unshaken, though the aspen tremble on its side." Although leaders, no matter how great, must fall and die, "the Lord God omnipotent reigneth." He concluded his sermon with a reminder that life can be compared to walking on a bridge that is full of trap doors that lie concealed. "Each step . . . is step of jeopardy." Thus it is wise for every person to be "well prepared for the final plunge." [McKibbens, The Forgotten Heritage, p. 171; McKibbens attributes the quotes' origin to "William Staughton, "Sermon, Delivered in the Capitol of the United States; on Lord's Day, July 16, 1826; at the Request of the Citizens of Washington, on the Death of Mr. Jefferson and Mr. Adams" (Washington: Published at the Columbian Office, 1826), p. 11.] On two different occasions, Staughton served as Chaplain of the United States Senate. Appointed December 10, 1823, his first term ended on December 13, 1824. His second term began on December 12, 1825, and ended on December 7, 1826. The Board of Commissioners of the Triennial Convention of Baptists established Columbian College, now The George Washington University, Washington, D. C.; Staughton was elected as the first president (1821-1827). Because of bad health, Staughton resigned his position at George Washington University. Subsequently in 1829, Staughton was elected as the first president of Georgetown College, Georgetown, Kentucky, to which he sent his books and papers. However, Staughton died suddenly prior to arriving at the College [cf. Robert Snyder, A History of Georgetown College, ca. 1980]. As Staughton reports in the "Preface" to The Baptist Mission in India: The following pages have been selected for the most part from the writings of the missionary brethren at Serampore, and those of their friends. The "brief narratives" was drawn up in England. The Essays are formed chiefly from a series of interesting dialogues composed by Dr. Marshman. The other Articles are selected from "the Periodical accounts" of the Society, excepting the article "Bengal," which is a production of Mr. Ward, and taken from his interesting history "of the writings, religion, and manners of the Hindoos." This compilation is presented to the public from an anxious desire that Missionary Intelligence may be circulated, and that an holy ardour may be excited and vigorous efforts employed for the conversion of the heathen and the consequent diffusion of the great Saviour's empire. The detail is limited to the Baptist Mission in India.
<urn:uuid:3ad28b03-4ca5-42a3-92e3-4fdad84fbef0>
CC-MAIN-2016-26
http://www.wmcarey.edu/carey/staughton/staughton.htm
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395160.19/warc/CC-MAIN-20160624154955-00112-ip-10-164-35-72.ec2.internal.warc.gz
en
0.92044
2,086
2.6875
3
You won’t see it coming. Learn what to do if you think you’re having a stroke http://bit.ly/22JTYTa Posted on Mon, June 27 2016 Flashback Friday! In the early 1900s Wesley Hospital introduced the Nurses’ Training School. The classroom skeleton that the students used was kept in a bushel basket in the basement storeroom. When it was needed for class, two students were sent to fetch the basket. As the walk from the basement to the classroom required going outdoors, the basket was kept covered by an old blanket to keep from shocking the neighbors. Posted on Fri, June 24 2016
<urn:uuid:c979140b-6e6a-44f8-b628-94784ba30e93>
CC-MAIN-2016-26
http://wesleymc.com/about/newsroom/?filterSelect=weight%20loss%20cardiology%20cardiac%20heart%20cardiothorasic%20cardiac%20surgeons
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397213.30/warc/CC-MAIN-20160624154957-00113-ip-10-164-35-72.ec2.internal.warc.gz
en
0.972288
134
2.546875
3
Reykjavik, Iceland (UPI) Feb 11, 2011 Researchers say Iceland's second-largest volcano seems ready to erupt and might create an ash cloud bigger than one that disrupted European air travel in 2010. Geologists say an increased swarm of earthquakes around the Bardarbunga volcano tell them there is "no doubt" lava is rising, increasing the risk of an eruption, Britain's Daily Telegraph reported this week. Pall Einarsson, a professor of geophysics at the University of Iceland, says the signs of increased activity provide "good reason to worry." The sustained earthquake tremors in the remote volcano range are the strongest recorded in recent times, he said. "This is the most active area of the country if we look at the whole country together," he said. "There is no doubt that lava there is slowly growing, and the seismicity of the last few days is a sign of it." The last recorded eruption of Bardarbunga was in 1910. Bardarbunga is much larger than the Eyjafjallajokull volcano, which shut down air travel across most of Europe last year when its ash cloud drifted across the continent.
Flights delayed as Japan's 'James Bond' volcano erupts Tokyo (AFP) Feb 2, 2011 A series of spectacular eruptions from a volcano in southern Japan fired columns of ash and smoke thousands of metres in the air early Wednesday, with the cloud delaying some international flights to Tokyo. The 1,421-metre (4,689-feet) Shinmoedake volcano in the Kirishima range, featured in the 1967 James Bond film "You Only Live Twice", continued the series of deafening blasts which began ...
<urn:uuid:36e280e4-30d6-456d-9d18-bc2f1f4af23c>
CC-MAIN-2016-26
http://www.terradaily.com/reports/Another_Iceland_volcano_may_erupt_999.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783398873.39/warc/CC-MAIN-20160624154958-00092-ip-10-164-35-72.ec2.internal.warc.gz
en
0.869367
833
2.875
3
DEHYDRATION among hospital patients has been under the spotlight this week following the inquest into the death of a man in South London. Kane Gorny, 22, was admitted to St George’s Hospital, in Tooting, for a hip replacement, but within three days had died of thirst after medical staff ignored both his pleas for water and symptoms of dehydration. However, his life could have been saved with a simple invention called The Hydrant, which is being rolled out across wards at the Great Western Hospital. The GWH and Stoke Mandeville, in Aylesbury, are the only two hospitals in the country that provide the hydrants for patients – with evidence showing a significant increase in hydration levels and a dramatic reduction in dehydration-related disorders including urinary tract infections and falls. Originally introduced on Jupiter Ward at the GWH last October, the Hydrant was invented by Mark Moran, of Hydrate for Health, who came up with the idea after being in hospital for a back operation and finding it difficult to reach for a drink. It is a hands-free drinks system which helps patients to have access to fluids at all times without having to reach for, or hold their drink. It also enables staff to accurately measure how much fluid a patient is taking in. Karen Braid, Project Lead Productive Ward at GWH, said: “The feedback from patients and staff continues to be positive. “The key thing is to monitor patient’s fluid balance. We have been promoting the Hydrant and its benefits to GPs as well.” In 2009, dehydration was a contributory factor in the deaths of 816 hospital patients in England and Wales according to the Office for National Statistics. Mark Moran, whose invention is now transforming patient care at the GWH, decreasing the need for patient drips and, in turn, reducing the risk of infection, said: “Kane was able to use his mobile phone to call the police — which means he could certainly have helped himself to water without troubling the nurses at all. “I receive so many letters and emails from people who tell me the Hydrant has transformed their lives. I am still amazed at how something so simple can make such a massive difference to people, in hospitals, in care homes and in their own homes.”
<urn:uuid:82123dc6-33d8-4534-a1f2-93fa4fc17f07>
CC-MAIN-2016-26
http://www.thisiswiltshire.co.uk/news/headlines/9840629.Dehydration_falls_at_GWH_and____lives_are_transformed___/?ref=rss
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783392159.3/warc/CC-MAIN-20160624154952-00078-ip-10-164-35-72.ec2.internal.warc.gz
en
0.97923
482
2.546875
3
Operator: China Meteorological Administration
Launch date: June 25, 2000
Type of orbit: Geostationary
VISSR is a three-channel device: the visible channel covers 0.55-1.05 µm, the infrared channel 10.5-12.5 µm, and the water vapour channel 6.2-7.6 µm. In the visible channel, the resolution is 1.25 km. In the infrared and water vapour channels, the resolution is 5 km. The radiometer scans the earth's surface line by line; each line consists of a series of individual image elements or pixels. For each pixel the radiometer measures the radiative energy of the different spectral bands. This measurement is digitally coded and transmitted to the ground station for pre-processing before being disseminated to the user community.
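For quick reference, the channel figures quoted above can be gathered into a small lookup table. The sketch below is illustrative only; the dictionary layout and helper function are assumptions made for this example, not an official FY-2 or VISSR data format.

```python
# Channel specifications for VISSR as quoted above:
# wavelength ranges in micrometres, ground resolution in kilometres.
VISSR_CHANNELS = {
    "visible":      {"wavelength_um": (0.55, 1.05), "resolution_km": 1.25},
    "infrared":     {"wavelength_um": (10.5, 12.5), "resolution_km": 5.0},
    "water_vapour": {"wavelength_um": (6.2, 7.6),   "resolution_km": 5.0},
}

def resolution_km(channel: str) -> float:
    """Return the quoted ground resolution for a given channel."""
    return VISSR_CHANNELS[channel]["resolution_km"]

print(resolution_km("visible"))  # 1.25
```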
<urn:uuid:a8f3a2f9-d60d-4ec8-8146-b0a9e50d299f>
CC-MAIN-2016-26
http://en.allmetsat.com/satellite-fy2.php
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783399522.99/warc/CC-MAIN-20160624154959-00014-ip-10-164-35-72.ec2.internal.warc.gz
en
0.893362
169
2.640625
3
OREGON, Ohio – Many of the major players in the Lake Erie battle against phosphorus pollution gave shape to the magnitude of the problems in testimony before the Ohio Legislature's Lake Erie Legislative Caucus on Friday. Agricultural practices have been spotlighted as a major source of the phosphorus that has fueled harmful algal blooms, or HABs. Cyanobacteria in the algal blooms produce a liver toxin called microcystin, and Ohio has been ground zero in dealing with it. High levels of microcystin led to a three-day ban on drinking water in Toledo on Aug. 2. Microcystin could be a problem all around the country, but that is hard to determine. There were no standards for it in the U.S. until the Ohio EPA recently set a maximum microcystin level of 1 part per billion, matching the standard set by the World Health Organization. Conservation groups on Friday demanded state and federal aid to provide more research and mandatory controls, especially for Ohio's farmers. The Ohio Farm Bureau warned too many controls would interfere with the ability of Ohio farmers to grow food and raise livestock. Water plant officials from along the Ohio shoreline explained how difficult and expensive it has been to test, analyze and eliminate the toxic microcystin produced by the HABs. Larry Fletcher of the Lake Erie Shores & Islands visitor's bureau told a dozen legislators on the panel immediate action is needed. "The eight counties along the Lake Erie shoreline generated $12.9 billion in tourism dollars, 119,000 jobs and $1.7 billion in state taxes in 2013," he told the packed ballroom at Maumee Bay State Park. "The negative impact of the Toledo drinking water ban (because of high microcystin levels) was heard all over the country and around the world. It had a negative impact felt up and down the shoreline." "Ohio must develop an economical, comprehensive nutrient reduction program for the state that accounts for both regulated 'point' and non-regulated 'non-point' sources of pollution," said Frank Greenland, director of watershed programs for Cleveland's Northeast Ohio Regional Sewer District. Greenland reminded everyone that when the Cuyahoga River infamously burned in 1969, it resulted in the Clean Water Act of 1972. New rules and regulations were responsible for the recovery of Lake Erie in the late 1970s. "The Clean Water Act regulations were not voluntary," said Greenland. "For Lake Erie to recover again, you need mandatory rules." That flies in the face of the latest agricultural legislation aimed at reducing the phosphorus overload from fertilizer and manure running off Ohio's farm fields. Those regulations are mostly voluntary. The only mandatory rule is required training for some fertilizer applicators, which won't be in effect until 2016. A sore point was the legislation's failure to include a ban on spreading manure on frozen farm fields, a common practice. Sudden rain events can wash the manure, which is rich in phosphorus and nitrogen, into rivers and streams that feed Lake Erie. "There's a lot of interest in banning the application of manure on frozen or snow-covered ground," said Jack Fisher, executive vice-president of the Ohio Farm Bureau Federation. "On the surface, I agree. We need to look at that. If you do have a weather incident right after that and it doesn't stay in place, that's adding to pollution. "The unintended consequence is that if you have a long period of time where farmers can't get on the field, you can only store so much animal waste. Then what are you going to do?"
Fisher wanted a middle of the road approach to phosphorus pollution, which would help farmers to grow crops while still providing clean drinking water. Sandy Bihn of Oregon, Ohio and Lake Erie Water Keeper Inc. has been fighting for Lake Erie for a quarter century. She had the longest list of demands, from a phosphorus ban on lawn fertilizer to dealing with failing septic systems, setting phosphorus discharge limits at wastewater plants, federal standards for microcystin in drinking water and declaring all of Western Lake Erie as impaired under the Clean Water Act. That is what Chesapeake Bay and Wisconsin's Fox River and Green Bay have done to both assess and implement plans for recovery. The Fox River, according to Bihn, has a better-managed plan than Lake Erie and has had notable progress. In the middle of the phosphorus war has been Heidelberg University's National Center for Water Quality Research. Its spring phosphorus report from the Maumee River Watershed, the largest watershed in the Great Lakes and a conduit for agricultural phosphorus in spring, was instrumental in forecasting the severity of this year's algal blooms. "The increasing severity and extent of harmful algal blooms in western Lake Erie correspond to increases in dissolved phosphorus loads since the mid-1990s," said Dr. Kenneth A. Krieger, director of NCWQR. "This is because dissolved phosphorus is nearly 100 percent available for growth of algae and cyanobacteria and is the form of phosphorus most responsible for the development of HABs." The record Lake Erie HAB of 2011 followed a record spring discharge of phosphorus, while the small HAB of 2012 came during a drought. That suggests, said Krieger, the phosphorus coming from land is much more important in determining the severity of the HAB than phosphorus released from lake sediments. "It is crucial to address the sources of the problem to reduce or eliminate the costs of treating the symptoms," said Krieger. That would mean a major reduction in agricultural fertilizer, which Krieger says must be done quickly with the adoption of best management practices by food producers. A trio of companies testified that they had the ability to eliminate algae or spotlight phosphorus hot spots. Algix, with plants in Georgia, Alabama and Mississippi makes equipment that harvests algae from southern fish ponds and converts it into plastic. Nebraska's HAB Aquatic Solutions proposed treating Lake Erie with alum in strategic locations of Maumee Bay. Blue Water Satellite of Stow, Ohio said its two satellites could locate phosphorus hot spots on both land and water so they can be treated.
<urn:uuid:ab6b91b7-a36c-4854-a93c-46752e10d2a5>
CC-MAIN-2016-26
http://www.cleveland.com/outdoors/index.ssf/2014/08/there_are_many_questions_but_n.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396887.54/warc/CC-MAIN-20160624154956-00187-ip-10-164-35-72.ec2.internal.warc.gz
en
0.958121
1,276
2.90625
3
Parents who are prepared for an infant's crying bouts can maintain better control, studies say.
THURSDAY, March 5 (HealthDay News) -- An educational program for parents can help prevent shaken baby syndrome that's triggered by infant crying, according to American and Canadian studies. "Typically, crying begins within two weeks of birth, so it's imperative that new parents receive information and learn coping strategies," Dr. Fred Rivara, an investigator at the Harborview Injury Prevention and Research Center, vice chairman of pediatrics at the University of Washington in Seattle and co-author of the U.S. study, said in a university news release. He and his colleagues tested an educational program called "The Period of PURPLE Crying," which includes a 12-minute DVD and information booklet. The program -- designed to teach parents that infant crying is normal and can be frustrating for caregivers -- identifies several behaviors as normal. The U.S. study included 2,738 mothers of new infants. Half of them received the PURPLE material, and the others were given information about infant safety. The mothers who received the PURPLE materials scored six points higher in knowledge about crying and one point higher in knowledge about shaking, were 6 percent more likely to share information with caregivers about how to cope with the frustration of infant crying and were 7 percent more likely to warn caregivers about the dangers of shaking, the study found. In the Canadian study, mothers who received PURPLE materials scored six points higher in knowledge about crying; were 13 percent more likely to share information with caregivers about how to deal with infant crying, were 13 percent more likely to share information about the dangers of shaking and were about 8 percent more likely to share information about infant crying. The U.S. study was published in the March issue of the journal Pediatrics, and the Canadian study was published in the Canadian Medical Association Journal. "Changing knowledge is a critical first step in changing behavior, and this is important public health work because the results show it's possible to change people's ideas about crying," said Dr. Ronald Barr, the lead author of both studies. Barr is director of community child health at the Child & Family Research Institute and a professor of pediatrics at the University of British Columbia. In the United States, about 1,300 infants are hospitalized or die from shaken baby syndrome each year, and about 80 percent of those who survive suffer brain injury, blindness and deafness, fractures, paralysis, cognitive and learning disabilities or cerebral palsy, according to background information in the news release. The U.S. National Institute of Neurological Disorders and Stroke has more about shaken baby syndrome. -- Robert Preidt SOURCE: University of Washington-Harborview Medical Center, news release, March 2, 2009 All rights reserved
<urn:uuid:c1478ca6-3ea0-471b-9edf-462b6f8f2be3>
CC-MAIN-2016-26
http://www.bio-medicine.org/medicine-news-1/Preventing-Shaken-Baby-Syndrome-38746-1/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395560.14/warc/CC-MAIN-20160624154955-00029-ip-10-164-35-72.ec2.internal.warc.gz
en
0.962043
570
3.203125
3
General Land Office, established (1812) in the U.S. Treasury Dept. and transferred (1849) to the U.S. Dept. of the Interior. Empowered to survey, manage, and dispose of the public domain, the office administered the preemption acts, homestead laws, and all legislation affecting public lands. After 1900 it was more concerned with conservation of the remaining land. In 1946 it was consolidated with the Grazing Service into the Bureau of Land Management. The Columbia Electronic Encyclopedia, 6th ed. Copyright © 2012, Columbia University Press. All rights reserved.
<urn:uuid:276f07fd-c879-4070-a810-fed71f3bc4ae>
CC-MAIN-2016-26
http://www.factmonster.com/encyclopedia/history/general-land-office.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397111.67/warc/CC-MAIN-20160624154957-00072-ip-10-164-35-72.ec2.internal.warc.gz
en
0.924466
122
3.078125
3
YOKOTA AIR BASE, Japan — As computer technology advances, so do methods hackers use to infiltrate networks and computer systems. Fortunately for servicemembers and Department of Defense employees, the DOD provides them free software to protect their home computers from these attacks. Recently, one newly discovered virus known as “Conficker” or “Downadup” gained notoriety, infecting more than 9 million computers, according to an article Thursday on CNET News online. “If it finds a vulnerable computer, it turns off the automatic backup service, deletes previous restore points, disables some security services, blocks access to a number of security web sites and opens infected machines to receive additional programs from the malware’s creator,” stated a description of the virus on the Symantec.com Web site. “The worm then tries to spread itself to other computers on the same network.” But even as hackers get craftier in how they infect computers, the best way to keep your personal computer safe is to keep it properly updated and to have a good anti-virus program. “Most individuals don’t always keep their computers updated to the Microsoft or anti-virus software standards,” said Staff Sgt. Luis Nunez, the information system security manager at Yokota’s Wing Information Assurance Office, adding that the DOD provides free corporate editions of both Symantec and McAfee anti-virus software for home use. Nunez explained that, commonly, viruses will infect a home computer when the user clicks a link to a malicious Web site embedded in an e-mail or on another Web site. In the case of Conficker, the virus also tries to spread by copying itself into shared folders on networks and infecting USB devices such as memory sticks, according to Symantec. Viruses that spread like this are the reason the Department of Defense has restricted the use of USB thumb drives and similar storage devices, Nunez said. To prevent home systems from becoming infected, Nunez said that users should not only rely on updates and firewall protection, but they should also have reliable anti-virus software installed. To obtain a copy of the free anti-virus software provided by the DOD, visit https://www.cert.mil. To download the software, a user must access the site from a government computer.
<urn:uuid:2d94d2af-9efb-4a23-8074-97bce8ea5d8d>
CC-MAIN-2016-26
http://www.stripes.com/news/dod-providing-free-anti-virus-for-home-computers-1.87750
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395620.9/warc/CC-MAIN-20160624154955-00106-ip-10-164-35-72.ec2.internal.warc.gz
en
0.925898
499
2.671875
3
common name: southern pine beetle scientific name: Dendroctonus frontalis Zimmermann (Insecta: Coleoptera: Curculionidae: Scolytinae) Introduction - Distribution - Description - Biology - Hosts - Outbreaks - Survey and Detection - Prevention and Control - Selected References The southern pine beetle (SPB), Dendroctonus frontalis Zimmermann, is the most destructive insect pest of pine in the southern United States. A recent historical review estimated that SPB caused $900 million of damage to pine forests from 1960 through 1990 (Price et al. 1992). This aggressive tree killer is a native insect that lives predominantly in the inner bark of pine trees. Trees attacked by SPB often exhibit hundreds of resin masses (i.e., pitch tubes) on the outer tree bark. SPB feed on phloem tissue where they construct winding S-shaped or serpentine galleries. The galleries created by both the adult beetles and their offspring can effectively girdle a tree, causing its death. SPB also carry, and introduce into trees, blue-stain fungi. These fungi colonize xylem tissue and block water flow within the tree, also causing tree mortality (Thatcher and Conner 1985). Consequently, once SPB have successfully colonized a tree, the tree cannot survive, regardless of control measures.
Figure 1. Pitch tubes of the southern pine beetle (SPB), Dendroctonus frontalis Zimmermann, on outer bark. Photograph by James R. Meeker, FDACS, Division of Forestry.
Figure 2. S-shaped galleries of the southern pine beetle (SPB), Dendroctonus frontalis Zimmermann. Photograph by Wayne N. Dixon, FDACS, Division of Plant Industry.
When beetle populations are low (endemic), attacks are generally restricted to senescent, stressed or damaged pines; however, epidemics periodically occur (Thatcher et al. 1980). During epidemics, SPB infestations often begin in weakened or injured trees, but the high beetle populations can invade and overcome healthy vigorous trees by attacking in large numbers over a short period of time (Thatcher et al. 1980). Widespread and severe tree mortality can occur during epidemics, SPB spots (groups of infested trees) may expand at rates up to 50 ft. (15 m)/day, and uncontrolled infestations may grow to thousands of acres in size (Ron Billings, Texas Forest Service, personal communication). SPB attacks are not limited to conventional forest sites; they also may kill high-value trees in yards, parks, and other ornamental settings (Thatcher et al. 1978). Because of the seriousness of SPB infestations, care should be taken not to confuse SPB with the less aggressive but more common pine bark beetles of Florida, the pine engravers (Ips spp.) and the black turpentine beetle (D. terebrans (Olivier)) (Dixon 1984, 1986). The SPB occurs in a generally continuous distribution across the southern and southeastern United States (AL, AR, DE, FL, GA, KY, LA, MD, MS, NC, OK, SC, TN, TX, VA, WV), roughly coinciding with the distribution of loblolly pine, Pinus taeda L. SPB has occurred as far north as PA and NJ. SPB also occurs from AZ and NM south through Mexico and Central America into northern Nicaragua (Thatcher et al. 1980). Although range maps indicate that SPB may occur throughout the state of Florida, there is no known record of a SPB outbreak south of Hernando County (Chellman and Wilkinson 1975, Price et al. 1992). Infestations, however, have been documented as far south as Pasco County.
SPB is unlikely to occur south of N 28º 15' latitude in FL, due to the scarcity and/or lack of loblolly pine in the southern half of the state. Eggs are ca. 1.5 X 1.0 mm, oval in shape, shiny, opaque and pearly white. Larvae range in size from 2 to 7 mm in length and are wrinkled, legless, yellowish-white, with reddish-colored heads. Pupae have the same general color of larvae and the same general form and size of adults. Adults are 2 to 4 mm in length, short-legged, cylindrical and brown to black in color. The broad and prominent head has a distinct notch or frontal groove on male beetles. Females possess a broad, elevated, transverse ridge (mycangium) along the anterior pronotum. The rear end or abdomen of adults is rounded. Callow (new) adults progressively change in color from yellowish-white to yellowish-brown to reddish-brown to finally become dark brown (Thatcher et al. 1980). Figure 3. Dorsal view of southern pine beetles, Dendroctonus frontalis Zimmermann, with male on the left and female on the right. Photograph by David T. Almquist, University of Florida. Figure 4. Lateral view of southern pine beetle, Dendroctonus frontalis Zimmermann. Photograph by David T. Almquist, University of Florida. Adult female SPB are responsible for host selection (Thatcher et al. 1980). After locating a suitable host tree, a female beetle bores through the bark to initiate gallery construction in the inner phloem. Soon after initial attack, females emit an aggregation pheromone (frontalin), which attracts males and more females to the tree. This pheromone, in conjunction with host odors stemming from resin exudation at attack points, attracts more SPB to the tree. The aggregation of beetles results in a mass attack over a short period of time (Dixon and Payne 1979). Mass-attacking enables the beetles to overcome the natural defense mechanism of the tree, its resin production system. Resin under pressure within the tree can successfully force out or pitch out beetles if there are only a few beetles and the tree is relatively healthy. Mass-attacking SPB deplete the resin production capabilities of the tree causing resin flow to cease, after which point the tree is easily overcome. Mating soon takes place and females begin to construct long, winding S-shaped galleries that cross over each other. These galleries are packed with frass and boring material by males. Up to 30 eggs are deposited in niches along each gallery. Parent adults may then reemerge from the tree one to 20 days following oviposition and proceed to attack the same tree or another (Thatcher et al. 1980). Eggs hatch three to nine days following oviposition. Larvae feed in the inner phloem and construct winding galleries perpendicular to parent egg galleries. As larvae develop, they progressively tunnel towards the outer bark. During the fourth and final larval instar, the legless grubs move to the outer bark and form a pupal cell. The pupal stage lasts five to 17 days, before insects turn into callow adults. Callow adults remain under the bark for six to 14 days while their cuticle hardens and darkens. The young adults then bore an exit tunnel directly through the outer bark, leaving an open "shot" hole behind. Generally, the emerging beetles fly off to attack another tree (Thatcher et al. 1980). Adult beetles are capable of flying ca. 2 miles (3 km), and it is estimated that during dispersal phases, half of the beetles travel more than 0.43 mile (0.69 km) (Turchin and Thoeny 1993). 
The duration from egg to adult ranges from 26 to 60 days. There may be as many as seven to nine generations per year in Florida. SPB exhibit behavioral changes with the seasons. In the South, emergence of overwintering beetles has been correlated with the blossoming of flowering dogwood (Cornus florida L.) in the spring (Thatcher and Barry 1982). This spring emergence represents the primary dispersal phase of SPB, during which beetles often initiate multiple and widespread infestations. During the summer months, beetle development is hastened, and infestations tend to proliferate and expand very rapidly. SPB populations undergo a secondary dispersal phase in the fall, tending to produce scattered, small infestations. These infestations typically remain small and dispersed during the winter months, when beetle activity is slowest (Thatcher et al. 1980).

Hosts

SPB will infest and kill all species of pine within its distribution (Thatcher et al. 1980). In the southern United States, the preferred hosts are loblolly pine, shortleaf pine (Pinus echinata Mill.), pond pine (P. serotina Michx.), and Virginia pine (P. virginiana Mill.) (Thatcher and Barry 1982). In Florida, SPB will also readily attack and kill spruce pine (P. glabra Walter) and sand pine (P. clausa (Chapman ex Engelm.) Vasey ex Sarg.) (Chellman and Wilkinson 1975). Slash pine (P. elliottii Engelm.) and longleaf pine (P. palustris Mill.) are generally considered more resistant to SPB attack, but during outbreaks even healthy trees of these species can be successfully colonized (Belanger et al. 1993, Belanger and Malac 1980).

Outbreaks

Outbreaks of this insect tend to be cyclical. In areas where SPB has long been a problem, outbreaks have occurred at six- to 12-year intervals and generally last two to three years. Southwide, the time between outbreaks has decreased while the intensity and distribution of each outbreak have increased since 1960 (Belanger et al. 1993, Price et al. 1992). In Florida, infestations have been relatively few and small in the past (Chellman and Wilkinson 1975, 1980; Dixon, unpublished data). Many factors are involved in the development of outbreak conditions, such as the abundance and susceptibility of preferred hosts, and weather patterns and events (e.g., drought, storms). Historically, Florida has not experienced many destructive SPB episodes, probably because of the lack of large contiguous areas of loblolly and shortleaf pine in susceptible stages. However, an epidemic in and around Gainesville in Alachua County during 1994 warranted a reconsideration of the serious threat SPB poses to Florida's pine forests. In 2001, Gainesville experienced an even worse outbreak in the urban and wildland-urban interface areas than in 1994-95. Of the more than 400 infestations detected throughout the county in early 2000, approximately half were located within the city limits of Gainesville. Scores of landowners and homeowners spent hundreds to thousands of dollars attempting to tackle the problem (Anonymous 2001). In 2000, the state issued a declaration of emergency due to a southern pine beetle epidemic in Hernando County. Since then, the emergency has been expanded to 25 counties. Southern pine beetle surveys, once conducted on an annual basis by the Florida Division of Forestry, are now conducted monthly.

Forest inventory statistics indicate that from 1970 to 1995, the acreage of loblolly pine forest in Florida more than doubled, from a mere 337,000 ac. (136,380 ha) to more than 807,300 ac. (326,704 ha) (Brown 1999, Knight 1969, McClure 1970). The current acreage of loblolly pine also represents an all-time high since inventory statistics were first reported in 1949. This alarming increase in, and current level of, preferred host material suggests that SPB epidemics in Florida may be more frequent, widespread, and destructive in the future (Dixon, unpublished data).
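The acreage and generation figures above can be cross-checked with a few lines of arithmetic. The conversion factor (1 ac = 0.4047 ha) and the division by development time are standard, but the script below is only a sanity check of the quoted numbers, not part of the cited inventories.

```python
# Sanity-check arithmetic for the figures quoted above (illustration only).
HA_PER_ACRE = 0.404686  # hectares per acre

for acres in (337_000, 807_300):
    print(f"{acres:>9,} ac ~= {acres * HA_PER_ACRE:,.0f} ha")

# A naive upper bound on generations per year: 365 days divided by the
# reported 26-60 day egg-to-adult time gives roughly 6-14 generations.
# The 7-9 generations reported for Florida fall within that range; the
# upper bound is not reached because development slows in cooler months.
for days in (26, 60):
    print(f"{days}-day development -> at most ~{365 / days:.0f} generations/year")
```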
Survey and Detection

Often the first noticeable indication of SPB attack is foliage discoloration. Crowns of dying pines change color from green to yellow to red before turning brown and falling from the tree. The time these changes take varies seasonally. Frequently, by the time crowns are red, the beetles have already vacated the tree. The earliest sign of possible SPB attack is the presence of brownish-orange boring dust and tiny white pitch pellets accumulating at the base of the tree, in bark crevices, in nearby spider webs, and on understory foliage. A more noticeable indication of SPB attack is the presence of multiple popcorn-sized lumps of pitch (i.e., pitch tubes) on the outer bark of pine stems. These pitch tubes may occur from near ground level up to 60 ft. (18 m) high, but may not develop at all on trees severely weakened before beetle attack. The most diagnostic sign of SPB activity is the presence of the winding S-shaped galleries that cross over each other and are packed with boring dust and frass. These can be found by exposing a portion of the inner bark beneath pitch tubes or by removing a section of bark. Another sign of possible SPB activity is the presence of clear shot-like holes (ca. 1 mm in dia.) on the exterior bark surface where SPB have emerged (Billings and Pase 1979, Thatcher and Conner 1985). SPB infestations typically kill groups of trees, which allows investigations of suspect mortality to be prioritized.

Prevention and Control

Preventative strategies for homeowners and forest managers include the following:

- plant more resistant species, such as longleaf pine and slash pine, in place of loblolly pine, and plant loblolly pine only on appropriate sites (i.e., the right tree for the right place);
- thin overstocked, dense, or stagnant stands to a basal area of 80 sq. ft. per ac. (18 sq. m per ha) or less (a simple basal-area calculation is sketched below, after this list);
- maintain at least 25 ft. (8 m) between mature pines in urban settings;
- promote tree diversity in the landscape;
- remove damaged pines;
- maintain tree health and vigor by supplemental watering during extended dry periods;
- minimize construction and logging damage to pines and avoid soil compaction during operations;
- minimize changes in soil and water levels around pines;
- conduct logging or land-clearing operations during the coolest winter months;
- shorten rotation ages to less than 30 years; and
- apply an approved insecticide to high-value trees when the threat of SPB attack is imminent and the potential benefits outweigh the costs and risks of chemical use.
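The thinning guideline in the list above is expressed as a basal-area target. As a rough aid, the sketch below estimates basal area per acre from stem diameters measured on a fixed-area plot, using the standard English-unit formula (basal area in sq. ft. = 0.005454 x DBH^2, with DBH in inches); the plot size and diameter values are hypothetical example data, not measurements from any cited study.

```python
# Minimal check of stand density against the 80 sq. ft./ac. thinning
# guideline. The plot size and DBH values are hypothetical example data.
PLOT_ACRES = 0.1                       # fixed-area sample plot (assumed)
DBH_INCHES = [9.5, 11.0, 12.3, 8.7, 10.4, 13.1, 9.9, 12.8, 11.6, 10.1]

# Basal area of one stem (sq. ft.) from diameter at breast height (inches):
# BA = pi * (DBH / 24)^2 = 0.005454 * DBH^2
ba_per_tree = [0.005454 * d ** 2 for d in DBH_INCHES]
ba_per_acre = sum(ba_per_tree) / PLOT_ACRES

print(f"estimated basal area: {ba_per_acre:.0f} sq. ft. per acre")
if ba_per_acre > 80:
    print("above the 80 sq. ft./ac. guideline -- thinning may be warranted")
else:
    print("at or below the 80 sq. ft./ac. guideline")
```

In practice, foresters commonly estimate basal area with a prism or angle gauge instead; the fixed-plot arithmetic here is simply the most direct way to see how the 80 sq. ft./ac. threshold is applied.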
Remedial control measures to suppress existing infestations are limited. Generally, the most effective and desirable approach is to remove and process all SPB-infested pines as soon as possible. Trees can be salvaged, and SPB will be destroyed in the milling process. If trees cannot be salvaged, the bark should be destroyed, buried, or chipped and composted. In forested settings, it is recommended that a 50 to 100 ft. (15 to 30 m) buffer strip of green, uninfested trees also be removed to ensure that recently infested trees are not left behind. Where tree removal is not feasible, infested stems can be felled, bucked, and hand-sprayed with an approved insecticide.

Where none of the above approaches is feasible, infested trees, with or without a buffer strip, may simply be felled toward the center of the spot. This cut-and-leave approach has had limited use, with variable results (Belanger and Malac 1980, Swain and Remion 1981, Thatcher et al. 1978). Much research continues toward the development of effective methods of using semiochemicals to suppress SPB infestations, but operational uses are still on the horizon (Billings and Upton 1993, Hayes and Strom 1994, Payne and Billings 1989).

Selected References

- Belanger RP, Hedden RL, Lorio PL Jr. 1993. Management strategies to reduce losses from the southern pine beetle. Southern Journal of Applied Forestry 17: 150-154.
- Belanger RP, Malac BF. 1980. Silviculture can reduce losses from the southern pine beetle. USDA Forest Service, Combined Forest Pest Research and Development Program. Handbook No. 576. 17 pp.
- Billings RF, Pase III HA. 1979. A field guide for ground checking southern pine beetle spots. USDA Forest Service, Combined Forest Pest Research and Development Program. Handbook No. 558. 19 pp.
- Billings RF, Upton WW. 1993. Effectiveness of synthetic behavioral chemicals for manipulation and control of southern pine beetle infestations in East Texas. USDA Forest Service, Southern Forest Experiment Station. General Technical Report: 555-568.
- Brown MJ. 1999. Florida's forests, 1995. Resource Bulletin SRS-48. Asheville, NC: U.S. Department of Agriculture, Forest Service, Southern Research Station. 83 pp.
- Chellman CW, Wilkinson RC. 1975. Recent history of southern pine beetle, Dendroctonus frontalis Zimm. (Col.: Scolytidae), in Florida. Florida Entomologist 58: 22.
- Chellman CW, Wilkinson RC. 1980. Southern pine beetle outbreaks in Florida since 1974. Florida Entomologist 63: 515.
- Dixon WN. 1984. Ips engraver beetles. FDACS, Division of Forestry. Forest and Shade Tree Pests Leaflet No. 2. 2 pp.
- Dixon WN. 1986. Black turpentine beetle. FDACS, Division of Forestry. Forest and Shade Tree Pests Leaflet No. 4. 2 pp.
- Dixon WN, Payne TL. 1979. Aggregation of Thanasimus dubius on trees under mass-attack by the southern pine beetle. Environmental Entomology 8: 178-181.
- Fidgen J. (2000). Southern pine beetle information directory. http://everest.ento.vt.edu/~salom/SPBinfodirect/spbinfodirect2.html (5 July 2001).
- Fidgen J. (2001). Southern pine beetle Internet control center. http://whizlab.isis.vt.edu/servlet/sf/spbicc/ (5 July 2001).
- Foltz JL, Meeker JR. (2001). The southern pine beetle in Florida. http://eny3541.ifas.ufl.edu/pbb/spb_info.htm (5 July 2001).
- Hayes JL, Strom BL. 1994. 4-Allylanisole as an inhibitor of bark beetle (Coleoptera: Scolytidae) aggregation. Journal of Economic Entomology 87: 1548-1556.
- Knight HA. 1969. Forest statistics for northwest Florida, 1969. USDA Forest Service, Southeastern Forest Experiment Station. Resource Bulletin SE-14. 35 pp.
- Mayfield AE. (2004). Southern pine beetle aerial survey procedures. Forest Management. http://www.fl-dof.com/forest_management/fh_insects_spb_aerial.html (1 July 2008).
- McClure JP. 1970. Forest statistics for northeast Florida, 1970. USDA Forest Service, Southeastern Forest Experiment Station. Resource Bulletin SE-15. 33 pp.
- Payne TL, Billings RF. 1989. Evaluation of (S)-verbenone applications for suppressing southern pine beetle (Coleoptera: Scolytidae) infestations. Journal of Economic Entomology 82: 1702-1708.
- Price TS, Doggett C, Pye JL, Holmes TP, eds. 1992. A history of southern pine beetle outbreaks in the southeastern United States. Sponsored by the Southern Forest Insect Work Conference. The Georgia Forestry Commission, Macon, GA. 65 pp.
- Swain KM Sr., Remion MC. 1981. Direct control methods for the southern pine beetle. USDA Forest Service, Combined Forest Pest Research and Development Program. Handbook No. 575. 15 pp.
- Thatcher RC, Barry PJ. 1982. Southern pine beetle. USDA Forest Service, Washington, D.C. Forest Insect and Disease Leaflet No. 49. 7 pp.
- Thatcher RC, Conner MD. 1985. Identification and biology of southern pine bark beetles. USDA Forest Service, Washington, D.C. Handbook No. 634. 14 pp.
- Thatcher RC, Coster JE, Payne TL. 1978. Southern pine beetles can kill your ornamental pine. USDA Forest Service, Combined Forest Pest Research and Development Program, Washington, D.C. Home and Garden Bulletin No. 226. 15 pp.
- Thatcher RC, Searcy JL, Coster JE, Hertel GD, eds. 1980. The southern pine beetle. USDA, Expanded Southern Pine Beetle Research and Application Program, Forest Service, Science and Education Administration, Pineville, LA. Technical Bulletin 1631. 265 pp.
- Turchin P, Thoeny WT. 1993. Quantifying dispersal of southern pine beetles with mark-recapture experiments and a diffusion model. Ecological Applications 3: 187-198.