Readers' Question: What is the impact of persistent national debt on UK economic growth?
There are many issues here. Firstly, UK national debt is the total amount of debt the government owes the private sector. Persistent national debt implies the government is running an annual budget deficit, and each year's deficit adds to the total stock of debt.
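To make the deficit/debt distinction concrete, here is a minimal sketch with purely illustrative numbers (not official UK figures), showing how a run of annual deficits accumulates into the debt stock:

```python
# Illustrative only: the national debt is the running total of past deficits.
debt = 800.0  # assumed starting debt stock, in GBP billions
annual_deficits = [150.0, 120.0, 100.0, 80.0]  # hypothetical deficits

for year, deficit in enumerate(annual_deficits, start=1):
    debt += deficit  # each year's deficit adds to the accumulated debt
    print(f"Year {year}: deficit {deficit:>5.0f}bn -> debt {debt:>6.0f}bn")
```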
A budget deficit implies government spending is greater than tax revenue. This represents an injection into the economy and should therefore increase aggregate demand. In a recession, government borrowing can play a role in offsetting private sector saving and helping to increase growth.
However, if there is persistent borrowing, then crowding out is more likely to occur. Crowding out occurs when higher government borrowing leads to lower private sector investment and spending. The argument is that the government borrows by selling bonds to the private sector; if the private sector buys government bonds, it has less to spend and invest elsewhere. In a recession, crowding out is unlikely to occur because there are unemployed resources, but if the economy is at full capacity, crowding out is more likely.
Higher interest rates
It is argued that persistently high levels of government borrowing may lead to higher interest rates, because higher rates will be required to attract people to buy government debt. However, this does not necessarily have to occur. There is persistent government borrowing in Japan (with debt approaching 200% of GDP), but interest rates are still very low. It depends on many factors we haven't time to consider in this answer. However, if government borrowing did push up interest rates, then growth would be adversely affected.
Higher taxes and lower spending
The scale of UK government borrowing means that, to reduce the debt burden, the government will have to increase taxes and/or cut spending over the next 3-4 years. These higher taxes and lower spending will have the effect of reducing UK economic growth and could even damage any economic recovery.
However, if fiscal policy is tight and reduces inflationary pressure in the economy, it can enable interest rates to stay lower, and this expansionary monetary policy may offset the deflationary fiscal policy.
Why is the government borrowing?
If the government is borrowing to invest in public services like transport and education, it is possible that it will increase productive capacity and enable a higher rate of economic growth. However, if the borrowing is to finance transfer payments, e.g. pensions and health care for an ageing population, then there will be no boost to productive capacity from government borrowing.
It is certainly an interesting question given the scale of UK government borrowing over the medium to long term. Many economists feel the very high levels of borrowing will act as a constraint on future growth because of the forecast tax rises that will be necessary to reduce the debt burden to more manageable levels in the future.
Neanderthals lived across widely divergent climates and habitats. They adapted quickly to new environments as they migrated. Some lived in caves, while others built shelters out of branches and animal skins. Still others dug pits and covered them with branches, animal skins and leaves.
Ebola Crisis in West Africa
The Ebola epidemic in West Africa has destroyed lives, decimated communities, and orphaned children in the affected countries. However, death and suffering are only part of the crisis.
- Guinea, Liberia and Sierra Leone have seen their growth rates plummet
- Guinea lost 42,000 jobs in the potato industry. In Sierra Leone, 50 percent of jobs in the private sector have been lost.
- West Africa as a whole may lose an average of at least US$3.6 billion per year between 2014 and 2017, due to a decrease in trade, closing of borders, flight cancellations and reduced Foreign Direct Investment and tourism activity, fueled by stigma.
- According to UNFPA, an estimated 800,000 women will give birth in 2015 in the three countries, but some 120,000 of them may die from lack of access to emergency obstetric care, while health services have been diverted toward the Ebola response.
- In Sierra Leone, only one-fifth of the 10,000 HIV patients on anti-retroviral treatments are still receiving them due to the current lack of health personnel available for non-Ebola care.
The epidemic has slowed economic growth and closed businesses, affecting the livelihoods of millions of the poorest and most vulnerable people in the region. It has also put pressure on government budgets, limiting their capacity to provide basic services for their populations. In addition, the crisis has eroded trust among communities, stigmatizing victims and survivors and destroying confidence in health and government services. A vast coalition of partners is mobilized to help affected countries reach and stay at zero cases. At the same time, the challenge is to help them and their communities recover from the long-term impact of the crisis.
What is UNDP doing?
UNDP’s response to the crisis is focusing on three priorities:
Coordination and service delivery
As part of the overall UNMEER and UN response, we are the lead UN agency on the coordination of payments to Ebola workers. UNDP is helping to track payments and improve the systems through which they are delivered to treatment center staff, lab technicians, contact tracers and burial teams.
Community mobilization and outreach
We are working with communities, through local leaders and networks of volunteers, to identify cases, trace contacts and educate people on how the disease is spread and how to avoid contracting it. We are also raising awareness among large segments of the population of how important it is to fight stigma, reintegrate survivors and support their families.
Socio-economic impact and recovery
UNDP economists have been assessing the development impact of Ebola in a series of impact studies on budgets, development spending, livelihoods and the provision of essential services. The studies were used to inform national recovery plans in Guinea, Liberia and Sierra Leone. To support recovery in all three countries, UNDP will focus on rebuilding economies, supporting the health sector, promoting peace and stability, and preventing future crises. Our work already involves a diverse range of interventions: making welfare payments to vulnerable communities affected by the disease, supporting small businesses, decentralizing decision-making and early warning systems, preventing conflict and gender-based violence, and eliminating health risks associated with consuming bush meat.
Support the relief efforts
With $50, UNDP provides a month's worth of supplies to an Ebola-affected family of 5.
Early Recovery and Resilience Support Framework: Guinea, Liberia and Sierra Leone
Joint mission Ebola Recovery Assessment: Guinea, Liberia and Sierra Leone
Chronicles the UN's engagement in ensuring the Ebola response workforce was paid during this largest-ever EVD outbreak.
The Very Large Array (VLA) in New Mexico. Image Credit: CC BY-SA 2.0 Hajor
A new focus for SETI could be to look for extraterrestrial transmitters within our own solar system.
The idea stems from the concept that if an intelligent extraterrestrial race were using a probe to study our planet then in order to transmit the data back to their own solar system they would need a means with which to send a signal over long interstellar distances.
In a recent paper, Michael Gillon of the Observatory of Geneva outlined his thoughts on how this could be achieved and how it could represent a new opportunity for us to seek out evidence of ET.
His proposition is based on the concept that an alien transmitter could use the sun's gravitational field as an amplifier to boost the signal, a technique that could also be used in the future to help us communicate with our own interstellar space probes.
Gillon maintains that astronomers could monitor the solar focal regions of neighboring stars within our solar system to look for signs of transmitters as this would be the most likely place to find them. Anomalous radiation readings at one of these locations could be an indicator that there was something there and then a probe could be sent to take a closer look.
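For a rough sense of the distances involved, the minimum focal distance of the Sun's gravitational lens can be estimated from standard physical constants. The back-of-the-envelope sketch below reproduces the roughly 550 AU figure commonly quoted in the gravitational-lensing literature; it is illustrative and not taken from Gillon's paper:

```python
# Physical constants (SI units).
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M = 1.989e30    # mass of the Sun, kg
c = 2.998e8     # speed of light, m/s
R = 6.963e8     # radius of the Sun, m
AU = 1.496e11   # astronomical unit, m

# Light grazing the solar limb is bent by ~4GM/(R c^2) radians and
# converges at a focal distance of roughly d = R^2 c^2 / (4 G M).
d = R**2 * c**2 / (4 * G * M)
print(f"Minimum solar gravitational focus: {d / AU:.0f} AU")  # ~548 AU
```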
Source: Discovery News
Nature, origin, transport and deposition of andosol parent material in south-central Chile (36-42°S)
The andosols of south-central Chile (36-42°S) are developed on yellow-brown loams that cover the region with a thickness of several meters. In the literature, several hypotheses concerning the nature, origin, mode of transport and deposition of the andosol parent material have been advanced, but no general agreement has been found. In this paper, we test these hypotheses by analyzing new representative outcrops located around Icalma (38°50’S) and Puyehue (40°40’S) lakes using a plurimethodological approach. Our data demonstrate that the andosol parent material has the typical mineralogical and geochemical signature of the regional volcanism and that these deposits are postglacial in age. The grain size of the deposits and the morphology of the coarse grains evidence that most of these particles have not been re-transported by wind but are direct volcanic ash falls deposited throughout the Late Glacial and Holocene. Because of the prevailing westerly winds, most of them have been transported to the East. Following the deposition of the volcanic particles, weathering and pedogenetic processes have transformed part of the volcanic glasses and plagioclases into allophane and have wiped out the original layering. This work demonstrates that most of the andosols that occur in the Andes and in the eastern part of the Intermediate Depression of south-central Chile are developed on volcanic ashes directly deposited by successive volcanic eruptions throughout the Late Glacial and Holocene.
Author Posting. © Elsevier B.V., 2007. This is the author's version of the work. It is posted here by permission of Elsevier B.V. for personal use, not for redistribution. The definitive version was published in CATENA 73 (2008): 10-22, doi:10.1016/j.catena.2007.08.003.
Related items (by title, author, creator and subject):
- Tephrostratigraphy of the late glacial and Holocene sediments of Puyehue Lake (Southern Volcanic Zone, Chile, 40°S). Bertrand, Sebastien; Castiaux, Julie; Juvigne, Etienne (2008-05-29). We document the mineralogical and geochemical composition of tephra layers identified in the late Quaternary sediments of Puyehue Lake (Southern Volcanic Zone of the Andes, Chile, 40°S) to identify the source volcanoes ...
- Stratus 13: thirteenth setting of the Stratus Ocean Reference Station; cruise on board RV Ron Brown, February 25 - March 15, 2014, Valparaiso, Chile - Arica, Chile. Bigorre, Sebastien P.; Weller, Robert A.; Lord, Jeffrey; Galbraith, Nancy R.; Whelan, Sean P.; Coleman, James; Contreras, Marcela Pas; Aguilera, Cristobal (Woods Hole Oceanographic Institution, 2014-07). The Ocean Reference Station at 20°S, 85°W under the stratus clouds west of northern Chile is being maintained to provide ongoing climate-quality records of surface meteorology, air-sea fluxes of heat, freshwater, and ...
- Post-seismic viscoelastic deformation and stress transfer after the 1960 M9.5 Valdivia, Chile earthquake: effects on the 2010 M8.8 Maule, Chile earthquake. Ding, Min; Lin, Jian (Oxford University Press, 2014-03-04). After the 1960 M9.5 Valdivia, Chile earthquake, three types of geodetic observations were made during four time periods at nearby locations. These post-seismic observations were previously explained by post-seismic afterslip ...
Our tools and guides pages provide a wealth of professional development resources and activities for teachers, initial teacher educators and all those involved and interested in various forms of CPD.
- Advice for exploring controversial topics in a constructive environment
- Images and artefacts: ideas for promoting positive images of people and places
- Teaching about distant localities: enriching understanding of teaching about localities in the global South
- Global citizenship guides: our popular range, packed with practical activities, case studies and ideas
- Educational support for fundraising: ensure pupils reap the benefits from their fundraising activities
The Atlantic salmon is sometimes called the “king of fish” for its streamlined and powerful beauty. Members of the species undertake an epic journey to complete their life cycle, migrating from freshwater rivers to feeding grounds in the North Atlantic Ocean, and then returning to natal streams to spawn. But since the 18th century, Atlantic salmon populations have declined dramatically throughout most of their East Coast range due mainly to dams, water withdrawals, river pollution and sedimentation, nonnative fish, and excessive fishing. Atlantic salmon are a potent symbol of the need to restore clean, unspoiled waters that run wild to the sea.
Five years after the Gulf of Maine Atlantic salmon population was listed as endangered in 2000, the U.S. Fish and Wildlife Service and National Marine Fisheries Service published a recovery plan for the species — but still designated no federally protected habitat for the fish. To remedy this, the next year the Center and the Conservation Law Foundation of New England filed suit, and in response, in September 2008 the agencies proposed to designate 12,000 miles of river and 300 square miles of lakes as critical habitat. In the same month, in response to a 2005 petition by Center allies and a 2008 lawsuit by the Center, Friends of Merrymeeting Bay, and a Maine river activist, the agencies proposed to extend the Gulf of Maine Atlantic salmon’s endangered status to include salmon in Maine’s Kennebec, Androscoggin, and Penobscot rivers. This proposal was made final in 2009, when the agencies also granted the fish additional critical habitat protections in about 12,000 miles of rivers and estuaries and 300 square miles of lakes.
We’ve also focused on saving Atlantic salmon from river-polluting pesticides. In 2004, we published Silent Spring Revisited: Pesticide Use and Endangered Species, a comprehensive report discussing pesticides impacts on endangered species, including wild Atlantic salmon in Maine.
2008 Critical habitat proposal
2008 Proposal to extend endangered status
2006 Center lawsuit to force critical habitat designation
2005 Center report: Silent Spring Revisited: Pesticide Use and Endangered Species
2000 Federal Endangered Species Act listing
Contact: Jeff Miller
Many IT departments may be wasting valuable resources by using outdated approaches to data center cooling, said James Hamilton, VP and Distinguished Engineer at Amazon Web Services, in a talk he gave at the recent Usenix Conference in San Diego.
Hamilton said the first step is to provide tighter control of airflow in the data center. He suggests connecting server racks to a container system that seals airflow into and out of the rack, providing better control over air temperature. When attached to a sealed enclosure, little air is lost or mixed. Variable-speed fans inside the container, together with tight feedback between the computer room air conditioner (CRAC) and the heat load, enable the system to eliminate inefficiencies in the server fans. (Read more on data center cooling containment systems.)
Hamilton also noted that many data centers waste energy by keeping servers cooler than necessary. According to the American Society of Heating, Refrigeration and Air Conditioning Engineers (ASHRAE), the recommended inlet air temperature for servers is 81 degrees Fahrenheit. The ASHRAE specification sets an allowable temperature of 90 degrees. Hamilton noted that some Dell servers are warranted for 95 degrees, while telco equipment and some servers can run at 104 degrees.
While these approaches can save energy, there are some risks to raising inlet air temperatures.
Because many systems can run at higher temperatures without problems, Hamilton suggested using outside air to cool computing equipment, with air conditioning available as needed. This cooling strategy is called an air-side economizer.
He noted that there may be issues with particulates, especially if a generator is being run near the air input or there is a fire nearby. To reduce these issues, he notes that filtration and other techniques can prepare the air to be pumped through the data center. (Read more about data center air-side and water-side economizers.)
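The control logic behind an air-side economizer can be sketched in a few lines. This is a simplified illustration: the 81-degree threshold comes from the ASHRAE figure quoted above, while real controllers also weigh humidity, particulates and return-air mixing:

```python
def cooling_mode(outside_temp_f, max_inlet_f=81.0):
    """Choose a cooling strategy from the outside air temperature (deg F)."""
    if outside_temp_f <= max_inlet_f:
        # Outside air is cool enough: filter it and duct it to server inlets.
        return "free cooling (air-side economizer)"
    # Otherwise fall back to mechanical cooling via the CRAC units.
    return "mechanical cooling (CRAC)"

for temp in (55, 78, 95):
    print(f"{temp} F -> {cooling_mode(temp)}")
```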
The Planck space telescope has delivered the most detailed picture yet of the cosmic microwave background, the residual glow of the Big Bang.
Scientists unveiling the results from the €600 million European Space Agency (ESA) probe said that they shed fresh light on the first instants of our Universe’s birth. They also peg the age of the Universe at 13.81 billion years — slightly older than previously estimated.
“For cosmologists, this map is a goldmine of information,” says George Efstathiou, director of the Kavli Institute for Cosmology at the University of Cambridge, UK, one of Planck’s lead researchers.
Planck’s results strongly support the idea that in the first 10⁻³² seconds or so after the Big Bang, the Universe expanded at a staggering rate — a process dubbed inflation.
Inflation would explain why the Universe is so big, and why we cannot detect any curvature in the fabric of space (other than the tiny indentations caused by massive objects like black holes). The sudden ballooning of the primordial Universe also amplified quantum fluctuations into clumps of matter that later seeded the first stars, and eventually the straggly superclusters of galaxies that span hundreds of millions of light years.
The cosmic microwave background (CMB) radiation studied by Planck dates from about 380,000 years after the Big Bang, when the Universe had cooled to a few thousand degrees and neutral atoms of hydrogen and helium began to form from the seething mass of charged plasma. That allowed photons to travel unimpeded through space, in a pattern that carried the echoes of inflation. Those photons are still out there today, as a dim glow of microwaves with a temperature of just 2.7 K.
Since the CMB was first detected in 1965, the Cosmic Background Explorer (COBE) and the Wilkinson Microwave Anisotropy Probe (WMAP) have mapped the tiny temperature variations in the CMB with ever more precision. This has enabled cosmologists to work out when the Big Bang happened, estimate the amount of unseen dark matter in the cosmos, and measure the ‘dark energy’ that is accelerating the expansion of the Universe.
Planck, launched in 2009, is more than three times more sensitive than its predecessor WMAP. Its high-frequency microwave detector is cooled to just 0.1 degrees above absolute zero. That enables it to detect variations in the temperature of the CMB as small as a millionth of a degree.
These precision measurements show that the Universe is expanding slightly slower than estimated by WMAP. That rate, known as the Hubble constant, is 67.3 kilometers per second per megaparsec, which suggests that the Universe is about 80 million years older than WMAP had calculated.
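As a quick plausibility check on those numbers, the reciprocal of the Hubble constant gives a characteristic expansion timescale of the same order as the quoted age. This naive estimate ignores the changing expansion rate over cosmic history; the precise 13.81-billion-year figure comes from fitting a full cosmological model:

```python
H0 = 67.3               # Hubble constant quoted above, km/s/Mpc
KM_PER_MPC = 3.086e19   # kilometers in one megaparsec
SEC_PER_GYR = 3.156e16  # seconds in one billion years

# Hubble time: how long the Universe would have expanded at today's rate.
hubble_time_gyr = (KM_PER_MPC / H0) / SEC_PER_GYR
print(f"1/H0 = {hubble_time_gyr:.1f} billion years")  # ~14.5 Gyr
```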
The new measurements also mean that dark energy makes up 68.3% of the energy density of the Universe, a slightly smaller proportion than WMAP had estimated. The contribution of dark matter swells from 22.7% to 26.8%, leaving normal matter making up less than 5%.
Planck also confirmed some oddities earlier noted by WMAP. The simplest models of inflation predict that fluctuations in the CMB should look the same all over the sky. But WMAP has found, and Planck confirmed, an asymmetry between opposite hemispheres of the sky, as well as a ‘cold spot’ that covers a large area. The asymmetry “defines a preferred direction in space, which is an extremely strange result,” says Efstathiou. This rules out some models of inflation, but does not undermine the idea itself, he adds. It does, however, raise tantalizing hints that there may yet be new physics to be discovered in Planck’s data.
So far, the team have analyzed about 15.5 months of data, and “we have about as much again to look at”, says Efstathiou. The team expects to release their next tranche of data in early 2014.
Plants of the Pacific Northwest Coast / Edition 2, by Jim Pojar and Andy MacKinnon
Pub. Date: 12/06/2004
Publisher: Lone Pine Publishing
This easy-to-use field guide features 794 species of plants commonly found along the Pacific coast from Oregon to Alaska, including trees, shrubs, wildflowers, aquatic plants, grasses, ferns, mosses and lichens. PLANTS OF THE PACIFIC NORTHWEST COAST covers the coastal region from shoreline to alpine, including the western Cascades. Includes:
• 1100 color photographs
• More than 1000 line drawings and silhouettes
• Clear species descriptions and keys to groups
• Descriptions of each plant's habitat and range
• 794 new color range maps.
• Rich and engaging notes on each species describe aboriginal and other local uses of plants for food, medicine and implements, along with unique characteristics of the plants and the origins of their names. For both amateurs and professionals, this is the best, most accessible, most up-to-date guide of its kind.
I have the 1st edition of this book, 1994. It has served me well for studying and referencing at home as well as in the field. It's about 5"x8"x1", has a very durable soft-sided waterproof cover with round corners, and waterproof pages throughout. It still looks pretty new despite regular use for about 10 years. The photos are clear and the descriptions explicit, very detailed, and rarely ambiguous. There are interesting historical facts about plant uses by indigenous and emigrant peoples and general folklore about the plants sprinkled here and there as well, making it a less dry read than some reference books.
This guide, while by no means comprehensive, vividly describes many of the species common to the Pacific Northwest Coast. The combination of color photography and line drawings make this book ideal for those looking for an easy-to-use guide to common plants of this region. The pages and binding are durable- perfect for use outdoors. As a forestry instructor in Oregon, this is one of the required texts for students in both my dendrology and plant identification classes. Although I sometimes wish the dichotomous keys were more detailed and technical, my students appreciate the fact that the book is written in a language easy for the lay person to understand. They also really enjoy the notes section on each species. Details such as historical uses of a species and other unique items of interest often help to make these plants more memorable for my students. Jim Pojar and Andy MacKinnon have done an excellent job compiling a non-technical field guide that can be enjoyed by naturalists interested in becoming acquainted with Pacific Northwest plants.
It has been very interesting and helpful as we process the plants on our property.
Charter schools are free public schools that are open to all students; students are admitted through a random lottery. By definition, a charter school offers an alternative to your zoned school. The charter movement is quickly gaining momentum, with more than two million students in 6,000 schools nationwide. At Success Academy, it's also a chance to do things differently: to provide an exceptional education for your child.
The battery is the power source of the electric bicycle. There are different kinds of batteries which have different performance characteristics and different costs to own and to operate.
You will notice the battery easily on most bicycles. On some e-bikes, the battery is ridiculously big in proportion to the bicycle itself, while on others the battery is held inside the frame (often referred to as a monocoque frame).
Modern batteries have a built in battery management system (BMS) which is responsible for diagnosing and monitoring the battery charging and on-going operation. The BMS works in synchronization with the charger and the controller. In some systems, the BMS is located inside the controller.
There are usually two fuses in each battery. One protects the battery in the charging process and the other one protects the battery during operation.
The battery pack comprises multiple cells connected to each other either in series, to increase output voltage, or in parallel, to increase output current and capacity.
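The series/parallel arithmetic is straightforward; the sketch below uses typical lithium-ion cell figures, assumed purely for illustration:

```python
CELL_VOLTAGE = 3.6    # volts, nominal lithium-ion cell (assumed)
CELL_CAPACITY = 2.5   # amp-hours per cell (assumed)

def pack_specs(series, parallel):
    """Cells in series add voltage; parallel strings add capacity."""
    voltage = series * CELL_VOLTAGE
    capacity = parallel * CELL_CAPACITY
    return voltage, capacity, voltage * capacity  # V, Ah, Wh

volts, amp_hours, watt_hours = pack_specs(series=10, parallel=4)  # a "10s4p" pack
print(f"{volts:.0f} V, {amp_hours:.0f} Ah, {watt_hours:.0f} Wh")  # 36 V, 10 Ah, 360 Wh
```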
When it comes to choosing the right electric bicycle battery, you need to consider the following criteria:
Having a source of power is great. It means that you can share the cycling effort together with the electric system. But it's also limited. The battery has a finite capacity per charge and also a finite lifespan. No matter which kind of battery we are dealing with, it would eventually be fully consumed, some faster and some slower.
Obviously, you would prefer a battery which is as light as possible. You don't want to carry a brick with you, especially when the battery power has run out!
The price is highly correlated with the battery type (weight), capacity and lifespan, as well as other factors.
When the battery has run its course and it's time to throw it away, you should go to the nearest supermarket, library or school and look for a collection box for batteries. Nowadays, many countries offer recycling programs, so it shouldn't be a big problem to locate a collection point like this.
Different types of batteries have different recycling procedures. Lead acid batteries, which are also used in cars, are usually dealt with by the car companies, so if you have this type of battery, you can hand it to them. Other companies are responsible for handling other types of batteries.
Here is an example of a battery recycling company from the UK: www.scrapbatts.co.uk
To read further about the environmental aspects of the batteries (and also of the e-bikes themselves) in comparison to other means of transportation, click here.
If you want to learn more about batteries, there are a couple of great resources on the internet. The following websites are very detailed and comprehensive. (Enjoy!)
PASADENA, CA – The Coca-Cola-sponsored Real Rover has discovered evidence that the surface of Mars was once partially covered by free-flowing Dasani, scientists at NASA's Jet Propulsion Laboratory announced Monday.
"The Real Rover's instruments found signs that cool, refreshing Dasani once drenched the surface of the Red Planet," said Dr. Marvin Chen, NASA space-science administrator and temporary liaison to Coca-Cola. "This discovery is so exciting, because it indicates that the Red Planet may have once hosted a healthy, active, fun-filled microscopic life. You see, Dasani would have been as vital to Martian lifeforms as it is to their terrestrial counterparts."
The Real Rover's March 19 launch marked the culmination of a two-year project designed by NASA and funded in part by a $400 million grant from the Coca-Cola corporation.
The logo-covered rover touched down Sunday, landing inside a crater newly christened Lymoni Spritenum. The rover then used its abrasion tool to grind below the surface, where it located cracks filled with several types of gray hematites – minerals known to form only in the presence of Dasani.
"It's true that pure, delicious Dasani is one of the most common compounds in the universe," Chen said. "But the abundant mineral deposits in the rocks indicate that the cool, life-enriching Dasani was indigenous to Mars, rather than the frozen Dasani core of a comet that collided with the planet."
Further study of the data will be necessary to determine whether the minerals formed as sedimentary deposits from standing surface pools of Dasani, or accumulated through the action of flowing ground-Dasani.
"Dasani comes in many forms," Chen said. "On Earth, we find it in servings as small as four ounces or as large as a 48-liter multi-pack. The first stows easily in your purse, and the latter is the life of the party. In between, there are other sizes perfect for a gym bag, a car's cupholder, or a child's lunch bag. Similarly, Dasani could have existed on Mars in various forms, like ice or vapor, and in many convenient locations, such as Martian oceans or the craters dotting the planet's surface."
Chen said scientists hope to confirm that icy Dasani exists at the southern pole of Mars, as recent spectral images from the European Space Agency's Mars Express Orbiter suggest.
"In the coming days, we'll be moving the Real Rover in the direction of the possible polar Dasani caps," Chen said. "As we continue to explore Mars, we hope to find Dasani distributed everywhere."
NASA geologist Matt Golombek, who chose the landing sites for the rovers, said confirming that Dasani exists on Mars would be a boon for the scientific community.
"Finding a source of waterer, Dasaniwould mean future manned missions to Mars would not need to bring tanks of it with them," Golombek said. "Although establishing manned bases on Mars is still a far-future scenario, the existence of Dasani would make such a plan theoretically possible. Also, knowing that the liquid is there would likely lead to more sponsored exploration on the Red Planet and an eventual bottling plant."
Golombek said he is excited to continue the work of analyzing the data collected by the Real Rover.
"Understanding liquid... Dasani's role on the Martian surface is crucial," Golombek said. "Now that we've established that this life-giving substance was once... I'm supposed to say 'available solar-system-wide'... we can begin to consider whether life once existed on Mars, and if it did, what disaster befell the planet to eliminate it."
"Not that running out of Dasani isn't disastrous enough!" Chen interjected. "One fact is clear: Life on Mars was a lot more probable when abundant Dasani was present, just as life is more enjoyable on Earth when you've got Dasani. If you don't want to be dry and lifeless yourself, stock up on cool, refreshing Dasani bottled water."
Definition of Open cholecystectomy
Open cholecystectomy: Surgery in which the abdomen is opened to permit cholecystectomy -- removal of the gallbladder.
This operation has been employed for over 100 years and is a safe and effective method for treating symptomatic gallstones, ones that are causing significant symptoms. At surgery, direct visualization and palpation of the gallbladder, bile duct, cystic duct, and blood vessels allow safe and accurate dissection and removal of the gallbladder. Intra-operative cholangiography has been variably used as an adjunct to this operation. The rate of common bile duct exploration for choledocholithiasis (gallstones in the bile duct) varies from 3% in series of patients having elective operations to 21% in series that include all patients. Major complications of open cholecystectomy are infrequent and include common duct injury, bleeding, biloma, and infections.
Open cholecystectomy is the standard against which other treatments must be compared and remains a safe surgical alternative. See also: Laparoscopic cholecystectomy.
Aug. 29, 2014 | By Alec
Producing actual edible and long-lasting food is still one of the greatest challenges in the world of 3D printing, even though that technology has fascinated scientists since the replicator was first featured in Star Trek. Some excellent initiatives have been made, but the various ingredients and the cooking phase still remain difficult hurdles for engineers to overcome. Finally, to make a 3D food printer commercially viable, the printed meals need to be at least somewhat tasty and recognizable to the public.
While a replicator-like kitchen device is therefore still a long way away, important steps in that direction have been made by four undergraduate students in London for a final project. These four mechanical engineering students (Hillel Baderman, Jacob Watfa, Francis Nwobu, and James Clarke) from Imperial College London have utilized and modified existing 3D printing technology to construct the F3D printer – pronounced as 'fed' – a 3D printer capable of printing one of the tastiest and most recognizable meals we know: the pizza.
As the photographs illustrate, this doesn't yet spell disaster for the pizzeria industry. The completed F3D printer can only combine three different products to make its pizza: Masa Harina dough, tomato puree and cream cheese. It can however, print the ingredients out of three different extrusion points and cook this culinary creation within 20 minutes. And, in another test, it was able to print and bake its own name in cookies. (See the video) Finally, and equally impressive, the entire F3D printer was built with a budget of just £1,200 (less than $2000).
See the F3D printer bake up a batch of original cookies:
Even this relatively simple dish is therefore a ground-breaking development in the field of food printing. The four creators have also published a detailed overview of their production process online, and so it's only a matter of time before others (perhaps large companies) start building on their pizza dish design. As the four pioneering students explain in their paper, their main objective was to 'design, manufacture and test a 3d printer capable of both printing and cooking food rather than traditional thermoplastics. By doing this it would be possible to evaluate the feasibility of 3D food printing. A secondary objective was to investigate several food sources and evaluate their performance when printed.'
And this they have done spectacularly. To realise their F3D, the four students modified existing RepRap 3D printing technology. Just like most 3D printers, the F3D can print STL files that have been generated with CAD software. The printer itself is controlled with the DUET and DUEX4 bundle, which can be programmed to handle up to five different paste extruders. The students also tackled some common problems in food printing, such as the extrusion points, the suitability of the food and the cooking system.
Having decided to produce a printer capable of operating three extruders, the four students explored a variety of options before settling on a combination of Richard Horne's Universal Paste Extruder, Hod Lipson's Fab@Home paste extrusion system and Thingiverse user keesj's Simple Paste Extruder. This was partly the result of the confines of their budget and project. As they explained in their paper:
The alternative to this, utilising an air-pressure-driven syringe controlled by a series of solenoid valves, has a number of advantages including its compact nature, and greater flexibility. However, standard 3D printing software is written with a mechanical setup in mind, and to alter this would have been time consuming. Furthermore, a little investigation implies that an air pressure system may also lie outside of the budget constraints for the project. Thus, a mechanically driven system has been chosen.
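To see why a mechanically driven syringe needs slow, finely controlled motion, consider the volumetric arithmetic. The syringe and nozzle dimensions below are assumptions for illustration, not values from the students' paper:

```python
# Assumed geometry, for illustration only.
SYRINGE_BORE = 25.0   # mm, hypothetical paste syringe diameter
NOZZLE_BORE = 1.5     # mm, hypothetical nozzle diameter

# Paste is incompressible, so plunger travel scales with the ratio of
# nozzle area to syringe bore area.
area_ratio = (NOZZLE_BORE / SYRINGE_BORE) ** 2
travel_per_mm_of_bead = area_ratio * 1.0  # mm of plunger motion per mm extruded
print(f"Plunger travel per mm of extrudate: {travel_per_mm_of_bead:.4f} mm")
# ~0.0036 mm: the drive must resolve very small, steady movements.
```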
With this mechanically driven setup, they were able to print three different ingredients for a single meal, though this also caused some problems during the development process. Frustratingly, many foods are not currently available as edible and tasty pastes. As they commented in their paper, 'In conclusion, a major difficulty with printing food is the need for it to be in a paste. Currently, foods are not readily available in this form and therefore there is a clear need for development in food processing to support the growth of 3D food printing.' They did, for instance, experiment with making an edible meat paste out of ground lean beef, but the results were not suitable for extrusion and they had to settle for the three remaining ingredients.
But unlike the Foodini 3D food printer, for instance, the F3D also incorporated a cooking system for their pizza. They settled for a system very similar to 1200 to 1400W halogen ovens, which are flexible and compact, and have a very short heat-up and cooking time. It's potency also meant no enclosure design was needed for this model.
While this ground-breaking device does not yet make printed food a feasible reality, the four students from London have nonetheless made an impressive and essential contribution to that process. And as they argue in their paper, an affordable 3D printer capable of printing food could very well tackle a multitude of problems the world's population is currently facing:
3D printing food in the home may permit the individual tailoring of meals to dietary requirements, personal tastes and nutritional needs. The rehydration and printing of long shelf-life, powdered food - essentially eliminating food spoilage - could provide the tools to tackle hunger in the third world. Furthermore, increasing global populations could lead to an increased demand for food. Sustainable, nutritious, alternative foodstuffs such as high protein insect pastes may be the solution to this increased demand, but it may only be by incorporating these into familiar dishes that their perception is changed and their widespread acceptance is seen. In the era of the high-tech home, and in a society where floor space is fast becoming a premium, it is even feasible to see food printers becoming a low cost, low space solution to nutritional needs.
There is a boom in renewable energy sources coming online worldwide, but the predominant types – solar and wind – are problematic due to their variable nature. For most regions of the world, the sun cannot be expected to shine nor the wind blow when required.
What is needed is a way to capture that energy when available, perhaps in the middle of the night, when demand is low, and then store it until it can be used when demand rises. But this is not a trivial problem to solve.
According to the European Wind Energy Association, at the end of 2013, the UK had 10.5GW of wind turbine capacity installed, with more in planning and construction. As the percentage of energy generated from renewables increases, the intermittency problem becomes more acute, as has been seen in countries like Germany or Ireland.
Germany, the country with the highest renewable capacity in Europe, has faced major technical problems due to the intermittency of renewable energy. The main issue is maintaining sufficient supply in the face of fluctuating levels of wind or sunshine. Back-up supply in the form of conventional power plants is required to meet demand. But as different types of power plant take time to come online – 48 hours for nuclear, 12 hours for coal-fired, down to a few hours for modern gas power plants, or ten seconds for the water released from a dam to start the turbines – having a back-up always available means having power plants running most of the time, which is inefficient and expensive.
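Those start-up times make the scheduling problem concrete. The toy filter below uses the figures quoted above (with "a few hours" for gas taken as three, an assumption) to show which back-up sources could respond within a given lead time; real dispatch also weighs cost, capacity and grid constraints:

```python
# Start-up times from the figures above, in hours.
STARTUP_HOURS = {
    "nuclear": 48.0,
    "coal-fired": 12.0,
    "modern gas": 3.0,                     # "a few hours", assumed 3 here
    "hydro (dam release)": 10.0 / 3600.0,  # ten seconds
}

def responsive_plants(lead_time_hours):
    """Back-up sources that can come online within the lead time."""
    return [name for name, hours in STARTUP_HOURS.items()
            if hours <= lead_time_hours]

print(responsive_plants(4.0))  # ['modern gas', 'hydro (dam release)']
```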
Another problem is integrating renewable energy supplies into the high-voltage electricity grid. In Ireland, for example, given that there is more wind at night, when many businesses' usage is low, a significant percentage of the energy produced may have been dumped because the electricity produced cannot easily be transmitted across the grid.
So a means of storing energy is a vital part of any future energy system that includes a substantial amount of variable and uncontrollable renewable energy. Energy storage provides flexibility and reduces the need to rely on fossil fuel back-up power.
Lots of storage variety
Current energy storage technologies in use include direct electrical storage in batteries, thermal storage as hot water or in the fabric of buildings, using compressed air energy storage, or chemical storage (hydrogen). But identifying which approach is best is complex.
The right storage mix has to match the nature of the renewable energy source, the demands placed on the power grid, and the physical nature of the landscape and geology, as well as political and public opinion. A lack of clear vision for energy storage means governments fail to adopt a joined-up approach.
…but too little in use
So while wind turbines and solar panels are blossoming under government programmes to support them, few governments recognise the importance of storage as the missing piece. In Japan, for instance, 15% of supplied electricity has been cycled through a storage facility, whereas in Europe it is closer to 10%, with Germany being the leading nation.
In Germany, the authorities have opted for pumped hydro storage as an energy storage solution and have built a regulatory framework around it. Current UK energy storage deployment consists of pumped hydro (3,000MW), batteries (10MW) and liquid air (0.3MW). The UK's geography restricts the possibility for pumped hydro despite its appeal, and the same goes for compressed air storage. But the UK government's current technology-neutral view runs the risk that some of the alternative technologies not yet ready for deployment will not have the support they need to develop in time to address the challenges they are required to meet. So the UK's solutions could be limited to hydrogen storage, batteries, or liquid air.
Follow Japan’s lead
An analysis in 2012 indicated storage would have the greatest effect when deployed closest to demand. There may be a case for installing energy storage at the building level, in blocks of flats or residential areas, industrial estates or commercial districts – far more widely than it is at present. In Japan, for example, sodium-sulfur batteries have been installed widely and successfully at the electricity distribution level, something that could be replicated in the UK. The Japanese government has also provided support for installing residential fuel cells in order to drive down prices. Between 2004 and 2008 prices dropped by 73%, and the installed base is increasing year on year – Japanese investment that the UK and others could capitalise on. Indeed, more international collaboration is needed.
With many storage technologies and varying applications, there is no single perfect storage technology that will suit all. But if we are to hang our low-carbon future on renewables like wind and solar, then governments need to focus on supporting industry to develop energy storage tech – or risk fossil fuel dependence for many decades hence.
On a pleasant, cool day, July 4th, without much fanfare, the Continental Congress, again meeting behind closed doors, voted* on the wording of "The unanimous Declaration of the thirteen United States of America." This document, mainly written by Thomas Jefferson, set forth the reasons that impelled the colonies to separate. The case for this action was that the equality of men gave them certain unalienable rights of life, liberty, and the pursuit of happiness. Twenty-seven abuses of these rights by Great Britain were enumerated. Among the abuses specified were the "quartering of large bodies of armed troops" and "imposing taxes without consent." John Hancock signed the document.
The first authorized printing of the Declaration of Independence appeared in Philadelphia on July 6. As the document reached the colonies, there were ringings of bells, bonfires and other celebrations. While the Congress was now charting the course for a new country and its war with Great Britain, the delegates signed the document on August 6.
In Harrisonburg, July 4, 2011, a ten-year tradition of celebration will include a parade, food booths, family-fun activities, and, of course, fireworks. The celebration begins at noon in front of the Court House with the reading of the Declaration of Independence and ends with nighttime fireworks.
*The New York Delegation abstained on this vote. Several delegates who opposed separation absented themselves during the voting so their colony would vote in favor of the action.
The Sally Lightfoot Crab is also known as the Nimble Spray, Short, or Urchin Crab. It has a brown body, with orange to yellow rings on the legs. It actually belongs to a family of shore crabs; however, it is less likely than the other genera to go onto land. Its carapace is very flat, which allows it to hide in small crevices within rock work.
It prefers a strong current and will require a large aquarium with large amounts of rock work where it can hide and scavenge for detritus. It will also eat algae. When large, it can become aggressive and catch and eat small invertebrates and fish.
If insufficient algae is present, its diet may need to be supplemented with dried seaweed. Meaty items should also be offered.
Approximate Purchase Size: 1" to 2-1/2"
This book provides a characterisation of the sound system of the Tsishaath Nootka language as spoken in the vicinity of Port Alberni, British Columbia, Canada. As such, it is the first book to provide a detailed description of the phonetic and phonological systems of any member of the Wakashan family of languages.

The book has been written with several groups of readers in mind. For those interested in issues of phonological theory, Tsishaath Nootka provides much of interest, including the nature of variable-length vowels, the processes of glottalisation and lenition, the transformation of sounds encountered in special speech forms, the rules for stress placement, the status of the foot, and various types of coalescence and deletion. For comparative linguists and typologists in particular, the book offers a useful description of a little-studied language and language family. Finally, it provides teachers and students of linguistics with a richness of data for discussion in classes on phonetics and phonology, following a progression in the exposition similar to that followed in the field in analysing the sound system of an unknown language.

John Stonham's previous research in this area includes his book, Combinatorial Morphology, and both theoretical and descriptive papers on Nootka and the closely related Ditidaht. John Stonham is currently Lecturer in Linguistics at the University of Hong Kong.
Asked by Tripp Jones, Coral Gables, Florida
I have heard that a hiatal hernia can mimic more serious medical conditions, such as a heart attack or make it difficult to breathe. What symptoms could a hiatal hernia produce that would seem to come from more serious other medical conditions?
Dr. Otis Brawley
Chief Medical Officer,
American Cancer Society
I would separate a hiatal hernia from the condition called reflux, gastroesophageal reflux disease or GERD. Some people have bad reflux without a hiatal hernia; some have a hiatal hernia with no symptoms; and some have a hiatal hernia with reflux symptoms. Reflux is when the acidic fluid of the stomach is pushed up the esophagus. It can travel as far as the mouth, causing a bad taste and burning. It can also be aspirated, or accidentally inhaled, into the lungs, leading to burning and shortness of breath. More commonly, symptoms include burning and pain in the esophagus, throat and mouth. The chest pain can be mistaken for a heart attack or a ruptured esophagus.
Mild GERD is treated by eating well before bedtime and by avoiding certain foods that increase the risk of GERD, such as spicy food and coffee. More severe GERD is treated with antacids or drugs to neutralize stomach acid, and still more severe cases with medications designed to decrease acid production. In extreme cases, surgery can be done to reduce GERD.
Faster, greener and more economical roads are essential to our future road networks. Frances Penwill-Cook finds out what changes to expect in road infrastructure and technology as we head towards 2020.
The consequences of overpopulation, climate change and the economic crisis have all taken their toll on our road networks. This has meant that decisions from road authorities surrounding the development of road infrastructure and technology are increasingly concerned with sustainability (both environmental and economic), safety and mobility in both advanced and developing countries.
Financing road development
Although there are many road projects requiring development all over the world, budget deficits and spending cuts stemming from the global financial crisis have put investment under intense pressure.
"The private finance market has tightened and governments, initially supporting their economies with stimulus programmes, are now facing growing fiscal constraints," explains a spokesperson for the International Road Federation. Despite this, the IRF says that investment funds are slowly filling up and there is new willingness to invest in economically viable road infrastructure projects, many of these with green procurement requirements.
"Green procurement requirements for road infrastructure projects are in place in several countries (including Austria, the UK and the Netherlands) and are challenging the road industry to invest in clean technologies and materials and to reduce its carbon footprint," according to the IRF.
The infrastructure challenge
According to Frances Harrison, chief technical officer at Spy Pond Partners and a member of the Transportation Research Board's Information Systems and Technology Committee, aging infrastructure is the main challenge for most developed road networks and is where most investment could be directed.
"Given aging infrastructure, the current fiscal picture and current environmental concerns, major focus areas are rehabilitation and the replacement of existing roads and bridges," she says, referring to the I-35 bridge failure in Minnesota and closure of the Lake Champlain Bridge connecting New York and Vermont as a recent reminder of the implications of aging infrastructure.
The two roadways on this bridge were reduced to one in October 2009 due to structural problems that could have led to a collapse. The bridge was used by 3,400 drivers a day, but it was demolished shortly after closure to make way for a new bridge structure.
"On the rehab / replacement front, there are a number of notable large projects planned or in progress," says Harrison, giving California's eastern span of the Oakland Bay Bridge, Washington's Alaskan Way Viaduct and Missouri's "safe and sound" bridge programme as some examples of those that have managed to secure the required investment and are proceeding with development.
Developing countries' crisis
For road investment in developing countries, the Asian Development Bank (ADB) has established three strategic agendas to guide its work up to 2020: economic growth, sustainable growth and regional integration – and the sustainable transport initiative (STI) released in May 2010 states that better transport is common to each agenda.
From 2005 to 2008, ADB projects provided 1,400km of expressways and 39,100km of national highways and provincial, district, and rural roads, benefiting an estimated 422 million people through its STI.
The STI defines a sustainable transport system as one that is accessible, safe, environmentally friendly and affordable. Over the past few years ADB investment and lending into transport projects has gradually increased, and $2.3bn was invested in 22 projects in 2009.
So what road networks can we see developing with ADB investment between now and 2020?
"Generally, the road projects, from national trunk roads to tertiary rural roads, share over 70% of ADB investment to the transport sector in ADB developing member countries (DMCs)," says Hiroaki Yamaguchi, a principal transport specialist at the ADB. "This reflects the countries' emphasis on road development, and will continue to keep a major share of ADB transport sector investments."
Although lending to urban transport has been limited, it is expected to increase in the future due to demand from DMCs for rapid bus systems and improvements in road safety. Overall, the ADB are paying attention to social inclusiveness, road safety, environmental and financial sustainability, and regional cooperation and integration.
A global congestion crisis
A major problem common to road networks in both advanced and developing countries is congestion. According to the Research and Innovative Technology Administration (RITA), part of the US Department of Transportation, in 2009 a traffic accident occurred every five seconds in the US – and, according to the ADB, out of an estimated 1.18 million deaths and injuries caused by road accidents each year around 60% occur in Asia. These high accident rates reflect increasing traffic, gridlocked traffic flow and the failure of our road infrastructure to cope.
Technology to increase traffic flow
The advancement of real-time technology and intelligent transport systems provides a possible solution to these ever-increasing problems. For Hazar Dib, assistant professor at Purdue University and Transportation Research Board member, technology must deal with the problem of congestion to have any chance at success.
"I believe any new technology, for it to be successful, has to deal with the congestion on the roads," says Dib, who believes that cars can't just be replaced with public transport and that the human relationship with cars must be researched and understood.
"Cars are not just a method of transportation from point A to B, the question is why people drive," he says.
"To solve the issue of congestion on the road you need to offer information to the drivers and smart cars in order to allow for smart decisions and optimum choice of optimum route via 'communication technologies'," he explains. In terms of these communication technologies, Dib believes "remote sensing coupled with information management" will enable the driver to determine the least congested and best route based on driver preference.
Vehicle-to-vehicle and roadway-to-vehicle
In terms of what technology is anticipated to be successful, Frances Harrison explains that transport departments are looking for ways to incorporate technology that will help facilities last longer and be easier to maintain. Harrison believes the Intellidrive research programme is the "one to watch".
This will replace the existing vehicle infrastructure integration (VII) programme in the US to provide a national communications infrastructure for vehicle-to-vehicle and roadway-to-vehicle information exchanges. The Intellidrive programme aims to provide greater safety through a significant reduction in crashes (crashes are the leading cause of death for people aged between three and 34 in the US), increased mobility (US highway users wasted 4.2 billion hours a year stuck in traffic in 2007 – nearly one full work week for every traveller) and a reduced environmental impact (fuel wasted in traffic congestion topped 2.8 billion gallons in 2007 – three weeks' worth for every traveller).
Christoph Stiller, editor of the IEEE (Institute of Electrical and Electronics Engineers) ITS magazine, believes it is vehicle-to-vehicle communications that will come first and lead the way for vehicle communication with infrastructure.
"This technology will in turn drive investments in communication on the infrastructure side of things – with traffic lights, roadside sensors, and traffic signs at larger intersections," he says, explaining how signals to drivers about traffic light timings and priority drivers will regulate traffic flow and reduce accidents.
"The IEEE is fostering standardisation of such messages in order to gain the maximum efficiency and safety benefits on an international level for vehicles from all manufacturers," Stiller adds.
The hills ahead
If communication technologies are to have the impact on our road networks that they have the potential to, then standardisation is a key issue to overcome, says James P Hall, associate professor in the department of management information systems at the University of Illinois Springfield and TRB member. He believes there is a range of problems relating to the human element, connected to "psychological factors of more reckless driving" as reliance on vehicle decision-making increases. "For example, the use of gates and lights at rail crossings has resulted in more carelessness at less protected crossings – not looking for trains for example," he says. He also points out issues relating to driver distraction once more information is being communicated.
As environmental and economic crises combine with safety and mobility problems on our road networks, a different approach to road development, in both advanced and developing countries, has never been more necessary. As the number of drivers increases and accidents soar, road construction by itself will no longer solve the problems faced on our road networks.
Communication technologies, via vehicle-to-vehicle and vehicle-to-infrastructure links, offer hope for a more sustainable and safer future – but, of course, not without obstacles. One thing is certain, though: the role of these technologies is on the rise, and they are now, in the words of the IRF, "a tool for governments to mitigate negative environmental impacts, reduce accidents and to battle congestion".
• Basically, biofuel is produced from natural resources and waste materials like agricultural dung, leaves from forestry, jute waste, etc.
• All of these materials are available naturally, so the fuel is sustainable and helps to keep the environment clean and healthy.
• Biofuel can also be produced using ethanol from naturally grown plants.
• From the above picture we can judge that in the future petrol and other conventional sources of energy will be exhausted.
• So this is the right time to save petrol and to put biomass briquetting plants into practice.
Need Of Fuel
• In this 21st century, fuel is a burning issue. The question arises whether our upcoming generations will be able to enjoy the benefits of electricity and fuel or not.
• So we should think about this serious problem. The Radhe Industrial Corporation contributes its share towards the economy.
• It means we have introduced one of the best briquetting plants, with the object of saving the environment.
• As compared to black coal and lignite, this plant does not pollute the environment.
Replace Fossil Fuel
• The main role of a briquette press machine is to convert waste into solid fuel.
• Biomass briquetting produces an ideal fuel which substitutes for coal, firewood, lignite, and other fuels.
• Now white coal is in great demand because of the scarcity of black coal, wood and lignite.
• Biomass briquetting has proved to be the best alternative source of renewable energy.
Use Of Biomass Energy
• The most common use of a briquetting plant is producing energy in an economical way.
• In underdeveloped countries like Kenya and Bangladesh there is no easy availability of electricity, so they use this plant very satisfactorily.
Briquette Plant Project
• The most significant reason to set up a briquetting plant is that we can generate electricity.
• Briquettes are also known as white coal, as they are cylindrical in form and create zero pollution when they burn.
• So we at Radhe are always ready to participate in the project of a green earth.
Keep In Touch
“You can’t change the past but you can change the future; it’s up to you what you want!”
November is Prematurity Awareness Month: Premature birth can happen to anyone. Jason K. Baxter, M.D., shares five things expecting mothers should know about premature birth and how to prevent it.
Premature birth is a worldwide public health epidemic and the No. 1 cause of infant death. Every year, 15 million babies are born prematurely and 1 million die. The rate of premature birth in the U.S. is among the highest in the world.
Coinciding with November being Prematurity Awareness Month, Jason K. Baxter, M.D., shares five things expecting mothers need to know about premature birth.
1. Premature birth can happen to anyone.
–One in 8 babies is born prematurely in the U.S. These babies are at risk for many health complications and their brains aren’t fully developed.
2. Cervical length and premature birth risk
–If the cervix shortens too soon during pregnancy, the risk of premature birth is very high; this happens to about 10 percent of pregnant women.
3. Proven treatment to prevent premature birth
–Progesterone can prevent preterm birth in women with a short cervix, reducing risk by nearly 50 percent.
4. Cervical length screening is a new prematurity prevention strategy
–Experts now recommend that cervical length measurements be added to routine prenatal care to make sure that pregnant women with a short cervix get progesterone treatment.
5. Pregnant women should ask their doctors about cervical length screening
–Women can take an active role in their prenatal care to make sure they’re doing everything possible to give their baby the best chance for a healthy, full-term birth.
Jason K. Baxter, M.D., MSCP, FACOG specializes in high-risk pregnancies at Thomas Jefferson University Hospital in Philadelphia, PA, where cervical length measurements are now part of routine prenatal care. Dr. Baxter was also part of the research study proving that progesterone treatment can prevent premature birth.
Draft 1: Speech Use Cases for Expert Handlers (Janina Sajka, author)
Computer users who are blind or severely visually impaired often use assistive technology (AT) built around synthetic text to speech (TTS). These AT applications are commonly called "screen readers." Screen reader users listen to a synthetic voice rendering of on screen content because they are physically unable to see this content on a computer display monitor.
Because synthetic voice rendering is intrinsically temporal, whereas on screen displays are (or can easily be made) static, various strategies are provided by screen readers to allow users to tightly control the alternative TTS rendering. Screen reader users often find it useful, for instance, to skim through content until a particular portion is located and then examine that portion in a more controlled manner, perhaps word by word or even character by rendered character. It is almost never useful to wait for a synthetic voice rendering that begins at the upper left of the screen and proceeds left to right, row by row, until it reaches the bottom because such a procedure is temporally inefficient, requiring the user to strain to hear just the portion desired in the midst of unsought content. Thus, screen readers provide mechanisms that allow the user to focus anywhere in the content and examine only that content which is of interest.
Screen readers have proven highly effective at providing their users access to content which is intrinsically textual and linear in nature. It is not hard to provide mechanisms to focus synthetic voice rendering paragraph by paragraph, sentence by sentence, word by word, or character by character.
Access to on-screen widgets has also proven effective by rendering that static content in list form, where the user can pick from a menu of options using the up and down arrows plus the enter key to indicate a selection, in lieu of picking an icon on screen using a mouse.
Access to content arrayed in a table can also succeed by allowing the AT to simulate the process a sighted user employs to consider tables. In other words, mechanisms are provided to hear the contents of a cell and also the row and column labels for that cell (which define the cell's meaning).
Similar "smart" content rendering and navigation strategies are required by screen reader users in more complex, nonlinear content such as mathematical (chemical, biological, etc.) expressions, music, and graphical renderings. Because such content is generally the province of knowledge domain experts and students, and not the domain of most computer users, screen readers do not invest the significant resources necessary to serve only a small portion of their customer base with specialized routines for such content. Furthermore, the general rendering and navigation strategies provided for linear (textual), menu, and tabular content are woefully insufficient to allow users to examine specific portions of such domain specific expressions effectively. On the other hand, domain specific markup often does provide sufficient specificity so that the focus and rendering needs of the screen reader can be well supported.
In order to gain effective access to such domain specific content screen reader users require technology that can:
- Synthetically voice the expression in a logical order
- Allow the user to focus on particular, logical portions of expressions, possibly at several layers of granularity
- Appropriately voice specialized symbols and symbolic expressions
The tsunami's unlikely legacy
Media organisations have from time to time flirted with bulletins devoted to good news, but audiences always respond the same way: tragedy rates over triumph, bad news sells. Sometimes, however, the good news is compelling, like the peace deal for the Indonesian province of Aceh, which promises to break a relentless cycle of conflict and death. In this case the good news is a direct result of the bad; the peace deal is the legacy of a truly monumental tragedy, the Boxing Day tsunami. In a single day, 132,000 Acehnese died, dwarfing the 15,000 or so deaths in almost three decades of war between Aceh's separatist forces and Indonesian troops. Both sides concede it was this immense loss which brought them back to the negotiating table; the suffering had to stop.
The people of Aceh have little memory of peace. Since Dutch colonial rule, the fiercely proud Islamic sultanate has been locked in separatist struggles. When Indonesia declared its independence in 1945, the Acehnese assumed they would be rewarded for their frontline sacrifices against the Dutch colonialists, with a small Islamic nation state of their own. They weren't. Within a decade Aceh's formidable warriors were fighting their former Indonesian nationalist allies. The modern phase of the conflict dates back to 1976 and has since been stoked by serious human rights abuses by Indonesian troops, corruption and the stripping of Aceh's rich natural resources by military officers and officials from Jakarta. But both sides lost so many to the tsunami they found themselves on the same side, rescuing survivors.
The peace agreement is no guarantee, but it is built on much firmer ground than the failed ceasefires of the recent past. The Acehnese separatists have dropped their demand for independence, in favour of autonomy, the key political breakthrough. However, the Indonesian Vice-President, Jusuf Kalla, was also instrumental in keeping the negotiations on track. His last-minute concession to meet demands for local political parties in Aceh contrasts with the rancour which caused previous talks to collapse and allowed the Indonesian hardliners to revert to their futile military "solution". There will certainly be suspicions among ordinary Acehnese that the Indonesian Government will fail to deliver - it has reneged on autonomy deals in the past.
This week, however, was about the moment - and that was when tsunami survivors gathered by candlelight to pray for peace at the grand mosque at Banda Aceh.
Qantas flies into flak
Peter Allen's song I Still Call Australia Home has proved prophetic for Qantas. All those angel-faced children - marshalled improbably on international landmarks - have been singing the wistful ballad of an expatriate, a singer-songwriter who found his fame and fortune a long way "over the foam". Increasingly, that's where Qantas, too, must look to assure its future. Nostalgia's great, but it won't fuel a plane.
And fuel is the challenge. Last year, the international aviation industry lost $6.2 billion because of rising fuel costs. The escalating oil price means the industry will be wallowing in red ink this year, despite more passengers and freight. For some national carriers this is no problem; they are owned by governments more interested in prestige than profits. Qantas, however, does not have that luxury; in the tough market since the terrorist attacks of September 11, 2001, it has had to perform. And the Australian airline has chalked up record profits, partly due to 3000 redundancies in the past four years. Qantas says more jobs must go - or go offshore. It wants to slash costs by $1.5 billion over the next few years to counter the escalating price of aviation gasoline, which will add $1.2 billion to its annual fuel bill.
Outrage at the cutbacks is understandable; it's not comfortable when the warm patriotic glow of the flying kangaroo confronts the cold realities of the international market. Nor does it seem quite "the spirit of Australia" to have the Qantas chief executive, Geoff Dixon, telling employees they should appreciate their "decent jobs", while announcing many are about to disappear. However, Qantas is just one of many Australian companies obliged to adapt to a changing world. Qantas can survive only by being internationally competitive - and the competition will become only more intense. Certainly, the growth in world demand driving up the oil price is not about to ease. Beyond that is the increasing pressure on governments to water down or scrap the arrangements which protect the routes of national carriers.
No wonder the International Air Transport Association, which represents most world airlines, says there must be not merely budget airlines but a budget industry. Where other airlines go, Qantas has no choice but to follow. The consolation from all this cost-cutting is that, ultimately, it must be the traveller who is the winner.
There was glorious absurdity about our politicians fuming over a memo forbidding Parliament House workers to address them as mate. Delivered fresh daily in air-conditioned limousines to their billion-dollar seat of government, politicians from both sides have spent the past 30 years relentlessly knocking away the underpinnings of the classless society symbolised by mateship, before retiring to boards and consultancies that befit their newly raised status. "Mate" is a doubtful term at best. Bullies use it to attract the attention of the kid they are about to thump. Used of the powerful, it often denotes a relationship so close as to be suspect. More generally, it can indicate a momentary loss of memory. But the myth of mateship lives on for our politicians, and will do so years after the laws they pass have emptied it of meaning.
I had the pleasure of spending a couple of days with faculty and students at the Centre for Educational Technology at Tallinn University here in Estonia. My host, Mart Laanpere, showed me a number of very interesting projects. Driven by similar motives to our work on the Athabasca Landing, they have developed the LePress system, built on WordPress, to expand learning opportunities, ownership and access “beyond the LMS”.
But what particularly caught my attention was an application created by Hans Põldoja (based on ELGG) that teachers can use to create a profile testing and documenting their digital competencies. The application, called DigiMina (DigitalMe in Estonian), is described in the article Poldoja, H., Väljataga, T., Laanpere, M., & Tammets, K. (2014). Web-based self- and peer-assessment of teachers’ digital competencies. World Wide Web, 17(2), 255-269, or in a slideshow.
Most every school jurisdiction I know of has come to realize that teachers need (and many lack) the skills to use the net effectively, both to benefit themselves as learners and to strengthen their competence as effective teachers. The problem is that many teachers (and their administrative supervisors) don’t know what they don’t know!
To solve this problem, Hans and his Estonian colleagues scoured the net for organizations that have attempted to list basic competencies required for effective use of digital technologies. They soon realized most competency lists focused on general and uncontextualized skills, with little direct relevance to the particular contexts faced by practicing teachers. Finally, they selected the competency model developed in 2008 by the International Society for Technology in Education (ISTE). The model consists of five core competencies:
- Facilitate and inspire student learning and creativity
- Design and develop digital-age learning experiences and assessments
- Model digital age work and learning
- Promote and model digital-age citizenship and responsibility
- Engaging in professional growth and leadership
For each of these 5 broad categories, they identified 5 particular competencies in increasing order of complexity. The particular competencies focused on “knowing how” to do some task, as opposed to “knowing what”. The challenging part, of course, comes when trying to frame these particular competencies broadly enough to be relevant to all (or nearly all) teachers, yet narrowly enough to remain contextually relevant. The lower level competencies were assessed using multiple choice or fill-in-the-blanks test items (created to the IMS QTI standard, of course). The higher level tasks required teachers to provide written statements, or more often links to web pages, that give evidence of their competency. The DigiMina system then assigns these higher level items to peers for comment and assessment. At the completion of the assessment, a digital competency profile is created that gives evidence of their competencies (for self and/or administrative assessment), can be embedded in the teachers’ own blogs or profiles, or school websites, and provides direction for needed professional development.
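A rough sketch of the peer-review step just described might look like the snippet below. This is not DigiMina's actual code, and the names and data are invented, but it shows the fan-out of each higher-level evidence item to a fixed number of peers other than its author.

```python
# Toy peer-assignment for evidence items (illustrative only). Assumes
# there are more teachers than reviewers_per_item, else the loop spins.
import itertools

def assign_peers(submissions, reviewers_per_item=2):
    # submissions: {teacher: [evidence_url, ...]}
    pool = itertools.cycle(submissions)        # round-robin over teachers
    plan = []
    for author, items in submissions.items():
        for url in items:
            peers = []
            while len(peers) < reviewers_per_item:
                peer = next(pool)
                if peer != author and peer not in peers:
                    peers.append(peer)
            plan.append((author, url, peers))
    return plan

subs = {"anna": ["http://example.org/a1"],
        "bert": ["http://example.org/b1"],
        "celia": ["http://example.org/c1"]}
for author, url, peers in assign_peers(subs):
    print(f"{url} (by {author}) -> reviewed by {', '.join(peers)}")
```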
Besides the utilitarian value I can see in this open source product, I was struck by the design process used in its creation and assessment. The Tallinn team used a design-based research approach with four phases slightly different from the four developed by Bannan-Ritland (2004) or Herrington and Reeves (2011). These are (1) contextual inquiry, (2) participatory design, (3) product design, and (4) production of software as hypothesis. I like Herrington and Reeves’s third stage as being testing in a local context (which was done by the Tallinn team) and their fourth stage as being the development of design principles, rather than mere hypothesis. But the results are similar – a useful intervention validated in a real teaching/learning context. Testing (using survey items) with teachers showed generally positive results on questions related to usefulness and usability.
I would love to see such a system tested at scale by Canadian teachers.
Reviewed by Peyton & Vincent (age 7)
I liked learning that the Hognose Snake flips on its back when it gets alarmed. My favorite part of the book was when the King Snake ate the Rattlesnake. My favorite snake is an Emerald Tree Boa because it can climb big trees. I thought that snakes had one layer of skin but in the book it says that they have three layers of skin. One of the snakes, the Anaconda, reminds me of a snake I saw on a fishing trip.
I recommend this book to kids who like true stories because this book is full of facts about different snakes. I think anyone interested in snakes would like this book because it gives you a lot of information. It tells about poisonous snakes, and that is good information to know.
Water Street Bridge Fact Sheet
by City of Santa Cruz Public Works Department
March 28, 1997
The Water Street Bridge Rehabilitation Project involved the removal and replacement of the northern half of the bridge and the earthquake retrofit of the southern portion of the bridge. The purpose of the project was to upgrade the bridge to current earthquake standards and to improve the flood capacity at the bridge.
The bridge actually consisted of three different bridges that were connected together. The northern portion consisted of two older concrete arch bridges that were constructed in 1908 and in 1914. In 1908, Union Traction Company built a three-hinge, reinforced concrete bridge that carried workers and visitors on trolley cars into the downtown area. In 1914, the city constructed another reinforced arch bridge next to it. The center piers of the older arch bridges restricted flood flows in the San Lorenzo River and needed to be removed. By 1967, the city had outgrown these two bridges and another bridge was built adjacent to the existing ones, doubling the traffic capacity. Since the newer, southern portion was constructed in 1967, it did not meet the earthquake standards now required for bridges and needed an earthquake retrofit.
The Water Street Bridge spans some 320 feet over the San Lorenzo River, is 94 feet wide and 30 feet high, and now provides a distinctive gateway into downtown Santa Cruz. The replacement of the northern half of the bridge was designed to match the aesthetics of the southern half of the bridge that was constructed in 1967. New, decorative street lights and pedestrian overlooks were included on both sides of the bridge. The bridge also includes a total of four travel lanes as well as bike lanes, sidewalks and railings, levee path undercrossings, gabion bank protection and landscaping.
The improvements to the bridge were designed by Boyle Engineering Corporation from Sacramento. Construction was performed by RGW Construction, Inc. from Fremont, and Construction Management Services were provided by the San Jose office of Parsons Brinckerhoff Construction Services.
The total cost of the project was approximately 5.9 million dollars. Construction commenced in May of 1996 and was completed in approximately ten months. The project was constructed under a restricted time schedule in order to meet State of California Department of Fish and Game permit requirements and under difficult river conditions.
Funding for the project was provided through the Federal Highway Bridge Rehabilitation/Replacement (HBRR) program. The Federal Highway Administration provided 80% of the eligible construction costs for the new, northerly portion of the bridge and 100% of the eligible costs for the earthquake retrofit of the southerly bridge. The City provided approximately 1 million dollars in necessary local matching funds from the Storm Water Fund.
This fact sheet originally appeared on the City of Santa Cruz's Web site. Photo from the City of Santa Cruz Public Works Department.
Old Milton County no longer exists. Its territory lies in north Fulton County, which was created in 1853. In 1857 Milton County (now part of Fulton County) included parts of Cherokee, Forsyth and Cobb Counties. Milton was named for John Milton, Georgia's first Secretary of State, and was located north of the Chattahoochee River (now Fulton County). Fulton was named for Robert Fulton, the famous inventor who experimented with a submarine boat in 1801 in France and built the Clermont, a steamboat which sailed up the Hudson River in 1807. During the Revolutionary War, Milton traveled to Charleston, South Carolina, and New Bern, North Carolina, before moving to Maryland with the official records of the state while Georgia was occupied by the English. Campbell and Milton Counties merged with Fulton on January 1, 1932. At this time Roswell was ceded from Cobb County. Research in Fulton and Campbell Counties should be done together with Milton, as they were once combined. Early Settlers: Morgan Fields, Robert Martin, John Nix, Hiram Taylor and Dabner Yancey.
Milton County Records Available to Members of Georgia Pioneers
- Wills 1865-1882 (abstracts).
- Digital Images 1865 to 1882. Testators: Bennett, D. W.; Binion, Job; Brown, Francis; Chamblee, Elisha; Crosby, Gardner; Dildy, Levi; Fields, Morgan; Hook, Jacob; Johnston, John G.; Land, Nancy; Lee, James; Martin, Robert; Maxwell, Jeremiah; Nesbit, W. H.; Newton, N. G.; Nix, John; Reavis, John; Rogers, William; Scales, Jesse; Taylor, Hiram; Yancy, Dabney.
Indexes to Probate Records:
- Will Book A 1865-1882.
- Will Book B 1879-1928.
- Legal Advertising 1868-1873.
This is always a busy time of year for the American Studies department. February is Black History Month and March is Women's History Month, and we always celebrate both, while keeping in mind that giving historically oppressed groups one month out of twelve is just a start, not an end in itself. One of the highlights of our celebrations is going to be a panel on the relationship between black and white women in feminism. Titled What can Black Feminists Teach White Feminists?, the panel on February 28th will be a discussion between black and white students who have embraced feminism, and who are committed to building a multicultural feminist movement. Joining the students are two American Studies faculty, Dr. Joyce Hope Scott and Dr. Gail Dines.
This panel addresses one of the major themes running through American Studies classes, in that we focus on the question of how people from different races, classes and gender identities build movements for change. In the past, progressive movements have focused on one major issue—the unions on labor rights, the women's movement on gender equality, the civil rights movement on racial equality—yet many of us live at the intersection of many different forms of oppression. One of the first people to write about this was the African American legal scholar Kimberlé Crenshaw, who used the term intersectionality to address the ways that different types of discrimination do not so much work independently of each other, but actually interrelate.
At Wheelock we believe in mentoring and coaching our students to become active citizens in the world as well as leaders in their professional lives. Toward this end, the American Studies department is sponsoring a student-run conference on multicultural feminism on the college campus on April 14th. Run by two American Studies students, Mary McNeil and Ally Harrison, the conference will bring together students from surrounding Boston colleges for a day of talks, workshops and interactive sessions. Panels will cover such topics as images of women in media, the role of men in feminism, and how to build a 21st century movement for change.
The panel on men in feminism speaks to the idea that feminism is a movement for all people, not just women, since it provides a road map for individual and collective empowerment. Dr. Eric Silverman, the chair of Human Development, has created a course called Anthropology of American Men, which introduces students to a scholarly analysis of masculinity and the ways that men have been shaped by historical and economic forces. This is just one more example of how Wheelock provides a cutting edge education for our students.
During Black History Month, there will be a panel on African American Men in the US sponsored by the Black Student Union. Here Wheelock students will share their ideas and thoughts about race, class, and gender, in ways that open up discussions on the complexity of identity. Also scheduled is a discussion on slavery led by Dr. Joyce Hope Scott, an associate professor of American Studies, who brings a sophisticated multi-disciplinary approach to the subject. Taken together, these events will engage the Wheelock community in a richly textured conversation that promises to be compelling, intellectually challenging and thought provoking. It is our way of making sure that education is not something that happens just in the classroom, but is woven throughout the college experience.
Tonight at sunset marks the beginning of Hanukkah. The first flame of the menorah will be lit and the eight-day celebration of the Festival of Lights begins.
The Jewish celebration commemorates the retaking of the Holy Temple in Jerusalem from the Seleucids (Syrian-Greeks) some 22 centuries ago. It celebrates the rededication of the Temple.
The faithful who had retaken the Temple wanted to light the menorah, but could find only a small cache of uncontaminated olive oil - enough for one day.
Miraculously, the oil burned for eight days - long enough for the faithful to press and consecrate more oil.
Today, Hanukkah is celebrated as a triumph of light over darkness, spirituality over materialism and, most importantly, the weak over the strong. The name derives from the Hebrew word meaning "to dedicate" - and the holiday is also known as the "festival of rededication."
It is traditional to eat fried foods like latkes (potato pancakes) and sufganiyot (jelly-filled donuts), reminiscent of the miracle of the oil, and to spin the dreidel (a top-like toy), a reminder of the Seleucid oppression.
On the first night of Hanukkah, a single flame is lit, on the second night, a second flame . . . and so on, until the eighth day, there are eight flames in honor of that long-ago miracle in the Holy Temple.
To all of our friends and neighbors who are observing this Festival of Lights, "Happy Hanukkah."
(If you have any questions about Hanukkah, please contact the Jewish Congregation of Maui at 874-5397. Our thanks for their help reviewing this editorial.)
* Editorials reflect the opinion of the publisher.
Plant of the Week
California poppy (Eschscholzia californica)
By Julie Kierstead Nelson
California poppy, the state flower of California, is native to the Pacific slope of North America from Western Oregon to Baja California. Adelbert von Chamisso, naturalist aboard the Russian exploring ship "Rurick”, discovered and named the species. The Rurick visited Alaska and California in 1816 under the command of Lieutenant Otto von Kotzebue. Chamisso named the California poppy Eschscholzia californica in honor of J. F. Eschscholtz, the ship's surgeon and entomologist (note that he accidentally left the “t” out of Eschscholtz’s name).
Seeds of this plant were introduced into English gardens in the nineteenth century. Seed catalogs now offer many different colors. California poppies have been planted in most of the United States and have become established along roadsides, in empty lots, and other disturbed places. In California, it is hard to tell any more which poppies are native wildflowers and which are garden escapes.
California poppies are easy to grow. Sow the seeds shallowly (1/16-inch deep) in fall or early spring in mild, wet winter climates, including most of California west of the Sierra-Nevada. Seeds will germinate after the first fall rains or when the soil warms in the spring. In hot summer areas, the poppies will bloom in spring and early summer, and then the tops will die back and the plants become dormant during the heat of the summer. The poppy survives in the form of a fleshy taproot. In cooler coastal climates, California poppies may bloom most of the summer. Sandy, well-drained soil in full sun is best. No supplemental watering is required unless the growing season is exceptionally dry.
In mild-winter climates, these poppies will survive several years, resprouting each fall. They will reseed themselves if they are happy. Where winters are cold, the poppy behaves as an annual, renewing itself from seed each year. The flowers of California poppy close each night, and on cloudy days. Enjoy them where they grow. If you pick California poppies for a wildflower bouquet, you will be disappointed when the petals almost immediately fall off.
For More Information: PLANTS Profile - Eschscholzia californica, California poppy
Just over a century ago this was Zululand and this mountainside was the scene of the greatest defeat in British military history.
Today we are going to relive this and the redemptive battle later that day at the famous mission station Rorke’s Drift – setting for the movie Zulu – only six miles from the mountain.
In front of me, alongside a small group of fellow British tourists, is Andrew Rattray, son of the famed Zulu war historian David, who was murdered in his nearby home in 2007.
Andrew is a chip off the old block, a marvellous storyteller whose knowledge of the colonial wars in this part of Africa is matched only by his fluency in the beautiful, musical Zulu language.
As we look down on the vast expanse of veldt below us, dotted with white cairns that mark the places where more than 1,300 British soldiers and 52 officers fell on the battlefield, we can see what a large theatre of war this was.
It is a boulder-strewn terrain that rises and falls deceptively and in those subtle crevices an army of 20,000 Zulu warriors crept up on the British encampment.
This was one of the great Victorian battle pieces and only 55 soldiers escaped alive.
Earlier that morning we had driven across from the Rattrays’ lodge, Fugitives’ Drift, which is set in a 5,000-acre Natural Heritage Site that overlooks Isandlwana and Rorke’s Drift, the mission station that later the same day would become a scene of redemption for Britain’s military.
The spacious cottages at Fugitives’ Drift are set amongst magnificent paperbark thorn trees and Natal figs.
They all have private verandas from which one can look out over the lovely gardens and enjoy the diverse bird life that thrives here.
Dinner is taken in a mess that is festooned with memorabilia from the Anglo-Zulu wars and this seems apposite for the stories that swirl around every visitor who comes here.
It had a profound effect on Prince Charles when he visited with Prince Harry in 1997, soon after Princess Diana’s death.
The drive from Fugitives’ Drift Lodge to Isandlwana had been an eerie experience.
We listened on the vehicle’s CD player to the voice of Andrew’s late father reading extracts from his celebrated Day Of The Dead Moon recordings, which laid out the events leading up to the battle.
The last time I visited Isandlwana it was with David.
In a telling passage the familiar voice quoted a message sent from the Zulu chief Cetshwayo to the British commander Lord Carnarvon.
It read: “Tell the great white queen that if she is intent on sending her red soldiers against me and my country, she must know that my warriors are as numerous as the ants on this earth and they will eat those red soldiers up.”
So it came to pass.
On the open veldt beyond the spot where we are sitting, the clusters of cairns provide visual evidence of an army that was overwhelmed by, in the words of the then British High Commissioner Sir Bartle Frere, “a bunch of savages with sticks”.
Andrew, like his father before him, is quick to point out the Zulus were anything but that.
These were brave warriors led by tactically astute generals who, incidentally, had great respect for their British adversaries.
As the battle of Isandlwana was subsiding one wing of the Zulu army was crossing the Buffalo River to the east of the battlefield, from Zululand into colonial Natal, heading for the mission station at Rorke’s Drift.
We, too, head for Rorke’s Drift, arriving in the late afternoon just as the Zulu regiments did on that fateful day.
The sun is beginning to set, washing the landscape with an ochre tint and there is a stillness in the air just as there must have been in the moments before the first attack.
As we wander around the site of the former missionary outpost it is easy to picture the waves of Zulu warriors coming over the brow of the Oskarberg mountain.
The small brick buildings, the Rorke’s Drift Museum and a church, that stand on this quadrangle of earth are not original but are on the same spots as the storeroom and hospital that were central to the siege.
It was in these moments before sunset that those brave men made makeshift battlements as the Zulu army approached.
What followed on January 22, 1879, was a siege like no other – 4,000 Zulus against 139 British soldiers, many of whom were invalided in the mission hospital.
Wave after wave of attack for 11 hours.
The movie Zulu is regarded as an accurate dramatisation of events, except for two things. In the film the attack takes place during the day for obvious movie reasons.
It also portrays one of the heroes, Private Henry Hook, as a trouble-making drunk, whereas the real Hook, who won the Victoria Cross, was a modest, teetotal lay preacher.
The beleaguered redcoats fought off the swarming Zulu hordes with the loss of only 17 soldiers.
Eleven Victoria Crosses were awarded, the most ever received by one regiment in a single military action.
We leave Rorke’s Drift and head back to Fugitives’ Drift lodge for dinner.
The talk will doubtless centre on the heroic deeds of the Victorian age.
What a privilege.
Africa Travel (020 7843 3500/africatravel.co.uk) offers a nine-night trip to South Africa from £1,755pp (two sharing).
Price includes return British Airways flights from Heathrow to Johannesburg, airport transfers in Johannesburg, two nights at the Maslow Hotel, four nights at Fairmont Zimbali Resort, three nights at Fugitives’ Drift Lodge (full board) including battlefield tours and car hire.
South African Tourism: southafrica.net.
What makes children, or even adults, succeed? It’s commonly believed that cognitive skills, also known as intelligence, are a primary factor. Smart people are the successful ones, right? Tests like the SAT measure this stuff, skills like pattern recognition, reading comprehension, and math problems.
But in the book How Children Succeed: Grit, Curiosity, and the Hidden Power of Character by Paul Tough, the author discovers a lot of evidence that doesn’t support that theory. Instead, non-cognitive “character” skills like perseverance, curiosity, optimism, and self-control may be even more important.
Tough weaves together various research studies and experiments to make this argument. Here are just a couple of examples:
- GED vs. High-school graduates. Passing the GED test means you are proficient in the same academic areas as an actual high-school graduate. Yet people with GEDs are consistently less likely to graduate college, have lower incomes, and are more likely to be in jail. Why? Perhaps being a high-school graduate requires additional traits – the inclination to persist at an often-boring task, the willingness to delay gratification for a long-term goal, or the ability to adapt to different social environments.
- KIPP charter schools. KIPP schools are charter middle and high schools that take in lower-income students by lottery (no test screening) and use intensive educational efforts with the ultimate goal of a 4-year college degree. The first few KIPP classes improved their standardized test scores in middle and high school significantly. Yet the actual college graduation rates were disappointing, with a curious pattern:
The students who persisted in college were not necessarily the ones who had excelled academically at KIPP. Instead, they seemed to be the ones who possessed certain other gifts, skills like optimism and resilience and social agility. They were the students who were able to recover from bad grades and resolve to do better next time; who could bounce back from unhappy breakups or fights with their parents; who could persuade professors to give them extra help after class; who could resist the urge to go out to the movies and instead stay home and study.
Further good news is that character skills appear to be relatively malleable; you can learn to improve your level of grit and self-control. KIPP schools now provide their students with a “character report card” as well as traditional academic grades.
This book is a great read for parents and educators, but I would say that the conclusions extend to adults and even personal finance. We all need these skills to be good citizens. Being financially secure is simple on paper – spend less than you earn, invest the difference for the future, and keep it up every year. Hmmm… that sounds a lot like self-control, delayed gratification (and perhaps optimism), and persistence.
I would argue that these character skills are more important than what you could learn in any book about Roth IRAs or modern portfolio theory. The question is how do we teach adults these traits, or is it too late?
Babyfacedness and Political Implications
From PsychWiki - A Collaborative Psychology Wiki
When individuals form impressions of other people, they are influenced by an assortment of factors. One such factor is the features of a person's face. Physiognomy is using someone's facial features to judge their character and personality traits; this practice dates back to ancient Greece (Hassin & Trope, 2000). One particular phenomenon of physiognomy occurs when adults have neotenous facial features, which are the retention of youthful traits in adults. Research has revealed that individuals perceive those with a babyfaced facial appearance to be associated with childlike traits (Berry & McArthur, 1985; Keating, Randall, & Kendrick, 1999; Todorov, Mandisodza, Goren, & Hall, 2005; Zebrowitz & Montepare, 2005). A babyfaced appearance includes large and round eyes, a small chin, high eyebrows, and thick, pudgy lips (Berry & McArthur, 1985). A mature face usually includes thick eyebrows, thin lips, a wide jaw, and small eyes; these features are perceived to be associated with mature traits such as power and strength (Keating et al., 1999).

Components and Physiognomic Characteristics of a Babyface

Berry and McArthur (1985) examined some components and consequences of adults having a babyface. Participants in this study viewed a series of 20 pictures of different individuals, who varied in babyfacedness. The participants were asked to rate the physiognomic characteristics and babyishness of the faces (Berry & McArthur, 1985). Results showed that participants perceived individuals with babyface features to have more childlike traits, compared to mature-faced targets. These childlike traits included honesty, warmth, kindness, and naïveté. In addition, previous research by McArthur and Apatow (1984) found that as babyfaced features were increased in a manipulated photograph, the ratings of warmth, kindness and honesty also increased.
Perceptions of Politicians
Voters' perceptions of political candidates can be affected by their facial maturity. Zebrowitz and Montepare (2005) studied the remarkable effect of physiognomy on political elections and found that the outcomes of almost 70% of recent Senate races were accurately predicted by participants simply viewing pictures of the candidates' faces. Specifically, participants selected candidates with more mature faces, rating these individuals as more competent and knowledgeable than their babyfaced rivals. Moreover, Keating and colleagues (1999) found that when famous politicians' faces were subtly morphed to increase babyfacedness, they were perceived to have more childlike traits. Digitized facial images of Presidents Reagan, Clinton, and Kennedy were all manipulated to be more neotenous. Although subjects were unaware of these subtle changes, their perceptions of the presidents' traits were nevertheless altered. Regardless of whether participants supported or opposed Clinton, neotenous features made him appear more honest and good-looking, while his perceived power was not reduced (Keating et al., 1999). Additionally, when mature features were enhanced, the youngest president, Kennedy, appeared to be more powerful, and the oldest president, Reagan, appeared too old and was rated to be less powerful and cunning. Subtle, concealed changes in the faces of famous presidents altered the subjects' character judgments of these familiar politicians (Keating et al., 1999).
Social and Political Consequences
Mature-looking individuals are favored for leadership positions, particularly as government leaders. Correspondingly, babyfaced individuals are more often preferred for occupations involving warmth, such as nursing (Zebrowitz & Montepare, 2005). Todorov and colleagues (2005) found that while it is widely assumed that voters’ decisions are based on logical and thoughtful reflection, quick and impulsive trait inferences can play a significant role in voting choices. Judgments based on interpreted physiognomic characteristics are made with high degrees of confidence; this overconfidence has a steady effect on people’s decisions and opinions (Hassin & Trope, 2000). When people are choosing candidates for office, one of the most important traits is competence. Todorov and colleagues (2005) had participants evaluate the competence of candidates running for the U.S. Senate or House. The participants were very quickly shown side-by-side black-and-white photos of the winners and runners-up of recent elections and asked to assess their competence. Results showed that candidates with more perceived competence won 71.6% of the Senate elections and 66.8% of the House elections, and competence even significantly predicted the margin of victory (Todorov et al., 2005). Todorov and colleagues (2005) recognized that participants might be making inferences of likability rather than of competence. To rule out this alternative hypothesis, participants were asked to rate seven traits of each candidate rather than just competence: competence, intelligence, leadership, honesty, trustworthiness, charisma, and likability. There were distinct differences among the seven trait judgments, but most important, the judgment of competence was the only factor that accurately predicted the outcome of the elections. In addition, the results did not suggest that age, attractiveness, or familiarity could account for the relation between perceived competence and the outcome of the elections. A regression analysis showed that these other judgments of age, attractiveness, and familiarity contributed only 4.7% of the variance in Senate races, while competence singlehandedly accounted for 30.2% of the variance. Furthermore, in contests between candidates of the same sex and ethnicity, competence accounted for 45% of the variance and the other judgments accounted for less than 1% (Todorov et al., 2005).
When does it fail?
Candidates with more mature-looking faces have nevertheless lost elections about 30% of the time. A possible explanation is that in these cases, babyfaced candidates could have been preferred because of other face-based biases (Zebrowitz & Montepare, 2005). Certain traits may be more important in one particular election contest than in another. For example, integrity is a trait linked to babyfaces, and if this trait is important to voters in a given election, babyfaced candidates could have an advantage (Zebrowitz & Montepare, 2005). Other information about the candidates, such as party affiliation, is also important to voters. Furthermore, voters’ subjective views are undoubtedly also based on the candidates’ positions on specific issues.
Todorov and colleagues (2005) showed that it is possible to predict the outcome of elections based on voters’ judgments of appearance. Yet voters’ decisions in real life are based on more than just judging the competence of a candidate by appearance. When voters have more information about a candidate than just a photograph, these initial impressions are weakened (Todorov et al., 2005). This remarkable phenomenon nevertheless raises unanswered questions about the underlying brain mechanisms behind such appearance biases (Zebrowitz & Montepare, 2005). If psychologists continue to study the nature and source of appearance biases, the work could benefit the real world, particularly by identifying these causal factors and increasing the likelihood that leaders get elected based on skill, not appearance (Zebrowitz & Montepare, 2005). Although certain candidates may look more competent, they are not necessarily more skillful, and yet neotenous manipulations of politicians’ facial features can alter the public’s perception of these leaders’ characters (Keating et al., 1999). As Todorov and colleagues (2005) note, an empirical question that still needs to be addressed is whether trait inferences from facial appearance are correlated with the underlying traits; if they are, the effects of facial appearance on voting decisions could be normatively justified.
Berry, D. S., & McArthur, L.Z. (1985). Some Components and Consequences of a Babyface. Journal of Personality and Social Psychology, 48, 312-323.
Hassin, R., & Trope, Y. (2000). Facing faces: Studies on the cognitive aspects of physiognomy. Journal of Personality and Social Psychology, 78, 837-852.
Keating, C. F., Randall, D., & Kendrick, T. (1999). Presidential Physiognomies: Altered Images, Altered Perceptions. Political Psychology, 20, 593-610.
McArthur, L. Z., & Apatow, K. (1983–1984). Impressions of baby-faced adults. Social Cognition, 2, 315–342.
Todorov, A., Mandisodza, A.N., Goren, A., & Hall, C.C. (2005). Inferences of competence from faces predict election outcomes. Science, 308, 1623–1626.
Zebrowitz, L. A., & Montepare, J. (2005). Appearance Does Matter. Science, 308, 1565-1566.
|
<urn:uuid:c56f6c5a-8698-4431-bc65-dce16b58af28>
|
CC-MAIN-2016-26
|
http://www.psychwiki.com/wiki/Babyfacedness_and_Political_Implications
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783408840.13/warc/CC-MAIN-20160624155008-00159-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.947786 | 1,848 | 3 | 3 |
Cognitive abilities decline faster in people with untreated hearing loss than in those with normal hearing. Use of hearing aids may forestall or reduce cognitive decline.
Older adults with untreated hearing loss are more likely to develop problems thinking and remembering than older adults whose hearing is normal, according to a study by hearing experts at Johns Hopkins University School of Medicine in the US.
In the study, volunteers with hearing loss, who underwent repeated cognition tests over six years, had cognitive abilities that declined some 30% to 40% faster than in those whose hearing was normal. Levels of declining brain function were directly related to the amount of hearing loss, the researchers say. On average, older adults with untreated hearing loss developed a significant impairment in their cognitive abilities 3.2 years earlier than those with normal hearing.
Frank Lin, assistant professor at the Johns Hopkins University School of Medicine, says that possible explanations for the cognitive slide include the ties between hearing loss and social isolation, with loneliness well established in previous research as a risk factor for cognitive decline. Degraded hearing may also force the brain to devote too much of its energy to processing sound, at the expense of energy that would otherwise be spent on memory and thinking. He adds that there may also be some common underlying damage that leads to both hearing and cognitive problems.
Use of hearing aids
The research team plans to launch a much larger study to determine if the use of hearing aids or other devices to treat hearing loss in older adults might forestall or delay cognitive decline.
An earlier study, conducted for the National Council on Aging in the US, found that hearing aid users were perceived as having better cognitive functioning than non-users and as being less introverted. In general, hearing aid users had greater overall health status than non-owners, including higher self-confidence, a better social life, and better mental and physical health.
“Our results show that hearing loss should not be considered an inconsequential part of aging, because it may come with some serious long-term consequences to healthy brain functioning," says Lin.
"Our findings emphasise just how important it is for physicians to discuss hearing with their patients and to be proactive in addressing any hearing declines over time."
He also says that only 15% of those who need a hearing aid in the US get one, leaving much of the problem and its consequences untreated.
Sources: www.sciencedaily.com and The Hearing Review 7, 2000
|
<urn:uuid:5fb31eef-ad4a-432a-b775-24ce9da002fe>
|
CC-MAIN-2016-26
|
http://www.hear-it.org/Untreated-hearing-loss-has-a-negative-impact-on-cognition
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397213.30/warc/CC-MAIN-20160624154957-00087-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.963225 | 567 | 3.21875 | 3 |
Depression in Pregnancy
Depression is the most common psychiatric disorder, so it's a commonly encountered pre-existing condition during pregnancy. Additionally, women have it twice as often as men, and among women, there is an increased tendency toward it during the reproductive years. Where the menstrual cycle fits in as a contributing factor is unknown, as we've still only just scratched the surface of the whole PMS mystery. Certainly borderline depression can be affected by the hormonal impact of the menstrual cycle.
And by pregnancy, too.
Pregnancy is a particularly fertile field for depression to either start anew or worsen if already a problem. The extra physical, financial, marital, and sexual stresses come whether one is ready or not. On top of that, any new feelings of poor self-image can reinforce depression's already negative self-image problems.
An obstetrician is qualified to handle mild depression, anxiety, "the blues," and general moodiness. But severe depression is a very serious illness that requires the additional care of a psychiatrist, because many people die from it! I'm talking about suicide, so all moodiness should be questioned.
Diagnosing Depression in Pregnancy
Depression diagnosis can often be a confusing challenge. For instance, a common cause (co-illness?) of depression is thyroid disorder. Many women who have hypothyroidism will present first as a depressed patient, so thyroid function testing is a very good idea in anyone complaining of depression. Also, depression can be over-diagnosed. For instance, if a woman's husband has just died in a car crash, her house has burned down, and she's been mugged and beaten up recently, depression is probably not an illness but a reasonable reaction to these things.
The point of this ridiculous example is that it's not inappropriate to be bummed out over really bad things in one's life. Clinical depression, on the other hand, is when there is an inappropriate reaction to things, known or unknown. Recognized risk factors for depression include:
- Childhood trauma, like death or illness of a parent or sibling
- Childhood sexual abuse, which will distort the well-being of a person on many levels
- Family history of depression
- Lower socioeconomic status (translated, poverty)
- Substance abuse
After delivery, feeling depressed may respond to an understanding ear and reassurance that this is common. What new mothers (and fathers) fail to anticipate is a selfish inner rebellion against the fact that they've been pushed to the #2 (or #3) position in the family and can't just do whatever they want anymore. For instance, let's take movies. As a couple you pretty much see all of the movies you really want to see, either at the theater or on cable or tape. Along comes baby.
Forget movies. No movies for about two years! And this is a colossal drag, since life has been one big date up till now. Now you can forget even seeing a movie at home uninterrupted by feedings or diapers or just checking on a suspiciously quiet baby. And if you go someplace, no more just hopping into the car and goin'. You've got to haul all of this stuff along: packages of diapers, wipes, blankets, clicky toys, and the like. And as the baby gets older, add to this haul of paraphernalia collapsible rolling pods, strollers (two varieties, the deluxe and the umbrella), medicines, and snap containers of gruel.
It won't get any better until you realize and accept the new world order:
You just have to put your life on hold for a couple of years.
On the surface, this is a bitter disappointment to your own inner child, who wants to shuck & jive and rock & roll; but your thinking parental brain knows better--you are now a family and you're doing this for your children. Fulfillment in life, trust me, is much better than just having a lifetime of fun.
During pregnancy, feeling depressed is usually a problem in which a woman, once feeling fit, experiences nagging physical complaints caused by something she has no control over. As a woman's abdomen expands in the mirror, this physical sign is symbolic of a shift from seeing herself as a woman to seeing herself as a mother. From seeing herself as a sexual being to seeing herself as a maternal one.
"What have I gotten myself into?" isn't a question with a remedy, unfortunately, except for the cruel, "Deal with it." And because, I'm told, men are from Mars, they're often not the most sympathetic persons and often fail to come through. However, a couple who are pregnant for all of the right reasons or who have put themselves into the big picture of what pregnancy is all about will ultimately find a way to stay afloat on this endless sea of uncertainty. It's a "sea-legs" sort of thing and a matter of self-perspective. If the relationship between the expectant parents is good, mild depression need only be a temporary reaction to a permanent change in one's life.
Clinical Depression in Pregnancy
Alterations in thinking, delusions, or hallucinations, however, push the diagnosis of depression, categorically, into a psychosis. After delivery, postpartum depression is a serious illness to be distinguished from the "postpartum blues." Thought disorder can get fairly creepy, in that the mother starts having threatening thoughts about the baby. But a woman so afflicted isn't any more afflicted after birth than she would be any other time if she's got a history of mood disorders. According to the American College of Obstetricians and Gynecologists, Technical Bulletin 182, "Depression in Women,"
"Major clinical depression has been thought to be more common following childbirth than during other periods of a woman's life. However, current studies do not substantiate this belief. Women at risk for significant postpartum depression are more likely to have a family history of depression, a previous postpartum depression, or significant adjustment problems with childbirth. It has been demonstrated that women who have a planned pregnancy in a secure environment, enjoy a supportive relationship with their partner, and have manageable levels of life stress are less likely to experience postpartum depression."
Depression threatens not only the new mother's well-being but her marriage as well. Often a new father, dealing with issues of the new world order himself, won't understand why such a wonderful time is being ruined by a bad mood, an attitude, or anger misdirected at the most likely victim in this drive-by shooting: him! Obstetricians, nurses, social workers, midwives, doulas, even lactation nurses can be a crucial help in recognizing depression and counseling a husband on the pathology involved and on how this illness needs as much patience as convalescence from any physical illness.
During pregnancy, real depression is a high risk situation which tends to make patients prone to non-compliance with their prenatal care (keeping appointments, eating right, doing what's best for the baby). Substance abuse, either prompting the depression or because of it, doesn't mix well with a developing baby. The legitimate drugs for depression are also a concern, but they are weighed as a risk vs. benefit decision. But in true depression, the benefit usually far outweighs any potential risks.
The woman who is doing fine on today's anti-depressants but then gets pregnant will have worries over what a particular medicine might do to her baby. Luckily, there's been an explosion of effective and for the most part safe antidepressant drugs over the last fifteen years, and most of these patients will be on the newer, modern drugs. The seriously depressed patient may still present on the older stuff, however, and a switch to a safer medicine may present risks of worsening her condition.
DRUGS FOR DEPRESSION DURING PREGNANCY
|(The American Academy of Pediatrics feels the permanent effects of antidepressants on the nursing infant to be unknown and therefore doesn't officially sanction their safety at this time.)|
Below is a simple review of drugs used for both clinical depression and the simpler mood disorder of "feeling depressed."
The New Stuff
The "selective serotonin re-uptake inhibitors" (SSRIs)
These drugs keep the levels of serotonin higher in the brain. Serotonin is a neurotransmitter that rises and falls, affecting mood and well-being. These medicines decrease the amount of serotonin that is reabsorbed, keeping the levels higher and constant.
|Zoloft (sertraline)||FDA Class B. Probably safe.|
|Prozac (fluoxetine)||FDA Class B. Probably safe. There's more data on Prozac than Zoloft, prompting one to think that Prozac may be safer. But it's just that there's more data exonerating Prozac.|
|Paxil (Paroxetine)||FDA Class D. Not recommended as there is evidence of fetal harm.|
|Luvox (fluvoxamine)||FDA Class B. Probably safe. This drug not only keeps levels of serotonin up, but also decreases the re-uptake of dopamine, another "feel good" neurotransmitter. In fact, dopamine is the neurotransmitter that is especially high in addictions, and its fall is associated with the unpleasant physical suffering called withdrawal. This is the whole idea behind using Wellbutrin (aka, Zyban) (a dopamine re-uptake inhibitor) to quit smoking (see next line).|
|Wellbutrin, Zyban (Bupropion)||FDA Class B. Probably safe; in fact, its safety has been well established in that there haven't been any reports of problems with it.|
|Effexor (venlafaxine).||FDA Class B. Probably safe. This drug's actual mechanism is unknown, but it works probably by increasing the neurotransmitter activity as well. There have been some concerns regarding an increase in blood pressure with Effexor, and this side effect would be particularly confusing in a pregnancy because of the usual vigilance for Pregnancy-induced Hypertension (PIH, formerly called "toxemia" or "pre-eclampsia").|
|Buspar (buspirone)||FDA Class B. Probably safe. As described above, this decreases the re-uptake of dopamine.|
|Atarax (or Vistaril) (Hydroxyzine)||FDA Class C. There's a small possibility of abnormalities if given during the first trimester. But Vistaril is a popular anti-nausea drug that is used commonly during pregnancy. In my practice, I will just try to avoid it during the first trimester, and I don't generally use this as a first choice for anxiety, Buspar being a better choice.|
|Xanax||FDA Class D. See below, "The bad guys in pregnancy."|
The Bad Guys in Pregnancy
|Valium (diazepam)||FDA Class D. Not recommended. Quick to reach the fetus, but slow to clear, this drug has been associated with facial development abnormalities, cleft lips and palates, growth retardation...Do I need to go on? These warnings are for the chronic use or abuse of Valium. (Using it acutely in a seizure situation probably doesn't have the same dangers, not to mention that using it for dire emergencies is better than not using it.)|
|Xanax (Alprazolam)||FDA Class D. Not recommended. This drug is related to Valium and all of the above apply. In my practice, I've found it more addictive than the serotonin re-uptake inhibitors. (Actually, I've never had trouble getting patients off of the serotonin re-uptake inhibitors, but I can't say the same for Xanax. Of course, in all fairness to Xanax, it's probably used most frequently against labeling instructions.)|
|Dalmane (flurazepam)||FDA Class D. Same story, it's an "-azepam" drug. In fact, it's a good idea to stay away from any drug whose generic name ends in "-azepam." Dalmane is usually used as a sleeping pill. Withdrawal in newborns is always a possibility with the "-azepams."|
|Ativan (lorazepam)||Lorazepam is a benzodiazepine. Lump it in with the above "bad guys."|
|Tranxene (clorazepate)||Also an "-azepam," actually, a "-diazepine." Stay away from it, too.|
The Older Depression Drugs
|Elavil (amitriptyline)||FDA Class D. Probably safe, but it earns the "D" status because there have been rare reports of deformities. "Safe for most pregnancies" doesn't reassure the one person whose baby is affected adversely.|
|Tofranil (Imipramine)||FDA Class D. Same story as Elavil. Also, one mustn't forget the withdrawal a newborn may suffer at the hands of these drugs.|
In The Real World--Are Depression Drugs Safe In Pregnancy?
So in a typical ObGyn practice, what's typical? The SSRIs are so safe, it seems, that I'll use them (especially Wellbutrin and Prozac) during the 2nd and 3rd trimesters even for mild depression if the depression is interfering with a patient's happiness. I won't venture so casually into the 1st trimester unless the depression is severe. If severe depression involves any type of thought disorder, I'll insist that a psychiatrist be involved.
Breastfeeding risks of using depression drugs should be left to the Pediatrician. If the benefit of the drug during breastfeeding outweighs the risk, still the new mother may want to switch to formula so that she can treat herself without worry.
Outside of pregnancy and breastfeeding, I feel that the SSRIs are a very safe drug class and use them for anything from PMS (now melodramatized as "premenstrual mood dysphoria") to depression. Interestingly, Prozac is now available as Sarafem for PMS, the subject of another article.
For just anxiety in the non-pregnant woman (without depression), I prefer Buspar or the newer Celexa, and I don't usually use Xanax, Valium, or the diazepam class. This is because anxiety tends to be a long term problem, and the diazepam class is not a safe long term drug because of the addictive potential.
Plot Thickens--Treating Depression in Pregnancy
The neurotransmitter aspect of treating disorders is all the rage in pharmaceutical research. Just take serotonin, for instance. There are some serotonin receptors that affect mood, some that affect appetite (the serotonin re-uptake inhibitor Meridia is used in dieting), and other receptors that do other things. The SSRIs hint at an exciting growth direction for drugs for many diseases. True, we've had our disasters (the Redux fiasco years ago, which resulted in heart damage), but for the most part this direction has yielded many safe drugs that seem to have minimal risk in pregnancy.
That's good news, because pregnant people get sick, too.
|
<urn:uuid:0c93e235-c5f3-4ef5-88ab-c6a0cc3ee3ab>
|
CC-MAIN-2016-26
|
http://www.gynob.com/depression.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783398869.97/warc/CC-MAIN-20160624154958-00166-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.950008 | 3,179 | 2.609375 | 3 |
ON THIS PAGE: You will find information about how many children are diagnosed with this type of tumor each year. You will also learn some general information on surviving the disease. Remember, survival rates depend on several factors.
In the United States, about 500 children are diagnosed with a Wilms tumor each year. It accounts for about 5% of all childhood cancers. Wilms tumor occurs most often in young children between the ages of 3 and 4. It is uncommon after age 6.
The 5-year survival rate tells you what percent of children live at least 5 years after the cancer is found. Percent means how many out of 100. The 5-year survival rate for children with Wilms tumor is 92%. However, the rate varies according to the stage of the disease.
Stage I, II, and III tumors with a favorable histology have a 4-year survival rate that ranges from 94% to 99%. Stage IV and V tumors have a 4-year survival rate of 86% and 87% respectively. Survival rates for tumors with an anaplastic histology are lower in each category and range from 83% for children with a Stage I tumor to 38% for Stage IV and 55% for a Stage V tumor.
It is important to remember that statistics on how many children survive this type of cancer are an estimate. The estimate comes from data based on children with this cancer in the United States each year. So, your child’s risk may be different. Doctors cannot say for sure how long any child will live with a Wilms tumor. Also, experts measure the survival statistics every 4 or 5 years. This means that the estimate may not show the results of better diagnosis or treatment available for less than 4 or 5 years. Learn more about understanding statistics.
Statistics adapted from the American Cancer Society's (ACS) publication, Cancer Facts and Figures 2016, and the ACS website.
|
<urn:uuid:8739a8ea-6b9d-4967-928f-b3819bf4c698>
|
CC-MAIN-2016-26
|
http://www.cancer.net/print/19337
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783399385.17/warc/CC-MAIN-20160624154959-00027-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.956067 | 447 | 3.171875 | 3 |
XML and Java together are ideal building blocks for developing Web services and applications that access Web services. XML is the standard for exchanging data across disparate systems, and Java provides a platform for building portable applications. One way to access and use an XML document from the Java programming language is through a SAX or DOM parser. Developers now have another Java API that makes it easier to access XML documents: Java Architecture for XML Binding (JAXB). JAXB lets Java developers map between Java classes and their XML representations, marshalling Java objects into XML and unmarshalling XML back into Java objects in memory. Because regularly changing XML schema definitions can make hand-maintained parsing code time consuming and error prone, JAXB is especially useful in such situations.
The general steps to use the JAXB API are (broadly, as described in the standard JAXB documentation):
1. Bind the XML schema to Java classes, either by generating classes with the JAXB binding compiler or by annotating existing classes.
2. Create a JAXBContext for those classes.
3. Create an Unmarshaller (to read XML into Java objects) or a Marshaller (to write Java objects out as XML).
4. Unmarshal or marshal the document, then work with the resulting content tree as ordinary Java objects.
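Below is a minimal sketch of those steps. The Customer class, its field names, and the customer.xml file name are hypothetical examples, not part of any particular tutorial; the javax.xml.bind package shown is the JAXB API as it shipped with Java at the time this page was written (later releases moved it to jakarta.xml.bind).

import java.io.File;
import javax.xml.bind.JAXBContext;
import javax.xml.bind.Marshaller;
import javax.xml.bind.Unmarshaller;
import javax.xml.bind.annotation.XmlRootElement;

public class JaxbDemo {

    // A simple class JAXB can map to and from XML.
    @XmlRootElement
    public static class Customer {
        private int id;
        private String name;
        public int getId() { return id; }
        public void setId(int id) { this.id = id; }
        public String getName() { return name; }
        public void setName(String name) { this.name = name; }
    }

    public static void main(String[] args) throws Exception {
        Customer c = new Customer();
        c.setId(42);
        c.setName("Alice");

        // Step 2: the context knows how to bind the listed classes.
        JAXBContext context = JAXBContext.newInstance(Customer.class);

        // Steps 3-4 (marshal): Java object -> XML file.
        Marshaller m = context.createMarshaller();
        m.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT, Boolean.TRUE);
        m.marshal(c, new File("customer.xml"));

        // Steps 3-4 (unmarshal): XML file -> Java object.
        Unmarshaller u = context.createUnmarshaller();
        Customer restored = (Customer) u.unmarshal(new File("customer.xml"));
        System.out.println(restored.getName());
    }
}

Running this writes a small customer.xml document and then reads it back into a new Customer instance, with no hand-written parsing code.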
Posted on: November 18, 2009
|
<urn:uuid:1372589e-a9c9-49b4-bea5-f321c1d7d78f>
|
CC-MAIN-2016-26
|
http://www.roseindia.net/help/java/x/java-architecture-for-xml-bindin.shtml
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783393332.57/warc/CC-MAIN-20160624154953-00065-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.864337 | 211 | 3.234375 | 3 |
Psychotherapy is therapy in which a person with mental or emotional problems talks with another person. This other person may be a psychiatrist, psychologist, counselor, clinical social worker, member of the clergy, alternative practitioner, or any other helpful person. With successful psychotherapy, a client experiences positive change, resolving or mitigating troublesome behaviors, beliefs, compulsions, thoughts, or emotions. Ideally, these are replaced with more pleasant and functional alternatives. Psychotherapy is an interactive process between a person or group and a psychotherapist, and it aims to increase the individual's sense of well-being. Psychotherapists employ a range of techniques designed to improve the mental health of a client or patient, or to improve group relationships.
Related Journals of Neurorehabilitation Psychotherapy
Journal of Psychology & Psychotherapy, Journal of Psychiatry, International Journal of Emergency Mental Health and Human Resilience, Journal of Depression and Anxiety, NeuroRehabilitation, Journal of Contemporary Psychotherapy, Journal of Family Psychotherapy, The Arts in Psychotherapy, Psychotherapy Research, Journal of Psychotherapy Integration
|
<urn:uuid:8fefb266-c75e-4898-889a-f68955f7d798>
|
CC-MAIN-2016-26
|
http://www.omicsonline.org/scholarly/neurorehabilitation-psychotherapy-journals-articles-ppts-list.php
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396538.42/warc/CC-MAIN-20160624154956-00069-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.852322 | 230 | 2.890625 | 3 |
Romanian-language version HERE.

In the Catholic tradition (among the Italians in Dobrogea and the Catholic communities of Tulcea and Sulina), the religious integration of the newborn takes place in three stages: Baptism, Chrism, and Eucharist. Among Roman Catholics, baptism is performed by aspersion. The Church recommends that the sacrament be celebrated at the Easter Vigil or on a Sunday morning, in the first weeks after childbirth. The child is held in the arms of the godfather or godmother; the priest puts a tray under the chin, and the baby is sprinkled with water.

|Sequence from the temporary exhibition Mother's Dear Babies (Ethnographic and Folk Art Museum Tulcea)|
|
<urn:uuid:a8b8fc38-7e0f-4d8b-b893-86063f45c9e0>
|
CC-MAIN-2016-26
|
http://ethnopov.blogspot.com/2012/10/mother-s-dear-babies-baptism-at_10.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396106.25/warc/CC-MAIN-20160624154956-00069-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.908836 | 158 | 2.53125 | 3 |
If you look at our Sun the right way, it is magnificent. For proof, I offer this stunningly beautiful video of the nearest star taken on July 19, 2012 by NASA’s Solar Dynamics Observatory. If you can watch this without your jaw hanging open and your mind aflame with wonder, then I cannot help you.
What you’re seeing is the profound impact of magnetism on the material in the Sun. I’ve described this effect before (with lots of juicy details here), but in a nutshell: The gas inside the Sun is so hot it’s ionized, stripped of electrons. When that happens it’s more beholden to magnetism than gravity, and when the magnetic field lines pierce the Sun’s surface they form loops along which the ionized gas (called plasma) flows along them.
The bright flare happens when the stored magnetic energy erupts outward, usually due to what is essentially a short-circuit in the field. That happens near the beginning of the video, and is so bright it saturates SDO’s detectors (and you can see repeated ghost images to the upper left and right of the flare as the light reflects inside SDO’s optics). Then things settle down, and that’s when the beauty really begins: The plasma flows down the loops, raining down onto the Sun’s surface.
And it goes on and on. This video represents a total elapsed time of over 21 hours.* The energy flowing along those magnetic loops is immense, enough to power our entire planet for many millennia. Note the part about a minute in when the size of the Earth is shown for comparison. Incredible.
Although this looks like fire, it’s not. The images used to make this video are in the far ultraviolet, showing gas at a temperature of nearly 100,000° Celsius (180,000° F). The color is added after the fact to make details easier to see—but those fiery red, yellows, and oranges do tickle the imagination, don’t they? The color adds to the impression of dancing energy and heat.
That barely constrained violence can be difficult to square with the grace and elegance of the motion. The Sun can damage our civilization, yet we also depend on it for our existence. But there you go: The Universe is full of such dichotomies.
It is harsh, inhospitable, destructive, and capable of crushing indifference.
It is pleasing, habitable, serene, and capable of life-altering beauty.
I wouldn’t have it any other way.
* [Correction: I had originally said this eruption lasted over nine hours. While technically correct, it actually went on for over 21 hours.]
|
<urn:uuid:2c392fb8-a9ec-4a6f-afe1-85e084d90be4>
|
CC-MAIN-2016-26
|
http://www.slate.com/blogs/bad_astronomy/2013/02/21/coronal_rain_streams_of_ionized_gas_rain_on_sun_after_a_solar_flare.html?wpisrc=most_viral
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783403826.29/warc/CC-MAIN-20160624155003-00155-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.913267 | 569 | 2.875 | 3 |
Karagandy Province is a province of Kazakhstan. Its capital is Karaganda. The population of the province is 1,375,000. The province was the site of intense coal mining during the days of the Soviet Union and was also the site of several gulag forced-labor camps. Following World War II, Joseph Stalin, leader of the Soviet Union, had many ethnic Germans deported to the area.
With an area of 428,000 sq. km, Karagandy Province is the largest province. Although it doesn't touch the borders of any country, it touches nearly every other province. They are: Aktobe Province to the West; Kostanay Province to the Northwest; Akmola Province to the North; Pavlodar Province to the Northeast; Almaty Province to the Southeast; South Kazakhstan Province to the South; and Kyzylorda Province to the Southwest. The Ishim (Esil) River, a tributary of the Irtysh River, begins in Karagandy Province. The area is arid and flat, given to plains with occasional hills and seasonal streams. Karkaralinsk Nature Park, covering 90,300 hectares, is located in the province.
|
<urn:uuid:62762743-6324-413d-9e89-15f98871d96a>
|
CC-MAIN-2016-26
|
http://www.touristlink.com/kazakhstan/karagandy/overview.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396027.60/warc/CC-MAIN-20160624154956-00156-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.956434 | 248 | 2.6875 | 3 |
There are two kinds of glare.
Disability glare results when a light source reflects from or otherwise covers the visual task, like a veil, obscuring the visual target, reducing its contrast and making the viewer less able to see and discriminate what is being viewed. The problem is illustrated with the drawing below.
In the example below, bright light from a ceiling light fixture or skylight is reflected from the visual task surface, and into the observer's eyes, veiling his recognition of the target visual content. Nearly as much light is reflected from the white paper as from the black ink making the letters, so that the contrast is low and the text is washed out and difficult to read. Such glare "disables" the process of reading.
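One way to quantify this, as a rough worked example using a common textbook definition of contrast (not a figure from this page): contrast C = (Lpaper − Link) / Lpaper, where L is luminance (photometric brightness). A veiling reflection adds the same extra luminance Lv to both the ink and the paper, so C becomes (Lpaper − Link) / (Lpaper + Lv). The numerator is unchanged while the denominator grows, so the contrast always drops; the brighter the reflection, the closer the contrast gets to zero and the harder the letters are to read.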
Discomfort glare arises when light from the side of the task is much brighter than the light coming from the task. The eyes attempt to focus on the light from the task, but so much extra light is entering the eye from the side that the visual processes are confused and it is difficult to concentrate for long periods. The geometry is illustrated below.
In the example below, light from a window enters the reader's eyes and makes it difficult to see the lesser amount of light coming from the reading task. Prolonged exposure to such conditions can result in headaches and eye fatigue.
There are many potential sources of glare within buildings. They include direct sunlight, reflected beam sunlight, a bright window surrounded by dark walls and furnishings, poorly designed electric lighting systems, and improperly used luminaires. The first step in avoiding direct beam sunlight is to understand the paths the sun takes through the sky each day. The extremes are illustrated in these drawings for the summer and winter solstices in the northern hemisphere.
In summer, the sun rises north of due east, sets north of due west, and is high in the sky at noon. The more northern the latitude, the lower in the sky the Sun will be at noon.
A solution is to shade the window from direct sunlight using a roof overhang to block the high sun near noon. Note that the sun is between west and northwest in the afternoon in summer in the northern hemisphere. Judicious use of shades can block direct sunlight during times of peak glare and summertime solar heating while still admitting natural daylight from the sky and surrounding scenery and objects.
In winter, the Sun rises south of due east, sets south of due west, and is low in the sky at noon. The winter sun lies between southeast and south in the morning, so shading an east window on midwinter mornings will require a way to block the sun over the range from southeast to east. For a west window on midwinter afternoons we'll need some means of blocking the sun over the range from southwest to west. Overhangs are not as effective at blocking midday sun in winter because the sun is significantly lower in the sky then.
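As a rough worked example (using the standard rule of thumb for sun position, not a figure from this page): the sun's altitude above the horizon at solar noon is approximately 90° minus your latitude, plus the solar declination (+23.4° at the summer solstice, −23.4° at the winter solstice). At 40° N latitude, that gives roughly 90 − 40 + 23.4 ≈ 73° in midsummer but only 90 − 40 − 23.4 ≈ 27° in midwinter, which is why a fixed overhang sized to block the high summer noon sun lets the low winter sun pass underneath.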
It is not just the direct sunlight that can be glaring, but its reflection from objects in the room can also produce serious glare, as illustrated in the drawing below.
A solution is to place a tree or other shading device outside the window in just the right position to block direct sunlight entry when the sun is in the portion of the sky producing the greatest glare problem. As seen below, there is great variety in the kinds of exterior shades available to alleviate this problem.
Glare can come not only from direct beam sunlight, or its reflections, but from the window itself, even when no direct sunlight enters. When the scene outside the window is bright, this makes the window appear much brighter than the surrounding features in the room, causing it to be a potential glare source, according to the two definitions of glare listed above.
Two obvious solutions in this case are
• Make the window less bright in comparison with the room surfaces
• Make the room brighter, to better match the brightness of the window
In the first case, you can put a tinted plastic covering (window film), or an interior shade on or covering the window glass. If this window is the only source of light in the room, however, the ratio of window to room brightness will be approximately as before, and the glare problem will not be solved. If the room is electrically illuminated, however, window film can be an effective glare strategy. This is not an energy efficient solution, however, since it requires more electric lighting than would be the case with a well-designed daylighting system for the room.
In the second case, you can increase the electric lighting in the room, to illuminate room surfaces better and better matching the brightness of the window. Of course this costs more electrical energy, so a better option would be to paint the room with a higher reflectance paint, making the walls brighter. Extending this to the ceiling (if darker than needed) and the room furnishings and floor covering will provide additional help in brightening the room.
If we are determined to minimize daytime electric lighting energy use, however, another solution would be to make the window larger. This may seem counter intuitive, until you realize that a larger window is no brighter—point for point across its aperture—than a small one seeing the same exterior scene, but the larger window allows more daylight flux into the room, better illuminating the walls, ceiling, and floor. This makes the room surfaces brighter, closer to the brightness of the window. If you are in a hot climate, however, and the window is oriented to admit direct beam solar radiation for significant portions of the hottest parts of the day, this good glare mitigation strategy can have adverse comfort and energy consequences, since the extra solar heat gain can produce localized overheating and make the air cooling system work harder, at higher energy and dollar costs.
The problem is illustrated in these drawings.
If there is no significant direct beam solar entry and the windows all see skylight of approximately the same brightness, then:
|
<urn:uuid:d1631afe-2fa7-4173-bf21-cf4fb05df65c>
|
CC-MAIN-2016-26
|
http://www.floridaenergycenter.org/en/consumer/buildings/basics/windows/how/glare.htm
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783403508.34/warc/CC-MAIN-20160624155003-00195-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.945706 | 1,202 | 3.515625 | 4 |
The Windigo (also known as the Wendigo, Windago, Windiga, Witiko, Wihtikow, and numerous other variants) is an unknown hairy hominoid tied to the legends and folklore of First Nations people linked by the Algonquin languages.
The Windigo is a bipedal hairy creature, equal to the Eastern Bigfoot, Stone Giant, or Marked Hominid in some classification systems, which is often said to have aggressive behaviors and a malevolent cannibalistic spirit into which humans could transform, or which could possess humans. Those who indulged in cannibalism were at particular risk, and the legend appears to have reinforced this practice as taboo.
Windigo Psychosis is a culture-bound disorder which involves an intense craving for human flesh and the fear that one will turn into a cannibal. Some ethnographers said this once occurred frequently among Algonquian cultures, though there is some sense that the psychological disorder may have been overstated and/or it has declined with Native American urbanization.
"Old Yellow Top," a regional name for a Windigo, the original Forest Giant of cryptozoology, not psychology, drawn by Harry Trumbore in The Field Guide to Bigfoot and Other Mystery Primates.
In John Green's Sasquatch: The Apes Among Us, the researcher mentions that the Cree in Manitoba call their Sasquatch the Weetekow and the Saulteaux term them the Wendego. These are variations on the spelling of Windigo seen throughout Canada, but not on the hairy hominoid being described, as I discuss extensively in Chapter 3, "Native Traditions," in Bigfoot! The True Story of Apes in America, pages 26-34.
Writing in The Edmonton Sun, columnist Andrew Hanon points out that there is a strange overlap between a column he wrote on July 20, 2008, about the study of the psychological Windigo by Nathan Carlson, and the recent Greyhound bus decapitation and cannibalism incident of July 30.
Hanon writes on Monday, August 11, 2008, that there seems to be a "Horrifying coincidence in beheading."
Hanon writes, in part:
Nathan Carlson has barely slept since July 30.
"Ever since it happened, I haven't been able to get it out of my head," Carlson says haltingly. "I just don't know what to think of it, quite frankly."
The Edmonton ethno-historian is one of the world's leading experts on Windigo phenomenon, and the recent horrific beheading and alleged cannibalism on a Greyhound bus bound for Winnipeg from Edmonton rocked him to his very core.
As the grisly details of Tim McLean's last moments on Earth came to light in the following days, Carlson sank deeper and deeper into a fog of horror and revulsion.
Vince Weiguang Li is accused of abruptly attacking McLean, who by all accounts he didn't even know -- while McLean slept on the bus.
Up until a few days before the killing, Li held a part- time job delivering newspapers in Edmonton. He was well thought-of by his boss and considered a nice guy, if a bit quiet and shy.
On July 20 -- just 10 days before the killing -- Li delivered copies of the Sun that contained an extensive interview with Carlson about his research into the Windigo, a terrifying creature in native mythology that has a ravenous appetite for human flesh. It could take possession of people and turn them into cannibalistic monsters.
Below, Vince Weiguang Li, the 40-year-old suspect being transported by Canadian law enforcement personnel, and the headshot of his victim, Tim McLean.
More MySpace photos of Tim McLean, who called himself "Jokawild."
Carlson documented several cases in northern Alberta communities where people believing they were "turning Windigo" would go into convulsions, make terrifying animal sounds and beg their captors to kill them before they started eating people.
In last month's bus case, Li allegedly butchered McLean's body, brandishing the victim's severed head at the men who trapped him on the bus until police could arrive.
He was later accused of eating McLean's flesh.
When he appeared in a Portage La Prairie courthouse on charges of second-degree murder, the only words Li reportedly uttered were pleas for someone to kill him.
A lot of his reported behaviour eerily mirrors the Windigo cases recounted in the newspaper feature that Li helped deliver to Edmonton homes just days before McLean was killed, one of the most gruesome slayings in modern Canadian history.
Several media reports called McLean's killing unprecedented - an unspeakable, random attack the likes of which has never been seen in Canada.
But Carlson knows better.
"There are just too many parallels," he says.
"I can't say there's definite connection, but there are just too many coincidences.
"It's beyond eerie."
As the following article is rapidly disappearing into the archives, so here is the full story mentioned above, for research purposes.
Sun, July 20, 2008
Evil spirit made man eat family
A look back at Swift Runner
By Andrew Hanon
On a cold December day in 1879, a man was hanged in Fort Saskatchewan, putting an end to one of the most horrifying killing sprees in Alberta history.
Swift Runner was executed for murdering and then eating eight members of his own family over the previous winter. He believed he was possessed by Windigo, a terrifying mythological creature with a ravenous appetite for human flesh.
It wasn’t an isolated case. During the late 1800s and into the 20th Century, fear of Windigo haunted northern Alberta communities, resulting in several grisly deaths.
Sun Media’s Andrew Hanon speaks with Nathan Carlson, one of the world’s leading authorities on Windigo, about Carlson’s personal connection to the blood-curdling creature.
Some call him a serial killer.
Others call him a desperate madman.
But right up until the trap door swung open and the rope snapped taut around his neck, one of Alberta’s most prolific murderers insisted it was an evil spirit that compelled him to butcher and eat his entire family.
Over the course of a single winter, he devoured his wife, six children, mother and brother.
The man, a Cree trapper named Swift Runner, was hanged in 1879 in Fort Saskatchewan, the first legal execution in Alberta. The macabre case is considered by many to be the most horrifying crime in the province’s history.
But what most people don’t realize is that it was part of a much larger phenomenon that Edmonton ethno-historian Nathan Carlson calls Windigo condition, which haunted communities right across northern Alberta in the late 19th and early 20th Centuries and cost dozens of lives.
The Windigo (an Anglicized form of the word Witiko) is a mythological creature among native cultures from the Rockies to northern Quebec. It has an insatiable appetite for human flesh and wreaks destruction wherever it goes.
Carlson describes it as “the consummate predator of humanity.” It’s sometimes described as “an owl-eyed monster with large claws, matted hair, a naked emaciated body and a heart made of solid ice.”
“It’s extremely destructive,” he says. “The more it eats, the hungrier it gets, so it just keeps killing.”
Windigos can possess people, transforming them into wild-eyed, violent, flesh-eating maniacs with superhuman strength. Many native people in northern Alberta lived in terror of being possessed.
“It’s important to understand that cannibalism was repellent to the people,” Carlson explains. “The Windigo personified evil.”
The Swift Runner case caused an international sensation, making headlines in newspapers across Canada and the U.S.
According to accounts, he wandered alone into the Catholic Mission in St. Albert in the spring of 1879, claiming to be the only member of his family who didn’t starve to death over a particularly cold, bitter winter.
The priests became suspicious when they realized that Swift Runner, who weighed around 200 pounds, didn’t seem malnourished at all and was plagued with screaming fits and nightmares as he slept. He told them he was being tormented by an evil spirit, called Windigo, but said little else about it.
They reported their misgivings to police, who took Swift Runner to his family campground in the woods northeast of Edmonton, where they made a horrific discovery - the site was littered with bones, bits of flesh and hair. Some accounts claim that the larger bones had even been snapped and the marrow sucked out.
He eventually confessed that he shot some of his family, bludgeoned others with an axe and even strangled one girl with a cord. In some accounts, Swift Runner said he fed one boy human flesh before he too was killed.
‘THE LEAST OF MEN’
Before he was hanged, Swift Runner expressed extreme remorse. He told Father Hippolyte Leduc, “I am the least of men and do not merit even being called a man.”
Interestingly, Swift Runner is the only documented case Carlson can find of someone killing others because he thought he was possessed by a Windigo.
All other deaths he can document were cases of “Windigo executions,” where others have killed the person believed to be possessed. They were acts of self-preservation, attempts to protect their community.
In most of the cases, the victims themselves begged to be killed before they harmed their families.
In many cases, witnesses reported physical changes -bodies swelling and growing, lips and mouths enlarging. Some of the victims spoke of icy cold in their chests and an inability to warm up.
Carlson, who’s Metis, first heard about the Windigo from his grandmother, who told him about an incident at Trout Lake, where members of the community killed a man possessed by a demon that had been cursed and turned into a Windigo.
The story haunted him throughout his childhood, and after his grandmother died in 2002, he discovered an eerily similar story in an archived newspaper.
“I was somewhat confounded by the discovery of the newspaper account that seemed to confirm a story that had been in my family for almost 100 years,” he says.
Further research revealed that the man who was killed was also a distant relative of Carlson’s.
Carlson is now writing a book on the Windigo condition in northern Alberta and is negotiating with filmmakers about a documentary.
The Marked Hominid, another view of the Windigo of cryptozoology, not psychology, as drawn by artist Harry Trumbore in The Field Guide to Bigfoot and Other Mystery Primates.
|
<urn:uuid:df1922e6-1989-4b10-adc9-762cb32c6396>
|
CC-MAIN-2016-26
|
http://copycateffect.blogspot.com/2008_08_01_archive.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396147.66/warc/CC-MAIN-20160624154956-00052-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.973307 | 2,292 | 2.671875 | 3 |
Interview: Dr. George Demetri
< Interviews with Experts
Dr. George Demetri is director of the Center for Sarcoma and Bone Oncology at Dana-Farber Cancer Institute, director of the Ludwig Center at Dana-Farber/Harvard Cancer Center, and executive director for Clinical and Translational Research at the Ludwig Institute for Cancer Research. He is also Associate Professor of Medicine at Harvard Medical School.
- Defining Cancer & Metastasis
- Understanding Chemotherapy Side Effects
- The Leading Causes of Cancer Deaths
- Results Since Nixon Declared the War on Cancer
- How Cancer Begins
- Targeted Therapies & Smart Drugs
- Detection vs. Treatment
Defining Cancer & Metastasis
GARMON: What is cancer?
DEMETRI: Cancer represents many different diseases, but at its heart cancer is an uncontrolled growth of cells, which then attain the quality of being able to spread through the body and get in the way of the normal body working mechanisms.
GARMON: Can you define what a metastasis is?
DEMETRI: A metastasis is a group of cancer cells that have broken away from wherever they started in the body and are now living somewhere else inside the body. The problem with metastases are that they get in the way of the normal functioning of the organ in which they find themselves. So breast cancer cells can metastasize or spread to the lung and get in the way of the lung function. It could spread to the bone and cause pain. It could spread to the brain and cause major problems.
GARMON: Now can you define what a distant metastasis is? There is a lot of talk about how much lower your chances are of survival in the common cancers once you get a distant metastasis versus a regional one. So in that context, can you explain a distant metastasis please?
DEMETRI: Over the last 50 years a lot has been researched, mainly by surgeons, who have studied how cancers start in one place but then move elsewhere through the body either close to where they started, often in a lymph node, or far from where they started. Let's say if it starts in the breast, does it wind up in the liver or the brain, something very far from where it started? When it goes very far from where it started, that's a distant metastasis. The close ones, like a node metastasis or a regional metastasis may have important information about whether a cancer cell has the potential to spread distantly through the body and run the risk of killing the patient.
GARMON: When you have a distant metastasis for the leading causes of cancer deaths, for the common cancers, what then becomes the survival rate or the ratio—you could say 1 out of 10. How many patients can actually live more than 5 years once they have a distant metastasis in common cancers?
DEMETRI: If a cancer has already spread distantly in the body it really represents a grim prognosis for that patient. Generally, with the common cancers, survival rates beyond 10 years, let's say from the time of a distant metastasis, would occur only in less than 1 out of 10 patients, maybe 1 out of 20. So it's a very rare thing. We don't know what makes those patients lucky enough to live that long, but the vast majority of patients, 9 out of 10, maybe even 19 out of 20, would die from their cancer once it has spread distantly.
Understanding Chemotherapy Side Effects
GARMON: Why do conventional chemotherapy treatments cause side effects? You were using examples of what are the cells in the body that are rapidly—
DEMETRI: Growing, dividing.
GARMON: I was going to say dividing, but—
DEMETRI: Right. The standard cancer chemotherapy drugs are generally poisons that can kill dividing cells, cells that are making copies of themselves. Cancer cells make copies of themselves, but so do some normal cells. Normal cells like the cells responsible for the hair or responsible for the lining of the mouth or for the bone marrow that create new blood cells. This process of cell division is critical so that we can make blood cells and it's why cancer chemotherapy commonly knocks down the production of those blood cells because the normal dividing cells are inhibited by the chemotherapy. This is also why older cancer chemotherapy actually works because it can kill dividing cells non-specifically. So it has all those side effects on normal cells as well.
GARMON: Well, let's just use why you lose your hair as one example of one of the side effects. Okay, so why do conventional chemos cause that?
DEMETRI: Traditional cancer chemotherapy drugs cause side effects because they non-specifically hurt dividing cells. Cancer cells are dividing, but so are a lot of normal cells in the body specifically cells like the ones that give rise to the hair. They are constantly dividing. It's part of how we produce hair. Other cells in the body also divide, such as the cells lining the gastrointestinal tract, so this accounts for two of the more common side effects with chemotherapy, the standard kind of chemotherapy. Hair loss, because the dividing normal cells aren't able to keep up with hair production or mouth sores or other intestinal ailments where the lining cells of the gastrointestinal tract can't keep up with the needs of the body.
The Leading Causes of Cancer Deaths
GARMON: Can you tell me what are the leading causes of cancer deaths?
DEMETRI: In the United States right now the leading causes of cancer deaths are due to the common cancers. Things like lung cancer, colorectal cancer, breast cancer, pancreatic cancer. We certainly see in this list also blood cancers like leukemias and lymphoma as well as prostate cancer.
Results Since Nixon Declared the War on Cancer
GARMON: Since Nixon declared the war on cancer in the United States how much real progress have we made?
DEMETRI: Since the United States under Richard Nixon's presidency in 1972 declared the war on cancer full force, our country has spent a lot on cancer research, and there has been tremendous growth in what we understand about what makes cancer cells tick. It's really only in the last 5 or 7 years, though, that we have been able to apply that knowledge to develop new therapies, and already we have seen tremendous examples of how that has come forward with effective new drugs, but admittedly they have had only a rather marginal impact on outcomes, on survival, for patients with the common cancers. Those of us in the field remain extremely excited that the next decade will be quite different. We are going to be able to understand cancer better, put patients into categories, and pick drugs for them on a very personalized basis, which is very different from what we have done in the last 10 or 20 years. We have in many ways treated all breast cancers the same. We are getting better at personalizing our treatments based on the science that has come out of the last 30 or 35 years of investment in research that this country has made.
How Cancer Begins
GARMON: So how—in the simplest way possible, how does cancer begin?
DEMETRI: There is still a lot of research going on about what makes cancer start, about what the first thing that happens is. One interesting thing is that it is one cell that makes a mistake. One cell makes a mistake in its DNA; that gives rise to a protein that is not working properly, and that leads to the cell growing and dividing abnormally and eventually not sticking where it's supposed to stick and spreading through the body. At its heart, it's all about a cell making a mistake in the code of life, in the DNA. There are different places in the DNA where it can make that mistake, giving rise to different mutant proteins and different signals that drive that cancer cell to act aberrantly, to act as a rogue cell inside the body. Because there are so many different proteins the cell can make a mistake on, the common cancers are very complicated; they are probably making mistakes in many different kinds of proteins, which is one of the things that accounts for the fact that one person's cancer is often very different from another person's cancer even if it carries the same name, even if they both have breast cancer or they both have lung cancer. The cells may have made many different mistakes.
GARMON: But is it no less true and I am really asking this, I'm not asking you to say it. Is it true that for no matter how many circuits are involved or you know aberrant proteins, doesn't just the process begin with one mutation in one cell?
DEMETRI: The question—cancer really starts with one cell. One interesting question is whether that cell makes one mutation, but is that one mutation enough to cause cancer or do you need a certain threshold number of mutations? Do you need a certain number of mistakes to lead to cancer? There is good evidence that cells are making mistakes all the time but the body fights them off. The body can recognize that that cell is a mutant and destroys it or if the cell makes the wrong mistake it actually commits suicide and dies. We have so many protective systems in our body to prevent cancer it's remarkable.
GARMON: So I'm not saying that leads to cancer, I'm just saying isn't it accurate to say that the process begins—it has to begin with one mutation, right? Then that mutation couldn't lead to another mistake which could lead to another which could lead to multiple mutations which could lead to no cell death, which could lead to no—so doesn't it always begin though with one thing—one error in your DNA or we don't know?
Targeted Therapies & Smart Drugs
GARMON: So a lot of what I read and a lot of what people say is, yes, okay we haven't had the greatest number of battles that we would like on the war on cancer since Nixon declared the war on cancer, but trust me we are on the brink now. Now everything—all that $48 million a year that your taxpayer dollars went into that investment, trust me, it's going to pay off now. But what I think in my very naive and uneducated way is well, I feel like I heard that when Interleukin was—you know there was a buzz there and Interferon and that was going to you know wash over many different cancers and it ended up not, so I feel like I have lived through the hype. I am old enough to have lived through the hype of different cancer eras and my question is what if the same thing happens with targeted therapies and smart drugs? What if it doesn't help but a couple of cancers?
DEMETRI: You know this is a very important issue of expectations of the public. I think many of us grew up in the era of the war on cancer and when there is a war maybe one day you will wake up and there is a peace treaty and the war is over. That's not the right expectation here. If we are saying there are thousands of different cancers, what is more likely to happen in this era of smart drugs and targeted therapies is that we will see the war won one battle at a time, one type of cancer, one group of patients at a time. I doubt we are going to wake up someday and say breast cancer is gone. We will however wake up and say we have an important benefit for women with Her-2 positive breast cancer or we have an important new drug for a type of breast cancer that really makes those women live longer. That is where the future is going and it may not be as exciting as the public waking up someday and saying cancer is gone, let's move on to diabetes and arthritis, but it's probably more accurate and that's our job as scientists and doctors to explain that to the public. We are on the brink of multiple, wonderful discoveries and wonderful new drugs. I am confident about that.
GARMON: But what if we are not. I mean because I haven't—do you know what I mean? I mean I know this is different than Interferon and Interleukin but as a lay person, I can only tell you I have heard this excitement before and it didn't pan out. What if smart drugs don't pan out? What if they are great for four little you know subclasses of cancers? It doesn't really manage a lot of common cancers?
DEMETRI: Well, no, let's come back again to the matter of expectations. I don't think that these drugs will routinely be good for all common cancers. I think it's going to be a matter of developing an arsenal of drugs that we then match specifically to individual types of patients. I think that is absolutely inevitable. Everything we know about cancer says it will happen that way. That is exactly what is going to happen. It may not happen tomorrow. It may not happen in two years, but that is the way we are going to fix the cancer problem. It is a very exciting prospect.
GARMON: You are absolutely 100% confident with a scientist's certainty that the concept of smart drugs, solving problems in this way, isn't going to turn out to be the Endostatin of this decade or the limited role that Interferon proved to be able to play in cancer when it was thought it would play a bigger role.
DEMETRI: That's right. I am absolutely confident in the new generation of drugs once we learn to use them for the right patients; the key to all of it is understanding what disease we are treating. If you treat an infection with insulin, you are not going to get anywhere. It's the wrong drug for the wrong disease, and in many ways that's where we have been with some of our drugs: we are giving the wrong drugs to the wrong patients. As we test new drugs, if we give a PI3 kinase inhibitor to a patient who doesn't have the PI3 kinase pathway turned on, it's the wrong drug for the wrong patient. But as we get smarter, as we make better diagnoses and we can say this pathway is turned on and this drug shuts it off, it has that mathematical certainty that if we match them something great will happen.
Detection vs. Treatment
GARMON: But if you put more research money in early detection you might have the technology for it. I mean what's the—are you mixing up the cart and the horse.
DEMETRI: I don't know that that's true.
GARMON: What if we just put all of our money, next year all $48 million—billion dollars or whatever into early detection? You don't think you would pop out of that year with a better technology for early detection?
DEMETRI: I'm not sure. I am honestly not sure. That is like asking: what if you had wanted to build a nuclear reactor in 1492? You could have put the world's entire GNP of 1492 into that problem, and they still wouldn't have done it. The technology wasn't there to do it. So I think this is a question about where we as a society put our resources for maximal impact, and whether we can learn key lessons in the advanced disease setting that will inform our thinking about prevention and about early detection. I will give you an example: one of the most exciting projects we are working on now in patients with metastatic disease is whether we can develop a more accurate blood test to really predict; this whole field of proteomics is about that. So I think we are mixing and matching, because all of us, in our work, are trying to get as early as possible, since that is obviously where the biggest impact will be. I don't know if that is a fair way of answering the question, but I do feel pretty strongly about that.
GARMON: It's nonetheless—it's a poor relation in the cancer field. I mean there is no question that it is less valued right now.
DEMETRI: I wouldn't say less valued. I would say that realistic scientists are successful because they know what they can do. They push the edge of technology, but they know that we are not going to try to build the kind of transponders that you see on Star Trek, because we don't have the technology. It's good in a TV show, but it's not real right now. You go to MIT labs thinking about where we are going to be in 50 years, and, yes, they are getting there, but I think it is also important in science to try to think about the breakthroughs while also working incrementally, and we are seeing a lot of that incremental work lead to breakthroughs once in a while.
GARMON: So what is real in early detection? In all cancers, where is there incontrovertible evidence that if you screen or you know early detect a cancer you can decrease mortality? Which ones is it incontrovertible, the evidence?
DEMETRI: I think it's still controversial in many of the big cancers, even in prostate cancer which is one of the most commonly screened ones with the PSA, the prostate specific antigen test. There is still controversy about how to use that rather simple test.
GARMON: But do you have a short list of ones where the NCI or you would agree that—
DEMETRI: I think most of us look at breast cancer and colon cancer as tumors where early detection probably makes a difference, probably a big difference, and where early detection programs work. In other countries, in Asia for example, screening and early detection programs for stomach cancer, because stomach cancer is much more common over there, have worked to make people live longer. So I do believe early detection is important, and I think in many ways it will be customized to what you are trying to detect early. It would make no sense to try to detect a very rare disease early unless you have a very, very simple test to do it that is painless and cheap.
[Image: Wooden pieces used to build the capital letters]
[Image: Slate used in "Wet, Dry, Try"]
[Image: The Montessori "Brown Stairs"]
[Image: The Montessori Pink Tower]
Before Five in a Row
Before Five in a Row is not a curriculum. It is a book filled with great ideas to bring literature, music, and learning into your child's life.
Google researchers have found a severe flaw in an obsolete but still widely used encryption protocol, which could be exploited to steal sensitive data.
The flaw lies in SSL 3.0, a protocol more than 15 years old that is still supported by modern web browsers and servers. SSL stands for "Secure Sockets Layer"; it encrypts data between a client and a server and underpins the security of most sensitive data sent over the Internet.
Bodo Möller, Thai Duong and Krzysztof Kotowicz of Google developed an attack called "POODLE," which stands for Padding Oracle On Downgraded Legacy Encryption, according to their research paper.
Web browsers are designed to use newer versions of SSL or TLS (Transport Layer Security), but most browsers will accommodate SSL 3.0 if that's all that a server can do on the other end.
The POODLE attack can force a connection to "fallback" to SSL 3.0, where it is then possible to steal cookies, which are small data files that enable persistent access to an online service. If stolen, a cookie could allow an attacker access to someone's Web-based email account, for example.
An attacker would have to control the network a victim is connected to in order to conduct this kind of man-in-the-middle attack. That might be possible in a public area, such as over a Wi-Fi network in an airport.
Security experts have long known SSL 3.0 was problematic. Matthew Green, a cryptographer and research professor at Johns Hopkins University, wrote on his blog that many servers still support SSL 3.0 since they didn't want to lockout users of Internet Explorer 6, a very dated but still used browser.
"The problem with the obvious solution is that our aging Internet infrastructure is still loaded with crappy browsers and servers that can't function without SSLv3 support," Green wrote.
"Browser vendors don't want their customers to hit a blank wall anytime they access a server or load balancer that only supports SSLv3, so they enable fallback," he wrote.
Google has already taken steps to stop encrypted connections from being made using less secure versions of SSL and TLS.
Adam Langley, who works on Google's Chrome browser, wrote on his blog that connections made using Chrome to Google's infrastructure are using a mechanism called "TLS_FALLBACK_SCSV", which prevents downgrading.
"We are urging server operators and other browsers to implement it too," Langley wrote. "It doesn't just protect against this specific attack, it solves the fallback problem in general."
Google is preparing a patch for Chrome that would forbid falling back to SSL 3.0 for all servers, but "this change will break things and so we don't feel that we can jump it straight to Chrome's stable channel. But we do hope to get it there within weeks and so buggy servers that currently function only because of SSL 3.0 fallback will need to be updated."
Major internet companies are already making adjustments to prevent a POODLE attack. CloudFlare, which has a widely used caching service, has disabled SSL 3.0 across its network by default for all of its customers, wrote CEO Matthew Prince.
"This will have an impact on some older browsers, resulting in an SSL connection error," Prince wrote. "The biggest impact is Internet Explorer 6 running on Windows XP or older."
Prince wrote that just 0.65 percent of the HTTPS encrypted traffic on CloudFlare's network uses SSL 3.0. "The good news is most of that traffic is actually attack traffic and some minor crawlers," he wrote.
Opinion: Wind power is helping to provide the path to a better tomorrow
There are now less than two weeks before the UN Climate Change Conference opens its door to politicians trying to reach a new agreement on limiting greenhouse gas emissions, and there still seems to be no consensus emerging as to how the all-important negotiations will pan out.
Within the past month the world was told there simply was not enough time for nations to find a legally binding agreement before the conference in Copenhagen ends on 18 December. Instead, according to government spin doctors, a political agreement could be reached in Denmark that would pave the way for a binding agreement halfway through 2010, perhaps in Mexico.
Between these two quite opposite positions, between hope and dismay, media reports continue to reveal scientific studies showing that climate change is happening faster than scientists thought possible even three years ago and that it is unlikely humankind will be able to change quickly enough to keep global temperature increase to a somewhat manageable 2°C.
Other reports show that many people don’t really understand the potential severity of climate change, and have little trust in either the scientists or the politicians attempting to deal with the issue. There are even stories that show, at least in some parts of the world, that the public is becoming bored with the entire discussion of global warming.
This frenzied, disparate stalemate wasn’t the way it was supposed to evolve when negotiators and political leaders agreed two years ago at the UN Climate Change Conference in Bali, Indonesia they would work towards a new, stronger international agreement on reducing greenhouse gas emissions to replace the Kyoto Protocol when it expires in 2012.
Considering that global CO₂ emissions have to peak by 2015 to avoid climate catastrophe, perhaps politicians should take a moment to listen to the quiet advice of Margareta Wahlström.
Head of the UN International Strategy for Disaster Reduction, Wahlström said Friday that not only is climate change already occurring, but that the world can expect more increasingly frequent and severe natural disasters aggravated by global warming.
“Those of us that have worked with disasters for a long time have already seen these extremes developing,” Wahlström said in an interview.
“What I really hope comes out of Copenhagen, whether we have a legally binding agreement or not, is a determined sense by world leaders that they have to continue to pursue practical action and to provide strong support for the collaboration that is required to move forward,” she said. “And, above all, not to let up until they have hopefully within a very short time frame a legally binding agreement.”
Yvo de Boer, Executive Secretary of the UN Framework Convention on Climate Change, believes the Copenhagen talks will lead to a formal treaty within six months.
“There is no doubt in my mind whatsoever that it [Copenhagen] will yield a success,” de Boer said last week, adding the three main points coming out of the conference must include transparent targets by industrialised countries to reduce greenhouse gas emissions by 2020, a list of actions by developing nations, and clear short- and long-term financing to support developing countries on both mitigation and adaptation.
Barring a last-minute legally binding treaty being reached at the Copenhagen talks, the European Wind Energy Association (EWEA) agrees with both Wahlström and de Boer.
It is crucially important, EWEA believes, that political momentum to reach a new international pact on curbing greenhouse gases, such as CO₂, caused by burning fossil fuels not be allowed to falter at this late hour.
The international community needs emission-reduction targets agreed to by both the developed and developing world. Wealthy industrialised nations responsible for much of the climate change debacle must also provide necessary financing to poor nations trying to deal with global warming.
EWEA reminds policy makers that the positive future we all want can only occur after a new legally binding agreement has been signed.
When that occurs, when we have begun to liberate ourselves from the tyranny of the expensive and destructive high-carbon world, we can begin embracing a healthier, cleaner tomorrow.
Emissions-free wind power — which is affordable, local, sustainable and dependable — is already proving a better tomorrow is possible.
Chris Rose, EWEA
The storm may blow through in a day, but the lights may stay out for a week — or more. An extended power outage can mean shivering — or sweating — in the dark and, in some cases, can be a threat to your health and safety. The key to staying safe and comfortable during an extended power outage is preparation and knowing what to do when the lights go out. And stay out.
Before the lights go out
- Every household should already have an emergency preparedness kit that will meet the needs of you and your family for three days. Much of what you need to make it through an extended power outage will be on hand with the gear on the checklist found at www.Ready.gov, the emergency preparedness Web site of the Federal Emergency Management Agency.
- Northeast Utilities, New England’s largest utility system serving more than two million customers in three states, recommends putting together a “Lights Out Kit” that includes a flashlight for each family member, extra batteries, battery-powered radio and clock, bottled water, canned food, manual can opener, first aid kit and Sterno or a similar alcohol-based cooking fuel.
- Because cordless phones won’t work when the power is out, you should include an old-fashioned corded phone in the “Lights Out Kit.”
- Should anyone in the house use electrically powered life-support equipment or medical equipment, be sure to ask your physician about emergency battery backup systems.
- Clearly label fuses and circuit breakers in your main electricity box. Make sure you know how to safely reset your circuit breaker or change fuses. Keep extra fuses on hand.
When the lights go out
- Pull the plug on motor-driven appliances such as refrigerators and electronic gear such as computers and televisions to prevent damaging electrical overload when power is restored.
- Keep the refrigerator and freezer doors closed as much as possible. You may want to set your refrigerator and freezer to their coldest settings in advance of the storm. Just remember to reset the temperatures when the storm blows past. Food in the freezer can stay frozen for two to four days, according to the National Center for Home Food Preservation. During an extended power outage, you can use blocks of dry ice in the freezer.
- Use extreme caution when using alternative heating or cooking sources. Never use camp stoves, charcoal-burning grills or propane/kerosene heaters indoors. Don’t use a gas stove or oven to heat the house. They all pose the risk of fire and carbon monoxide poisoning. More than 400 people a year die from accidental carbon monoxide poisoning, according to the Centers for Disease Control and Prevention. The symptoms of carbon monoxide poisoning include headache, dizziness, weakness, nausea, vomiting, chest pain, and confusion.
- If you use a portable generator, plug appliances directly into the generator. Connecting the generator to your home's electrical system can send power back up the line and kill a utility repairman working on the power lines. Generators also produce deadly carbon monoxide, so run them outdoors only, well away from doors, windows and vents. Never refuel the generator while it is running.
Fighting started over the election of a new king for Bohemia, where there was an elective monarchy. The initial outbreak, in 1618, was a revolt against Hapsburg rule, triggered by the election of Archduke Ferdinand of Styria as heir to the childless Matthias, who was both king of Bohemia and Holy Roman Emperor. The Protestants seized power after the Defenestration of Prague (23 May 1618) and raised an army, in response to which the Emperor raised his own armies and engaged in indecisive fighting with the Bohemians. On 20 March 1619, Matthias died, leaving both thrones empty. The Bohemian electors chose Frederick V, the Elector Palatine, while the Imperial electors chose Matthias's cousin, Ferdinand of Styria, who became Emperor Ferdinand II. The fighting over Bohemia was decided in 1620. The Evangelical League (of Protestant princes) declared neutrality, while the Catholic League sided firmly with the Emperor. At the Battle of the White Mountain (8 November 1620), the Imperial forces routed the Bohemians, deposing Frederick and ending the Bohemian phase of the war.
The war now moved into the Palatinate phase (1620-25). Frederick was initially successful against Imperial forces, reaching a high point at the Battle of Mingolsheim, where Count Ernst von Mansfeld defeated Johann Tserclaes, Count Tilly, but the success was short-lived, and the Protestant defeat at Stadtlohn (6 August 1623) marked the true end of the Palatinate phase, leaving Frederick in exile and the Palatinate's electoral title transferred to Maximilian of Bavaria. The apparent collapse of the Protestant position and the triumph of the Hapsburgs now led to an international reaction. By the Treaty of Compiegne (10 June 1624), France and Holland allied against the Hapsburgs, soon to be joined by England, Sweden, Denmark, Savoy and Venice.
The international intervention now led to the Danish phase of the war (1625-29). Ferdinand II raised a new mercenary army, led by Count Albrecht von Wallenstein, initially to defend Hapsburg lands, but in June 1625 his commission was expanded to cover the entire Empire. This was made necessary by the invasion of Christian IV of Denmark, launched in the summer of 1625, although little of importance happened until the next year. First, France withdrew from the war after a Huguenot revolt in France, and then the Imperial forces inflicted a series of defeats on Christian of Denmark. By the summer of 1629 it again looked as if the Hapsburg-Imperial side was victorious, but once again the French took a hand. Cardinal Richelieu negotiated a peace between Sweden and Poland, which allowed Gustavus Adolphus of Sweden to enter the war on the Protestant-anti Hapsburg side.
The Swedish phase of the war (1630-35) saw more success for the anti-Imperial side. Pressure from within Germany led to the dismissal of Wallenstein (August 1630), and his replacement as commander by Tilly, whose main success was the capture of Magdeburg (May 1631), after which he suffered a series of defeats, in particular at Breitenfeld (17 September 1631) and at the Battle of the Lech (14 April 1632), where he was mortally wounded. Wallenstein was once again put in charge of the Imperial forces, and although he was promptly defeated at Lutzen (16 November 1632), Gustavus Adolphus was killed during the battle. There followed a quiet phase, marked mainly by the fall of Wallenstein, who was dismissed for attempting to negotiate a peaceful settlement (1633), and finally assassinated by his own officers (25 February 1634). Defeat in battle at Nordlingen (6 September 1634) effectively ended Swedish intervention and forced Richelieu and France to openly take control of the Protestant war effort.
The final, French, phase of the war (1634-48) lost its religious significance, and was in effect a struggle between France and Spain, fought out mostly in Germany. Spain still had footholds to the east of France in the Spanish Netherlands (modern Belgium) and Spanish Lombardy (northern Italy), and the French aim was to remove these Spanish outposts. Initially, the Imperial position was strong, and there was even a short-lived invasion of France (1636), but the tide of the war slowly turned against the Imperial side. The fourteen years of the French phase of the war eventually ended in exhaustion, Germany in particular having suffered year after year of campaigning. The Peace of Westphalia (24 October 1648), which ended the war, saw Hapsburg power much reduced. Alsace became part of France, while Sweden gained much of the German Baltic coast; the Emperor had to recognise the sovereign rights of the German princes and equality between Protestant and Catholic states, while Spain, in a separate peace, finally acknowledged the independence of the Dutch Republic.
eso9609 — Photo Release
First NTT Image of Comet Hale-Bopp after Solar Conjunction
9 February 1996
This false-colour image of Comet Hale-Bopp is the first to be obtained with a major astronomical telescope after the recent conjunction with the Sun. At the time of this observation, the comet was located in the southern constellation of Sagittarius, and only 32 degrees from the Sun.
This rather difficult observation was performed with the ESO 3.5-metre New Technology Telescope (NTT) in the morning of 9 February 1996 by Griet van de Steene (astronomer), Hernan Nunez (telescope operator) and Gabriel Martin (instrument operator) of the NTT team at La Silla. The data were immediately transferred by satellite link to the ESO Headquarters in Garching where the subsequent image processing was done by Hans Ulrich Kaeufl.
Since the comet was so close to the Sun, it had to be observed in the comparatively bright morning sky. It was acquired only 10 degrees above the eastern horizon, at an airmass of no less than 5.1. Three exposures of 5 minutes each were made through a red filtre and with a 2000 x 2000 CCD in the EMMI multi-mode instrument. The image shown here is based on one flat-fielded 5-min exposure. The frame covers 9 x 9 arcmin; 1 pixel = 0.27 arcsecond; North is up and East to the left.
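As a consistency check on the quoted scale (an illustrative calculation, not part of the release), the 9 arcmin frame width at 0.27 arcsecond per pixel corresponds exactly to the 2000-pixel width of the CCD:

```python
frame_arcmin = 9.0
arcsec_per_pixel = 0.27

# Convert the frame width from arcminutes to arcseconds, then to pixels.
pixels = frame_arcmin * 60.0 / arcsec_per_pixel
print(pixels)  # -> 2000.0, matching the 2000 x 2000 CCD
```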
The present image was obtained when the comet was approximately 924 million kilometres from the Earth and 802 million kilometres from the Sun. It continues to move inwards through the solar system and will cross the orbit of Jupiter in about two weeks time, on 25 February.
A provisional evaluation of the new images indicates that Comet Hale-Bopp is apparently still developing nominally. The coma measures at least 6 arcmin across. A certain rotation of the coma isophotes is noted, clockwise from about NNE (innermost) to about NW (outermost). No other obvious asymmetries are present. The nucleus appears single on these exposures.
Some of the brighter stars show spikes in the N-S direction; this is a typical effect on the very sensitive CCD detectors. The trail of an artificial satellite crosses the photo in the upper left quadrant.
This photo signifies the beginning of a substantial Hale-Bopp observational campaign at ESO. Co-ordinated observations will be carried out during approx. 30 nights before the end of September 1996. Many different telescopes and instruments will be used.
For earlier photos of Comet Hale-Bopp obtained with ESO telescopes, please consult the ESO Hale-Bopp Homepage.
Aplisol should not be administered to persons who previously experienced a severe reaction (e.g., vesiculation, ulceration, or necrosis) because of the severity of reactions that may occur at the test site (see
Not all infected persons will have a delayed hypersensitivity reaction to a tuberculin test. A number of factors have been reported to cause a decreased ability to respond to the tuberculin test, such as the presence of infections, viral infections (measles, mumps, chickenpox, HIV), live virus vaccinations (measles, mumps, rubella and other live vaccines), bacterial infections (typhoid fever, brucellosis, typhus, leprosy, pertussis, overwhelming tuberculosis, tuberculous pleurisy), fungal infections (South American blastomycosis), drugs (corticosteroids and other immunosuppressive agents), metabolic derangements (chronic renal failure), low protein states (severe protein depletion, afibrinogenemia), age (newborns, elderly patients with waned sensitivity), stress (surgery, burns, mental illness, graft-versus-host reactions), diseases affecting lymphoid organs (Hodgkin's disease, lymphoma, chronic leukemia, sarcoidosis), and malignancy.7,8,9
Any condition that impairs or attenuates cell mediated immunity potentially can cause a false negative reaction, including aging.10,11
Tuberculin skin test results are less reliable in HIV-infected individuals as CD4 counts decline (see
Avoid injecting tuberculin subcutaneously. If this occurs, no local reaction develops, but a general febrile reaction and/or acute inflammation around old tuberculous lesions may occur in highly sensitive individuals.
The predictive value of the tuberculin skin test depends on the prevalence of infection with M. tuberculosis and the relative prevalence of cross-reactions with nontuberculous mycobacteria.9,12
A separate, sterile, single-use disposable syringe and needle should be used for each individual patient to prevent possible transmission of serum hepatitis virus and other infectious agents from one person to another. Special care should be taken to ensure that the product is injected intradermally and not into a blood vessel.
Before administration of Aplisol, a review of the patient's history with respect to possible immediate-type hypersensitivity to the product, determination of previous use of Aplisol and the presence of any contraindication to the test should be made (see
As with any biological product, epinephrine should be immediately available in case an anaphylactoid or acute hypersensitivity reaction occurs.
Failure to store and handle Aplisol as recommended may result in a loss of potency and inaccurate test results.8,13
Reactivity to the test may be depressed or suppressed for as long as 5–6 weeks in individuals following immunization with certain live viral vaccines, viral infections or discontinuation of corticosteroids or immunosuppressive agents.8,9
Information to Patients
Patients should be instructed to report adverse events such as vesiculation, ulceration or necrosis which may occur at the test site in highly sensitive individuals. Patients should be informed that pain, pruritus and discomfort may occur at injection site.
Patients should be informed of the need to return to their physician or health care provider for the reading of the test and of the need to keep and maintain a personal immunization record.
In patients who are receiving corticosteroids or immunosuppressive agents, reactivity to the test may be depressed or suppressed. This reduced reactivity may be present for as long as 5–6 weeks after discontinuation of therapy (see PRECAUTIONS – General).9
The reactivity to PPD may be temporarily depressed by certain live virus vaccines (measles, mumps, rubella, oral polio, yellow fever, and varicella). Therefore, if a tuberculin test is to be performed, it should be administered either before the live vaccine or given simultaneously, but at a separate site than the live vaccine, or testing should be postponed for 4–6 weeks.9
Carcinogenesis, Mutagenesis, Impairment of Fertility
No long term studies have been conducted in animals or in humans to evaluate carcinogenic or mutagenic potential or effects on fertility with Aplisol.
Pregnancy Category C
Animal reproduction studies have not been conducted with Aplisol. It is also not known whether Aplisol can cause fetal harm when administered to a pregnant woman or can affect the reproduction capacity. Aplisol should be given to a pregnant woman only if clearly needed. However, the risk of unrecognized tuberculosis and the postpartum contact between a mother with active disease and an infant leaves the infant in grave danger of tuberculosis and complications such as tuberculous meningitis. Although there have not been any reported adverse effects upon the fetus recognized as being due to tuberculosis skin testing, the prescribing physician will want to consider if the potential benefits outweigh the possible risks for performing the tuberculin test on a pregnant woman or a woman of childbearing age, particularly in certain high risk populations.
Tuberculin skin testing is considered valid and safe throughout pregnancy.3
Once acquired, tuberculin sensitivity tends to persist, although it often wanes with time and advancing age. In geriatric patients or in patients receiving a tuberculin skin test for the first time, the reaction may develop more slowly and may not be maximal until after 72 hours.6,7 (see CLINICAL PHARMACOLOGY). Not all infected persons will have a delayed hypersensitivity reaction to a tuberculin test. A number of factors have been reported to cause a decreased ability to respond to the tuberculin test, such as elderly patients with waned sensitivity.7 Any condition that impairs or attenuates cell mediated immunity potentially can cause a false negative reaction, including aging10,11 (see WARNINGS). An induration of >10 mm is classified as positive in all persons who do not meet any of the criteria listed under an induration of >5 mm, but who belong to one or more of the following groups at high risk for TB, including residents and employees of high risk congregate settings, such as nursing homes and other long-term facilities for the elderly.

The negative tuberculin skin test should never be used to exclude the possibility of active tuberculosis among persons for whom the diagnosis is being considered (symptoms compatible with tuberculosis) (see DOSAGE AND ADMINISTRATION – Interpretation of Tuberculin Reaction).

Because their immune systems are immature, many neonates and infants <6 weeks of age who are infected with M. tuberculosis may not have a delayed hypersensitivity reaction to a tuberculin test (see WARNINGS). Older infants and children develop tuberculin sensitivity 3-6 weeks, and up to 3 months, after initial infection.5,20 Infants and children who have been exposed to persons with active tuberculosis should be considered positive when reaction to the tuberculin skin test measures ≥5 mm. Those children younger than 4 years of age who are exposed to persons at increased risk to acquire tuberculosis are considered positive when reaction measures ≥10 mm. Children with minimal risk exposure to tuberculosis would be considered positive when reaction measures ≥15 mm.5,20 Other criteria for positive tuberculin reactions that are applicable to both pediatric and adult patients are provided in DOSAGE AND ADMINISTRATION – Interpretation of Tuberculin Reaction.
We Montanans have a lot to be proud of when it comes to contributing to our nation’s conservation legacy: Montana is home to two of America’s most beloved national parks—Glacier and Yellowstone. Adjacent to these parks are two of our nation’s largest and most cherished wilderness areas: the Bob Marshall and Absaroka-Beartooth. Not coincidentally, we’re one of the few states in the lower 48 where all the large mammals that were present when Lewis and Clark passed through still roam free.
But as a lover of rivers, there’s one conservation legacy I’m especially proud of. For it was in Montana, deep in the Great Bear Wilderness, where the idea for the Wild and Scenic Rivers Act was born.
During the height of the modern dam-building era in the 1950s, the federal government’s two major dam-building agencies—the US Bureau of Reclamation and Army Corps of Engineers—were busy scouting every last free-flowing river in the west for potential dam sites. Among the prized rivers these agencies wanted to plug with concrete were the Colorado River in the Grand Canyon, the Green River in Dinosaur National Monument, and the Middle Fork of the Salmon River in central Idaho.
Closer to home, government engineers drew up plans for dams on the Yellowstone River south of Livingston, on the Gallatin River at the mouth of Spanish Creek, and on the lower Big Hole River by Glen. When the Corps of Engineers proposed damming the Middle Fork of the Flathead River at Spruce Park, famed wildlife biologist John Craighead led the fight against it. As a scientist, Craighead understood that acre for acre, lower elevation river corridors provided the most critical wildlife habitat in our ecosystem, and in order to protect grizzly bears and scores of other animals, we needed to protect our last best free-flowing rivers.
After a decade spent publishing scientific reports, delivering lectures, and lobbying Congress, the efforts of John Craighead and his twin brother, Frank, finally bore fruit when President Lyndon B. Johnson signed the Wild and Scenic Rivers Act into law in 1968. To this day, it remains the only law in the world that permanently protects rivers in their clean, free-flowing condition.
In 1976, eight years after the Act’s passage, Montana added its four rivers to the National Wild and Scenic Rivers System—a 150-mile reach of the upper Missouri and the North, Middle and South Forks of the Flathead River. Senator Max Baucus, then a freshman congressman, shepherded the legislation through the US House of Representatives.
Since then, no river in Montana has gained federal protection, despite the fact that our rivers face unprecedented threats driven by climate change and our society’s insatiable thirst for cheap energy. In just the past few years, a Bozeman-based company, Hydrodynamics, Inc., has proposed new hydropower projects on the upper Madison River below Quake Lake and on two spectacular streams that cascade off the Beartooth Plateau—East and West Rosebud creeks. I hate to say it, but these are the first of many such projects that loom on the horizon.
While Montana hasn’t seen a new Wild and Scenic River designated in almost four decades, our neighbors in Wyoming, Idaho, and Utah—not known for being the most environmentally progressive states in the country—got nearly 1,000 river miles designated as Wild and Scenic in 2009 alone. Why? Because they understand that in today’s economy, the communities that have the healthiest rivers and the largest protected areas nearby are seeing the most economic growth.
With climate change tightening its grip on the Rockies and the nation’s population center moving ever westward, the demand for new water development projects in Montana is certain to increase. Instead of rushing to sacrifice the last one-half of one percent of Montana’s stream miles that qualify for Wild and Scenic designation, let’s start with the low-hanging fruit and install turbines on existing dams and in irrigation canals where the environmental impacts would be minimal. Only three percent of the 3,500 inventoried dams in Montana currently produce any electricity at all.
In the meantime, let’s not wait until new dams, mines, and oil and gas drilling are proposed on our last free-flowing rivers. We’ve got a Montana-made conservation tool in our quiver which has already protected more than 200 rivers and 12,000 river miles across America. Let’s use it here, in southwest Montana, on the hundreds of miles of still-functioning rivers that have already been found eligible for Wild and Scenic designation.
For more information on our efforts to protect western Montana’s last best rivers through a combination of Wild and Scenic designations on public lands and incentive-based conservation measures on private lands, visit beta.healthyriversmt.org.
In addition to being an avid angler and river runner, Scott Bosse is the Northern Rockies Director for American Rivers in Bozeman.
* The first development in the Wizard War was radar. Although most of the major combatants discovered radar at almost the same time, the British were leaders in realizing its potential. By the outbreak of war, Britain had a fully operational air-defense system based on radar, and was exploiting radar in other applications.
* World War I had introduced electronics to combat in the form of radio, as well as the "radio direction finding (RDF)" systems the British used to locate German ships and submarines at sea. World War I electronics systems were crude, clumsy, and unreliable, but after the war great progress was made in the art.
The evolution of electronics in warfare was accelerated by the parallel evolution of combat aircraft, particularly bombers. Aerial bombing was not much more than a military nuisance during World War I, but after the war bombers became bigger and faster, with heavier bombloads and longer range. Many strategic thinkers began to believe bombers could be the decisive factor in the "next war".
The only means of detecting attacking bombers was with ground spotter networks, sometimes augmented by listening horns. As bombers became faster, such means of detection were obviously inadequate to give timely warning of attacks and permit an effective defense. In 1932, the British statesman Stanley Baldwin, a once and future Prime Minister, said in an address to Parliament that there was no hope of defense against bombers: "The bomber will always get through." The only way to prevent such attacks, in this view, was to have the capability to retaliate in kind. This prediction seemed to be borne out by British Royal Air Force (RAF) air exercises in July 1934, when at least half the day bomber attacks in the maneuvers managed to reach their targets without being attacked by fighters.
The Nazi aerial bombings of Spanish cities during the Spanish Civil War in 1936 were a shock to the public. As a wider war approached, the raids led European governments to fear that waves of enemy bombers would level their cities with a rain of bombs.
* Not everyone in the British Air Ministry felt that the bomber would always get through. In June 1934, a junior Air Ministry official named A.P. Rowe went through whatever he could find on plans for the air defense of Britain, and was disturbed to learn that although work was going into development of improved aircraft, little other work was being done to consider a broad defensive strategy. Rowe wrote a memo to his boss, H.E. Wimperis, explaining the situation and saying that the lack of adequate planning was likely to prove catastrophic.
Wimperis took the memo very seriously, and did the natural and proper bureaucratic thing: he proposed that the Air Ministry form a committee to investigate new technologies for defense against air attacks. Wimperis suggested that the committee be led by Sir Henry Tizard, a prestigious Oxford-trained chemist, rector of the Imperial College of Science & Technology. The "Committee for the Scientific Survey of Air Defense (CSSAD)" was duly formed under Tizard's direction, with Wimperis as a member and Rowe as secretary.
Wimperis also independently investigated other possible new military technologies. The Air Ministry had a standing prize of a thousand pounds to be awarded to anyone who could build a death ray that could kill a sheep at 180 meters (200 yards). The idea seems a bit silly in hindsight, but some British officials were worried that the Germans were working on such weapons, and Britain couldn't afford to be left behind. Some studies were done on intense radio and microwave beams, something along the lines of modern "electromagnetic pulse" weapons.
Wimperis contacted a Scots physicist named Robert Watson-Watt, supervisor of a national radio research laboratory, to see what he thought about death rays. Watson-Watt, a descendant of James Watt, inventor of the first practical steam engine, was a cheery, tubby man with lots of drive and intelligence, though he had an annoying tendency to talk on at length in a one-sided fashion. He had established a reputation for himself in developing radio systems to pin down the location of thunderstorms, which generate radio noise, by triangulation.
After some quick "back of the envelope" studies and conversations with members of his lab, Watson-Watt replied to Wimperis that he thought death rays weren't very practical. The most powerful radio beams that could be generated in those days wouldn't even make an enemy aircrew feel warm. However, Watson-Watt added that radio beams could be bounced off enemy aircraft to detect them, though not destroy them. Wimperis realized that such a concept meshed neatly with the CSSAD's mandate, and ran the idea past the committee's members. They were interested, and in response Watson-Watt fleshed out his ideas in a memo dated 12 February 1935.
The memo outlined the basic physics involved, used simple calculations to show the idea was well within the limits of possibility, and described how such a system could be implemented. Watson-Watt suggested that a network of such "radio echo detection" systems could be built that would have a range of up to 300 kilometers (190 miles). He also cautioned that the scheme he had outlined could determine the distance to an aircraft, but a practical system would also need to determine its "azimuth", or horizontal location, and altitude as well.
The CSSAD was enthusiastic, but they needed a proof-of-concept demonstration before they could pry development funds out of the British Air Ministry. Watson-Watt and his team worked overnight to improvise a radio detection system, using a receiver to pick up the echo of transmissions from a convenient BBC tower off a target. On 26 February 1935, the demonstration system managed to pick up a Handley-Page Heyford bomber being used as a test target. The bomber flew through the beam and the reflected signal was easily visible. The demonstration impressed people in high places, particularly Air Marshal Hugh Dowding, known as "Stuffy" since he was notoriously humorless. On 13 April, the Air Ministry agreed to provide 12,300 pounds, a generous sum at the time, for development of the new radio echo detection system.
The group working on the concept searched for a name, and finally settled on "RDF", which strongly implied "radio direction finding" to the uninitiated and helped ensure security. In 1941, they would rename the scheme "radiolocation".
In fact, there was a wide range of candidate names for the new technology. The US Army's Signal Corps called it "radio position finding (RPF)", while the US Army Air Corps called it "derax". The term "radar", an acronym for "Radio Detection And Ranging", was invented in 1940 by US Navy researchers and wasn't adopted by the British until 1943. However, for the sake of simplicity, the term "radar" will be generally used in the rest of this document. Incidentally, some sources claim the Australians called it "doover", but this appears to be a misunderstanding, "doover" being an old Australian slang term along the lines of the Yank term "thingamajig" that could be applied to almost anything.
* The BBC transmitter used in the proof-of-concept test could only send out a continuous signal. Watson-Watt's scheme actually specified that the transmitter send out a short pulse. Half the time delay between the transmission and reception of the pulse, multiplied by the speed of light (300,000 kilometers / 186,000 miles per second) would give the range to the target. The time delay would be very short, but it could be measured using an oscilloscope.
The oscilloscope would be connected to the receiver to display the pulse echo on its "cathode ray tube (CRT)", essentially much like a modern TV picture tube. The oscilloscope's sweep would be triggered when the transmitter sent the pulse. The farther away the target was, the longer the delay would be between transmission and reception of the pulse, and this delay could be measured by the distance of the pulse across the oscilloscope screen. The screen could be directly calibrated with the appropriate distance markings. This sort of radar display became known as an "A-scope".
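To make the timing arithmetic concrete, here is a small illustrative calculation in modern Python (nothing like period hardware, just the geometry of the echo):

```python
C = 299_792_458.0  # speed of light in metres per second

def echo_range_metres(delay_seconds: float) -> float:
    """Half the out-and-back delay, multiplied by the speed of light."""
    return 0.5 * delay_seconds * C

# An aircraft 150 km away returns its echo after roughly a millisecond:
round_trip = 2 * 150_000.0 / C                 # about 1.0e-3 seconds
print(echo_range_metres(round_trip) / 1000.0)  # -> 150.0 (kilometres)
```

Delays on this millisecond scale are far too short for any mechanical timer, which is why the electronically swept oscilloscope was the natural display for the job.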
The idea behind pulsed radar was straightforward, and in fact Watson-Watt was not the first to come up with it. Crude radars had been around for decades. A radar had been demonstrated and patented by a German engineer named Christian Huelsmeyer as far back as 1904. Although Huelsmeyer's radar generated a periodic output using a spark gap transmitter, it was not a pulsed radar as described above since he had no means of electronically timing the echo. It could simply detect that something was there and ring a bell, though Huelsmeyer was able to use the location of his transmitter, knowledge of the configuration of a target, and a little rough geometry to obtain crude range estimates. The spark gap was used simply because Huelsmeyer had no other reasonable way of obtaining an output of adequate power at the time.
A comparable crude radar, using a continuous-wave oscillator, was invented in 1922 by two US Navy researchers, Albert Hoyt Taylor and Leo C. Young, but they dropped the idea for over a decade. By 1934, Germany, Italy, the Soviet Union, France, and other countries had all demonstrated primitive continuous wave radar systems. There was also some tinkering with "interference detectors" that had a widely separated transmitter and receiver and could sense an aircraft flying through the beam between the two. As with Huelsmeyer's radar, these systems could detect that something was there, but could not give a direct estimate of its range. A continuous wave radar could be used to find range by varying the frequency of the signal, but such "frequency modulation" techniques were still being developed at the time.
Watson-Watt's proof-of-concept demonstration with an improvised continuous wave system was basically just showmanship, and anyone with real knowledge of such ideas would have laughed at it as trivial. However, the number of people who knew enough to laugh were few in number, and Watson-Watt's audience appears to have been suitably impressed. After all, it did get the basic idea across: radio waves could be used to spot airplanes.
Pulsed radar had to wait until the invention of electronic pulse generation and pulse timing circuitry made it possible. Once those tools were available, development of a pulsed radar system was a fairly obvious next step, and the British weren't the only ones on to the idea. In the early 1930s Taylor and Young, then at the Naval Research Laboratory (NRL) in Washington DC, also came up with the idea of pulsed radar. Taylor assigned one of his engineers, Robert Page, to implement a demonstration system, and in December 1934, Page's demonstration system detected a small airplane flying up and down the Potomac.
The Americans had actually beaten the British to the first demonstration of pulsed radar by several weeks. However, the British were the first to grasp radar's potential, quickly envisioning a national network of radar stations to provide advance warning of an attack. That gave Britain a step ahead in what would turn into a race for electronic supremacy.
* Robert Watson-Watt decided to establish a radar development team stationed at an isolated and deserted airfield on a coastal isthmus at Orfordness, in Suffolk, where the work could be conducted without attracting much notice. There were four people on the team, which was led by Arnold F. "Skip" Wilkins, who had done much of the "grunt work" for Watson-Watt since the beginning of the radar investigation. A bright young Welshman named Edward Bowen, with a fresh doctorate from King's College, became Wilkins' right-hand man. Watson-Watt dropped by almost every weekend to keep up with their progress.
After intense brainstorming, late night sessions, and hard work, the team finally came up with a workable radar system in June 1935. The transmitter array consisted of two tall towers with antenna wires strung between them, while the receiver array consisted of two similar arrays arranged in parallel. By July, the team was able to detect aircraft flying well offshore. They worked to drive down the radar's operating wavelength to avoid interference with commercial radio transmissions, reducing it from an original wavelength of 26 meters (a frequency of 11.5 megahertz / MHz) to 13 meters (23.1 MHz).
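The wavelength and frequency figures quoted here are two views of the same quantity, related through the speed of light. A quick sketch of the conversion reproduces the numbers above:

```python
C = 299_792_458.0  # speed of light in metres per second

def frequency_mhz(wavelength_m: float) -> float:
    """Frequency in megahertz for a given free-space wavelength in metres."""
    return C / wavelength_m / 1.0e6

print(round(frequency_mhz(26.0), 1))  # -> 11.5 MHz, the original band
print(round(frequency_mhz(13.0), 1))  # -> 23.1 MHz, after halving the wavelength
```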
Early on, the RDF team had thought that the signal should have a wavelength comparable to the size of the bombers they were trying to detect in order to obtain a resonance effect, but this bought little in practice. Shorter wavelengths would reduce interference and provide greater accuracy, but for the moment it was difficult to generate radio waves with adequate power at short wavelengths. The team also developed schemes to allow determination of azimuth and altitude.
By September 1935, the system had matured to the level where it could be put into operational service. The government authorized the construction of an initial network of five radar stations. The research project expanded, and quickly outgrew the primitive facilities at the Orfordness airfield.
Watson-Watt searched the Suffolk coast for a more capable facility that still had a degree of isolation, and found a coastal estate named "Bawdsey Manor", which the government purchased before the end of the year. Although Bawdsey Manor was a bit run-down, it was still incredibly luxurious in comparison to the primitive accommodations at Orfordness, with such extravagances as a pipe organ and a billiards table. The government hadn't wanted to keep the billiards table, but Eddie Bowen bought it from the previous owners for 25 pounds and it stayed put.
The move to Bawdsey Manor was complete by May 1936. By August 1936 the staff was up to 20 people, including a sharp young physics student from Imperial College named Robert Hanbury Brown. Watson-Watt focused on recruiting scientists for the effort, which encouraged "thinking outside the box", but later on the researchers would be embarrassed to find out that their electronic designs were naive by industry standards. They were, however, a bright and energetic group, and Watson-Watt proved to be a fine and respected technical manager who got the best out of them.
Most of the work was on developing the network of radar stations, which were named "Chain Home (CH)", though in 1940 they would also be assigned the formal designation "Air Ministry Experimental Station (AMES) Type 1". Bowen also worked in a part-time fashion on a pet project, a radar system that could be carried by an aircraft. Work on Chain Home didn't go well through the rest of 1936. After a disappointing demonstration in September that provoked strong criticisms from Tizard, the group redoubled its efforts.
By April 1937, Chain Home was working much more reliably and was detecting aircraft 160 kilometers (100 miles) away. By August 1937, three CH stations were in operation, one at Bawdsey itself, and the other two at Canewdon and Dover, with the network blanketing the western approaches to London.
* The stations could be tuned to four different wavelength bands in the range from 15 meters to 10 meters (20 MHz to 30 MHz). The bandwidth could be set to 500 kilohertz (kHz), 200 kHz, or 20 kHz. A CH station did not look like a modern radar station, instead resembling a "farm" of radio towers. There were four (later reduced to three) metal transmitter towers in a line, and four wooden receiver towers arranged in a rhomboid pattern.
The transmitter towers were about 107 meters (350 feet) tall and spaced about 55 meters (180 feet) apart, with cables strung from one tower to the next to hang a "curtain" of horizontally positioned half-wave transmitter dipoles, transmitting horizontally polarized radio waves. The curtain included a main array of eight horizontal dipole transmitting antennas above a secondary "gapfiller" array of four dipoles. The gapfiller array was required because the main transmitter array had a "hole" in its coverage at low angles. The operator could switch between the two arrays as needed.
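The low-angle "hole" and the gapfiller's role can be illustrated with the classic two-ray picture: the direct wave interferes with its reflection off the ground, carving lobes and nulls into the vertical coverage, and a second curtain at a different height puts its lobes where the first one's nulls fall. The sketch below assumes a flat, perfectly conducting ground and guessed mean array heights — illustrative assumptions, not the actual CH rigging figures:

```python
import math

WL = 12.0                   # wavelength in m; CH worked in the 10-15 m band
H_MAIN, H_GAP = 67.0, 29.0  # assumed mean heights of main and gapfiller curtains

def elevation_gain(h_m, el_deg):
    # Two-ray pattern for horizontal polarization over conducting ground.
    return abs(math.sin(2 * math.pi * h_m * math.sin(math.radians(el_deg)) / WL))

for el in (1, 2, 3, 5, 8):
    print(f"{el} deg: main={elevation_gain(H_MAIN, el):.2f} "
          f"gapfiller={elevation_gain(H_GAP, el):.2f}")
# Near 5 degrees the main curtain sits in a null while the gapfiller reads ~0.97.
```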
The transmitting antenna arrangement not only simplified construction; it was also felt that a horizontally polarized wave would give a better indication on an aircraft, which presents a mostly horizontal target in normal flight. The output stage of the transmitters used special tetrode "valves" (vacuum tubes) built by Metropolitan Vickers of the UK that were water cooled. An air pump system was used to maintain a vacuum in these valves, permitting them to be opened up so the filaments could be replaced when they burned out. A complete backup transmitter unit was provided to ensure that the radar stayed in operation at all times.
The wooden towers for the receiving arrays were shorter, about 76 meters (250 feet) tall. Each wooden receiving tower initially featured three receiving antennas, in the form of two dipoles arranged in a cross configuration, spaced up the tower. Additional crossed dipoles would be fitted later in the war to deal with German jamming.
The transmitter did not send out a nice narrow beam, instead pouring out radio waves over a wide swath like a floodlight. The direction of the echoes returning to the receiving towers could be determined by comparing the relative strengths of the echoes picked up by different crossed dipoles. Comparison of the receiving strength between crossed dipoles on different towers gave the horizontal angle to the target, while comparison of the receiving strength between the crossed dipoles arranged vertically on a tower gave the vertical angle. Only the two top dipoles on each tower were used to determine the horizontal direction, while all three were used to determine the vertical direction. The receiver design owed much to Watson-Watt's old lightning location system.
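The bearing arithmetic is worth making concrete, since it is essentially the same trick Watson-Watt used on lightning. Two dipoles at right angles each pick up a voltage proportional to the cosine of the angle between the incoming wave and their own broadside direction, so the ratio of the two voltages recovers the azimuth. A toy model (ignoring how the 180-degree ambiguity was resolved and the goniometer hardware that actually did the comparison):

```python
import math

def bearing_deg(v_a, v_b):
    """Azimuth from the voltages on two perpendicular dipoles;
    ambiguous by 180 degrees in this toy model."""
    return math.degrees(math.atan2(v_b, v_a)) % 180.0

true_azimuth = 40.0  # degrees; the echo direction we pretend to receive
v_a = math.cos(math.radians(true_azimuth))  # dipole A, broadside at 0 degrees
v_b = math.sin(math.radians(true_azimuth))  # dipole B, broadside at 90 degrees
print(bearing_deg(v_a, v_b))  # ~40.0
```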
The pulse width was very long by radar standards, ranging from 6 to 25 microseconds, which meant a corresponding uncertainty in the range of a target. Even a 6 microsecond pulse of radio energy, traveling at 300,000 kilometers per second (186,000 miles per second), is 1.8 kilometers (over a mile) long, leading to at least that much uncertainty in the range of the target. Pulse power was high, with a peak power of 350 kilowatts (kW) initially, then 800 kW, and finally 1 megawatt (MW).
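The arithmetic behind that figure is simply pulse duration times the speed of light; a one-liner confirms both ends of the quoted pulse-width range:

```python
C_KM_PER_US = 0.299792458  # speed of light in km per microsecond

for tau_us in (6, 25):
    print(f"{tau_us} us pulse is {tau_us * C_KM_PER_US:.1f} km long in flight")
# 6 us -> 1.8 km; 25 us -> 7.5 km of range uncertainty
```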
One of the major problems with Chain Home was false or "ghost" echoes from distant, fixed targets. If the radar sent out a pulse and the echo didn't come back until after the radar sent out a second pulse, then the echo would seem to have come from the second pulse, indicating a target that was very close when it was really a long way away. To work around the ghosts, a low pulse repetition frequency (PRF) of 25 Hz was used, allowing echoes to be returned from targets up to 6,000 kilometers away before a second pulse was emitted, ensuring that all the echoes from a pulse would be gone before the next pulse was sent out. That was half the British power grid frequency of 50 Hz, which allowed multiple stations to synchronize their pulse broadcasts, reducing mutual interference. The disadvantage of such a low PRF, ridiculously low in hindsight, was that it reduced the amount of energy the radar was throwing out to detect intruders, and correspondingly reduced the radar's sensitivity.
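The 6,000-kilometer figure follows directly from the pulse interval: an echo must cover the round trip before the next pulse leaves, so the maximum unambiguous range is c divided by twice the PRF. A quick check:

```python
C_KM_S = 299_792.458  # speed of light, km/s

def max_unambiguous_range_km(prf_hz):
    """Farthest target whose echo returns before the next pulse is sent."""
    return C_KM_S / (2 * prf_hz)

print(max_unambiguous_range_km(25))    # ~5996 km: Chain Home's very low PRF
print(max_unambiguous_range_km(1000))  # ~150 km: a more typical later radar
```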
* Although the concept had its clever bits, Chain Home was a dead-end design. The floodlight scheme wasted transmitter power, since only a small fraction of the transmitter beam, if "beam" was exactly the right word for it, would strike a target, much less be reflected back to the receiving antenna. It was also not very accurate. Range detection was good, to within a kilometer or two, but altitude determination was difficult, and azimuth estimates could be off by as much as twelve degrees.
To achieve even that much required not only a lot of engineering work but a lot of calibration, with the radar stations tracking RAF aircraft flying on predetermined courses and operators logging the radar observations. Each CH station required its own calibration, and each was eventually provided with a simple electronic analog computer designed specifically for the task of processing inputs along with the calibration data into something that could be used. The computer was known as a "fruit machine", a British expression for a slot machine; the computer had three rotating switches that were vaguely reminiscent of the three drums on a one-armed bandit. Despite all its limitations, Chain Home worked, and worked effectively, while continued refinements kept it effective for a surprisingly long time.
The RAF took over control of the Chain Home stations from the boffins, and also developed a fighter-control network using radar and observer stations, of which much more is said later. Initial attempts in early 1938 to use the radar system to direct RAF fighters to discreetly intercept airliners didn't go very well, but everyone learned, and CH proved its usefulness during Home Defense exercises in mid-1938. Ground controllers successfully directed interceptors to their targets three-quarters of the time, in both day and night conditions.
CH stations began to be set up overseas as well. Of course that meant that they had to be called "Chain Overseas (CO)" and not "Chain Home", and had some minor differences from CH.
* By this time, Watson-Watt was no longer in charge at Bawdsey Manor. He had been promoted to a high-level technical management job at the Air Ministry in May 1938, and direction of Bawdsey Manor passed on to A.P. Rowe, whose memo of four years earlier had put everything in motion.
Bawdsey staffers were not entirely happy about the change in management. Rowe was not a technical person and was a humorless, no-nonsense type. In considerable compensation, however, he was conscientious with his people and had a high regard for their abilities, though stuffy about rules. He was also a very efficient administrator, and skilled at organizational politics. Finally, he believed in the unchained exchange of ideas, organizing "Sunday Soviets" where staff could say what they liked and trade ideas, even crazy ones, among themselves and with users in the military services.
* The inaccuracy of Chain Home led to Eddie Bowen's interest in airborne radar, which he named "Airborne Interception (AI)". CH was able to guide fighter pilots to the general vicinity of intruders, but it was up to the pilots to find and attack them after that. In clear weather the pilots could see intruders easily enough, but the weather in the UK doesn't stay clear for long, and of course the pilots were almost helpless at night. Bowen felt that AI would help them cut through the murk and the dark.
A more experienced engineer might have been reluctant to take on such a job. The electronics for a CH station filled up rooms and soaked up massive amounts of electrical power, and both space and electrical power were at a premium on fighter aircraft. Another problem was that to keep antennas to a size that could be carried on a fighter, the operating wavelength had to be squeezed down to a meter or so. Finally, an AI set was essentially field combat gear, and so it had to be rugged, reliable, and easy to use.
Although Bowen had been forced to set his AI project aside while he hammered out the bugs in Chain Home, he was able to return to it as the stations came on line. His objective was an airborne radar system that would weigh no more than 100 kilograms (220 pounds), consume no more than 500 watts, and use antennas no longer than a meter (3 feet 3 inches).
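Those antenna numbers fall straight out of dipole sizing: the simplest resonant element, a half-wave dipole, is half the operating wavelength long (ignoring the few-percent end-effect shortening). That makes the wavelength budget obvious:

```python
def half_wave_dipole_m(wavelength_m):
    """Length of an idealized resonant half-wave dipole."""
    return wavelength_m / 2.0

for wl, note in ((13.0, "Chain Home band"), (6.7, "first AI experiments"),
                 (1.5, "final AI target")):
    print(f"lambda {wl:4.1f} m ({note}): dipole ~ {half_wave_dipole_m(wl):.2f} m")
# 6.50 m and 3.35 m elements are hopeless on a fighter; 0.75 m fits the budget.
```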
Initial experiments were conducted in June 1937 with a system operating at 6.7 meters (44.8 MHz), a selection prompted by the availability of a new, very compact and effective, EMI-built television receiver that operated at that wavelength. Bowen modified the receiver for his purposes, installing the kit on a Heyford bomber. The bomber didn't carry a transmitter, instead picking up signals broadcast by a ground station. The receiver system on board the bomber was to pick up the ground transmitter pulses and the echoes and try to make sense of them. Bowen was enthusiastic about the scheme, but it was tricky to get to work, and Watson-Watt told him to give up on it.
However, the idea was basically sensible in itself, if far beyond the technology of the time. Building a "bistatic" radar with separate fixed transmitter and receiver was straightforward, and in fact it was a common configuration for early radars, including Chain Home. Using a fixed transmitter and moving receiver would require capabilities that wouldn't be available in the lifetimes of the Bawdsey researchers.
Bowen went back to the drawing board and managed to put together a full AI set, using miniaturized "acorn" vacuum tubes developed by the Radio Corporation of America (RCA), and operating at 1.5 meters (200 MHz). Some sources claim this initial set operated at 1.25 meters (240 MHz), but if so development quickly switched to the longer wavelength.
Bowen was visiting his parents in Wales when the AI set was given its first flight test in a twin-engine Avro Anson utility aircraft on 17 August 1937. The test didn't detect any aircraft, but spotted a few ships. This immediately turned the focus of the airborne radar project from AI to airborne ocean surveillance, or what was termed "Air to Surface Vessel (ASV)". Watson-Watt quickly proposed that Bowen's airborne radar be used to observe Royal Navy maneuvers, which began on 6 September 1937. Eddie Bowen was part of the flight crew this time, and the tests were highly successful, with the radar finding warships in weather so foul that other aircraft had been grounded.
* While Bawdsey worked on different radar technologies and the RAF organized the air defense of Britain around Chain Home, the other British armed services were conducting radar development on their own. The division of efforts greatly annoyed Watson-Watt, who wanted to centralize all such research in his own organization.
The Royal Navy had set up their effort at HM Signal School in Portsmouth in 1935, making little progress until a new commandant was assigned to the school in the summer of 1937. Official interest and support increased dramatically, and work on the naval radar, the "Type 79", finally began to converge towards a solution.
Although the Type 79 had originally been designed to operate at a wavelength of 4 meters (75 MHz), development didn't really get rolling until the wavelength was switched to 7.5 meters (40 MHz). Generating signals at this wavelength was less challenging, and it also allowed the Royal Navy researchers to leverage off the same EMI television receiver technology used by Bowen, which they may have learned about through the Royal Navy liaison at Bawdsey Manor.
A prototype version of the Type 79 radar was successfully demonstrated in early 1938. By the end of the year, the Type 79 had been installed on the battleship HMS RODNEY and the cruiser HMS SHEFFIELD. It would soon be fielded on other vessels and be upgraded to the improved "Type 279".
The Type 79 and Type 279 were similar, both using separate transmitting and receiving antennas mounted on their own masts but rotating in synchronization. The antennas were small, resulting in a wide beam, which was adequate for detecting aerial intruders at ranges of up to about 80 kilometers (50 miles), but not so good at targeting naval vessels. It was also not very good at picking up low-flying aircraft.
* The need for more precise targeting led Royal Navy researchers to hastily develop a 1.5 meter (200 MHz) radar, the "Type 286", based on the technology Bowen had developed during his AI work. The initial "Type 286M" used a fixed antenna, meaning the ship had to change direction to point the radar beam. The Type 286M could pick up a surfaced submarine at a distance of no more than a kilometer if the vessel carrying the radar was pointed in the right direction.
In March 1941, a Royal Navy destroyer managed to spot a German submarine at night using the Type 286M and then rammed the submarine, sending it to the bottom. However, that was basically nothing more than a stroke of luck. A "Type 286P" with a steerable antenna would be introduced in mid-1941.
* The Royal Navy was working on a better solution even as the Type 286 was going into service, in the form of a 50 cm (600 MHz) radar for naval gunfire direction. A prototype set was available by the end of 1938, and put through successful sea trials in mid-1939. Designs for a production set for surface fire control, the "Type 284", and for anti-aircraft fire control, the "Type 285", were in place in 1940 and were being delivered to the Royal Navy in 1941.
Both the Type 284 and Type 285 used "Yagi" antennas, essentially a row of dipoles of increasing size mounted on a rod, with the beam generated along the axis of the rod. The antennas, which workers also called "fishbones" for their appearance, were arranged at slightly different angles away from the centerline of the radar, with each side driven in an alternating fashion. The returns to each side would be different until the target was on the centerline. This technique, known as "lobe switching", could provide very precise azimuth angles.
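In later terminology this is sequential lobing: alternate between two beams squinted either side of the boresight, and the normalized difference of the two returns gives an error signal that is zero exactly on the centerline. A sketch with a Gaussian stand-in for the real antenna lobes (the squint and beamwidth values are made up for illustration):

```python
import math

SQUINT = 2.0      # degrees each lobe is offset from the boresight
BEAMWIDTH = 6.0   # width parameter of the stand-in Gaussian lobe

def lobe_gain(angle_off_lobe_deg):
    return math.exp(-(angle_off_lobe_deg / BEAMWIDTH) ** 2)

def error_signal(target_off_boresight_deg):
    """Zero on boresight; the sign says which way to slew the antenna."""
    left = lobe_gain(target_off_boresight_deg + SQUINT)
    right = lobe_gain(target_off_boresight_deg - SQUINT)
    return (right - left) / (right + left)

for off in (-4, -1, 0, 1, 4):
    print(f"target {off:+d} deg -> error {error_signal(off):+.3f}")
```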
Both the Type 284 and Type 285 had horizontal lobe-switching. It is unclear if the Type 285 had vertical lobe-switching, which would have been handy for an air-defense radar.
All radar users learned sooner or later that such a powerful tool was of limited use without the proper procedures in place to make good use of it. Radar was a new thing and the Royal Navy had to learn by doing. At first, the Admiralty imposed strict limits on the use of radar, restricting it to one sweep every five minutes, in order to confound German radio direction finding equipment. Captains soon began to ignore the restrictions since the usefulness of radar outweighed its liabilities, and eventually the restrictions were formally lifted. Resourceful Royal Navy officers began to see the variety of things they might accomplish with radar, and began to organize central electronic command posts on their vessels.
The value of radar would be proven on the night of 27 March 1941, when the British battleship VALIANT and the cruisers ORION and AJAX, all equipped with radar, jumped an Italian force consisting of three cruisers and four destroyers off the southern coast of Greece at Cape Matapan. All the Italian warships, except for two of the destroyers, went to the bottom.
* The British Army also set up their own radar lab in October 1936, sited at Bawdsey Manor, and directed by Dr. E.T. Paris and Dr. A.B. Wood. Their initial work was on a "Mobile Radar Unit (MRU)", which was basically a version of Chain Home that could be bundled up and moved. It used much of the same electronics gear, but of course used transportable masts about 20 meters (66 feet) tall, instead of the big towers used by fixed-site CH stations, and operated at around 7 meters (42.9 MHz).
The MRU was picked up by the RAF in 1938, acquiring the formal designation of "AMES Type 9" in 1940. British Army researchers then moved on to the development of "Coastal Defense (CD)" sets to direct coastal artillery, and "Gun Laying (GL)" sets to direct anti-aircraft guns and searchlights.
The CD set was based on Bowen's AI work, operating at 1.5 meters (200 MHz). It was operational by the spring of 1939 and went into production soon after. It used a steerable antenna with lobe switching and had much better accuracy than Chain Home, though only half the range. The CD set was put into service with air defense sites, as well as coastal defense sites, acquiring the formal designation of "AMES Type 2" in 1940.
It was quickly realized that the CD set could just as easily be used to pick up low-flying intruders that would escape CH. In August 1939, on Watson-Watt's recommendation, the Air Ministry decided to install one at each Chain Home station. In the air defense role, the set was known as "Chain Home Low (CHL)", with those used outside of Britain referred to as "Chain Overseas Low (COL)" or formally "AMES Type 5". It could be put on a tower to perform the functions of both CH and CHL. Early models of the CHL had separate transmit and receive antennas, and an A-scope display.
* The GL effort proved less impressive. About 400 GL Mark I sets were made, followed by about 1,600 GL Mark IIs. They were crude radars, operating in the band from 5.5 to 3.5 meters (54.6 to 85.7 MHz). They were capable of ranging but not targeting, which still had to be done by eye. The limitations of GL reflected the entire army radar effort. For the first years of the war, the British Army lived up to the stereotype of stodginess that the Air Ministry had transcended.
The GL Mark II did have its fans. When the Soviet Union joined the war against Hitler after the Nazi invasion of the USSR in June 1941, the British sent the Soviets a large quantity of GL Mark IIs. While the Soviets had developed relatively crude "RUS-1" and "RUS-2" fixed-station radar sets and fielded them in small numbers, the GL Mark II was simple, effective to a degree, and far better than anything else the Soviets had. They designated the set the "SON-2" and produced a limited number themselves, alongside the hundreds supplied by the British. They would be handed improved Western radars later.
* The South Africans also developed radar in parallel with British efforts. Dr. Basil Schonland, Director of the Bernard Price Institute in Johannesburg, learned about British radar from a highly-placed visitor in 1939. By the end of that year, the institute had developed a working experimental prototype. By March 1940, they had an operational coastal-defense set, designated "JB" for "Johannesburg", ready for service.
The JB operated at 3.5 meters (85.7 MHz) with a peak power of 5 kilowatts, and used a steerable dipole array. It was built entirely from locally-manufactured components. Improved versions of the JB would follow South African forces to the Mediterranean.
* The deployment of radar as an operational system and not just an experimental toy led the British to confront a problem that acquired the designation "identification friend or foe (IFF)". IFF was just what it said, figuring out who was friend and who was foe, so friends could shoot foes and not shoot other friends.
IFF was a particular problem with aircraft. Picking out a proper target in the sky during a fast-moving dogfight was difficult, and in the First World War all the combatants had developed distinctive national insignia for their aircraft to protect them from friends. Radar greatly compounded the IFF problem, since a target appeared as no more than a featureless blip on a screen. There had to be some way for the radar to perform IFF, and to complicate matters any scheme used should not reveal the aircraft's presence or location to an enemy, or be easily duplicated by an enemy intruder.
Even before the introduction of radar, the RAF had developed a tracking system for directing fighters known as "Pip Squeak", which used direction-finding stations to triangulate the position of a fighter based on a tone emitted by the fighter's radio for 14 seconds out of every minute, unless the pilot was talking over the radio.
The problem with Pip Squeak was that it wasn't easy to integrate with the radar network. It would be preferable to have an IFF on an aircraft that the radar itself could identify. In 1938, Bawdsey researchers had tinkered with a "passive" radar reflector mounted on fighters and tuned to Chain Home frequencies as a means of marking friends. This was supposed to ensure that friendly fighters were brighter to CH than foes, but it was too simplistic an approach. The magnitude of radar reflections depends not only on a large number of environmental factors but on the angle at which the radar beam hits the aircraft, and it proved impossible to consistently determine which aircraft were carrying passive reflectors and which were not. Clearly, a more sophisticated "active" electronic IFF system was needed.
The result was "IFF Mark I", which was the first IFF "transponder". On receiving a radar pulse in the proper wavelength range, it would transmit a response pulse that rose in amplitude, allowing a radar operator to identify it as belonging to a "friend". IFF Mark I went into operation in late 1939, with a thousand sets built. It was triggered by CH radar transmissions. It was, however, difficult to use, since aircrew had to adjust it in flight to get it respond properly, and it didn't respond properly about half the time.
It was quickly followed by "IFF Mark II", which had been in development even before the introduction of Mark I. Mark II could respond not only to Chain Home signals, but also to the 7 meter (42.9 MHz) signals of the MRU, the 1.5 meter (200 MHz) signals of Chain Home Low and Navy sets, and the 3.5 meter (85.7 MHz) signals of Army sets. Unfortunately, though it worked better than IFF Mark I, Mark II was overly complicated and still required in-flight adjustments. IFF was a sticky problem, and getting it to work right was going to take some effort.
Incidentally, the British designation "IFF" has stuck to the technology to this day, probably because it was hard to think of any more sensible name to call it. That partly compensates for the triumph of Yank terms like "radar" and "sonar" over the British terms "RDF" and "ASDIC".
Amantine Aurore Lucile Dupin, Baronne Dudevant (July 1, 1804 – June 8, 1876), best known by her pseudonym George Sand, was a French novelist and feminist.
The Devil's Pool is the most popular of George Sand's novellas and her best-selling work in France today. Illustrating Sand's brevity, liveliness, and exemplary storytelling, the tale deals with many of her characteristic themes—the relations between the sexes, the plight of the underprivileged, and the role of fantasy in human life—making it an ideal introduction to her work.
Madam Broshkina's .prc post also includes the book in its original language.
Please note that the filename ending in '_1200.imp' is for the REB 1200. The other one is for the EBW 1150.
This work is assumed to be in the Life+70 public domain OR the copyright holder has given specific permission for distribution. Copyright laws differ throughout the world, and it may still be under copyright in some countries. Before downloading, please check your country's copyright laws. If the book is under copyright in your country, do not download or redistribute this work.
To report a copyright violation you can contact us here
Built of lighter-than-air bubbleTech, these super-efficient airplanes also incorporated a skin surface made of doped diamond, which gave the plane super-strong construction. Doping the diamond layers with just the right impurities made them capable of generating power from the surrounding light hitting the plane's jewel-like skin. This power would be stored, or directed to a fist-sized but very powerful superconducting motor to power the plane's flight.
The great volume of unused bubbleTech that filled the plane's structural elements was used for onboard electrical storage. An airGel plane could also be flash-charged at an airport, or by a 'tanker' plane, in a matter of seconds. Newer plane models could be literally flash-charged from ground stations that employed banks of lasers tuned to the exact wavelength that an empty airplane's particular light-to-electricity skin responded to.
The superconducting armatures of an airGel plane's electric turbines, spinning in their magnetic bearings, were super efficient. When recharged from the photoelectric outer diamond skin or by a photoelectric ground-based recharging station, airGel planes were totally non-polluting.
Because the airGel plane's outer surface could be adjusted by its underlying servoGel layers, the airfoil could be reshaped to best suit the changing air pressure at higher altitudes. This gave the electric-powered planes the ability to fly extremely high, where they would land on floating spacePorts. These immense structures were rigid balloons, also made from lighter-than-air bubbleTech: layer upon layer of stressed-diamond skin, all superbly braced by an intricate network of interconnected diamond trusses and braces.
From any spacePort, the airGel plane's passengers and cargo could catch the next 'sling-shot'-assisted space plane launch, or simply visit the many levels of the spacePort. There were those who would also spend many moments in awe of the incredible view of the earth as it traveled along many thousands of feet below. Very much like airGel planes and stormBoards, the enormous outer skin of the gigantic bubbleTech, torus-shaped structure of a spacePort was capable of generating electricity. Even with its own, not small, energy needs, each spacePort generated a lot of excess power. Because of the strict limits on intra-atmospheric energy-beaming, most of the excess was transferred into the energy storage areas of planes capable of returning to earth by gliding. Some of the spacePort's power was used to collect water vapor from the atmosphere, to be broken down electrically into the oxygen and hydrogen used as fuel and air by the growing fleet of corporate and private inter-orbital spacePlanes.
The spacePlanes were launched through the 1000-foot keel structure that pierced the center of the spacePort torus and hung below the giant doughnut shape, functioning as a stabilizer much like the keel on an old surface-water sailboat from Earth's old Twen'cen. The electromagnetic, 'rail-gun'-like magnetic sling-shot that ran straight up through the keel to launch the space planes meant they used very little of the hydrogen and oxygen they carried to achieve orbit. That left them plenty for maneuvering between the Spacer MegaCorp Clans' Orbital Domains, and for de-orbiting, so the surplus H2 and O2 could be off-loaded at the orbital domains for water and air supplies, as well as power or heating.
By using prophetware and small superconducting-motor-powered turbines placed around the outer edge of the torus, the spacePorts maintained their proper ELOPs, or Extremely Low Orbit Positions. Thus the spacePort captains and pilot/navigators were able to align the space plane's launch angle to near perfection, while also keeping the Terra-legged tourists from becoming 'sea-sick' while bobbing in the wispy ocean of the earth's outer atmosphere.
Copyright (c) 1996-2002 Lee Skidmore
Wireless Sensor Technologies and Applications
Excerpt: Recent years have witnessed tremendous advances in the design and applications of wirelessly networked and embedded sensors. Wireless sensor nodes are typically low-cost, low-power, small devices equipped with limited sensing, data processing and wireless communication capabilities, as well as power supplies. They leverage the concept of wireless sensor networks (WSNs), in which a large (possibly huge) number of collaborative sensor nodes could be deployed. As an outcome of the convergence of micro-electro-mechanical systems (MEMS) technology, wireless communications, and digital electronics, WSNs represent a significant improvement over traditional sensors. In fact, the rapid evolution of WSN technology has accelerated the development and deployment of various novel types of wireless sensors, e.g., multimedia sensors. Fulfilling Moore's law, wireless sensors are becoming smaller and cheaper, and at the same time more powerful and ubiquitous. [...]
Share & Cite This Article
Xia, F. Wireless Sensor Technologies and Applications. Sensors 2009, 9, 8824-8830.View more citation formats
Xia F. Wireless Sensor Technologies and Applications. Sensors. 2009; 9(11):8824-8830.Chicago/Turabian Style
Xia, Feng. 2009. "Wireless Sensor Technologies and Applications." Sensors 9, no. 11: 8824-8830.
Notes: Multiple requests from the same IP address are counted as one view.
“Eggs of the Living Dead Meet America’s Most Polluted Lake”
Dr. Nelson G. Hairston Jr., Frank H.T. Rhodes Professor of Environmental Science, Department of Ecology and Evolutionary Biology at Cornell University, will lecture on Oct. 24 on the effects of pollution and cleanup efforts on ecosystems. Onondaga Lake in New York is reputed to be the most polluted lake in North America because of chemical industry and municipal wastes, including phosphorus, salt, heavy metals, and PCBs. The mix of pollutants changed over a century of inputs as industries shifted and environmental controls were implemented. The lake is now a Superfund site, and subsequent cleanup efforts have substantially improved its condition. Hairston and his colleagues have traced the effects of pollution on lake-ecosystem functioning by studying changes in the species of Daphnia present in the lake. Daphnia is a genus of microscopic crustaceans that can produce diapausing eggs. These eggs have no detectable metabolism, so they appear "dead," yet they can still be hatched from some lake sediments after up to 300 years. Diapausing eggs extracted from sediment cores of different ages can be hatched or analyzed genetically to reveal which species were present at a given time in history. By sequencing their nucleic acids (RNA and DNA), Dr. Hairston has uncovered an intriguing story of invasion by exotic pollution-tolerant Daphnia species as the native species disappeared, followed by the return of native species as the pollution was cleaned up.
Drowning is the second leading cause of death for children between the ages of 1 and 19, according to the American Academy of Pediatrics. Teaching your child to swim can help her know what to do if she accidentally falls into a pool or other body of water, and can help her develop a lifelong love of aquatics.
The Right Time
No parent wants to think about their child drowning, and it's natural to want to protect babies and toddlers by teaching them to swim. However, the American Academy of Pediatrics recommends waiting until your child is about 4 years old to begin swim lessons. At this age, motor skills are more developed and a child can voluntarily hold her breath for several seconds. While you can teach a younger child to propel herself through the water, flip over and float on her back, she should be within arm's reach of an adult at all times in the water to prevent drowning.
Feeling Comfortable in the Water
A critical step in teaching your child to swim is letting your child get a feel for the water. Pool water is colder than bath or shower water, and pool chemicals can burn the eyes and nostrils. Begin your first lesson by teaching your child how to enter and exit the water safely using the pool steps. Staying within arm's reach, make a game of going hand-over-hand around the pool and climbing out at the ladders. Let her get a feel for the strange sensation of moving through chest-deep and waist-deep water while walking, trying to run or playing games such as tossing a ball back and forth. Progressing beyond this step before your child is comfortable in the water can make her fearful during the rest of the learn-to-swim process.
Holding Your Breath
Keep swimming lessons fun with age-appropriate games and activities. Babies and toddlers enjoy rhymes and songs such as "Ring Around the Rosy" where everyone takes a dip under the water at the end of the song to get accustomed to not breathing underwater. Kids aged 4 to 9 enjoy retrieving underwater objects to develop their breath-holding skills. By age 6, kids are ready to practice the skill for its own merit, especially in pairs or groups. Holding hands and alternating bobbing is one fun way kids this age can learn rhythmic breathing. Diving to retrieve diving rings or other objects on the pool's bottom is another.
Kicking and Stroking
A child younger than 4 or 5 doesn't have the developmental skills to do specific kicks associated with swimming strokes, but she can undulate her body with a modified frog kick to propel herself through the water while you are holding on to her. When your child develops the ability to alternately kick her legs for a flutter kick, you can develop both body position and kicking skills with the Superman game. Have her extend her arms in front of her, kicking her legs behind while pretending to be Superman flying over the city. You can place toys on the bottom for her to dive down and "rescue" as well. Kids older than 6 respond well to flutter kick races or contests to see who can make the biggest kick. Once the kids have the hang of the Superman exercise, it's just a matter of teaching them to pull alternately with their arms and turn slightly on their side to grab a breath. The thrill of actually swimming usually precludes the need for games at this stage.
Identical twins don’t share everything. The mix of viruses in a person’s gut, a new study says, is unique to each of us, even if we share nearly all our DNA with another person. That is, at least according to our poop.
This year scientists have been working to decode the genetics of the beneficial microbes that live inside us, like the bacteria that help us digest food. But those trillions of bacteria have partners of their own—beneficial viruses. Jeffrey Gordon and colleagues wanted to see what those viruses were like, and how they differed from person to person. To do it, they studied fecal samples that came from four sets of identical twins, as well as their mothers.
Each identical twin had virus populations that didn’t resemble those of their sibling—or anybody else, for that matter.
Remarkably, more than 80 percent of the viruses in the stool samples had not been previously discovered. “The novelty of the viruses was immediately apparent,” Gordon said. The intestinal viromes of identical twins were about as different as the viromes of unrelated individuals [MSNBC].
In addition, those viruses appeared to be stable over time, as opposed to the ever-shifting bacterial populations in people. And the virus-bacterium relationship in our gut, the study suggests, is different from that in many other places. Viruses that infect bacteria and take advantage of them to replicate are called bacteriophages, and the two often enter an evolutionary arms race of new attacks and defenses.
Not inside us, though.
When the researchers probed deeper, they found that many of the bacteriophages carried bacterial genes that help microbes survive the anaerobic conditions in the colon. “You could see that these viruses were porting around genes that could benefit their host bacteria,” Gordon says. If the viruses transfer those genes to other bacteria that don’t normally carry them, that could help genetically disadvantaged bacteria evolve to live better in the colon [Science News].
If our gut viruses are truly unique, then the question for future research becomes: Why? And how does one’s unique viral population become established?
Gordon’s study also shakes up our picture of who’s the boss. We’ve talked before about humans’ reliance on our resident bacteria, without which we could not survive. But if bacteria are reliant upon viruses to shake up their genetics and help them survive the harsh environment of human intestines, are not viruses the true lords of our guts? Says microbiologist David Relman:
“It could be that viruses are the real drivers of the system because of their ability to modify the bacteria that then modify the human host,” he says. “So this study is in some ways looking into the genesis of the human body by seeing what viruses within it are up to” [Nature].
Image: Gordon et al.
Description and anatomy
The structural and functional similarity between the maxillae and the legs may be a sign of primitive organization; the maxillae are not specialized, as they are in other crustaceans.
Ecology
Cephalocaridans are found from the intertidal zone down to a depth of 1500 metres, in all kinds of sediments. They feed on marine detritus. To bring in food particles, they make water currents with the thoracic appendages like the branchiopods and the malacostracans. Food particles are passed forward along a ventral groove, leading to the mouthparts.
Degradation and Steady-State Performance
Unfortunately, SSDs are not always as fast as in their "fresh" state. In most cases their performance goes down after some time, and in real life we deal with completely different write speeds than what we see on the diagrams in the previous chapter of our review. The reason for this phenomenon is the following: as the SSD runs out of free pages in the flash memory, its controller has to clear memory page blocks before saving data into them, which causes substantial delays. However, modern SSD controllers can alleviate the performance drop by erasing unused flash memory pages ahead of time, when idle. They use two techniques for that: idle-time garbage collection and TRIM.
Of course, users are more interested in the consistent performance of their SSDs over a long period of time rather than the peak speed they are going to see only during the initial short-term usage period, while the drive is still “fresh”. The SSD makers, however, declare the speed characteristics of “fresh” SSDs for marketing reasons. That’s why we decided to test the performance hit that occurs when a “fresh” SSD becomes a “steady” one.
To get a complete picture of SSD performance degradation we ran special tests based on the SNIA SSSI TWG PTS (Solid State Storage Performance Test Specification) methodology. The main idea of this approach is to measure write speed consecutively in four different cases. First we measure the “fresh” SSD speed. Then we measure the speed after the SSD has been fully filled with data twice. The third test occurs after a 30-minute break during which the controller can partially restore performance by running the idle-time garbage collection. And finally, we measure the speed after issuing a TRIM command.
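Spelled out as pseudo-Python, the four-phase sequence looks like the sketch below. The three callables are placeholders for whatever actually drives the benchmark (an IOMeter or fio wrapper, vendor tools, etc.) — they are assumptions, not real APIs — and the workload is the one described below: 4 KB aligned random writes at queue depth 32 with pseudo-random data.

```python
import time

def degradation_test(drive, measure_random_write, fill_drive, trim_drive):
    """SNIA-PTS-style four-phase write test, per the method described above.
    measure_random_write/fill_drive/trim_drive are hypothetical hooks."""
    results = {}
    results["fresh"] = measure_random_write(drive)       # 1: out-of-box speed

    for _ in range(2):                                   # 2: fill the full span twice
        fill_drive(drive)
    results["dirty"] = measure_random_write(drive)

    time.sleep(30 * 60)                                  # 3: idle garbage collection
    results["after_gc"] = measure_random_write(drive)

    trim_drive(drive)                                    # 4: TRIM the freed space
    results["after_trim"] = measure_random_write(drive)

    fresh = results["fresh"]
    return {phase: 100.0 * speed / fresh for phase, speed in results.items()}
```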
We ran the tests in synthetic IOMeter 1.1.0 RC1 benchmark, where we measured random write speed when working with 4 KB data blocks aligned to flash memory pages at 32 requests queue depth. The test data were pseudo-random. The following diagram shows the history of the relative speed changes, where 100% refers to the SSD performance in “fresh-out-of-box” state.
Plextor SSDs feature True Speed technology which helps ensure high performance even without the TRIM command. The technology refers to an aggressive and intellectual garbage collection technique which is actually employed by many other SSDs with Marvell controllers as well as Corsair's LAMD-based solutions. And it works well. The Plextor SSDs do their background garbage collection perfectly. Their write performance can be restored to 60-80% of the out-of-box speed even when the SSD is filled with data to the brim. That's very good.
It is no wonder then that Plextor’s TRIM-triggered garbage collection works ideally, restoring each drive to its original write performance. It means that Plextor SSDs are going to always deliver consistent and predictable performance in TRIM-supporting environments, e.g. in recent versions of Microsoft OSes.
Since the characteristics of most SSDs do change once they transition from fresh out-of-the-box state into steady state, we measure their performance once again using CrystalDiskMark 3.0.1 benchmark. The diagrams below show the obtained results. We use random data writing and measure only performance during writes, because read speed remains constant.
As the SandForce platform is losing its ground, the diagrams showing steady-state write performance of SSDs become more like the diagrams that show their out-of-box write performance. Performance degradation only plagues the SF-2281 controller while the others handle the TRIM command perfectly and keep their performance high throughout their entire service life.
So, we can’t add anything new about the Marvell-based Plextor SSDs here. Yes, they are not very fast at writing, but typical SSD usage scenarios are all about reading. Moreover, desktop computers do not produce a long disk request queue.
Breast-feeding does have a positive long-term effect on reducing blood pressure, research has suggested.
There are undisputed benefits to breastfeeding
The study, by Bristol University, suggests that breast-fed babies grow up to have lower blood pressure than their bottle-fed counterparts.
If true, the finding, published in the journal Circulation, could mean breast-fed babies are less likely to develop heart disease.
However, some experts doubt that breast-feeding has a protective effect.
In a paper published in the British Medical Journal in November, researchers from St George's Hospital in south London examined previous research and concluded that some findings might have exaggerated the benefits.
High blood pressure
Blood pressure at or above 140mm Hg when the heart is contracting - systolic
Blood pressure at or above 90mm Hg when the heart is relaxing - diastolic
The new study focused on 4,763 children from birth to the age of seven.
The researchers found that children who had been breast-fed had, on average, a systolic pressure reading 0.8 mm Hg lower than those who were bottle-fed.
Diastolic pressure was also lower - on average by 0.6mm Hg - for breast-fed babies.
The findings held even when other factors such as birth weight, and mother's socio-economic status were taken into consideration.
The researchers found that the longer a baby was breast-fed, the larger the effect on systolic blood pressure appeared to be. However, no such effect appeared to apply to diastolic blood pressure.
Although the difference between breast-fed and bottle-fed children were relatively small, it is possible they could still be significant.
Lead researcher Dr Richard Martin said a 1% reduction in systolic blood pressure across the population would prevent 2,000 premature deaths a year in the UK.
He said: "Around 40% of all infants in the USA or UK are never breast-fed.
"If breast-feeding rose from 60% to 90%, approximately 3,000 deaths a year may
be prevented among 35 to 64-year-olds."
Dr Martin told BBC News Online that his study was likely to generate robust findings because it had followed the children from birth and was based on a large sample size.
He said: "To assume the effect of breast-feeding lasts into adulthood is still something of a leap of faith, but it does seem that blood pressure levels are set early in life."
The Bristol team believe that the nutritional content of breast milk may be the key.
Breast milk contains long-chain polyunsaturated fatty acids, compounds thought to affect the development of blood vessels.
Infant formula supplemented with the same fatty acids has also been associated with lower blood pressure.
In addition, breast-fed babies tend to consume less sodium, which is closely linked to blood pressure.
Formula feeding can also cause babies to eat more than they need and, in some babies, to put on weight too rapidly. Excess weight is another risk factor for high blood pressure.
Lower blood pressure is directly linked to lower risk of heart attack, stroke, kidney disease and other related illnesses.
Separate research has also shown that breast-fed babies are less likely to be overweight, have fewer behavioural problems and may be more intelligent.
The National Childbirth Trust is a strong supporter of breast-feeding.
A spokeswoman said: "Breast-feeding is justified because it is the natural way of feeding a baby."
Princeton report charts step-by-step path toward nuclear weapons-free Middle East
Posted December 17, 2013; 09:00 a.m.
The recent multination nuclear agreement with Iran could serve as a first step toward a Middle East free of nuclear weapons and other weapons of mass destruction, according to researchers on the International Panel on Fissile Materials (IPFM). The group, based at Princeton University and made up of nuclear experts from 18 countries, outlined the other actions that would be required to reach that goal in a recent report.
All states in the region — including Iran — would have to agree to steps that permanently reduce the risk of civilian nuclear facilities being used as a cover for secret nuclear-weapons programs, the researchers wrote in a report titled "Fissile Material Controls in the Middle East" (.pdf). Under such a zone, Israel — the only regional state with nuclear weapons — would have to stop producing plutonium and highly enriched uranium, the key materials for nuclear weapons. Israel would also have to declare and begin to reduce its current stockpiles, eventually eliminating them.
"Together, these measures would bring about a nuclear weapon-free Middle East and would make that zone more robust when in force," said Frank von Hippel, a founder of Princeton's Program on Science and Global Security, co-chair of IPFM and professor emeritus of public and international affairs at the Woodrow Wilson School of Public and International Affairs.
The idea of a nuclear weapons-free zone has proven successful in Southeast and Central Asia, Africa, Latin America and the Caribbean. In 1974, a nuclear weapons-free zone in the Middle East was first proposed by Iran and Egypt as an attempt to roll back Israel's nuclear weapons and restrain further proliferation in the region. Since then, all Middle East countries have signed the United Nations' Treaty on the Non-Proliferation of Nuclear Weapons (NPT), except for Israel.
The United States, Russia and Britain agreed in 2010 to sponsor an international conference to lay the groundwork for a Middle East free of nuclear weapons and other weapons of mass destruction. In late October 2013, a preparatory meeting was held in in Switzerland that brought together diplomats from Iran, Israel, the Arab states and the United States.
IPFM estimates that Israel has produced enough plutonium to make on the order of 200 nuclear warheads, assuming each weapon would need four to five kilograms. A nuclear arsenal of this size would be the fifth largest in the world — larger than Britain's, almost the same size as China's and about two-thirds as large as France's. It is believed that Israel's plutonium is produced in a reactor housed at the Negev Nuclear Research Center near Dimona, and that an underground reprocessing plant adjoining the reactor is used to separate the plutonium from the spent nuclear reactor fuel, the report notes.
"By shutting down the Dimona reactor and reprocessing plant, Israel would cap the amount of plutonium that it could use to make nuclear weapons," said Zia Mian, a research scientist with the Program on Science and Global Security. "This could be verified remotely by Israel's neighbors and would be the first step toward regional monitoring by prospective parties of a 'Middle East Weapons of Mass Destruction-Free Zone.'"
To reduce the risk of secret nuclear-weapons programs, the report recommends that all states in the region – especially Iran, the only country in the region with civilian uranium enrichment plants and an operating nuclear power reactor — commit to:
- A ban on the separation and use of plutonium;
- A ban on the use of highly enriched uranium as fuel for reactors;
- A limitation on uranium enrichment to the very low levels needed for power reactors;
- No stockpiles of enriched uranium but rather a "just-in-time" system of production; and
- Placing enrichment activities under multinational control.
"If such measures could be adopted globally, it would significantly strengthen the global nonproliferation regime and the foundation for a nuclear weapon-free world," said Harold Feiveson, who helped draft the NPT treaty and is a co-founder of the Program on Science and Global Security and a lecturer in public and international affairs at the Wilson School.
Several of the above measures were part of the recent November 2013 nuclear deal between Iran and a group of six world powers (the United States, Britain, China, France, Germany and Russia). Iran agreed that, for at least six months, it would not separate plutonium nor would it build a facility capable of doing so. Iran also agreed that it would not enrich uranium above the 5 percent level used for power reactors, and that it would reduce its stockpile of already enriched uranium. These measures, the report notes, would serve as significant barriers to any Iranian effort using civilian facilities to quickly and secretly produce materials for nuclear weapons.
Although the report does not specifically discuss chemical and biological weapons, the researchers stressed the importance of all countries in the region complying with the Chemical Weapons Convention (CWC) and the Biological Weapons Convention. This has become particularly important in light of chemical weapons use in Syria in 2013, and the country's subsequent decision to agree to the CWC, declare its stockpile and destroy its weapons. The researchers said that Egypt and Israel, now the only countries in the region that have not joined the CWC, should follow suit.
To verify that all countries are complying with their commitments, the IPFM report suggests Middle East states consider a regional organization that could supplement the inspection activities of the International Atomic Energy Agency and the Organization for the Prohibition of Chemical Weapons, which work respectively to assure that countries are upholding the nuclear weapons and the chemical weapons agreements.
"Such an organization would provide all countries of the region an additional basis for confidence that all their neighbors are complying with the obligations they will undertake by joining a zone free of nuclear weapons and weapons of mass destruction," said Seyed Hossein Mousavian, who has been with the Program on Science and Global Security since 2010. He formerly served as a member of staff to Iran's Supreme National Security Council and as Iran's ambassador to Germany.
This is the 11th Middle East research report IPFM has issued, and it has also released its seventh annual Global Fissile Material Report detailing stocks of nuclear weapons materials held by countries around the world. Both reports were issued in October. The Middle East report has been presented at briefings at the United Nations in New York City; Doha, Qatar; Tel Aviv and Jerusalem, Israel; and Amman, Jordan.
IPFM is co-chaired by von Hippel and R. Rajaraman of Jawaharlal Nehru University, New Delhi. Its 29 members include nuclear experts from Brazil, Canada, China, France, Germany, India, Iran, Japan, South Korea, Mexico, the Netherlands, Norway, Pakistan, Russia, South Africa, Sweden, the United Kingdom and the United States.
Researchers at NASA's Armstrong Flight Research Center have patented a low-cost sound shield capable of reducing the noise of aircraft traveling at subsonic speed. The technology injects a high–molecular weight gas onto the aircraft surface, producing a local area of supersonic flow. This blocks sound waves traveling in all directions without diminishing aircraft performance or efficiency. The aeronautic sound shield represents an important advance over other techniques previously used to suppress aircraft noise. Although techniques such as attaching physical barriers to the aircraft or injecting gases into the jet engine exhaust may decrease noise levels, they also typically reduce aerodynamic performance. Furthermore, these methods rarely address "upstream" noise produced by the leading edge of the aircraft. Armstrong's sound shield also reduces wear and fatigue of aircraft components and offers a low-cost design.
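The physics behind this claim can be illustrated with a rough calculation. The speed of sound in an ideal gas falls as molar mass rises (c = sqrt(gamma * R * T / M)), so flow at a fixed subsonic speed can be locally supersonic relative to a layer of heavy gas on the surface, and supersonic flow blocks sound from propagating upstream. The sketch below is a back-of-envelope illustration, not taken from the patent; the choice of sulfur hexafluoride as the heavy gas and the 240 m/s flow speed are assumptions made for the example.

```python
import math

R = 8.314   # J/(mol*K), universal gas constant
T = 288.0   # K, roughly sea-level temperature

# Each entry: (heat-capacity ratio gamma, molar mass in kg/mol)
gases = {
    "air": (1.40, 0.0290),
    "SF6 (illustrative heavy gas)": (1.09, 0.1460),
}

flow_speed = 240.0  # m/s, an assumed subsonic cruise speed

for name, (gamma, molar_mass) in gases.items():
    c = math.sqrt(gamma * R * T / molar_mass)  # local speed of sound
    print(f"{name}: c = {c:.0f} m/s, local Mach = {flow_speed / c:.2f}")

# Output: relative to air the flow is subsonic (Mach ~0.71), but relative
# to the SF6 layer the same flow is supersonic (about Mach 1.8), so sound
# waves cannot travel upstream through that layer.
```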
- Reduces noise pollution: Armstrong's sound shield suppresses noise emanating in all directions, dramatically decreasing noise pollution.
- Increases longevity of aircraft components: By suppressing sound waves, the shield decreases wear and fatigue on aircraft components.
- Aerodynamically advanced: Unlike currently available technologies, Armstrong's sound shield does not hinder aerodynamic efficiency or performance.
Potential applications include:
- Commercial aircraft
- High-speed rail
- Gas turbines
This technology is part of NASA's technology transfer program. The program seeks to stimulate development of commercial uses of NASA-developed technologies. NASA is flexible in its agreements, and opportunities exist for licensing and joint development. Armstrong is interested in a partnership to commercialize this technology.
If you would like more information about this technology or about NASA's technology transfer program, please contact:
Technology Transfer Office
NASA's Armstrong Flight Research Center
PO Box 273, M/S 1100
Edwards, CA 93523-0273
Phone: (661) 276-3368
Making a diagnosis of myeloma depends on finding abnormal plasma cells or their products somewhere in the body. Plasma cells are cells of the immune system that make antibodies when they are functioning normally. In myeloma, these cells begin to grow and divide abnormally. They make abnormal amounts of antibody-like proteins.
The growth of these plasma cells in the bone marrow can reduce the normal function of the bone marrow. It can also result in thinned, weakened bones that are likely to break. The abnormal antibody-like protein collects in the blood. It can cause problems with blood flow to the kidney and other parts of the body. Usually, symptoms related to these changes bring a patient to the doctor or raise a question of myeloma.
The diagnosis and prognosis of multiple myeloma include the following:
- Medical history
- Physical exam
- Testing
- Cytology
- Staging
- Prognosis
Diagnosis begins with a visit to the doctor. Sometimes, multiple myeloma is noted when blood tests are ordered for an unrelated reason. A biopsy will be needed to confirm the presence of myeloma cells.
The doctor will ask about your symptoms and medical history. You may be asked how your symptoms have progressed. You may also be asked about anything that may increase your risk of multiple myeloma, such as exposure to radiation or toxic chemicals.
The doctor will perform a complete physical exam. This will focus on uncovering evidence of bone damage, anemia, or impaired circulation, each of which might be the result of myeloma.
To help with the diagnosis, your doctor may need to order tests.
Your doctor may need pictures of your bones. This can be done with:
- X-rays
- Magnetic resonance imaging (MRI)
- Computed tomography scan (CT scan)
- Positron emission tomography/computed tomography scan (PET/CT scan)
Your doctor may need to test your bodily fluids. This can be done with:
- Urine tests
- Blood tests

A bone marrow aspiration or biopsy may also be needed.
Cytology is the study of cells. The cytology of cancer cells differs from normal cells. Doctors use the unique cellular features seen on biopsy samples to diagnose and assess cancer.
To diagnose myeloma, the doctor will look for abnormal plasma cells. A plasma cell labeling index, which measures the percentage of dividing plasma cells, is available in some labs. This test gives an idea of how fast the cancer cells are growing. A higher labeling index is associated with a worse prognosis. It means that there are more, faster reproducing plasma cells than there should be.
Staging is the process used to determine the prognosis of a cancer that has already been diagnosed. Staging is needed to make treatment decisions (such as surgery vs. chemotherapy). Several features of the cancer are used to arrive at a staging classification. The most common features are the size of the original tumor, extent of local invasion, and spread to distant sites (metastasis). Low staging classifications (0-1) imply a favorable prognosis. High staging classifications (4-5) imply an unfavorable prognosis.
The Durie-Salmon staging system is used to stage multiple myeloma. The amount of tumor in the body is estimated based on the following factors:
- Blood or urine level of abnormal antibody-like proteins—These are produced by myeloma cells.
- Blood level of calcium—High levels are linked to bone damage caused by myeloma cells growing in the bone marrow and destroying the surrounding bone as their mass expands.
- Bone damage evident on x-ray—This is also a result of destruction of bone caused by growing myeloma cells. This damage has a characteristic appearance on x-ray. Sometimes an x-ray alone will give good evidence of myeloma.
- Blood hemoglobin level—Hemoglobin is the red pigment in red blood cells that carries oxygen to the cells. Low levels may point to decreased production of red cells due to myeloma cells occupying the bone marrow.
- Blood level of beta-2-microglobulin—This is another protein produced by myeloma cells. Increased levels of this protein suggest a large amount of myeloma in the body.
The more myeloma cells and/or their products present in the body, the higher the stage and the worse the outcome. Patients with higher stage disease also tend to have more symptoms from their disease. Based on the Durie-Salmon system, staging of multiple myeloma is as follows:
Stage I
- A somewhat small number of myeloma cells are present. (This can be measured by plasma cell index.)
- Hemoglobin levels are slightly low.
- Bone x-rays show no damage or only one area of damage.
- Calcium levels are normal, indicating that there is not much bone damage.
- There is a small amount of abnormal antibody-like protein in the blood or urine.

Stage II
- A moderate amount of myeloma cells are present.
- Other factors fall in a range between Stage I and Stage III.

Stage III
- A large amount of myeloma cells are present.
- Hemoglobin levels are very low, indicating that the normal bone marrow cells are being crowded out.
- Calcium levels are high, indicating that there is a large amount of bone destruction.
- X-rays show more than three areas of bone destruction.
- A large amount of abnormal antibody-like protein is in the blood or urine.
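Because each stage is defined by simple threshold rules, the logic of the classification can be sketched in a few lines of code. This sketch is illustrative only: the function name, the simplified yes/no inputs, and the cutoffs are assumptions made for the example, while real Durie-Salmon staging applies precise laboratory values that are not reproduced here.

```python
def durie_salmon_stage(hemoglobin_very_low: bool,
                       calcium_high: bool,
                       bone_lesions: int,
                       protein_level_high: bool,
                       low_burden_profile: bool) -> str:
    """Toy illustration of the staging logic described above."""
    # Stage III: any high tumor-burden marker is present.
    if (hemoglobin_very_low or calcium_high
            or bone_lesions > 3 or protein_level_high):
        return "Stage III"
    # Stage I: low-burden profile with at most one area of bone damage.
    if low_burden_profile and bone_lesions <= 1:
        return "Stage I"
    # Stage II: everything falling between the two.
    return "Stage II"

print(durie_salmon_stage(False, False, 0, False, True))  # Stage I
print(durie_salmon_stage(True, False, 5, True, False))   # Stage III
```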
Prognosis is a forecast of the probable course and/or outcome of a disease or condition. Prognosis is most often expressed as the percentage of patients who are expected to survive over five or ten years. Cancer prognosis is an inexact process. Predictions are based on the experience of large groups of patients suffering from cancers at various stages. Using this information to predict the future of an individual patient is always imperfect and often flawed. But, it is the only method available. A prognosis may or may not reflect your unique situation.
The five-year survival rates for multiple myeloma based on stage are as follows:
THE SOCIAL POSSIBILITIES OF WAR
JOHN DEWEY

Severally and collectively mankind always builds better or worse than it knows. Even in the most successful enterprises aims and results do not wholly coincide. In executing our immediate purpose we have to use forces which are outside our intent. Once released, however, they continue to operate, and they bring with them consequences which are unexpected and which in the end may quite submerge the objects consciously struggled for. Such an immense undertaking as the present war is no exception. The will to conquer describes the immediate aim. But in order to realize that end all sorts of activities are set going, arrangements made, organizations instituted, as incidental means. After they have been called into being they cannot be whisked out of existence merely because the war has come to an end. They have acquired an independent being and in the long run may effect consequences more significant than those consciously desired. If, for example, one takes a cross section through the warring countries at present, one finds a striking rise in power of the wage-earning classes. Through the necessities of war, their strategic position in modern social organization has been made clear, and the Russian Revolution has brought the fact to dramatic self-consciousness. Is it not conceivable that some future historian may find this consequence outweighing any for which the war was originally fought?
If it is the unintended which happens, a forecast of the consequences of the war seems doubly futile, for it is hard enough to disentangle even the professed aims in such a manner as to make them precise and definite. Yet it is possible to see some of the forces which have been released by the war. One of them is consolidation: think of how the multitude of small German states of a century ago grew into the present German Empire. Consolidation has proceeded with the same certainty and acceleration as in the case of the multitude of small local railway systems which once sprawled over this country—and from the same causes. The war has speeded up the movement, and many of the commissions and arrangements which it necessitated will remain in operation—first in order to meet actual post war needs and then because there is no way of getting rid of them without uprooting too many other things which will have got linked up with them.
It is a mistake to think that the movement for the self-determination of nations, the releasing of nationalities now held in dependence, will arrest, much less reverse, the integrating movement. Cultural emancipation of nationalities and local autonomy within a federation are to be hoped for; if they are not attained, the war will have been fought in vain so far as its most important conscious objective is concerned. But even if this goes beyond local autonomy to the point of complete political independence of a new Bohemia, Poland, Ukrainia, Palestine, Egypt, India, it will not militate against the virtual control of the world by a smaller number of political units. The war has demonstrated that effective sovereignty can be maintained only by states large enough to be economically self-supporting. New nations could exist permanently only if guaranteed by some large political union, which would have to be more closely knit together than were the treaty-alliances which “neutralized” (till the war broke out) some of the smaller states of Europe.
To say, however, that the world will be better organized is not—unfortunately—the same thing as to say that it will be organized so as to be a better world. We shall have either a world federation in the sense of a genuine concert of nations, or a few large imperialistic organizations, standing in chronic hostility to one another. Something corresponding to the present anti-German federation, with minor realignments in course of time, might constitute one of these; the Central Empires and southeastern Europe another; Russia, it is conceivable, would go it alone, and the Oriental countries might make a fourth. In this case, we should have a repetition of the Balance of Power situation on a larger scale, with all its evils, including the constant jockeying to secure by threat and bribe the allegiance of Scandinavia, Spain and some of the South American countries to one imperialistic federation or another.
The choice between these two alternatives is the great question which the statesmen after the war will have to face. If it is dodged, and the attempt is made to restore an antebellum condition of a large number of independent detached and “sovereign” states allied only for purposes of economic and potential military warfare, the situation will be forced, probably into the alternative of an imperially organized Balance of Power whose unstable equilibrium will result in the next war for decisive dominion.
The counterpart of the growth of world organization through elimination of isolated territorial sovereign states is domestic integration within each unit. In every warring country there has been the same demand that in the time of great national stress production for profit be subordinated to production for use. Legal possession and individual property rights have had to give way before social requirements. The old conception of the absoluteness of private property has received the world over a blow from which it will never wholly recover.
Not that arbitrary confiscation will be resorted to, but that it has been made clear that the control of any individual or group over their "own" property is relative to public wants, and that public requirements may at any time be given precedence by public machinery devised for that purpose. Profiteering has not been stamped out; doubtless in some lines of war necessities it has been augmented. But the sentiment aroused against profiteering will last beyond the war, while even more important is the fact that the public has learned to recognize profiteering in many activities which it formerly accepted on their own claims as a matter of course.
In short, the war, by throwing into relief the public aspect of every social enterprise, has discovered the amount of sabotage which habitually goes on in manipulating property rights to take a private profit out of social needs. Otherwise, the wrench needed in order to bring privately controlled industries into line with public needs would not have had to be so great. The war has thus afforded an immense object lesson as to the absence of democracy in most important phases of our national life, while it has also brought into existence arrangements for facilitating democratic integrated control.
This organization of means for public control covers every part of our national life. Banking, finance, the supervision of floating of new corporate enterprises, the mechanism of credit, have been affected by it to various degrees in all countries. The strain with respect to the world’s food supply has made obvious to all from the farmer in the field to the cook in the kitchen the social meaning of all occupations connected with the physical basis of life. Consequently the question of the control of land for use instead of for speculation has assumed an acute aspect, while a flood of light has been thrown upon the interruption of the flow of food and fuel to the consumer with a view to exacting private toll. Hence organization for the regulation of transportation and distribution of food, fuel and the necessities of war production like steel and copper.
To dispose of such matters by labeling them state socialism is merely to conceal their deeper import: the creation of instrumentalities for enforcing the public interest in all the agencies of modern production and exchange. Again, the war has added to the old lesson of public sanitary regulation the new lesson of social regulation for purposes of moral prophylaxis. The acceleration of the movement to control the liquor traffic is another aspect of the same fact. Finally, conscription has brought home to the countries which have in the past been the home of the individualistic tradition the supremacy of public need over private possession.
It may seem a work of supererogation to attempt even the most casual listing of the variety of ways in which the war has enforced this lesson of the interdependence, the interweaving of interests and occupations, and the consequent necessity of agencies for public oversight and direction in order that the interdependence may become a public value instead of being used for private levies. It is true that not every instrumentality brought into the war for the purpose of maintaining the public interest will last. Many of them will melt away when the war comes to an end. But it must be borne in mind that the war did not create that interdependence of interests which has given enterprises once private and limited in scope a social significance. The war only gave a striking revelation of the state of affairs which the application of steam and electricity to industry and transportation had already effected. It afforded a vast and impressive object lesson as to what had occurred, and made it impossible for men to proceed any longer by ignoring the revolution which has taken place.
Thus the public supervision and control occasioned by this war differ from that produced by other wars not only in range, depth and complexity, but even more in the fact that they have simply accelerated a movement which was already proceeding apace.
The immediate urgency has in a short time brought into existence agencies for executing the supremacy of the public and social interest over the private and possessive interest which might otherwise have taken a long time to construct. In this sense, no matter how many among the special agencies for public control decay with the disappearance of war stress, the movement will never go backward. Peoples who have learned that billions are available for public needs when the occasion presses will not forget the lesson, and having seen that portions of these billions are necessarily diverted into physical training, industrial education, better housing, and the setting up of agencies for securing a public service and function from private industries will ask why in the future the main stream should not be directed in the same channels.
In short, we shall have a better organized world internally as well as externally, a more integrated, less anarchic, system. Partisans are attempting to locate the blame for the breakdown in the distribution of fuel and the partial breakdown in food supplies upon mere inefficiency in governmental officials. But whatever the truth in special cases of such accusations, it is clear that the causal force lies deeper.
Fundamental industries have been carried on for years and years on a social basis; for public service indeed, but for public service under such conditions of private restriction as would render the maximum of personal profit. Our large failures are merely exhibitions of the anarchy and confusion entailed by any such principle of conducting affairs. When profit may arise from setting up division and conflict, it is hopeless to expect unity. That this, taken together with the revelation by the war of the crucial position occupied by the wage-earner, points to the socialization of industry as one of the enduring consequences of the war cannot be doubted.
Socialization, as well as the kindred term socialism, covers, however, many and diverse alternatives. Many of the measures thus far undertaken may be termed steps in the direction of state capitalism, looking to the absorption of the means of production and distribution by the government, and to the replacement of the present corporate employing and directive forces by a bureaucracy of officials. So far as the consequences of war assume this form, it supplies another illustration of the main thesis of Herbert Spencer that a centralized government has been built up by war necessities, and that such a state is necessarily militaristic in its structure.
On the other hand, it must be pointed out that in Great Britain and this country, and apparently to a considerable degree even in centralized Germany, the measures taken for enforcing the subordination of private activity to public need and service have been successful only because they have enlisted the voluntary cooperation of associations which have been formed on a non-political, non-governmental basis: large industrial corporations, railway systems, labor unions, universities, scientific societies, banks, etc. Moreover, the wage-earner is more likely to be interested in using his newly discovered power to increase his own share of control in an industry than he is in transferring that control over to government officials. He will have to look to politics for measures which will secure the democratization of industry from within, but he need not go further than this.
Reorganization along these lines would give us in the future a federation of self-governing industries with the government acting as adjuster and arbiter rather than as direct owner and manager, unless perhaps in case of industries occupying such a privileged position as fuel production and the railways. Taxation will be a chief governmental power through which to procure and maintain socialization of the services of the land and of industries organized for self-direction rather than for subjection to alien investors. While one can say here as in the case of international relations that a more highly organized world is bound to result, one cannot with assurance say which of two types of organization is going to prevail. But it is reasonably sure that the solution in one sphere will be congruous with that wrought out in the other.
Governmental capitalism will stimulate and be stimulated by the formation of a few large imperialistic organizations which must resort to armament for each to maintain its place within a precarious balance of powers. A federated concert of nations, on the other hand, with appropriate agencies of legislation, judicial procedure and administrative commissions would so relax tension between states as to encourage voluntary groupings all over the world, and thus promote social integration by means of the cooperation of democratically self-governed industrial and vocational groups. The period of social reconstruction might require a temporary extension of governmental regulation and supervision, but this would be provisional, giving way to a period of decentralization after the transfer of power from the more or less rapacious groups now in control had been securely effected.
The determination of the issue in one sense or the other will not, of course, immediately follow the conclusion of the war. There will be a long period of struggle and transition. But if we are to have a world safe for democracy and a world in which democracy is safely anchored, the solution will be in the direction of a federated world government and a variety of freely experimenting and freely cooperating self-governing local, cultural and industrial groups. It is because, in the end, autocracy means uniformity as surely as democracy means diversification that the great hope lies with the latter. The former strains human nature to the breaking point; the latter releases and relieves it—such, I take it, is the ultimate sanction of democracy, for which we are fighting.
Do you have mesothelioma?
Mesothelioma symptoms could easily be mistaken for the symptoms of several other conditions. It is important to take a test after experiencing the first signs of mesothelioma, so that the cancer is detected as soon as possible. Mesothelioma diagnosis is also very important, because there are different types of this cancer, and each requires a different treatment method. These are the treatment methods available:
Each treatment method consists of several variations, each meant to address a different form of the disease. Some of them are also used to relieve symptoms that can be quite difficult to deal with at times. Remember, mesothelioma cancer and its causes can be avoided if you use the proper protective equipment.
What is asbestosis?
Asbestosis is a medical condition caused by extreme exposure to asbestos over very long periods of time. Most of the time it occurs in miners, because they are easily exposed to asbestos in its raw form, which produces microscopic fibers that are easy to inhale. Asbestosis symptoms usually have to do with breathing problems, because the disease affects the lungs and lung capacity. Symptoms include:
- shortness of breath
- the person has a permanent feeling of illness
- dry cough
Symptoms appear over time, and their cause is hard to pinpoint. They develop into painful breathing and, eventually, cancer. Asbestosis-related symptoms can also include very restless sleep: not much rest is gained even if the person sleeps 7 or 8 hours. If lung cancer develops, help can be found at mesothelioma cancer centers around the world. Asbestosis from asbestos exposure is a very difficult disease to treat if discovered long after it started to develop.
Are you at risk of exposure to asbestos?
Asbestosis and mesothelioma are diseases caused by exposure to asbestos, and they both start with similar symptoms: shortness of breath, dry cough and pain in the chest and abdominal areas. Asbestos can be found in many items that you might have at home. Insulation, plumbing and construction materials could contain doses of asbestos that are able to cause serious health issues. Removal of asbestos and materials containing asbestos would be a very good idea. You might also be at risk if you live near large mining stations or factories that could use asbestos as an ingredient or in their manufacturing process. Make sure that you have a test taken once every year to avoid health problems in the future.
Ever since 1940, millions of Americans have been exposed to this carcinogen (a substance that causes cancer). Smoking and having a family member who works with asbestos have been shown to be directly linked to asbestos-related diseases. These and other risk factors have made these once-rare diseases—asbestosis and mesothelioma—more and more widespread.
STORY AND PHOTOS BY LOUISE R. SHAW
Clipper Staff Writer
SYRACUSE — Last year and the year before that and the year before that, when first graders walked from the sheep-shearing shed to the baby animal pens at Hamblin Dairy, they walked past a long line of cows.
Some cows would turn their heads, some kids would plug their noses, but each group showed considerable curiosity about the other.
This year those cows were gone.
“We had no choice,” said Lance Hamblin. “We held out as long as we could.”
Hamblin is the third generation to run what has been Davis County’s last dairy farm.
His children, the fourth generation, were excited, willing and ready to take over the farm, he said.
Instead, because the cost of feed is too high and the price of milk too low, their 111 dairy cattle were sold to Bliss Dairy in Delta.
“Everything was paid for,” said Hamblin of the farm he spent his whole life building. “We just had to pay operating costs, but we couldn’t.”
First graders from Davis County schools were still invited to tour the farm, a tradition that has gone on for so many years most can’t remember when it started.
Representatives from Future Farmers of America, Davis County 4-H, the Farm Bureau and the Davis County Conservation District, organized by USU Extension Services of Davis County, taught hundreds of students over two days about the importance of agriculture.
“If we didn’t have farms we wouldn’t have food,” said Becca Ferry, as she explained how seed becomes wheat, wheat becomes flour and flour becomes cereal and Oreo cookies.
She then helped them make living necklaces with seed.
“We can’t depend on food from other countries because we don’t know what pesticides they might use, and a lot of it is not good for you,” said Gary Jacketta. “Here in the U.S. it’s controlled.”
He then demonstrated the water cycle and how it can be negatively impacted by oil or fertilizer residue that runs off from residences.
Students watched a sheep being sheared and learned about how milk is stored and kept bacteria- free. They had a chance to get up close to baby pigs, chicks and a goat, and to pet a friendly llama.
Hamblin has no plans to sell the farm or to have the land developed, as has happened all around him.
His father has planted corn, but the likelihood of a dry year makes the success of that enterprise at risk, said Hamblin.
If he was forced to sell, he said his house would have to go with it because he couldn’t watch while his life’s work was torn down.
As long as his family owns the farm, he said the extension service is welcome to continue to host first graders every spring, for their day at his Utah century farm.
And if they do come, next year and the next, they will get the chance, like first graders before them, to learn about how important farms are.
2 January 2014
In 1963 there was great interest in many aspects of microwave engineering in particular, and in the burgeoning field of semiconductor electronics and circuit systems. The rapid communication of the latest advances in microwave components containing ferrite materials that offered non-reciprocal properties was served in part by the journal Applied Physics Letters but most other electronics-related activity in industry and universities had no outlet for rapid short announcements. Thus, a suggestion was made to the Council of the Institution of Electrical Engineers (IEE – now the Institution of Engineering and Technology, IET) that a new journal should be launched to fill the gap. It was agreed that Electronics Letters should be published bi-monthly and the first Issue appeared in March 1964.
It is hard to imagine how different the field of electronics was 50 years ago. In microwaves there was a concentrated effort on components related to radar and in the prospect of long haul communication using circular waveguides. Then, around 1966, a major development occurred when it was recognised that quartz fibre could provide an alternative to circular waveguides for long-haul communication. Many of the early theoretical announcements on that topic were published in Electronics Letters to be followed by a plethora of Letters describing experimental results as we entered the 1970s. Solid state lasers and amplifiers were also prominent.
Satellite communication systems were in competition at that time following the launch of Intelsat 1 in 1965. New antennas were required to more efficiently direct the signals to and from Earth to the satellite in geostationary orbit, and massive improvements in electronics occurred in the satellite payload with the emergence of integrated circuits and complex beam formers. At the satellite, arrays competed with shaped reflectors to direct and receive the beam. The problem of shaped beam diffraction synthesis by reflector shaping was first aired in Electronics Letters.
The first trials of undersea optical cables occurred in the mid-1970s, leading to the form we know today as trans-oceanic optical fibre cables. These provide the backbone of the Internet while satellites are the basis for TV distribution to the home, and for remote newscasters and digital transmission.
Consistent with these developments, the advances in computer power made possible the analysis of design problems that would have been impossible a decade earlier. These were reflected in Electronics Letters with contributions ranging from microwave filters to image processing. Countless times has ‘the lady with the feather in her hat’ appeared in our pages as research groups world-wide improved their algorithms, enabling better pictures to be formed with fewer bits.
As we moved to the 1990s we have seen a widening of the countries from which contributions have arrived. In the early days these came mainly from the UK and the USA but with the arrival of mobile communication systems, more contributions arrived from Asia. Handset antenna design was prominent and base station schemes for improving reliability by the use of multiple input/multiple output antennas were extensively explored. Continental Europe has always been a valuable source of high quality Letters from the early issues and this trend continues today.
With the success of Electronics Letters, other Letters journals have inevitably appeared. It is a tribute to the Editorial Staff of the IEE (later IET) and the rapid publication process that our journal continues to flourish in the face of stiff competition. Among new areas have come medical related topics and, of late, the emergence of a new class of artificial material known as a metamaterial. New materials such as Graphene hold great prospects for the future and we feel sure that, in time, Electronics Letters will provide an avenue for the rapid publication of components and systems employing this extraordinary material.
When we celebrated the 25th Year of Publication of Electronics Letters we could not have envisaged all the changes that have occurred over the 25 years to follow and it is gratifying to see that, in the capable hands of Professor Chris Toumazou and Professor Ian White, the journal continues to flourish. We wish it every success for the next 25 years but it will require a miracle of medical science if we are around to see the 75th Anniversary!
A PDF version (new window) of this feature article is also available.
Browse or search all papers in the latest or past issues of Electronics Letters on the IET Digital Library.
Do-it-yourself comes to biotechnology in the form of young geeks who tinker with DNA in the garage, the kitchen or wherever they can set up a lab bench.
According to AP reporter Wohlsen, these DIY scientists are passionate biology graduates or post-docs, savvy in the ways of molecular biology and computers and possessed of a libertarian streak. They want open access to information and oppose the kind of data control and patent rights they see embodied by academics, biotech firms or government bureaucrats. In this debut, the author profiles leading “biopunks” who believe that free access to crowd-sourced information is the key to rapid innovation. Their idea of a “hack” is not the breaking and entering of a computer system for illicit purposes, but a term that means they have gotten tools and materials on the cheap and used them to develop diagnostic tests, drugs or even new organisms. Indeed they have: Wohlsen describes a cheap PCR machine (for replicating bits of DNA), a woman who devised a test for a genetic disease that runs in her family and some ingenious designs for cancer-targeting drugs. Some biopunks have formed groups like DIYbio, which makes biology accessible to citizen scientists; others are futurist “transhumanists” who dream of extending life spans and implanting computer chips in their brains. If this information makes you anxious or worried about socio/legal/ethical issues, lack of regulation, privacy concerns or DIYers creating Frankensteins, Wohlsen’s discussions of these issues will hardly reassure. On the one hand, the FBI wrongly cracked down on one legit scientist (and now appears to be promoting friendly information exchanges). On the other, the suggestion that it’s cheaper to make your own weaponized anthrax or ricin, rather than a brand new microbe, is no comfort.
Though there are not yet any solutions to the legal and ethical issues, Wohlsen provides a timely airing of what may be going on in a backyard near you.
Providing free smoke alarms did not reduce fire related injuries in a deprived multiethnic urban population
QUESTION: Does providing free smoke alarms to a deprived, multiethnic population reduce fires and related injuries?
147 444 households in 40 electoral wards with Jarman scores ≥1 standard deviation above the national mean. (The Jarman score is a measure of material deprivation and increased healthcare needs.)
Wards were pair matched by Jarman score. 20 wards (73 399 households) were allocated to the intervention, which comprised distribution (door to door and through key local sites) of smoke alarms, with batteries, fittings, and fire safety brochures (in English and other local languages) targeted to households at high risk. Free installation was offered. One year later, postcards were sent to remind recipients to change the smoke alarm batteries. The aim was to provide smoke alarms to 25% of intervention households. 20 wards (74 045 households) were allocated to the control group and received no intervention.
Main outcome measures
Main outcome was fire related injuries resulting in attendance at an emergency department, hospital admission, or death. Any injury that resulted from fire in an occupied dwelling of a study ward was included. Other outcomes included fires attended by the fire brigade.
20 050 alarms were distributed to 19 950 households. The intervention and control groups did not differ for total fire related injuries, or hospital admissions and deaths (table). Similar results were found for the 78% of injuries judged to be potentially preventable by smoke alarms (table). The fire brigade attended 1603 residential fires. The groups did not differ for the incidence of attended fires.
Providing free smoke alarms did not reduce fire related injuries in a deprived multiethnic urban population.
- Anne Ehrlich, RN, MHSc
Rarely is a primary prevention trial of personal safety so well designed and conducted as the study by DiGuiseppi et al. Although the results were negative, the rigour of the cluster randomised design and the attention to other methodological aspects leaves little doubt as to the veracity of the principal outcome. DiGuiseppi et al replicated many of the intervention features of a study by Mallonee et al, which was conducted in an economically deprived neighbourhood in Oklahoma City, USA.1 Both studies distributed similar alarms to a similar proportion of the population, and both involved community members and government and voluntary agencies in the distribution process. However, DiGuiseppi et al did not find the favourable results of Mallonee et al. In fact, DiGuiseppi et al found that the intervention and control households had similar proportions of alarms installed and operational. In other words, the participants did not use the safety device as instructed.
One explanation for the differences in the findings of DiGuiseppi et al and Mallonee et al might relate to differences in the study populations. DiGuiseppi et al suggest that their participants may have had lower literacy levels and greater difficulty understanding the installation and maintenance instructions. Furthermore, factors such as mistrust of people in positions of authority and the fact that most recipients were tenants rather than homeowners may have reduced installation rates in the study by DiGuiseppi et al.
The results are important for nurses and others working in the community. The role of community assessment in tailoring interventions to local populations cannot be overemphasised. Furthermore, a community assessment process using multiple culturally appropriate methods, particularly in a multiethnic population such as in the UK study, can support the trust building phase that DiGuiseppi et al identified as a key barrier to the implementation of their intervention.
A recurrent theme in injury prevention is the preference for passive prevention strategies rather than the active ones used by DiGuiseppi et al.2 This study suggests a continued role for public health practitioners in advocating for policy changes such as affordable, safe housing, and passive interventions, like sprinkler systems and appropriate building code regulations.2
* Information provided by author.
Sources of funding: Medical Research Council; Home Office (Fire Research and Development Group and National Community Fire Safety Centre); Department of Health; Camden and Islington Councils; British Medical Association; Camden and Islington Health Authority.
For correspondence: Dr C DiGuiseppi, University of Colorado Health Sciences Center, Denver, CO, USA. (reprints not available)
You have BOTH Al and O2 given; therefore, this is a limiting reagent problem. Convert g Al to moles Al, convert grams O2 to moles O2, determine the limiting reagent, THEN you can determine the amount of Al2O3 formed.
I got 2.74 mol Al and 1.75 mol O2. How do I find which one is in excess? What are the next steps?
If you had written the balanced equation you probably could have figured out how to do it.
4Al + 3O2 ==> 2Al2O3
So 2.74 moles Al will produce how much Al2O3 (given all of the oxygen needed)? That will be 2.74 moles Al x (2 mol Al2O3/4 mol Al) = 2.74 x (2/4) = 1.37 mole Al2O3.
How much will the oxygen produce (given all the Al needed)? That will be
1.75 mole O2 x (2 mol Al2O3/3 mol O2) = 1.17 mole Al2O3
Obviously both can't be right. So the combination of the two will produce as much as the SMALLER one; therefore, oxygen is the limiting reagent, you will have 1.17 mole Al2O3 produced, and the mass of the Al2O3 will be 1.17 x molar mass Al2O3. You could calculate, if you wish, how much of the Al is used, subtract from the amount initially, to find the amount remaining unreacted.
So the grams of Al2O3 is 119.34 :)
If your prof is picky about the number of significant figures, you have more in your answer than allowed. You had 74 and 56 g respectively, both have two s.f.; therefore, you are allowed 2 in the answer.
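For anyone who wants to double-check the arithmetic, here is a short Python sketch of the same limiting-reagent calculation, using the 74 g of Al and 56 g of O2 mentioned above (molar masses are rounded, so the result differs slightly from 119.34 g):

```python
# Limiting-reagent check for 4 Al + 3 O2 -> 2 Al2O3
masses = {"Al": 74.0, "O2": 56.0}                     # grams given
molar = {"Al": 26.98, "O2": 32.00, "Al2O3": 101.96}   # g/mol

mol_al = masses["Al"] / molar["Al"]   # ~2.74 mol Al
mol_o2 = masses["O2"] / molar["O2"]   # ~1.75 mol O2

# Moles of Al2O3 each reactant could form on its own:
from_al = mol_al * 2 / 4              # ~1.37 mol
from_o2 = mol_o2 * 2 / 3              # ~1.17 mol (smaller, so O2 limits)

grams = min(from_al, from_o2) * molar["Al2O3"]
print(f"Al2O3 formed: {grams:.0f} g")  # ~119 g, or 1.2e2 g to two sig figs
```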
Vulcanologists initially thought the floating pumice had come from a seafloor volcano at the Monowai Seamount to the south of Tonga after “rafts” of it were first discovered by the New Zealand Defense Force on August 9.
But using satellite images and past seismic records, it was later possible to pinpoint the time and location of the eruption to July 17 at the Havre Seamount, well to the southwest of Monowai.
By July 21, the eruption appeared to have waned, but dense rafts of pumice were left scattered across the ocean’s surface after being spewed from a depth of about 3,600 feet.
An image captured by NASA’s Aqua satellite on July 28 showed that thick rafts of the pumice had drifted to the northwest of the underwater eruption site.
The pumice was later split into several twisted segments by prevailing wind and ocean currents, which also carried it to the north and east of the Havre Seamount.
Little had been known about the Havre Seamount before the eruption, and vulcanologists were not even aware that it was an active submarine volcano.
While the coverage of the pumice was expansive in some of the rafts, the material was so light in density that it did not impede maritime traffic and posed no threat to vessels.
With spring's flowers come insects, the focus of Steve Stauffer's life year round. Part detective, part matchmaker, Stauffer controls the insects that pester plants in the Chicago Park District's conservatories. He does that using other insects, mainly, not pesticide sprays. First, he identifies the pest that is leaving telltale trails of goo or spots or cottony streaks on his plants. Identifying a species sometimes takes a glance under the microscope, but, with tougher culprits, it can mean sending a sample insect as far away as a Florida laboratory or the British Museum. Then Stauffer figures out what second insect would, in nature, kill the first, and goes out to catch some. Tiny wasps often make the best of these "natural enemies," though beetles and flies have their moments. Once Stauffer brings the insects together, it can take weeks to be certain whether the plants have been saved. "There are always a few sleepless nights," says Stauffer, 51, who prizes his worn, institutional-gray microscope, normally found standing upright in his office. Given to him by a mentor, the microscope originally belonged to the late Paul DeBach, a famed, pioneering researcher in this field of biological control.
Could LBJ Do as Much Today?
by Clarence Page
Fifty years later, it's hard to imagine the enactment of the Civil Rights Act by today's polarized Congress.
We Americans fight less among ourselves when we clearly face a common enemy or crisis. In the immediate aftermath of the Kennedy assassination, for example, the country rallied behind legislation that had stalled for years.
Today the bitter partisanship in Washington makes that kind of common purpose hard to imagine.
This is an appropriate time for such a look into the rear-view mirror of time, if only to appreciate with new eyes the historic significance of the battle to pass the Civil Rights Act. It amounted to nothing less than the last lingering battle of the Civil War -- if you believe (as some people apparently don't) that the Civil War is really over.
In the space of what now seems like a few short years, which also were my early teen years, a new war seemed to be breaking out. The ferocity was enhanced by television news, an industry still coming of age. Almost daily in 1963, we were exposed to images such as Birmingham police dogs and firehoses sicced on peaceful civil rights protesters, a bombed Birmingham church where four little girls were killed on a Sunday morning and the assassination of civil rights leader Medgar Evers.
After Kennedy's assassination that November, the bill was left to LBJ, which sounded on my side of town like bad news. As majority leader of the Senate in the 1950s, Johnson had helped water down earlier civil rights bills to appease his fellow Southern Democrats.
But as president, Johnson rose to a higher moral and historic calling. He rallied enough support from both parties to enact, by many accounts, a stronger civil rights bill than the less-experienced JFK could have achieved.
It is that feat, accomplished with a majority of Republican votes to override the fierce resistance of Southern segregationist Democrats, that leads many to ask today whether the Civil Rights Act would survive today's polarized Congress.
My first reflex is to answer no. Even such formerly routine matters as an extension of unemployment benefits fall dead in the water these days. But this also is not 1964.
As many others point out, LBJ had deep and long-running relationships on Capitol Hill.
But even if he had it, Obama's close associates say, he would not have a Congress as willing to bargain as the one Johnson worked with.
LBJ enacted a sweeping, landmark civil rights bill on behalf of equal rights for women and minorities. But, in the week of the law's 50th anniversary, Senate Republicans blocked a much more modest Democratic-backed Paycheck Fairness Act aimed at narrowing the pay gap between men and women.
Would the Civil Rights Act run up against a similar brick wall today? The politics and issues are different. The strong conservatism that dominated yesterday's Southern Democrats is found more conspicuously in today's Republican Party.
That shift began significantly in the backlash years immediately following LBJ's civil rights legislation. As minorities and others who favored activist government flocked to the party of LBJ, the Republican base became more white and also shifted to the South and to western mountain states. Today in its support of states' rights and opposition to "big government," the Party of Lincoln sounds more like the Southern Democrats of LBJ's day.
And compared to today's political landscape, the civil rights divide in LBJ's day was a more clear-cut choice between injustice and equal opportunity. In today's civil rights debates, both sides call for the same principles -- "freedom," "equal rights" and "opportunity" -- but have very different definitions of what those words mean.
Europe and the Islamic World
Europe and the Islamic World: A History
John Tolan, Gilles Veinstein and Henry Laurens
Princeton University Press 455pp £27.95
The spate of protests across the Muslim world in 2012 against the film The Innocence of Muslims has highlighted once again what many see as a ‘clash of cultures’ between Islam and the West. The western value of free speech seems to be irreconcilable with the Muslim belief in the unassailable holiness and authority of the Prophet Muhammad. The theory of Samuel P. Huntington (1927-2008) that in the post-Cold War era Europe and America will increasingly come into conflict with non-western civilisations which reject its ideals of democracy, human rights and the rule of law appears once more to be vindicated. Yet perhaps there is an alternative way of thinking amid the chaos and the shouting. In his introduction to this book, John L. Esposito of Georgetown University claims that it provides a ‘major antidote to this dangerous, myopic world view’.
That is a bold statement indeed but, as the authors make clear in their introduction, the book is not designed to be a polemical riposte to Huntington’s thesis. Rather it aims to highlight neglected aspects of the relations between Europe and Islam across the centuries which suggest that, however bitter the conflict might have been at times, culturally the two spheres have always had a great deal in common. After all, Britain and France were at war for most of the 18th century but no one would claim that they were divided by a clash of civilisations. The authors divide their work into three, along chronological lines. John Tolan tackles the period from the initial expansion of Islam into Syria, Egypt and Persia in the seventh century up to the 15th. Gilles Veinstein looks at the period of Ottoman ascendancy in the 15th to the 18th centuries and Henry Laurens takes the story up to the present day.
Veinstein’s analysis of the early modern period typifies the methodology of the book as a whole. He traces the westward expansion of the Ottoman Empire up to the reign of Suleyman the Magnificent (1520-66), by which time it included about a quarter of the European land mass. The capital of the empire was in Europe, first at Edirne (Adrianople) and after 1453 at Istanbul (Constantinople). It would be very easy to envisage this dramatic expansion of a Muslim power at the expense of Christian ones as resulting in a division of the continent along the lines of the Cold War Iron Curtain, with the fault line running not from Stettin to Trieste but from the Crimea to the Adriatic. There was sharp conflict across the border, as in 1683 when the Ottomans laid siege to Vienna. Christian propagandists, Protestant and Catholic, vilified the Turks as not only infidels but depraved and amoral barbarians.
Behind the undeniable conflict, Veinstein shows, there was a remarkable degree of assimilation. The border of the Ottoman Empire was in fact not that between Islam and Christianity. Muslims were in a minority in the European provinces of the Ottoman Empire and the sultans never attempted to enforce the conversion of their Orthodox Christian and Jewish subjects. Rather, Muslims and Christians tended to share saints and pilgrimage sites. Another area of assimilation was diplomacy. In the medieval period ‘pacts with the infidel’ had been regarded as sinful and treasonous but, in the world of Renaissance Europe, the Ottoman Empire became a central player in the great game. After all, the two ‘civilisations’ were hardly internally unified and monolithic. Faced with the threat of Catholic Spain, Elizabeth I of England entered into a treaty of alliance with the Ottoman sultan in 1583. In 1613 the Druze emir of Lebanon, Man’oğlu Fakhr al-Din, who was in rebellion against his Ottoman overlord, visited Tuscany to cement an alliance with Grand Duke Cosimo II de’ Medici. In practical diplomacy there was no border, no chasm between the two worlds.
Sadly it is unlikely that 450 pages of packed print by a troika of erudite professors will ever have much impact on the kind of people who deliberately cause offence to deeply cherished religious beliefs or who delight in publicly burning flags. Nevertheless this book is an important contribution to an ever more urgent debate. By providing a wealth of inconvenient detail that fails to fit in to the simplistic stereotypes, it challenges the very notion that humanity can be divided into separate ‘civilisations’, however bitter at times the conflict between them.
Jonathan Harris is Professor of the History of Byzantium at Royal Holloway, University of London and author of The End of Byzantium (Yale University Press, 2010).
Think about your dinner plate for a moment. Is it covered with meat and potatoes and little else? Do your vegetables make up the smallest spot on your plate? Do you often eat more than you should?
Your weight depends partially on your plate. Many factors may affect your weight. These include your physical activity level, genetics, your emotions and attitudes and your income, among others.
But your eating habits are also an important factor. Most of us haven't gotten this message. More than two-thirds of Americans 20 years and older are overweight or obese. This puts most adults at risk for weight-related diseases. They can include:
- Heart disease
- High blood pressure
- Certain cancers
Understanding what makes up a healthy diet may help you maintain a healthy weight. The United States Department of Agriculture (USDA) has provided Americans with healthy diet guidelines since 1916. Over the past 20 years, the USDA's food pyramid was a familiar guide. Today, much has changed in American tastes, mobility, food availability and patterns of family life. What will help us remember what to eat and how much to eat? One answer: a program from the USDA called MyPlate, which in 2011 replaced the Food Guide Pyramid.
A simple approach
MyPlate helps you make healthier food choices. It uses a place setting, a familiar mealtime visual. It prompts you to think about building a healthy plate.
The place setting includes a plate divided into four parts. These represent different food groups. They are fruits, vegetables, grains and proteins. It also includes a circle to the side, representing dairy.
Here are some of the key messages:
- Make half of your plate fruits and veggies.
- Make at least half your grains whole grains.
- Switch to fat-free or low-fat milk.
- Compare the sodium in your foods, choosing foods with less sodium.
Spotlight on making better choices
MyPlate emphasizes positive lifestyle choices and recommends you:
- Decrease portion sizes. Use a smaller plate, glass or bowl. One cup of food on a small plate looks larger than on a big plate.
- Focus on foods you need. Eat your fruits, veggies, whole grains, lean protein and fat-free or low-fat dairy.
- When eating out, make better choices. Ask for dressings, syrups and sauces on the side. Skip foods listed on the menu as creamy, buttered, battered, breaded, fried or sautéed.
- Cook at home more often. If you don't already, start out cooking once a week at home. Then build up to cooking at home more often.
- Eat fewer empty calorie foods. This includes solid fats and foods and beverages with added sugar. These add calories but little or no nutrients.
- Increase physical activity. For healthy adults, this means at least two days of strength training and at least 150 minutes of moderate-intensity aerobic activity each week. You may break your aerobic activity up into chunks of at least 10 minutes.
- Decrease screen time. Choose other options instead of watching TV. Take a walk, play with your dog, or garden.
The USDA runs a free website with more information about the recommendations at ChooseMyPlate.gov. This site includes a SuperTracker plan that can be tailored to you. It also has sample menus and snack ideas.
Created on 08/20/2008
Updated on 09/03/2013
- United States Department of Agriculture. ChooseMyPlate. Brief history of USDA guidelines.
- United States Department of Agriculture. ChooseMyPlate.
- United States Department of Agriculture. ChooseMyPlate. Using MyPlate with MyPyramid.
A New Reading of the Belvedere Altar
Bridget A. Buxton
American Journal of Archaeology
Vol. 118, No. 1 (January 2014), pp. 91-111
Published by: Archaeological Institute of America
Stable URL: http://www.jstor.org/stable/10.3764/aja.118.1.0091
Page Count: 21
Controversy over the identification of the figures on the Belvedere altar has long hindered consensus about the meaning of this important Augustan monument. Attention has focused on the chariot-riding figure previously identified as Julius Caesar, Augustus, Romulus-Quirinus, Aeneas, or even Agrippa. I argue that references in Ovid's Fasti and the Consolatio ad Liviam, as well as the dedicatory inscription of the altar, suggest this figure depicts Nero Claudius Drusus at his funeral in 9 B.C.E., which was observed by Livia and Gaius and Lucius Caesar. The Belvedere altar advertises Augustus' dynastic ambitions during the early years of his pontificate, revealing the importance of the Claudii Nerones in Augustus' plans to secure his family's place as the preeminent military guardians of Rome. By implicitly associating Drusus' triumphant career with the promise of empire evoked by the appearance of Vesta and the ancient Trojan "pledges of empire" within the domus Augusta, the Belvedere altar represents an early articulation of the divine claims of Augustus' household to monopolize Rome's military responsibilities. The assimilation of divine and human households on the Belvedere altar also parallels the increasing identification of young Augustan princes as living pledges of empire and as new Dioscuri.
Copyright 2014 Archaeological Institute of America
By Dr. Becker
Did you know that approximately 40,000 dogs and cats die in house fires each year? And that number could be on the very low end, as other estimates put the total at up to 150,000 pets lost annually, according to the New Holland Volunteer Fire Department in South Carolina. What a heartbreaking statistic.
Many pet owners leave their animals alone while they are at work or running errands. If the house catches fire, the pet is left to fend for itself. And even when first responders are aware there are animals inside a burning structure, it’s often difficult to locate them because pets, unlike people, look for a place to hide from fire rather than escaping it. Far too many pets die each year from smoke asphyxiation.
Oxygen Masks Designed Especially for Pets
But here’s a bit of really good news. Pam Gleason, publisher of two regional South Carolina newspapers, the Aiken Horse and the Dog and Hound, has generously donated pet oxygen mask kits to New Holland volunteer firefighters.
Part of the inspiration behind the donation is a dog named Dora, who was saved when her home in Greenville, SC caught fire. Dora is alive thanks to an oxygen mask, and happily, she has found a new forever home.
The donated masks are specially designed to deliver oxygen and air to pets more efficiently. They can be used to help conscious pets suffering from smoke inhalation, as well as pets that must be resuscitated after losing consciousness.
The new technology works not only for dogs and cats, but also goats and several other animals. According to firefighter Gary Knoll:
“We take the oxygen hose, attach it, and take the face mask and place it over the pet's mouth and administer oxygen and air to the pet. If the animal isn't breathing, we can take the rebreather from our first responders bag and squeeze the bag to inflate the animal's lungs.”
The Specially Designed Masks Allow Firefighters to Stabilize Pets for Transport to a Veterinarian
The new kits contain three different mask sizes to accommodate different types of animals. They give firefighters the ability to stabilize a pet long enough to get it to a veterinarian for further treatment.
According to Knoll, since many households today contain a variety of synthetic materials that produce dangerous gases, having pet oxygen masks available is especially important. He estimates that the level of toxic gas produced inside a burning home is 10 times higher today than 20 years ago. And since 75 percent of all deaths from house fires are the result of smoke inhalation, the need to provide air and oxygen to family pets is critically important.
Estimates are that over 3,000 of the 5,000 homes in New Holland contain pets, so volunteer firefighters are grateful to have the technology available to save the lives of residents’ cherished dogs, cats, and other animal companions.
Local Residents Can Help Their Fire Departments Purchase More Pet Masks
According to Pam Gleason, fire departments in Aiken City, North Augusta and Couchton also have the kits on their trucks. The kits cost $75 each, and residents of the region can assist their fire departments by helping them purchase additional kits at Wag'N O2 Fur Life. Alternatively, residents can join a fellowship program online and receive donations from others to help provide the kits.
The Emma Zen Foundation, a 501 (c) 3 organization, also raises funds to provide fire departments across the country with pet oxygen mask kits.
In this short video, first responders in Texas demonstrate use of a pet oxygen mask:
Today being the 115th anniversary of the discovery of X-Rays, I thought I'd revive a classic post:
Since X-ray vision is one of Superman's most used powers in the Silver Age, and is therefore bound to come up time and time again in this blog, I just want to set a few things straight.
X-ray vision was created as a convenient way for Superman to locate people without tearing the roofs off of houses or crashing through walls a la Kool-Aid man. However, it soon grew in scope, becoming the source of Superman's later, separately named Heat vision.
In countless issues, Superman uses his X-ray vision to fog film, irradiate things, weld things, and even recharge a dying star. I would like to say, right here and now, that X-ray vision (if it existed) DOESN'T WORK THAT WAY!!!
Look, I'm an artist not a physicist, but even I understand how X-rays work.
Superman was given X-ray vision, the power to see through solid objects (except lead). X-ray machines work by projecting X-rays, a form of low-level radiation, through a solid object and onto a piece of photographic film. When the film is developed, the X-rays have created an image which shows denser material, like bone or metal (which is harder for the rays to penetrate), that may be encased in less dense material, such as flesh or wood. Since Superman is an extraterrestrial with a different physiology, it is conceivable that his vision would extend beyond the range of human sight and allow him to see other wavelengths of light or radiation (some animals can see heat, for instance). However, this would mean that Superman's eyes RECEIVE X-rays, not TRANSMIT them.
If this is the case, Superman cannot just go around fogging film and boiling water, and Heat vision is moot, or completely separate.
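To put rough numbers behind the idea that denser material blocks more of the beam, here is a minimal sketch of the Beer-Lambert attenuation law that real radiography relies on. The attenuation coefficients below are illustrative values I'm assuming for the demo, not measured material data.

```python
import math

# Beer-Lambert law: the fraction of an X-ray beam that survives a slab of
# material falls off exponentially with thickness:
#     I / I0 = exp(-mu * x)
# where mu is the material's linear attenuation coefficient (per cm) and
# x is the thickness in cm. Denser material -> larger mu -> darker shadow.
def transmitted_fraction(mu_per_cm, thickness_cm):
    return math.exp(-mu_per_cm * thickness_cm)

# Illustrative coefficients only (rough orders of magnitude, not lab data).
materials = {
    "soft tissue": 0.2,
    "bone": 0.5,
    "lead": 60.0,
}

for name, mu in materials.items():
    frac = transmitted_fraction(mu, 1.0)  # a 1 cm slab of each material
    print(f"{name:12s}: {frac:.2%} of the beam gets through")
```

Run it and lead passes essentially nothing, which lines up neatly with the "except lead" rule.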
Now that I've got that settled, on with the blog.
Listed below are some general questions that you may ask when looking at narrative art.
Narrative works allow students to act as detectives and figure out what's happening in the pictures. Many images tell a specific story, be it a myth, a religious parable, or a well-known fable or tale. Others tell stories that may not be so obvious, leaving the viewer to use his or her own imagination to decipher them.
What do you think is happening in the work of art?
What do you see that makes you say that?
Who do you think is the main character of this story?
Do you recognize any of the characters? If so, how?
What can we say about these characters?
Describe the relationship between the different characters.
Describe the setting.
What time of day is it?
What season is it?
Where does this scene take place?
What do you think happened ten minutes before this scene?
What do you think will happen ten minutes later?
From the November 1999 Idaho Observer:
The New American Nigger
The real outcome of the Civil War and the passage of the 14th Amendment is that they closed the southern plantation and freed the African slaves, opened the federal plantation and made us all slaves. More than four score years later, after the so-called Civil War, the federal government defeated our former union of sovereign states. This began the Reconstruction Era during which our original form of government was fundamentally altered. Just what was reconstructed? And, as a result of reconstruction, what are we now?
by Hari Heath
Patrick Henry said, when he addressed the Virginia Convention on March 23, 1775: ...I consider it as nothing less than a question of freedom or slavery...
If you were to believe the history as taught in government schools, you would believe that the issue of slavery -- whether or not the negro people should be freed from their condition of involuntary servitude -- was the issue over which our nation fought its bloodiest battles.
True, many negro people were forced into a life of slavery, often by cruel means. And this was an issue in the war between the states and the federal power. Slavery, or compelled servitude, was not exclusively forced upon the negro people. Early in our country's history, people of European origin were also enslaved as indentured servants. Often this was a result of criminal sentences, acts of rebellion, or payment for passage. Usually, those of European descent were servants for a period of time, while those of African descent had little hope of emancipation.
If we look a little further than what our government education has fed us, what really happened? Was not the stated purpose of the conflict to replace one form of slavery with another? Did our then small and limited federal government break down the chains of the Constitution which bound it and create for itself new powers? Were the masters and overseers of the southern plantations replaced with new ones for the great federal plantation? Did the federal government, once limited to a few enumerated powers -- a servant of the union of sovereign states and the people themselves -- reverse its status and become the master?
In the course of time, beginning with the reconstruction era, have we not all been reduced to the status that can best be described as federally-owned niggers? Are we not, at best, sharecroppers?
Color? Which Color?
Since nigger is often construed as a derogatory racial slur, some clarification is in order. Nigger comes from the Spanish word negro (nay-grow), describing people who nowadays are incorrectly referred to as black. Similarly, those of European ancestry are often incorrectly referred to as white people. In reality, things are not so black and white.
Black or white?
To prove these errors in our common racial references, take two sheets of paper -- one black and one white. Find a so-called black person and hold the black paper next to their skin. You will discover that the paper is black and the person is some shade of brown. Then, find a so-called white person and hold the white paper next to their skin. They also will be some shade of brown, unless they are albino and albinos tend to be pink.
Now that the myth of black and white people has been summarily debunked, we find that people are varying shades of brown, unless they are pink, and come from many different cultures with roots deep in history and geography.
We are not black or white, but rather many diverse individuals, born from preceding generations of individuals into many cultures around the globe, and human skin is varying shades of brown. Brown, as any artist can tell you, is the resulting color when you mix all the colors together.
So where does nigger come from? Somewhere along the way, the Spanish negro became mispronounced by people who at one time compelled darker brown people into involuntary servitude. Those darker colored people, indentured to a life of slavery, came to be referred to as niggers.
Now that we had this civil war to make us all equal, and with the new federal master compelling our servitude to its unbridled will, we is all niggers now. Dat's right whitey, yo got chains an shackles keepin yo down, an yo is such a fool dey got you thinking it's jewelry!
So what's up? Times have changed and we need a new definition for nigger. It ain't about black and white any more. The fedgov is our new master, compelling our servitude and tribute with taxes and regulations beyond anything the founding fathers would have imagined or tolerated.
The New Nigger
Nigger, under a new definition for our current times, is anyone who files a 1040 form. Is that you? Maybe not. Even by their own admission, IRS agents have reported that as much as one-third of the taxpayers may be non-filers. That figure, if correct, means that we have one freeman for every two niggers in this country, under this new definition.
That's just too many free men and women in this country for the likes of Uncle Sam, so the fedgov Congress is hard at work with a flat tax, or a national sales tax, or some other scheme, to keep more of dem federal niggers down on de federal plantation. And how many people do you know who rattle their shackles of socialism thinking it is jewelry, happy to pay their ever-increasing tax burden, and wanting you to do the same, so they can reap their benefits from the federal plantation?
What are the shackles around the necks of the new American nigger? Welfare, grants, entitlements, congressional pork projects, government contracts, public service occupations; regulation upon regulation making the lives of the good socialist niggers more secure. And convenient. And comfortably complacent.
Keeping a good man down on the plantation, under the force of the whip, was always a hard job. Someone had to do the whipping. A master with a plantation could hire overseers to do the whipping, but it took a master to oversee the overseers. This worked on a good old southern plantation, where a master could see and talk to his overseers, but wasn't such a good idea in the industrial north. It also wouldn't work for the great plantation, that grand scheme for the top-down federal takeover. The great federal master wouldn't be able to manage its overseers and its niggers because it was going to be too big and they would be too far removed from the new master's command center. The whipping post had to go and a new slavery had to be implemented.
So far, it may appear that the fedgov is the new master, but is this really the case? Do we have a largely hidden master who uses the fedgov as an overseer?
The hidden cost of war
War is an all-out effort at any cost. Cost being the key word here. Usually, about midstream in any modern war, the cost of continuing exceeds the stockpiles and assets of the warring factions. They must incur a debt to continue the fight. And those willing bankers are ever so eager to pay the costs so they can create and own a debt. History has shown bankers to be very willing to pay the costs of war, even financing both sides of the conflict. Control through the instrument of financial obligation is the ultimate goal.
The so-called Civil War was no exception. Not only was it the bloodiest war America has ever fought, it was very costly in monetary terms. It began a debt obligation that was never repaid (perhaps by design). And subsequent wars have added to this, until we now find our nation burdened with an incalculable debt. If you include several World Wars, the New Deal, the War on Poverty, Star Wars, UN obligations, and the War on Drugs, the costs of war are beyond our ability to repay. We will never be able to pay even the interest.
And who is the holder of this debt? Our new master? The Federal Reserve Bank? The IMF? Alan Greenspan? The Bilderberg/Rothschild/Rockefeller entourage? Is fedgov just a bankrupt, subservient overseer for the semi-hidden master of all? Are we now niggers, reduced to a condition of servitude, from a long chain of our fedgov overseer's promises to pay -- with our labor and ingenuity converted to taxes?
By assent and consent we have descended to a condition of servitude to monetary powers which pull the strings of government from behind the scenes. The Reconstruction Era, which appears to be an ongoing event, was really a debt restructuring process. Saddled with debt from the Civil War, the U.S. Treasury issued notes or bonds, a promise to pay based on the credit of the U.S. (us). As these bonds matured in the early part of this century, our nation was unable to pay them. The bond holders, being the friendly understanding bankers that they are, presented an alternative to repayment. They would just forgive this little debt if our Congress would authorize (unconstitutionally) a national bank (privately held, of course).
And the Federal Reserve Bank was born. It was authorized to replace our economy of substance with one of promissory notes.
At first the notes were promised to be redeemable in coins of substance, but after twenty years this paper world crashed (the people figured out there was way more paper than substance and made a run on the banks). FDR solved this little problem by prohibiting the substance our economy was based on -- gold coins -- and compelling us all to use the now unbacked promissory notes. And we've been happy little green paper niggers ever since. The things we do for green paper! Working on the fedgov -- excuse me -- the Federal Reserve plantation.
For generations we have grown accustomed to a debt-based economy. We wouldn't know how to live in an economy of substance if we could find one. In the past century, for many, a home was a homestead. A family had to farm, put up hay, get in the winter wood, and put food by for hard times. Life was based on substance and the procurement of real things.
Now we live from promise to promise. The promise that electricity will be there when you turn on the switch. That gas will come out of the gas pumps. That food will be in the stores when you need it. That banks will be open and credit cards will get you what you want, based on a promise to pay. That you can find one of your fellow fools for green paper, who will take a dollar bill from you, in trade for some substance of his.
Slowly, over generations, we have succumbed to an involuntary servitude, fools for the promise of green paper and fantastic plastic; with a tax of over 50 percent of the average American's income, to feed a swarm of socialist enterprises, which pretend they are good for us; all the while assuming a staggering, incomprehensible debt to unseen bankers. And we all know what assume does to you and me. Puts a whole new color on the word nigger, don't it?
Mr. Heath was told that if he could pull this piece off as entitled, he would never have to prove his literary prowess to me ever again. I'll be darned if he didn't pull it off. Now that the N-word has been redefined, what do we see when we look in the mirror? ~DWH
Cholesterol is a type of fat (lipid) in your blood. Your cells need cholesterol, and your body makes all it needs. But you also get cholesterol from the food you eat.

If you have too much cholesterol, it starts to build up in your arteries. (Arteries are the blood vessels that carry blood away from the heart.) This is called hardening of the arteries, or atherosclerosis. It is usually a slow process that gets worse as you get older.
To understand what happens, think about how a clog forms in the pipe under a kitchen sink. Like the buildup of grease in the pipe, the buildup of cholesterol narrows your arteries and makes it harder for blood to flow through them. It reduces the amount of blood that gets to your body tissues, including your heart. This can lead to serious problems, including heart attack and stroke.

Your cholesterol is measured by a blood test.
High cholesterol doesn't make you feel sick. By the time you find out you have it, it may already be narrowing your arteries. So it is very important to start treatment even though you may feel fine.
Many things can cause high cholesterol. You need a blood test to check your cholesterol, and there are several kinds of tests.
If you have high cholesterol, you need treatment to lower your risk of heart attack and stroke. The two main treatments are lifestyle changes and medicine.
Some lifestyle changes are important for everyone with high cholesterol, and your doctor will probably want you to make several of them.

Changing old habits may not be easy, but it is very important to help you live a healthier and longer life. Having a plan can help. Start with small steps. For example, commit to adding one fruit or one vegetable a day for a week. Instead of having dessert, take a short walk.
If these lifestyle changes don't lower your cholesterol enough, or if your risk of heart attack is high, you may also need to take a cholesterol-lowering medicine, such as a statin. Knowing your heart attack risk is important, because it helps you and your doctor decide how to treat your cholesterol.

To find out your risk, use the Interactive Tool: Are You at Risk for a Heart Attack?
Health Tools help you make wise health decisions or take action to improve your health.
High cholesterol can be caused by several things, some of which you can change and some of which you cannot.
High cholesterol does not cause symptoms. It is usually found during a blood test that measures cholesterol levels.

Some people with rare lipid disorders may have symptoms such as bumps in the skin, hands, or feet, which are caused by deposits of extra cholesterol and other types of fat.
High cholesterol can lead to the buildup of plaque in artery walls. This buildup is called atherosclerosis. It can lead to coronary artery disease (CAD), heart attack, stroke or transient ischemic attack (TIA), and peripheral arterial disease. Atherosclerosis can cause these problems because it narrows or blocks the arteries that carry blood to your tissues.
Some things that increase your risk for high cholesterol are things you can change, but some are not. It's important to lower your risk as much as possible.

Things you can change include what you eat, your weight, how active you are, and whether you smoke. Each of these things can raise your LDL, lower your HDL, or both.

Things you cannot change include your age and a family history of high cholesterol.
For more information, see Cause.
High cholesterol usually has no symptoms. Sometimes the first sign that you have high cholesterol or other risk factors for heart disease is a heart attack, a stroke, or a transient ischemic attack (TIA). If you have any symptoms of these, call 911 or other emergency services.

Heart attack symptoms include chest pain or pressure, shortness of breath, nausea, and back or jaw pain.
After you call 911, the operator may tell you to chew 1 adult-strength or 2 to 4 low-dose aspirin. Wait for an ambulance. Do not try to drive yourself.
Nitroglycerin. If you typically use nitroglycerin to relieve angina and if one dose of nitroglycerin has not relieved your symptoms within 5 minutes, call 911. Do not wait to call for help.
Women's symptoms. For men and women, the most common symptom is chest pain or pressure. But women are somewhat more likely than men to have other symptoms like shortness of breath, nausea, and back or jaw pain.
Stroke and TIA symptoms include sudden numbness or weakness in the face, arm, or leg (especially on one side of the body), sudden confusion or trouble speaking, sudden vision changes, sudden trouble walking or loss of balance, and a sudden, severe headache.
A number of doctors, nurses, and specialists can order a cholesterol test and treat high cholesterol. A registered dietitian can help you with a diet to lower cholesterol.

People who have rare lipid disorders, which can be hard to treat, may need to see a specialist, such as a lipidologist or an endocrinologist.
A blood test tells you if you have high cholesterol. Your numbers help your doctor know your risk of getting heart disease or having a heart attack or stroke.

Your total cholesterol level is important. But your levels of LDL, HDL, and triglycerides help your doctor decide if you need treatment for high cholesterol. Your doctor will also consider your overall health and your risk of heart attack. For more information, see the topic High Cholesterol Treatment Guidelines Based on Heart Attack Risk.
To learn about the results and numbers for cholesterol tests, see the topic Cholesterol and Triglyceride Tests.
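As a worked example of how the test numbers fit together, here is a sketch of the Friedewald estimate, a common way labs calculate LDL from the other three measurements. This is an illustration added here, not part of the original article, and the example numbers are made up; the formula applies only to values in mg/dL and is unreliable when triglycerides are 400 or higher.

```python
def estimated_ldl(total_cholesterol, hdl, triglycerides):
    """Friedewald estimate of LDL, with all values in mg/dL.

    LDL = total cholesterol - HDL - (triglycerides / 5).
    """
    if triglycerides >= 400:
        raise ValueError("estimate is unreliable at triglycerides >= 400 mg/dL")
    return total_cholesterol - hdl - triglycerides / 5

# Made-up example: total 200, HDL 60, triglycerides 150.
print(estimated_ldl(200, 60, 150))  # prints 110.0
```

Only your doctor can tell you what your own numbers mean.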
Your total cholesterol number shows if your cholesterol is too high.

If you have high cholesterol, your doctor will want to know your LDL and HDL levels before deciding whether you need treatment and what sort of treatment you need.

You want your LDL level to be low. But how low your LDL should be depends on your risk of heart attack. Your doctor will help decide what your LDL goal is. The higher your risk of heart attack, the lower your LDL goal.

You want your HDL level to be high. An HDL level of 60 or higher is linked to a lower risk of heart disease. A high HDL number also can help offset a high LDL number.
When you visit your doctor to talk about your cholesterol test, you will talk about other things that increase your risk for heart problems, such as smoking, high blood pressure, diabetes, and a family history of heart disease.

If your risk is high, or if you already have heart problems, your doctor will be more likely to prescribe medicine along with lifestyle changes.
To find out your risk for a heart attack, see the Interactive Tool: Are You at Risk for a Heart Attack?
Most doctors recommend that everyone older than 20 be checked for high cholesterol. How often you need to be checked depends on whether you have other health problems and your overall chance of heart disease.
Your child's doctor may suggest a cholesterol test based on your child's age, family history, or a physical exam. A cholesterol test can help a doctor find out early if your child has a cholesterol level that could affect his or her health.
The goal in treating high cholesterol is to reduce your chances of having a heart attack or a stroke. The two types of treatment for high cholesterol are lifestyle changes and medicine.

Your doctor may suggest that you make one or more of the following changes: eat heart-healthy foods, reach and stay at a healthy weight, be more active, and quit smoking.
For more information, see Making Lifestyle Changes.
Many people try lifestyle changes first. But if lifestyle changes aren't enough to reach your cholesterol goal, you will need to take medicine too. Even if you take medicine for high cholesterol, keeping healthy lifestyle habits is still important.
Some people need to start taking medicine right away because their risk of heart attack is higher than average. Your doctor will base your need for medicine on your risk level.
Once you know your risk for heart attack, you can learn more about treatment for your risk level.
You may also need treatment for other health problems, such as high blood pressure.
A heart-healthy lifestyle can help you prevent high cholesterol. This includes eating heart-healthy foods, staying at a healthy weight, being active, and not smoking.

Heart-healthy diets include the Mediterranean diet and the American Heart Association diet recommendations. This chart compares several heart-healthy diets.
Some people may not be able to prevent high cholesterol with lifestyle changes. Family history or certain conditions that cause the body to make too much cholesterol can raise levels even with lifestyle changes. In these cases, medicine can help.
Remember that high cholesterol is just one of the things that increase your risk for heart attack and stroke. Controlling other health problems, such as high blood pressure and diabetes, can also help reduce your overall risk.
Lifestyle changes are important to help control high cholesterol, especially if you have other risk factors for heart disease and stroke. Even if your doctor has prescribed medicine for you, you may still need to make changes at home to lower your cholesterol and reduce your risk. Some people can even take less medicine after making these changes.
One Man's Story:
"The walking was the easy part for me. I get out every evening for a walk. The food part took some thought. Each week, I added a food that was good for me and took something away that was bad for me."—Joe
Read more about how Joe is improving his cholesterol by making one change at a time.
Here are some lifestyle changes to help lower your cholesterol:
Making healthy eating habits a part of your daily life is one of the best things you can do to lower your cholesterol. Your doctor may recommend that you follow the Therapeutic Lifestyle Changes (TLC) diet. The diet's main focus is to reduce the amount of saturated fat you eat, because saturated fat raises your cholesterol.
If you have questions about which diet to follow, talk to your doctor.
Losing just 5 lb to 10 lb (2.3 kg to 4.5 kg) can lower your cholesterol. Losing weight can also help lower your blood pressure.
Regular physical activity raises "good" HDL cholesterol. Getting active has many other benefits too. It can help you lose weight. And it can lower your blood pressure.
Quitting can help raise your HDL and improve your heart health. "Good" HDL levels often go up soon after a person quits smoking.
One Woman's Story:
"Terri's heart attack scared me to death. I decided that this time, I'm doing the whole package. I'm quitting smoking for good."—Linda
Read more about Linda and how quitting smoking improved her cholesterol.
If high cholesterol runs in your family, these lifestyle changes may not be enough. You may need to take medicine too. But no matter what treatment you use, you can lower your high cholesterol.
"I'm just not that type of person who can change everything at once."—Joe
Read more about Joe and how using the TLC plan helped him take charge of his cholesterol.
You can learn simple steps to help you make lifestyle changes, like setting goals. Work on one small goal at a time. Expect slip-ups. Get support from others. Reward yourself for each success. To find out more about making healthy lifestyle changes, see Change a Habit by Setting Goals.
When changing a lifestyle habit, barriers can sometimes get in your way. Figuring out what those barriers are and how you can get around them can help you reach your healthy eating goals.
"I've learned to not beat myself up [when I slip up]. Instead, I refocus on my plan and get right back to eating healthy food. What keeps me going is the results—I've lost weight, my cholesterol's getting better, and I feel younger every day."—Joe
Read more about how Joe is controlling his cholesterol.
Statins are the medicines used most often to treat high cholesterol, and they often work the best. They can reduce the risk of heart attack, stroke, and early death in people who are at high risk for a heart attack or stroke. Other medicines also lower cholesterol, and some may be used to lower triglycerides or raise HDL.

Doctors may also prescribe aspirin therapy if you have had a heart attack or a stroke, or if you have a high risk for heart attack or stroke.
Do you need to take medicine? That depends. The decision to use medicine to treat high cholesterol is usually based on your cholesterol goal, LDL level, and your risk for heart attack and stroke.
Medicine is always used along with a diet and exercise plan, not instead of it.
You and your doctor will decide if you will take medicine for high cholesterol.
"I don't mind taking a pill a day. As long as it's doing me some good. And I no longer have any doubts about that."—Tony
Read more about Tony and how medicine helps him keep his cholesterol low.
Several kinds of medicines, including statins, bile acid sequestrants, cholesterol absorption inhibitors, fibrates, and niacin, can be used to lower LDL and triglyceride levels in the blood and to raise HDL.
Some people find it hard to take their medicines properly. If you do take medicine, it is important to use it the right way.

Some people don't see why they should take medicines every day when they don't feel sick. High cholesterol doesn't make you feel sick. But it's important to treat it, because it damages your blood vessels and eventually your heart, even though you don't have symptoms.

Some side effects are more likely and may be worse when you use higher doses of statins. If you're having side effects, tell your doctor. You may be able to take a different medicine or a different dose.
Be sure to tell your doctor everything you take for high cholesterol, even herbs or other supplements or treatments. Sometimes they can interact with other medicines and cause problems.
If you have trouble taking your medicine for any reason, talk to your doctor.
Some plant products can help lower high cholesterol. But don't use them to replace your doctor's treatment. Whether or not you use such products, be sure to continue your diet, exercise, and prescription medicines.
As with any new form of treatment, make sure to talk with your doctor first. This is especially important if you take statins. Combining statins and some supplements can cause dangerous side effects.
Psyllium is an ingredient in some dietary supplements—Metamucil, for example. It's a fiber from fleawort and plantago seeds.
Doctors aren't sure how it helps cholesterol levels. It may make the small intestine absorb less cholesterol, so less of it enters your blood.
Psyllium is approved by the U.S. Food and Drug Administration (FDA). The main side effect is increased bowel movements. Products containing psyllium aren't recommended to replace foods as a source of fiber.
Sterol and stanol esters are used in cholesterol-lowering margarine spreads.
Sterol esters might limit how much cholesterol the small intestine can absorb. Cholesterol-lowering margarines can help lower cholesterol levels, particularly in people who have high cholesterol levels or who consume too much fat in their diets. These margarines are used along with a healthy diet to lower cholesterol.
Red yeast rice contains a natural form of lovastatin, a statin medicine. This supplement may keep your body from producing too much cholesterol. But this supplement can cause dangerous side effects.
Talk to your doctor before you try red yeast rice. Serious side effects include rhabdomyolysis and hepatitis. Red yeast rice is not regulated by the U.S. Food and Drug Administration (FDA), so you cannot be sure of the amount of red yeast in a supplement. This means you cannot be sure of its dose and safety.
If you take red yeast rice, call your doctor right away if you have a bad reaction to it such as severe muscle pain or symptoms of hepatitis.
Do not take red yeast supplements if you are taking statins. Combining them can cause dangerous side effects.
Visit the American Heart Association (AHA) website for information on physical activity, diet, and various heart-related conditions. You can search for information on heart disease and stroke, share information with friends and family, and use tools to help you make heart-healthy goals and plans. Contact the AHA to find your nearest local or state AHA group. The AHA provides brochures and information about support groups and community programs, including Mended Hearts, a nationwide organization whose members visit people with heart problems and provide information and support.
The U.S. Food and Drug Administration (FDA) website has health information for people of all ages. Topics include the following: medicines, food and nutrition, medical devices, cosmetics, and animal health. Spanish materials are also available.
HeartHub for Patients is a website from the American Heart Association. It provides patient-focused information, tools, and resources about heart diseases and stroke. The site helps you understand and manage your health. It includes online tools that explain your risks and treatment options. The site includes articles, the latest news in health and research, videos, interactive tools, forums and community groups, and e-newsletters.

The website includes health centers that cover heart rhythm problems, cardiac rehabilitation, caregivers, cholesterol, diabetes, heart attack, heart failure, high blood pressure, peripheral artery disease, and stroke.

HeartHub for Patients also links to Heart360.org, another American Heart Association website. Heart360 is a tool that helps you send and receive medical information with your doctor. It also helps you monitor your health at home. It gives you access to tools to manage and monitor high blood pressure, diabetes, high cholesterol, physical activity, and nutrition.
This website is sponsored by the Nemours Foundation. It has a wide range of information about children's health—from allergies and diseases to normal growth and development (birth to adolescence). This website offers separate areas for kids, teens, and parents, each providing age-appropriate information that the child or parent can understand. You can sign up to get weekly emails about your area of interest.
The National Cholesterol Education Program (NCEP) provides education and tips for patients about how to lower high cholesterol. The NCEP provides clinical practice guidelines for health professionals to treat high cholesterol. The goal of the NCEP is to help people lower high cholesterol because this can lower their risk of coronary artery disease. The NCEP is part of the National Heart, Lung, and Blood Institute (NHLBI) and the National Institutes of Health (NIH).
The U.S. National Heart, Lung, and Blood Institute (NHLBI) information center offers information and publications about preventing heart, lung, and blood diseases.
June 29, 2012
Kathleen Romito, MD - Family Medicine & Robert A. Kloner, MD, PhD - Cardiology
To learn more visit Healthwise.org
© 1995-2012 Healthwise, Incorporated. Healthwise, Healthwise for every health decision, and the Healthwise logo are trademarks of Healthwise, Incorporated.
Here is a revision page on adding two teen numbers mentally; it is probably best suited to year 3 children but could also be very useful for older children who are not confident with adding 2-digit numbers.
As I have said before, it is interesting that when we add ‘in our heads’ we often do it in a very different way than if we were using pencil and paper methods. For example 15 + 16 can be done several ways, none of which is significantly better or worse than another.
method 1: add the two tens to make 20. Add 5 to make 25 and then add (or count on) 6 to make 31.
method 2: add 10 to 16 to make 26 and then add 5 to make 31.
method 3: add 10 to 15 to make 25 and then add 6 to make 31.
method 4: recognise that 15 + 16 is nearly double 15 which is 30 and then make an adjustment of 1 to make 31.
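As a quick check, here is a small Python sketch (my addition, not part of the original worksheet) that spells out each strategy; methods 1 to 3 assume both numbers are teens, i.e. between 10 and 19.

```python
# The four mental strategies for 15 + 16, written out as code.

def method_1(a, b):
    # Add the two tens to make 20, then each units digit: 20 + 5 + 6 = 31.
    return (10 + 10) + (a % 10) + (b % 10)

def method_2(a, b):
    # Add 10 to the second number, then the first number's units: 26 + 5 = 31.
    return (b + 10) + (a % 10)

def method_3(a, b):
    # Add 10 to the first number, then the second number's units: 25 + 6 = 31.
    return (a + 10) + (b % 10)

def method_4(a, b):
    # Near-doubles: double the smaller number, then adjust by the difference.
    return 2 * min(a, b) + abs(a - b)

for method in (method_1, method_2, method_3, method_4):
    assert method(15, 16) == 31  # every route lands on the same answer
```

Whichever route a child takes, the point is that all of them land on the same answer.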
- © 2009 Maths Blog
A NASA space telescope that mapped the entire sky has revealed that fewer potentially threatening asteroids are in orbits near Earth, space agency officials announced today (Sept. 29).
The discovery lowers the number of medium-size asteroids near Earth to 19,500, a drop of about 44 percent from the 35,000 space rocks initially estimated, and suggests that the threat to Earth from dangerous asteroids may be "somewhat less than previously thought," NASA officials said in a statement. There are still thousands more of these asteroids, which can be up to 3,300 feet wide, that remain to be found.
"Fewer does not mean none and there are still tens of thousands out there to find," study leader Amy Mainzer, principal investigator for NASA's NEOWISE project at the agency's Jet Propulsion Laboratory (JPL) in Pasadena, Calif. [Photos: Asteroids in Deep Space]
Scientists used NASA's Wide-field Infrared Survey Explorer (WISE), an infrared space telescope, to map the asteroid population near Earth and elsewhere in the solar system. By the end of the telescope's extended mission, called NEOWISE, last year, astronomers had found 90 percent of the largest asteroids near our planet, NASA scientists said.
The WISE space telescope mapped the entire sky twice between January 2010 and February 2011 during its mission, which was aimed at mapping near-Earth asteroids, brown dwarfs, galaxies and other deep space objects. For its near-Earth asteroid search, the space observatory scanned for space rocks that orbited within 120 million miles (195 million kilometers) of the sun. The Earth is about 93 million miles (150 million km) from the sun.
The telescope's NEOWISE mission discovered more than 100,000 previously unknown asteroids in the asteroid belt between the orbits of Mars and Jupiter. It spotted 585 asteroids in orbits that brought them near Earth.
The WISE asteroid survey, which NASA says is the most accurate ever performed, also lowered the estimated number of giant asteroids — space rocks the size of a mountain — from 1,000 to 981, with about 911 of those already known, researchers said.
"The risk of a really large asteroid impacting the Earth before we could find and warn of it has been substantially reduced," said Tim Spahr, the director of the Minor Planet Center at the Harvard Smithsonian Center for Astrophysics in Cambridge, Mass.
NASA launched the $320 million WISE telescope in December 2009. It spent 14 months scanning the heavens in infrared light before NASA shut it down in February 2011.
BURROW MUMP CHURCH
Both Burrow and Mump mean hill, so this is Hill Hill!
In keeping with this mixed meaning, the site is a confusion of different uses and historical periods. In the Dark Ages and beyond this was an area of swamps, marshes and occasional hills, most famously the nearby Isle of Athelney, where King Alfred fled in the 870s.
It would appear that at the same time Burrow Mump was turned into a fortified site. Excavations have also found the remains of a 12th Century structure on top of the Mump which is believed to have been from the first church. Sometime during the middle of the 15th Century Athelney Abbey (now long gone) had a church built here. Even though there is no evidence that the church was fortified, its natural position of strength meant that it was used as a sanctuary twice by Royalists during the Civil Wars (in 1642 and 1645) and was occupied by the Kings Army during the 1685 Monmouth Rebellion.
All of this presumably left the Church in a state of disrepair as the decision was made to build a new church in 1793. This church would be smaller and was to be paid for by public subscription. Unfortunately money ran out and so the little roofless ruin that stands here today came into existence.
The Mump is also a memorial for World Wars I and II and came into the guardianship of the National Trust in 1946.
Photo - Andrew J. Müller
Back to Cathedrals, Churches, Abbeys etc... page
© Text copyright - Raving Loony Productions and Andrew J. Müller,
and Shaun Runham
© Photos and Artwork - Andrew J. Müller
© Web Design and Layout - Andrew J. Müller
Twenty-six years ago this October, the San Andreas Fault jolted awake, rocking the San Francisco Bay area. At magnitude 6.9 and lasting roughly 15 seconds, the Loma Prieta earthquake was significantly milder than the fault’s previous major event, the great 1906 earthquake, yet damage from the 1989 quake, estimated at $6 billion by the U.S. Geological Survey, reached throughout the region.
Transportation infrastructure proved especially vulnerable to the quake’s destructive force. More than 60 miles (97 km) north of the epicenter, the gaping upper deck of the San Francisco–Oakland Bay Bridge, where one person died, and the collapsed Cypress Street Viaduct in Oakland, where 42 people perished, became iconic images of the event. In San Francisco, the earthquake crippled two major freeway structures, giving the city an unexpected opportunity to turn back the clock on one of the most contentious periods in its development.
What followed was an unusual turn in San Francisco’s history of activism: a neighborhood called for greater density, less parking, contemporary design, and more affordable housing—and became a model for forward-looking neighborhood redevelopment.
The Freeway Revolt
Decades before the Loma Prieta earthquake, the construction of two double-decked freeways—the Embarcadero Freeway, connecting the Bay Bridge to the northern waterfront, and the Central Freeway, which cut northward through the city—had San Franciscans up in arms. Alarmed by the impact of these structures and plans for a network of more “trafficways” crisscrossing the city, communities organized, and by 1966 the Freeway Revolt, as the movement came to be known, ended plans for any future freeways.
That victory did not undo history, however, and over time those elevated roadways became San Francisco fixtures. By 1989, when earthquake damage to the freeways spurred renewed calls for their permanent removal, new factions had emerged. In the case of the Embarcadero, opposition from Chinatown and other neighborhoods served by the freeway failed to turn the tide on decades of political momentum favoring its removal. Demolition began in 1991, and the Embarcadero reemerged in 2000 as a new boulevard with a fleet of historic streetcars from San Francisco, as well as across the United States and the world.
In contrast, efforts to remove the Central Freeway became an epic battle pitting the neighborhoods of Hayes Valley and the Western Addition against the city’s western neighborhoods and Marin County–bound drivers, who wanted the structure not only rebuilt, but also widened. From 1997 to 1999, a series of four ballot measures for and against the freeway’s removal ensued.
The city, meanwhile, had come forward with a proposal for a new boulevard designed by former San Francisco planning director Alan Jacobs and Elizabeth MacDonald, both internationally recognized experts in boulevard design. The new Octavia Boulevard would channel and disperse cross-city traffic from a new freeway terminus at Market Street, while slower, separated lanes for local traffic and cyclists would buffer the thoroughfare’s wide sidewalks. The boulevard would end at a large central green, diverting traffic from the neighborhood’s main retail center on Hayes Street.
To settle the matter, members of the city’s Board of Supervisors introduced the fourth and final ballot measure, Proposition I, which reaffirmed support for the boulevard plan and provided the mechanism for funding its construction through the sale of the resulting development parcels for housing. (The city in 2002 would sell seven of the 22 parcels to the San Francisco Redevelopment Agency [SFRA] to jump-start the project.)
Prop. I complemented California Senate Bill 798, which transferred the former state highway land to the city, free, with stipulations that proceeds from land sales would fund the construction of Octavia Boulevard and other streetscape improvements around the point where the shortened Central Freeway structure comes back to grade.
With a clear vision and funding mechanisms in place, the voters in 1999 approved Prop. I. Ten years after the earthquake, the city could finally move forward.
A Neighborhood Vision
The rise of Hayes Valley as one of San Francisco’s most desirable neighborhoods now seems as if it were predestined. With the freeway gone, the area’s central location, fine-grained residential streets, active commercial center, and convenient access to transit make for an ideal urban lifestyle. What is less obvious to the casual observer is the vision that shaped the neighborhood’s success.
The planning effort was conducted under San Francisco’s Better Neighborhoods program, a series of focused plans intended to encourage sensitive development in areas expected to experience significant growth. The Market and Octavia Area Plan covered a swath of land at the nexus of two major city thoroughfares—Market Street and Van Ness Avenue—that includes the approximately 7 acres (2.8 ha) comprising the 22 Central Freeway parcels.
The Planning Department conducted extensive community outreach, including meetings, walking tours, and bus tours to other neighborhoods to understand, from the residents’ perspective, what was working in Hayes Valley, what needed to change, and what new solutions might be considered. The Hayes Valley Neighborhood Association, an organization spawned by the freeway fight, held separate community forums and worked closely with the Market-Octavia planning team.
“The neighborhood’s economic diversity of housing and its walkability were two of its biggest strengths,” recalls Robin Levitt, an architect and Hayes Valley resident who helped spearhead the campaign to remove the freeway. “These were qualities we wanted to preserve and build upon.”
These sentiments informed the plan’s underlying premise: in a neighborhood supported by transit, walking, and cycling, a dramatic reduction in accommodations for cars allows for the construction of more housing and better mixed-use development. This, in turn, strengthens the area’s vitality and character. With the community’s support, the Market-Octavia plan replaced parking minimums with parking maximums, typically 0.5 spaces per housing unit, and mandated unbundling of parking from units to provide a broader range of affordability.
The plan eliminated density maximums, instead letting height and massing limits guide design; also, 40 percent of housing was required to be two-bedroom units, and increased open space was mandated. “The parameters allow for more creative and inventive solutions while offsetting the impulse to simply cram in units,” explains Owen Kennerly of San Francisco–based Kennerly Architecture, part of the design team with Pyatok Architects and Jon Worden Architects for the 182-unit Avalon Hayes Valley mixed-use apartment project. “The gracious street frontage you see on Octavia [at Avalon Hayes Valley]—the deep storefront, portal, and stair—wouldn’t have been possible without the reduced parking requirements.”
Finally, the plan’s guidelines for urban and streetscape design encouraged active, engaging, and safe streets for walking, cycling, and other transit modes.
Although it would take another eight years for the city to adopt the Market-Octavia plan, most of its pioneering provisions remained intact. Together these policies stimulate higher density, a mix of neighborhood-serving and destination retail businesses, and pedestrian-friendly streets that foster community.
Setting the Bar
The neighborhood was not content with just sound planning. While the draft plan underwent environmental review and other approvals, Levitt and architect Stefan Hastrup convinced the city and a coalition of influential organizations—including San Francisco Beautiful, the American Institute of Architects, SPUR (the San Francisco Planning and Urban Research Association), and the San Francisco Museum of Modern Art—to hold an international design competition inviting architects worldwide to propose housing concepts for six lots along Octavia Boulevard.
“The design competition set a high bar and helped the city to realize good projects,” says Kearstin Dischinger, a senior community development specialist at the San Francisco Planning Department. “We now look to Octavia Boulevard as a grand part of the city.” The goal was to elevate and expand the conversation about the kind of innovation in housing that might be realized on the new parcels and to signal to the city and real estate developers that the neighborhood embraced contemporary design—an anomaly in a city known for cherishing its architectural past.
A year later, in 2006, when the Mayor’s Office of Economic and Workforce Development requested proposals for the first market-rate developments along Octavia Boulevard, the request for proposals stressed the importance of design. The winning teams, all based in the Bay Area, included two finalists from the international design competition, Owen Kennerly and Envelope A+D, along with Stanley Saitowitz|Natoma Architects.
“The future of urban mixed-use development is about being smarter in our design,” says Joe McMillan, chairman and chief executive officer of New York– and San Francisco–based DDG, which has four development projects in Hayes Valley, including Saitowitz’s 8 Octavia, and has a fifth project pending. “The Hayes Valley community has created an environment where this kind of innovation can happen. It is especially critical to creating a range of affordability that the city needs.”
Against the backdrop of the dot-com and housing bubbles, the 2002 agreement to sell seven of the 22 parcels to the SFRA accomplished another important goal: it secured those sites for affordable housing. But the Market-Octavia plan goes further, distributing affordable housing sites throughout the plan area and recommending that additional affordable units be spread among different housing types.
This latter goal is accomplished through San Francisco’s inclusionary housing program, which requires that multifamily developments make 12 percent of units available at below-market rates. Altogether, the city anticipates that nearly half the 1,000 units planned for the freeway sites will serve those with special needs, including the formerly homeless, people with developmental disabilities, low-income seniors, and low-income families.
In the unfolding neighborhood, high-end condominiums and affordable housing sit comfortably together and are unified by an overarching mandate for high-quality design, especially at ground level, where the buildings influence the quality of the street. Edmund Ong, who as SFRA’s chief architect oversaw the first projects built in the plan area, observes, “It takes political will to make affordable housing happen. In San Francisco we also have the will to do it right.”
The Long View
City making takes years, and in this respect the Market and Octavia Area Plan has another lesson to offer. The Board of Supervisors finally adopted the plan in March 2008, just months before the country plummeted into the Great Recession. With land values down, the city tabled its plans to sell parcels along Octavia Boulevard. But affordable housing projects with funding in the pipeline were able to move forward, allowing implementation of the plan to progress even amid economic collapse.
The Mayor’s Office of Economic and Workforce Development, hearing neighborhood concerns over the prospect of vacant sites along Octavia Boulevard languishing for years, invited proposals for temporary uses. An urban farm, a community garden for the homeless, and Proxy, a mixed-use commerce and culture hub, brought community energy to the sites and generated modest lease income for the city. However, the program comes with risks: despite clear agreements intended to protect future development plans for the sites, activists briefly challenged the scheduled closure of the urban Hayes Valley Farm. Nonetheless, the temporary uses enjoy broad community and city support.
While Hayes Valley is now a desirable San Francisco location, the neighborhood and the plan area remain in transition, and new issues are arising. As in much of San Francisco, middle-income earners in Hayes Valley are finding fewer housing options available, and growth in the region overall is straining the boulevard's capacity to handle automobile traffic. But the community has prompted the city to examine ways to calm traffic in the area and improve safety for pedestrians and cyclists.
“The plan won’t be done until the traffic issues are addressed,” says Levitt. For the community and the city, the work of cultivating a good neighborhood never ends.
Yosh Asato is a communications consultant specializing in architecture and urban design, and a cofounder of TraceSF.com, an online journal covering urbanism in the San Francisco Bay area.
A MIRACLE material for the 21st century could protect your home against bomb blasts, mop up oil spillages and even help man to fly to Mars.
Aerogel, one of the world’s lightest solids, can withstand a direct blast of 1kg of dynamite and protect against heat from a blowtorch at more than 1,300C.
Scientists are working to discover new applications for the substance, ranging from the next generation of tennis rackets to super-insulated space suits for a manned mission to Mars.
It is expected to rank alongside wonder products from previous generations such as Bakelite in the 1930s, carbon fibre in the 1980s and silicone in the 1990s. Mercouri Kanatzidis, a chemistry professor at Northwestern University in Evanston, Illinois, said: “It is an amazing material. It has the lowest density of any product known to man, yet at the same time it can do so much. I can see aerogel being used for everything from filtering polluted water to insulating against extreme temperatures and even for jewellery.”
Aerogel is nicknamed “frozen smoke” and is made by extracting water from a silica gel, then replacing it with gas such as carbon dioxide. The result is a substance that is capable of insulating against extreme temperatures and of absorbing pollutants such as crude oil.
It was invented by an American chemist for a bet in 1931, but early versions were so brittle and costly that it was largely consigned to laboratories. It was not until a decade ago that Nasa started taking an interest in the substance and putting it to a more practical use.
In 1999 the space agency fitted its Stardust space probe with a mitt packed full of aerogel to catch the dust from a comet’s tail. It returned with a rich collection of samples last year.
In 2002 Aspen Aerogel, a company created by Nasa, produced a stronger and more flexible version of the gel. It is now being used to develop an insulated lining in space suits for the first manned mission to Mars, scheduled for 2018.
Mark Krajewski, a senior scientist at the company, believes that an 18mm layer of aerogel will be sufficient to protect astronauts from temperatures as low as -130C. “It is the greatest insulator we’ve ever seen,” he said.
Aerogel is also being tested for future bombproof housing and armour for military vehicles. In the laboratory, a metal plate coated in 6mm of aerogel was left almost unscathed by a direct dynamite blast.
It also has green credentials. Aerogel is described by scientists as the “ultimate sponge”, with millions of tiny pores on its surface making it ideal for absorbing pollutants in water.
Kanatzidis has created a new version of aerogel designed to mop up lead and mercury from water. Other versions are designed to absorb oil spills.
He is optimistic that it could be used to deal with environmental catastrophes such as the Sea Empress spillage in 1996, when 72,000 tons of crude oil were released off the coast of Milford Haven in Pembrokeshire.
Aerogel is also being used for everyday applications. Dunlop, the sports equipment company, has developed a range of squash and tennis rackets strengthened with aerogel, which are said to deliver more power.
Earlier this year Bob Stoker, 66, from Nottingham, became the first Briton to have his property insulated with aerogel. “The heating has improved significantly. I turned the thermostat down five degrees. It’s been a remarkable transformation,” he said.
Mountain climbers are also converts. Last year Anne Parmenter, a British mountaineer, climbed Everest using boots that had aerogel insoles, as well as sleeping bags padded with the material. She said at the time: “The only problem I had was that my feet were too hot, which is a great problem to have as a mountaineer.”
However, it has failed to convince the fashion world. Hugo Boss created a line of winter jackets out of the material but had to withdraw them after complaints that they were too hot.
Although aerogel is classed as a solid, 99% of the substance is made up of gas, which gives it a cloudy appearance.
Scientists say that because it has so many millions of pores and ridges, if one cubic centimetre of aerogel were unravelled it would fill an area the size of a football field.
Its nano-sized pores can not only collect pollutants like a sponge but they also act as air pockets.
Researchers believe that some versions of aerogel which are made from platinum can be used to speed up the production of hydrogen. As a result, aerogel can be used to make hydrogen-based fuels.
Many of the police responders who worked at Ground Zero on Sept. 11, 2001 after the terrorist attacks to serve on America's darkest day paid a steeper price than they would know.
The attacks claimed 72 officers on that day, and countless others developed debilitating respiratory illnesses in the months and years following the attacks. The New York Police Department lost 23 officers on 9/11, and another 29 since that deadly day due to ailments they developed from working at Ground Zero.
Christine Todd Whitman, head of the Environmental Protection Agency on 9/11, said that the "air was safe to breathe," yet it was later learned she wasn't telling the whole truth.
The plume of debris from the collapsed World Trade Center towers consisted of more than 2,500 contaminants that included construction debris, glass, cellulose, asbestos, lead and mercury, according to a 2006 New York Times article. Substances that dispersed into the Lower Manhattan air included crystalline silica, lead, cadmium, and polycyclic aromatic hydrocarbons.
Initially, World Trade Center Medical Monitoring and Treatment centers, known as "centers of excellence," were set up in the New York area to provide medical care or referrals to first responders. Eventually, federal funding ran out. Funding to the Victims Compensation Fund (VCF), which was set up by Congress, also dried up, by December 2003.
To fill this gap, Rep. Carolyn Maloney (D-N.Y.) introduced the James Zadroga 9/11 Health and Compensation Act. She had the support of numerous law enforcement groups, but few of her out-of-state colleagues.
Extending health-care screenings and treatment for the 9/11 first responders initially stalled because federal lawmakers who represented areas outside of New York believed it was a New York issue.
Why should California, Texas, Illinois or other states pay for health care for New York police officers, firefighters and Port Authority responders, these congressional leaders argued.
As a result, police unions and associations lobbied these officials with a specific message.
"Those who stepped up for our country would be covered," said Jon Adler, president of the Federal Law Enforcement Officers Association (FLEOA). "If one star is attacked, then we are all attacked. We don't want anyone to minimize this as a New York problem, or a D.C. problem or a Shanksville (Pa.) problem."
Congress passed the bill in late 2010, and President Obama signed it into law on Jan. 2.
The bill, which was named after an NYPD detective whose death was the first attributed to Ground Zero toxins, set aside $4.3 billion over five years for ongoing health screenings and care for responders, who came from all but five of the 435 districts in the House of Representatives.
The act provides screenings and care for responders who developed WTC health conditions — mostly aerodigestive disorders such as GERD (gastroesophageal reflux disease), asthma, interstitial lung diseases, or COPD (chronic obstructive pulmonary disease). The bill becomes operational Oct. 1, and provides $2.7 billion of the $4.3 billion to the VCF.
In addition to New York police responders, the act provides care to responders who live outside of the New York area. Those responders can receive care from an eligible medical provider in their area.
Police and fire responders have questioned why the bill doesn't provide treatment to responders suffering from cancer. A study published Sept. 1 in the British medical journal The Lancet reported that firefighters who worked at Ground Zero were 19 percent more likely to develop cancer than those who were not there. Nearly 10,000 firefighters were included in the study.
The leaders of police and fire unions are hopeful that cancer will eventually be covered, says NYPD Sgt. Bob Ganley, the vice president of the Sergeants Benevolent Association.
"We told them the importance of certain diseases that need to be there, such as PTSD and respiratory ailments," Sgt. Ganley said. "We tried to get cancer into the bill. As of right now it's not in. We were satisfied with what was in the bill at passage. Hopefully there will be improvements."
Harrison and Cleveland
Benjamin Harrison, twenty-third President of the United States, was born at North Bend, Ohio, on the 20th of August, 1833, and was the son of John Scott Harrison, a prominent citizen of his native State; grandson of President William Henry Harrison; great-grandson of Benjamin Harrison, signer of the Declaration of Independence. In countries where attention is paid to honorable lineage, the circumstances of General Harrison's descent would be considered of much importance; but in America little attention is paid to one's ancestry, and more to himself.
Harrison's early life was passed as that of other American boys, in attendance at school, and at home duties on the farm. He was a student at the institution called Farmers' College, for two years. Afterwards, he attended Miami University, at Oxford, Ohio, and was graduated therefrom in June, 1852. He took in marriage the daughter of Dr. John W. Scott, President of the university. After a course of study, he entered the profession of law, removing to Indianapolis, and establishing himself in that city. With the outbreak of the war, he became a soldier of the Union, and rose to the rank of Brevet Brigadier-General of Volunteers. Before the close of the war, he was elected Reporter of the decisions of the Supreme Court of Indiana.
In the period following the Civil War, General Harrison rose to distinction as a civilian. In 1876, he was the unsuccessful candidate of the Republican party for Governor of Indiana. In 1881 he was elected to the United States Senate, where he won the reputation of a leader and statesman. In 1884, his name was prominently mentioned in connection with the Presidential nomination of his party, but Mr. Blaine was successful. After the lapse of four years, however, it was found at Chicago that General Harrison, more than any other, combined in himself all the elements of a successful candidate; and the event justified the choice of the party in making him the standard-bearer in the ensuing campaign.
General Harrison was, in accordance with the usages of the Government, inaugurated President on the 4th of March, 1889. He had succeeded better than any of his predecessors in keeping his own counsels during the interim between his election and the inauguration. No one had discerned his purposes, and all waited with interest the expressions of his inaugural address. In that document he set forth the policy which he should favor as the chief executive, recommending the same general measures which the Republican party had advocated during the campaign.
On the day following the inaugural ceremonies, President Harrison sent in the nominations for his Cabinet officers, as follows: For Secretary of State, James G. Blaine, of Maine; for Secretary of the Treasury, William Windom, of Minnesota; for Secretary of War, Redfield Proctor, of Vermont; for Secretary of the Navy, Benjamin F. Tracy, of New York; for Postmaster-General, John Wanamaker, of Pennsylvania; for Secretary of the Interior, John W. Noble, of Missouri; for Attorney-General, William H. H. Miller, of Indiana; and for Secretary of Agriculture--the new department--Jeremiah Rusk, of Wisconsin. These appointments were immediately confirmed by the Senate, and the members of the new administration assumed their respective official duties.
The Harrison administration was marked by the admission of six new States--North Dakota, South Dakota, Montana, Washington, Idaho, and Wyoming. The Census of 1890 had shown the population of the country to be over sixty-two and a half millions. The McKinley tariff bill, a highly protective measure, became a law in 1890. The bill was formulated by William McKinley, chairman of the Ways and Means Committee of the lower House, in which branch of Congress he had served several terms.
Several other important measures were enacted into law within this same year. One was the Dependent Pension law, very similar to the one that President Cleveland had vetoed. By this law all Union soldiers and sailors were entitled to draw pensions from the government if from any cause they were unable to earn a living, and the benefits were extended to their widows, children, and dependent parents.
Another noted law of this year was the Sherman Anti-Trust Law to protect trade and commerce from monopolies and unlawful restraint. It is a curious fact that after this law had lain unused for many years, it began to be enforced, and some of the great court decisions of the present time are based on it. Three or four additional laws of more or less importance date from this year of 1890. Among them are the Sherman Silver Law, the Original Package Law, and the Anti-Lottery Law, excluding lottery tickets and circulars from the mails of the United States.
Three international questions of more or less gravity arose during this administration. One was a dispute, which threatened war for a time, between the United States and Germany, over the Samoan Islands, which lie between our western coast and Australia. War vessels of the two countries were menacing each other in Samoan waters when nature's interference, in the form of a destructive typhoon, dismantled and ruined the ships of both countries, which served to bring to an end all warlike proclivities. The difficulties were settled by arbitration, Germany conceding the demands of the United States.
The "Chilean Affair," a state of bad feeling, brought to a crisis by an altercation between some American sailors and native ruffians and policemen, almost precipitated the two governments into a state of war, which was averted by a complete and humble apology on the part of Chili and indemnity to injured American sailors. The last big matter of the Harrison administration related to the Hawaiian Islands, and the issue reached into three administrations, covering the period from 1893 to 1900. The little monarchy of hte Pacific Isles had for many years been under the dominant rule of a constitutional succession of native kings. The islands were rich and beautiful, but many of the inhabitants were white descendants from American and English stock. In January, 1893, the Queen, Liliuokalani, was deposed by an organized revolution, and a provisional government was set up, with Sanford P. Dole as President. A treaty of annexation was sent to Washington and met with general approval. President Harrison sent the treaty to the Senate, intending to indorse the favorable action of that body. Just before final action was taken President Cleveland came into office and immediately withdrew the treaty from the Senate and preemptorily estopped further action pending an investigation. In 1898, after the termination of Cleveland's administration, these islands were formally annexed and made a territory of the United States.
In 1892 Grover Cleveland was nominated for the presidency for the third time, after being once elected and once defeated, a record in American politics peculiar to Cleveland alone. Adlai E. Stevenson was named for Vice-President. The opposition ticket was Harrison and Whitelaw Reid. The Democrats carried the election by a large majority. By a great portion of his followers Cleveland was looked upon with great favor, almost as an idol. He was endowed with a strong personality, and during his former administration had evinced qualities of remarkable strength and power as a statesman. So-called "practical politics" had no place in his public life, but he gave his extraordinary ability to the service of the whole country freely, courageously, and conscientiously. The main issue in this campaign was the McKinley tariff, though the silver question was looming up in the political horizon soon to be grappled with. The tariff, however, with the consequent industrial unrest and disturbances, was the chief issue.
The beginning of the second Cleveland administration was menaced with a panic, which broke during the year 1893 like a tidal wave over the whole country. It was charged by the party in power chiefly to two causes, the McKinley tariff and the Sherman Silver act. Believing the quickest relief would come by the repeal of this vicious Silver law, which required the government to purchase four and a half million ounces of silver per month, to be paid for in gold, Cleveland called an extra session of Congress in August, 1893. The gold reserve had dropped below a hundred million, the accepted safety point, and the condition was alarming. It was believed that the repeal of this law would relieve the Treasury and the general condition of the country. It was November before the joint action of the Senate and House could be secured, but the panic ere this had the financial and industrial interests of the nation in its viselike grip and it was years before the country recovered. Business of all classes suffered incalculable loss. Mercantile houses, banks, and credit concerns went crashing down to ruin in all parts of the country. In the West drouth and crop failure intensified the distress, and railroads furnished free transportation for trainloads of supplies sent from sympathizing citizens of the East to their suffering brethren of the West.
The most important legislation during this administration in addition to the repeal of the Silver act was the Gorman-Wilson tariff bill, which was so unsatisfactory to President Cleveland that he allowed it to become a law without his signature. This act reduced the McKinley tariff considerably, but in no sense was it a free trade measure.
Outside of political and legislative matters American history records for the year 1893 a most remarkable industrial enterprise, the Columbian Exposition, or World's Fair, held in Chicago, commemorating the 400th anniversary of the discovery of America. This event may have since been rivaled, if not surpassed, but up to that time this stupendous undertaking had not been approached. Constructed at a total cost of nearly forty million dollars, this magical "White City" grown in a night, with its magnificent halls and parks and lagoons, its streets and lawns and gardens, presented a scene of grandeur and beauty beyond the poet's most extravagant imagination of the celestial city. The results of the highest human attainments in art and science and invention, the industries and manufactures from every part of the globe, were gathered here and exhibited before the astonished gaze of the millions of visitors from every nation of the world. The greatness of the United States, as well as of Chicago, was a revelation to our foreign visitors.
In the spring of 1894 during a serious industrial disturbance in the Pullman car works at Chicago, President Cleveland took it upon himself to send Federal troops into the State of Illinois to quell the riots and protect the United States mails. This action was bitterly resented by Governor John P. Altgeld, and was severely criticised in other circles for a time, but the wisdom and celerity of action in dealing with this crisis by President Cleveland have been generally commended as one of his strong acts in dealing with important national affairs.
For many years the United States and Great Britain had enjoyed peaceful relations in their international dealings, but occurrences arose, now and then, which clearly indicated that the English nation had not yet taken the proper measurement of the United States in her estimate of nations. It fell to Cleveland to put the British right on the matter for all time. In 1893 the Behring Sea dispute, which had been of several years' standing, was finally settled by arbitration. Another dispute had arisen concerning Venezuela, which for a time seriously threatened war between the kindred nations. Venezuela had repeatedly requested the British to agree to arbitration in fixing the boundary line with British Guiana, but without avail. The United States expressed its approval of that method of settlement. In 1895 the Secretary of State, Olney, made a demand upon the Premier of Great Britain, Lord Salisbury, that the Government of the United States, in conformity with the Monroe Doctrine, must insist upon arbitrating the matter in dispute. Salisbury haughtily refused to comply or to recognize the Monroe Doctrine. President Cleveland now did one of the things in his public life which mark him as a statesman of great courage and character, and which alone would give him a place in history. He at once sent a message to Congress proposing to investigate the matter independently, and, if it were disclosed that Venezuela had just grounds, to espouse her cause. This meant war with England, unless she receded from the stand taken by Salisbury. This is exactly what she did; the matter was satisfactorily arranged and the Monroe Doctrine vindicated. For a few days excitement ran high and the feeling was intense. Party prejudices were entirely forgotten and there was unanimity of sentiment in support of the dignified and firm stand taken by the President.
Cleveland was a man of sound judgment, of high principles, and strong convictions, but he was unhappy in his method of dealing with his political associates, and unfortunate in the chain of circumstances which fell upon the closing period of his last administration. Added to other things charged against him, it became necessary to make a bond issue of sixty-three million dollars, the necessity for which had become apparent before Cleveland assumed office. His second inauguration had been like the triumph of a national hero, but circumstances seemed to conspire to cast a shadow over his remarkable career. Upon relinquishing public office he retired to private life, making his home at Princeton, New Jersey, where he lived in happiness with his family for more than a decade, respected, honored, and loved by all who knew him.
The country was now to pass through the agitation of another presidential election. For nearly forty years, excepting the two terms of Cleveland, the affairs of the country had been in the hands of the Republican party. It represented almost half a century of national growth, and marvelous progress in wealth and population in the various States of the Union. The Western States had heretofore been given but slight attention in national elections, but now, like a young giant, the West had risen in its strength and must be reckoned with.
In the campaign of 1896 new issues arose other than the tariff, the old bone of contention, and chief among these was the money question. It was obvious that a large faction would favor free silver, and that the party so declaring itself in national convention would receive this important vote. When the Republicans held their convention at St. Louis in June, 1896, they adopted the gold standard as a part of the party platform, and nominated William McKinley, of Ohio, the author of the McKinley tariff bill, as their candidate, and Garrett A. Hobart, of New Jersey, for Vice-President.
The Democrats met at Chicago in July, and after a stormy session, the silver faction from the West and South captured the convention and nominated William Jennings Bryan, of Nebraska, for President, and Arthur Sewall, of Maine, for second place. The paramount issue was declared to be the free coinage of silver on a parity with gold at a ratio of sixteen to one. Bryan was a surprise to his party and to the nation. His dramatic nomination was the result of his masterful eloquence. In a noted speech he won the convention with his opening words and held the vast audience to the close. Pandemonium then reigned in that great throng of twenty thousand excited, shouting, screaming men. They had found their leader.
The action of the Democrats brought to their support the silver factions of other parties. There was a considerable secession from the Republican party, headed by such able men as Senators Stewart, of Nevada, and Teller, of Colorado. The new Populist party, which had absorbed the People's party, unanimously flocked to Bryan. The conservative Democrats put into the field as an independent candidate, John M. Palmer, of Illinois, on the gold platform, the design being to draw the vote away from Bryan. Bryan's ability, which had been conspicuously recognized in Congress, his magnetic personality, and his matchless oratory were alarming to his foes. He entered into a campaign unique in the country's experience, visiting all important points throughout the land, being heard by millions. He was a man of faultless life, modest and charming in manner, clean and respectful in all his campaign utterances, though he was often misrepresented in the opposition press. Mr. McKinley conducted his campaign at his home in Canton, Ohio, delivering many addresses to delegations which came from all parts of the country. McKinley was a most estimable gentleman of the old school, of kindly disposition, high moral character, and large ability. Both of these distinguished men were of pronounced religious convictions. It was a clean and dignified and honest campaign on the part of the candidates, and entirely free from mudslinging and personalities. There was doubt as to the outcome, and great apprehension on the part of many who believed that victory for the Democrats would mean ruin for the country. All the force that could be brought to bear upon the employees of some large industrial interests was used to carry the election against free silver. The results showed McKinley to be elected by 271 electoral votes to Bryan's 176. McKinley received a majority of 600,000 of the popular vote out of a total of thirteen and a half million.
After the election apprehensions were allayed and business affairs began to assume a more normal condition. The panic, which had been intensified during the campaign by the mysteries of "High Finance," began to wane, and the following year crops were good and the country was fast approaching a normal condition. The marvelous gold fields of the Klondike were discovered, the fertile fields poured out their bounty, prices were high during most years, and the localities blasted by panic and famine were rehabilitated, doubling their population, and also the value of lands.
It had been necessary to make a bond issue at the close of Cleveland's administration to strengthen the depleted treasury. McKinley upon assuming office recognized the necessity of action to relieve the situation, and immediately called a special meeting of Congress to provide needed revenue. This meant a new tariff law. The result was the Dingley Tariff bill, which became effective in July, 1897, and was undisturbed for over ten years. Like the McKinley bill, it was a highly protective measure.
Published on October 14th, 2009 | by Yellow Magpie
The Comb Jelly: Lethal Beauty Of The Oceans
Comb Jelly Photo By Marsh Youngbluth
One of the most colourful and delightful creatures in the sea, the Comb Jelly is an animal like no other. Unique, graceful and delicate, their bodies reflect light in a way that no master painter could ever hope to emulate.
The Comb jelly is one of the most dazzling and oldest sights in nature. Over half a billion years ago, comb jellies were one of only four types of animals that roamed the planet. While worms splintered off into what has become a diverse range of animals, sponges, jellyfish and comb jellies have almost remained the same since.
Animal Or Plant
Although comb jellies may look like plants, they are in fact animals that feed on small fish and crustaceans. Some are even cannibals and will feed on one another if they get the chance. Yet they were not always classed as animals. There was general disagreement amongst naturalists as to whether they were plants or animals. Because of this, they were classed as zoophytes (animal plants) as a compromise.
Comb jellies are the largest animals that propel themselves via cilia – hair-like structures. They are thought to be more numerous than any other animal of larger or comparable size.
The Comb Jelly’s Seemingly Simple Structure
The sphere and the bell jar are the two most common shapes that combs come in. Each has eight rows that contain moving cilia, while a mesoglea layer of fibrous collagen gel holds mostly water. In fact, combs are over 95 per cent water.
Although lacking a brain or even a central nervous system, they have what is called a nerve net. This is a mesh of nerve cells that forms a structure around the comb jelly's mouth and provides it with sensory information. Combs also possess a sensory organ called a statocyst, which acts as a gravity-sensing device telling the comb its orientation in the water.
Comb Jelly’s Amazing Appearance
Comb jellies are one of the ocean’s most ethereal sights. The beating of the cilia refracts light, making the combs look like eight shimmering rainbows. Comb jellies that live at shallow depth are transparent and colourless.
However, deep water varieties are strikingly colourful such as the bright scarlet ‘Torugas red’. Others are bioluminescent blue and green.
Are Comb Jellies Jellyfish?
Comb jellies (Ctenophores) are not jellyfish, nor are they related to the Box jellyfish, and are classified instead as Ctenophora. Unlike true jellyfish, ctenophores move by beating cilia attached to their eight comb rows. Another difference between comb jellies and jellyfish is that they do not sting. Instead they catch their prey by sticking them with mucus from retractable tentacles. Finally, true jellyfish have no anal pores whereas combs do.
Most comb jellies are hermaphrodites. It is maintained that most hermaphrodite species can self-fertilise and produce off-spring.
However, their usual reproductive method is to release tens of thousands of eggs and sperm into the water. Thus successful reproduction becomes a numbers game.
The fact that comb jellies are the most abundant of all animals of equal or larger size is a testament to the effectiveness of this strategy.
Valuable Control or Pest?
It is believed that comb jellies preserve a delicate balance by consuming small crustaceans. Otherwise these crustaceans would eat all the phytoplankton upon which all higher marine species on the food chain are dependent.
Slightly blurring their environmental benefits is the accidental introduction of the American comb jelly, Mnemiopsis leidyi, into the Black Sea. This comb is widely blamed for the collapse of the commercial fishing industry in the region. The highly adaptable comb, christened 'The Monster' by the locals, is said to have been transported in the ballast tank of a US ship.
Since its colonisation of the Black Sea. ‘The Monster’s’ population is estimated to weigh over a billion tonnes. Each individual comb can produce 8,000 offspring every 24 hours.
However, a possible solution has been drawn up to curb the problem. A cannibalistic comb jelly called Beroe ovata has been introduced to hunt down and control the population of 'The Monster.'
Successful Style and Substance
Even the great master of light, 20th century painting icon Edward Hopper, would gasp in admiration at the aesthetic marvels of these resplendent creatures. Of course one cannot admire a mere shell. They must have strong significance too.
Comb jellies have both style and substance in abundance. They are the most successful of all large animals on the planet. A perfect marine specimen. It seems that we have still a lot to learn from these little iridescent creatures.
- Comb jellies can measure up to 1.5 metres (five feet).
- They are found in every type of marine environment.
Check out Yellow Magpie’s Box Jellyfish: The All-Seeing Creature With 24 Eyes for insight into another remarkable animal.
The Art of Nature: Jellies is a recommended DVD containing more information on Jellyfish and Comb Jellies and is a great way to learn more about these fascinating creatures.
by John H. Koester
One of the first commonly employed weep details was the sash cord or ‘rope’ weep. In some cases, this detail was expanded with sections of the sash cord laid in the cavity and then extended through
the wall, usually at a head joint. In other cases, the sash cord was fastened vertically up the backside of the cavity. In yet other instances, it would be pulled out of the wall, leaving a hole through the head joint or bed joint of mortar.
How and when these sash cord sections were placed or embedded in the bed joint of mortar impacted whether they had any weeping capacity. If they were placed on the flashing and the bed joint of mortar was spread on top, the finished detail looked like Figure A. However, if the bed joint of mortar was spread and the sash cord section was laid or embedded into it, the finished detail looked like Figure B. The theory was the cotton sash cord (or a synthetic one) would ‘wick’ water out of the core or cavity and dry the units. However, if there is one takeaway from this article, let it be that one should not get into a wicking contest with mortar or masonry units—how can a 9.5-mm (3/8-in.) diameter sash cord compete against an entire masonry assembly?
Many have seen an example of a rope weep that has moisture stains around the outside end of the cord; it appears to have moisture ‘weeping’ from it. What is really happening is a small amount of moisture is actually exiting the cavity through small voids in the bed joint of mortar at the 5 o’clock and 7 o’clock positions on the bottom radius of the sash cord.
Various tube weeps—pieces of plastic pipe cut to length—have also been introduced to the masonry industry. Their installation procedure is virtually the same as the sash cord material and so are the shortcomings. Even when the tubes are correctly installed on the flashing’s surface, the weep’s wall thickness is still a water dam.
This piece is excerpted from the full article, "Weep Now or Weep Later: Moisture management and risk zones for masonry."
What are osteochondral lesions?
Osteochondral lesions, sometimes called osteochondritis dissecans or osteochondral fractures, are injuries to the talus (the
bottom bone of the ankle joint) that involve both the bone and the overlying cartilage. These injuries may include blistering of the cartilage layers, cyst-like lesions within the bone underlying the cartilage, or fracture of the cartilage and bone layers. Throughout this article, these injuries will be referred to as osteochondral lesions of the talus (OLT).
What are symptoms of OLTs?
OLTs can occur after a single traumatic injury or as a result of repeated trauma. Common symptoms include prolonged pain, swelling, catching and/or instability of the ankle joint. After an injury such as an ankle sprain, the initial pain and swelling should decrease with appropriate attention (rest, elevation). Persistent pain despite appropriate treatment after several months may raise concern for an OLT.
Pain may be felt primarily at the lateral (outside) or medial (inside) point of the ankle joint. Severe locking or catching symptoms, where the ankle freezes up and will not bend, may indicate that there is a large osteochondral lesion or even a loose piece of cartilage or free bone within the joint.
What causes OLTs?
The majority of OLTs, as many as 85 percent, occur after a traumatic injury to the ankle joint. Ankle sprains (rolling-inward injuries to the ankle) are a common cause of OLTs. With this type of injury, a section of the talus surface may impact another part of the ankle joint (tibia or fibula). As this happens, an impaction, crushing or shearing injury to the talus may occur. Other types of injury mechanisms may also cause an injury to the surface of the talus.
The talus is the bottom bone of the ankle joint. Much of this bone is covered with cartilage. The tibia and fibula bones sit above and to the sides of the talus, forming the ankle joint. This joint permits much of the up (dorsiflexion) and down (plantarflexion) motion of the foot and ankle. The blood supply to the talus is not as rich as many other bones in the body, and as a result injuries to the talus sometimes are more difficult to heal than similar injuries in other bones.
How are OLTs diagnosed?
OLTs are diagnosed with a combination of clinical and special studies. Your orthopedic foot and ankle surgeon may have a suspicion that you have this type of injury from the history you provide. Physical examination can further increase the suspicion for this type of injury. Imaging is necessary to confirm the diagnosis. Occasionally, regular X-rays can show an OLT but frequently additional imaging is needed. This additional imaging may be a CT scan or an MRI.
What are treatment options?
Once the diagnosis has been confirmed, treatment may consist of either non-operative or operative methods. The specifics of treatment will likely depend on the nature of the OLT, presence of other injuries and patient characteristics.
Non-operative treatment is appropriate for certain lesions and usually involves immobilization and restricted weightbearing. This may then be followed with gradual progression of weightbearing and physical therapy. The goal of non-operative treatment is to allow the injured cartilage and bone to heal.
Other lesions may be more appropriately treated with surgery. The goals of surgery are to restore the normal shape and gliding surface of the talus in order to re-establish normal mechanics and joint forces. The hope is to minimize symptoms and limit the risk of developing arthritis.
Depending on the characteristics and location of the OLT, surgery may be done arthroscopically or by opening the skin. Arthroscopy uses a camera and small instruments to view and work within the joint through small incisions. It may not be possible to properly treat certain lesions arthroscopically due to the size or location of the lesion. Treatments may include debridement (removing injured cartilage and bone), fixation of the injured fragment, microfracture or drilling of the lesion, and/or transfer or grafting of bone and cartilage. You and your orthopaedic foot and ankle surgeon can discuss these treatment options and decide which one is best.
What happens after treatment?
Anticipated recovery after an osteochondral lesion varies depending upon the nature of the lesion and the treatment. Most treatments require a period of immobilization and restricted weightbearing that can range from several weeks to several months. More involved procedures that include bone grafting or cartilage transfer may require a longer period of recovery.
The results of non-operative treatment of OLTs have been disappointing. Most studies show that full resolution of the pain from an OLT occurs in less than half of cases treated without surgery. Studies examining the outcomes after debridement and microfracture (drilling) of OLTs have shown that the majority (greater than 70 percent) of patients have a good or excellent outcome. Procedures that transfer bone or cartilage to an OLT also have good outcomes. In general, the best results can be expected for smaller lesions.
Complications, such as infection or wound healing problems, are uncommon after arthroscopic ankle surgery. More complex procedures with an open surgical approach or bone or cartilage transfer may have additional risks. The complications that relate to surgery in general include the risks associated with anesthesia, infection, damage to nerves and blood vessels, and bleeding or blood clots. In addition to standard surgical risks, additional complications may include the failure of any transplanted tissue to heal or poor healing of the bones where they were cut.
The Plague Religion Quotes
How we cite our quotes: Citations follow this format: (Part.Chapter.Paragraph). We used Stuart Gilbert's translation.
When for the third time the fiery wave broke on him, lifting him a little, the child curled himself up and shrank away to the edge of the bed, as if in terror of the flames advancing on him, licking his limbs. A moment later, after tossing his head wildly to and fro, he flung off the blanket. From between the inflamed eyelids big tears welled up and trickled down the sunken, leaden-hued cheeks. When the spasm had passed, utterly exhausted, tensing his thin legs and arms, on which, within forty-eight hours, the flesh had wasted to the bone, the child lay flat, racked on the tumbled bed, in a grotesque parody of crucifixion. (4.3.24)
Not exactly the subtlest religious imagery we've ever read, but OK. To start, you've got all the flames, which remind us of Hell, but the "crucifixion" bit at the end is what really gives it away.
In the small face, rigid as a mask of grayish clay, slowly the lips parted and from them rose a long, incessant scream, hardly varying with his respiration, and filling the ward with a fierce, indignant protest […]. Paneloux gazed down at the small mouth, fouled with the sores of the plague and pouring out the angry death-cry that has sounded through the ages of mankind. He sank on his knees, and all present found it natural to hear him say in a voice hoarse but clearly audible across that nameless, neverending wail:
"My God, spare this child!"
But the wail continued without cease. (4.3.30-32)
Camus seems to drive home his point that prayers and religion are useless in an indifferent world of suffering. Paneloux’s cry is met with a further wail from the tortured child, which is as close as you’re going to get to divine intervention in this novel.
"I understand," Paneloux said in a low voice. "That sort of thing is revolting because it passes our human understanding. But perhaps we should love what we cannot understand."
Rieux straightened up slowly. He gazed at Paneloux, summoning to his gaze all the strength and fervor he could muster against his weariness. Then he shook his head.
"No, Father. I’ve a very different idea of love. And until my dying day I shall refuse to love a scheme of things in which children are put up to torture."
A shade of disquietude crossed the priest’s face. "Ah, doctor," he said sadly, "I’ve just realized what is meant by ‘grace’." (4.3.50-53)
Great – Paneloux realizes what is meant by "grace." Too bad he doesn’t let us in on the secret. Good thing we looked it up. In Christianity, grace is sometimes defined as unconditional belief in God. Clearly, Rieux’s belief is conditional, since he uses the horrors of the plague as evidence that God does not exist.
|
<urn:uuid:6ef20b12-6485-41d4-8dad-9dfa0cb79b7c>
|
CC-MAIN-2016-26
|
http://www.shmoop.com/plague-camus/religion-quotes-4.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391634.7/warc/CC-MAIN-20160624154951-00058-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.964964 | 696 | 2.53125 | 3 |
ActiveRDF is a library for accessing RDF data from Ruby programs. It can be used as a data layer in Ruby on Rails, similar to ActiveRecord (which provides an O/R mapping to relational databases). ActiveRDF in RoR allows you to build semantic web applications very rapidly. ActiveRDF gives you a Domain Specific Language (DSL) for your RDF model: you can address RDF resources, classes, properties, etc. programmatically, without writing queries.
See www.activerdf.org for more information.
See wiki.activerdf.org/GettingStartedGuide for information on how to install ActiveRDF, point to data sources, and use it. In brief: “gem install activerdf_sparql”.
The following example uses a SPARQL endpoint with FOAF data and displays all people found in the data source:
require 'activerdf'

# Connect to a SPARQL endpoint holding FOAF data.
ConnectionPool.add_data_source :type => :sparql, :host => '...'

# Register the FOAF namespace and generate Ruby classes for the RDF schema.
Namespace.register :foaf, 'http://xmlns.com/foaf/0.1/'
ObjectManager.construct_classes

# Find all foaf:Person resources in the data source.
people = FOAF::Person.find_all
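Once the classes have been constructed, RDF properties can be read (and, with a writable adapter, written) as ordinary Ruby attributes. The following is a minimal sketch continuing the example above; the URI is hypothetical, and the exact attribute syntax (plain person.name versus the namespace-qualified person.foaf::name) may differ between ActiveRDF versions:

# Print the foaf:name of each person found above. Multi-valued
# properties may come back as collections rather than single values.
people.each do |person|
  puts person.name          # shorthand attribute access
  puts person.foaf::name    # namespace-qualified access, useful when ambiguous
end

# A resource can also be addressed directly by its URI (hypothetical here):
alice = RDFS::Resource.new('http://example.org/people#alice')
alice.name = 'Alice'        # writes a foaf:name triple (needs a writable adapter)

Note that the SPARQL adapter used above is read-only, so the final write would require a different data source (e.g., one of the local RDF store adapters).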
ActiveRDF is distributed under the LGPL license.
|
<urn:uuid:de83d9ed-234d-442e-9a6b-68a134e2bf6a>
|
CC-MAIN-2016-26
|
https://github.com/ActiveRDF/ActiveRDF
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395039.24/warc/CC-MAIN-20160624154955-00088-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.687642 | 268 | 2.890625 | 3 |
General Rule: The Equal Access Act ("EAA") (20 U.S.C. §§ 4071-74) requires public schools which meet certain criteria to treat all student-initiated groups equally, regardless of the religious, political, philosophical or other orientation of the groups. This means that to the extent that a school board opens up its school facilities to any student-led and run non-curriculum related group, it must uniformly open its facilities to all student-led and run groups, including religious ones. The EAA was adopted by Congress in 1984, and its constitutionality was upheld by the U.S. Supreme Court in Board of Education of Westside Community Schools v. Mergens, 496 U.S. 226 (1990).
This Chapter is limited to a discussion of student-initiated religious clubs. Chapter X addresses the
related topic of access to school facilities by outside religious clubs and organizations.
Does the EAA apply to all schools? No. The EAA only applies to schools that meet a three-part test. The school must:
- Be a public secondary school;
- Receive federal financial assistance; and
- Have designated certain facilities as a "limited open forum."
What is a "Secondary School" in the context of the EAA? A secondary school is often defined by state statute, but typically means grades 9-12. It is unlikely that the Equal Access Act applies to so-called "middle" schools. 61
What is a "limited open forum" in the context of the EAA? A "limited open forum" is created
when a public secondary school allows one or more "non-curriculum related student
groups to meet on school premises during non-instructional time." 20 USCA § 4071(b). Local
school boards decide whether to create and maintain limited open forums.
What is a "non-curriculum related student group"? "Non-curriculum related student
group," as used in Equal Access Act, refers to those student groups whose activities are not
directly related to the body of courses offered by the public school (e. g., the chess club).
Student groups that are directly related to the subject matter of courses offered by the school
(e. g., the Spanish club) do not fall within the non-curriculum-related category and thus
would be considered curriculum-related. 62, 63
What restrictions does the EAA place on non-curriculum related student groups?
- The group must be student-initiated.
- The group must be student-sponsored and student-led.
- Participation in the group must be voluntary.
What are the rights of non-curriculum related student groups under the EAA? The EAA grants these groups equal access to school facilities for meetings, and equal access to school media (e.g., school publications, school bulletin boards and public address systems) for publicizing their activities. They may choose their own leaders, restricting certain leadership roles to people of their own faith. 64 However, general membership probably cannot be restricted on the basis of faith.
What is "non-instructional time"? "Non-instructional time" is time which a school sets
aside before classroom instruction begins or after classroom instruction ends. Non-instructional
time also encompasses an activity period or lunch period during which instruction
does not occur and during which other groups are allowed to meet. 65
What are the rights retained by school authorities under the EAA? School officials have the
right to monitor club meetings to ensure compliance with provisions of the EAA. School
authorities can "maintain order and discipline on school premises" and may prohibit club
meetings which "materially and substantially interfere with the orderly conduct of educational
activities within the school." School officials have the duty of protecting the "well-being
of students and faculty." School officials should require religious clubs to follow the
same rules as all other student clubs, including adherence to any nondiscrimination policy.
School authorities may establish time, place and manner regulations applicable to club meetings,
provided that the restrictions are uniform and nondiscriminatory. School officials have
the right to close the limited open forum at any time by prohibiting all non-curriculum related
clubs from meeting on school premises, thus ending the school's obligations under the EAA.
What are the restrictions and obligations placed upon the school, its agents and employees
by the EAA? School personnel, including teachers, may not initiate, sponsor, promote,
lead or participate in religious club meetings. However, school personnel may be required
to monitor club meetings. 20 USCA §4071.
May outsiders attend meetings? Outsiders, such as clergy members, may not initiate club
meetings. Outsiders "may not direct, conduct, control or regularly attend activities of student groups." 67
Outsiders may occasionally attend club meetings if invited by the students
and if the school does not generally prohibit such guests. However, school officials may
totally forbid non-school persons from attending all student club meetings. 68
What are some concerns that arise when a club meets pursuant to the EAA?
The meeting of religious clubs in school facilities pursuant to their rights under the EAA may
create an appearance of school endorsement of religion in violation of the Establishment
Clause. School officials must protect against such impressions and may do so by issuing disclaimers
clearly stating that the school is not sponsoring, endorsing or promoting any non-curriculum
related student groups.
Schools must also recognize and guard against the threat of coercive peer pressure, which
may be substantial. Student club members may be able to coerce students into joining sectarian
groups and adhering to the club's beliefs, particularly if the student body is composed
largely of the same religious faith as that practiced by club members. Such clubs might create
"insider" and "outsider" student groups, and, as a result, students may be ridiculed,
harassed or ostracized.
High School Principal Rejects Student Application for Bible Club, but Permits Other
Non-Curriculum Related Clubs
Three students at Hawthorne High School decide to form a Bible study club. To organize
and structure their club, they enlist the help of their local minister. A school science
teacher agrees to become the club advisor. The principal has allowed a wide variety
of clubs to meet after school hours, including the chess club, the audiovisual
squad, and the Spanish club, but is concerned about the controversy that this club
could create. The students claim that the Equal Access Act protects their right to form
this club. When he rejects the club proposal, the principal states that all other school
clubs are related to the curriculum and hence the Equal Access Act does not apply.
Is the principal correct?
The school has created a limited open forum by allowing other non-curriculum related clubs to
meet, and therefore it must allow the Bible study club to meet. However, the roles of the minister
and science teacher in the club have to be carefully controlled pursuant to the dictates of the EAA.
People in Community Object to Controversial Non-Curriculum Related Club
A high school allows non-curriculum related student-organized, student-led clubs to
meet before and after the school day. A very controversial club has been proposed by a
student, and many in the community are opposed to this club's meeting.
What are the school's options?
Under any circumstance, a school may prohibit clubs and organizations that are contrary to the
educational mission of the school or present a danger to the health and safety of a school. This is
a very high standard: a school district may not bar a student club merely because the school or
the community disagrees with its message, even if they disagree strongly. Should the District so
elect, it can ban all such non-curriculum related clubs (such as service clubs) including this one.
|
<urn:uuid:e01a7762-8759-4163-8e80-01fa449766d8>
|
CC-MAIN-2016-26
|
http://archive.adl.org/religion_ps_2004/clubs.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395346.72/warc/CC-MAIN-20160624154955-00112-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.956504 | 1,688 | 2.78125 | 3 |
WWI & WWII RESEARCHING
There are many instances coming to our attention concerning children and grandchildren, particularly of WWII veterans, who are researching their father or grandfather, who served in the European Theater during WWII.
Believe it or not, some do NOT know where their father is buried! Mother never told them. She was traumatized at the time, and just said that "Daddy just went away and never came back." Now they are finding out that "Daddy" was a "Hero" who never came home because he is "buried over there somewhere!" They do not know the circumstances of his death, which unit he fought with, the actions in which he was involved, nor why he was not brought back home after the war. Many puzzling questions are being asked, and each one deserves an answer. But where are the answers?
When a soldier was killed, he remained on the battlefield, in place, until their unit's medics came along, pronounced him dead, and they in turn notified the Graves Registration Unit. If by chance the soldier was not dead, he was evacuated to the nearest Aid Station or Hospital, where eventually he recovered or died. Again the Graves Registration Unit removed the body to its own facility.
The G.R.U. made positive identification of the body, recorded the condition of the body, obvious wounds and/or the cause of death, and was inventoried for personal effects.
The body was then removed to a rear area, where further identification was made, a medical examination was performed, and personal effects were packaged, secured, and later sent to the nearest next of kin, with the usual greeting from the U.S. Army, "We regret to inform you...".
As soon as possible and practical, the body was placed in a body bag, which sometimes consisted of only a mattress cover, and buried in a shallow grave of a temporary cemetery, although carefully marked and identified.
After establishment of our many U.S. Military Cemeteries and Memorials throughout Europe, all of the bodies in these temporary cemeteries were re-interred into the permanently established sites, and those temporary cemeteries were closed.
Starting in 1947, a repatriation of bodies program was initiated, and up until the 1960s it was possible to have bodies returned to the U.S. at government expense for private burials near the decedent's home. It was the decision of the family as to whether they wanted the body brought home or not. Of course a big percentage did, but the rest chose to let their "loved one" rest among his buddies near where he fell, for the cause in which he believed and for which he fought. Since that time, further removal of bodies has been prohibited.
Next-of-kin may request information on veterans who were Killed in Action (KIA), and in many cases, can get a detailed file on the handling of the body from the time of death until burial in a U.S. Military cemetery and/or removal to the U.S., and in some cases, information on where the body was re-interred in the U.S.A.
This information is contained in an "Individual Deceased Personnel File" (I.D.P.F.), and may be obtained as follows:
Write to the agency below, requesting the "IDPF" on: Veteran's full name; Army Serial #; Unit if known; Date and Place of death if known; and your relationship to the deceased veteran. The more details that are furnished, the quicker they can research this information and get a reply back to you. It will take perhaps 6-8 weeks to obtain this data.
All of the Cemeteries on foreign soil are under the jurisdiction of the "American Battle Monuments Commission" in Arlington, VA.
Each cemetery has been granted use of the site, in perpetuity by the host government, to the United States, tax and rent free.
There are eight (8) WWI U.S. Cemeteries in Europe, mostly in the NE of France and SE of Belgium, and one in England, as follows:
There are twelve (12) WWII U.S. Cemeteries in Europe one (1) in England, one (1) in North Africa, and one (1) in the Philippines as follows:
A white marble headstone marks each grave - a Star of David for those of the Jewish faith; and a Latin Cross for all others.
At the memorials in these Cemeteries, are inscribed the names of the Missing, who in the respective regions, gave their lives in the service of their Country, but whose remains were never found or not identified. A small non-denominational chapel forms a part of each WWII Cemetery & Memorial.
No further burials may be made in the Cemeteries under the ABMC's jurisdiction, except those remains which may, in the future, be found on one of the battle fields. Occasionally a body is found after all of these years, and is respectfully buried in the nearest Cemetery to where the body was found. (Unless the family requests otherwise).
Cemeteries are open to the public every day of the year, usually from 9 - 5.
Only fresh cut flowers and arrangements are allowed to be placed on gravesites.
An American Superintendent is stationed at each Cemetery, and all administrative personnel speak English, and can assist in location of grave sites.
Upon request of the "next-of-kin," the ABMC will furnish, at no cost, photos of the grave site or memorial.
A brochure listing all of the Cemeteries and Memorials, giving a description of each, as well as directions to the facility and other pertinent data, and a separate and special brochure on each facility may be obtained from the ABMC as follows:
In some of the above Cemeteries, many of the local people have adopted a grave of a soldier who is unknown to them, but he is "their adopted son," in thanks and honor for his sacrifice in giving them their Liberty and Freedom in 1944-5. It is just amazing that these people still give thanks and pay tribute to their heroes after all these 56 years!
A new website has recently been opened by the A.B.M.C. This website gives one the opportunity to locate the grave of any person buried in an overseas cemetery. It can be found at: www.abmc.gov. Click on "About War Dead". This page will give you a picture and details of each Cemetery; the Rank and Army Serial # of any individual that you may be seeking (you provide the correct legal name); the burial site; and the grave location within the Cemetery.
Another facility that researchers may be interested in, is the storage facility of most of our US military records, where you can possibly get information on the combat records of a veteran. In order to facilitate such a search, you must supply: Name, Rank, Army Serial #, Dates of Service, Branch of Service, and specific unit and dates for which you want records. The wait may be as long as 12 months of longer - they are notoriously slow!
General historical and Army unit histories may be available from:
Books may be obtained from here on loan through your own Public Library on an inter-library-loan basis, if available at time of request.
Another repository of importance is our National Archives, where many military records are stored, and copies of such material can be obtained for a nominal cost of copying and postage.
Listed below are the names and addresses of all of the European Cemeteries & Memorials, with e-mail addresses. All Superintendents are very cooperative in replying to individual requests. If you'd like a printable, MS Word version of this list, click here.
THE AMERICAN BATTLE MONUMENTS COMMISSION
EUROPEAN WORLD WAR I CEMETERIES
Flora Nicolas - Cemetery Associate
Derek Odell - Cemetery Associate TEL: 00.44.1483.473.237
Christopher Sims - Cemetery Associate TEL: 00.32.56.60.11.22
Dominique Didiot - Cemetery Associate
Nathalie Le Barbier - Cemetery Associate TEL: 03.23.82.21.81
Nadia-Ezz-Eddine, Cemetery Associate TEL: 03.83.80.01.01
Murielle Defrenne, Cemetery Associate TEL: 03.23.66.87.20
Gabrielle Mihaescu, Cemetery Associate
WORLD WAR II CEMETERIES
Joris Vincent - Cemetery Associate TEL: 00.32.43.71.42.87
Maurice Lemardele, Cemetery Associate TEL: 02.33.89.24.90
Arthur Brookes, Cemetery Associate TEL: 00.44.1954.210.350
Dominique Jambois, Cemetery Associate TEL: 03.29.82.04.75
Caroline Oliver, Cemetery Associate TEL: 00.32.87.68.71.73
Valerie Muller, Cemetery Associate TEL: 03.87.92.07.32
Erwin Franzen - Cemetery Associate TEL: 00.3188.8.131.52
Frenk Lahaye, Cemetery Associate TEL: 00.31.43.45.81.208
NORMANDY VISITOR CENTER (NVC)
Note: Superintendents may change occasionally.
For more information on the ABMC Website, the following address may be helpful: http://www.abmc.gov/
Various Services Offered
Among the services available: name, location and general information concerning the Cemetery or Memorial; Plot, Row & Grave # if appropriate.
BRITISH OR CANADIAN CONTACTS:
Requests for similar information concerning burials of WWI & WWII British or Canadian veterans in the Ypres, Belgium area may be made as follows:
Mr. Jeremy Gee, OBE, Director
Through this agency, the addresses for contacts in other areas may be obtained.
OTHER USEFUL ADDRESSES:
Battle Monuments Commission
ABMC European Region
ABMC Mediterranean Region
ABMC Pacific Region
Chief, Bureau of Medicine & Surgery
Operational Archives Branch
Memorial of the Pacific
Mr. Robert King
National Personnel Records
Camp Blanding Museum
Ex-Prisoner of War
Much of this data has been taken from A.B.M.C. documents, and other sources which are in the public domain.
|
<urn:uuid:937a9fe1-54c1-4f47-bd7b-9ce92cd2ad12>
|
CC-MAIN-2016-26
|
http://www.30thinfantry.org/researching.shtml
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395560.69/warc/CC-MAIN-20160624154955-00012-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.948291 | 2,289 | 3.03125 | 3 |
HPV: To vaccinate or not to vaccinate?
Human papillomavirus (HPV) is a common virus that is typically spread by skin-to-skin contact during sexual activity. Certain types of HPV can cause cancer. The first vaccine to protect against certain types of HPV, Gardasil®, was introduced in 2006 and a second vaccine, Cervarix®, followed in 2009.
Today, HPV vaccination is recommended as part of the routine immunization schedule for boys and girls aged 11 or 12, with catch-up vaccination recommended for those 13-26 years old.
The HPV vaccine has been controversial in the U.S. since it was introduced because of a fear that vaccination might encourage girls to engage in sexual activity at a young age. More recently, parents also have expressed concerns about the safety of the vaccine.
A recent survey showed that 16.4 percent of parents in 2010 did not plan to have their daughter immunized because of concerns about safety and side effects. In 2008, it was 4.5 percent.
Health officials and physicians have offered several explanations for the low vaccination rate among teen girls, including inconsistent information from doctors who don’t always suggest the vaccine because of insurance or ethical concerns. Here are some facts about HPV vaccination to help families make a more informed decision.
What’s the difference between the two vaccines and who should get them?
Both Gardasil and Cervarix are effective against HPV types 16 and 18, which cause most cervical cancers. Both vaccines have been shown to prevent cervical precancers in women. To protect against cervical cancer, either vaccine is recommended for girls at 11 or 12 years old. Teenage girls and young women ages 13-26 are advised to get the HPV vaccine if they did not receive any or all doses when they were younger.
Only Gardasil has been tested and licensed for males. Gardasil is recommended for boys aged 11 or 12 years, and for males 13-21 years old, who did not get any or all of the three recommended doses when they were younger. All men may receive the vaccine through age 26.
The vaccine is also recommended for gay and bisexual men, any man who has sex with men and men with compromised immune systems through age 26, if these men did not get fully vaccinated when they were younger.
What are the vaccine risks?
Side effects of the HPV vaccines are similar to other vaccines, and may include fainting, dizziness, nausea, headache, fever and hives. Pain, swelling, itching, bruising and redness at around the injection also can occur.
Some parents may be concerned about media reports of young girls who’ve died after being vaccinated. The Centers for Disease Control and Prevention (CDC) has investigated 42 reports of deaths among HPV vaccine recipients, finding that the cause of death varied from cardiovascular to infectious. CDC officials concluded the vaccine was not responsible for those 42 deaths.
The benefits of ‘herd immunity’
The good news is that while only one-third of teenage girls have received the full course of HPV vaccine, the prevalence of HPV strains that cause cervical cancer has fallen by more than half among all teen girls. The steep decrease comes despite the United States having one of the lowest HPV vaccination rates among developed countries. The concept of herd immunity could be at work. Herd immunity occurs when a critical portion of a community is immunized against a contagious disease, and those who have not been vaccinated are protected.
Vaccination is driving the recent decrease in HPV. In 2006, the year Gardasil was introduced, the infection rate with HPV strains targeted by the vaccine was 11.5 percent among 14- to 19-year-old girls. By 2010, the rate was cut to 5.1 percent, a 56 percent decrease, according to a national survey of teen girls reported in The Journal of Infectious Diseases in June.
In addition, even a partial course offers protection: just one dose of the vaccine was found to be 82 percent effective against HPV. Almost half of teen girls have received at least one dose of HPV vaccine.
|
<urn:uuid:ea1c4718-78d2-4764-a51f-fd86807bcdbf>
|
CC-MAIN-2016-26
|
http://www.cancercenter.com/community/newsletter/article/hpv-to-vaccinate-or-not-to-vaccinate/
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783403825.35/warc/CC-MAIN-20160624155003-00047-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.972419 | 841 | 3.28125 | 3 |