| column | type | values observed |
|---|---|---|
| text | string | lengths 199 to 648k |
| id | string | length 47 |
| dump | string | 1 class |
| url | string | lengths 14 to 419 |
| file_path | string | lengths 139 to 140 |
| language | string | 1 class |
| language_score | float64 | 0.65 to 1 |
| token_count | int64 | 50 to 235k |
| score | float64 | 2.52 to 5.34 |
| int_score | int64 | 3 to 5 |
Moudry, Jan (Jr.); Konvalina, Petr and Kolarova, Pavlina (2007) Bioproduction in the Czech Republic. Lucrări Ştiinţifice, Seria Agronomie, 50 (2007), pp. 277-281.
Organic farming in the Czech Republic is located mainly in Less Favoured Areas (LFA), which often overlap with environmentally sensitive areas. Organic farming accounts for about 6% of the country's agricultural land, currently 281,535 ha, and more than 90% of this acreage is grassland, linked to non-dairy cattle breeding. Organic farming fulfils its environmental function well; its production function is fulfilled much less. The main product of organic farming in the Czech Republic is beef, but most of this beef is not certified as a bio-product. The volume of bio-products is insufficient and covers neither market nor export demand. This shortfall is also caused by the uniform range of bio-production and by insufficient processing capacity. At the same time, only 24% of organic farmers can place more than 50% of their production on the market as bio-products. An important limiting factor for expanding processing capacity is the zoohygienic and veterinary legislation governing animal production, which is very strict in comparison with the EU. The structure of arable farming within organic agriculture is constrained by the subsidy system, which currently does not give farmers enough incentive to produce bio-products and farm on arable land. Demand for bio-products outweighs the current supply, and the difference is covered by imports. In contrast to the older EU member states, where direct sale from the farm is the main distribution channel, bio-products in the Czech Republic are distributed in many ways, mainly through supermarkets and specialised shops in cities. Also in contrast to the older EU member states, most organic farms in the Czech Republic have relatively large acreage; these farms are focused primarily on production and far less on processing, which is one reason for the lower share of direct sales from farms.
|EPrint Type:||Journal paper|
|Keywords:||Organic farming, Bio-production, distribution, structure, manufacturing|
|Research affiliation:||Czech Republic > University of South Bohemia (JCU)|
|Deposited By:||Předotová, Pavla|
|Deposited On:||09 Jul 2012 07:03|
|Last Modified:||09 Jul 2012 07:03|
|Refereed:||Peer-reviewed and accepted|
|
<urn:uuid:795215aa-2f27-4ba7-abae-1679d402e9a5>
|
CC-MAIN-2016-26
|
http://orgprints.org/21030/
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783398516.82/warc/CC-MAIN-20160624154958-00057-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.928321 | 577 | 2.671875 | 3 |
The tree that you really don't want to grow
For some unknown reason, the tree that many people call "tree of heaven" or "paradise tree," after its Chinese local name, was brought to North America from the Far East in the late 1700s. The bearer must have meant well, for the ailanthus (Ailanthus altissima) isn't a terrible-looking tree. It grows straight, to 60' or 80' tall, and quickly.
You'll find it in a wide "natural" range that stretches from the Plains States to the East Coast and northern Michigan to Florida's panhandle. In fact, heat or cold doesn't hinder this species much. Nor poor soil. Nor city smog and smoke. Even dryness won't bother it. And the tree can survive submergence in salt water. So, there's little to stop its propagation (it spreads by seeds and sprouts from its deep root system). In many places, the ailanthus has become a real nuisance by aggressively crowding out native or ornamental species.
So why give this tree a bad rap? For one thing, it stinks. The blossoms of the male ailanthus produce a stench. The leaves and wood also have a formidable and unpleasant odor. And, it's not a very convincing shade tree. Nor does ailanthus live long-maybe 75 years. Lastly, ailanthus wood looks like white ash, but is weak and brittle.
Ailanthus' only claim to fame is that it is the tree referred to in the book and motion picture A Tree Grows in Brooklyn. Unfortunately, it really does.
|
<urn:uuid:e9b9eff2-7d1c-4ebd-8810-bc72405199c3>
|
CC-MAIN-2016-26
|
http://www.woodmagazine.com/materials-guide/lumber/wood-species-1/ailanthus/
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783408828.55/warc/CC-MAIN-20160624155008-00140-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.958186 | 339 | 2.9375 | 3 |
Corn silage with very high dry matter (40 percent or more) decreased starch digestibility and milk yield, according to an extensive research review by University of Wisconsin dairy scientists.
Their summary shows milk production was not significantly affected when whole-plant dry matter stayed between 30 percent and 40 percent. However, much drier material (black layer and beyond) did cause a significant drop in both milk and fat-corrected milk production.
This is particularly true if you don’t do a good job of processing and chopping as the maturity of the corn silage increases, says Randy Shaver, University of Wisconsin extension dairy specialist.
“Keep that in mind if part of your strategy is to go for a drier corn silage to try to increase starch content,” he says.
The data showed no significant differences on the fat and protein content of milk. There also was no real evidence to show that a delayed harvest reduces neutral detergent fiber digestibility. If anything, Shaver says, the drier silage increased NDF digestibility. However, as the kernel becomes harder, there is a significant depression in total tract starch digestibility.
Shaver and graduate student Luiz Ferraretto examined two dozen published research papers, looking specifically at the effects of different corn silage harvest practices on parameters like milk production and dry matter intake. They reported these findings in July at the American Dairy Science Association’s annual meeting.
|
<urn:uuid:26993c50-926f-495b-918d-c84f2957c89d>
|
CC-MAIN-2016-26
|
http://www.dairyherd.com/dairy-herd/research-track/Should-you-harvest-drier-corn-silage--167842435.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783398869.97/warc/CC-MAIN-20160624154958-00182-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.945694 | 296 | 2.625 | 3 |
Obesity and Anesthesia
Surgery for obese patients presents special challenges for the anesthesiologist. Even simple monitoring tasks, which are essential and life-preserving for all patients, can be a challenge when the patient is significantly overweight. Veins may be harder to locate, finding an appropriate blood pressure cuff to fit the patient’s arm may be a challenge, and medication dosing can vary with heavier individuals.
A patient is considered obese if his or her body mass index (BMI) is greater than 30. Illnesses associated with obesity, such as type 2 diabetes, obstructive sleep apnea, hypertension and cardiovascular disease, can have serious implications for patients requiring surgery and anesthesia.
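BMI itself is a simple calculation: weight in kilograms divided by the square of height in meters. As a rough illustration (not medical software, and with made-up example numbers), it can be computed like this:

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2

def is_obese(weight_kg: float, height_m: float) -> bool:
    # The threshold used in this article: BMI greater than 30.
    return bmi(weight_kg, height_m) > 30

# Example: 105 kg at 1.75 m gives a BMI of about 34.3, flagged as obese.
print(round(bmi(105, 1.75), 1), is_obese(105, 1.75))
```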
In the operating room, obesity-related changes in anatomy make airway management challenging. Airway obstruction due to obstructive sleep apnea can result in decreased airflow and oxygen in patients receiving even minimal amounts of sedation. Placement of a breathing tube (intubation) may require special equipment and techniques. Anesthesiologists have to anticipate these difficulties, prepare for them and counsel patients regarding potential complications.
Obese patients who are scheduled for surgery should discuss their condition and their options with their physician. In addition, the patient’s anesthesiologist can explain the specific risks associated with anesthesia and obesity. Please note, patients who are considering a weight-loss program prior to surgery should discuss it with their physician first to ensure it will not interfere with the course of procedure.
The resources in this section offer obesity-related material for specific situations and suggestions for partnering with your physician to guard your safety during surgery.
Other helpful information from the American Society of Anesthesiologists:
- Obesity and Pain Management During Labor and Delivery
- Anesthesiology and Weight-Loss Surgery
- Treating Obese Patients at Ambulatory Surgery Centers
- Obesity and Obstructive Sleep Apnea
|
<urn:uuid:91586669-29fb-474b-aa08-16f751e174d8>
|
CC-MAIN-2016-26
|
http://www.lifelinetomodernmedicine.com/Anesthesia-Topics/Obesity-and-Anesthesia.aspx
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396459.32/warc/CC-MAIN-20160624154956-00060-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.930195 | 380 | 2.609375 | 3 |
In the previous lesson we covered the basic settings on your camera. Today we're jumping into the fun stuff: manual mode. We'll learn the details about shutter speed, ISO, and aperture, as well as how those settings affect your photos.
If you're following along with your camera, be sure to set it into manual mode so you can access every setting we're going to discuss.
Aperture is often the most difficult concept for people to grasp when they're learning how their camera works, but it's pretty simple once you understand it. If you look at your lens, you can see the opening where light comes through. When you adjust your aperture settings, you'll see that opening get bigger and smaller. The larger the opening, or wider the aperture, the more light you let in with each exposure. The smaller the opening, or narrower the aperture, the less light you let in.

Why would you ever want a narrow aperture if a wider one lets in more light? Aside from those situations where you have too much light and want to let less of it in, narrowing the aperture means more of the photograph will appear to be in focus. For example, a narrow aperture is great for landscapes.

A wider aperture means less of the photograph will be in focus, which is something that's generally visually pleasing and isn't seen as a downside. If you've seen photographs with a subject in focus and beautiful blurred backgrounds, this is often the effect of a wide aperture (although that's not the only contributing factor; remember, telephoto lenses decrease depth of field as well). Using a wide aperture is generally considered the best method for taking in more light because the downside, less of the photograph being in focus, is often a desired result.
Aperture is represented in f-stops. A lower number, like f/1.8, denotes a wider aperture, and a higher number, like f/22, denotes a narrower aperture. Lenses are often marked with their widest possible aperture. If you see a lens that is a 50mm f/1.8, this means its widest aperture is f/1.8. The aperture can always be set to be more narrow, but it won't be able to go wider than f/1.8. Some lenses will have a range, such as f/3.5-5.6. You'll see this on zoom lenses, and it means that when the lens is zoomed out to the widest position it's f/3.5, but when it's zoomed in all the way it can only have an aperture as wide as f/5.6. The middle changes as well, so halfway through the zoom range you'll end up with a widest aperture of about f/4.5. An aperture range is common with less-expensive zoom lenses, but if you pay more you can get a constant maximum aperture throughout the zoom range.
That's pretty much all you need to know about aperture. The important thing to remember is that a wide aperture, like f/1.8, lets in more light and provides a shallow depth of field (meaning less of the photo appears in focus). A narrow aperture, like f/22, provides deeper focus but lets in less light. What aperture you should use depends on the situation and the type of lens you're using, so experiment to see what effects you get and you'll have a better idea of how your aperture setting affects your photographs.
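The light-gathering difference between f-stops follows directly from the area of the opening: for a given focal length, the area, and therefore the light admitted per unit of time, is proportional to 1/N² for f-number N. Here is a minimal sketch of that arithmetic (illustrative only, not from the original article):

```python
def relative_light(f_number: float, reference: float = 1.8) -> float:
    """Light admitted relative to a reference f-number.

    Light per unit time scales with aperture area, which is
    proportional to 1 / f_number**2 for a fixed focal length.
    """
    return (reference / f_number) ** 2

for n in (1.8, 2.8, 4, 5.6, 8, 22):
    print(f"f/{n}: {relative_light(n):.3f}x the light of f/1.8")
```

Running this shows, for example, that f/22 admits less than 1% of the light that f/1.8 does, which is why narrow apertures usually demand slower shutter speeds or higher ISOs.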
Photo by Digi1080p
When you press the shutter button on your camera and take a picture, the shutter opens and closes, exposing the sensor to light for a specific amount of time. This amount of time is known as your shutter speed. Generally it is a fraction of a second, and if you're capturing fast motion it needs to be around 1/300th of a second or faster. If you're not capturing any motion, you can sometimes get away with an exposure as long as 1/30th of a second. When you slow your shutter speed and lengthen the time the sensor is exposed to light, two important things happen.
First, the sensor is exposed to more light because it's been given more time. This is useful in low light situations. Second, the sensor is subject to more motion which causes motion blur. This can happen either because your subject is in motion or because you cannot hold the camera still. This is fine if you're photographing a landscape at night and the camera is placed on a tripod, as neither the camera nor your subject is going to move. On the other hand, slow shutter speeds pose a problem when you're shooting handheld and/or your subject is moving. This is why you wouldn't want a shutter speed any slower than 1/30th of a second when photographing handheld (unless you're known for your remarkably still hands).
In general, you want to use the fastest shutter speed you can but there are plenty of circumstances where you'd choose a slower shutter speed. Here are a few examples:
- You want motion blur for artistic purposes, such as blurring a flowing stream while keeping everything else sharp and un-blurred. To accomplish this you'd use a slow shutter speed like 1/30th of a second and use a narrow aperture to prevent yourself from overexposing the photograph. Note: This example is a good reason to use the Shutter Priority shooting mode discussed in the previous lesson.
- You want an overexposed and potentially blurred photograph for artistic purposes.
- You're shooting in low light and it's necessary.
- You're shooting in low light and it's not necessary, but you want to keep noise to a minimum. Therefore you set your ISO (film speed equivalent) to a low setting and you reduce your shutter speed to compensate (and let in more light).
These aren't the only reasons but a few common ones. The important thing to remember is a slow shutter speed means more light at the risk of motion blur. A fast shutter speed means low risk of motion blur while sacrificing light.
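Photographers usually count these trade-offs in "stops": each doubling of the exposure time adds one stop of light. A short sketch of that bookkeeping, using the article's 1/30th-of-a-second handheld figure as an assumed threshold (it is a rule of thumb, not a universal constant):

```python
import math

def stops_between(t_fast: float, t_slow: float) -> float:
    """Stops of light gained by moving from a faster to a slower shutter.

    Doubling the exposure time adds one stop, so the answer is the
    log base 2 of the ratio of the two times.
    """
    return math.log2(t_slow / t_fast)

def handheld_risky(t: float, limit: float = 1 / 30) -> bool:
    # Exposures longer than the limit risk motion blur when handheld.
    return t > limit

print(stops_between(1 / 300, 1 / 30))  # ~3.32 stops more light at 1/30 s
print(handheld_risky(1 / 15))          # True: slower than 1/30 s
```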
ISO is the digital equivalent (or approximation) of film speed. If you remember buying film for a regular camera, you'd get 100 or 200 for outdoors and 400 or 800 for indoors. The faster the film speed the more sensitive it is to light. All of this still applies to digital photography, but it's called an ISO rating instead.
Photo by CNET Australia
The advantage of a low ISO is that the light in a given exposure is more accurately represented. If you've seen photos at night, the lights often look like they're much brighter and bleeding into other areas of the photo. This is the result of a high ISO—a greater sensitivity to light. High ISOs are particularly useful for picking up more detail in a dark photograph without reducing the shutter speed or widening the aperture more than you want to, but it comes at a cost. In addition to lights being overly and unrealistically bright in your photos, high ISO settings are the biggest contributors to photographic noise. High-end cameras will pick up less noise at higher ISOs than low-end cameras, but the rule is always the same: the higher you increase your ISO, the more noise you get.
Most cameras will set the ISO automatically, even in manual mode. Generally you can stick with the same ISO setting if your lighting situation doesn't change, so it's good to get used to setting it yourself. That said, sometimes lighting changes enough in dark, indoor settings that letting the camera set it for you automatically can be helpful—even when shooting manually.
Combining the Settings
In manual mode you set everything yourself (except ISO, if you set it to automatic), so you have to think about all three of these settings before you take a photograph. The best thing you can do to make this easier on yourself and hasten the decision is to give priority to one of the settings by deciding what's most important. Do you want to ensure a shallow depth of field? If so, your priority is your aperture. Do you want the most accurate representation of light? Make ISO your priority. Do you want to prevent as much motion blur as possible? Concentrate on shutter speed first. Once you know your priority, all you need to do is set the other settings to whatever is necessary to expose the right amount of light to the photograph.
In manual mode your camera should let you know if you're over- or under-exposed by providing a little meter at the bottom (pictured to the left). The left is underexposed and the right is overexposed. Your goal is to get the pointer in the middle. Once you do that, snap your photo, and it should look just how you want it.
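One way to make the meter concrete is the exposure-value (EV) arithmetic light meters use: EV = log2(N²/t) − log2(ISO/100), where N is the f-number and t is the shutter time in seconds. The sketch below mimics a centered-needle meter; in a real camera the scene EV comes from the light sensor, so the value used here is just an assumed example:

```python
import math

def settings_ev(f_number: float, shutter_s: float, iso: int) -> float:
    """ISO-adjusted exposure value of the chosen settings.

    Higher values mean the settings admit less light (narrower
    aperture, faster shutter, lower ISO).
    """
    return math.log2(f_number ** 2 / shutter_s) - math.log2(iso / 100)

def meter_reading(f_number, shutter_s, iso, scene_ev):
    """Stops away from a centered meter: positive means overexposed,
    negative means underexposed, 0 is the pointer in the middle."""
    return scene_ev - settings_ev(f_number, shutter_s, iso)

# A bright sunny scene is roughly EV 15 at ISO 100 (the "sunny 16" rule).
print(meter_reading(16, 1 / 100, 100, scene_ev=15))   # ~+0.4: close to centered
print(meter_reading(1.8, 1 / 100, 100, scene_ev=15))  # ~+6.7: badly overexposed
```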
We're all done learning about how your camera works in all its modes. Tomorrow we're going to explore composition and technique. As always, if you're behind on our lessons, you can find everything you've missed and a PDF of all the lessons in the Basics of Photography Complete Guide.
Check out the full Lifehacker Night School series for more beginners lessons covering all sorts of topics.
|
<urn:uuid:d85dfe05-6bcc-490c-9f05-ceabd269d1db>
|
CC-MAIN-2016-26
|
http://lifehacker.com/5814173/basics-of-photography-your-cameras-manual-settings?tag=teach-yourself
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395546.12/warc/CC-MAIN-20160624154955-00019-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.954527 | 1,857 | 3.890625 | 4 |
Library Open Repository
Lynch, DD (1972) Introduced fish. Papers and Proceedings of the Royal Society of Tasmania, The La. pp. 107-112. ISSN 0080-4703
Available under University of Tasmania Standard License.
Sportsmen fish the rivers and lakes of Australia for brown trout, which is an introduced species. The acclimatisation of the trout into the Southern Hemisphere took place in 1864 when ova from England arrived safely in Tasmania. Atlantic salmon, Salmo salar Linn., and brown trout, Salmo trutta Linn., hatched from ova in the Salmon Ponds hatchery at Plenty. Nearly all the Australian and New Zealand brown trout caught today can be traced back to stocks once spawned in Tasmanian waters.
|Keywords:||Royal Society of Tasmania, RST, Van Diemens Land, natural history, science, ecology, taxonomy, botany, zoology, geology, geography, papers & proceedings, Australia, UTAS Library|
|Journal or Publication Title:||Papers and Proceedings of the Royal Society of Tasmania|
|Page Range:||pp. 107-112|
|Collections:||Royal Society Collection > Papers & Proceedings of the Royal Society of Tasmania|
|Additional Information:||Edited by M.R. Banks. - Copyright Royal Society of Tasmania|
|Date Deposited:||05 Aug 2012 05:38|
|Last Modified:||18 Nov 2014 04:39|
|Item Statistics:||View statistics for this item|
|
<urn:uuid:591de825-0aed-4c17-9ce4-b68aab7aa890>
|
CC-MAIN-2016-26
|
http://eprints.utas.edu.au/14604/
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783398628.62/warc/CC-MAIN-20160624154958-00008-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.716161 | 343 | 2.78125 | 3 |
TRAVERSE CITY - A small number of Asian carp might be enough to establish a population in the Great Lakes that eventually could pose a serious threat to other fish species and the region's economy, a Canadian scientist said earlier this week.
Researchers at the University of Waterloo in Ontario said in a paper published this month that under the right circumstances, as few as 10 Asian carp that find their way into one of the Great Lakes would have a 50-50 chance of becoming established. If 20 fish slip inside, the probability of gaining a foothold could jump to 75 percent, the study said.
The findings show how difficult it will be to shield the Great Lakes from the invasive fish over the long term - and the importance of developing rapid-response procedures that could limit their spread, said Kim Cuddington, an ecologist and the study leader.
This June 2012 file photo shows Asian carp, jolted by an electric current from a research boat, jumping from the Illinois River near Havana, Ill. Canadian scientists say a small number of Asian carp might be enough to establish a population in the Great Lakes, where they would pose serious threats to other fish and the region’s economy. (AP file photo)
It's probably "only a matter of time before the population migrates to the Great Lakes," Cuddington said. "We need to start thinking about how to get rid of a spawning population. Once you have large breeding populations in a couple of locations, I don't think you'll get them out."
Tom Goniea, fisheries biologist with the Michigan Department of Natural Resources, said a bigger potential for carp invasion is in places such as Saginaw Bay, Thunder Bay and the western shore of Lake Erie as opposed to Lake Superior.
Asian carp, he said, thrive in high-productivity waters with agricultural runoff. That type of aquatic habitat is not common in Lake Superior.
"When you get up in the Upper Peninsula, you have standard coniferous forests without much in the way of agricutural influence," Goniea said.
Asian carp, he said, should be expected to turn up elsewhere first.
"We would definitely anticipate seeing them in lower lakes before we see them in Lake Superior," Goniea said.
However, Goniea said the DNR still is asking the public to be vigilant for invasive species. People can send photos to the DNR, which will identify the species.
Bighead and silver carp were imported from Asia to the southern U.S. in 1970s to control algae in fish ponds and sewage lagoons. They escaped and have infested most of the Mississippi River and many of its tributaries, including the Illinois and Wabash rivers, which could provide linkages to Lake Michigan and Lake Erie.
Authorities have spent nearly $200 million on electric barriers and other measures to block the carp's paths.
According to the DNR website, bighead carp can reach up to 100 pounds. Juveniles can consume up to 140 percent of their body weight daily while adults can eat up to 40 percent of their body weight daily. Although smaller than bigheads, silver carp can jump up to five feet out of the water when disturbed by vibrations often caused by boat motors, creating a safety hazard.
Bighead and silver carp can spawn multiple times a year, thus quickly displacing native species.
Scientists differ on how many would be needed to form a growing Great Lakes population. Males and females would have to find each other in tributary rivers with fast, turbulent currents where fertilized eggs would stay afloat long enough to hatch. Cold temperatures and lack of food could hinder growth.
Because of such obstacles, some experts say it could take hundreds of carp reaching the lakes to become established. In their paper, Cuddington and her colleagues said they developed mathematical models that suggest far fewer fish might be needed.
Each of the lakes has about 10 rivers suitable for spawning - a favorable number for males and females to find one another, they said.
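The article doesn't reproduce the Waterloo model itself, but the flavor of the calculation can be illustrated with a toy Monte Carlo: scatter N fish at random among 10 spawning rivers, assume each fish is male or female with equal probability, and count how often at least one river ends up holding both sexes. This sketch ignores maturation, mortality, and the group-spawning behavior Chapman raises below, so its numbers are illustrative only:

```python
import random

def establishment_chance(n_fish: int, n_rivers: int = 10,
                         trials: int = 20_000) -> float:
    """Fraction of trials where some river holds at least one male
    and one female, a crude stand-in for a chance to spawn."""
    hits = 0
    for _ in range(trials):
        males = [0] * n_rivers
        females = [0] * n_rivers
        for _ in range(n_fish):
            river = random.randrange(n_rivers)   # each fish picks a river
            if random.random() < 0.5:
                males[river] += 1
            else:
                females[river] += 1
        if any(m and f for m, f in zip(males, females)):
            hits += 1
    return hits / trials

for n in (10, 20):
    print(n, "fish:", round(establishment_chance(n), 2))
```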
Duane Chapman, an Asian carp expert with the U.S. Geological Survey who wasn't involved with the study, said its findings are based on the unproven assumption that a few carp - perhaps just one male and one female - would mate. Asian carp tend to spawn in large groups, he said.
"We know there can be hundreds of fish spawning in the same place at the same time, but we don't know if there's a minimum number," Chapman said.
Another crucial factor is how quickly the carp would reach sexual maturity in the lakes, Cuddington said. If they mature and spawn by age 3, it could take 20 years to establish a moderate-sized population and twice as long for the population to become very large. But the carp could take longer to develop in cooler waters, requiring a century or more to spread widely.
Christie Bleck can be reached at 906-228-2500, ext. 250.
|
<urn:uuid:c5b6f4f9-77ee-4c67-b62a-7425e540dcea>
|
CC-MAIN-2016-26
|
http://www.miningjournal.net/page/content.detail/id/590826.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396222.11/warc/CC-MAIN-20160624154956-00169-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.968576 | 1,014 | 2.984375 | 3 |
By Paula Trotter
For Summer Camps
Deeply rooted in the past are lessons that can help children prepare for the future.
Reflecting upon and experiencing our agricultural heritage in a fun setting such as farm camp can subtly instil in children an appreciation for nature and modern-day conveniences.
“Learning about pioneer life gives kids an appreciation for what they have now,” says Lindsie Bruns, a history buff who runs the Young Pioneers Summer Camp.
“But it also offers lessons of hard work and how to make things for yourself, so you don’t have to depend on outside sources for food and toys.”
Geared to children between the ages of six and 12, the Young Pioneers Summer Camp is held at Patterson Springs Farm, a working farm southwest of Calgary that has been in Bruns’ family since 1902.
The program is influenced by the Laura Ingalls Wilder classic Little House on the Prairie — one of Bruns’ favourite books — and sees the children dress in costumes that mirror garments that were worn during the turn of century.
Most of the children who attend the camp are urban dwellers who don’t really understand that their food comes from somewhere besides the grocery store, Bruns says.
So campers plant and water seeds, then patiently watch as their bounty grows throughout the duration of the weeklong program.
They pick berries and vegetables from the garden and learn how to make bread, butter, cheese and even churn homemade ice cream.
They also make pioneer crafts such as corn-husk dolls.
“Kids really love it,” Bruns says. “They’re amazed that the can entertain themselves without TV, Internet or video games.”
They also feed chickens and rabbits daily and learn about the mules, goats and honeybees that call the Patterson Springs Farm home.
“The connection between kids and animals is so precious,” says Ruth Ludwig, director of fun at Butterfield Acres.
The popular farm located just beyond the northwestern boundary of Calgary offers a number of summer camps that focus heavily on interacting with an array of animals – bunnies, yaks, sheep, ponies and horses, emus, ducks, cows, pigs, alpacas and more.
All of the programs feature age appropriate activities for kids between the ages of three and 14, including games, stories and crafts that teach children about different animals.
Older campers are given more responsibilities, such as animal care.
All of this teaches children how to accept, respect and appreciate the animals — lessons that translate into getting along well with other people, Ludwig says.
One of the greatest lessons farm camps have to offer, however, is to care for this vast world that we share with countless other living beings.
“It’s important to start at an early age to develop an appreciation for wonder and joy,” Ludwig says.
“It takes one away from being self-focused and instead gives an appreciation for the other parts of the web of life.”
|
<urn:uuid:49ca39fb-2af2-4b72-aebc-86b5320c6b1a>
|
CC-MAIN-2016-26
|
http://www.calgaryherald.com/travel/Pioneer+camps+give+farms+good+name/6512044/story.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395992.75/warc/CC-MAIN-20160624154955-00157-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.955232 | 641 | 2.671875 | 3 |
Joseph Priestley Static

An electric machine consists of the combination of two materials, which when rubbed together produce static electricity, and of a third material or object which acts as a collector for the charges.

The first devices for producing electricity were very simple. The ancient Greeks discovered the strange effects of amber rubbed with fur and other material. In the 17th century, scientists used sticks of resin or sealing wax, glass tubes and other objects. By the time of Benjamin Franklin (Franklin became interested in electricity about 1745), large glass tubes about three feet long and from an inch to an inch and a half in diameter were popular; these were rubbed either with a dry hand or with brown paper dried at the fire.

There are two major categories of electrical machines: friction and influence. A friction machine generates static electricity by direct physical contact; the glass sphere, cylinder or plate is rubbed by a pad as it passes by. Influence machines, on the other hand, have no physical contact. The charge is produced by electrostatic induction, usually between two or more glass plates.

All through the 18th and 19th centuries there was tremendous interest in electricity, and scientists made major advances. Prior to Faraday's discovery of electromagnetic induction in 1831, however, the only way to generate high-voltage electricity was via a static generator such as these.

Rotating the wheel created a static charge, which was available on the "prime collector" (the brass ball or cylinder at the top or front of the device). The charge could then be stored in a Leyden jar or measured.

Below are some examples of the electrical machines in my collection:
|
<urn:uuid:8562aef1-7b29-43f3-a978-9d2dab74b7e1>
|
CC-MAIN-2016-26
|
http://www.sparkmuseum.com/FRICTION.HTM
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783400572.45/warc/CC-MAIN-20160624155000-00032-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.920084 | 399 | 3.546875 | 4 |
Date: December 3, 2010
Source: Computer World
Abstract: Security expert Bruce Schneier has called for governments to establish ‘hotlines’ between their cyber commands, much like the those between nuclear commands, to help them battle against cyber attacks.
Cyber security is high on the national agenda, and is regarded as a top threat to the UK's security. It is also a top concern for other nations around the world. Last month, the EU announced plans to set up a cybercrime centre by 2013, and it agreed with the US to set up a working group on cybersecurity. Meanwhile, NATO also adopted its Strategic Concept Charter, which outlines plans to develop new capabilities to combat cyber attacks on military networks.
Schneier, writing in the Financial Times, said that a hotline between the world’s cyber commands would “at least allow governments to talk to each other, rather than guess where an attack came from.”
He said that this would be a starting point and that more importantly, governments need to establish cyberwar “treaties”.
“These could stipulate a no-first-use policy, outlaw unaimed weapons, or mandate weapons that self-destruct at the end of hostilities. The Geneva Conventions need to be updated too,” he said.
Another suggestion was to declare that international banking was off-limits, but Schneier added: “Whatever the specifics, such agreements are badly needed.”
Although he admitted that enforcing such agreements would be difficult, Schneier said that governments had to at least make an effort to do so.
“It’s not too late to reverse the cyber arms race currently under way. Otherwise, it is only a matter of time before something big happens: perhaps by the rash actions of a low-level military officer, perhaps by a non-state actor, perhaps by accident,” he warned.

Earlier this week, the UK government revealed that selling GCHQ’s expertise is one of the options it is considering for bridging the gap between the public and private sectors' intelligence capabilities, in order to strengthen the UK against cyber attacks (Computer World, 2010).
Title: Britain In Talks On Cybersecurity Hotline With China And Russia
Date: October 4, 2012
The discussions are at an early stage but they reflect anxiety from all sides that a calamity in cyberspace, whether deliberate or accidental, could have devastating consequences unless there is a quick and reliable way for senior officials to reach each other.
The US has been talking to the Chinese about a similar arrangement and the ideas will be among several raised at an international conference on cybersecurity in Hungary on Thursday.
The event will involve 600 diplomats from up to 50 countries and is a follow-up to a conference in London last year. One of the aims of the negotiations is to agree rules of behaviour in cyberspace at a time when states have become aware of the potential to attack, steal from and disrupt their enemies online.
China and Russia have been arguing for a more restrictive, state-controlled future for the internet and for formal arms-control-type treaties to govern what countries can and cannot do.
But they have been challenged by European countries and the US. The UK has said there is no need for treaties and that controls on the internet would restrict economic growth and freedom of speech.
Some progress has been made in reconciling the two positions, diplomats say, but the gulf between them is still huge, and the negotiations are continuing at snail's pace.
With the cyber arena evolving so quickly, and with the US and the UK saying cybertheft now represents a genuine threat to western economies and national security, the need for a hotline is pressing.
"At the moment, we don't really have sufficient information-sharing arrangements with some countries such as Chinaand the Chinese computer emergency response team," said a senior Foreign Office official.
"There isn't a form of crisis communication. If we can build that sort of partnership and relationship then the normative framework develops around that. If you ask for assistance, you get a response. That develops into an obligation to assist. One isn't naive about that, but I don't think the Chinese or the Russians enjoy uncertainty, not knowing who to turn to, who to talk to."
The official said the existing protocols and procedures were not robust enough for the type of emergencies that could materialise in cyberspace. "In theory, there are lists of people who to call, but I think they need to be tested and relied upon."
The foreign secretary, William Hague, and the Cabinet Office minister, Francis Maude, will be in Budapest for the two-day conference. They will announce that the UK is to establish a new £2m cyberhub at one of the country's leading universities, which will provide guidance to the government and companies about where to invest money for initiatives in cyberspace abroad. The money will come from the £650m set aside for cybersecurity in the strategic defence and security review.
The official said talks with China were slow going and that there had not been any fundamental shift in Beijing's position. "Through initiatives such as its draft code of conduct, [China] has promoted a vision of cyberspace which has got much more sovereignty and government involvement in it. They have got particular points that they want to get across to the international community" (Guardian, 2012).
|
<urn:uuid:9f860c63-d174-4857-911e-fb2bb999c1f4>
|
CC-MAIN-2016-26
|
https://sites.google.com/site/trutherswitzerland/cyber-terror/Cyber-Terror-Hotline
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783392159.3/warc/CC-MAIN-20160624154952-00054-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.971344 | 1,109 | 2.90625 | 3 |
Examples of blandish in a sentence
<blandished her into doing their work for them by complimenting her shamelessly>
Did You Know?
The word blandish has been a part of the English language since at least the 14th century with virtually no change in its meaning. It ultimately derives from blandus, a Latin word meaning "mild" or "flattering." One of the earliest known uses of blandish can be found in the sacred writings of Richard Rolle de Hampole, an English hermit and mystic, who cautioned against "the dragon that blandishes with the head and smites with the tail." Although blandish might not exactly be suggestive of dullness, it was the "mild" sense of blandus that gave us our adjective bland, which has a lesser-known sense meaning "smooth and soothing in manner or quality."
Origin and Etymology of blandish
Middle English, from Anglo-French blandiss-, stem of blandir, from Latin blandiri, from blandus mild, flattering
First Known Use: 14th century
Synonym Discussion of blandish
|
<urn:uuid:f1ecf04d-8fb5-4e0e-b4bd-4447d55a1fdb>
|
CC-MAIN-2016-26
|
http://www.merriam-webster.com/dictionary/blandisher
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783398209.20/warc/CC-MAIN-20160624154958-00069-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.945701 | 261 | 2.65625 | 3 |
New high-performance cathode materials for low-temperature solid oxide fuel cells
16 August 2013
|Peak power densities of cells with LnBa0.5Sr0.5Co1.5Fe0.5O5+δ-GDC (Ln = Pr and Nd) cathode. Credit: Choi et al.|
Researchers from Ulsan National Institute of Science and Technology (UNIST) (S. Korea), Georgia Institute of Technology, and Dong-Eui University (S. Korea) report the development of new efficient and robust cathode materials for low-temperature solid oxide fuel cells (SOFCs) in the open access Scientific Reports.
Conventional solid oxide fuel cells operate as high as 950 °C to run effectively. Test cells based on these new cathode materials demonstrated peak power densities of ~2.2 W cm−2 at 600°C. (The power density of a commercialized low-temperature SOFC system developed by researchers at the University of Maryland and Redox Power is also more than 2W cm-2, earlier post.)
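For orientation, peak power density relates to cell resistance through a simple linear polarization model: if cell voltage falls as V(j) = V_oc − R·j, where R is the area-specific resistance, then power density P(j) = V(j)·j peaks at j = V_oc/(2R) with P_max = V_oc²/(4R). The sketch below uses assumed values chosen only to show the arithmetic; they are not measurements from the paper:

```python
def peak_power_density(v_oc: float, asr: float) -> float:
    """Peak power density (W/cm^2) of a cell with linear V(j) = v_oc - asr * j.

    P(j) = v_oc * j - asr * j**2 is maximized at j = v_oc / (2 * asr),
    where it equals v_oc**2 / (4 * asr).
    """
    return v_oc ** 2 / (4 * asr)

# Illustrative only: an open-circuit voltage of ~1.05 V and an
# area-specific resistance of ~0.125 ohm*cm^2 would give ~2.2 W/cm^2.
print(round(peak_power_density(1.05, 0.125), 2))
```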
The demand for clean and sustainable energy has stimulated great interest in fuel cells, which allows direct conversion of chemical fuels to electricity. Among all types of fuel cells, solid oxide fuel cells (SOFCs) have the potential to offer the highest energy efficiency and excellent fuel flexibility. To make SOFC technology affordable, however, the operating temperature must be further reduced so that much less expensive materials may be used for other cell components and balance of plant.
Unfortunately, SOFC performance decreases rapidly as the operating temperature is reduced, especially the cathode for oxygen reduction reaction (ORR). While La1−xSrxMnO3 (LSM) is widely used as the cathode material for yttria-stabilized zirconia (YSZ)-based SOFCs because of its excellent compatibility with YSZ electrolyte and other cell components, the cathodic polarization loss at lower temperatures is unacceptable. Accordingly, extensive efforts have been devoted to the search for more active cathode materials toward ORR at lower temperatures.
...Here we report a synergistic effect of co-doping (Sr on A-site and Fe on B-site) in a cation-ordered double-perovskite, LnBaCo2O5+δ, to create crystalline channels for fast oxygen ion diffusion and rapid surface oxygen exchange while maintaining the compatibility with the electrolytes for IT-SOFCs and the durability under operating conditions.—Choi et al.
Co-doping with Sr and Fe succeeded in yielding better performance than existing materials at lower operating temperatures. The class of cation-ordered, double-perovskite compounds displayed fast oxygen ion diffusion through pore channels and high catalytic activity toward the ORR at low temperatures while maintaining excellent compatibility with electrolyte and good stability under typical fuel cell operating conditions.
DFT analysis using simplified models suggested that the most attractive properties of these materials are the pore channels in the [PrO] and [CoO] planes that could provide fast paths for oxygen transport, which in turn accelerates the kinetics of surface oxygen exchange.
The hardest part of this research was finding optimum composition of Sr and Fe for the best performance and robustness. Previously various researches trying to dope Sr to perovskite structure had been made by many other groups. But none of them was successful for the better performance at the low operating temperature.—Professor Guntae Kim, UNIST team leader
The researchers suggested that a more detailed understanding of the mechanistic details may help the rational design of better double-perovskite cathode materials for a new generation of high-performance SOFCs with enhanced durability.
The research was supported by World Class University (WCU) program and Mid-career Researcher Program through the National Research Foundation of Korea funded by the Ministry of Education, Science and Technology and the New & Renewable Energy of the Korea Institute of Energy Technology Evaluation and Planning grant funded by the Ministry of Knowledge Economy.
Sihyuk Choi, Seonyoung Yoo, Jiyoun Kim, Seonhye Park, Areum Jun, Sivaprakash Sengodan, Junyoung Kim, Jeeyoung Shin, Hu Young Jeong, YongMan Choi, Guntae Kim & Meilin Liu (2013) Highly efficient and robust cathode materials for low-temperature solid oxide fuel cells: PrBa0.5Sr0.5Co2−xFexO5+δ. Scientific Reports 3, Article number: 2426 doi: 10.1038/srep02426
|
<urn:uuid:2abde57e-3af1-44b9-bbdf-fd0ace70c610>
|
CC-MAIN-2016-26
|
http://www.greencarcongress.com/2013/08/20130816-choi.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783398628.62/warc/CC-MAIN-20160624154958-00121-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.899922 | 1,012 | 2.53125 | 3 |
SWORD ("ḥereb"; "baraḳ" [poetic form] in Job xx. 25; Greek, μάχαιπα, ῥομφαία, ξίφος):
The sword hung at the hip from a sword-belt (I Sam. xvii. 39; xxv. 13; II Sam. xx. 8), probably on the left side, Judges iii. 16, 21, notwithstanding. It was kept in a sheath ("ta'ar," I Sam. xvii. 51; "nadan," I Chron. xxi. 27; θέκη, John xviii. 11), whence the phrases "heriḳ," "shalaf," or "pataḥ ḥereb" (= "to draw the sword"). Some swords were double-edged (comp. "ḥereb welah shene piyot," Judges iii. 16; Prov. v. 4), and were used for cutting (I Sam. xxxi. 4; II Sam. ii. 16; I Chron. x. 4) and thrusting (comp. "hikkah ba-ḥereb" and I Kings iii. 24). There are no detailed descriptions of the various kinds of swords used by the Israelites, but they probably resembled those of Assyria and Egypt, being sometimes straight and sometimes curved, and either long or dagger-shaped and short. The existence of the straight variety is proved by the fact that swords were used for thrusting; and is also implied in the phrase "nafal ba-ḥereb," used of those who commit suicide by this weapon (I Sam. xxxi. 4 et seq.). The story of Ehud, who thrust his sword, haft ("niẓẓab"), and all into Eglon's belly (Judges iii. 16-22), shows that short, dagger-like swords were used.
The blade ("lahab") of the double-edged sword was probably straight, and this portion of the weapon seems generally to have been made of iron, sometimes (but rarely) of bronze (comp. I Sam. xiii. 19; Joel iii. 10; Micah iv. 3; Isa. ii. 4); this was also the custom among the Egyptians, as the blue blades in the paintings indicate. The hilt of the sword was made probably of a different material, in accordance with Egyptian and Assyrian usage; probably the hilt afforded, sometimes, an opportunity for artistic workmanship. The word "mekerah" in Gen. xlix. 5 has frequently been compared with μάχαιπα and rendered "sword," but this explanation is very doubtful. Originally μάχαιπα denoted the Lacedemonian, slightly curved sword used for cutting, having a knife-like blade, a blunt back, and a point turning up toward the latter. The same name was given to any curved saber, in contradistinction to ξίφος (the dagger-like sword).
In the Roman period the Jews adopted the short dirk ("sica") used by the Romans, and especially by the gladiators. This weapon, which was concealed in the garments, and which was especially affected by the Sicarii, who derived their name from it (Josephus, "Ant." xx. 8, § 10; "B. J." ii. 13, § 3), was only a foot in length, and somewhat curved.
|
<urn:uuid:c984f7bf-5e89-4319-a207-bb314b309303>
|
CC-MAIN-2016-26
|
http://www.jewishencyclopedia.com/articles/14150-sword
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397865.91/warc/CC-MAIN-20160624154957-00015-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.966028 | 760 | 3.328125 | 3 |
When your friend plans to go to a party and you talk him out of it, this is an example of a situation where you dissuade him.
- to turn (a person) aside (from a course, etc.) by persuasion or advice
- Obs. to advise against (an action)
Origin of dissuade: Classical Latin dissuadere; from dis-, away, from + suadere, to persuade: see sweet
transitive verb dis·suad·ed, dis·suad·ing, dis·suades
Origin of dissuade: Latin dissuadēre: dis-, dis- + suadēre, to advise; see swad- in Indo-European roots.
(third-person singular simple present dissuades, present participle dissuading, simple past and past participle dissuaded)
- To convince not to try or do.
|
<urn:uuid:2151a7ef-c298-47df-b60a-5939e37f68fa>
|
CC-MAIN-2016-26
|
http://www.yourdictionary.com/dissuade
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783403825.35/warc/CC-MAIN-20160624155003-00147-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.893078 | 189 | 3.171875 | 3 |
Key: "S:" = Show Synset (semantic) relations, "W:" = Show Word (lexical) relations
Display options for sense: (gloss) "an example sentence"
- S: (n) umpire, ump (an official at a baseball game)
- S: (n) arbiter, arbitrator, umpire (someone chosen to judge and decide a disputed issue) "the critic was considered to be an arbiter of modern literature"; "the arbitrator's authority derived from the consent of the disputants"; "an umpire was appointed to settle the tax case"
- S: (v) referee, umpire (be a referee or umpire in a sports competition)
|
<urn:uuid:ee9ea4f2-1269-404b-82ba-7b442bd54e07>
|
CC-MAIN-2016-26
|
http://wordnetweb.princeton.edu/perl/webwn?s=UMPIRE
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397213.30/warc/CC-MAIN-20160624154957-00143-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.94353 | 152 | 2.828125 | 3 |
“Walking up a long hill to ease my mare, I was joined by a poor woman, who complained of the times, and that it was a sad country. Demanding her reasons, she said her husband had but a morsel of land, one cow, and a poor little horse, yet they had a franchar (42 pounds) of wheat and three chickens to pay as a quitrent to one seigneur; and four franchar of oats, one chicken, and one franc to pay to another, besides very heavy tailles (income tax) and other taxes. She had seven children and the cow's milk helped to make the soup. 'But why, instead of a horse, do not you keep another cow?' Oh, her husband could not carry his produce so well without a horse; and donkeys are little use in the country. It was said, at present, that something was to be done by some great folks for such poor ones, but she did not know who nor how, but God send us better, 'car les tailles et les droits nous ecrasent' (for the taxes are crushing us).
“This woman, at no great distance, might have been taken for sixty or seventy, her figure was so bent and her face so furrowed and hardened by labor, but she said she was only twenty-eight. An Englishman who has not traveled cannot imagine the figure made by infinitely the greater part of the countrywomen in France; it speaks, at the first sight, hard and severe labor. I am inclined to think that they work harder than the men, and this, united with the more miserable labor of bringing a new race of slaves into the world, destroys absolutely all symmetry of person and every feminine appearance.”— Arthur Young, Travels in France during the Years 1787, 1788, and 1789
The French Revolution, along with the Industrial Revolution, has probably done more than any other revolution to shape the modern world. Not only did it transform Europe politically, but also, thanks to Europe's industries and overseas empires, the French Revolution's ideas of liberalism and nationalism have permeated nearly every revolution across the globe since 1945. In addition to the intense human suffering as described above, its origins have deep historic and geographic roots, providing the need, means, and justification for building the absolute monarchy of the Bourbon Dynasty which eventually helped trigger the revolution.
The need for absolute monarchy came partly from France's continental position in the midst of hostile powers. The Hundred Years War (1337-1453) and then the series of wars with the Hapsburg powers to the south, east, and north (c.1500-1659) provided a powerful impetus to build a strong centralized state. Likewise, the French wars of Religion (1562-98) underscored the need for a strong monarchy to safeguard the public peace. The means for building a monarchy largely came from the rise of towns and a rich middle class. They provided French kings with the funds to maintain professional armies and bureaucracies that could establish tighter control over France. Justification for absolute monarchy was based on the medieval custom of anointing new kings with oil to signify God's favor. This was the basis for the doctrine of Divine Right of Kings. In the late 1600's, all these factors contributed to the rise of absolutism in France.
Louis XIV (1643-1715) is especially associated with the absolute monarchy, and he did make France the most emulated and feared state in Europe, but at a price. Louis' wars and extravagant court at Versailles bled France white and left it heavily in debt. Louis' successors, Louis XV (1715-74) and Louis XVI (1774-89), were weak disinterested rulers who merely added to France's problems through their neglect. Their reigns saw rising corruption and three ruinously expensive wars that plunged France further into debt and ruined its reputation. Along with debt, the monarchy's weakened condition led to two other problems: the spread of revolutionary ideas and the resurgence of the power of the nobles.
Although the French kings were supposedly absolute rulers, they rarely had the will to censor the philosophes' new ideas on liberty and democracy. Besides, in the spirit of the Enlightenment, they were supposedly "enlightened despots" who should tolerate, if not actually believe, the philosophes' ideas. As a result, the ideas of Voltaire, Rousseau, and Montesquieu on liberty and democracy spread through educated society.
Second, France saw a resurgence of the power of the nobles who still held the top offices and were trying to revive and expand old feudal privileges. By this time most French peasants were free and as many as 30% owned their own land, but they still owed such feudal dues and services as the corvee (forced labor on local roads and bridges) and captaineries (the right of nobles to hunt in the peasants' fields, regardless of the damage they did to the crops). Naturally, these infuriated the peasants. The middle class likewise resented their inferior social position, but were also jealous of the nobles and eagerly bought noble titles from the king who was always in need of quick cash. This diverted money from the business sector to much less productive pursuits and contributed to economic stagnation.
Besides the Royal debt, France also had economic problems emanating from two main sources. First of all, while the French middle class was sinking its money into empty noble titles, the English middle class was investing in new business and technology. For example, by the French Revolution, England had 200 waterframes, an advanced kind of waterwheel. France, with three times the population of England, had only eight. The result was the Industrial Revolution in England, which flooded French markets with cheap British goods, causing business failures and unemployment in France. Second, a combination of the unfair tax load on the peasants (which stifled initiative to produce more), outdated agricultural techniques, and bad weather led to a series of famines and food shortages in the 1780's.
All these factors (intellectual dissent, an outdated and unjust feudal social order, and a stagnant economy) created growing dissent and reached a breaking point in 1789. It was then that Louis XVI called the Estates General for the first time since 1614. What he wanted was more taxes. What he got was revolution.
|
<urn:uuid:1a856109-6be4-4708-b8fe-fdadafbd7fa2>
|
CC-MAIN-2016-26
|
http://www.flowofhistory.com/units/eme/16/FC104
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396222.11/warc/CC-MAIN-20160624154956-00093-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.970815 | 1,311 | 2.671875 | 3 |
Hapless motorists and misguided ramblers need never be lost again if research being conducted at Glamorgan University comes to fruition. A team at the university is carrying out research into user positioning technology that addresses some of the weaknesses of conventional technology such as the satellite-based Global Positioning System, as used by in-car navigation systems.
George Taylor, the professor heading up the research team at Glamorgan University, said there are major issues over the accuracy and reliability of such data. A key problem is that a GPS receiver has to maintain "line of sight" with the satellites, and barriers such as buildings and trees often get in the way. Another issue is how many satellites are in sight and where they are - the more spread out they are, the more accurate the reading.
"Losing contact with satellites is a key problem. Your position can jump around quite a lot," said Taylor. "We are looking at ways of maintaining a position when that happens. Also, GPS is often 10m-15m out, and that could place you in the wrong street."
The basic idea behind Glamorgan University's research is to take GPS and make it more reliable by combining it with other technology such as GIS (geographic information systems), which translate geographic data such as street addresses into a map location. A key focus is comparing and matching GPS data with the Ordnance Survey's detailed digital mapping database, which Taylor calls "a model of the UK inside a computer".
Taylor's team aims to create a standalone system that uses digital mapping data and will eventually function without GPS, while providing more reliable, continuous and accurate positioning of devices, including mobile phones and personal digital assistants. However, the initial focus is on in-car systems.
GPS and GIS can show to within 15m or so where the device is. By matching this information with the digital mapping data, the system can more accurately predict what road the user is on. The university is also using artificial neural network techniques for this purpose. These techniques simulate the type of processes that go on in the brain and can be trained, using set parameters and thousands of scenarios, to help predict where the user is and can show how confident the system is of its conclusions.
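The geometric core of map matching can be sketched simply: project each GPS fix onto nearby road segments from the digital map and snap to the closest one; the neural network then helps decide between candidates when several roads fall within the GPS error circle. A minimal illustration in planar coordinates (such as Ordnance Survey eastings and northings), with hypothetical road data:

```python
def closest_point_on_segment(p, a, b):
    """Project point p onto the segment from a to b; points are (x, y)."""
    ax, ay = a
    bx, by = b
    px, py = p
    dx, dy = bx - ax, by - ay
    seg_len_sq = dx * dx + dy * dy
    if seg_len_sq == 0:
        return a
    t = ((px - ax) * dx + (py - ay) * dy) / seg_len_sq
    t = max(0.0, min(1.0, t))            # clamp to the segment ends
    return (ax + t * dx, ay + t * dy)

def snap_to_road(fix, segments):
    """Return the nearest road segment and the snapped point on it."""
    def dist_sq(q):
        return (q[0] - fix[0]) ** 2 + (q[1] - fix[1]) ** 2
    return min(
        ((seg, closest_point_on_segment(fix, *seg)) for seg in segments),
        key=lambda pair: dist_sq(pair[1]),
    )

# Hypothetical map: two street centerlines, and a fix 12 m off the first.
roads = [((0, 0), (100, 0)), ((0, 30), (100, 30))]
print(snap_to_road((40, 12), roads))     # snaps to the y = 0 street
```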
Preliminary tests using this technique demonstrated a 10% improvement in accuracy and the university now has a full-time research student conducting work in this area. "It is pretty exciting really - I think we will get some good results," said Taylor.
The research team is also using geographical information to reduce the number of satellites required to calculate location with global navigation satellite systems. Taylor explained that to get a proper 3D reading with GPS you need to be in the line of sight of four satellites, or three for a 2D image. The university is looking at ways of cutting down on this number, for example by using the centre of the earth as a replacement for a satellite to give height readings.
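The reason four satellites are normally needed is that each range measurement constrains the receiver to a sphere around one satellite; three spheres fix a 3D point, and the fourth resolves the receiver's clock error. The sketch below shows only the geometric part, solving position from known ranges by Gauss-Newton iteration with the clock bias omitted; the satellite positions and ranges are made-up numbers:

```python
import numpy as np

def locate(sat_positions, ranges, guess=(0.0, 0.0, 0.0), iters=10):
    """Estimate a 3D position from ranges to known satellite positions.

    Gauss-Newton on the residuals r_i = |x - s_i| - range_i. With the
    clock bias ignored, three satellites suffice in principle; extra
    measurements simply improve the least-squares fit.
    """
    x = np.asarray(guess, dtype=float)
    sats = np.asarray(sat_positions, dtype=float)
    for _ in range(iters):
        diffs = x - sats                       # vectors from satellites to x
        dists = np.linalg.norm(diffs, axis=1)
        residuals = dists - ranges
        jacobian = diffs / dists[:, None]      # unit line-of-sight vectors
        step, *_ = np.linalg.lstsq(jacobian, residuals, rcond=None)
        x -= step
    return x

# Hypothetical satellites and a receiver actually at (1, 2, 3).
sats = [(10, 0, 20), (0, 10, 20), (-10, -10, 20), (5, 5, 25)]
true = np.array([1.0, 2.0, 3.0])
ranges = [np.linalg.norm(true - np.array(s)) for s in sats]
print(locate(sats, ranges))                    # converges near (1, 2, 3)
```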
One business application of positioning techniques is location-based services. Taylor believes there is "a fundamental need" for such services. However, while location-based services have been hotly tipped as an emerging technology for the past few years, the substance has not yet matched the hype. Short of trials involving shoppers being "zapped" special offers on their mobiles as they pass stores in shopping centres such as Bluewater, very little has happened.
A key issue is accuracy. "You only have to be 5m out and you get the wrong information," said Taylor. "People just switch the service off because it is rubbish." One problem is that shopping malls have roofs, which block GPS signals.
Taylor predicted there will be a myriad of potential applications for the technology Glamorgan University is developing. One interested party is the Welsh Museum of Life, which is "very keen" to use the technology to provide location-specific information to visitors in place of the current audio headset-based system.
The university is also part of the Musika research project, which aims to develop software that will help developers write applications for the European civilian satellite project Galileo, which is due to enter service from 2006. Galileo will address problems with the existing GPS, which is controlled by the US military.
A key problem is that during wartime the military can move the satellites to a new location or even turn them off entirely. "You cannot build applications that are safety-specific on a system that could be switched off," Taylor said.
A key development will be the mass take-up of third generation mobile phones, which will provide the increased bandwidth to support location-based services. Other technology such as Bluetooth and "pseudo-lites" - fixed-position terrestrial satellite dishes that can be attached to buildings - could also help kickstart this technology and bring the benefits of GPS and Galileo indoors.
"The requirement for very accurate positioning that is both reliable and continuous will be on the increase almost forever," said Taylor. "I see a time when these devices will be so small they will be sewn into your clothes. The future is a world where anyone with a PDA or mobile phone loaded with the right software could never get lost, and would always know what they were looking at."
Interactive tour guides
Providing visitors to tourist attractions with information on the item they are looking at, such as a painting or building, or a list of local hotels and restaurants.
Navigation for the blind
Mobile devices with digital mapping software could be activated by voice commands or a keypad.
Reducing traffic accidents
Drivers could get warnings if their car is veering off the road. "In the future, you will have totally automatic cars - cars that can even park themselves," said Taylor. "That is 20 to 30 years away."
Positioning roads and structures.
CV: George Taylor
George Taylor's current research interests focus on the integration of geographical information systems with satellite navigation systems, such as the Global Positioning System. During the past 10 years he has worked on many projects involving modelling spatial phenomena and applying digital map data to large-scale geographical information problems. Much of his research work has been in collaboration with industrial partners, leading to commercially marketable geographical information products.
Getting wired: tell us the future
Who would have thought in 1990 that the World Wide Web would become a killer application, transforming the way we work? Research work being undertaken at universities today will change the way we use IT, and Computer Weekly is on a mission to showcase their cutting-edge IT research.
We would like to hear from researchers who think they might have made a breakthrough. Each week we will feature innovation in the field of IT, giving a glimpse of how technology will evolve in the coming years.
New Tech High School is preparing today’s students for tomorrow’s future in the workforce
Written by Marcus Amos
Tuesday, 19 October 2010
Students entering Scottsburg High School as freshmen in the 2010-2011 school year had the opportunity to enroll in Scottsburg New Tech High School.
The New Tech concept was started in Napa, CA, in 1986 to create an educational environment that would produce successful students for the real world. There are currently eight in Indiana and a great number nationwide. Next year, 10 other Indiana schools will be joining the New Tech network.
New Tech stresses a student-centered learning environment. In New Tech, students help create an environment of trust, respect and responsibility. They share a role in governing the school and are expected to take ownership of their school and their learning.
Project based learning uses technology and investigation as students explore issues and questions relevant to their lives. The often asked question “Why do I need to learn this?” is diminished as students apply what they are learning to the real world. The learning environment, which often combines content from more than one class, is based on students working in teams to solve problems and create projects. Projects are designed to cover state standards taught in a traditional classroom. Students gain not only subject-matter knowledge, but also the skills they need to thrive in college, career and life.
New Tech takes the skills required by prospective employers, such as oral communication, professional work ethic, written communication, critical thinking and teamwork/collaboration, and puts them into practice on a regular basis in the classroom. Students are not just graded on whether they can remember content for a test, but on whether they can actually use, apply, explain and expand upon the content. The New Tech student is an active learner. Students learn to handle long, complex tasks and manage their time.
Students in New Tech High School complete a rigorous curriculum geared toward preparing students for the 21st Century skills they will need to successfully compete in today’s changing job market. The one-to-one student computer ratio, emphasis on Science, Technology, Engineering and Math, collaboration with classmates, and use of technology truly makes it a school for tomorrow available to students today.
The concept for New Tech began about two years ago for Scottsburg, when Mayor Bill Graham asked several key people - including Scott County Economic Development Director Robert Peacock, then Scott County School District 2 Superintendent Robert Hooker and Scottsburg High School Principal Derek Marshall, along with several other members of the community - to explore the concept of New Tech.
The Scottsburg City Council, Scottsburg Board of Works, Scott County Economic and Development Corporation, Scott County School District 2 School Board, along with other state officials were instrumental in seeing this project through.
A two-million-dollar renovation was needed for the project to take place at Scottsburg, which would start with the renovation of the former Ag building at the high school. The Ag program is still in place and still thriving at the main school complex. Now it was time to move into the next stage of where the future is leading students in an ever-changing world.
The school board approved the renovation with the help of a grant received through the state.
Several teachers came forward and agreed to teach in this new 21st Century format.
The building principal for the New Tech High School is Debora Yost. She served as the assistant principal at Scottsburg High School under Principal Derek Marshall. Marshall serves as the Campus Principal over the entire high school campus.
Yost has been at Scottsburg High School for five years. She was very glad to come to a smaller community after being in a big city where the environment was very violent.
Yost said that when she first heard of the New Tech project, she had watched a video on YouTube. The video described how schools were not teaching technology at the pace of colleges and universities. The video relayed the idea that students needed to be prepared for careers that may not be in existence today, but will be in the very near future. The video also emphasized that many of today's jobs didn't exist a few years ago, and that the job market will be changing drastically in the years to come with the advancement of technology, which is why the New Tech High School is such a valuable asset for students who must compete in a global market.
When students finish New Tech they should have a minimum of 12 hours of college credit when they graduate high school. The students will be able to take college-credit classes at the school or even take classes at area colleges while still in high school. Students will also take part in 125 hours of community service, such as helping senior citizens, recycling and taking on a social responsibility to their community. They will learn and exercise the three pillars of trust, respect and responsibility. They will also follow rules and school norms that they select. The students take part in an advisory class after lunch, a student-body mentoring program to learn more about work projects, cyberbullying and healthier ways to live. During their four years in the New Tech High School, they will have a teacher/mentor to help them along the way to make sure no one falls through the cracks of this educational process.
In a special dedication ceremony in September, Scottsburg Mayor Bill Graham stated that history was now taking place in Scottsburg, with the opening of the New Tech High School at District 2.
“The worst economic times is sometimes the best time to do something for the best of times,” Graham stated.
The New Tech High School complements and will work nicely with the new Tie Center, which is now a science park in the City of Scottsburg. This is another step to keep Scottsburg and Scott County on the cutting edge of today’s job market, and training the next generation of workers for the 21st Century and to be able to attract more jobs for this community. The knowledge and skills taught at New Tech High School will benefit today’s students and tomorrow’s workers and families.
A great deal of work for this project was undertaken by former superintendent Robert Hooker up until his retirement this past spring. But the reins of this project transferred smoothly to new Superintendent Dr. Phil Dierdorff.
Also available to speak at the dedication ceremony was Superintendent of Public Instruction from Indianapolis, Tony Bennett, who spent eight years in the Scott County School District 2 school system. Bennett stated that he was proud to call Scottsburg his home and he was proud to say “Thank you Scottsburg for being the example to others by stepping up and taking part in the New Tech High School.”
“We have to compete on a global stage in the university system and striving for the future of our children. We are in the digital age, where students are using computers instead of textbooks. With this new school we think about our children who will be our leaders of our community in the future,” noted Bennett.
According to District 2 School Board member Cory Lytle, “This will open doors for us, which was once closed for our students. We will lead the way with a new model for education. We believe in our students and staff. We want to put our kids first.”
According to New Tech Principal Debora Yost, “The first class of students and parents are amazing. They have taken a leap of faith and that really shows their belief in the program, it really shows the character of the parents.”
Each student who attends New Tech rents a MacBook, valued between $1,200 and $1,400, to use in the classroom. After a specific amount of time, students earn the right to take the computers home with them. The students who have chosen to become a part of New Tech still have the opportunity to take part in extra-curricular activities such as band, sports and other programs.
The students are graded in a much more detailed manner using a rubric style, which allows parents and students to see the areas in which they excel, maintain, or need more work.
New Tech is:
* For college bound students looking for a challenging environment.
* Where real world issues and problems take center stage.
* Students actively participating in their education.
* A school where students create high quality products and performances which are presented to a public audience.
* A learning environment where students collaborate with community leaders on real world projects.
* A place students use various technology tools to demonstrate their learning.
New Tech is Not:
* A vocational school.
* A substitute for Prosser.
* Sitting passively listening to a teacher lecture.
* Playing on a computer all day.
* Projects are not end of unit activities or fun activities, but rather central to the instruction.
New Tech Mentoring Provides:
* Daily advisory meetings.
* Exploration of career interests.
* Internship placement.
* Academic Advisory.
* Community Service projects.
New Tech Focuses on:
21st Century Skills:
* Oral Communication
* Written Communication
* Citizenship and Ethics
* Career Preparation
* Critical Thinking
* Technology Literacy
* Content Standards
New Tech Students
Are Expected to:
* Earn 12 hours of college credit before graduation.
* Perform 125 hours of community service.
* Serve in an internship.
* Create a digital portfolio.
* Earn an Honors Diploma or Core 40 Diploma.
The Scottsburg New Tech High School schoolwide learning outcomes are:
* Professionalism - Students will display a high degree of honor and integrity, citizenship and responsibility. These qualities may be evidenced by proper dress, work ethic, promptness, attendance, respect and leadership.
* Technology Literacy - Student will be able to use digital technology, communication tools, and/or networks appropriately to solve information problems in order to function in an information society.
* Collaborative Skills - Students will be able to work with staff, faculty, community members and peers. Team members should discuss and work toward what they hope to accomplish to meet individual and group goals.
* Critical and Inventive Thinking - Students will display flexible, creative, and critical thinking in order to produce innovative works. Thinking must demonstrate resourcefulness, persistence, curiosity and risk taking.
* Content Knowledge and Application - Students will be able to demonstrate proper use of the content as directed by Indiana State Standards to apply the knowledge gained from the class and the projects assigned.
* Communications - Students will be able to speak and write with an awareness of audience and purpose.
Looking over the 2010-2011 course description list, students will be learning such course works as in the following:
* Aeronautical Engineering - provides students with the fundamental knowledge and experience to apply mathematical, scientific and engineering principles to the design, development and evaluation of aircraft, space vehicles and other operating systems. Emphasis should include investigation and research on flight characteristics, analysis of aeronautical design and the impact of this technology on the environment. Classroom instruction should provide creative thinking and problem-solving activities involved in designing, testing and evaluating a variety of air and space vehicles and their systems.
* Algebra Honors - will be a more extensive version of Algebra 1. Students will cover all the same material and standards in algebra, but will take a deeper look, covering more material that revolves around geometry ideas as well. It will be an advanced algebra and pre-geometry all-in-one course.
* BioLit Course - is a collaborative course based on laboratory investigations and a study of language, literature, composition and oral communication with a focus on exploring a wide variety of genres and their elements. This will include a study of the structures and functions of living organisms and their interactions with the environment. Students will use literary interpretation, analysis, comparison, and evaluation to read and respond to representative works of historical, cultural, and scientific significance. They will explore the structure and function of populations, communities, ecosystems, and the biosphere. Students write short stories, responses to literature, expository and persuasive compositions, research reports, business letters and technical documents. Students will deliver grade-appropriate oral presentations and access, analyze and evaluate online information that will touch on various careers, personal needs and societal issues.
* Building Technology - includes classroom and laboratory experiences concerned with the erection, installation, maintenance and repair of buildings, homes, and other structures using assorted materials such as metal, wood, stone, brick, glass, concrete, or composite materials. It will include cost estimating, cutting, fitting, fastening and finishing various materials, in which various tools will be used along with blueprint reading. Students will develop skills related to building technology, math, measuring, and written and oral communication to ensure the student can interpret instructions and provide information to customers and colleagues.
* Business Technology Lab - is a business course that provides a framework for business related topics, personal finance, and career planning. The course familiarizes students with management, marketing, law, banking and career options. Students will have the opportunity to explore various segments of possible career choices. They will also address the knowledge, skills and behaviors students will need to live, plan and work successfully in today’s society.
* Creative Writing - will increase the study and application of effective strategies for different types of writing. Using the writing process, students will demonstrate a command of vocabulary, the nuances of language, English language conventions, an awareness of the audience, the purpose for writing and the style of their own writing.
* Commercial Art and Graphic Design - is a combination of studio art and graphic design using various materials and techniques such as InDesign, Photoshop and Illustrator. Students will draw illustrative matter for commercial applications with an emphasis on sketching of concepts, while preparing for an end of term art show which includes postcards and flyers displaying student art work.
* Digital Communication Tools - is a business course that prepares student to use computerized devices and software programs to effectively handle communication-related school assignments and to develop communication competencies needed for personal and professional activities after graduation. Students will learn the capabilities and operation of high-tech hardware and software and will develop proficiency using a variety of computer input and output devices. The knowledge and skills of using these devices will be implemented through communication, problem solving and critical thinking.
* Engineering Design - is a three-trimester introductory course which uses a communication system to explore the basic language of industry. Students will investigate the design, planning, and use of the communication system. Students will work from sketching simple geometric shapes to applying a solid modeling computer program. They will learn a problem-solving design process and how it is used in industry to manufacture a product. The computer-aided design (CAD) system will also be used to analyze and evaluate the product design.
* French 1 - Students will be introduced to the French language and culture through speaking, reading, writing and researching. Students will learn basic skills such as introducing one another, participating in conversations, reading small passages, writing to other students in the French-speaking world, and researching weather patterns in France. They will be introduced to the TPRS method of learning a world language. TPRS stands for Teaching Proficiency through Reading and Storytelling, and is a method whereby students are expected to demonstrate knowledge that is acquired, not simply learned. Students will also learn many ways to use technology in a world language classroom, including how to Skype students from across the globe so that they will be able to not only converse with French students, but see them in real time as the French students are viewing them.
* Journalism/Photojournalism 1 - is the study of communications history, the legal boundaries, and the ethical principles that guide journalistic writing. Students use writing styles and visual design for a variety of media formats. Students express themselves with meaning and clarity to inform, entertain, and persuade, so that future work on high school publications or media staffs will prepare them to take a career path in journalism. Students will complete projects such as photo essays, newspaper features, page design, and yearbook layouts which demonstrate knowledge, application and progress in journalism course content.
* World Studies (World Languages and Art) - Students will study a world language through the use of a computer program called Rosetta Stone. Self-motivated students will work on listening, speaking and writing skills as they build their vocabulary in the language of their choice: German, Italian, Spanish or Chinese. In order to incorporate the cultural aspect of the language, students will work in groups and participate in a project-based style of studying the art of the language chosen. This may include ceramics, painting or drawing.
Funds for this project have been made available through grants from the state, money from the Economic Stimulus Package and support from the City of Scottsburg.
The new school is supported by the Scottsburg New Tech Advisory Board. Members on the board are: Mayor Bill Graham, Robert Peacock, President of Scott County Economic Development Corp.; Keith Colbert, President of the Chamber of Commerce; Syd Whitlock, President of the Scott County Bank; Alan Richey, Agriculture Advisory Board Member, current high school parent and IT Director for Cylicron in Louisville, Ky., Arleen Schulze, SCSD2 Curriculum Director; Lynda Phillips, Soil and Water Conservation and New Tech Parent; Ric Manns, SCSD2 CTA President and middle school teacher; Tammy Staser, Scottsburg Elementary Teacher; and Tim Johnston, SHS and SMS band director.
More information about Scottsburg New Tech can be found at the high school website at www.scsd2.k12.in.us.
The birds are a metre high and live in wetland habitats
A bird species that vanished from the East Anglian fens 400 years ago is nesting at an RSPB reserve in Suffolk.
The Royal Society for the Protection of Birds (RSPB) said two young cranes had been seen at Lakenheath Fen.
Officials said the development followed the launching of a project aimed at re-establishing cranes in Britain.
The fens, which once stretched from Cambridge to Lincoln, were the last stronghold of the three-foot (1m) high bird before it became extinct in 1600.
Conservationists said the bird vanished as wetlands were drained and hunting took its toll.
The Great Crane Project, which aims to establish a sustainable population of cranes in Britain, starts with taking eggs from healthy populations overseas.
Common Crane (Grus grus)
Global population: 220,000
The eggs are then incubated, the chicks nurtured and then released into a protected environment.
An RSPB spokesman said: "Cranes have nested in the Norfolk Broads in recent years and have made sporadic attempts elsewhere.
"But the Fens are significant because they were traditionally the stronghold of the crane and we have really been hoping that they would return to the area."
Lakenheath Fen was created out of carrot fields in the early 1990s in an effort to encourage wetland birds to return.
Norman Sills, site manager at the reserve, said: "Seeing young cranes fly over the reserve makes me realise all our hard work has been worthwhile.
"These are fantastic birds."
Architectural geometry is an area of research which combines applied geometry and ... Generative Components — Generative design software that captures and exploits the critical relationships b...

The relationship between geometry and architectural design is described and discussed along some examples. Geometry is the fundamental science of forms ...

Geometry in architecture and building. Hans Sterk. Faculteit Wiskunde en Informatica .... numbers in relation to shapes in your hands is through the use of .... From the equations x + y + z = 3 and 2x + y − z = 5, the angle φ between the ...

At the present time, many school children in New Haven are unaware of the relation between the mathematics studied in their classrooms and the ...

Dec 1, 2012 ... Furthermore, the underlying relationship between cosmology and ... Geometric proportions in architectural patterns represent a design ...

There is a relationship between mathematics and architecture. ... I will refer to geometric concepts mainly, but also involve other mathematical ideas such as ...

the relationships that have formed between the Commission and the Court, the PSC ... the nature of this relationship has evolved from architecture to geometry.

www.coreknowledge.org/mimik/mimik_uploads/lesson_plans/821/Geometry in Art and Architecture.pdf

Students will understand the connections between geometry and design, and the importance of that connection to art and architecture. B. Content from the Core ...

Max Bill was one of the great artists who gave thought to the relationship between art and structure. Through the history of geometry and that of architecture there ...

Sep 20, 2012 ... The link between math and architecture goes back to ancient times, when ... so you should remember that the geometric form is unique in that ...
eastern and orthodox christianity at soas
In 1843 AH Layard, the archaeologist who uncovered the ruins of the Assyrian city of Nineveh, romantically and impulsively (but not rationally or scientifically) called the local Christian community "as much the remains of Nineveh and Assyria as the rude heaps and ruined palaces", which led the missionary Revd JP Fletcher to declare the Christians of what is now northern Iraq and north-west Iran "the only surviving human memorial of Assyria and Babylonia." His only 'evidence' was that, to his eyes, the local peasantry resembled the sculpted figures unearthed by archaeologists. This somewhat eccentric scratch has turned into gangrene.
"Yea, so I have stirred to preach the gospel, not where Christ was
named, lest I should build upon another man's foundations: But as
it is written, To whom he was not spoken of, they shall see; and
they that have not heard shall understand." (Romans15:20-21)
Ignoring St Paul's strictures, the Protestant missionaries had an agenda of their own: deliberately undermining Christian foundations laid and sturdily built upon over an 1,800-year period, they set Christian against Christian.
The Roman Catholic Uniates, known as Chaldeans (another misnomer!), could be ignored and vilified simply because they were Roman Catholics; the Syriac Orthodox Church was condemned as 'heretical'; but the Nestorians were fawned over and idealized as 'long lost' Protestants!
The bitter irony is that Protestantism never really caught on, but because the missionaries' frenetic zeal fused quite nicely with the needs of British foreign policy, a new 'nationalism' was conjured up out of thin air on the outer edges of the Turkish Empire.
Not every missionary in the area swallowed the 'Assyrian' myth. The Archbishop of Canterbury's Mission to the Assyrians (sic) was the most prestigious grouping, but the distinguished Syriac scholar JR Coakley reported Anglican missionaries as saying that the term 'Assyrian' was but "a fad of His Grace, no-one else." Isabella Bird, in her memoir of her travels in the area at the time, often recounts visits to outposts of the 'Mission to the Assyrians', but always refers to the mainly Nestorian Christians she met as, correctly, Syrians.
But Frankenstein created and nourished his monster, and today it has become yet another destabilising factor not only in Iraq, but in the ecumenical movement.
The 'Assyrians' plaster the word 'Assyrian' over everything. Not content with misnaming their Church by adding 'Assyrian' to it, they label the language every normal person calls 'Syriac' as 'Assyrian'. With not a word of logical argument or explanation, as if thousands of years of evidence does not exist, they simply say, "'Assyrian'...'Assyrian'... 'Assyrian'". If you dare to disagree you are lied about, vilified, slandered. Look at their Wikipedia entry for an example of crass indifference to truth, or read their abuse of the 120th Patriarch of the Syriac Orthodox Church, Mor Afram Barsoum, a saintly man and an acknowledged great scholar.
Perhaps their most obscene lie is to label the massacre of Syriac Christians (of all denominations) in Turkey during and just after the First World War as a massacre of 'Assyrians'. This is a blasphemous attempt to gain kudos out of the martyrdom of thousands of non-Nestorian Christians.
Unfortunately there are some academics who, like most journalists, are only capable of repeating the last sentence they've heard, and 'Assyrian' is used all-too-often in the context of Iraq.
Before the illegal US-led invasion of Iraq the Christian denominations of Iraq were as follows:
'Chaldean' RC Uniates 200-300,000;
Syriac Orthodox Church - 80-100,000;
Syrian Catholics 40-45,000;
Nestorian 'Assyrians' 25-30,000.
The 'Assyrians' ignore these figures, claiming that Iraq has anything from 700,000 to 2 million Christians, all of them 'Assyrians', of course!
Those who know and state that they are not 'Assyrians' are labelled so against their will by this vociferous minority.
The USA has taken over the mantle of the British Protestants. Under pressure from 'born again' George W Bush - who also repeats the 'Assyrian' myth, because it accords with US strategic needs - the Iraqi president promised the 'Assyrians' an autonomous region in northern Iraq, which is inhabited primarily by Kurds. A recipe for a bloodbath! When I emailed a US evangelical newspaper which had trumpeted this 'success' for the 'Assyrians', asking how a small minority within the already small Christian minority could run the lives of those it wrongly names and lies about, in an area dominated by Kurds, without murder and mayhem occurring, the reply was - a stunning silence!
The Syriac Orthodox Church, whose faithful have suffered at the hands of Kurds in the past, as have all Christians in the area, has worked strenuously to build up a relationship of trust with the Kurds, whereas the 'Assyrians' seem hell-bent on carnage.
It would be interesting to know the attitude to this 'Assyrianization' of the Church of the East within the Church itself. Ecumenical material written by clergy of the Church of the East and published on an official website is very telling. The texts speak of "the Church of the East" and the Persian Church, but never 'Assyrian'.
The acute need for dialogue, the acute need for becoming peacemakers in, for example, Iraq, will hopefully provide the opportunity for those who are Christians inside the 'Assyrian' funny farm to pray, pray and pray again. It is our duty, whilst explaining the truth - that the real Assyrians no longer exist and that the modern-day 'Assyrians' are hate-filled fanatics - to pray with and for the Christian heart of the Church of the East.
It is my hope that the Syriac Orthodox Church in particular, because of the shared language and culture, will be enabled to have ecumenical dialogue with their fellow Arameans in the Church of the East.
There are several federal laws that have been enacted to protect the rights of individuals with disabilities, including children. State laws are also in place to ensure protection of those rights, and may govern if they provide greater protection than federal laws.
Although this website does not discuss states laws, families can learn more about the laws in their states by contacting a local PTI (parent training and information center).
The US Department of Justice hosts this site, full of information about the Americans with Disabilities Act. The "Introduction to ADA" page is helpful, as is the "Topics of Interest" page.
This site provides information on major topics covered by IDEA 2004. It has excellent video clips on Early Intervening Services/RTI, Individualized Education Program, Discipline, Highly Qualified Teachers, Procedural Safeguards, and other important topics.
DisabilityInfo.gov is a comprehensive online resource designed to provide people with disabilities with quick and easy access to the information they need. With just a few clicks, the site provides access to disability-related information and programs available across the government on numerous subjects, including benefits, civil rights, community life, education and employment.
This website funded by the Office of Special Education Programs (OSEP), U.S. Department of Education provides public access to data about children and youth with disabilities served under the Individuals with Disabilities Education Act (IDEA).
This site has useful information on Section 504 Plans, accommodations, and related services. It covers information on documentation of disability as well as procedural safeguards required under Section 504 of the Rehabilitation Act of 1973.
This site is a publication from the Social Security Administration detailing benefits available from SSA for children with disabilities. It is clearly written and an essential resource.
The homepage for information about SSI (Supplemental Security Income) from the Social Security Administration, this site provides information about SSI benefits. Its presentation is good and may be quite valuable for parents whose children are approaching adulthood. SSA has also provides Disability Starter Kits (in English and Spanish) to help families prepare to apply for SSI benefits for their child.
The website now features a special page Understanding Supplemental Security Income. It's easy to read and answers a lot of questions. There is a page specifically about SSI for children.
Another branch of the US Department of Education, the Office for Civil Rights has a mandate to protect the civil rights of children in public schools. It is worth it to explore the whole site, but the special section on "Disability Discrimination" may be of greatest interest to families of children with disabilities.
The OSEP (Office of Special Education Programs) homepage features articles of interest and has links to publications, studies and special programs. It is a good way to keep current on trends and research in special education from an official point of view.
Because legislation is often written in language that is difficult to understand, there are organizations whose mission it is to advocate for children with disabilities and their families. In addition to the list below, it is always a good idea to check with a nearby parent training and information center (PTI) for local advocacy resources.
TASH has been advocating for the rights of people with disabilities for over 25 years. The site is not focused on children's issues nor is it informational, but it does provide information about the work that TASH does within the disability community.
Norm Kunc and Emma Van der Klift are Canadian husband and wife disability advocates who also provide disability-awareness trainings (Norm has cerebral palsy). Their site has a multitude of helpful links and includes information on issues such as sexuality and abuse not often available on other sites.
The Bazelon Center is a non-profit legal advocacy organization. This site is essential for families with children with mental disabilities. It is informative and brings important mental health resources together in one place.
Parents and legal professionals work together to provide advocacy for families of children with disabilities on this site. Good links to information about IDEA and Section 504 and to news about special education law.
Family Voices is a national grassroots network of families and friends speaking on behalf of children with special health care needs. Their site provides links to their wonderful publications, newsletters and advocacy alerts. Information is also available in Spanish.
This is a good site to visit for information about research and policy issues for children with special health care needs.
A voluntary national membership association of protection and advocacy and client assistance systems (congressionally-mandated agencies that provide advocacy and legal representation to people with disabilities). The NAPAS site contains information about legal rights of individuals with disabilities and provides links to their Client Assistance Programs in every state.
Wrightslaw is a source of accurate and reliable information about special education laws and advocacy for children with disabilities. The website provides information on most disabilities and on several special education topics such as RTI, high-stakes testing, procedural safeguards, inclusion and early intervention.
Broad Collection of Leaflets: NRCS (rare, threatened & endangered mammals)
Bears, Grizzly - FWS Conservation (see also 'Ungulates')
Mammals - tour all the FWS National Refuges, meeting some of the mammals that live there. For example, Turnbull NWR, Nisqually NWR, or Quivira NWR.
Plants Are Cool Too!
Maps! - Digital versions of the USGS topographical maps for the continental United States.
Water Conservation IQ Test
Wolves - International
Can't find a particular species? Try using a search engine, such as google.com, and use the species name under "keyword".
BEST BET! NatureServe - a biological database
Astronomy Basics - the USFS doesn't have an astronomy program ... but we do have an Air program. So we are stretching the staff boundaries by including this URL. A teacher contacted us and recommended it.
Ask the Experts at Pitsco
Boise Aquatic Sciences Lab (FS/RMRS) - Science for Kids
Botanical Society of America + Bucknell University: Plants Are Cool Too!
Endangered Species Information - USFWS Students and Teachers
Green Wing - Ducks Unlimited
Science Education Network
NASA Biology Interests
National Park Service
North American Native Fish Association
Natural Inquirer - The Natural Inquirer is a science education resource journal for 5th grade and above.
Partners in Amphibian and Reptile Conservation
Plant - Recommended Plant Sites for Kids
Puddler - Ducks Unlimited Print Magazine
Science for Kids! - Agricultural Research Service
Smithsonian Institution - National Zoological Park
National Museum of Natural History
State & Provincial Trees
Statistics & Probability
The Nature Conservancy

Although marine animals do not occur on our national forests and rangelands (with the exception of a few waterfowl and fish species that spend part of their life cycle in the ocean), we do receive numerous questions regarding marine animals. We suggest you try the following links:
Marine Science Center

The USDA Forest Service does not have an aquaculture program, but these two sites may be of help.
Herbert Hoover Describes The Ordeal Of Woodrow Wilson
The great tragedy of the twenty-eighth President as witnessed by his loyal lieutenant, the thirty-first
June 1958 | Volume 9, Issue 4
If Mr. Wilson had been either simply an idealist or a caucus politician, he might have succeeded. His attempt to run the two in double harness was the cause of his undoing. The spacious philanthropy which he exhaled upon Europe stopped quite sharply at the coasts of his own country.
He did not wish to come to speedy terms with the European Allies; he did not wish to meet their leading men around a table; he saw himself for a prolonged period at the summit of the world, chastening the Allies, chastising the Germans and generally giving laws to mankind. He believed himself capable of appealing to peoples and parliaments over the heads of their own governments. … In the Peace Conference—to European eyes—President Wilson sought to play a part out of all proportion to any stake which his country had contributed or intended to contribute to European affairs. … He sought to bend the world—no doubt for its own good—to his personal views. … If President Wilson had set himself from the beginning to make common cause with Lloyd George and Clemenceau, the whole force of these three great men, the heads of the dominant nations, might have played with plenary and beneficent power over the wide scene of European tragedy. He consumed his own strength and theirs in conflicts in which he was always worsted.
Clemenceau in his book, Grandeur and Misery of Victory , makes a variety of disparaging statements: Doubtless he [President Wilson] had too much confidence in all the talky-talk and super talky-talk of his “League of Nations.” England in various guises has gone back to her old policy of strife on the Continent, and America, prodigiously enriched by the war [Clemenceau’s italics], is presenting us with a tradesman’s account that does more honour to her greed than to her self-respect.
President Wilson, the inspired prophet of a whole ideological venture … had insufficient knowledge of … Europe. … It became incumbent on him to settle the destiny of nations by mixtures of empiricism and idealism. … He acted to the very best of his abilities in circumstances the origins of which had escaped him and whose ulterior developments lay beyond his ken. …
On the other hand, the President on his return to the United States was most generous about his colleagues at Paris. He may have been under illusions about the feelings of some of them toward him.
Vance McCormick, in his Diary under date of July 5, 1919, says: The President sent for Lamont, Davis, Baruch and me with Dr. Taussig to come to his room [on the George Washington returning to America] … to read us his message to Congress to get our suggestions and criticisms.
We had few changes to suggest as it was an excellent general statement of the situation at Paris and the problems that confronted him. We raised the question as to the praise given his colleagues and developed from him a real feeling of friendship for his colleagues whom he said privately were in accord with the principles we were fighting for but were hampered and restricted by their own political conditions at home, due to the temper of their people. He said he was surprised to find they had accepted the Fourteen Points not for expediency only but because they believed in them.
He had probably mistaken politeness for friendship or failed to realize that “Truth is the first fatality of war.”
President Wilson finally left France for the United States on June 28, 1919, and arrived in New York on July 8. I bade him good-by at the station in Paris and had no opportunity to talk with him again at any length for over two years.
We wound up our official Relief and Reconstruction organization in Europe early in September and installed in its place the American Relief Administration based upon charity.
I called on Premier Clemenceau on September 3 to express my appreciation for his undeviating support of my work. In another memoir, I have recalled: … He was in a gloomy mood, saying, “There will be another world war in your time and you will be needed back in Europe.” We would not have agreed on the methods of preventing it, so I did not pursue the subject. But to lighten the parting, I said, “Do you remember Captain Gregory’s report on the decline and fall of the Hapsburgs?” He laughed, pulled out a drawer in his desk and produced the original telegram, saying, “I keep it close by, for that episode was one of the few flashes of humor that came into our attempts to make over the world.” He was still chuckling when we parted.
The Premier was fairly accurate on both predictions. The Second World War began twenty-one years after the end of the first one. I was back in Europe in 1946 to co-ordinate world food supplies to meet the second terrible famine, which was inevitable from that war.
Upon the President’s return home, he launched his crusade for Senate ratification of the Treaty. With an accompanying statement of great eloquence, he submitted the Treaty to the Senate on July 10 and the French-British-American military alliance on July 29. On August 19 he conferred with the members of the Senate Foreign Relations Committee. By this time, Senators were in sharp debate over the Treaty. …
With the opposition the President had to meet in the Senate and from racial groups of enemy-state origin, he could not admit to his enemies that there was anything very seriously wrong with the Treaty or the Covenant if he were to secure ratification.
Concrete is an excellent building material. Man has been using concrete to build all types of structures for many centuries. It has proven to be very durable and very strong in compression. With the invention of reinforced concrete in the mid-1800s, concrete also was found to provide a degree of tensile strength. In 1868, Joseph Monier from France patented the use of reinforced concrete to build pipes and tanks. Since that time, engineers have taken full advantage of reinforced concrete's superior engineering properties and have been using it to build water and wastewater tanks all over the world.
Unfortunately, with the introduction of reinforcing steel into concrete, a new problem was created that affected the durability of concrete. When embedded reinforcing steel corrodes, it can cause concrete to crack and spall. These cracks and spalls not only reduce the structural integrity of the concrete, but they also allow deleterious elements to freely enter into the concrete to accelerate the rate of deterioration.
Other problems that affect the durability of concrete in water and wastewater tanks include abrasion, chemical attack and freeze-thaw. These destructive forces can significantly reduce the service life of the structure. This article highlights some methods and materials that can be used to protect concrete tanks from the harsh environment in water and wastewater facilities.
Factors Affecting Durability
All concrete deteriorates over time. The rate at which concrete deteriorates is a function of two factors: the quality of the concrete and the environment to which the concrete is subjected.
The quality of concrete refers to the properties incorporated into the original concrete mix design such as water/cement ratio, cement type, size and hardness of the aggregate and air entrainment. Quality is also dependent on the construction practices used to place the concrete such as proper consolidation, cover and curing. If the designer and contractor paid attention to these details, then hopefully your concrete is dense, has low permeability, is resistant to freeze-thaw damage, and is relatively crack-free. If you have concrete like this, consider yourself lucky but not home free.
The second factor affecting the rate of deterioration is the environment. Water and wastewater treatment plants provide a severe environment for concrete. Concrete tanks can be subjected to wet-dry cycling, freeze-thaw cycling, chemical attack and abrasion. Even high quality concrete will deteriorate under these harsh conditions (but at a slower rate than poor quality concrete). For this reason, it is wise to protect concrete, even good quality concrete, to increase durability.
The best time to protect concrete is when it is new, before harsh chemicals like acids, salts and sulfates have had a chance to get inside the concrete and cause damage. Unfortunately, there are thousands of concrete tanks that were built in the '70s and '80s that were not adequately protected. As these tanks enter their second and third decade of service, the effects of all those years of unprotected exposure start to become apparent in the form of cracks, spalls and leaks. Once these problems develop, the deterioration of the concrete is accelerated because aggressive substances now have an unobstructed passageway into the concrete.
Diagnosing the Problem
In the rehabilitation process, it is important to first determine why the concrete is deteriorating. Having this information allows you to address the root cause of the problem so that you don't get locked-in to a never-ending cycle of repair. This information is usually obtained from consulting engineers and concrete testing firms who specialize in the evaluation of concrete and who will usually perform a series of field and laboratory testing. From this data, a qualified professional can determine if the problem is freeze-thaw, chemical attack, settlement, abrasion, corrosion, etc. With this information, a protection strategy that addresses the real root cause of deterioration can be developed, instead of just applying a quick-fix solution. This is an extremely important part of the rehabilitation process and should not be compromised.
One of the most common root causes of deterioration in concrete tanks is corrosion of the reinforcing steel. In the presence of moisture and oxygen, steel will corrode if not protected. Under normal circumstances, the high alkalinity of new concrete (pH 12 to 13) creates a natural protective oxide layer around the steel known as a passivating layer. As long as this layer stays intact, the steel is protected from corrosion. Unfortunately, dissolved salts in the contained water can penetrate through hardened concrete and destroy this passivating layer. The lower the quality of the concrete, the more permeable the concrete will be thereby allowing water, oxygen and salts to penetrate more easily. When this occurs, rust forms on the surface of the steel, increasing the volume of steel by up to about 2 to 6 times its initial volume (Figure 2). This expansion from within creates large tensile forces within the concrete. Since concrete is relatively weak in tension, the concrete cracks to relieve the tensile stresses (Figure 3). Once the concrete cracks, water, oxygen and aggressive chemicals can freely enter the concrete and attack the embedded rebar and the deterioration process escalates.
A strategy that can be implemented to slow down the corrosion process is the use of penetrating corrosion inhibitors. A corrosion inhibitor, as defined by the American Concrete Institute, is "a liquid or powder that effectively decreases corrosion of reinforcing steel." In the case of existing concrete, a liquid amino alcohol-based corrosion inhibitor can be sprayed onto the surface of the concrete where it will penetrate through the hardened concrete down to the depth of the rebar. When the penetrating corrosion inhibitor reaches the rebar, it forms a protective layer around the steel. Such a method has been shown in independent laboratory tests to reduce corrosion in reinforced concrete by as much as 60 to 70 percent. This technology requires no special equipment and is easy to apply, making it ideal for plant maintenance crews (Figure 4). This type of corrosion inhibitor also may be used as an admixture to protect rebar in new concrete.
Abrasion, Chemical Attack and Freeze-Thaw Protection
Other common root causes of concrete deterioration in water and wastewater tanks are abrasion, chemical attack and freeze-thaw cycling. Abrasion damage results from the abrasive effects of waterborne silt, sand, gravel and other debris coming in contact with the concrete and causing the concrete to erode. Chemical attack can come in a variety of forms. One form occurs with the presence of acids and low pH water (less than 6.5). Water that is acidic dissolves the cement matrix that binds the aggregate in the concrete, causing the concrete to become weakened.
Another form of chemical attack occurs when sulfates in the water or wastewater react with the tricalcium aluminate in cement to form the expansive compound ettringite. This expansion causes internal stresses that cause the concrete to crack or crumble. Additionally, the wet-dry cycling that takes place inside a tank between the high and low water marks accentuates the impact of sulfate attack on concrete. Freeze-thaw cycling in this zone also causes expansive forces within the concrete that result in cracking and spalling (Figure 5).
Concrete can be protected from these root causes by preventing the contained water from coming in contact with the concrete. For obvious reasons, this may not be so easy to accomplish inside a water tank. However, it can be done with the use of protective coatings. When preparing a protective coating strategy it is usually necessary to differentiate between water and wastewater tanks since the latter contains water that is usually much more aggressive in terms of water chemistry and its affect on concrete.
To protect concrete in potable water tanks, polymer-modified cementitious coatings have been used with much success. A polymer-modified cementitious coating can provide an extremely dense protective layer on the surface of concrete while at the same time providing a degree of flexibility (Figure 6). The flexibility of this coating is important because it allows hairline cracks (less than 1/32 inch wide) to be sealed by the coating without having to detail every crack with sealant, chemical grout, or a strip and seal system. A good polymer-modified cementitious coating can bridge these small cracks and be flexible enough to withstand a small amount of crack movement from thermal expansion and contraction of the concrete. However, the quality of the polymer component of the coating is an important ingredient that dictates just how flexible and dense the coating will be. Acrylic and styrene-acrylic based polymers provide the desired properties.
For tanks that contain wastewater, the chemical resistance of the coating is extremely important. For the coating to be successful in protecting the concrete, it must be resistant to the particular chemicals at certain concentrations in the contained wastewater. The best way to determine if a particular coating is resistant to the wastewater in question is through laboratory analysis. This option often is not economically feasible for the job at hand. When this is the case, the owner or engineer should ask for reference projects from the coating manufacturer (preferably projects with similar exposure conditions).
For wastewater tanks, epoxy coatings offer a high degree of chemical resistance and ease of application. More specifically, the standard liquid bisphenol A epoxy with polyamine hardener has proven to be very durable in wastewater tanks. Other coating technologies such as polyureas, urethanes and vinyl esters also offer a high degree of chemical resistance but often are not easily applied.
While epoxies generally have excellent chemical resistance and are user-friendly, they need to be applied under the right conditions. One of the most common problems occurs when epoxy coatings are applied on wet concrete. This condition generally happens with tanks that have been recently drained to permit repair and maintenance work to be done.
Two moisture-related issues that need to be checked before applying an epoxy coating are moisture vapor transmission (outgassing) and excessive moisture content.
Moisture vapor transmission. Moisture vapor transmission (outgassing) simply means that water in its gaseous form (moisture vapor) is trying to exit the concrete. Under the right conditions of temperature and humidity, moisture vapor can be drawn out of the concrete and into the atmosphere through evaporation. If you try to apply an epoxy coating (or any other "non-breathable" coating) on concrete that is experiencing outgassing, you could end up with blisters and pinholes, coating defects that do not provide any protection for the concrete. Additionally, these blisters and pinholes often lead to delamination of the coating that can affect a much larger area (Figure 7).
There is a simple test that can be performed in checking for moisture vapor transmission. ASTM D 4263 (otherwise known as the "Mat Test") involves taping an 18 inch by 18 inch polyethylene sheet to the concrete surface and leaving it for a minimum of 16 hours. If after this time, moisture vapor droplets or condensation appear on the underside of the plastic sheet, then outgassing is occurring. Do not attempt to apply the epoxy coating if outgassing is happening. The solution may be as simple as waiting until late in the afternoon or night to apply the coating when the sun is not drawing the water vapor out of the concrete. If outgassing continues to occur even at night, the solution can be the same as that used to solve the second moisture-related problem, that of excessive moisture content as outlined below.
Excessive moisture content. It is a general rule of thumb that you should not apply an epoxy coating on a concrete surface that has a moisture content greater than four percent. If the moisture content is greater than four percent, the moisture in the pores of the concrete can prevent the epoxy from bonding well to the concrete. Likewise, it is not advisable to apply an epoxy coating on new concrete until it is at least 28 days old, so that the concrete has a chance to hydrate and the moisture content has a chance to drop.
The moisture content of concrete should be checked using a moisture meter (Figure 8) every 500 to 1,000 square feet of surface area to be coated. If the concrete has a moisture content greater than four percent, the easiest solution is to let the concrete dry to less than four percent. Depending on temperature and humidity, it may take a considerable amount of time for this to happen, especially if the tank has been in service for many years and in constant contact with water. This may not be an acceptable solution due to limitations on the time that the tank can be out of service.
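These pass/fail rules are simple enough to fold into an inspection checklist or a small field tool. The sketch below is illustrative only: the class and method names are ours, and the thresholds simply restate the four percent moisture limit, the 28-day cure guideline and the ASTM D 4263 result described above; on a real project, the coating manufacturer's data sheet governs.

    // Minimal sketch of the coating-readiness rules of thumb described above.
    public class CoatingReadinessCheck {

        static final double MAX_MOISTURE_PERCENT = 4.0; // rule-of-thumb limit for epoxy
        static final int MIN_CURE_DAYS = 28;            // minimum age for new concrete

        /**
         * @param moisturePercent          moisture meter reading (taken every 500 to 1,000 sq ft)
         * @param concreteAgeDays          age of the concrete since placement
         * @param matTestShowsCondensation result of the ASTM D 4263 "Mat Test" for outgassing
         */
        static boolean readyForEpoxyCoating(double moisturePercent,
                                            int concreteAgeDays,
                                            boolean matTestShowsCondensation) {
            if (matTestShowsCondensation) return false;               // outgassing: blisters and pinholes likely
            if (moisturePercent > MAX_MOISTURE_PERCENT) return false; // bond failure risk
            if (concreteAgeDays < MIN_CURE_DAYS) return false;        // let new concrete hydrate first
            return true;
        }

        public static void main(String[] args) {
            System.out.println(readyForEpoxyCoating(3.2, 45, false)); // true: safe to coat
            System.out.println(readyForEpoxyCoating(6.5, 45, false)); // false: too wet
        }
    }

When any of these checks fails, the options are the ones discussed here: wait for the concrete to dry, coat at night to avoid outgassing, or use the epoxy-cement leveling mortar described next.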
In the case of new construction, you may not have the luxury of waiting 28 days for new concrete to cure if the tank needs to be put on-line as soon as possible. In these cases, a solution to the moisture problem is to apply a special formulation of an epoxy-cement mortar (water-based epoxy resin + cement) on the concrete surface before the application of the epoxy coating. The cement component of the epoxy-cement mixture allows the mortar to bond to a wet concrete substrate (up to 12 percent moisture content). Applied as a leveling mortar, this thin epoxy-cement mortar layer will act as a temporary moisture barrier on the surface of the concrete long enough to allow the epoxy coating to achieve an adequate bond (Figure 9). There are commercial blends available that incorporate the two component water-based epoxy and the sand/cement component in one package.
In addition to solving the moisture problem, the cement-epoxy mortar layer also fills in bugholes, honeycombs and pores on the surface of the concrete. These surface defects make it almost impossible to obtain a continuous protective coating at the proper thickness on the concrete surface. By leveling the surface prior to applying the coating, it becomes possible to cover every square inch of surface area at the required thickness with the protective coating (Figure 10). Finally, the epoxy protective coating is applied to all surfaces at the manufacturer's recommended thickness (Figure 11).
According to the EPA and other industry sources, the U.S. municipal water and wastewater industry needs to invest $260 billion in capital expenditures over the next 20 years to keep up with expected demand and meet new and existing federal regulations. Much of the money needed is for the rehabilitation of existing water and wastewater treatment plants that have suffered from neglect due to the unavailability of funds.
Hopefully, funds will be made available to bring our water and wastewater systems up to speed. If and when that happens, protecting concrete tanks from the aggressive influences of the water or wastewater it contains should be a top priority. Whether the concrete is new or existing, the life cycle of the concrete can be extended significantly by taking the appropriate steps to protect the concrete.
Standardized testing in elementary schools appears to be more unpopular among all involved — students, parents, teachers and administrators — than any other single issue in public education today. Even as standardized tests are administered to younger and younger children, there is growing evidence that test scores are inappropriately used in designing curriculum, fail to measure what their creators claim, or simply steal valuable and irreplaceable classroom instruction time. Often referred to as achievement tests, standardized tests have been shown, according to the National Center for Fair and Open Testing, to correlate more closely with the amount of sleep a child obtains the night before than with his or her ability to perform well in school.
Until significant policy and political changes occur, however, standardized testing in elementary schools is here to stay. According to Scholastic.com, there are a number of actions you can take as a parent to enhance your child’s test-taking skills, minimize test-taking anxiety and maximize his or her ultimate test scores. Speak with your child’s teacher to determine if there are practice tests available or what other materials might approximate the exam. Ensure that your child gets enough sleep the night before testing, feels well and has a healthy breakfast the morning before exams commence. Encourage your child to read from a variety of sources and discuss what he or she has read to maximize comprehension skills.
If you object to standardized testing in elementary schools, join your school’s Parent-Teacher Association and contact the National Center for Fair and Open Testing for information on opting out and other actions you can take.
On the 29th of August, 1782, it was found necessary that the Royal
George, a line-of-battle ship of 108 guns, which had lately arrived at
Spithead from a cruise, should, previously to her going again to sea,
undergo the operation which seamen technically call a Parliament heel.
In such cases the ship is inclined in a certain degree on one side,
while the defects below the water-mark on the other side are examined
and repaired. This mode of proceeding is, we believe at the present
day, very commonly adopted where the defects to be repaired are not
extensive, or where (as was the case with the Royal George) it is
desirable to avoid the delay of going into dock. The operation is
usually performed in still weather and smooth water, and is attended
with so little difficulty and danger, that the officers and crew
usually remain on board, and neither the guns nor stores are removed.
The business was commenced on the Royal George early in the morning, a
gang of men from the Portsmouth Dock-yard coming on board to assist
the ship's carpenters. It is said that, finding it necessary to strip
off more of the sheathing than had been intended, the men in their
eagerness to reach the defect in the ship's bottom, were induced to
heel her too much, when a sudden squall of wind threw her wholly on
her side; and the gun-ports being open, and the cannon rolling over to
the depressed side, the ship was unable to right herself,
instantaneously filled with water, and went to the bottom.
The fatal accident happened about ten o'clock in the morning. Admiral
Kempenfeldt was writing in his cabin, and the greater part of the
people were between decks. The ship, as is usually the case upon
coming into port, was crowded with people from the shore, particularly
women, of whom it is supposed there were not less than three hundred
on board. Amongst the sufferers were many of the wives and children of
the petty officers and seamen, who, knowing the ship was shortly to
sail on a distant and perilous service, eagerly embraced the
opportunity of visiting their husbands and fathers.
The Admiral, with many brave officers and most of those who were
between decks, perished; the greater number of the guard, and those
who happened to be on the upper deck, were saved by the boats of the
fleet. About seventy others were likewise saved. The exact number of
persons on board at the time could not be ascertained; but it was
calculated that from 800 to 1000 were lost. Captain Waghorn, whose
gallantry in the North Sea Battle, under Admiral Parker, had procured
him the command of this ship, was saved, though he was severely
bruised and battered; but his son, a lieutenant in the Royal George,
perished. Such was the force of the whirlpool, occasioned by the
sudden plunge of so vast a body in the water, that a victualler which
lay alongside the Royal George was swamped; and several small craft,
at a considerable distance, were in imminent danger.
Admiral Kempenfeldt, who was nearly 70 years of age, was peculiarly
and universally lamented. In point of general science and judgment, he
was one of the first naval officers of his time; and, particularly in
the art of manoeuvring a fleet, he was considered by the commanders of
that day as unrivalled. His excellent qualities, as a man, are said to
have equalled his professional merits.
This melancholy occurrence has been recorded by the poet Cowper, in
the following beautiful lines:--
Toll for the brave!
The brave, that are no more:
All sunk beneath the wave,
Fast by their native shore.
Eight hundred of the brave,
Whose courage well was tried,
Had made the vessel heel,
And laid her on her side.
A land-breeze shook the shrouds,
And she was overset;
Down went the Royal George,
With all her crew complete.
Toll for the brave!
Brave Kempenfeldt is gone;
His last sea-fight is fought;
His work of glory done.
It was not in the battle;
No tempest gave the shock,
She sprang no fatal leak;
She ran upon no rock.
His sword was in its sheath;
His fingers held the pen,
When Kempenfeldt went down,
With twice four hundred men.
Weigh the vessel up,
Once dreaded by our foes!
And mingle with our cup
The tear that England owes.
Her timbers yet are sound,
And she may float again,
Full charg'd with England's thunder
And plough the distant main.
But Kempenfeldt is gone,
His victories are o'er;
And he, and his eight hundred,
Shall plough the wave no more.
William King desperately sought the vice presidency for years until finally reaching his goal. In 1852, the Democrats nominated King to the second spot behind Franklin Pierce on their ticket. Pierce-King won that November, but King did not serve long. Tuberculosis took the vice president's life on April 18, 1853.
Alabama's William King enjoyed a long and distinguished career. He represented North Carolina's 5th district in the House of Representatives for three terms. Then, he moved to Alabama and won a senate seat. King served Alabama in the body from 1819-1844. After a brief stint as Minister to France, he returned to the senate. King also served as the senate's president pro tempore.
King allied himself with the Jacksonian wing of his party. As such, he opposed expansive federal power and the rabid southern fire eaters agitating for secession. The future vice president held moderate views on slavery and westward expansion. He proved a voice of moderation during an exceedingly vitriolic period. Additionally, he helped forge the Compromise of 1850 hoping it would end sectionalism.
The Alabaman's politics made him the perfect choice for vice president. The Democrats nominated northerner Franklin Pierce for president and needed a southerner for regional balance. The party denied King the second spot on the ticket in previous elections. His name began to surface for the position in the 1830s and he developed into a perennial candidate. In 1844, President Tyler went as far as shipping King off to France to blunt the senator's ambitions.
King returned to the United States in 1846 and immediately began to maneuver for the vice presidency once more. Party divisions within Alabama bogged him down in state politics in 1848. As a result, he did not run an effective campaign and finished behind General William Butler. The Democrats lost the election in 1848 leaving 1852 open for King.
By 1852, the senator reestablished himself as a top contender. The Democrats endorsed the Compromise of 1850, which he helped pass. The party selected Franklin Pierce on the 49th ballot at their Baltimore convention. The nomination process turned into a brawl and Pierce's supporters realized they needed an olive branch to the defeated Buchanan wing. They also needed a southerner to balance the northerner Pierce on their ticket. As a result, they finally chose King.
Democrat James Buchanan wanted to be president, but lost at the raucous 1852 convention. Buchanan and King had an especially close friendship. In fact, the two might have been a couple. The pair lived together for a decade, attended parties together, and demonstrated a deep affection for one another. Rumors swirled around Washington about the men. Regardless of their personal relationship, they shared common political views. In the end, King won the vice presidential nomination to help placate Buchanan.
Pierce won the presidency defeating Whig nominee General Winfield Scott. King developed tuberculosis sometime before the election. He believed he contracted it in Paris. The candidate grew more ill as the campaign wore on and worsened in November. President-elect Pierce did not consult with King on cabinet appointments, which soured the vice president-elect's mood further. The vice president-elect resigned his senate seat and traveled to Cuba in order to recuperate. In February, he realized he could not make it to Washington D.C. in time for the inauguration. Congress passed legislation allowing King to take the oath of office in Havana. It remains the only time a vice president was sworn into office outside the country.
The new vice president took the oath of office on March 24, 1853. He could no longer stand on his own. On April 6, he left Cuba for the United States. Vice President King reached his Alabama home on April 17. However, he was terminally ill by this point. He died on April 18, 1853 at the age of 67. His death removed an arbiter of moderation from presiding over the senate. As a result, King missed the rancorous Kansas-Nebraska debate which led to the infamous attack on Charles Sumner. His influence might have calmed some nerves and passions. On the other hand, the country might have been too far gone by this point for moderates like King to have any impact.
William King wanted to be vice president for two decades. Eventually, he achieved his goal. King's politics, tenure within the party, resume, ties to James Buchanan, and southern residency led to his nomination and election. Unfortunately, King developed tuberculosis and died 25 days into office. There is no way to know if his personality and experience would have helped usher the country away from the calamities it suffered in the 1850s.
This morning Director-General Margaret Chan of the World Health Organization (WHO) raised the threat level of H1N1 flu, saying "The world is now at the start of the 2009 influenza pandemic." Chan said that while there have been relatively few deaths, and many of those cases involved pre-existing medical conditions, things could get worse as the virus spreads to poorer countries with few resources. In addition, Chan said that this particular virus has never been seen in humans before and that "The virus writes the rules and this one, like all influenza viruses, can change the rules, without rhyme or reason, at any time."
In addition, said Chan, pandemics typically take six to nine months to spread, and that areas where the virus has peaked can expect to see a second wave of infections.
U.S. HHS Secretary Kathleen Sebelius and U.S. DHS Secretary Janet Napolitano issued a statement saying that the change in threat level doesn't change what is being done in the U.S. to prepare and respond. "Although we have not seen large numbers of severe cases in this country so far," said Sebelius, "things could possibly be very different in the fall, especially if things change in the Southern Hemisphere, and we need to start preparing now in order to be ready for a possible H1N1 immunization campaign starting in late September."
Napolitano said that the outbreak was seen early on as a pandemic, so the announcement comes as no surprise. The administration is reaching out to state and local government, she said, as well as school districts and the private sector to urge them to modify and update their pandemic plans.
The 1918 flu pandemic that killed millions around the world was also a type of H1N1 virus. While the first wave of infection was moderately harmful, the virus mutated and the second wave was much more deadly.
Source: Adapted from Health Smart
It’s long been a popular idea that our levels of optimism or pessimism can influence physical health. Think how we often tell ill or injured people to look on the bright side. Now scientists are finding that our internal philosophies, especially how optimistic we are, potentially have a greater impact on our health than we ever thought possible.
Dr. Becca Levy, from the Yale School of Public Health, has found some extraordinary benefits of an optimistic outlook. In one study, she looked at 660 people who’d completed a survey about their attitude to aging in 1975, then correlated their responses to the ages at which they died. “We found that individuals with a more positive view of aging tended to live seven-and-a-half years longer than those with more negative views of aging,” says Dr. Levy. “This advantage remained after adjusting for a number of factors such as age, gender, socioeconomic status, loneliness and functional health.”
A 2003 study from the University of Wisconsin-Madison found that positive thinkers responded better to flu vaccine. In the study, 52 people received a flu shot and were then asked to think and write about a very happy memory, followed by an unhappy one. Those who showed greater activity in the left side of the prefrontal cortex – a part of the brain associated with positive emotional responses – had the greatest number of antibodies against the disease six months later.
Even physical experiences may be buoyed by a more optimistic outlook. According to a 2005 study by the Wake Forest University School of Medicine, our personal outlook may influence how we perceive pain. The study found that if people were told to expect less pain when receiving a short burst of heat, they reported feeling less pain, regardless of the intensity of the heat. The researchers found that people’s expectations had as much effect on pain as a dose of morphine.
Bad attitude bringing you down?
But if an optimistic attitude can be good for our bodies, does a pessimistic attitude harm us? Many New Age books promote this idea, but the scientific evidence is less clear.
“There is a commonly held belief in the general community that stress and depression can cause cancer,” says Dr. Melanie Price from the University of Sydney. As a researcher in the newly emerging field of psycho-oncology, she examines how emotional health can influence and be affected by cancer. “There is some evidence to suggest that stress may increase the risk of a cancer diagnosis, but it’s not overwhelming,” she says. “Researchers have linked cancer registries with divorce and death registries, and some of those studies have found a particular link between divorce and breast cancer, but there are other studies that have found no link.”
Dr. Price is studying 2,500 women to see whether life events and coping styles influence the incidence of breast cancer, and is specifically looking at whether highly stressful experiences, such as the death of a partner, have any impact.
There is already strong evidence that some emotions could have an effect on the heart, with numerous studies linking heart disease to depression, and emotional states, such as hostility, to outcomes for heart patients.
No one really knows why our minds could have this influence on our bodies. “One theory is that stress has an impact on the immune system and your hormones,” says Dr. Price. “If your immune system is compromised, you are more likely to be diagnosed with cancer. So it may be that stress can compromise your immune system, which could increase your chances of developing cancer.”
The mind-lifestyle connection
But an optimistic attitude may also influence the way you live your life, which has an impact on your health. “If you’re under a lot of stress, you’re more likely to not look after your diet, not exercise, not sleep as well, drink more alcohol. These lifestyle things are also risk factors for cancer and other diseases,” says Dr. Price.
Understanding how emotions are linked to physical conditions is important in helping to create new treatments, and also to offer hope to people who believe they may have brought their illness on themselves.
When Grace Gawler, who works with breast cancer patients, wanted to learn more about the emotional experiences of women in her support groups, she conducted open-ended surveys. “So many said that they felt the reasons that they got the cancer were because they had been chronically stressed, or had unresolved grief from their lives.”
She realized that many women blamed themselves for their diagnosis. This led to her writing a book called Women of Silence, about dealing with breast cancer and emotional healing. “There is pressure for people diagnosed with a disease like cancer to just ‘think positive’, but that’s not necessarily helpful. It can even be more like denial.”
“I’m careful to reframe that with people, to tell them that if they’ve had chronic stress, they did the best they could with what they knew at the time. Yes, stress might have been a factor in their illness, but how can we deal with stress, learn some simple stress management tools and re-engage the things they are passionate about?”
Finding your inner optimist
True optimism does not mean being perpetually positive no matter what, and it is not about denying legitimate feelings of sadness or grief. “It’s actually quite normal if you are diagnosed with a chronic or potentially fatal illness to feel upset about it,” says Dr. Louise Sharpe, a psychologist from the University of Sydney who has studied the link between emotions and arthritis. “You have to process it and seek treatments before you can reach a point where you’re ready to accept it. What normally happens is people around you feel very uncomfortable with the grief and distress, and they try to get you to think positively, and sometimes that can be invalidating.”
Dr. Sharpe says an optimistic outlook doesn’t have to be ingrained – it can be created. She has used cognitive behavioural therapy with arthritis patients to see whether a more positive outlook has any influence on the disease. She found people with more optimistic views are much less likely to become depressed about their condition, and she believes this response has led to improved joint function in some patients.
“We look at what people say to themselves and whether it’s in fact true, or whether they are perhaps seeing things in a negative way. Then we get them to challenge negative thoughts in a realistic way,” she explains. “For example, someone with arthritis might not be able to open the milk because they have joint problems in their hands. They might think, ‘I’m stupid and useless.’ And we say, ‘Is it really the case that you are stupid and useless because you can’t open the milk?’ Then you look at other areas where they can do things, and you help them slowly come up with a more realistic and optimistic view.”
Anyone can learn to be more optimistic using the principles of cognitive behavioural therapy. “You need to look at your underlying beliefs, try to see what you are really telling yourself,” says Dr. Sharpe. “Are those beliefs really true, and is it helpful to think that way? You can train yourself to have a more realistic slant on any situation.”
“Live each day consciously” is how Grace Gawler puts it. She believes living optimistically is not just about challenging negative thoughts, but also encouraging things you love. In Women of Silence she writes, “Sing your song, dance your dance, heal the old life story, close the chapter and the book and begin your new life. Go well.”
BIS 300 Interdisciplinary Inquiry
The purpose of BIS 300 Interdisciplinary Inquiry is to set the stage for students’ success as they pursue an undergraduate degree in Interdisciplinary Arts & Sciences (IAS). The course provides an introduction to the use and keeping of portfolios, and an orientation to the IAS program portfolio and assessment process. The course stresses interdisciplinary inquiry and the richness of the resource environment in IAS. It encourages students to think about how various types of knowledge are produced, and how they can learn to think and act as researchers by becoming active, creative, and self-critical makers of knowledge in academic and non-academic genres.
Interdisciplinary Inquiry (BIS 300) is a collaborative effort between IAS faculty and the staff of the Library, Writing Center, and Quantitative Skills Center. Considerable variation appears in the themes, readings, and assignments in individual sections of the course as instructors, librarians, and academic staff innovate and experiment with different pedagogies. What holds the various sections of the course together is our overarching goal to advance the IAS learning objectives by helping students to:
1. Understand the interdisciplinary production of knowledge and the ways it underwrites different aspects of the IAS program, including an orientation to the program’s diverse and inter-related (inter)disciplinary fields and methods of inquiry, and its portfolio-based assessment process.
2. Become more skilled at critical self-reflection on their learning by making connections among assignments through a course portfolio process that models the program’s degree portfolio and promotes self-directed learning.
3. Become better critical thinkers, readers, and writers, capable of posing and addressing a variety of complex questions drawing on various types of evidence and writing in a variety of modes.
4. Become better inquiry-based researchers, able to use the resources at UWB and elsewhere in order to identify scholarly work while producing original knowledge through data gathering, interpretation and the use of evidence.
5. Become better writers and presenters, ones who are able to communicate clearly, engagingly, and persuasively about complicated topics, arguments, and issues.
6. Learn to work well collaboratively and to build shared leadership capacities.
1. Something which confines the legs or arms so as to prevent their free motion; specifically, a ring or band inclosing the ankle or wrist, and fastened to a similar shackle on the other leg or arm, or to something else, by a chain or a strap; a gyve; a fetter. His shackles empty left; himself escaped clean. (Spenser)
2. Hence, that which checks or prevents free action. His very will seems to be in bonds and shackles. (South)
3. A fetterlike band worn as an ornament. Most of the men and women . . . had all earrings made of gold, and gold shackles about their legs and arms. (Dampier)
4. A link or loop, as in a chain, fitted with a movable bolt, so that the parts can be separated, or the loop removed; a clevis.
5. A link for connecting railroad cars; called also drawlink, draglink, etc.
6. The hinged and curved bar of a padlock, by which it is hung to the staple.
(Science: anatomy) Shackle joint, a joint formed by a bony ring passing through a hole in a bone, as at the bases of spines in some fishes.
Origin: OE. Schakkyll, schakle, AS. Scacul, sceacul, a shackle, fr. Scacan to shake; cf. D. Schakel a link of a chain, a mesh, Icel. Skokull the pole of a cart. See Shake.
plural noun (usually the walking wounded)
1. People who have been injured in a battle or major accident but who are still able to walk.
- Bodies were lying on the track, some said, as the walking wounded - some badly burned and bleeding - tried to make their way to safety through the choking smoke and soot.
- I treated the walking wounded of the Vietnam War from 1968 to 1970.
- Menard, one of the walking wounded, recounted the fierce response they met on a bridge at the southern Iraqi city of Nassiriya.
1.1 People who have suffered emotional wounds.
- His characters are the walking wounded, and their wounds, as often as not, are the result of futile or faddish stabs at self-improvement.
- Do you want to spend your time with the world's walking wounded or go where the energy and the innovation and the creativity and the return and the opportunity for people is?
- ‘I'm a walking wounded painter,’ he says, speaking with an accent that is part East End, part transatlantic.
Syllabification: walk·ing wound·ed
A guffaw, a giggle or a snicker: scientists have shown that different brain areas light up depending on the type of laughter we are hearing.
The study adds to evidence that laughter can be both a form of social bonding and a more complex form of communication.
Though many people laugh when they’re tickled, “social laughter” in humans can be used to communicate happiness, taunts or other conscious messages to peers, the study found.
The latest research studied participants’ neural responses as they listened to three kinds of laughter: joy, taunt and tickling.
Dirk Wildgruber of the University of Tuebingen,
- Get your point across effectively
- Persuade other people to your way of thinking
- Keep your cool in a heated situation
- Win people over
- Get what you want
- Tackle a difficult person or topic
- Be convincing and articulate
- Have great confidence when you speak
Table of Contents
Part 1: The ten golden rules of argument
Chapter 1 Golden Rule 1: Be prepared
Chapter 2 Golden Rule 2: When to argue, when to walk away
Chapter 3 Golden Rule 3: What you say and how you say it
Chapter 4 Golden Rule 4: Listen and listen again
Chapter 5 Golden Rule 5: Excel at responding to arguments
Chapter 6 Golden Rule 6: Watch out for crafty tricks
Chapter 7 Golden Rule 7: Develop the skills for arguing in public
Chapter 8 Golden Rule 8: Be able to argue in writing
Chapter 9 Golden Rule 9: Be great at resolving deadlock
Chapter 10 Golden Rule 10: Maintain relationships
Part 2: Situations where arguments commonly arise
Chapter 11 How to argue with those you love
Chapter 12 How to argue with your children
Chapter 13 Arguments at work
Chapter 14 How to complain
Chapter 15 How to get what you want from an expert
Chapter 16 Arguing when you know you’re in the wrong
Chapter 17 Arguing again and again
Chapter 18 Doormats
Chapter 19 How to be a good winner
Chapter 20 To recap
The Macondo well has released more than 5 million gallons of oil into the Gulf of Mexico and continues to spew at least 200,000 gallons per day. A mile-long straw inserted into the well is now directing 40 percent of the flow into tankers, but BP is still looking for a permanent fix. They may try clogging the faulty blowout preventer with shredded tires, knotted rope, and golf balls in a process known as a "junk shot." Wait, golf balls?
You heard me, golf balls. When an oil company taps a well, it places a steel tower over the pipe that leads into the reservoir under the ground or sea floor. This tower, known as a blowout preventer, has a series of valves on its sides and top that can be adjusted to slow or stop the flow of oil. When the Deepwater Horizon rig sank on April 22, its blowout preventer sprang several leaks. In theory, a junk shot would stuff the tower with objects of different sizes, shapes, and textures, so that oil couldn't pass up through these leaks and into the ocean. The junk items would have to be strong enough to hold up against the pressure of the oil, which is gushing out with significant force. (Otherwise, they might be crushed to bits and flow through the leaky valves themselves.) Golf balls happen to be small enough to fill gaps between the rope and tires, and they're very, very sturdy—most are designed to withstand 2,000 pounds of force from a club.
BP might have chosen other types of detritus for a junk shot—there's no reason a bowling ball couldn't be part of the mixture, for example—but with so many junk materials to choose from, you have to start somewhere. Engineers based their first recipes for the junk shot on the one used to quell the 1991 Kuwait oil fires. (No word on whether that one included golf balls or any other sporting equipment.) But more than a week of testing on a replica of the leaky blowout preventer in the Gulf led to the revised set of ingredients.
BP has already lowered a set of pipes containing the junk to the ocean floor. The garbage is lined up inside the pipes in 10 layers, with the bigger items—the tires and rope—set to enter the blowout preventer first. Then come the golf balls, the electrical wires, and bits of junk of further decreasing size. That way, the big items will be pushed up against the inner wall of the tower, blocking the smaller stuff from passing through the leaking valves. If all goes according to plan, the golf balls will help stopper the well while themselves being trapped in a net of rope and rubber.
To make all this happen, the junk layers would have to be forced through tubes into the bottom of the tower, where the blowout preventer meets the top of the well. The oil rising from inside the well would then force the garbage up into the top of the tower, where engineers hope it would become lodged and seal the openings. At that point, BP would pump a viscous mud through the same tubes that the junk passed through. With the leaks sealed, the fluid would have no place to go but down into the well. Downward pressure from the fluid would trap the oil in the reservoir and underwater cement could be used to seal the well.
Although tires, rope, and golf balls are the most widely cited ingredients, BP isn't proposing to use household garbage alone. The junk shot would also contain epoxy spheres used in the oil industry to seal tube perforations, called frac balls. These are smaller than golf balls—usually no more than one inch in diameter. There may be other items as well. The company hasn't released the precise recipe.
BP executives have objected to the term "junk shot," but it's a fairly good descriptor of the process. The garbage must be injected with tremendous force, or pressure from the spewing oil would prevent the junk from entering the tower. More than one shot may be required. Engineers have lowered enough junk to the sea floor for two shots, and will reload if necessary.
Explainer thanks Gerald Graham of World Oceans Consulting and Mark Proegler of BP.
public relations –noun
1. the actions of a corporation, store, government, individual, etc., in promoting goodwill between itself and the public, the community, employees, customers, etc.
2. the art, technique, or profession of promoting such goodwill.
PR has always been around in one form or another. From time immemorial, PR has been a successful strategy used by Emperors and Kings, Dictators and Evangelists and Savvy politicians.
Yet I am constantly amazed at how many people misunderstand the art of and the value of Public Relations. Public relations strategies and tactics are used to positively influence perception of everything from technology and oil spills to consumer brands, television shows, politicians and philanthropic causes.
So what is PR today? It is definitely different than it was over 100 years ago, when the term “Public Relations” was coined. How can PR work in conjunction with market research to help turn critical insights into impactful messages? To better understand, let’s start with a look at the origins of the profession.
Meet Edward L. Bernays, a nephew of Sigmund Freud, credited as “the father of public relations” in the U.S.
Inspired by his uncle, Bernays used psychology and social science theories to win his clients a place in the hearts and minds of their target audience. Over his long career (he lived to be 103), Bernays successfully instilled positive feelings towards many brands, products and political ideals
He made America fall in love with Bacon, Ballet, and Broadway, as well as cigarettes, soap and neo-colonialist ideology. In 1954, Bernays reportedly helped design the propaganda that helped the U.S. overthrow the Guatemalan government on behalf of the multinational United Fruit Company, which is now called Chiquita Brands International (hummm… think of that when you eat a banana!)
A century later, the Internet has changed the very fundamentals of how people access information, and hence, which channels of information influence them.
Anyone can mount a promotional campaign on the web. It’s easy to build a website, a blog, a Facebook page, or to pontificate on sites like the Huffington Post. And – despite this new democratization of media – PR is an extremely powerful part of marketing an idea, a company, a product, a thought leader and more.
There are three key things to remember:
- PR is not marketing; PR, like market research, is part of a marketing strategy
- PR is not sales; PR generates third party validation of what is being sold
- PR is not advertising; PR keeps desire and respect alive in people’s minds
Basic #1– PR is part of a marketing strategy:
Here’s how Bernays changed the public’s perception of Ballet. In the U.S. in 1918 most people thought masculine dancers were deviates!
By launching the first-ever national PR campaign, Bernays and his team placed targeted feature stories in national media outlets, seeded preview stories in local media in advance of every show, and secured reviews of every performance.
Lesson: PR is a strategic tool and, when it is deployed correctly, public relations can build tremendous excitement, anticipation, and validation of an event or product.
Look at Apple. That company has mastered the art of strategic public relations in support of and in sync with its marketing initiatives. This year’s roll out of the iPad is an example of a wildly successful, well-integrated campaign. And “somehow” information and images leaked to the media.
The company held a widely promoted news conference, and media Tweeted, streamed, and blogged about the iPad as Steve Jobs unveiled it on stage. Needless to say, coverage appeared instantly around the world and it continues to roll out in news outlets today (6 months later)!
Basic #2 — PR generates third party validation of what is being sold
Bernays recognized that people believe what they hear from an expert. When an authority endorses a cause, a product or an idea, people are automatically influenced either positively or negatively depending on their personal preferences.
The Bacon story: Hired to promote bacon sales, Bernays surveyed physicians on their breakfast recommendations. When the results were reported, American physicians favored eating what he dubbed a hearty “American Breakfast” that included bacon and eggs (surprised?!). Bernays leveraged the data to influence 5,000 additional physicians, who influenced their patients, who influenced their families. It sure worked: according to Wikipedia, Americans eat 17.9 lb (8.1 kg) per person per year (2007 numbers).
Lesson: PR seeds both positive and negative ideas. By introducing ideas and innovations to influencers long before the public gets wind of trends, controversies and products, public opinion can be swayed, people can be positively predisposed to an idea, and desire for a product can be ignited. Influencers help evangelize a message, a product, or idea to a targeted constituency.
Google recently announced it is testing a fleet of robot vehicles. Nolan Bushnell, the founder of Atari, has an opinion piece published in Inc. Magazine about the innovation. Bushnell is known for his thought leadership and for delivering products that the public likes, and an article bylined by him builds excitement across a business-focused and financially progressive audience.
Basic #3 — PR keeps desire and respect alive in people’s minds
PR campaigns can successfully motivate groups of people to take action. Bernays was engaged by Procter and Gamble to revitalize the Ivory Soap brand, which was originally introduced in 1879. Bernays’ PR strategy was to convince the public that Ivory Soap was medically superior to any other product on the market, and he used really creative and fun tactics to accomplish this, including soap sculpting and floating contests.
Lesson: Use PR to keep audiences involved and engaged. Create opportunities for people to become emotionally invested in what you want to convey. Bernays created a sense of well-being with his Ivory Soap campaign and interjected a lot of fun into it with contests.
Earlier this year, the Stratosphere Hotel & Casino in Las Vegas kicked off a $20 million renovation with a nationally televised opening of a controlled free-fall ride called the SkyJump. The Today Show got the national broadcast exclusive of the first jump and the AP earned the print exclusive. The campaign got hundreds of thousands of people to take action, ride the SkyJump and visit the website.
In sum: PR, when implemented in a thoughtful and strategic manner, can dramatically influence the perception of a company or product in the public eye. Use PR to deliver ideas, listen to what your constituents have to say, and keep conversations alive all day and all night.
“This is an age of mass production. In the mass production of materials a broad technique has been developed and applied to their distribution. In this age, too, there must be a technique for the mass distribution of ideas.” Edward L. Bernays
Multivariate statistics, or multivariate statistical analysis, describes a collection of procedures which involve observation of more than one statistical variable at a time.
- Correlation analysis simply tries to establish whether or not there are linear relationships among the variables.
- Regression analysis attempts to determine a linear formula that can describe how some variables respond to changes in others.
- Principal components analysis attempts to determine a smaller set of synthetic variables that could explain the original set (see the sketch below).
- Discriminant function or canonical variate analysis attempts to establish whether a set of variables can be used to distinguish between two or more groups.
- Principal coordinate analysis attempts to determine a set of synthetic variables that best preserves the distance relationships between records.
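For readers who want the principal components idea in symbols, here is a minimal sketch (the notation is ours, not part of the original entry): center each column of the n-by-p data matrix X to zero mean, form the sample covariance matrix, and take its leading eigenvectors as the synthetic variables.

    S = \frac{1}{n-1} X^{\top} X                      % sample covariance matrix

    S v_k = \lambda_k v_k, \qquad \lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_p \ge 0

    Z = X [\, v_1 \; \cdots \; v_m \,], \qquad m < p   % scores on the first m principal components

The first m eigenvectors capture as much of the total variance as any m linear combinations can, which is the sense in which the smaller set of synthetic variables "explains" the original set.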
When you encrypt a database you must also specify a boot password,
which is an alpha-numeric string used to generate the encryption key.
The length of the encryption key depends on the algorithm used:
AES (128, 192, and 256 bits)
DES (the default) (56 bits)
DESede (168 bits)
All other algorithms (128 bits)
Note: The boot password should have at least as many characters as number
of bytes in the encryption key (56 bits=8 bytes, 168 bits=24 bytes, 128 bits=16
bytes). The minimum number of characters for the boot password allowed by Derby is eight.
It is a good idea not to use words that would be easily guessed, such as
a login name or simple words or numbers. A bootPassword, like any password,
should be a mix of numbers and upper- and lowercase letters.
You turn on and configure encryption and specify the corresponding boot
password on the connection URL for a database when you create it:
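For example, a connection URL along the following lines creates a new database with encryption turned on; dataEncryption, encryptionAlgorithm and bootPassword are the relevant connection attributes, and the database name and boot password shown are placeholders:

    jdbc:derby:encryptedDB;create=true;dataEncryption=true;encryptionAlgorithm=AES/CBC/NoPadding;bootPassword=clo760uds2caPe

From Java, the same URL can be passed to the embedded driver (derby.jar must be on the classpath):

    import java.sql.Connection;
    import java.sql.DriverManager;

    public class CreateEncryptedDatabase {
        public static void main(String[] args) throws Exception {
            // dataEncryption turns encryption on; encryptionAlgorithm is optional
            // (DES is the default); bootPassword must be at least eight characters.
            String url = "jdbc:derby:encryptedDB;create=true"
                    + ";dataEncryption=true"
                    + ";encryptionAlgorithm=AES/CBC/NoPadding"
                    + ";bootPassword=clo760uds2caPe";
            try (Connection conn = DriverManager.getConnection(url)) {
                // The database now exists on disk in encrypted form.
            }
        }
    }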
Note: If you lose the bootPassword and the database is not currently
booted, you will not be able to connect to the database anymore. (If you know
the current bootPassword, you can change it. See Encrypting databases with a new key.)
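As a sketch of that procedure, assuming you know the current boot password, the newBootPassword connection attribute can be supplied when the database is booted; the database name and passwords below are placeholders:

    import java.sql.Connection;
    import java.sql.DriverManager;

    public class ChangeBootPassword {
        public static void main(String[] args) throws Exception {
            // Boot the encrypted database with its current password and supply a
            // replacement; subsequent boots must use the new boot password.
            String url = "jdbc:derby:encryptedDB"
                    + ";bootPassword=clo760uds2caPe"      // current boot password
                    + ";newBootPassword=caPe2clo760uds";  // replacement, also 8+ characters
            try (Connection conn = DriverManager.getConnection(url)) {
                // The password change takes effect on this boot.
            }
        }
    }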
Do you italicize in an essay the names of characters from a book?
June 9th, 2013
I'm writing an essay in MLA format and I cannot remember. I know you italicize the book's title, but what about the characters' names from that book?
- Grammar in an Essay Question? - Is it better to italicize or underline a book title? Like if the subject line is the book title, is it better to have it italicized, and if the title is mentioned in the essay… underline it? Any clue on what to do? If it matters, it should be MLA format
- Help with this book! Please!? - Ok so I read this book in middle school and now I want to write an essay on it. But I do not remember the book's name. All I remember is that there were a couple of kids and I think they went to Mexico for a trip. I also remember that they were in danger and someone
- In a essay or paper what do you do to the title of a survey? - I know you italicize the title of a book but what do you do to the title of a survey? I’m not so sure about this one… It seems a bit more tricky. I tried Google-ing it and nothing came up.
- How do I put the title of a play in an essay? - When I am putting the title of a play in an enthymeme-formatted essay, do I put the title in quotes or do I italicize it? It is also MLA format.
- What are some similarities and differences from the book the outsiders, and the movie the outsiders? - I read this book & watched the movie last year, & don't remember anything. I have to write an essay on it, also. I know this question might have been asked before, too. I know this is a good book, & that the book is better, I just can't really remember any specific similarities and
- MLA format essay. Need help setting up! Please Help?!? - So I needed to do an MLA format essay for English using quotations throughout the essay from a book we just read. The book is The Kite Runner by Khaled Hosseini. Now the essay is done, got my quotes and everything but now I need to go back and add the MLA stuff to it
- I HAVE A QUESTION REGARDING WRITING AN ESSAY? - Are you allowed/supposed to use italics when writing an essay on a book? If so, do you use italics for: 1) author's name, 2) title of the book, 3) characters in the book, 4) quotes, 5) other areas you can think of
The Weary Blues Introduction
In A Nutshell
By 1923, the twenty-two-year-old Langston Hughes had traveled half the globe, dropped out of Columbia University, and written some pretty kickin' poems. But "The Weary Blues" is the first poem for which Langston Hughes got an award. Originally from the Midwest, Hughes's one year at Columbia brought him to Harlem, New York at just the right time for a hip, young poet with a sense for adventure. The Harlem Renaissance, a boom-time in African American art, literature, and music, was just getting started, and Hughes caught a wave of support and interest. His first book of poems, The Weary Blues (1926), won him all kinds of awards and money – enough money to go back to school and finish this time.
In-the-know people, especially those in Harlem, were blown away by "The Weary Blues." It was so different from the stuffy, rigid poetry that passed for the standard of excellence. Hughes played loose-and-free with his lines and rhythm (sort of like Walt Whitman did). In fact, the poem is like jazz or blues music. Later in his career, Langston Hughes recorded and performed poems like the "The Weary Blues" with a jazz band. Sure T.S. Eliot threw some of that jazz stuff into The Wasteland (1922); but Eliot's "music" was like listening to your grandparents' scratchy records, while Hughes's was more like a live jam session in a smoky, gin-soaked bar.
Why Should I Care?
Has someone ever told you that the music you listen to "isn't music"? Well, that happened to a lot of African American artists and art fans in the 1920s. Sometimes people got down on African American artists because a white art critic didn't think that Africa had a tradition of Great Art. Other times people just thought art had a lot of rules, and you couldn't just go around breaking the rules. You can just hear them say, "Why, it would be anarchy… art with no rules?!" Monocles would fall into champagne glasses all over America and Europe. But in the early 1900s, all the rules seemed to be changing – so why not in poetry too?
Langston Hughes and a lot of the other artists, poets, and musicians were mixing it up. African Americans in cities like New York, Washington, DC, and Chicago were coming up with their own styles.
Jazz and urban blues music were getting a lot of attention. Sure there were rules in jazz and blues, but the rules were more of a skeleton or a set of building blocks. So, like a blues musician, Hughes had a basic rhythm and varied it to fit the words or the changing mood of his poems. Sometimes he spelled words to match the way real people talk. And as you will see, he even samples music like a DJ does. Yes, Langston Hughes was doing mash-ups before Danger Mouse was a twinkle in his great-grandpa's eye.
A History of the Battle of Britain
Michael Korda, former editor in chief at Simon and Schuster and a Royal Air Force veteran, has written acclaimed biographies of Ulysses S. Grant (2004) and Dwight D. Eisenhower (Ike, Nov/Dec 2007).
The Topic: In 1940, Hitler's Germany planned a full-scale invasion of England, code-named Operation Sea Lion, and launched a massive air assault to clear the way. The German bombers were met by a swift, highly organized Royal Air Force that, over the summer, wore down the Third Reich's forces and made impossible any thought of a full-scale invasion of Britain. The nimble, single-engine Spitfire and Hurricane planes, as well as an extensive radar and radio system, were the innovations of Air Chief Marshal Sir Hugh Dowding, Korda's hero in this immersive history of the Battle of Britain. Dowding was a pugnacious, controversial figure whose foresight, organization, and tactical brilliance, in Korda's opinion, changed the direction of World War II.
HarperCollins. 336 pages. $25.99. ISBN: 0061125350
"Exploring the rough-and-tumble politics, personalities and preoccupations of pre-war Britain, [Korda] demonstrates that the country actually was uniquely prepared for the coming battle. … The author always manages to bring [the action overhead] back to earth with gripping human stories." John Barron
Dallas Morning News
"Often, accounts of this battle focus on Prime Minister Winston Churchill. … But in With Wings Like Eagles, the author manages to move the prime minister into a supporting role—no easy task." Kasey S. Pipes
Wall Street Journal
"There is something bold and refreshing about With Wings Like Eagles. While so many writers work the nooks and crannies of history in search of the new angle or revisionist take, Mr. Korda goes for the undeniably great, well-known story, never mind that new angles are few or that it has been told countless times before." Tom Nagorski
"Korda gives us the courage and cussedness of a man who stuck to what he believed in—he was prepared to face down Churchill if necessary—and cared not a jot for popularity so long as he got his way. … Korda details the battle itself day by day: the tactics, the targets, the successes, the losses and the shifting political and operational arguments on both sides." Diana Preston
Rocky Mountain News
"Korda does a good job of describing the infrastructure and strategy that were keys to the Battle of Britain. [But he] tends to write the kind of long, complicated sentences that cry for in-air refueling." Dan Danborn
A key military engagement—Korda ranks it as "one of the four most crucial victories in British history"—the Battle of Britain has been written about extensively. What Korda achieves here is an elegant reexamination that looks beyond the long shadow and statesmanship of Winston Churchill to consider the impressive legacy of Air Chief Marshal Hugh Dowding. Critics agree that, whatever the title, this is largely Dowding's book. Korda intersperses compelling in-the-cockpit battle scenes with on-the-ground reportage, but in the end, the book is "less about [the young pilots] and more about the foresight and tactics that won the Battle of Britain" (Wall Street Journal).
Coordinated Universal Time
Coordinated Universal Time (UTC) is the standard time system of the world. It is the standard by which the world regulates clocks and time. It is, within about 1 second, mean solar time at 0° longitude.
Some websites, like Wikipedia, use UTC because it does not make any country look more important than the others. It offers one time for all the internet (the same time can be used by people all over the world).
Time zones are often named by how many hours they are different from UTC time. For example, UTC -5 (United States east coast) is 5 hours behind UTC. If the time is 07:00 UTC, the local time is 02:00 in New York (UTC -5) and 10:00 in Moscow (UTC +3).
07:00 UTC is also written more simply as 0700Z (or 07:00Z).
Note that UTC uses the 24-hour clock. That means there is no 'AM' or 'PM'. For example, 4:00PM would be 16:00 or 1600.
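As a short, hedged illustration, the offset conversions above can be reproduced with Python's standard datetime module (fixed offsets only; real time zones like New York's also shift for daylight saving, which this sketch ignores):

    from datetime import datetime, timezone, timedelta

    utc_time = datetime(2016, 6, 28, 7, 0, tzinfo=timezone.utc)  # 07:00 UTC
    new_york = timezone(timedelta(hours=-5))                     # UTC -5
    moscow = timezone(timedelta(hours=+3))                       # UTC +3

    print(utc_time.astimezone(new_york).strftime("%H:%M"))       # 02:00
    print(utc_time.astimezone(moscow).strftime("%H:%M"))         # 10:00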
Project-Based Learning (Learning and Mathematics)
Blumenfeld, Soloway, Marx, Krajcik, Guzdial, & Palincsar; Math Forum

In this 1996 article, "Motivating project-based learning: Sustaining the doing, supporting the learning," Blumenfeld and her colleagues at the University of Michigan describe project-based learning and the benefits of using long-term projects as part of classroom instruction. The authors believe that projects have the potential to foster students' learning and classroom engagement by combining student interest with a variety of challenging, authentic problem-solving tasks. In their discussion of the essential components of project-based learning, the authors pay close attention to the design of projects with regard to classroom factors and teacher and student knowledge. After considering the possible challenges that face teachers using projects in their classrooms, the authors go on to describe how technology may be used as a support system by teachers and students involved in long-term projects. A geometry.pre-college newsgroup discussion.

Levels: Elementary, Middle School (6-8), High School (9-12)
Resource Types: Discussion Archives, Articles
Math Ed Topics: Pedagogical Research
Species preservation and population size: when eight is not enough
Scientists estimate that about 1000 nesting Kemp's Ridley sea turtles, 300 right whales, and 65 northern hairy-nosed wombats survive in the wild, to name just a few of the world's endangered species.1 But what do those numbers mean? Are 65 hairy-nosed wombats enough to save a species teetering on the edge of extinction? Ignoring evolutionary history, one might answer, "Sure; as long as they can breed, we only need a few individuals to start a new population." But evolutionary theory tells a different story.
According to evolutionary theory, very small populations face two dangers: inbreeding depression and low genetic variation. Either one might keep them from recovering, despite our best efforts to preserve them.
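One way to see why a handful of survivors is not enough: under random genetic drift, a population of size N is expected to lose a fraction 1/(2N) of its heterozygosity (a standard measure of genetic variation) every generation. The sketch below is illustrative only; the 100-generation horizon and starting value are arbitrary choices, and real populations face additional pressures.

    def heterozygosity(h0: float, n: int, generations: int) -> float:
        # Expected heterozygosity after drift: H_t = H_0 * (1 - 1/(2N))^t
        return h0 * (1 - 1 / (2 * n)) ** generations

    for n in (8, 65, 1000):
        left = heterozygosity(1.0, n, 100)
        print(f"N = {n:4d}: about {left:.0%} of initial variation remains")
    # Roughly: N = 8 keeps ~0.2%, N = 65 (the wombats) ~46%, N = 1000 ~95%.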
1 According to US Fish and Wildlife Service, 2002.
iffinder
iffinder works by sending a "probe" UDP packet to an unused port on an interface address. Many routers will reply to such a packet with an ICMP PORT UNREACHABLE error with the source address set to that of the interface on the unicast route back to the prober. So probing one interface and getting this error from a different interface is a strong suggestion that the two interfaces belong to the same network node.

This method was described in J.-J. Pansiot and D. Grad's "On routes and multicast trees in the Internet" and R. Govindan and H. Tangmunarunkit's "Heuristics for Internet Map Discovery".
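To make the mechanism concrete, here is a minimal Python sketch of the alias-probing idea. This is an illustration of the technique only, not iffinder's actual code (iffinder is a C tool); the destination port 33434, the timeout, and the helper name are arbitrary choices, and the raw ICMP socket requires root privileges.

    import socket

    def probe(target: str, port: int = 33434, timeout: float = 2.0):
        """Send a UDP packet to an (almost certainly) unused port on target
        and return the source address of an ICMP Port Unreachable reply.
        A reply source that differs from target suggests the two addresses
        are interfaces on the same router."""
        udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        icmp = socket.socket(socket.AF_INET, socket.SOCK_RAW,
                             socket.getprotobyname("icmp"))
        icmp.settimeout(timeout)
        try:
            udp.sendto(b"", (target, port))
            while True:
                pkt, (src, _) = icmp.recvfrom(4096)
                ihl = (pkt[0] & 0x0F) * 4      # IP header length in bytes
                # ICMP type 3, code 3 = destination unreachable / port
                # unreachable. A real tool would also check the embedded
                # original packet to confirm the error matches this probe.
                if pkt[ihl] == 3 and pkt[ihl + 1] == 3:
                    return src
        except socket.timeout:
            return None
        finally:
            udp.close()
            icmp.close()

If probing 198.51.100.7 (a documentation-range placeholder) returned, say, 198.51.100.1, the two addresses would be candidate aliases for the same router.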
iffinder can also discover new interfaces in several ways. Whenever iffinder discovers a new interface, it adds the interface to its list of probe targets.
- IP Record Route [RFC 791]
  iffinder can use the IP RECORD ROUTE option in its probes. Not all routers support this option, and it is only capable of recording 9 addresses along the path, but it can nevertheless result in discovering many new interfaces. And since routers that support Record Route will record the address of the "far" (outgoing) interface on the probe packet's path, while a Port Unreachable probe to this interface will usually find a router's "nearest" interface, this will often result in discovering a pair of interfaces belonging to the same router.
- ICMP errors from intermediate routers
  If the source of the ICMP error was previously unknown, iffinder saves it. Additionally, since ICMP error packets contain the IP header of the original packet which caused the error, iffinder retrieves Record Route data from both the error packet and the embedded original packet. The original packet contains Record Route data that would otherwise have been lost.
- The gateway given in an ICMP REDIRECT error
  ICMP REDIRECT errors contain a preferred gateway address. If that address was previously unknown, iffinder adds it to its list of probe targets.
- "/30 mates"
  Frequently, a link between two internal routers is defined as a /30 subnet, so the interfaces at either end of the link have the same 30-bit network number and a 2-bit host number; one interface has the host number 1, and the other has host number 2. (Host numbers 0 and 3 are not valid.) Given a valid address within a /30 subnet, we define its "/30 mate" as the other address within that /30 subnet. For every known interface that is a valid /30 address, if iffinder does not already have its /30 mate in its list, iffinder will probe the /30 mate (see the sketch after this list). Since iffinder is only guessing at the existence of this address, if the address does not respond iffinder will discard it (and never probe it again, unless some other means proves it exists).
- IP Traceroute [RFC 1393]
  When an intermediate node forwards a packet containing the IP Traceroute option, it should send an ICMP TRACEROUTE packet containing its own address back to the source. However, initial experiments suggest that this option is very rarely supported, and that some hosts will even drop packets containing it. Also, it reduces by 3 the number of addresses that can be recorded in the Record Route option. Using it would therefore be likely to make iffinder less effective, so we do not use this option in most runs.
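The "/30 mate" computation referenced above is simple bit arithmetic; here is a sketch using Python's standard ipaddress module (the function name and the documentation-range example addresses are illustrative):

    import ipaddress

    def slash30_mate(addr: str):
        """Return the /30 mate of an IPv4 address, or None if the address
        has host number 0 (network) or 3 (broadcast) within its /30."""
        a = int(ipaddress.IPv4Address(addr))
        host = a & 0x3              # low two bits select the host in the /30
        if host == 1:
            return str(ipaddress.IPv4Address(a + 1))
        if host == 2:
            return str(ipaddress.IPv4Address(a - 1))
        return None                 # host numbers 0 and 3 are not valid

    assert slash30_mate("192.0.2.1") == "192.0.2.2"
    assert slash30_mate("192.0.2.2") == "192.0.2.1"
    assert slash30_mate("192.0.2.0") is None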
When using Record Route or IP Traceroute, some probes result in an ICMP_PARAMPROB response or no response at all, if a router is buggy, configured to ignore Record Route, or whatever. In such cases, iffinder will retry the probe without IP options, often resulting in a useful response.
Operational details: To avoid conflicts with traceroute processes and other
iffinder processes running on the same host,
iffinder chooses its source port in the same way as traceroute,
as a function of its process id.
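The exact port formula is not documented here; as a hedged illustration, classic traceroute derives a per-process identifier roughly as follows and uses it as the source port, so that concurrent probers on one host can tell their ICMP errors apart:

    import os

    # Assumption: this mirrors classic traceroute's ident computation;
    # iffinder's actual formula may differ.
    source_port = (os.getpid() & 0xFFFF) | 0x8000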
Definitions of terms used in the results below:
- experiment: a series of 1 to 3 probes run on a particular interface
- failed experiment: an experiment with no useful result
- probe: a single UDP packet sent to an interface
- unroutable addrs: IANA reserved, loopback, private (RFC 1918), multicast
- noise: ICMP messages received not for iffinder (e.g., for traceroute)
- joins: pairs of interfaces that were found to belong to the same node (the current version of iffinder does not report this; instead, it records raw response data, which must be postprocessed)
- new interfaces: interfaces not in the input, discovered by iffinder receiving responses
- original interfaces: interfaces in the input (i.e., discovered by the scamper tool)
An early version of iffinder (without Record Route, Traceroute, ICMP REDIRECT, or /30 mating) was run on all0901-19lhrwaisin0801-25.nodest.act.ips, containing 357054 unique routable unicast addresses.

    # started: 2000-09-26 18:13:22
    # elapsed time: 17:36:47
    # experiments: 359668
    # failed experiments: 70203
    # probes: 505347
    # responses
    #   from unroutable addrs: 6
    #   good (port unreach): 289465
    #   other icmp errors: 35803
    #   timeouts: 180073
    #   noise: 382
    # joins: 29893
    # new interfaces: 2692
    # total interfaces: 359668
    # nodes with >1 iface: 18556
    # ifaces on such nodes: 48005
    # single interfaces: 311663

About 13% of the interfaces were matched up with other interfaces.
Histogram data for matched interfaces (columns: # of interfaces, # of nodes with this many interfaces).
iffinder with Record Route support was run again. (A previous implementation of Record Route was found to be buggy, making the results previously reported here on the effectiveness of Record Route meaningless.)

Summary of results:

    # started: 2000-10-03 00:00:46
    # elapsed time: 16:11:06
    # experiments: 358516
    # failed experiments: 228155
    # probes: 495507
    # responses
    #   from unroutable addrs: 6
    #   good (port unreach): 130361
    #   other icmp errors: 197572
    #   timeouts: 167568
    #   noise: 24164
    # joins: 16676
    # total interfaces: 358516
    # new interfaces: 1540
    #   by port unreachable: 799
    #   by other icmp err: 729
    #   by record route: 12
    # nodes with >1 iface: 10321
    # ifaces on such nodes: 26775
    # single interfaces: 331741

Previous experiments indicated that a large fraction of routers support Record Route. If the 9-hop range of Record Route covered a large set of addresses, the fact that Record Route in iffinder discovered only 12 previously unknown interfaces would suggest that skitter (the predecessor to scamper) did a good job of finding interfaces within that range. However, the first 8 hops between the probing host in this iffinder run and the rest of the Internet were almost always the same, so the number of nodes covered by Record Route in this run was very small. Running this test from more and better-located hosts (say, every Ark monitor) gives more useful results.
Progressive Supranuclear Palsy Topic Guide
Progressive Supranuclear Palsy: Progressive supranuclear palsy (PSP) is a disease that causes the brain to degenerate, leading to problems with movement and balance and to loss of cognitive function, memory, speech, and attention. This loss ultimately leads to dementia, and PSP usually affects those over 60. There is no cure for this disease, whose symptoms are sometimes similar to those of Parkinson's disease.
Dementia is the loss of reasoning, memory, and other mental abilities. Dementia may be caused by irreversible conditions such as Alzheimer's disease, Parkinson's dementia, Lewy body dementia, and vascular dementia. There are also treatable causes of dementia, such as infections, head injury, normal pressure hydrocephalus, and metabolic and hormonal disorders. Early symptoms of dementia include problems with memory, reasoning, and other mental abilities.

A variety of tests (blood tests, scans, assessment of family history) may be used to diagnose dementia. Treatment may include medication and behavioral therapy.
By Dr. Mercola
Processed foods can contain any number of the thousands of additives used by the food industry. Many are under the mistaken belief that such additives must have gone through stringent testing to prove their safety, but that's oftentimes not the case at all.
Shocking as it may sound, food additives are not automatically required to get premarket approval by the US Food and Drug Administration (FDA).1,2,3,4,5
As explained in the featured video, items that fall under the "generally recognized as safe" (GRAS) designation are exempt from the approval process altogether. This is a loophole stemming from the 1958 Food Additives Amendment, which excludes GRAS items from the formal FDA approval process for food additives.
You might also recognize that this is how Monsanto and other agribusinesses snuck GMO foods into our food supply, as the FDA classified them as GRAS in 1992. Steven Druker revealed in my interview with him how his lawsuit to reverse this was lost on technicalities.
Outdated Law Lets Unsafe Ingredients into the Food Supply
The problem is, the chemical concoctions used in processed foods today didn't exist in the 1950s when the amendment was written into law. At the time, it was meant to apply to common food ingredients like vinegar and baking soda—regular cooking ingredients known through their historical use as being safe.
Nowadays however, countless manufactured ingredients end up slipping through this loophole. Another part of the problem is the fact that food companies are allowed to determine, on their own, whether an ingredient is GRAS.
A company can simply hire an industry insider to evaluate the chemical, and if that individual determines that the chemical meets federal safety standards, it can be deemed GRAS.
At that point, the company doesn't even need to inform the FDA that the ingredient is used, and no independent third party objective evaluation is ever required.
According to Center for Science in the Public Interest6,7 (CSPI), at least 1,000 ingredients are added to our food that the FDA has no knowledge of.
According to a CSPI investigation,8 these industry experts are a small tight knit group of scientists, many of whom have ties to the tobacco industry. According to Laura MacCleery, an attorney for CSPI:
"These are standing panels of industry hired guns. It is funding bias on steroids."
As if that's not bad enough, if a company does choose to notify the FDA, and the FDA disagrees with the company's determination that the item is GRAS, the company can simply withdraw its GRAS notification and go ahead and use it anyway, as if no questions were ever raised.
This legal loophole allows food manufacturers to market novel chemicals in their products based on nothing but their own safety studies, and their own safety assessments—the results of which can be kept a secret.
Food Ingredient Approval Process Violates the Law
Together with the Consumers Union, the Environmental Working Group, and the Natural Resources Defense Council, CSPI has filed an 80-page long regulatory comment9 stating that the process for determining GRAS substances is in violation of the 1958 law, which requires the FDA to determine the safety of an ingredient before it can be used in food.
According to the CSPI:10,11
"That law acknowledged that the FDA need not require pre-market testing of substances that had long been used in foods or that were well-recognized as safe by scientists.
But in a rulemaking opened by the agency in 1997—but never finalized—FDA weakened the standards for what could be considered GRAS and proposed making permanent what the groups say is an illegal program of GRAS determinations by the food industry, often done in secret...
'The FDA must provide better oversight over all of the substances that are put in our food, especially those whose safety is in question,' said EWG Research Director Renee Sharp.
'Any safety determination should be based on publicly available scientific data, not the opinions of 'expert panels' that likely have conflicts of interest with food additive regulation.'"
How Can Brand New Technologies Be GRAS?
Today we're also contending with novel nanotechnologies such as taste-modifying chemicals that allow a company to reduce the fat or sugar content of a food.

These additives do not even have to be listed on the label. Instead, they fall under the general category of "artificial flavors," even though they do not actually have or add any flavor per se.
There's absolutely no telling what these agents are, or whether or not they're safe. As noted by Michael Hansen, a senior scientist with the Consumers Union:
"Any substance added to food created by using new science or technology, including nanomaterials, should be required to undergo a safety assessment prior to marketing and so should categorically be denied GRAS status."
To combat this runaway situation, the groups make several recommendations they believe would bring the FDA's proposal on GRAS in line with the 1958 food additives law. Their recommendations include:
- Denying GRAS for novel chemicals and substances flagged as potentially risky by authoritative scientific bodies
- Denying GRAS notifications based on unpublished studies
- GRAS notifications must be made by experts without conflicts of interest
- GRAS notifications should be mandatory and public
Examples of Hazardous GRAS Ingredients
As noted in a report12 by the Natural Resources Defense Council (NRDC) titled: Generally Recognized as Secret: Chemicals Added to Food in the United States:
"A chemical additive cannot be 'generally recognized as safe' if its identity, chemical composition, and safety determination are not publicly disclosed. If the FDA does not know the identity of these chemicals and does not have documentation showing that they are safe to use in food, it cannot do its job."
One now "classic" example of the GRAS process gone awry is artificial trans fat, which was originally considered GRAS. Faced with a mountain of evidence, the FDA has now deemed trans fats dangerous, saying they cause as many as 7,000 deaths from heart disease each year. There is little question in my mind that you will see the same reversal on GMO foods. Thankfully you don't you have to wait decades for the FDA, as you can avoid being harmed by them now by refusing to purchase them at the store or in a restaurant.
Another example is lupin—a legume related to peanuts—which can be found in many processed foods and gluten-free items. The FDA originally denied the GRAS notification for lupin,13 because it poses dangers to those with peanut allergies. Any food containing lupin would have to have a peanut allergy warning label. In response, the GRAS notification was withdrawn, and lupin is now added to foods without FDA oversight. Nor do such items have warning labels for those with potentially lethal peanut allergies...
The meat substitute known as quorn—a fungus-based mycoprotein—is another hazardous ingredient in our food supply that has been deemed GRAS. This ingredient appears to have been responsible for the death of a young boy in 2013 who was allergic to mold. His parents recently filed a wrongful death suit against the manufacturers,14,15 charging them with product liability design defects, failure to warn, and false and misleading advertising.
When Used in Combination, Food Additive Hazards Are Amplified
What little risk assessment is done is typically done on individual chemicals in isolation, and mounting research now suggests that when you consume multiple additives in combination, the health effects may be more serious than previously imagined. A recent assessment16 done by the National Food Institute at the Technical University of Denmark found that even small amounts of chemicals can amplify each other's adverse effects when combined. As reported by the Institute:
"A recently completed, four-year research project on cocktail effects in foods... has established that when two or more chemicals appear together, they often have an additive effect. This means that cocktail effects can be predicted based on information from single chemicals, but also that small amounts of chemicals when present together can have significant negative effects.
'Our research shows that indeed, little strokes fell great oaks also when it comes to chemical exposure. Going forward this insight has a profound impact on the way we should assess the risk posed by chemicals we are exposed to through the foods we eat,' Professor Anne Marie Vinggaard from the National Food Institute says."
Dietary Toxins Likely Account for 90 Percent of Diseases
Food additives are becoming of increasing concern these days. Health statistics suggest the toxic burden is becoming too great for children and adults alike, and while environmental toxins such as pollution are clearly a concern, toxins in our food simply cannot be ignored any longer. According to Joseph E. Pizzorno,17 founding president of Bastyr University, co-author of the Encyclopedia of Natural Medicine and The Clinician's Handbook of Natural Medicine, and former advisor to President Clinton on complementary and alternative medicines, toxins in the modern food supply are now "a major contributor to, and in some cases the cause of, virtually all chronic diseases."
Dr. David Bellinger, a professor of Neurology at Harvard Medical School has expressed similar concerns. According to his estimates, Americans have lost a total of 16.9 million IQ points due to exposure to organophosphate pesticides.18 Pizzorno believes pesticides may also play a significant role in the worldwide obesity epidemic, saying: "Researchers are now finding such a strong connection between the body load of these chemicals [contaminating the food supply] and diabetes and obesity that they are being called 'diabetogens' and 'obesogens'."
Pizzorno also points out that our modern food supply (most of which is heavily processed) also hampers your body's detoxification process as a result of being deficient in key nutrients. An interesting admission and change of thought expressed on the Centers for Disease Control and Prevention's (CDC) webpage on exposomics19 is the fact that, conversely to what researchers originally thought, the vast majority of diseases do NOT appear to have a genetic origin. According to the CDC:
"One of the promises of the human genome project was that it could revolutionize our understanding of the underlying causes of disease and aid in the development of preventions and cures for more diseases. Unfortunately, genetics has been found to account for only about 10% of diseases, and the remaining causes appear to be from environmental causes. So to understand the causes and eventually the prevention of disease, environmental causes need to be studied."
Glyphosate Residues in Food May Also Be a Significant Health Threat
Americans in particular also have to contend with the fact that a vast majority of our processed foods contain unlabeled genetically engineered ingredients that tend to be heavily contaminated with the toxic herbicide glyphosate (the active ingredient in Monsanto's Roundup). Experts like Dr. Don Huber strongly believe that glyphosate is actually more toxic than DDT, and the International Agency for Research on Cancer (IARC) recently classified glyphosate as a Group 2A carcinogen ("probably carcinogenic to humans").20,21,22
The US EPA has now announced23 that US regulators may start testing for glyphosate residues on food in the near future to quell consumer concerns. But while that's good news, it's also worth noting that the EPA raised the allowable limits for glyphosate in food in 2013, and the allowable levels may now be too high to protect human health, based on mounting research.24,25
The Saturated Fat Myth in Action...
While many hazardous ingredients are given a free pass, the FDA is cracking down on food manufacturers advertising high saturated fat items as "healthy." This is yet another misguided action based on flawed science. Conventional advice calls for keeping your saturated fat intake below 10 percent a day, while mounting research indicates most people need far more than that—those with insulin resistance may need more than 50 percent of their daily calories from healthy fat. As reported by Philly.com:26
"The word 'healthy' should be removed from the labels of four types of Kind granola bars because they contain higher levels of saturated fat than is acceptable under regulatory standards for the term, the U.S. Food and Drug Administration says... In a statement on its website, Kind said it is changing the labeling for the four products...
However, the company said the fat content in nuts, one of the main ingredients in the bars, isn't unhealthy... 'This is similar to other foods that do not meet the standard for use of the term healthy, but are generally considered to be good for you like avocados, salmon, and eggs,' according to Kind."
Where to Find the Most Wholesome Food
As a general rule, a diet that promotes health is high in healthy fats and very, very low in sugar and non-vegetable carbohydrates, along with a moderate amount of high-quality protein. For more specifics, please review my free optimized nutrition plan, which also includes exercise recommendations, starting at the beginner's level and going all the way up to advanced. Organic foods are generally preferable, as this also cuts down on your pesticide and GMO exposure. If you're unsure of where to find wholesome local food, the following organizations can help:
- Local Harvest -- This Web site will help you find farmers' markets, family farms, and other sources of sustainably grown food in your area where you can buy produce, grass-fed meats, and many other goodies.
- Eat Wild: With more than 1,400 pasture-based farms, Eatwild's Directory of Farms is one of the most comprehensive sources for grass-fed meat and dairy products in the United States and Canada.
- Farmers' Markets -- A national listing of farmers' markets.
- Eat Well Guide: Wherever you are, Eat Well -- The Guide is a free online directory of more than 25,000 restaurants, farms, stores, farmers' markets, CSAs, and other sources of local, sustainably produced food throughout the US.
- FoodRoutes -- The FoodRoutes "Find Good Food" map can help you connect with local farmers to find the freshest, tastiest food possible. On their interactive map, you can find a listing for local farmers, CSAs, and markets near you.
Grab some popcorn, turn on some Pink Floyd, and prepare to have your mind blown. Astronomers have created the most advanced simulation to date of the evolution of the universe over billions of years.
The simulation, called Illustris, begins just 12 million years after the Big Bang and illustrates the formation of stars, heavy elements, galaxies, exploding supernovae, and dark matter over the roughly 14 billion years since. The simulation encapsulates the universe in a cube roughly 350 million light-years on each side. The results were presented in a paper published in the journal Nature.
Beefed up Computer Power
In order to capture the history of the universe in a box, you need a lot of computing power. Astronomers dedicated five years to programming Illustris, and it took 8,000 CPUs running in unison three months to crunch all the numbers that the model is based on, according to the Illustris website. It would have taken an average desktop computer over 2,000 years to complete the calculations.
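The 2,000-year figure follows directly from the arithmetic (a quick check):

    cpus = 8000
    wallclock_months = 3
    cpu_months = cpus * wallclock_months   # 24,000 CPU-months of work
    print(cpu_months / 12)                 # 2000.0 years on a single CPU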
Previous simulations, limited by computing power, either focused on a very small corner of the universe or displayed results in low resolution.
Researchers can use the tool to study cosmic phenomena, such as galaxy formation, at specific points in the history of the universe.
And just like the actual universe, developers say there are still many areas of the simulated universe that remain unexplored as they continue to investigate its results.
12. Clear Lake holds two hundred fifty and 9/10 acre feet of water. Lake Sonoma holds eighteen and 4/5 acre feet. How many Lake Sonomas will it take to fill Clear Lake?
How many times will 18 and 4/5 go into 250 and 9/10?
We are dividing mixed numbers so turn them into improper fractions.
Invert and multiply
The 5 will cancel with the 10, but we should check whether the 2509 will cancel with the 94. Well, 94 = 2 × 47, so the only primes that go into 94 are 2 and 47. 2 doesn't go into 2509 because 2509 is an odd number, and you can check that 47 does not go into 2509 either (47 × 53 = 2491 and 47 × 54 = 2538), so they don't cancel. Our answer is then 2509/188. This is one instance where a mixed number would be more meaningful than an improper fraction. Divide the bottom into the top: 188 × 13 = 2444, with remainder 2509 − 2444 = 65.
so there would be enough water in Clear Lake to fill up Lake Sonoma 13 and 65/188 times (about 13.35 times).
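As a purely illustrative check, here is the same arithmetic with Python's exact Fraction type:

    from fractions import Fraction

    clear_lake = Fraction(2509, 10)   # 250 9/10 acre-feet
    lake_sonoma = Fraction(94, 5)     # 18 4/5 acre-feet

    ratio = clear_lake / lake_sonoma
    print(ratio)                                       # 2509/188
    print(divmod(ratio.numerator, ratio.denominator))  # (13, 65) -> 13 65/188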
But Mr. Sinnett is hardly averse to having fun. With his playful sense of humor and hands-on approach, Mr. Sinnett often leads class activities from the floor, sitting among his students. "I want to be in there with them. So if they have questions, I can point things out to them. And I want to bond with them too. Create that sense of community."
Students enter his class performing at a wide range of levels, some speaking English fluently and some not at all. At first, Mr. Sinnett gives his English language learner students time to adjust, knowing that they understand more than they can communicate. As the year goes on, however, Mr. Sinnett expects to hear from them more and more. "When they feel comfortable, they'll start talking. I had one student who didn't speak at all for the first two months of school. And then, by the end of the year, she was talking all the time -- beautiful English. She had that knowledge; she was just afraid to use it."
Prompted by his school district's benchmarks, Mr. Sinnett emphasizes writing, and formally assesses his students' written work a few times a year. In the third grade, his students will take the Texas Assessment of Academic Skills, or TAAS, a standardized test that includes a writing section. Mr. Sinnett works with his students on particular writing skills, including how to make a plan for their writing -- at this stage an illustration of what they plan to write. The New Jersey Writing Project, a teacher's institute he has taken twice, has influenced his approach to teaching writing.
How to draw a tiger, step by step, with a video tutorial. In this tutorial, you will learn how to draw a tiger. We will start drawing from the head, then finish off with the rest of the body. The step-by-step images should help you follow along with the drawing video instruction with ease. You can scroll down further to see step-by-step images with instructions.
The tiger is the largest cat species, reaching a total body length of up to 11 ft and weighing up to 670 lb. Their most recognizable feature is a pattern of dark vertical stripes on reddish-orange fur with lighter underparts.
I don't specialize in drawing animals. But I decided that if I start drawing them, I could somehow let people know what is going on with wildlife when they search for these animals on Google. So from now on, I will put out animal drawing tutorials every now and then, along with some information about whether they are in danger of extinction and how you (WE) could help. I love animals.
FACT: As of 2011, we have lost 97% of our wild tigers in just over a century. With as few as 3,200 remaining, action is needed to increase and strengthen their habitat and protect the species from major threats such as poaching. You can help the big cats by donating at www.panda.org. Sharing is caring.
Here is a finished tiger drawing.
Below are the step-by-step stages for drawing the tiger's head. Step one, I draw a simple circular shape for its snout, then a big half-circle shape for the head. Step two, as you see in the image, I draw a nose and mouth onto its snout, then both eyes, lined up at about 2/3 of the head's height. Its ears line up right on top of each eye. Step three, I start shading the whole face with medium gray, then draw some fur on its face, cheeks, and chin. Step four, I then start drawing the tiger's stripes.
Then here are the steps for the tiger's massive body. Step five, I start drawing its front body first with the two legs, then slowly add the torso. Step six, now I draw its pelvis and back legs. Then I shade its whole body with medium gray but leave its underside a lighter value. Step seven, now it's time to add stripes. And done!
And below are the step-by-step stages for painting the tiger.
When it comes to climate change, Justin Trudeau, Canada’s new prime minister, seems to want a clean break from his Conservative predecessor, Stephen Harper. (As the Guardian reports, Harper has been accused of “muzzling government scientists and backtracking on climate promises.”)
Canada’s new Liberal government is talking of plans to price and cap carbon emissions. Provincial elections in 2015, including a new left-leaning government in Alberta which has already announced a carbon tax plan, may also boost the chances for Canadian climate solutions.
This month, Trudeau’s government held a National Climate Meeting in Vancouver with First Ministers and Indigenous leaders to develop a national climate policy framework for Canada. And just this week, four months after Trudeau’s election and eleven months before a new US president takes office, Trudeau and Barack Obama committed to work together on climate protections. They are expected to announce a series of common measures including a 45 percent cut in methane emissions from the oil and gas industry—a greenhouse gas that is roughly 80 times more potent than carbon dioxide—and protections for the Arctic (where scientists are now tracking the mildest winter ever recorded.)
With this window cracked open to North American cooperation and Canadian action, where do Canadians stand on climate issues?
Right on cue, the Yale Project on Climate Change Communication has mapped how Canadian perceptions of climate change, as well as support for carbon taxes and cap and trade, are distributed across the country, including projections by province and federal electoral district (or riding). (See also: Yale's meta-analysis and interactive mapping of US climate opinions at the state, Congressional district, and county level.)
Here are some highlights:
- At the national level, 79 percent of Canadian voters acknowledge climate change is happening. (Compare that to 63 percent of US voters.) Quebec has the highest proportion of adults (85 percent) who say it’s happening, but even in Alberta, where belief is lowest, 67 percent understand climate change is happening.
- Local variation in Canada is significant. In Nova Scotia, for example, 87 percent say it’s happening, while only 66 percent in Saskatchewan agree; 56 percent say climate change is happening in the Souris-Moose Mountain riding in Saskatchewan compared to 91 percent in the riding of Halifax, Nova Scotia.
- Belief that the problem is real is lower in rural Canada, particularly across the Prairies. As the researchers point out, “The strongest levels of climate change belief exist in coastal BC, Quebec, Nova Scotia, and in urban areas across the country.” At the extreme, belief in climate change exceeds 90 percent of the public in the Quebec district of Laurier-Sainte-Marie, the district of Vancouver East, and the district of Halifax. That’s high! Yale mapping in the US tops out around 80 percent.
- Overall, belief in climate change is higher in Canada than in the US (as measured by Yale’s estimates for 2014). The low end of variation in Canadian beliefs map onto the middle of US variation.
- However, nationally, only 44 percent of Canadians think the cause is mostly human activities. US voters squeak ahead on this one, at 48 percent. It is in the six largest urban areas of Canada where people are most likely to see humans as the main cause. Notably, in Canada, belief that climate change is human-caused is lowest in the more greenhouse gas intensive parts of the country. “In other words, places that are more significantly contributing to climate change show lower beliefs that humans are the cause.” However, urban districts in Alberta show public opinion on this question closer to that of Ontario, Quebec, or BC.
- Despite these variations in core beliefs about the issue, there is widespread public support for climate policies. Majorities of the public in every federal electoral district (riding) support an emissions trading scheme. Support for emissions trading is highest in Quebec—71 percent—where a cap and trade system was implemented in 2013.
- Support for carbon taxes is more geographically differentiated, at 49 percent nationally (with opposition at 44 percent), but ranging from a low of 35 percent in the Northern Alberta riding of Fort-McMurray-Cold Lake to a high of 70 percent in the Montreal-area riding of Outremont. Support for carbon taxes is concentrated in urban Canada and British Columbia (where there is a carbon tax shift in effect).
- Support for carbon pricing policies is higher in provinces that have already implemented these policies. In other words, there is no evidence of a popular backlash against carbon pricing in places where people are experiencing them.
Canadians have lots of progress yet to make on attitudes and policy. But with the election of Justin Trudeau and like-minded leaders across Canada, the world’s climate may get a breath of fresh air.
|
<urn:uuid:3405f87d-100a-4b78-93a1-a177a7079474>
|
CC-MAIN-2016-26
|
http://www.sightline.org/2016/03/10/where-do-canadians-stand-on-climate/
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395679.92/warc/CC-MAIN-20160624154955-00158-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.945275 | 1,010 | 3.015625 | 3 |
4. Ekholm Archaeological Project in Sonora, Mexico
Author: Emiliano Gallaga Murrieta, M.A., Ph.D. candidate, University of Arizona
This catalogue features a portion of the archaeological material of the Sonora-Sinaloa Archaeological Survey Project directed by Gordon F. Ekholm (1937-1940), which today is located at the American Museum of Natural History in New York City.
The catalogue originated in a desire to have an image database for the identification of archaeological material while in the field and during material analysis for the first field season of the archaeological project of the Onavas Valley in the Middle Yaqui River Valley, Sonora, Mexico. The lack of research in the area, the absence of published catalogue material in and around the research area, and the opportunity to gain access to the Ekholm collection at the American Museum of Natural History in New York City in the summer of 2003 provided the impetus to create a photographic record of Ekholm's material and develop this catalogue. This work does not show all the material collected by Ekholm; only a selected portion is depicted, because the catalogue's original focus lay on the Middle Yaqui River and its surrounding areas. The creation of this catalogue and its publication on the museum's web page begin to show the richness of the museum collections. More importantly, they underscore the importance of, and the opportunity in, further analysis and re-analysis of collections and data that have already been gathered and stored. Consulting archaeological material from previous projects in or around one's area of research facilitates the design and undertaking of a better research project.
The archaeological material from the Ekholm collection originated from the Mexican states of Sonora and Sinaloa. These states are part of a larger study region, which includes the Mexican states of Chihuahua, Durango, and sometimes the peninsula of Baja California, and the American states of Arizona, New Mexico, the southern portions of Utah and Colorado, and the western portion of Texas. This large region has been given several different names: Oasis America (Armillas 1969; Kirchhoff 1943), La Gran Chichimeca (Di Peso 1974), Aridoamerica (Kirchhoff 1954), the Greater Southwest (Beals 1932), El Noroeste Mexicano, and the International Four Corners (Minnis 1989), depending largely upon which side of the international border one resides. None of these terms are without problematic connotations and for the purposes of this catalogue, I have chosen to employ the modern political divisions of Sonora, Chihuahua, Northwest Mexico, and American Southwest instead (Gallaga and Newell 2004).
History of research
The first reports or descriptions of Northwest Mexico are found in colonial documents, mostly descriptions of the Spanish entradas, such as those by Diego de Guzmán (Heredia 1969), Vasquez de Coronado (Hammond and Rey 1940), Nuñez Cabeza de Vaca (1993), and Francisco de Ibarra (Hopkins 1988). The lack of complex societies and riches, like gold or silver, which the European explorers did encounter in central Mexico, resulted in a diminished push to penetrate the region and comparatively general and limited descriptions of the native population. Nonetheless, the colonial descriptions provide data useful to the archaeologist. On the whole, the cultural and material descriptions of the native communities European missionaries provided are more informative than military reports and early chronicles. Regarding the Yaqui River populations, the documents describe high numbers of natives in the region, the colonization potential of the area, and the interaction between the coastal and the mainland peoples.
During the late 19th and early 20th centuries, early travelers (Lumholtz 1902), geographers (Bartlett 1965; Brand 1933, 1935, 1943; Sauer and Brand 1930, 1931), and archaeological explorers (Amsden 1928; Bandelier 1890, 1892; Ekholm 1939, 1942; Lister 1958; Noguera 1926, 1930, 1958; Sayles 1936) were among the most important archaeological pioneers who created points of departure for future research in Northwest Mexico. For many years, however, only a handful of archaeologists ventured down this path. Most archaeologists preferred the great temples of Mesoamerica to the south or the ubiquitous and attractive architecture of the cliff dwellings and pueblos of the American Southwest to the north. In fact, many of the early archaeologists who visited the area did so only to study how the past of Northwest Mexico fit with the areas of their primary research interest, and never made careers in the Northwest. This was the case with Gordon Ekholm's research in Northwest Mexico. Regardless, his influence on and contribution to the archaeological knowledge of Northwest Mexico were considerable and remain highly recognized.
The Sonora-Sinaloa Archaeological Survey Project
At the end of the 1930s, George Vaillant, a researcher at the American Museum of Natural History in New York, conceived, designed, and directed the Sonora-Sinaloa Archaeological Survey Project. Although Vaillant was the director of the project, Gordon F. Ekholm, a former student at Harvard University, was appointed field director and acquired the collections in the field. The project's main objective was to fill the archaeological gap between the American Southwest and the northern Mesoamerican frontier, covering the area from the international border to the Río Culiacan (Ekholm 1942:33). While the researchers achieved their objective, the results of the project unfortunately remained largely unpublished, with the exception of some general articles about the project (Ekholm 1939) and the results of the excavation performed at the Guasave site in Sinaloa (Ekholm 1942). Over the course of three field seasons of six months each between 1937 and 1940, Ekholm registered a total of 175 sites in the Mexican states of Sonora and northern Sinaloa, of which 100 lay in Sonora and the remainder in Sinaloa.
Ekholm collected material from the surface of 106 sites, through excavations undertaken at the largest sites, like the one in Guasave, Sinaloa, and by purchasing existing collections, like the Bringas collection in Soyopa, Sonora (Carpenter 1996; Ekholm 1937-1940).
Bringas collection in Soyopa, Sonora
With the exception of the material from excavations and from private collections, Ekholm mainly encountered ceramics and lithics, though a great variety existed within these artifact categories. Among the ceramic material, plain wares occurred in the greatest numbers, though decorated wares, malacates or spindle whorls, and figurines also appeared. Greater variety characterized the lithic collection, with stone axes, ornaments, palettes, agave knives, reamers, stone bowls, atlatl handles, and arrow points. In addition, some turquoise beads and mica pendants were also recovered.
Malacates (spindle whorls), clay figurines, turquoise beads, mica pendants
Another common and no less important material Ekholm collected was marine shell. Marine shell surfaced as raw material, work in progress, debris, and finished goods, like beads, pendants, tinklers, or bracelets. In general, the great variety of items and materials Ekholm encountered illustrates a considerable movement of goods between the coast and the interior, although not to the degree he expected in support of a theory of direct Mesoamerican-American Southwest interaction (Ekholm 1942:136).
In terms of temporal affiliation, Ekholm identified the majority of the sites as prehispanic, some as Spanish Colonial sites, and others as historic sites belonging to Mexican, Piman, Yaqui, or Mayo groups (Ekholm 1937-1940). Ekholm was able to assign cultural affiliation to less than half of the prehispanic sites registered: 20 to the Trincheras archaeological tradition, 40 to the Rio Sonora tradition, and 14 to the Seri or Central Coast tradition (Ekholm 1937-1940). Later on, after his excavation at the Guasave site, more than 20 sites were defined as or assigned to the Huatabampo archaeological tradition (Carpenter 1996; Ekholm 1942; Pailes 1972, 1994).
The sites Ekholm registered surfaced in different geographical areas, where the prehispanic inhabitants had exploited different resources and had differed in social and political development and in local and regional interaction. Some sites were located on volcanic hills, covered by stone terraces which had been used as habitation sites, agricultural fields, working areas, or defensive areas. Sites of this type and their associated material are commonly identified as belonging to the Trincheras tradition. Other sites located on the coast featured mounds made of sand and earth, with cultural material consisting mostly of marine shell. Depending on the cultural material found, these shell mounds could be identified as Seri (central coast archaeological tradition), Huatabampo tradition, or Yaqui. In the interior of Sonora, the sites Ekholm discovered typically lay along the river valleys in areas near water sources and agricultural lands, like those of the Rio Sonora tradition. Besides the cultural material dispersed on the surface, stone foundations for houses are often observable. River stones are the most common material for these foundations, though volcanic or slab stones were used as well. In some areas of Sonora it is possible to assign cultural affiliation to the archaeological sites, but in other instances it is difficult to do so, due to the lack of research in the many areas that remain unexplored, like the Yaqui River region. Except for a few large sites, like Guasave, Sinaloa, or Cerro de Trincheras, Sonora, the settlement pattern of the region was mostly dispersed households identified as rancherías.
The Guasave site
Site 117, Guasave, Sinaloa
Ekholm registered this site as number 117 in his survey. The site lies near the town of Guasave, Sinaloa, "on the west bank of the Sinaloa River in the center of an extremely fertile agricultural area" (Ekholm 1942:35). The site consisted of an earth mound of oval shape, 1.5 meters high and around 40 meters in diameter. Because the site was the highest point in the middle of the agricultural fields, local people know the mound as El Ombligo (the umbilicus) (Ekholm 1942:35). At first, from the material remains, Ekholm thought that the site was a household or a trash midden. After two field excavation seasons and the recovery of 196 burials, the Guasave site became known as the greatest formal cemetery mound excavated in Northwest Mexico, and it remains unparalleled to this day. John Carpenter summarizes the results of the excavation at Guasave as follows:
"The mortuary practices included extended inhumations with heads oriented to the north, south, and west, secondary bundle burials of disarticulated remains, and secondary interment in large plainware burial ollas. Also evident were several cases of dental mutilation represented by notches and filed incisors and canines, and fronto-lambdoidal cranial deformation. Offerings associated with these graves revealed an elaborate material culture, with several pottery types including red wares, red-on-buff, finely incised wares and several types of highly detailed polychrome pottery, alabaster vases, copper implements including bells and a probable earspool, shell, pyrite and turquoise jewelry, paint cloisonné gourd vessels, cotton textiles, ceramic masks, clay smoking pipes, modeled spindle whorls, a cylinder stamp, prismatic obsidian blades, food remains, bone daggers and human trophy skulls."Carpenter 1996:163
Although Ekholm's 1942 publication on the Guasave site provides a good description and analysis of the excavation performed, the material and data still offer several promising avenues of research.
In addition to the archaeological collection recovered across this vast territory, Ekholm composed an important photographic record of his discoveries: the sites, material collections, excavation pits, communities, and people he encountered on his trip. This amazing photographic archive of more than 300 photos can also be consulted in the AMNH collection, benefiting not only archaeologists but historians, architects, and anthropologists as well.
|Ekholm photo collection|
In many instances, the scenes captured by his lens no longer exist, due to land development, intensive agricultural activity, or flooding from dam construction. For example, in examining Ekholm's photographs I came across images of the churches of the Tepupa and Batuc communities long before the construction of the Plutarco Elias Calles dam. They contrasted wonderfully with my own photos and memories from a survey I did with colleagues in the area in 1999, when we visited the remains of those towns on dry land - a rare occurrence due to the severe drought in the region at the time.
Churches of Batuc and Tepupa
The opportunity to be able to compare the images of the churches before and after the construction of the dam and the subsequent flooding was a wonderful experience and exemplifies the potential of using the Ekholm collection to aid new data collection projects as well as to re-analyze existing data collections.
I thank Dr. Charles Spencer, Dr. Christina M. Elson, and the American Museum of Natural History staff for the access to their collections and their support to make this catalogue possible. I am grateful for the comments and support of Gillian Newell as well as the economic support from CONACYT.
|Amsden, Monroe|
|1928||Archaeological Reconnaissance in Sonora. Southwest Museum Paper No. 1. Southwest Museum, Highland Park.|
|Armillas, Pedro|
|1969||The Arid Frontier of Mexican Civilization. Transactions of the New York Academy of Sciences, Series II, 31(6):697-704.|
|Bandelier, A. F.|
|1890||The Ruins of Casas Grandes. Nation 51:185-187.|
|1892||Final Report of Investigations among the Indians of the Southwestern United States, Carried On Mainly in the Years from 1880 to 1885, part 2. Papers of the Archaeological Institute of America, American Series, vol. 4. Archaeological Institute of America, Cambridge.|
|Bartlett, J. R.|
|1965||Personal Narrative of Explorations and Incidents in Texas, New Mexico, California, Sonora, and Chihuahua, Connected with the United States and Mexican Boundary commission during the years 1850, 51, 52, and 53. 2 vols. Río Grande Press, Chicago.|
|Beals, R. L.|
|1932||The Comparative Ethnology of Northern Mexico before 1750. Ibero-Americana 2.|
|Brand, D. D.|
|1933||The Historical Geography of Northwestern Chihuahua. Unpublished Ph.D. dissertation, Department of Geography, University of California, Berkeley.|
|1935||The Distribution of Pottery Types in Northwest Mexico. American Anthropologist 37:287-305.|
|1937||The Natural Landscape of Northwestern Chihuahua. University of New Mexico Bulletin 316, Geological Series 5(2), Albuquerque.|
|1943||The Chihuahuan Culture Area. New Mexico Anthropologist 6-7:115-158.|
|Carpenter, John|
|1996||El Ombligo de la Labor: Differentiation, Interaction and Integration in Prehispanic Sinaloa, Mexico. Ph.D. dissertation, Department of Anthropology, University of Arizona, Tucson.|
|Di Peso, C. C.|
|1974||Casas Grandes: A Fallen Trading Center of the Gran Chichimeca, vols. 1-3. Amerind Foundation Series No. 9. Amerind Foundation, Dragoon, Arizona, and Northland Press, Flagstaff, Arizona.|
|Ekholm, Gordon F.|
|1937-40||Sonora-Sinaloa Archaeological Project. AMNH Archives, New York.|
|1939||Results of an Archaeological Survey of Sonora and northern Sinaloa. Revista Mexicana de Estudios Antropologicos 3:7-11.|
|1942||Excavations at Guasave Sinaloa, Mexico, Anthropological Papers 38, American Museum of Natural History, New York.|
|Gallaga, Emiliano and Gillian E. Newell|
|2004||Introduction. In Surveying the Archaeology of Northwest Mexico, edited by Gillian E. Newell and Emiliano Gallaga, 1-26. University of Utah Press, Salt Lake City.|
|Hammond, G. P. and A. Rey|
|1966 (1928)||The Rediscovery of New Mexico 1580-1594. University of New Mexico Press, Albuquerque.|
|1969||Relación del Capitán Diego de Guzmán. Memorias y Revista del Congreso Mexicano de Historia I:123-143.|
|1988||Imágenes Prehispánicas de Sonora: La Expedición de Don Francisco de Ibarra a Sonora en 1565, Según el Relato de Don Baltasar de Obregón. Hermosillo, Sonora, Mexico.|
|Johnson, Alfred E.|
|1966||Archaeology of Sonora, Mexico. In Handbook of Middle American Indians, Vol. 4, edited by Gordon F. Ekholm and Gordon R. Willey, pp. 26-37. University of Texas Press, Austin.|
|Kirchhoff, Paul|
|1943||Mesoamerica: sus límites geográficos, composición étnica y sus caracteres culturales. Acta Americana 1:92-107.|
|1954||Gatherers and Farmers in the Greater Southwest: A Problem in Classification. American Anthropologist 56:529-550.|
|Lister, R. H.|
|1958||Archaeological Excavations in the Northern Sierra Madre Occidental, Chihuahua and Sonora, Mexico. University of Colorado Studies, Series in Anthropology No 7. University of Colorado Press, Boulder.|
|Lumholtz, C. S.|
|1902||Unknown Mexico. 2 vols. Charles Scribner's Sons, New York.|
|Minnis, P. E.|
|1989||The Casas Grandes Polity in the International Four Corners. In The Sociopolitical Structure of Prehistoric Southwestern Societies, edited by S. Upham, K. G. Lightfoot, and R. A. Jewett, pp. 269-305. Westview Press, Boulder, CO.|
|Noguera, Eduardo|
|1926||Ruinas arqueológicas de Casas Grandes, Chihuahua. Secretaría de Educación Publica, Publicaciones 11(14), Mexico City.|
|1930||Ruinas arqueologicas del Norte de Mexico. Direccion de monumentos prehispanicos, pp. 5-27. Publicaciones de la Secretaria de Educacion Publica, Mexico City.|
|1958||Reconocimiento arqueológico en Sonora. Unpublished report No. 10, Dirección de monumentos prehispánicos, Mexico City.|
|Núñez Cabeza de Vaca, Alvar|
|1993||Naufragios y Comentarios. Colección Austral, No. 304, Mexico.|
|Pailes, Richard A.|
|1972||Archaeological Reconnaissance of Southern Sonora and Reconsideration of the Rio Sonora Culture. Ph.D. dissertation, Department of Anthropology, Southern Illinois University, Carbondale.|
|1994||Recientes investigaciones arqueológicas en el sur de Sonora. Beatriz Braniff and Richard S. Felger (editors), Sonora: Antropologia del Desierto. 20 Aniversario, Noroeste de México # 12, INAH, Sonora, Mexico: 80-88.|
|Pollard, Helen P.|
|1997||Recent Research in West Mexican Archaeology. Journal of Archeological Research 5(4): 345-384.|
|Sauer, C. and D. Brand|
|1930||Pueblo Sites in Southeastern Arizona. University of California Publications in Geography 3(7):415-458.|
|1931||Prehistoric Settlements of Sonora with Special Reference to Cerros de Trincheras. University of California Publications in Geography 5(3):67-148. University of California, Berkeley.|
|Sayles, E. B.|
|1936||An Archaeological Survey of Chihuahua, Mexico. Medallion Paper 22. Gila Pueblo, Globe, AZ.|
|Villalpando, Elisa and Paul R. Fish|
|1997||Prefacio. In Prehistory of the Borderlands: Recent Research in the Archaeology of the Northern Mexico and the Southern Southwest, edited by John Carpenter and Guadalupe Sanchez, pp. ix-xi. Arizona State Museum Archaeological Series 186, University of Arizona, Tucson.|
Hairless Cats – Pet tip 145
For many, just the thought of a hairless cat is enough to make them shudder. For these people, the sight of a hairless cat is shocking, and touching one is downright scary. If you are one of these people, then this article is unlikely to sway you in the other direction. However, if you are the kind of person who has always been a little bit intrigued by the unique look of hairless cats, read on. Behind their striking appearance is a fascinating background, and if you are considering making a hairless cat part of your family, there is also a plethora of information to know about the proper way to care for these beautiful creatures.
Congenital hairlessness in cats is an extremely rare trait. Essentially, a hairless cat is the result of genetic mutations in one or more of the genes which encode for normal hair development. While the word ‘mutation’ implies that this genetic condition is unwanted and accidental, there are actually many breeders who select specifically for the hairlessness trait and work to maintain the hairless lineage in their cats.
One example of a cat breed that is based on hairlessness is the Canadian Sphynx. This breed, which initially arose in Toronto in 1966, has become one of the most famous hairless cat breeds in the world. Sphynx kittens may be born entirely bald, or with an initial downy coat. Upon their first shed, however, further hair growth ceases. The only areas where fur might be found are the ears, muzzle, tail, and sometimes feet.
In addition to their obvious lack of hair, there are several other features that are commonly seen in the Sphynx and other hairless-based breeds. Some of these cats have abnormal whiskers, nails, and dentition. Most also have markedly enlarged sebaceous (oil) glands and defective tear ducts. All of these characteristics mean that hairless cats require special attention and care beyond that of the average feline pet.
Without a fur coat, hairless cats have a difficult time maintaining their core temperature. In order to keep warm, hairless cats seem to have developed a high metabolic rate. This means that they require high quality food and lots of it to meet their daily energy demands and create the heat they require. It is also important that these cats remain indoors as their skin is at great risk of both burning and freezing in extreme weather conditions. They are also more prone to insect bites than cats with fur might be. Some owners living in cooler climates will find it necessary to provide their hairless cats with a coat or blanket at certain times, even when indoors.
The lack of fur and increased number of sebaceous glands also causes oil to build up quickly on the skin of these special cats. It is critical that owners make a point of bathing their hairless cat at least once a week to keep them free of oil and dirt build-up. If this routine is started at a young age, most cats will not mind the process at all. It is also usually necessary to clean their ears and around their eyes more often than might be necessary for an average cat. In general, the lack of hair puts these cats in a higher risk category for infections, so owners must be observant and diligent when it comes to providing their pets with both at-home and veterinary care.
Due to all these extra demands, hairless cats are not for everybody. Their appearance is not to everyone's taste, and no one can deny their high-maintenance lifestyle. But for the committed owner, hairless cats can make wonderful long-term companions. With proper care they can live long, healthy lives, averaging about 15 years of cuddly, hairless fun.
By Alison Norwich – Pets.ca writer
Catholic Recipe: Almond Cakes
The Feast of Chung Ch'iu, which falls on the fifteenth day of the Eighth Moon, or sometime toward the end of August in the Western calendar, is one of China's most joyous occasions. In ancient lunar calculation, Chung Ch'iu comes at mid-autumn when the moon is full and the harvest ripe. The festival honors the goddess Heng-O, who rules the moon, and the Immortals who dwell there with her.
Chung Ch'iu, a night of magic, is dedicated to poetry and music. It is the custom to feast, pay debts, and give thanks for the harvest at this season. China is so vast that festival celebrations vary from place to place. Everywhere legends about the moon are essentially the same, but customs, and even dates, differ according to locality, social position, and economic status.
All ceremonies center about the moon which influences crops and harvests and is the traditional habitation of the gods. Heng-O lives in the moon with her white Hare. The Hare supposedly sits beneath a cassia tree, where it eternally pounds out the elixir of life. The goddess and her companion dwell in a white jade palace, as pale and cool as the light of the harvest moon.
Tradition says that flowers fall from the moon on the night of Chung Ch'iu. Women who see the blossoms will be blessed with children, men with wealth. Moon-gazing is a favorite pastime on this night. Everyone studies the face of the full moon and reports on the wonderful imagined sights: a golden mountain, a budding plum, a bowl that overflows with rice.
The Chinese have many proverbs and wise sayings about the moon. Some, like the following, are direct statements a person may interpret as he likes: "When the sun sets, the moon rises. When the moon sets, the sun rises," and "When the moon is full, it begins to wane. When the waters are high, they must overflow." Other sayings, like "A broken drum saves the moon," refer to the old custom: beating drums and sounding gongs to bring back the moon after an eclipse. Another well-known proverb observes, "How seldom is the moon overhead."
For Chung Ch'iu everyone prepares as many foods as possible in round shapes. Bakeries and sweet shops display large round mooncakes, made with brown sugar and decorated with pictures of the moon and the palaces of the Immortals. Children receive enchanting toys of small tile pagodas and amusing animals.
Traditional mooncake, which always is eaten at the festival, is impractical for the uninitiated to attempt. Many Chinese stores in the United States carry the cake, which is decorated with colored paper pictures of the moon and its palaces. But the recipe for the delicacy is almost impossible to obtain. "Old people make the cake from memory and are unwilling to reveal the secret-even to inquiring daughters," a young woman told me." 'If you want to see how mooncake is made, come watch me do it,' my mother told me when I asked for directions. Even though I watched carefully, I found it difficult to judge the quantities of various ingredients, or to follow the manipulations of the Venerable Cook."
Chinese almond cookies and Yuan-hsiao or boiled rice flour dumplings with sweet stuffing, make excellent substitutes for traditional mooncake. They are as round as the moon itself and so are appropriate to the mid-autumn feast. Both cakes and dumplings are delicious and may be served for any gala occasion.
Cream sugar and lard until soft and fluffy. Beat together egg, flavoring, and water, and add to creamed mixture, mixing thoroughly. Sift flour, baking powder and salt and work into the other mixture gradually. Knead dough well. Form into balls the size of walnuts and refrigerate overnight in a covered pan.
Flatten balls on ungreased cookie sheet, pressing down with bottom of measuring cup covered with damp cloth. Press blanched almond half into each cake. Bake in hot oven (400° F.) for 10-13 minutes, or until cakes are slightly browned. Let stand 5 minutes on tin before removing with spatula. Cookies are rich and fragile and must be handled with care. They will keep fresh a long time when stored in an airtight tin.
Recipe Source: Feast-Day Cakes from Many Lands by Dorothy Gladys Spicer, Holt, Rinehart and Winston, 1960
CHARLESTON, W.Va. — At the request of a West Virginia Board of Education member who said he doesn’t believe human-influenced climate change is a “foregone conclusion,” new state science standards on the topic were altered before the state school board adopted them.
School officials said the changes are meant to encourage more student debate on the idea that humans’ greenhouse gas emissions are causing a global rise in temperatures — a theory that an overwhelming majority of scientists accepts.
Earlier this month, the state school board adopted the new education requirements, based on the national Next Generation Science Standards blueprint, with the plan to instruct teachers how to teach them by the 2016-17 school year.
The science standards are not part of Common Core, which contains nationally suggested standards for English and math, but they do have Common Core connections embedded and were crafted with aid of the same Washington, D.C., nonprofit group. West Virginia’s Common Core standards are also dubbed Next Generation.
Robin Sizemore, science coordinator for the state Office of Secondary Learning, said the new science standards will be the first time students will be required to learn about the evidence for human-driven climate change — the current standards only cover them in elective courses.
But state school board member Wade Linger asked that several changes be made to the drafted standards before they were put out for a monthlong public comment period.
“There was a question in there that said: ‘Ask questions to clarify evidence of the factors that have caused the rise in global temperatures over the past century,” Linger said. “... If you have that as a standard, then that presupposes that global temperatures have risen over the past century, and, of course, there’s debate about that.”
Linger suggested adding the words “and fall” after “rise” to the sixth-grade science standard. The change was adopted.
The Consensus Project analyzed 21 years of peer-reviewed scientific papers published on global warming and global climate change, culminating in a 2013 report that found that more than 4,000 paper abstracts authored by almost 10,200 scientists stated a position on human-driven climate change. More than 97 percent of the time, the position was that humans are contributing to a global rise in temperatures.
Other studies have found about the same level of consensus, and the National Aeronautics and Space Administration says most leading scientific organizations worldwide have stated that global warming is very likely caused by humans.
According to Linger, state Department of Education staff made other changes in response to his concerns before the school board adopted the standards.
- Original ninth grade science requirement: “Analyze geoscience data and the results from global climate models to make an evidence-based forecast of the current rate of global or regional climate change and associated future impacts to Earth systems.”
- Adopted version: “Analyze geoscience data and the predictions made by computer climate models to assess their creditability [sic] for predicting future impacts on the Earth System.”
- Original high school elective Environmental Science requirement: “Debate climate changes as it [sic] relates to greenhouse gases, human changes in atmospheric concentrations of greenhouse gases, and relevant laws and treaties.”
- Adopted version: “Debate climate changes as it relates to natural forces such as Milankovitch cycles, greenhouse gases, human changes in atmospheric concentrations of greenhouse gases, and relevant laws and treaties.”
“We’re on this global warming binge going on here,” Linger said. “... We need to look at all the theories about it rather than just the human changes in greenhouse gases.”
Milankovitch cycles are long-term changes in Earth’s orbit around the sun and fit with some climate change deniers’ assertion that the Earth is simply in a natural warming period. The Intergovernmental Panel on Climate Change, which has released dire reports about climate change impacts with a more than 95 percent certainty that humans are the main cause, states on its website that the coming and going of Earth’s ice ages is greatly linked to these orbital changes, but adds that since the start of the industrial period around 1750 the “human impact on climate during this era greatly exceeds that due to known changes in natural processes, such as solar changes and volcanic eruptions.”
The IPCC says the climate has warmed overall, particularly because of the use of fossil fuels.
“The fact that natural factors caused climate changes in the past does not mean that the current climate change is natural,” the IPCC states. “By analogy, the fact that forest fires have long been caused naturally by lightning strikes does not mean that fires cannot also be caused by a careless camper.”
State school board member Tom Campbell said that in response to the climate change language, Linger brought up concerns about political views being taught in classrooms during an open school board meeting in Mingo County in November. Campbell said he shared those concerns.
“Let’s not use unproven theories,” said Campbell, a former House of Delegates education chairman. “Let’s stick to the facts.”
Technically all theories could be considered unproven — many, like the theory of gravitation or plate tectonics, are overwhelmingly accepted by both scientists and the public based on a bevy of evidence. Even other publicly controversial ones, like evolution, are still overwhelmingly accepted by scientists.
When asked why climate change was the particular “unproven science” that he and Linger were concerned about, Campbell responded that “West Virginia coal in particular has been taking on unfair negativity from certain groups.” He also noted the coal industry provides much money to the state’s education system.
“I would prefer that the outlook should be ‘How do we mine it more safely and burn it more cleanly?” Campbell said. “But I think some people just want to do away with it completely.”
Board member Lloyd Jackson, an attorney and chief executive officer of his family’s natural gas company in Hamlin, said he recalls there being a discussion about climate change in the standards but didn’t know anything had resulted from it. Jackson, a former state senator, said he didn’t read every page of the standards before voting to pass them. He said he wouldn’t be concerned if West Virginia science teachers taught about human-driven climate change in the classroom.
Chad Colby, director of strategic communications and outreach for Achieve, said states can adopt the Next Generation Science Standards blueprint verbatim or change them without facing any punishment. West Virginia was among 26 states that helped write the blueprint. Colby said there’s no federal funding tied to the standards.
Stephen Pruitt, senior vice president of Achieve, said human-driven climate change is a relatively small part of the nationally suggested standards.
“It’s about the science of the fact the climate is changing,” Pruitt said. “We don’t get into the policy and the politics, we just say here’s the science. And the science is showing that we are seeing a rise in the mean global temperatures and we are seeing extremes in weather.”
“... Understanding that humans can contribute to that, sure. But we don’t get into the legislation and the policy.”
State school officials said the changes didn’t alter the intent of the standards, and defended them as a way to get students to debate and think critically about the evidence for and against human-driven climate change.
“I don’t want somebody to think that Wade [Linger] sent these in and we took his words,” said Clayton Burch, executive director of the state Office of Early Learning and interim associate state superintendent.
Burch said senior staff vetted the changes. He said completely removing, adding or substantially changing standards would’ve gone against the intent of the 81 stakeholder individuals — representing West Virginia organizations including K-12 schools, colleges and businesses like Charleston Area Medical Center and coal-burner Appalachian Power — that helped write the requirements over a roughly 3-year period. These stakeholders could’ve protested the climate change standard changes during the 30-day comment period required before adoption.
“We can get students arguing both sides of a research piece, which matches our (English language arts) standards: Think critically, write critically, both sides of the argument,” Burch said. He also said students hear information from many other sources outside of textbooks nowadays.
“Our students are now being faced with every yahoo that decides he wants to jump on the computer and write a blog,” Burch said. “I mean, seriously, that’s what they’re dealing with. Do they know how to critically read and think?”
Language similar to the requirement that students question the credibility of computer climate models doesn’t appear for every other topic, like evolution, that is scientifically uncontroversial but potentially controversial with the public. For instance, one objective will ask tenth graders to directly “communicate scientific information that common ancestry and biological evolution are supported by multiple lines of empirical evidence,” but students are nowhere asked to question whether carbon dating is a viable method of measuring the age of rocks containing fossils.
“Perhaps if it was 30-40 years ago when the evolution thing was a more openly discussed topic in the news, then that might have had our concern,” Sizemore said. “But that’s not where we are at this time period. Our students right now are hearing conversations about climate change.”
Libby Strong, president of the West Virginia Science Teachers Association, said she was involved in the writing of the national Next Generation blueprint but not the customized state standards. She said she doesn’t think the way the climate change standards are written will hamper teachers.
“I have no problem with students figuring out which is the most important component” in Earth’s changing climate, she said.
When asked how the state Department of Education would ensure that teachers instructing students on the climate change standards actually foster fair debate backed up by solid evidence, school officials argued they have little control over local curricula or ability to monitor it, and have a lot of professional development to do before the requirements go into effect. It’s also currently unclear how students will be tested on the standards.
Sizemore called the “and fall” addition to the global temperature rise standard “fabulous.” She said she wants students to be “skeptics” who back up their assertions with evidence.
“The science will be brought to their attention,” Sizemore said. “The students will understand why when this group says this, this is what they think, and when this group says this, this is why they’re thinking this. So I feel at peace that there’s going to be the science that’s going to rule.”
Reach Ryan Quinn at firstname.lastname@example.org, 304-348-1254 or follow @RyanEQuinn on Twitter.
A worker does not work for money only; non-financial rewards, such as the affection and respect of co-workers, are also important factors. The emphasis was on an employee-centered, democratic, and participative style of supervisory leadership, as this is more effective than task-centered leadership. This approach was, however, criticized for its emphasis on the importance of symbolic rewards rather than material rewards. Its belief that an organization can turn into one big happy family, where it is always possible to find solutions that satisfy everybody, has also been questioned.
An approach that recognizes the practical and situational constraints on human rationality in making decisions.
Behavioral scientists attach great importance to participative and group decision making. They are highly critical of the classical organization structures built on traditional concepts and prefer more flexible organization structures.
Two major theorists, Abraham Maslow and Douglas Mcgregor, came forward with ideas that managers found helpful.
He developed a theory of motivation based on three assumptions. First, human beings have needs that are never completely satisfied. Second, human action is aimed at fulfilling the needs that remain unsatisfied at a given point in time. Third, needs fit into a hierarchy, ranging from basic, lower-level needs at the bottom to higher-level needs at the top.
He developed a concept of Theory X versus Theory Y dealing with possible assumptions that managers make about workers. Theory X managers tend to assume that workers are lazy, need to be coerced, have little ambition and are focused mainly on security needs. Theory Y managers assume that workers do not inherently dislike work, are capable of self control, have capacity to be creative and innovative and generally have higher level needs. This approach helped managers develop a broader perspective on the nature of workers and new alternatives for interacting with them.
An approach that focuses on the use of quantitative tools for managerial decision making.
The quantitative management viewpoint focuses on the use of mathematics, statistics, and information aids to support managerial decision making and organizational effectiveness. Three main branches have evolved: operations research, operations management, and management information systems.
Operations research is an approach aimed at increasing decision effectiveness through the use of sophisticated mathematical models; computers make such models practical, as they can accomplish the extensive calculations involved. Some operations research tools are linear programming, queuing (waiting-line) models, and routing and distribution models.
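To give one of these tools a concrete shape, here is a minimal waiting-line (M/M/1 queue) calculation; the function name and the arrival and service rates are illustrative assumptions, not data from any particular operation.

```python
# A minimal waiting-line (M/M/1 queue) calculation with illustrative rates.

def mm1_metrics(arrival_rate, service_rate):
    """Steady-state metrics for one server with Poisson arrivals and
    exponential service times (rates per hour)."""
    if arrival_rate >= service_rate:
        raise ValueError("Unstable queue: arrivals must be slower than service.")
    utilization = arrival_rate / service_rate
    avg_in_system = utilization / (1 - utilization)          # average customers present
    avg_time_in_system = 1 / (service_rate - arrival_rate)   # hours per customer
    return utilization, avg_in_system, avg_time_in_system

u, n, t = mm1_metrics(8, 10)   # 8 arrivals/hour against 10 services/hour
print(f"utilization {u:.0%}, {n:.0f} in system on average, {t:.2f} h per customer")
```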
Operations management is the field responsible for managing the production and delivery of an organization's products and services. It is generally applied to manufacturing industries and uses tools such as inventory analysis, statistical quality control, and networking.
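As a sketch of the inventory-analysis side, the classic economic order quantity (EOQ) model fits in a few lines; the demand and cost figures below are made-up assumptions.

```python
from math import sqrt

# Sketch of the classic economic order quantity (EOQ) inventory model.

def economic_order_quantity(annual_demand, cost_per_order, holding_cost_per_unit):
    """Order size that minimizes combined ordering and holding costs."""
    return sqrt(2 * annual_demand * cost_per_order / holding_cost_per_unit)

q = economic_order_quantity(12000, 50, 2)
print(f"Order about {q:.0f} units per order")   # roughly 775 units
```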
Management Information System:
Management Information Systems refers to the design and implementation of computer-based information systems for use by management. Such systems turn raw data into information that is required by, and useful to, various levels of management.
A viewpoint which holds that appropriate managerial action depends on the particular nature of each situation.
This approach is a viewpoint which argues that there is no best way to handle problems. Managerial action depends on the particular situation. Hence, rather than seeking universal principles that apply to every situation, this theory attempts to identify contingency principles that prescribe actions to take depending on the situation.
Systems Approach to management:
Systems theory is an approach based on the notion that organizations can be visualized as systems. A system is a set of interrelated parts that operate as a whole in pursuit of common goals. Every system has four major components:
1. Inputs are the various resources required to produce goods and services.
2. Transformation processes are the organization managerial and technological abilities that are applied to convert inputs into outputs.
3. Outputs are the products, services and other outcomes produced by the organization.
4. Feedback is information about results and organizational status relative to the environment.
Resources: (1) Human (2) Materials (3) Equipment (4) Financial (5) Informational
Managerial and Technological Abilities: (1) Planning (2) Organizing (3) Leading (4) Controlling (5) Technology
Outcomes: (1) Products and services (2) Profits and losses (3) Employee growth and satisfaction.
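As a toy rendering (not a standard model) of how the four components interact, the sketch below runs a few cycles in which feedback about results tunes the transformation process; every name and number is invented for illustration.

```python
# Toy systems loop: inputs -> transformation -> outputs, with feedback
# about results adjusting the next cycle. All values are illustrative.

def transform(inputs, efficiency):
    """Convert the pooled resource inputs into output value."""
    return sum(inputs.values()) * efficiency

inputs = {"human": 40, "materials": 30, "equipment": 20, "financial": 10}
efficiency = 0.70
target_output = 80

for cycle in range(3):
    output = transform(inputs, efficiency)
    feedback = output - target_output    # information about results vs. the goal
    if feedback < 0:                     # falling short: improve the process
        efficiency += 0.02
    print(f"Cycle {cycle}: output={output:.0f}, feedback={feedback:+.0f}")
```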
Last name: Largent
Recorded in several spelling forms including L'argent, Largent, Argent, Argente, Arghent, Argont, and Argontt, this interesting surname has a number of possible origins, all French. The use of the word "argent" was introduced into England at the time of the Norman Conquest of 1066, after which French remained the official language for several centuries. It derives from the Latin "argentum", meaning silver. As such it may have been used in medieval times either as a nickname for someone with silvery grey hair, at a time when few people lived past forty, or as an occupational name for a worker in silver, or in France as a topographical name for someone who lived near a silver mine. In addition there are several French towns and villages called Argent, and the surname may derive from any of these. Confusingly, it may also derive from either of the places called Argens, in the departments of Aude and Basses-Alpes. Here the derivation is from the Roman personal name "Argenteus", which also has the general meaning of "silvery". In England, where the earliest examples of the surname are recorded, examples include John Largeant in the Subsidy Rolls of the county of Suffolk in 1524, and Thomas Argent, a christening witness on April 18th 1619 at St. Andrew's church, Enfield, in Middlesex. The first recorded spelling of the family name is probably that of Geoffrey Argent. This was dated 1180, in the "Pipe Rolls" of Northamptonshire, during the reign of King Henry II, 1154 - 1189. Throughout the centuries, surnames in every country have continued to "develop", often leading to astonishing variants of the original spelling.
© Copyright: Name Origin Research www.surnamedb.com 1980 - 2016
Do a web search for “Kyoto and Global Warming” and you will be pointed to a stunning 4.5 million sites. For many people in the world today, Kyoto could never be located on a map; few would know that it was once the imperial capital of Japan, and for that matter, few would even know that Kyoto is in Japan. It really wouldn’t matter, for most importantly, almost everyone knows Kyoto has something to do with global warming. “Kyoto” is something President Bush did or didn’t do, and it led to more global warming, right?
A meeting in Kyoto, Japan resulted in an agreement by the United Nations aimed at slowing down the buildup of greenhouse gases. The resulting “Kyoto Protocol” was part of the International Framework Convention on Climate Change; the Protocol was adopted on December 11, 1997 by the Third Conference of the Parties which was meeting in Kyoto (all of this can be traced back to the famous 1992 Earth Summit in Rio de Janeiro). If you had been on the “Conference of the Parties” circuit ever since, you would have enjoyed wonderful visits to Berlin, Geneva, Kyoto, Buenos Aires (twice), Bonn (twice), The Hague, Marrakech, New Delhi, Milan, Montreal, Nairobi, and Bali! Nothing says “fight global warming” any more than a never-ending world tour!
As you might have heard, 182 parties have ratified the Kyoto Protocol, but the United States has not ratified the treaty, much to the chagrin of many world leaders and every environmentalist on the planet. No matter what the weather calamity anywhere in the world, someone is quick to point out that global warming is the cause and that the mess could have been avoided had President Bush et al. ratified the Kyoto Protocol!
With interest in Kyoto and Japan, we found information in a recent article in Weather more than interesting; the article was written by Professor Takehiko Mikami of the Department of Geography at Tokyo Metropolitan University. Mikami begins the piece noting that in Japan, there are “several kinds of documentary sources for reconstructing climatic variations in historical times: 1) Cherry-tree-flowering date records since the eleventh century; 2) Lake-freezing date records since the sixteenth century; and 3) Weather diary records since the eighteenth century.” We at World Climate Report love real-world data, and we couldn’t wait to learn about the climate history of places like Kyoto.
Figure 1 shows the flowering dates of cherry blossoms; the dates come from information in old diaries and chronicles regarding cherry blossom festivals held in Kyoto. The dates are then converted into March temperatures using statistical methods, and the temperatures appear in the figure as well. A case can certainly be made for warming over the past 200 years, and global warming advocates might be thrilled to see warming since the beginning of the Industrial Revolution. However, Mikami states “The results indicate warmer periods during the eleventh to thirteenth centuries (in the Medieval Warm Period) and relatively colder periods during the sixteenth to eighteenth centuries (the ‘Little Ice Age’) with large year-to-year variability”. When viewed over the past 1,000 years, there is certainly (a) little unusual about the recent warming, (b) no apparent correlation between atmospheric carbon dioxide levels and temperature variations in Kyoto, and (c) a possibility that the recent warming was induced by the urban heat island of the growing city.
Figure 1. (a) Year-to-year variations in full flowering dates of mountain cherry trees in Kyoto since AD 1001, and (b) variations in March mean temperatures since 1001 estimated from full flowering dates of cherry trees in Kyoto city (from Mikami, 2008).
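To make “statistical methods” concrete: the calibration Mikami describes amounts to fitting a relation between flowering dates and instrumental March temperatures over a period where both exist, then applying that relation to the historical dates. The sketch below shows the idea with a simple linear fit; the numbers are invented stand-ins, not Mikami's data, and his actual transfer method may differ in detail.

```python
import numpy as np

flowering_day = np.array([99, 103, 96, 101, 94, 98])    # day of year, overlap period
march_temp = np.array([7.2, 6.1, 8.0, 6.6, 8.5, 7.4])   # instrumental record, deg C

slope, intercept = np.polyfit(flowering_day, march_temp, 1)  # linear calibration

historical_days = np.array([92, 105, 100])               # dates read from old diaries
reconstructed = slope * historical_days + intercept
print(np.round(reconstructed, 1))                         # estimated March temperatures
```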
Next up is the information from lake-freezing dates from Lake Suwa located in central Japan. For a variety of reasons, local villagers have recorded the lake-freezing dates since AD 1444. During a cold winter, the lake would be frozen by mid-December while during warm winters, the freeze would be delayed until the end of February (some warm winters produced no freezing of the lake). A relatively simple statistical procedure was used to link freeze dates to winter temperatures using the period 1945 to 1990, and the transfer equation could then convert freeze dates into temperatures. In discussing Figure 2 (below), Mikami states “Although the lake-freezing records are not continuous from the late seventeenth century to the nineteenth century, a clear warming trend stands out during the final stage of the Little Ice Age from the 1750s to the 1850s. On the other hand, the coldest period since the fifteenth century was the early 1600s, when reconstructed mean winter temperatures were about 1 to 1.5 deg C lower than at present (1961–1990).” A case can be made for warming, but it all occurred at the end of the “Little Ice Age”.
Figure 2. Year-to-year variations in December/January temperatures at Lake Suwa during the period 1444–1870 (reconstructed) and 1891–1995 (observed)(from Mikami, 2008).
Next we learn that “In Japan, a large number of weather diaries from most parts of the country are preserved in local libraries and museums.” One diary from Tokyo was used to reconstruct summer temperatures from 1721 to near present. Again using statistical procedures, the author produced the reconstruction of summer temperatures shown in Figure 3. Mikami notes “From 1721 to 1790, temperatures are estimated to have been around 1 to 1.5 deg C lower than at present. During this period, July temperatures show large year-to-year variability with the lower values below 22 °C in 1728, 1736, 1738, 1755, 1758, 1783, 1784 and 1786. It should be noted that the temperatures in the 1780s were often extremely low with large inter-annual variations. In the summer of 1783, they experienced an extremely poor rice harvest under the influence of exceedingly cool and wet climate conditions, and this unusual weather brought a historic severe famine in Japan”. Cold sucks!
Mikami then states “On the other hand, it was rather warm in the nineteenth century, especially in the 1810s and the early 1850s with the higher values above 26 °C in 1811, 1817, 1821, 1851, 1852 and 1853. Among these warmer periods, the 1830s, late 1860s and late 1890s were relatively cool decades, and great famines occurred recurrently in the 1830s as appeared in the 1780s. July temperatures reached their lowest level around 1900, when 11-year mean temperatures were the same as those around 1740.” Mikami warns “it should be remembered that Tokyo has one of the strongest urban heat-islands in the world and this is likely to have influenced such temperature trends.”
Figure 3. Combined time series of reconstructed (broken line) and observed (solid line) July temperatures in Tokyo for 1721–1995 (11-year running means) (from Mikami, 2008).
With Kyoto and Japan being almost synonymous with global warming policy, it is more than interesting to note how little “global warming” appears in any of the long-term climate records extracted from historical documents from that area.
Mikami, T. 2008. Climatic variations in Japan reconstructed from historical documents. Weather, 63, 190-193.
Seattle's Fremont Bridge opens to traffic on June 15, 1917.
HistoryLink.org Essay 3129
On June 15, 1917, Seattle's Fremont Bridge, spanning the Lake Washington Ship Canal, opens to traffic. The bridge is built to connect the neighborhood of Fremont with the west side of Lake Union at the base of Queen Anne Hill. The Fremont Bridge is a bascule bridge with counterweight balancing and cantilevered "leafs" (the parts that raise and lower). It is painted blue and orange. The bridge clears the water by 30 feet and has opened and closed its double-leafed gates more than any other Seattle drawbridge. It is one of the busiest bascule bridges in the world.
The Lake Washington Ship Canal was completed in 1917. In the process, the creek connecting Lake Union and Salmon Bay was enlarged and became part of the canal, deep enough and wide enough for oceangoing vessels to enter. The old trestle bridges were taken down and the Fremont Bridge was built. The bridge employed technology developed in Chicago in 1898.
The Bridge Engineer was F. A. Rapp, and the pier design was by D. R. Huntington. The counterweight pits and workings are housed in two concrete piers, each of which has a tower.
In Made to Last: Historic Preservation in Seattle and King County Lawrence Kreisman writes:
"The Fremont community attaches great value to the bridge as an identifiable and historically significant landmark. Bright orange was selected by the Fremont Community Council to distinguish their bridge from other bridges throughout the city, but it faded quickly, and the bridge is now painted blue with orange accents."
The current colors of the bridge were selected by a 1985 poll of Fremont residents and by the Fremont Arts Council. As of 2005, the bridge opens about 35 times a day. The community celebrated its 500,000th opening on September 20, 1991. By January 2006 the bridge had opened for marine traffic some 566,000 times.
Between September 2005 and June 2007, the bridge's 90-year-old approaches were replaced. This required the rerouting of Metro buses, many lane closings, and some bridge closings. The bridge's traffic signals, sidewalks, and curbs were also improved. SDOT reopened Fremont Bridge's four lanes on May 18, 2007. The bridge's mechanical and electrical structures will be renovated during the next year.
Lawrence Kreisman, Made To Last: Historic Preservation in Seattle and King County (Seattle: University of Washington Press, 1999), 72; "Bridges and Roadway Structures," Seattle Department of Transportation website accessed on June 15, 2005 and May 31, 2007 (http://www.ci.seattle.wa.us/transportation/bridges.htm).
Note: This essay was expanded on June 15, 2005.
If it's past 8 p.m. today and there seems to be quite a bit of daylight, there's a reason. If you don't know, today is arguably the longest day of the year, literally. It is the summer solstice or the real first day of summer.
What it means is that our planet's North Pole is angled closer to the sun than at any other time in the calendar year; in the Northern Hemisphere, we get more daylight, and from here on we start heading the other way. The opposite is true in Argentina, in the Southern Hemisphere, where they're experiencing the shortest day of the year.
Annually, thousands of New Agers and neo-pagans greet the early morning sun as it rises above Stonehenge, the ancient circle of pre-historic stones at Salisbury Plain, southern England, marking the summer solstice. For a good understanding of this Stonehedge ritual and what the solstice is all about, National Geographic has a great article.
If you missed the solstice sunrise and want to prepare a local celebration and whoop it up for next year's pagan (or environmental, one-with-nature) ritual, below is a "how to" video that you may want to consider. It could bring the best druid out of you.
|The Inky Fool bids for power.|
Trotsky was born Lev Davidovich Bronshtein and was brought up speaking Ukrainian. In 1902 he adopted the code name, or nom de guerre, of Trotsky, which he seems to have nicked from one of his gaolers.
Stalin was born Ioseb Besarionis dze Jughashvili and was brought up speaking Georgian. After training as a priest he got into communism and adopted the code name, or nom de guerre, of Stalin, which means Man of Steel.
Margaret Thatcher was born Margaret Hilda Roberts, and was brought up speaking English, or as close as they get to that in Lincolnshire. She married Denis Thatcher and adopted the married name, or nom de guerre domestique, of Margaret Thatcher.
On January 24th 1976, a Soviet military propaganda outlet called Krasnaya Zvezda reported on the new leader of the British Conservative Party under the headline Zheleznaya Dama Ugrozhayet, which means Iron Lady Wields Threats. Zheleznaya means Iron and Dama means Lady.
The article claimed (utterly falsely, so far as anybody can tell) that this was how she was referred to in Britain. The article would have died a death, but it was seen by Robert Evans, who was the Reuters Bureau Chief in Moscow. So Evans wrote an article saying that: "British Tory leader Margaret Thatcher was today dubbed ‘the Iron Lady’ by the Soviet Defense Ministry newspaper Red Star." The name caught on in the West, but it was invented in Russia.
What's interesting is that, though the Russian story was hogwash, it would have made perfect sense to a Russian. The Soviet Union had, after all, been ruled for thirty years by The Steel Man, and this, I suspect, was what prompted the (baseless) story. If I'm correct in this reasoning (and it all looks pretty reasonable to me), then the Iron Lady was, essentially, named after Stalin.
Margaret Thatcher was delighted. Here is her reaction a week later.
The physical environment of school buildings and school grounds is a key factor in the overall health and safety of students, staff, and visitors. School buildings and grounds must be designed and maintained to be free of health and safety hazards, and to promote learning. Studies have shown that student achievement can be affected either positively or negatively by the school environment. Policies and protocols must be in place to ensure food protection, sanitation, safe water supply, healthy air quality, good lighting, safe playgrounds, violence prevention, and emergency response, among other issues that relate to the physical environment of schools.
The State Fire Code under RSA 153:5 and the State Building Code under RSA 155 establish the basic requirements for the construction, operation, and maintenance of school buildings. A number of state agencies including the Department of Education, Department of Health and Human Services, Department of Environmental Services, Department of Safety, Department of Labor, and others enforce numerous statutes and administrative rules that address topics such as:
Alcohol, tobacco, and other drugs
Hazardous materials such as asbestos, lead, mercury, radon, etc.
Laboratories and shops
Safe drinking water
Sanitation and housekeeping
School emergency response plans
Standards for school buildings
There are primarily two federal laws pertaining to the physical environment of schools:
The Americans with Disabilities Act (ADA) enforced by the U.S. Department of Justice
The Asbestos Hazard Emergency Response Act (AHERA) enforced by the U.S. Environmental Protection Agency
There are other federal environmental and public health laws that apply to schools. For the most part these have state equivalents that are administered by the appropriate state agencies.
One thing to be noted is that public schools in New Hampshire are not subject to the jurisdiction of the U.S. Occupational Safety and Health Administration (OSHA). Workplace safety for public employees is administered by the NH Department of Labor.
1. Every school should have a health and safety committee made up of:
industrial arts, studio art, and family and consumer science teachers
laboratory science teachers
food service personnel
school resource officer
The committee should develop and ensure the implementation of plans for safe, healthy and well-maintained school buildings and grounds. The committee should be empowered to deal with on-going maintenance and repair issues, as well as on-going and emerging health or safety issues related to the physical environment of schools and school grounds.
2. Every school should practice emergency response drills for a variety of likely hazards and situations.
3. Schools should implement programs to maintain good indoor air quality such as the EPA's Tools for Schools program.
4. School maintenance staff should practice Integrated Pest Management (IPM) and cleaning for health, also known as green cleaning.
5. Schools should use automated systems such as Healthy SEAT and/or a Computerized Maintenance Management System (CMMS) to record and analyze maintenance issues and trends. This may be done at the district level.
6. Schools should establish procedures for managing chemicals used in science classes to include storage, reordering, and disposal.
Americans with Disabilities Act
National Clearinghouse for Educational Facilities
New England Asthma Regional Council
New Hampshire Department of Education Safety Resource Guide
U.S. Environmental Protection Agency
Indoor Air Quality
Indoor Air Quality Design for Schools, U.S. Environmental Protection Agency
IAQs for Schools Tools Program, U.S. Environmental Protection Agency
Maine Indoor Air Quality Council
IAQ Building Education and Assessment Model (I-BEAM), U.S. Environmental Protection Agency
Indoor Air Quality Scientific Findings Resource Bank
Competencies, Knowledge, and Skills of Effective School Nutrition Managers, University of Mississippi
Environmental Issues
Construction Industry Compliance Assistance Center
Hazardous Waste Compliance Section, NH Department of Environmental Services
Environmentally Preferable Purchasing
"Greening Your Purchase of Carpet: A Guide For Federal Purchasers"
"Greening Your Meetings and Conferences: A Guide For Federal Purchasers"
"Greening Your Purchase of Cleaning Products: A Guide For Federal Purchasers"
Additional information is available from your insurance company on many of these topics.
Bureau of School Approval & Facility Management
NH Department of Education
101 Pleasant Street
Concord, NH 03301
Find the value of [expression missing] for [function missing], when [condition missing].
In my book it's already solved; the problem is that I don't understand the steps after a certain point.
After solving we have [equations missing]
After this point/step I don't understand any more. Here is the rest, which I want you people to explain to me: how and why it's done (which is actually my question)...
Also [expressions missing] (the first and second derivatives at 0)
Thus, if n is even [result missing]
and, if n is odd [result missing]
I asked this question on other forums but I didn't get any answer.
Here is a link, please read it: https://www.physicsforums.com/threads/s … ic.873144/
Hi I got a problem
Let a and b be two vectors...
(a x b) = |a| |b| sin(t) n
where t is the angle between the two vectors a and b, and n is the unit vector in the direction given by the right-hand rule.
If I square the equation
(a x b)^2 = |a|^2 |b|^2 (sin(t))^2 n^2
(a x b)^2 = |a|^2 |b|^2 (sin(t))^2 (since n is a unit vector, n^2 = n . n = 1)
Now if I take the square root of both the sides
(a x b) = |a| |b| sin(t) ...which is the problem: the LHS is a vector and the RHS is a scalar.
I think there are some rules for vectors which I didn't follow, maybe... but which? Please explain, and give me some reference where I can read more about this.
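One way to see where the algebra turns: "squaring" a vector is conventionally read as the dot product of the vector with itself, and a dot product is a scalar. On that reading,

```latex
(\mathbf{a}\times\mathbf{b})^{2}
  = (\mathbf{a}\times\mathbf{b})\cdot(\mathbf{a}\times\mathbf{b})
  = \lvert\mathbf{a}\times\mathbf{b}\rvert^{2}
  = \lvert\mathbf{a}\rvert^{2}\lvert\mathbf{b}\rvert^{2}\sin^{2}t,
\qquad
\sqrt{(\mathbf{a}\times\mathbf{b})^{2}}
  = \lvert\mathbf{a}\times\mathbf{b}\rvert
  = \lvert\mathbf{a}\rvert\,\lvert\mathbf{b}\rvert\,\sin t .
```

The square root therefore recovers only the magnitude of a x b; both sides of the last equality are scalars, and the direction n (for which n . n = 1) is simply lost in the squaring, so no vector ever ends up equal to a scalar.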
Well I solved this in two ways; one of them is wrong, or so it seems, I don't know.
First answer is [expression missing],
which is correct, as many books have it.
But if I try to put 1 = tan(pi/4) and then try to solve it, I get something else.
I am not able to understand it; can anybody explain where I am wrong?
Well I did it like this:
By the equation of the sphere it's already given that the sphere is centered at the origin (0,0,0).
Now from there we get that the radius of the sphere is [value missing].
When I solve the equation [missing], my answer is [missing].
And when I solve the equation [missing], my answer is [missing] or [missing].
Now which one is correct? Are both correct?
Now, putting in the different values for "n" we can get the value for theta, put it in the equation, and check. I did just a couple, and I know that if I could graph it, it would show, but which application to use for this kind of job I don't know.
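One freely available option for this kind of check is Python with matplotlib (Desmos or GeoGebra would also do). Since the equation in question did not survive as text, the expression below is only a placeholder; substitute the actual function of theta and read off where the curve crosses zero.

```python
import numpy as np
import matplotlib.pyplot as plt

theta = np.linspace(0, 2 * np.pi, 1000)
f = np.sin(2 * theta) - np.cos(theta)   # placeholder: put the real expression here

plt.plot(theta, f)
plt.axhline(0, linewidth=0.8)           # solutions sit where the curve crosses zero
plt.xlabel("theta (radians)")
plt.ylabel("f(theta)")
plt.show()
```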
Yes stefy, I agree with you, but still we use the approx symbol for that... don't we? In the book they directly used the equals symbol.
Bob the topic was about... calculating vectors...
Their are two force vectors given,first one 70N north and another is 50N south west,so calculate the resulting force and direction.
I tried but I can't solve it...
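A sketch of one standard route (assuming "south west" means exactly 45° between south and west, and resolving into east/north components x and y):

$$\vec F_1=(0,\ 70)\ \text{N},\qquad \vec F_2=50\,(\cos 225^\circ,\ \sin 225^\circ)\approx(-35.36,\ -35.36)\ \text{N}$$

$$\vec R=\vec F_1+\vec F_2\approx(-35.36,\ 34.64)\ \text{N},\qquad |\vec R|=\sqrt{35.36^2+34.64^2}\approx 49.5\ \text{N}$$

$$\tan\alpha=\frac{35.36}{34.64}\ \Rightarrow\ \alpha\approx 45.6^\circ\ \text{west of north}$$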
If a, b are two different values of x lying between 0 and 2 pi (i.e. 0 to 360 degrees) which satisfy the equation 6 cos x + 8 sin x = 9, find the value of sin(a + b).
Well, what I did was turn that equation into a quadratic equation, find the roots, and also use the relation for the sum of the roots. But no luck; my solution is getting nowhere...
here is what I got when I turned it into a quadratic...
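For what it's worth, one route that does go through, assuming the quadratic is taken in t = tan(x/2) (the half-angle substitution):

$$6\cos x+8\sin x=9,\qquad \cos x=\frac{1-t^2}{1+t^2},\ \ \sin x=\frac{2t}{1+t^2}\ \ \Rightarrow\ \ 15t^2-16t+3=0$$

The roots are $t_1=\tan\frac{a}{2}$ and $t_2=\tan\frac{b}{2}$, so $t_1+t_2=\frac{16}{15}$ and $t_1t_2=\frac{1}{5}$. Then

$$\tan\frac{a+b}{2}=\frac{t_1+t_2}{1-t_1t_2}=\frac{16/15}{4/5}=\frac{4}{3},\qquad \sin(a+b)=\frac{2\cdot\frac{4}{3}}{1+\left(\frac{4}{3}\right)^2}=\frac{24}{25}$$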
I will purchase a laptop (Dell Inspiron 14R) with it installed. The overall cost seems OK.
The Nvidia GeForce GT 730M is designed for laptops/notebooks; for desktops there are other series from both AMD and Nvidia, like the Nvidia GTX series, which is best on the desktop platform. For the future I am planning to buy a graphics card from the GTX series for my desktop.
Autoimmune myocarditis is an autoimmune disease that affects the heart. The condition is characterized by inflammation of the heart muscle (myocardium). Some people with autoimmune myocarditis have no noticeable symptoms of the condition. When present, signs and symptoms may include chest pain, abnormal heartbeat, shortness of breath, fatigue, signs of infection (i.e. fever, headache, sore throat, diarrhea), and leg swelling. The exact underlying cause of the condition is currently unknown; however, autoimmune conditions, in general, occur when the immune system mistakenly attacks healthy tissue. Treatment is based on the signs and symptoms present in each person. In some cases, medications that suppress the immune system may be recommended.
Last updated: 2/23/2016
- Wai Hong Wilson Tang, MD. Myocarditis. Medscape Reference. September 2014; http://emedicine.medscape.com/article/156330-overview.
- Caforio AL, Marcolongo R, Jahns R, Fu M, Felix SB, Iliceto S. Immune-mediated and autoimmune myocarditis: clinical presentation, diagnosis and management. Heart Fail Rev. November 2013; 18(6):715-732.
- Mayo Clinic has an information page on Autoimmune myocarditis.
- MedlinePlus was designed by the National Library of Medicine to help you research your health questions, and it provides more information about this topic.
- The National Organization for Rare Disorders (NORD) is a federation of more than 130 nonprofit voluntary health organizations serving people with rare disorders. Click on the link to view information on this topic.
- Medscape Reference provides information on this topic. You may need to register to view the medical textbook, but registration is free.
- PubMed is a searchable database of medical literature and lists journal articles that discuss Autoimmune myocarditis. Click on the link to view a sample search on this topic.
BOSNIA: How the war started
BY: Andy Wilcoxson
On March 18, 1992, Alija Izetbegovic (Bosnian-Muslim leader), Mate Boban (Bosnian-Croat leader), and Radovan Karadzic (Bosnian-Serb leader) all reached an agreement on the peaceful secession of Bosnia & Herzegovina from Yugoslavia.
The Agreement was known as the Lisbon Agreement (it is also known as the Cutileiro Plan). The agreement called for an independent Bosnia divided into three constituent and geographically separate parts, each of which would be autonomous. Izetbegovic, Boban, and Karadzic all agreed to the plan, and signed the agreement.
The agreement was all set; internal and external borders and the administrative functions of the central and autonomous governments had all been agreed upon. The threat of civil war had been removed from Bosnia, that is, until the U.S. Ambassador Warren Zimmerman showed up.
On March 28, 1992, ten days after the agreement was reached that would have avoided war in Bosnia, Warren Zimmerman showed up in Sarajevo and met with the Bosnian-Muslim leader, Alija Izetbegovic. Upon finding that Izetbegovic was having second thoughts about the agreement he had signed in Lisbon, the Ambassador suggested that if he withdrew his signature, the United States would grant recognition to Bosnia as an independent state. Izetbegovic then withdrew his signature and renounced the agreement.
After Izetbegovic reneged on the Lisbon Agreement, he called a referendum on separation that was constitutionally illegal. On the second day of the referendum there was a Muslim-led attack on a Serb wedding. But the real trigger was Izetbegovic announcing a full mobilization on April 4, 1992. He could not legally do that without Serb & Croat consent, but he did it anyway. That night terror reigned in Sarajevo. The war was on.
The Bosnian war was ugly and extremely bloody. People were maimed and killed in bloody inner-city battles that left over half a million people dead.
The United States likes to point to Bosnia as a shining example of where it helped Muslims. It is true that the United States armed the Muslims in Bosnia. But, after many thousands of deaths and massive destruction throughout Bosnia, the Muslims were afforded by the terms of the Dayton Accords, less territory than they had been guaranteed by the Lisbon Agreement, which the United States urged the Muslim leader to reject.
The bottom line here is that this war didn't have to happen at all. Nobody had to die in Bosnia. If Ambassador Zimmerman had just left Izetbegovic alone, then none of this would have happened to begin with. It's that simple. The blame for all of the death and destruction associated with the Bosnian war lies exclusively with Alija Izetbegovic for starting the war, and with the U.S. President for sending that idiot Zimmerman to Bosnia in the first place.
Montreal, December 10th, 2008 - Insufficient vitamin D can stunt growth and foster weight gain during puberty, according to a new study published in the Journal of Clinical Endocrinology & Metabolism. Even in sun-drenched California, where scientists from the McGill University Health Centre (MUHC) and the University of Southern California conducted their study, vitamin D deficiency was found to cause higher body mass and shorter stature in girls at the peak of their growing spurt.
While lack of vitamin D is common in adults and has been linked to diseases such as osteoporosis, cancer and obesity, until this study, little was known about the consequences of insufficient vitamin D in young people. The research team measured vitamin D in girls aged 16 to 22 using a simple blood test (25-hydroxy vitamin D). They also assessed body fat and height to determine how vitamin D deficiency could affect young women's health.
"The high prevalence of vitamin D insufficiency in young people living in a sun-rich area was surprising," says study lead author, Richard Kremer, co-director of the Musculoskeletal Axis of the MUHC. "We found young women with vitamin D insufficiency were significantly heavier, with a higher body mass index and increased abdominal fat, than young women with normal levels."
Vitamin D fosters growth, healthier weight
The researchers examined 90 Caucasian and Hispanic girls and discovered that young women with normal vitamin D levels were on average taller than peers deficient in vitamin D. Yet in contrast to what's been previously reported in older women, their investigation found no association between lack of vitamin D and bone strength.
"Although vitamin D is now frequently measured in older adults, due to a higher level of awareness in this population, it is rarely measured in young people especially healthy adolescents," says Dr. Kremer.
"Clinicians need to identify vitamin D levels in younger adults who are at risk by using a simple and useful blood test," says the co-author, Dr. Vicente Gilsanz, head of musculoskeletal imaging at the Children's Hospital Los Angeles of the University of Southern California.
"Because lack of vitamin D can cause fat accumulation and increased risk for chronic disorders later in life, further investigation is needed to determine whether vitamin D supplements could have potential benefits in the healthy development of young people," added Dr. Gilsanz.
Contact: Isabelle Kling, McGill University Health Centre
A commonly held belief that global warming will diminish oxygen concentrations in the ocean looks like it may not be entirely true. According to new research published in Science magazine, just the opposite is likely the case in the eastern tropical northern Pacific, with its anoxic zone expected to shrink in coming decades because of climate change.
An international team of scientists came to that surprising conclusion after completing a detailed assessment of changes since 1850 in the eastern tropical northern Pacific Ocean's oxygen minimum zone (OMZ). An ocean layer beginning typically a few hundred to a thousand meters below the surface, an OMZ is by definition the zone with the lowest oxygen saturation in the water column. OMZs are a consequence of microbial respiration and can be hostile environments for marine life.
Using core samples of the seabed in three locations, the scientists measured the isotopic ratio of nitrogen-15 to nitrogen-14 in the organic matter therein; the ratio can be used to estimate the extent of anoxia in these OMZs. The core depth correlates with age, giving the team a picture of how the oxygen content varied over the time period.
From 1990 to 2010, the nitrogen isotope record indicates that oxygen content steadily decreased in the area, as expected. But before that, and particularly clearly from about 1950 to 1990, oceanic oxygen steadily increased, which, according to co-author Robert Thunell, a marine scientist at the University of South Carolina, runs counter to conventional wisdom.
"The prevailing thinking has been that as the oceans warm due to increasing atmospheric greenhouse gases, the oxygen content of the oceans should decline," Thunell says. "That's due to two very simple processes.
"One, as water becomes warmer, the solubility of oxygen decreases in it, so it can hold less oxygen. And two, as the surface of the ocean warms, its density decreases and the oceans become more stratified. When that happens, the surface waters that do have oxygen don't mix down into the deeper waters of the ocean."
But that just covers the supply side of oxygen in the ocean, Thunell says. Just as important is the oxygen demand, particularly for the degradation of sinking organic matter.
Phytoplankton grow in surface waters, and they are the primary producers of this organic matter. After they die, their detritus slowly sinks from the surface to the sea floor, and there is a layer in the water column, the OMZ, where microbes consume much of the detritus, a process that depletes oxygen through bacterial respiration.
The extent of oxygen deprivation in the OMZ largely reflects how much phytoplankton is being produced on the surface, Thunell says. Plenty of phytoplankton production at the surface means less oxygen underneath.
And that, the team thinks, is why the oxygen concentrations in the Pacific Ocean so clearly increased from 1950 to 1990. Phytoplankton production is enhanced by strong winds (because they cause upwelling of nutrients from deeper waters) and diminished by weaker winds, and the scientists found evidence that trade winds were weaker then.
Looking at two different measures of wind intensity (the East-West difference in sea level pressure and the depth of the thermocline) over the time periods involved, they conclude that trade winds were diminishing over the course of 1950 to 1990, but then picked up from 1990 to 2010.
They're not sure why wind strength increased around 1990, but think it may be related to the Pacific Decadal Oscillation. "A lot of people are familiar with ENSO, or El Nino, which is a kind of interannual climate variability," Thunell says. "The Pacific Decadal Oscillation is analogous to a super-ENSO, but one that's varying on decadal time scales."
Over the course of coming decades, though, trade wind speed is expected to decrease from global warming, Thunell says, and the result will be less phytoplankton production at the surface and less oxygen utilization at depth, causing a concomitant increase in the ocean's oxygen content.
"That has some important implications for fisheries," he says. "One of the issues over the past 20 to 30 years is that oxygen has been declining and these oxygen minimum zones have been expanding, which could have a negative impact on fisheries.
"But if the last 20 to 30 years are not the norm because of these unusually strong trade winds, then there won't necessarily be that impact on the fisheries. If the trend reverses, and we go back to weaker trade winds -- as people predict will happen because of the warming oceans -- then the decrease in oxygen in the oceans that we've been seeing may be reversed."
It's a matter of both supply and demand.
By Stephen P. Ryder
Patricia Cornwell's upcoming book, Portrait of a Killer: Jack the Ripper - Case Closed is without a doubt the single most publicized book ever released in the history of the genre. As such, it has the potential to impress the minds of millions worldwide with certain ideas about the Ripper crimes which are, unfortunately, largely inaccurate. The purpose of this primer is not necessarily to refute Ms. Cornwell's theory, but to provide an easy-to-follow factual guide for readers new to the subject who would like to know more about the facts presented in her book.
Those who have not yet read the book are hereby warned that the following primer may include spoilers.
Concept #1: Jack the Ripper sent taunting letters to the police and media during the murder spree.
This concept is an important one, and it forms the foundation of Ms. Cornwell's thesis, namely, that Walter Sickert was Jack the Ripper. Cornwell's team of forensic scientists found a sequence of mitochondrial DNA (mtDNA) on several "Ripper letters" which matched sequences found on several letters written by Sickert. Specific watermarks were also matched in both Sickert's letters and those sent to the police and media. If these findings are accurate, then, according to Cornwell, Walter Sickert was Jack the Ripper.
Unfortunately, its not quite that simple.
Fact #1: Most, if not all, of these "Ripper" letters are considered to be hoaxes.
The Metropolitan and City Police, as well as several news agencies, did indeed receive hundreds of letters in 1888 and the years following, in which the author either claimed to have been the Whitechapel murderer or offered advice on how to catch the killer. The exact number of these letters is unknown, but there are approximately 600 in total known to survive today. Indeed, Scotland Yard claims to have received letters like this up until the mid-1960s. Many of these letters were mailed from London itself, but many more were mailed from various portions of the U.K., as well as from France, the United States, Australia, South Africa, and many other foreign countries.
Obviously, one person could not have written all or even most of these letters. Indeed, the handwriting, grammar, spelling and tone of these communications vary greatly from one letter to another. Some are written in a shaky scrawl, others in a fine penmanship. Some are riddled with spelling errors and appear to have been the work of an illiterate. Others are thoughtfully written - in some cases, downright poetic. Several modern works have addressed this voluminous correspondence, including "The Whitechapel Murders Papers," a set of microfiche archiving hundreds of these letters, and more recently Jack the Ripper: Letters from Hell, a lavishly illustrated book by Stewart P. Evans and Keith Skinner.
The fact was that hoaxing Ripper letters in 1888 had become something of a sick, national past-time. Today we suspect that certain journalists working within the Central News Agency may have been responsible for hoaxing the first few, widely-publicized "Jack the Ripper" letters (namely the "Dear Boss" letter, and possibly the "Saucy Jacky" postcard). More than one police official publicly stated that a journalist named Tom Bulling was believed responsible for the original hoaxes. These early letters were reproduced on broadsheets and in newspapers, and fueled the panic into a near frenzy. Unfortunately, they also seemed to have struck a chord in countless individuals who soon after felt compelled to send in hoaxes of their own - in many cases, modeled after those original letters sent to the Central News Agency. Crass attempts at copying the grammar of those letters resulted in the continual reappearance of phrases like "Dear Boss", "Yours Truly", "I love my work" and, of course, the name "Jack the Ripper."
We know of at least two people who were arrested and charged for hoaxing Ripper letters. Interestingly enough, both were women. The first was a woman from Bradford named Maria Coroner, the second was a Miriam Howells from Penrhiwceiber. There were, of course, countless others.
Only one letter, interestingly enough not signed "Jack the Ripper," is considered to be possibly written by the murderer. Called today the "From Hell" letter, it was received by George Lusk approximately three weeks after the murder of Catherine Eddowes, accompanied by a small piece of human kidney. The author of the letter claimed the kidney was from Eddowes, and that he "fried and ate" the other half. One of her kidneys was indeed removed and taken away by her killer, so on the face of it, it would appear the letter has to be genuine. However, medical tests could not conclusively state that the kidney enclosed was indeed the same as the one taken from Eddowes three weeks before. The possibility remains that the From Hell letter and kidney were the result of misguided prank by a medical student (a fairly common thing in those days).
The "Openshaw" letter, which according to Cornwell provided similar mtDNA sequences and watermarks to those found in Sickert's correspondence, has never been considered a genuine Ripper letter by any serious author or researcher. It was sent to Dr. Thomas Horrocks Openshaw, who was widely publicized in the newspapers for having examined the piece of kidney enclosed along with the "From Hell" letter.
In the end, Ms. Cornwell's mtDNA and watermark evidence (which we will discuss a little later) may indicate that Walter Sickert hoaxed one or more Ripper letters, but that remains a far cry from conclusively linking Sickert to the Ripper murders. The only Ripper letter with a strong possible connection to the true murderer is the "From Hell" letter, and Cornwell found no links in the text of this letter between the author and Walter Sickert. (The original no longer survives, however, so DNA/watermark testing would have been impossible.)
Concept #2: Patricia Cornwell has found conclusive DNA evidence linking Walter Sickert to one or more "Jack the Ripper "letters.
Patricia Cornwell's forensics team performed DNA testing on the backs of envelopes and stamps from the "Ripper" correspondence, as well as from Sickert's own correspondence. It should be stated from the beginning that DNA testing of material over a century old has never before been done. Nuclear DNA tests - the usual form of DNA testing - came back negative. The forensics team then attempted mitochondrial DNA (mtDNA) testing, which provided some results. Similar "sequences" of mtDNA were found in both the "Ripper" correspondence and the Sickert correspondence.
Fact #2: The mtDNA results do not state that Walter Sickert was the author of those Ripper letters. They state only that the person who left DNA on Sickert's correspondence can not be eliminated from the percentage of the U.K. population who could have provided an mtDNA match. Walter Sickert's DNA no longer exists - he was cremated after his death.
Before we begin to discuss the actual interpretation of the mtDNA evidence, it is important to understand that the documents being tested were in most cases over a hundred years old. Most, if not all of them have been handled countless times by family members, archivists and researchers over the years, and so DNA contamination can be considered a serious problem. Little mention of this possible contamination is made in Cornwell's book.
mtDNA found on any particular letter may not necessarily have come from the author. Aside from possible contamination, we do not know for sure that Sickert licked his own stamps and envelopes. This may seem a silly point, but as Cornwell herself states, it was common practice in Victorian times to use a moist sponge for the practice, for fear of germs and bacteria. Also, if it is true as Cornwell herself suggests, that Sickert had several of these letters mailed for him by other people, then it must be taken into consideration that the envelopes and stamps may have been moistened by someone else's saliva.
So already there are several points of contention which may or may not invalidate any form of DNA testing on stamp/envelope residue, since there is no concrete proof that DNA left on those letters actually came from Walter Sickert. But let's ignore that for now and examine the mtDNA results themselves.
mtDNA is different from nuclear DNA in that it is transmitted matrilineally. That means that a child inherits his or her mtDNA directly from the mother - none of the father's mtDNA is replicated in his children. This is important when attempting to find DNA matches between parents and children, or between siblings, but in our case, Cornwell is simply trying to match Walter Sickert to Walter Sickert, so it shouldn't matter. mtDNA testing is used by many forensics labs for identification and should be considered as valid a method as nuclear DNA testing. It is, however, a much less specific method of testing - mtDNA, unlike nuclear DNA, is not unique. Finding an mtDNA match between two samples does not mean that one person left both, but that only a certain percentage of the population could have left both.
The mtDNA testing done by Cornwell's team found similar "sequences" of mtDNA. What does that mean? No one mtDNA sequence is unique. An mtDNA sequence found in Person A may also be found in Persons B and C, regardless of whether or not they are related by blood. This is similar, for example, to blood typing - persons completely unrelated to each other and living on opposite sides of the planet can still have the same blood type. In this case, the mtDNA "sequences" found indicate, according to Cornwell, that only 1% of the population of the U.K. could have left the DNA found on those Ripper letters, and that the person who left DNA on Sickert's correspondence was a member of that 1% population. (Other DNA experts, when asked to comment on this analysis, state that the actual percentage could range anywhere between 10% and 0.1% of the contemporary population). In 1901 there were nearly 40 million people in the United Kingdom. That means that Sickert, if we assume it is his mtDNA that was found, and that Cornwell's figure of 1% is correct, was one of approximately 400,000 people whose mtDNA shared those same sequences.
This is certainly not conclusive evidence - it would never stand up in a modern-day courtroom - but it is, as Cornwell says, "a cautious indicator." Still, it should be noted that her own DNA specialist said that it could very well be just a matter of coincidence.
In the end, although the evidence is certainly not iron-clad, it can be considered suggestive. Ignoring the various pitfalls of contamination and provenance, the mtDNA evidence does show that Walter Sickert can not be eliminated from suspicion of having written, or hoaxed, one or more Ripper letters.
Concept #3: Patricia Cornwell has found similar watermarks, as well as similar names, phrases and drawings in both the "Ripper" correspondence and the Sickert letters.
Ms. Cornwell notes that three distinct watermarks are found in both the Ripper and the Sickert letters - A. Pirie & Sons, Joynston Superfine and Monckton's Superfine. She also notes that the name "Nemo" is found in several Ripper letters. "Nemo" was a favorite stage-name used by Walter Sickert in his acting days. Similar phrases and doodles appear similar as well.
Fact #3: Considering the incredibly vast number of documents within the "Ripper" correspondence, the laws of chance dictate that eventually, coincidental similarities will appear.
Remember that Ms. Cornwell had approximately 600 "Ripper" letters to sift through during her research, as well as many hundreds of documents belonging to Sickert. Eventually, if one is persistent enough, one will find similarities.
This is a common pitfall in Ripperology. Countless times, researchers have started out with a suspect in mind, and then scanned the Ripper correspondence for evidence that might link it with their suspect. If their suspect was a doctor, they had a dozen or so letters from people claiming to be a doctor. If their suspect was from Liverpool, they had a handful of letters post-marked from Liverpool. If their suspect was an American, they could find several letters containing "American" words and phrases.
In the case of the watermarks, journalist David Cohen made an interesting discovery. He spoke with Nigel Roche, curator of the St. Bride Printing Library in London, who told him that there were only around 90 paper mills operating in the U.K. in the late 1880s. Pirie was one of the largest. And while the Joynston/Monckton's brands may not have been as popular as Pirie, we are still talking about a very small number of possible paper manufacturers for the time period. If you examine 600 different contemporary documents, eventually you'll likely find examples from most of those mills.
I will address the use of the name "Nemo" separately. Nemo, roughly translated from Latin, means "No-one." To sign a letter or an editorial "Mr. Nemo" was, in effect, to sign it "Mr. No-one." This was simply a variation on signing "Anonymous" that just happened to be in vogue in the late 1800s. That it was used in several Ripper letters, as well as by Sickert, is indicative not of a link between the two, but rather of the popularity of the trend itself.
Similarly, Ms. Cornwell misinterprets the "Ha! Ha!" frequently found in the Ripper correspondence as a "peculiarly American laugh." She suggests this was Sickert mimicking his ex-role-model Whistler's infamous laugh. But as David Cohen points out in an article in Slate magazine, the Oxford English Dictionary dates the use of "Ha Ha" back to Old English. Thomas Carlyle, a Scotsman by birth and near-contemporary of Sickert's, is noted by Webster's Dictionary to have been known for his "Ha Ha"s.
Other suggested links - names such as Scotus, Mathematicus, as well as doodles which appeared similar to others known to have been drawn by Sickert - are far too numerous to cover individually. Most or all of them, however, can be considered to be highly subjective in nature, not constituting solid evidence whatsoever.
Concept #4: Walter Sickert's paintings and etchings reveal knowledge of the Ripper crimes.
I will not spend too much time on this aspect, as the artistic interpretation is highly subjective and people will tend to see what they want or expect to see, regardless of what others may argue. That Walter Sickert on some occasions portrayed scenes of violence and murder, at least in his drawings and etchings, is undeniable - he was intensely interested in true crime and mystery. He may even have based one or more of his etchings on a Ripper murder, though there is no strong evidence to support this. But nothing found in any of his works is indicative of crime-scene knowledge that could have been known only by the killer.
Ms. Cornwell compares the positioning of several women in Sickert's paintings to the positioning of the women (notably, Catherine Eddowes) in the now-famous mortuary photographs. Unfortunately, these photographs were indeed taken at the mortuary - not at the crime scene. Presumably the killer was not at the mortuary. Also, as Wolf Vanderlinden points out, copies of the Mary Kelly photo, and the Eddowes mortuary photograph were first published for general consumption in France in 1899, in Lacassagne's Vacher l'Eventreur. Sickert may very well have seen these photos during one his many trips to France, and used them either consciously or subconsciously in his later work. Or, as seems more likely, the perceived similarities may just be a matter of coincidence.
Other comparisons, such as a pearl necklace representing a slit throat with "beads" of blood, are highly speculative and subjective.
Concept #5: Walter Sickert was impotent, childless, and had a fistula on his penis.
Patricia Cornwell states that Walter Sickert had no children, and was most likely impotent due to several surgeries early in life to correct a "fistula of the penis." This impotence she believes was a major cause of his intense hatred of women, and possibly spurred him on to commit the Ripper murders. Serial killers today are often found to be impotent - the act of murder becomes their only means of sexual fulfillment.
Fact #5: There is no evidence whatsoever that Sickert's fistula was on his penis. There is, to the contrary, abundant evidence that Sickert was quite a virile man who possibly sired several illegitimate children.
The only source for Walter Sickert's penis fistula is the testimony of his nephew-by-marriage, John Lessore - now an elderly man. (Mr. Lessore has since clarified that this was only "family hearsay" and that he was not absolutely sure). Although it is clear that Sickert did have a fistula of some sort, there is no documentary evidence to suggest that it was on his penis. Indeed, the fact that he was treated by Dr. Alfred Duff Cooper of St. Mark's Hospital suggests otherwise. Both Dr. Cooper and the Hospital were known for performing surgery of the rectum, anus and vagina. No records suggest at all that they were ever involved in, or qualified to perform, surgeries of the penis. The evidence here suggests that Sickert's fistula was on his rectum or anus.
Assuming for a moment that the fistula was on his penis, there is still no evidence to suggest that he was impotent because of it. He was rumored to have sired at least one child by his Dieppe mistress, Mme. Villain, and a man named Joseph "Hobo" Sickert still contends to this day that he was Walter's illegitimate son. Indeed, a close friend, Jacques-Emile Blanche described Sickert in 1902 as an "immoralist... with a swarm of children of provenances which are not possible to count." Sickert was known to have had several mistresses, and was cited as being an adulterer by his first wife. All of this is copious evidence that Walter was perfectly able to have regular sex.
Concept #6: No evidence exists to indicate that Walter Sickert was anywhere but in London during the canonical Ripper murders between August and November 1888.
Ms. Cornwell repeatedly asserts that although she could not prove that Walter Sickert was in London during the autumn of terror, she could not prove that he wasn't in London either. She found that Sickert's whereabouts were entirely unknown on the dates of each murder.
Fact #6: There are several independent sources of evidence that indicate Walter Sickert was in France between August and October, 1888.
Cornwell admits to a single letter written by Sickert from France during the autumn of 1888. The letter, as she states, was undated, and no envelope or postmark survives to confirm the actual date it was sent. Nevertheless, the content of the letter obviously indicates that Sickert was in France at this time. "This is a nice little place to sleep & eat in," he writes. Cornwell claims that since there is no post-mark, it is impossible to state for certain the point of origination of this letter.
While that technically may be true, there are several other pieces of evidence that independently corroborate Sickert's time in France during that autumn. Sickert's biographer, Matthew Sturgis, recently elaborated on this evidence in an article in the Sunday Times (3 November 2002). According to Sturgis, although the exact date Sickert left for France can not be determined, he apparently departed sometime in mid-August. His last London sketch is dated August 4th, and there are no sources to indicate that he was in London after that date. On September 6th, Sickert's mother wrote from St. Valéry-en-Caux, describing how Walter and his brother Bernhard were having such a "happy time" swimming and painting there. A letter sent by a French painter, Jacques-Emile Blanche, to his father described a visit with Sickert on September 16th. Walter's wife Ellen wrote to her brother-in-law on September 21st, stating that her husband was in France for some weeks now.
There is evidence to suggest that Sickert stayed in the Dieppe area at least until early October, 1888. He painted a local butcher's shop, "flooded with sunlight" in a piece he titled The October Sun.
Although any one of these several bits of evidence could feasibly be ignored or explained away, the combination of all these independent sources confirming the same thing - namely, that Sickert was in France at the time of the Nichols, Chapman, Stride and Eddowes murders - suggests that Sickert could not have been the killer. While it is true that ferry service between England and France was widely available, and technically Sickert could have travelled back and forth before and after each murder, that is pure speculation and there is no evidence to suggest this was the case.
So what does that leave us with? I think at best we can say that Cornwell has found some very interesting connections between Sickert and some of the Ripper correspondence, which is certainly worthy of further investigation. For this she should certainly be applauded. Ripper researchers have wondered for years whether or not it was possible to extract useable DNA from any of the extant documents, and thanks to Patricia Cornwell's research we now know that it is, apparently, possible. Her finding of a possible link between Sickert and the Openshaw letter is an important discovery, and, if confirmed, would add a third member to our list of known "Ripper hoaxers."
However, there remains as yet no concrete evidence that definitively connects Sickert with the Ripper letters, and, even if there was, that remains a far cry from being able to name Sickert as the Ripper himself. Cornwell's findings in no way should be considered sufficient evidence that the case is solved "100%". No jury, today or in 1888, would ever convict Sickert on the basis of her findings.
Many thanks to Paul Begg and Stewart Evans for clearing up several questions along the way. Please feel free to email me at email@example.com with questions or comments.
Philadelphia, PA, November 28, 2012 - Medication development efforts for cocaine dependence have yet to result in an FDA approved treatment. The powerful rewarding effects of cocaine, the profound disruptive impact of cocaine dependence on one's lifestyle, and the tendency of cocaine to attract people who make poor life choices and then exacerbate impulsive behavior all make cocaine a vexing clinical condition.
In this battle, many candidate pharmacotherapies have been tested, but none have succeeded sufficiently to be adopted widely. Perhaps like cancer, heart disease, and AIDS, cocaine dependence is a disorder that requires combinations of medications for effective treatment.
In this issue of Biological Psychiatry, researchers from Columbia University and New York State Psychiatric Institute report a step forward in this effort. They tested a medication approach that unites two themes in addiction research - amphetamine and topiramate.
There are clues that stimulants, like amphetamine, methylphenidate, and modafinil, reduce reward dysfunction and deficits in executive cognitive control mechanisms associated with addiction. This approach fits with the "self-medication" hypothesis of addiction, which suggests that some people use drugs to treat symptoms that lead them to addiction or that emerge as a consequence of addiction.
There is also evidence that topiramate may be the most effective current pharmacotherapy for alcoholism. There are gaps in our understanding of exactly how topiramate works to combat addiction, but it shows signs of efficacy in animal models of stimulant addiction. In a recent large study of methamphetamine addiction, it appeared to reduce the intensity of methamphetamine use.
Using this knowledge as building blocks, Mariani and colleagues set out to test a combination of mixed amphetamine salts and topiramate for the treatment of cocaine dependence. They recruited cocaine-dependent treatment-seeking adults who were randomized to receive either the combination treatment or a placebo for twelve weeks. It was conducted as a double-blind study, using matching capsules, so that neither participants nor the research staff knew which treatment each individual was receiving.
They found that the participants receiving the combination treatment achieved three weeks of continuous abstinence from cocaine at a rate twice that of placebo (33% versus 17%). There was a significant moderating effect of the total number of cocaine use days, which suggests that the combination treatment was most effective for participants with a high baseline frequency of cocaine use.
"The combination of mixed amphetamine salts and topiramate appears promising as a treatment for cocaine dependence," said the authors. "The positive results observed in this study need to be replicated in a larger, multicenter clinical trial. The findings also provide encouragement for the strategy of testing medication combinations, rather than single agents, for cocaine dependence."
Biological Psychiatry Editor Dr. John Krystal agreed, adding that "the challenge of developing pharmacotherapies for cocaine is daunting. Yet, this combination therapy approach is a promising new strategy."
The article is "Extended-Release Mixed Amphetamine Salts and Topiramate for Cocaine Dependence: A Randomized Controlled Trial" by John J. Mariani, Martina Pavlicova, Adam Bisaga, Edward V. Nunes, Daniel J. Brooks, and Frances R. Levin (doi: 10.1016/j.biopsych.2012.05.032). The article appears in Biological Psychiatry, Volume 72, Issue 11 (December 1, 2012), published by Elsevier.
Notes for editors
Full text of the article is available to credentialed journalists upon request; contact Rhiannon Bugno at +1 214 648 0880 or Biol.Psych@utsouthwestern.edu. Journalists wishing to interview the authors may contact Frances Levin at +212 543-5896 or firstname.lastname@example.org.
The authors' affiliations, and disclosures of financial and conflicts of interests are available in the article.
John H. Krystal, M.D., is Chairman of the Department of Psychiatry at the Yale University School of Medicine and a research psychiatrist at the VA Connecticut Healthcare System. His disclosures of financial and conflicts of interests are available here.
About Biological Psychiatry
Biological Psychiatry is the official journal of the Society of Biological Psychiatry, whose purpose is to promote excellence in scientific research and education in fields that investigate the nature, causes, mechanisms and treatments of disorders of thought, emotion, or behavior. In accord with this mission, this peer-reviewed, rapid-publication, international journal publishes both basic and clinical contributions from all disciplines and research areas relevant to the pathophysiology and treatment of major psychiatric disorders.
The journal publishes novel results of original research which represent an important new lead or significant impact on the field, particularly those addressing genetic and environmental risk factors, neural circuitry and neurochemistry, and important new therapeutic approaches. Reviews and commentaries that focus on topics of current research and interest are also encouraged.
Biological Psychiatry is one of the most selective and highly cited journals in the field of psychiatric neuroscience. It is ranked 5th out of 129 Psychiatry titles and 16th out of 243 Neurosciences titles in the Journal Citations Reports® published by Thomson Reuters. The 2011 Impact Factor score for Biological Psychiatry is 8.283.
Elsevier is a world-leading publisher of scientific, technical and medical information products and services. The company works in partnership with the global science and health communities to publish more than 2,000 journals, including The Lancet and Cell, and close to 20,000 book titles, including major reference works from Mosby and Saunders. Elsevier's online solutions include SciVerse ScienceDirect, SciVerse Scopus, Reaxys, MD Consult and Nursing Consult, which enhance the productivity of science and health professionals, and the SciVal suite and MEDai's Pinpoint Review, which help research and health care institutions deliver better outcomes more cost-effectively.
A global business headquartered in Amsterdam, Elsevier employs 7,000 people worldwide. The company is part of Reed Elsevier Group PLC, a world-leading publisher and information provider, which is jointly owned by Reed Elsevier PLC and Reed Elsevier NV. The ticker symbols are REN (Euronext Amsterdam), REL (London Stock Exchange), RUK and ENL (New York Stock Exchange).
The history of three towns—Harrisburg, Silver Reef, and Leeds—is intricately connected. Harrisburg and Silver Reef are ghost towns today, while Leeds persists. Like many locations in the arid west, water and its availability and accessibility was the determining factor in whether a town lived or withered away.
The first settlement in the area was Harrisburg, founded in 1861 by Moses Harris and a few Mormon families who settled along Quail Creek. Despite their efforts in digging a 5-mile-long irrigation canal along what is now known as Leeds Creek, growth was hampered by rocky soil and limited land available for farming. By 1876 Harrisburg was losing population and essentially failing. Today, remnants of a few pioneer homes and the restored Adams House are all that remain of Historic Harrisburg.
About the same time Leeds was settled, silver was discovered on the White Reef. This reef, an upturned sandstone ledge, parallels I-15 from Harrisburg to a point north of Leeds. Miners and immigrants, including many of Irish, Cornish, and Chinese origin, rushed to the area with the hope of making their fortunes. The boomtown of Silver Reef sprang up about a mile north of Leeds, and by 1878 was a considerably larger community than either diminishing Harrisburg or the growing farming community of Leeds. At its height, Silver Reef boasted nearly a dozen mines and six ore processing mills, plus retail stores, saloons, hotels, banks, a school, Wells Fargo express office, theater company, and other urban amenities. Leeds and Silver Reef were a study in contrasts. Despite great differences in ethnicity, religion, and culture, the mining boomtown and its agricultural neighbor formed a mutually dependent relationship. The miners at Silver Reef were sustained by produce from Leeds, and Leeds farmers flourished with cash from the miners for their crops. By 1900 Silver Reef had died as the most easily accessible silver ore had been mined and the price of silver plummeted; however, the farming community of Leeds survived.
By 1867 the Harrisburg pioneers realized that a place called “Road Valley,” just to the north, was more suitable for diverting water and cultivating farmland. Amidst controversy, but with direction from Mormon leader Erastus Snow, many families moved from Harrisburg to Road Valley. An irrigation ditch was dug and water was brought to the site. The town was organized on December 1, 1867, and named Bennington, in honor of the town’s bishop, Benjamin Stringham. Bishop Stringham later requested that the town be named after Leeds, England, where he had served as a Mormon missionary. In May of 1869, Bennington became Leeds.
Washington County Historical Society
Alberta Clean Power Seen Cheaper Than Coal Next Decades
Alberta, which relies on coal to generate about half its power, would see electricity rates rise more slowly in coming decades if use of renewable energy increased, according to a study published today.
Prices for electricity would be 4 percent lower by 2033 with a transition to more wind, solar and hydroelectric power than a persistent reliance on coal and natural gas, according to a report by Calgary-based environmental research firm Pembina Institute and Clean Energy Canada, a Vancouver-based organization that promotes renewable energy. The price per kilowatt-hour in Calgary has averaged less than 10 Canadian cents (9 U.S. cents) in the past decade.
The Canadian province, which holds the world’s third-largest crude reserves, is reviewing renewable-energy policies as exports from its oil sands face increasing opposition from environmental groups and lawmakers in the U.S. and Europe. The pressure to curb emissions from Alberta’s bitumen production threatens U.S. approval of TransCanada Corp.’s (TRP) proposed Keystone XL pipeline linking the oil sands to the Gulf Coast.
Alberta, which boasts Canada’s sunniest weather and only harnesses about 1 percent of its potential wind power, has eliminated most incentives for renewable energy in past decades, making it difficult for investors to compete with fossil-fuel generation, the report said.
“Alberta could cut its reliance on high-polluting energy dramatically,” said Ben Thibault, electricity program director at Pembina and co-author of the report. “The lack of a renewable policy framework has been a real barrier.”
The province, whose farmers pioneered wind-power development in Canada two decades ago, is working on a new renewable energy policy, Energy Minister Diana McQueen said last month.
A 2013 NRG Research Group poll found that 68 percent of Albertans want coal plants phased out or shut down and replaced with natural gas and renewable energy, the report said.
A separate poll by Oraclepoll Research found that a majority of Albertans also are willing to pay higher prices for electricity generated by wind and solar sources.
Alberta electricity generators are focused on increasing the use of gas to meet growing demand for power in the province. The fossil fuel is trading at about a third of its 2008 peak, helping to boost its adoption in North America.
Natural gas for June delivery rose 11.4 cents, or 2.5 percent, to $4.619 per million British thermal units at the close on the New York Mercantile Exchange today, the highest settlement since May 7.
To contact the reporter on this story: Jeremy van Loon in Calgary at firstname.lastname@example.org
To contact the editors responsible for this story: Susan Warren at email@example.com Carlos Caminada, Stephen Cunningham
“The multiple human needs and desires that demand privacy among two or more people in the midst of social life must inevitably lead to cryptology wherever men thrive and wherever they write.” wrote David Kahn in his book “The Codebreakers”, chronicling the history of cryptography. The book was published in 1967. Almost 45 years later cryptography is seldom used to protect our privacy.
The information age spawned databases and networks capable of extracting and storing large amounts of private data. Those databases are often unknown to us and if we know of their existence we can not control them. They store personal information, communication and financial transactions. This gathering of private data happens against our will if we believe surveys that show that we actually do care about privacy. Skeptics and experts caution us but the majority of web users is forced to give in to the subtle but grave disintegration of privacy, pushed forward by industry and government. They are growing their databases steadily, expanding the records they keep on all of us.
We can see the consequences of these uncontrollable, central databases today. In what is believed to be one of the largest data security breaches in history, attackers stole personally identifiable information of 77 million PlayStation Network users earlier this year.
Accidental exposure of personal data is another problem. It is very difficult to control who has access to which piece of information. People get fired for how they behave online because they confuse personal with public communication. The web does not forget. And ever since the uprisings in the Arab world it should be clear to everybody that what one posts online can have severe consequences, including imprisonment and torture.
There are a variety of interesting judicial and ethical approaches to cope with these issues. And there is cryptography – a technological means of preserving privacy. Cryptography enables anonymity, the concept of ‘publishing information while one’s identity is publicly unknown’, as well as privacy, the ability to ‘seclude oneself or information about oneself and reveal oneself selectively’.
But almost nobody uses cryptography. Asked if he encrypts his e-mail, Bruce Schneier, cryptographer and highly regarded computer security specialist answers “I do not, except for special circumstances”. He further argues that for more people to encrypt their communication, services like Gmail would have to do it by default. This will of course never happen, since those services draw their revenue from reading our messages.
It has to work out of the box
But the more important point Schneier makes is this: what has to happen to spread the use of cryptology? It has to work out of the box. No additional application should be required, no plug-in, no add-on and certainly no driver installation. There exists a concept that could potentially offer a transparent solution for everyone: browser based cryptography.
The idea of browser based cryptography is simple: before users upload their personal data to application hosts they encrypt the data in the browser. The host only receives encrypted blobs of data and since users don’t share their key with the host the data is secure. If they decide to share their data with someone else they can provide them with means of decrypting the blobs. Users are in control at all times.
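Today the building blocks for this exist in the form of the browser's standard Web Crypto API (which postdates this essay). A minimal, illustrative TypeScript sketch of encrypting data client-side before upload, with all function names invented for the example, might look like this:

    // Generate a per-user AES-GCM key. extractable=false keeps the raw key
    // material inaccessible to page scripts; it never leaves the browser.
    async function makeKey(): Promise<CryptoKey> {
      return crypto.subtle.generateKey(
        { name: "AES-GCM", length: 256 },
        false,
        ["encrypt", "decrypt"]
      );
    }

    // Encrypt a message in the browser; the host only ever receives
    // the random nonce plus the resulting ciphertext blob.
    async function encryptForUpload(plaintext: string, key: CryptoKey) {
      const iv = crypto.getRandomValues(new Uint8Array(12)); // fresh 96-bit nonce per message
      const encoded = new TextEncoder().encode(plaintext);
      const ciphertext = await crypto.subtle.encrypt({ name: "AES-GCM", iv }, key, encoded);
      return { iv, ciphertext: new Uint8Array(ciphertext) };
    }

The essential property is visible in the types: the application host stores only { iv, ciphertext }, while decryption requires a CryptoKey that stays under the user's control.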
There are a number of alternatives and especially the concept of storing encrypted data with a curious or even untrusted host is not new. Traditionally, host applications have been used to handle cryptographic operations. These tools must be installed and have to be properly set up by the user. Mobile platforms might be an ideal environment for these alternatives. Installing applications is hassle-free and very common on mobile devices. Due to the well defined platform, developers can keep user effort to configure these applications to a minimum.
A colleague and I devised the idea of a cryptography enabled http proxy that is similar to the Cipherbox. The proxy is a trusted instance possibly hosted locally or connected via a VPN. All http traffic is sent through the proxy. It analyzes the traffic and encrypts and decrypts relevant parts like messages or images depending on its configuration. We implemented a prototype that is capable of transparently encrypting and decrypting Facebook messages using gpg. A proxy like this could run on a user's FreedomBox and can in theory be extended to provide crypto-functionality for various platforms including for example Gmail.
A special form of cryptography called homomorphic encryption could enable users to take advantage of both cryptography and computing as a service at the same time. If encrypted data is sent to hosts, they usually can not process the data. If instead a homomorphic encryption scheme is in place, for certain algebraic functions on the plaintext, equivalent functions exist that can be applied to the ciphertext. Proponents of this technology argue that it could enable widespread use of cloud computing by ensuring the confidentiality of private data.
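As a concrete illustration (not from the original text): the Paillier cryptosystem is additively homomorphic. With encryption $E(m)=g^{m}r^{n}\bmod n^{2}$, multiplying two ciphertexts adds the underlying plaintexts:

$$E(m_1)\cdot E(m_2)=g^{m_1}r_1^{n}\cdot g^{m_2}r_2^{n}=g^{m_1+m_2}(r_1r_2)^{n}=E(m_1+m_2)\pmod{n^2}$$

so an untrusted host could, for example, total encrypted values without ever decrypting them. Fully homomorphic schemes generalize this to both addition and multiplication, which is what would allow arbitrary computation on encrypted data.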
Controlling ones personal data is more difficult with every new database and network based innovation. At the same time privacy is more important than ever in a world that prepares to conglomerate health records, gathers and centralizes consumer behavior data and merges individual financial records into powerful profiles. Cryptography is an effective safeguard we must implement to prevent exploitation and discrimination based on our personal information. Every user must be enabled to use cryptology to control the data he wishes to share.
Browser vendors must implement the building blocks required for cryptography, including a secure key store that can be managed by the user. They should also include means of validating a running application against a checksum. Cryptographers and web developers must work together to implement correct and easy-to-use encryption and decryption functionality for browser-based applications.
More people must start thinking about this problem; more ideas are needed, and they should be carefully vetted by cryptographers and security experts. User interface specialists should work on making cryptography a transparent process. We need to get everyone involved and try to revert the damage that has already been done.
Monosomy 14q11 (medical condition): a rare chromosomal disorder involving deletion of the 14q11 region of chromosome 14.
Monosomy 14q11 is listed as a "rare disease" by the Office of Rare Diseases (ORD) of the National Institutes of Health (NIH). This means that Monosomy 14q11, or a subtype of Monosomy 14q11, affects less than 200,000 people in the US population.
Source - National Institutes of Health (NIH)
Orphanet, a consortium of European partners, currently defines a condition as rare when it affects 1 person per 2,000. They list Monosomy 14q11 as a "rare disease".
Source - Orphanet
Monosomy 14q11: another name for Chromosome 14q, partial deletion, or a closely associated medical condition.
This element is also available in our updated HTML 4 reference. Some characteristics may have changed.
Contents: TT, I, B, U, STRIKE, BIG, SMALL, SUB, SUP, EM, STRONG, DFN, CODE, SAMP, KBD, VAR, CITE, A, APPLET, IMG, FONT, BASEFONT, BR, MAP, INPUT, SELECT, TEXTAREA and plain text.
May occur in: BODY, DIV, CENTER, BLOCKQUOTE, FORM, TH, TD.
The level 1 heading is the most important header in the document. It should be rendered more prominently than any other header. It is usually used to indicate the title of the document. Often it has the same contents as the TITLE, although this is not required and not always a good idea. The title should be useful out of context (for example, in a bookmarks file) but the level 1 heading is only used inside the document.
The optional ALIGN attribute controls the horizontal alignment of the header. It can be LEFT, CENTER or RIGHT.
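For example, a centered top-level heading can be written like this (the heading text is just an illustration):

```html
<H1 ALIGN=CENTER>A Study of Population Dynamics</H1>
```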
On Sundays I sometimes pass the Church of the Ascension on lower Fifth Avenue in Manhattan, and I generally pause to admire its Gothic Revival brownstone exterior fronted by a small courtyard with boxwood bushes. But it was not until late one Sunday afternoon in May that I went inside—drawn by a sign announcing an Evensong service that was about to begin. The casually dressed young man at the door who told me to sit up front in the oak choir stalls was, it turned out, a Protestant seminarian, who would lead the service. Besides himself, there were only a handful present.
During the brief service of psalms and hymns, what kept attracting my attention was the large mural over the altar. Not surprisingly, given the church’s name, it showed Jesus’ ascension into heaven, with the disciples grouped on the ground below. The mural is among the best known works of the painter John LaFarge, whose studio had been around the corner at 51 West 10th Street. But very much on my mind was the fact that John LaFarge the artist was the father of his namesake, John LaFarge the Jesuit (1880-1963), whom I have long esteemed because of his work among African Americans in the first half of the 20th century.
Born into a distinguished Newport family—Henry Adams, Henry James and Edith Wharton were family friends—the younger John LaFarge decided at an early age to become a priest. His father was a lukewarm Catholic, but his convert mother was devout and encouraged him in his aspirations. These assumed concrete form after Harvard, when he set out to study for the priesthood for the Diocese of Providence. But as he tells us in his autobiography, The Manner Is Ordinary (1954), she warned him: “Don’t let them make you a Jesuit.” His reply: “Mother, nothing can ever make me a Jesuit”—a stunning example of the old maxim “Man proposes, God disposes.” He entered the Society of Jesus in 1905.
After teaching in Jesuit schools, he was sent to work in Jesuit parishes in southern Maryland. There—as a man who had known nothing of poverty or its devastating effects on African Americans in particular—he encountered the South’s blatant racial discrimination at first hand and soon came in conflict with what he describes as “an age-old tradition by which Negroes were not to be considered as persons in their own right but only as persons subject to another’s right, as servants.”
For years he struggled against this local mind-set, and while doing so, created educational and other opportunities for African Americans, especially children, whose schools, he says, were “a mere farce.” He was able to persuade a group of sisters to come as competent teachers. Hands-on work of this kind, together with the necessary fundraising, “forced me,” he tells us, “to come out of my shell.” In the process, he grew to love the people he was serving, so that on receiving word in 1926 that he was to be transferred back north, he described his leave-taking as “heavyhearted.”
And where did Father LaFarge go? Right here to America, where he remained for decades as associate editor and, for one year, as editor in chief. But his work at the magazine went far beyond editorial responsibilities. Both by writing and speaking—and through his work with the Catholic Interracial Councils he helped to create—he took needed steps toward raising the awareness of Americans about the problems of race and poverty. He knew of the work of Dorothy Day in the 1930’s, and she speaks of him in Loaves and Fishes as one of the speakers at the Catholic Worker on the Lower East Side. He even devoted one of his regular columns in America, “With Scrip and Staff,” to the movement’s co-founder, Peter Maurin, whom he likens to a prophet of Israel.
Not least remarkable, he spanned several eras: raised in the Victorian age, he lived on to the beginnings of the civil rights movement, for which, in a sense, he helped to pave the way. The title of his autobiography notwithstanding, John LaFarge’s manner was anything but ordinary.
(Nanowerk News) Tiny components with the ability to emit single particles of light are important for various technological innovations. Physicists of the Universities of Würzburg, Stuttgart and Ulm have made significant progress in the fabrication of such structures.
Why are researchers interested in light sources that are able to emit single particles of light? "Such light sources are a basic requirement for the development of new encryption technologies," explains Professor Jens Pflaum at the Institute of Physics of the University of Würzburg.
Suitably equipped components would be able to ensure that data can no longer be "fished for" during transmission without the tampering being noticed. These components might be used, for instance, to increase the security of online payment systems, since any data manipulation would be immediately detected and the relevant countermeasures could be directly implemented. This cannot be achieved with conventional light sources, such as lasers, because these always emit large quantities of identical light particles, or photons as they are referred to by physicists.
[Figure caption: The innovative component with which single photons can be produced at room temperature, shown schematically and in action. Electric current passes through the circular contacts, stimulating the underlying color molecules to light up. The optically active area of the component is about two millimeters in diameter. (Photo: Benedikt Stender)]
The innovative light source has more than just one advantage: it consists of standard materials for organic light-emitting diodes, is fairly easy to manufacture, and can be electrically controlled. Most important of all, it works at room temperature. So far, comparable optical components manufactured from semiconductor materials, such as gallium arsenide, have been functional only at temperatures far below the freezing point.
Single color molecules in a matrix
What's the design of the new component? "It's quite similar to the pixel of a display, familiar to everybody with a mobile phone," explains Professor Pflaum: An electrically conductive layer is applied to a substrate – in our case represented by a glass plate. Next, an organic plastic matrix, in which the individual color molecules are embedded, is added onto this layer. The matrix is then fitted with electrical contacts. If these are connected to a battery, a flow of electrical current to the color molecules is induced, stimulating them to continually fire single photons. This has been demonstrated by the physicists with photon correlation measurements.
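For the curious: a photon correlation (Hanbury Brown-Twiss) measurement amounts to histogramming the time differences between clicks on two detectors behind a beam splitter; a pronounced dip at zero delay (conventionally g2(0) < 0.5) is the fingerprint of a single-photon emitter, since one photon cannot trigger both detectors at once. A rough sketch of that analysis, with illustrative function and parameter names:

```python
import numpy as np

def correlation_histogram(t_a, t_b, bin_width=1e-9, max_tau=50e-9):
    """Histogram arrival-time differences between two detectors.

    t_a, t_b: sorted photon timestamps in seconds from the two detectors.
    A single-photon source shows a dip in the histogram around tau = 0.
    """
    diffs = []
    j = 0
    for t in t_a:
        # skip detector-B clicks that are too early to fall in the window
        while j < len(t_b) and t_b[j] < t - max_tau:
            j += 1
        k = j
        while k < len(t_b) and t_b[k] <= t + max_tau:
            diffs.append(t_b[k] - t)
            k += 1
    bins = np.arange(-max_tau, max_tau + bin_width, bin_width)
    counts, edges = np.histogram(diffs, bins=bins)
    return counts, edges
```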
Three crucial tricks used
Three tricks were crucial for the achievement. Number one: "We selected the right color molecules," says Maximilian Nothaft of the University of Stuttgart. The molecules have chemical structures in which three organic complexes are grouped around one central iridium atom.
Trick number two: The physicists provided for a proper distribution of the color molecules within the matrix. Too densely packed molecules would have interacted, no longer emitting single independent photons.
Trick number three: "The interface between the electrical contacts and the matrix has been well designed by us," explains Professor Jörg Wrachtrup of the University of Stuttgart. This is important for enabling the required electrons, the carriers of the electric charge, to be injected into the polymer matrix in the first place. In this case, the scientists were successful with a contact comprised of an aluminum / barium double layer.
Glimpse into the future
What are the physicists going to do next? "We shall try to deposit the matrix with the color molecules and the electrical contacts onto various materials so that we can use flexible substrates, such as plastic films," says Professor Pflaum. This can be done with a device that works like an ink jet printer, which is a standard technology that has been used in laboratories for years. The advantage of this is: The light sources can be even better positioned on a surface.
Studies funded by DFG
This success has been achieved under the umbrella of Research Group 730 ("Positioning of single nanostructures – Single quantum devices"), which is funded by the German Research Foundation (DFG). The spokesperson of the group is Professor Peter Michler of the University of Stuttgart.
Source: Julius-Maximilians-Universität Würzburg
part 2 in the Law of Value series.
There are a lot of people who are really powerful in the world: presidents, CEOs, bankers, leaders of movements… But there is an object, a thing, that is more powerful than any of them. This object is money.
Money is really powerful. It makes people, societies, and countries do all sorts of things. The pursuit of money, as an end in itself, occupies many people’s lives and is the driving force of economic growth. And all over society money acts as a symbol of status, prestige and social power.
The funny thing about money is that it is just an object. Nowadays it’s not even a valuable object like gold. It’s just pieces of paper, or digits on a computer screen. It has all of this power and influence, yet it needs no will, weapons, or words.
This phenomenon where objects have social power, in which things act as if they have a will of their own, is what Marx sought to unravel with his notion of “the fetishism of commodities.” When Marx talked about fetishism he wasn’t talking about whips and chains and leather outfits. He was talking about the way the relations between producers in a capitalist society take the form of relations between things.
The word “fetishism” originally was used to describe the practices of religions that attributed magical powers to objects like idols, or charms. If the Israelites of the Old Testament won a battle with the Philistines they attributed it to the powers of the ark of the covenant that they carried around. If they lost it was because they had pissed off the ark. Of course in reality it was their own actions that caused them to win or lose. Attributing their own powers to an object is fetishism. For Marx, money and commodities are much like this. We think that they have mystical powers, yet their powers really come from us, from our own creative labor.
Let’s take a look inside a workplace. It could be any workplace- a capitalist factory, a peasant commune, a family farm, whatever. Here the relations between different workers are direct. I make a widget and I hand it to the next person. If something needs to change about the labor process a manager brings the workers together and says, “Now we will organize things differently.” Whether it is a democratic or hierarchical form of organization it is an organization that happens directly between people.
Now let’s look outside the workplace at the market. In the market things are different. The organization of work, the division of labor, doesn’t happen through direct social relations between people. In the market the products of labor confront each other as commodities with values. These interactions between things act back upon production. They are what send signals to producers to change their labor, to produce more, produce less, go out of business, expand business, etc.
Coal miners, bakers, carpenters and chefs don’t directly relate to each other as workers. Instead the products of their labor, coal, bread, cabinets and pasta, meet in the market and are exchanged with one another. The material relations between people become social relations between things. When we look at coal, bread, cabinets and pasta we don’t see the work that created them. We just see commodities standing in relation of value to each other. A pile of coal’s value is worth so many loaves of bread. A cabinet’s value is worth so much pasta. The value, the social power of the object, appears to be a property of the object itself, not a result of the relation between workers.
[Money is the god of commodities. Through money all other commodities express their value. The amount of social labor that goes into a pencil becomes 20 cents. The portion of the social labor that goes into making a grand piano becomes 20 grand. As the god of commodities money becomes the ultimate expression of social power. It can be anything, buy anything, do anything. Yet money is just a scrap of paper, a pile of shiny rocks, a digit in a computer… It only has this power because it is an expression of social relations.]
We are atomized individuals wandering through a world of objects that we consume. When we buy a commodity we are just having an experience between ourselves and the commodity. We are blind to the social relations behind these interactions. Even if we consciously know that there is a network of social relations being coordinated through this world of commodities, we have no way of experiencing these relations directly because… they are not direct relations. We can only have an isolated intellectual knowledge of these social relations, not a direct relation. Every economic relation is mediated by an object called a commodity.
This process whereby the social relations between people take the form of relations between things Marx calls “reification”. Reification helps explain why it is that in a capitalist society things appear to take on the characteristics of people. Inanimate objects spring to life endowed with a “value” that seems to come from the object itself. We say a book is worth 20 dollars, a sweater worth 25 dollars. But this value doesn’t come from the sweater itself. You can’t cut open the sweater and find $25 inside. This $25 is an expression of the relation between this sweater and all of the other commodities in the market. And these commodities are just the material forms of a social labor process coordinated through market exchange. It is because people organize their labor through the market that value exists.
The illusion that value comes from the commodity itself and not from the social relations behind it is a “fetish”. A capitalist society is full of such illusions. Money appears to have god-like qualities, yet this is only so because it is an object which is used to express the value of all other commodities. Profit appears to spring out of exchange itself, yet Marx worked hard to explain how profit actually originates in production through the unequal relations between capital and labor in the workplace. Rent appears to grow out of the soil, yet Marx was adamant that rent actually comes from the appropriation of value created by labor. We see these fetishistic ideas in modern day mainstream economic theory in the idea that value comes from the subjective experience between a consumer and a commodity, and that capital creates value by itself.
Yet the theory of commodity fetishism isn’t just a theory of illusion. It’s not that the entire world is an illusion, reality existing somewhere far below the surface, always out of sight. The illusion is real. Commodities really do have value. Money really does have social power. Individual people really are powerless and material structures really do have social power. There is not a real world of production existing below the surface in which the relations between producers are direct. Relations between producers are only indirect, only coordinated through the mystifying world of commodities.
The theory of commodity fetishism is central to Marx’s theory of value and it’s one of the things that sharply distinguishes him from his predecessors. Adam Smith and David Ricardo both held that prices were explained by labor time. But Marx’s value theory is much more than a theory of price. It is a theory of the way the social relations between people take on material forms that then act back upon and shape these social relations. Labor takes the form of value embodied in commodities. Money price becomes the universal expression of this value. The pursuit of money as an end itself dominates society. Means of production become capital. Money, commodities and capital, as representatives of social value, become independent forces in their own right out of the control of society. The law of value is the law of these forces. Attempts to exert some control over these forces through monopoly or the state always become enmeshed in the social antagonisms of value.
Capital, Vol. 1, by Karl Marx: the theory of commodity fetishism is laid out at the end of chapter one.
Essays on Marx’s Theory of Value by Isaac Rubin. This is a great book about many aspects of Marx’s value theory. In many ways this video series is intended to be a modern take on his book. The opening chapters are about commodity fetishism.
Also see the beginning of this article from Endnotes about value-form theory.
Gadagkar, Raghavendra (1991) More gene wars. In: Current Science, 61 (12). p. 795.
In the otherwise fascinating article 'Gene wars' by Uma Shaanker and Ganeshaiah, there is an incorrect statement. The authors state, "... the interest of the offspring is not similar to that of the mother as long as they are sired by more than one father; selection acts on each offspring favouring increase in the offspring's own fitness by demanding more than the mother is selected to give" [italics mine]. In sexually reproducing diploid organisms, no two siblings (even full siblings), other than identical twins, are identical in 100 percent of their genes. The average coefficient of genetic relatedness between full siblings (from the same father and the same mother) under outbreeding is 0.5. The interests of the mother (who is related equally to all her offspring) will therefore not be similar to those of her offspring, because each offspring is related to itself by 1.0 and by no more than 0.5 even to its full siblings. Thus even when the offspring are sired by the same father, selection should act on them to demand more from their mother than she is selected to give. This should of course make gene wars even more common.
|Item Type:||Journal Article|
|Additional Information:||Copyright of this article belongs to Indian Academy of Sciences.|
|Department/Centre:||Division of Biological Sciences > Centre for Ecological Sciences|
|Date Deposited:||11 Dec 2006|
|Last Modified:||19 Sep 2010 04:32|
Russia has the edge in the space race now
The Olympics aren’t until summer 2012, but the United States is now losing an important race. The U.S. has fallen behind Russia in the all-important space race.
As you may already know, the U.S. is winding down its shuttle program, and that means that Russia will soon have complete control of access to the International Space Station.
According to a report by AFP, the Russian space agency isn’t celebrating; they’re allegedly playing down any sense of triumph. But the fact of the matter is that U.S. astronauts will have to depend on Russia to gain access to the ISS.
They will also have to pay Russia to get seats in their Soyuz space capsules.
"We cannot say that we have won the space race, but simply that we have reached the end of a certain stage," the deputy head of the Russian space agency, Vitaly Davydov, said in an interview.
On July 8, four US astronauts will board the Atlantis shuttle for its last flight. They will wrap up a three-decade-long program in which U.S. shuttles took turns with Russia's Proton and Soyuz rockets bringing supplies and crews to the ISS.
The U.S. will have to pay Russia $51 million per seat in its Soyuz capsules, which is, needless to say, very pricey. NASA thinks that a new crew vehicle could be built by private companies sometime between 2015 and 2020.
Maybe the Russians aren’t celebrating in public, but you know damn well that the significance isn’t lost on them. At one time the space race defined America’s greatness, and now another industrialized nation can simply chalk it up as one of the many advantages they have over the U.S. in the post 9/11 era.
You can add China to that rapidly growing list as well; they sent a crew to space in 2003.
It seems like there is an article like this about the U.S. every week, but my, how the mighty have fallen. It’s pretty sad to think that as the U.S. finishes celebrating its independence, reality is setting in and we have to face the fact that our "space program" is now dependent on Russia.
I think that the space race is extremely important for all of mankind, but when the wealthiest country in the world is cutting most of its budget for their space program I’d say that’s a very telling event. Economic problems and budget problems are causing our status in the world to fall, and they need to be taken seriously.
If the U.S. ever wants to be a world leader in the space race, which I hope we do, then it will need to come from a different system. If we are ever going to get serious about a space program again, then this time it needs to be directed by the private sector.
The huge budget that NASA used to have is unjustifiable with the economic problems we are facing, but that doesn’t mean we have to give up on getting to colonize the stars. We can still have the best space program in the world; we just shouldn’t expect the taxpayers to fund it. We also shouldn’t let politicians be the ones who control it anymore.
They’re the reason why the program lost its edge in the first place. A private sector directed space program needs to happen in some way, shape, or form. It’s too important to humanity’s future to not pursue.
Our private sector space program could easily beat any other country’s government controlled space program. The only problem is getting it started. Does anyone want to put in a call to Richard Branson or some other eccentric billionaire?
ERUPTION ON ISABELA - CERRO AZUL VOLCANO
10am September 16, 1998
An eruption started on the flanks of Cerro Azul on September 15 in the late evening. The reports are still sketchy, but I have spoken directly with the National Park on Isabela. As of 10am September 16, no one has actually gotten up to the flows. The reports were of a bright red incandescence seen from Puerto Villamil on Isabela. The inhabitants of Floreana can also see the red glow at night. There seem to be two bright spots. The elevation of the eruption was estimated to be at the 1200-meter level on the south side of Cerro Azul.
The Park will be sending representatives to the flows especially to check on the fire status. There is a population of Galapagos tortoises in the area. The area is a bit inaccessible but I hope to go up personally if the eruption lasts.
More news as it develops....
UPDATE - Seismic activity was recorded from Hawaii at 13:45 Galapagos time on September 15, making this the time of the eruption. Often, though, eruptions aren't noticed visually until it gets dark. Real-time observations from the geosynchronous GOES-8 satellite nevertheless show the heat from the eruption. Apparently the eruption is occurring on the southwest flank, fairly near where one would take the trail up to the summit.
Photos courtesy of the Galapagos National Park. The last image of the group is processed to show the most likely area of lava and fires, corresponding to the eruption on Cerro Azul. These images are updated about every 15 minutes - watch how it develops!
Carbon. Oxygen. Iron. Many of us may think of the various chemical elements as, well, elemental—created by nature, stable and unchanging. But scientists have now synthesized nearly two dozen novel elements, most of which are highly radioactive and break apart in a fraction of a blink of an eye. In the following quiz, test your knowledge of some chemistry basics and learn more about scientists' quest to produce a more stable superheavy element in a realm they call the "Island of Stability."—Susan K. Lewis
The periodic table is a chart of all known chemical elements, both natural and synthesized. Why are the elements arranged in a curious pattern of unequal rows and columns?
- to reflect chemical properties
- to indicate when they were discovered
- to leave room for notes
Hydrogen is the first element on the table, helium the second, lithium the third, and so on. What determines an element's numerical order?
- its atomic weight
- the number of protons it has
- when it was discovered
Most elements were created inside of stars, but scientists have now made more than 20 elements in laboratories. What was the first element synthesized in a lab?
- technetium (element number 43) in 1937
- neptunium (element number 93) in 1940
- nobelium (element number 102) in 1958
Who officially names a newly discovered element?
- the researcher(s) who discovered it
- the institution where it was discovered
- an international group of chemists
What everyday object makes use of a synthesized heavy element?
- a smoke detector
- a microwave oven
- a fluorescent light
In 1952, elements 99 and 100 were discovered. Where were they found?
- in a particle accelerator in Stockholm
- in pitchblende, a uranium-rich ore
- in debris from a hydrogen bomb test
The nuclei of elements with a so-called "magic number" of protons tend to be more stable. What is the heaviest element found in nature with such a magic number of protons?
In the 1960s, physicists predicted that if element 114 could be made, it would be more stable than other superheavy elements. In 1998, when scientists finally created a single atom of element 114, it survived for how long?
- 30 seconds
- 30 hours
- 30 days
In 1992, Independent presidential candidate Ross Perot made opposition to the North American Free Trade Agreement (NAFTA) the cornerstone of his national campaign, warning voters that because of huge wage differentials between the U.S., Canada and Mexico, “There will be a giant sucking sound going south.”
Even now, 20 years after NAFTA was enacted in 1994, the trade agreement’s legacy remains enshrouded in controversy, not only in the United States, but in Canada and Mexico as well.
How much of Perot’s dire forecast came true? What kinds of benefits, if any, has NAFTA brought to the economies of the U.S. and Mexico? Will it ever be possible to know, for sure, what the world would have been like if NAFTA had never been enacted?
During the heated debate that preceded its enactment, prominent economists and U.S. government officials predicted that NAFTA – a trade agreement aimed at liberalizing trade between member countries – would lead to growing trade surpluses with Mexico and that hundreds of thousands of jobs would be created. "But the evidence shows that the predicted surpluses in the wake of NAFTA's enactment in 1994 did not materialize," notes Robert Scott, chief economist at the Economic Policy Institute, a left-leaning think tank in Washington, D.C.
What kind of evidence? “Jobs making cars, electronics, apparel and other goods moved to Mexico, and job losses piled up in the United States, especially in the Midwest where those products used to be made,” says Scott. “By 2010, trade deficits with Mexico had eliminated 682,900 U.S. jobs, mostly (60.8 %) in manufacturing.”
“The U.S. economy has grown in the past 20 years despite NAFTA, not because of it.” –Robert Scott
Claims by the U.S. Chamber of Commerce that NAFTA trade has created millions of jobs “are based on disingenuous accounting, which counts only jobs gained by exports but ignores jobs lost due to growing imports,” he adds. “The U.S. economy has grown in the past 20 years despite NAFTA, not because of it. Worse yet, production workers’ wages have suffered in the United States. Likewise, workers in Mexico have not seen wage growth. Job losses and wage stagnation are NAFTA’s real legacy.”
A Closer Look at Job Loss
How much of these job losses can be attributed to the impact of NAFTA? Wharton management professor Mauro Guillen has a very different view, suggesting that without NAFTA, many jobs that were lost over this period would probably have gone to China or elsewhere. “Perhaps NAFTA accelerated the process, but it did not make a huge difference. At the same time, a lot of jobs were created in the U.S. that wouldn’t be there without the Mexico trade. I’m not just talking about Texas or California or Arizona…. Many of the products made in Mexico are designed in the United States. So there are a lot of jobs created here.”
Walter Kemmsies, chief economist at Moffatt Nichol, an international infrastructure consultancy, notes that close to 40% of what the U.S. imports from Mexico is derived from U.S. sources. “This is the symbol of the success of NAFTA.” Twenty years ago, he estimates, that percentage was less than 5%.
Overall, has NAFTA been a good thing? Morris Cohen, Wharton professor of operations and information management, states that for many years, “economists have been arguing about whether global trade is a net benefit or net cost; who are the winners and who are the losers. There has been lots of ink spilled on that issue. The consensus from my perspective is that trade is generally a good thing; it helps to elevate the standard of living and it raises the level of economic activity on both sides. But there’s a net transfer sometimes, and definitely the notion of winners and losers. We don’t have the luxury of being able to have done the experiment [to find out] what would have happened had there been no NAFTA.” Or, he adds, to figure out to what extent the conditions that exist today are a result of NAFTA, or not the result.
Two decades ago, says Guillen, "People knew that trade within NAFTA would increase, so the U.S., Canada and Mexico would trade more with each other. We [also] knew that low-wage manufacturing was going to move to Mexico from Canada and the U.S. And of course, part of this also moved to China and other locations, but Mexico has the advantage of proximity to the U.S."
He acknowledges that Mexico has a surplus with the U.S. in trade – “and NAFTA accelerated that. But the U.S. runs a trade deficit with 90% of the countries in the world. So Mexico is not unique. In fact, the U.S. also runs a deficit with Canada, and that’s mostly because of oil and gas.”
If NAFTA had not been signed, Guillen adds, “the jobs would probably have gone to China or somewhere else; most jobs have relocated to China. The U.S. had a trade deficit with Mexico of $54 billion [in 2013], but with China, it was [a deficit of] $318 billion, so the [U.S.] deficit is five times bigger with China than with Mexico. In other words, you would calculate, maybe for every job we have lost in the U.S. to Mexico, five [jobs] were lost to China.”
While conceding that many U.S. high-wage manufacturing jobs were relocated to Mexico,China and other foreign locations as a result of NAFTA, Cohen argues that NAFTA has, on balance, been a good thing for the U.S. economy and U.S. corporations. “The sucking sound that Ross Perot predicted did not occur; many jobs were created in Canada and Mexico, and [the resulting] economic activity created a somewhat seamless supply chain — a North American supply chain that allowed North American auto companies to be more profitable and more competitive.”
Major Impact on the Auto Sector
Before NAFTA went into effect in 1994, the automotive sector throughout North America was insular and regional, and most vehicles were developed for the markets in which they were sold, notes Michael Robinet, managing director of IHS Automotive, a consultancy based in Michigan. “The rest of the world didn’t want our vehicles” because they lacked the size and mileage demanded by its consumers. South of the U.S. border, Mexican administrations pursued a policy known as “import substitution,” which is antithetical to free trade. Protected by high import duties, import licenses and quotas, Mexican plants were notorious for producing shoddy goods unpopular even in their domestic market.
As recently as 2008, Japan exported almost twice as many cars to the United States as did Mexico. This year, however, Mexico will export 1.69 million vehicles to the U.S., surpassing the 1.51 million vehicles exported by Japan to the same market. By 2015, Mexico will export 1.9 million vehicles to the U.S., surpassing Canada as the largest exporter to the U.S.
Overall, Mexico's output of vehicles reached 2.93 million units in 2013. By 2020, almost 25% of all North American vehicle production will take place in Mexico, compared with only 10% in Canada and 65% in the United States. For both the U.S. and Canada, those numbers will represent a considerable decline in their share of the North American production pie. Massive recent foreign investments made by Asian and German brands in Mexico include Mazda's facility, with an estimated annual production of 185,000 vehicles; Nissan's, with an annual capacity of 149,000 vehicles; and Audi's, set to open in 2016, with annual capacity for 150,000 vehicles.
Like Mexico, Canada's automotive sector has long been dominated by U.S.-owned firms, even before NAFTA. "The U.S.-owned auto industry in Canada has been producing cars for generations," notes Cohen. "But it used to be, before NAFTA, that what was produced in Canada was for sale in Canada, and it was a much smaller market. Now with NAFTA, these plants are integrated. Some components or sub-assemblies are sent back to the U.S. It is as if there is no border, as if it is one economic zone." The quality of Canadian-made vehicles is now on the same world-class, highly competitive level as those made in the U.S. and Mexico, he notes.
All this activity has had a predictably negative impact on the U.S. share of all North American automotive jobs, which dropped from 64.5% in 2000 to just 53.4% in 2012. By 2012, 39.1% of all automotive jobs in North America were in Mexico, up from 27.1% of such jobs in 2000.
“A lot of jobs were created in the U.S. that wouldn’t be there without the Mexico trade.”— Mauro Guillen
According to Robinet, "NAFTA has driven down our costs," making it possible for an integrated North America — as a single manufacturing platform — to become a major force in global automotive trade. Thanks to NAFTA trade preferences, automotive companies in the U.S., Canada and Mexico "can use an engine from Mexico and a transmission from Canada, and then build the car in the U.S." and still enjoy the NAFTA preferential treatment, so long as 62.5% of the value of that vehicle comes from within those three countries. Nowadays, the "vast majority" of vehicles built in North America have at least 75% (combined) value-added from those three countries, while some have well over 90% of North American value-added.
Meanwhile, Mexico's emergence as an export-focused automotive manufacturing center is having a growing impact on other sectors of Mexico's economy as well, Guillen notes. "We have seen, since the beginning of NAFTA, that productivity has increased in pretty much all of the export-oriented industries [in Mexico], especially manufacturing, where it has more than doubled."
That is to be expected for several reasons, he says. Before NAFTA, “there were automobile plants in Mexico, but they were not really oriented toward the U.S. market. They were mostly for the Mexican market, and they were not very efficient. So in anticipation of NAFTA, and during the NAFTA period, American companies, the Japanese and South Korean firms have invested in world-class factories — with the best equipment — for the export market, which is primarily the U.S.” So part of the increase in productivity is due to better equipment in new plants. Another part is training of the labor force, for the same reason. These were cars made for export, so they needed to be well-done cars. The third reason is that Mexico, in general, even without NAFTA, would have made progress. “People have brought in better machinery,” on which workers have been trained, and educational levels, in general, have improved.
Guillen argues that “you see the same things in electronics, especially appliances, automotive parts, furniture” and other sectors such as aerospace and computers. However, the trend is more visible in the automotive sector because there are less than two dozen vehicle assembly plants in Mexico. “That’s why one decision by Nissan or Volkswagen – for example, to set up a world-class factory — makes a big difference. You can more readily see the changes in the automotive industry, but it is happening in the others too….
“Now think of where Mexico would be today without NAFTA,” Guillen adds. “Today, Mexican migration to the U.S. has come to a halt. There are Central Americans coming to the U.S. – but virtually no Mexicans. That’s because Mexico is doing well. So just imagine, without NAFTA and with Mexico not doing that well, we would have had the additional problem of an unstable Mexico with lots of people wanting to come to the United States.”
Behind NAFTA’s Success
Robinet identified several factors, beyond comparatively low wages, that are responsible for NAFTA’s impact on North American trade in recent years:
- The stable peso/dollar exchange rate: Before NAFTA, the dollar-peso exchange rate fluctuated widely, but the Mexican government “keeps inflation under control.” If you build a car in Japan, for example, you have no control over the yen/dollar relationship, Robinet noted. Facing that uncertainty, “manufacturers have learned that you need to build where you sell” – or close to it, as in the case of factories along the U.S.-Mexico border.
- The growing availability of Mexican suppliers: Nowadays, “everyone wants suppliers within one hour of the assembly plant,” said Robinet, and they are plentiful in key Mexican clusters of activity. Although it may be less costly to buy some components in China, for example, automotive companies are “beating the drum” to source more and more of their components as close as possible to final assembly. They don’t just ask how much a component costs, but “how much will it cost me to ship it here?”
- Mexican public policy: Mexico's governments, whether of the conservative PAN party or the more populist PRI party (currently returned to office), are interested in developing a global auto industry, unlike that of China, which has focused its long-term strategy on capturing a dominant share of its (much larger) domestic market. Thus, Mexico's government has opened the country to multinationals that have increased their scale of production, driving down prices not just for made-in-Mexico car exports, but also for cars sold to Mexico's burgeoning middle class. To further facilitate this integration outside of North America, Mexico has forged tariff-free or reduced-tariff agreements with 44 countries around the world.
Robinet says that Mexico has become "not only the crossroads of automotive trade in the Western Hemisphere," but it is "enhancing and augmenting its transportation infrastructure. When it comes to transportation, Mexico is in a sweet spot; you don't need to go through the [Panama] Canal." Before NAFTA, Japanese exporters had to go through the Canal, which has bottlenecks and is costly.
“Everyone wants suppliers within one hour of the assembly plant,” and they are plentiful in key Mexican clusters of activity. –Michael Robinet
Meanwhile, Kansas City Southern Railway’s cross-border intermodal volume in Mexico will continue to expand not just because of Mexican manufacturing growth, but because of the strong advantage rail has over the trucking industry south of the border, says Patrick Ottensmeyer, chief marketing officer at Kansas City Southern. The railroad has seen high double-digit quarterly intermodal growth inside Mexico, as the U.S. and other manufacturers of automotive, white goods and other products have shifted production to Mexico. He agrees that manufacturing in Mexico for export to U.S. consumer markets has become more attractive as a result of the recent rise in Chinese labor and transportation costs.
Mexico’s Haves versus Have-nots
Has the success of Mexico’s automotive sector accentuated the imbalance between NAFTA’s winners and losers? Some Mexican critics worry about the income inequality between those industrial workers who have benefitted from the country’s globalization and those who have been shut out of those benefits, especially the rural poor.
Guillen says he “completely disagrees with those economists who say this has generated inequality. Whenever there is this kind of growth process, especially when foreign investment comes in, you always get that inequality. Are you better as a country – or worse off? Ask the 30% of Mexicans who got well-paying jobs. Without NAFTA, they wouldn’t have those jobs, because those jobs would be in China or somewhere else.” Guillen contrasts the Mexican situation with that of the U.S., where “we are generating inequality because the lower wages are either stagnating or going down. How do they go down? When a factory worker is earning $35 an hour, gets laid off and has to go to the service sector and only makes $12 an hour.”
"Overall," Guillen states, "NAFTA has been great for Mexico. The only doubts are about whether it has been good for the United States. I believe it has been, but there is more of a mixed balance between losers and winners [in the U.S.]. For Mexico, it is a total success. The problem in Mexico, though, is that the export industry there has not been big enough to employ everybody in a large population…. Inequality has been produced, not because the wages of low-wage workers got lower, but because a significant number of workers are now receiving higher wages.
"It is obviously good, but it would be even better if, instead of only 30% of Mexican workers earning those very high wages for Mexico, you could get 70% of the workers." For that to happen, Mexico will have to overcome its shortage of capital, he adds.
Despite such imperfections, Kemmsies believes that "NAFTA is on the cusp of being a great success," but he also worries that "Mexico will kill the golden goose before it lays an egg" by imposing export taxes on foreign firms doing business there before those firms are fully convinced they should be in Mexico for the long haul. "Mexico has to worry about overplaying its hand" before the global automakers and other foreign investors have sunk their roots more firmly into Mexican soil. Given the fragile state of the global economy – and the uncertainties surrounding Mexico's ambitious reform efforts – many foreign companies "are still scared and risk averse. We are not [yet] past the start-up stage in Mexico."
It was a Friday night in October of 1954. The rain had started coming down late that afternoon, but most people in Toronto weren’t worried. Hurricane Hazel might have killed more than a thousand people as it tore through the Caribbean and the eastern U.S., but it was supposed to have died down by the time it reached Ontario. The last official weather report came out at 9:30 that night: it would rain for a couple of hours with some strong winds, but Hazel was weakening. It sounded like everything was going to be fine.
But it wasn’t. Thanks to a rainy autumn, Toronto’s rivers and creeks were already swollen; they couldn’t handle the extra 150 billion litres that was about to fall from the sky. All over the city, they burst their banks in the dead of night, sending roaring torrents of water flooding through neighbourhoods. Roads were destroyed, bridges washed away, homes flattened. Cars were plucked from the streets and hurled downstream. Firefighters and police officers and volunteers leapt into action — but many of them got stranded too, or were swept away by a rush of water. Word went out over the police radio: the force of the currents was so strong that rescue boats of any size were to be considered useless.
On Raymore Drive in Etobicoke, many families were already asleep when the water hit. The street curved along the banks of the Humber River just across from Weston — much of it on a floodplain. So when a crest of water surged down the river, the houses on Raymore were standing right in the way. It only took a few minutes for the neighbourhood to flood. Soon houses were floating away.
Brian Mitchell, a volunteer firefighter, was there that night. “I think some of them realized their houses were moving,” he remembered years later in a book about the storm, “but a neighbour’s house was on a solid foundation; therefore, they thought, ‘Let’s swim to the safety of the neighbour’s.’ That’s what a lot of them did. Matter of fact as the water still rose they were right up on the rooftops of neighbours’ houses, hanging onto TV aerials. Some stayed in their houses, and we could hear the screams when the houses were swept down the river with people in them.
“All hell broke loose. People were screaming, ‘Save us… Save us.’ We could get spotlights on them. We could see them… but they were just so far out you couldn’t throw ropes. We tried floating ropes to them on logs, anything buoyant. We’d grab a piece of firewood, tie rope to it, and float it upstream, hoping the current would get it over to them and they’d tie it in some way to their house… Sometimes the only possibility was to swim out with a rope. We saw feats of strength we’ve tried to reproduce since, and we can’t… But these things happened. Everybody was working so hard. And you could hear people screaming… screaming.”
“I felt so helpless,” he told the Toronto Star after the storm, “but there was nothing I could do, nothing anybody could do. The water was so deep, up to our chins, and all the firemen were weighed down by clothing and boats and equipment. It was like something out of a Cecil B. DeMille movie. The incredible roar of the water, like the roar of Niagara Falls. It was a gigantic flood with smashed houses and uprooted trees bobbing like corks, everything going down the river so fast. Houses crashing into the sides of other houses, people everywhere screaming. And then you couldn’t even hear the screams anymore.”
“The firefighters did a good job,” he said. “But for every one we got out, there was another we couldn’t get out.”
By the time the sun came up, the hurricane had killed 81 people in Toronto — nearly half of them on Raymore Drive. Neighbourhoods all over the GTA were in ruins, leaving thousands of people homeless. The clean up would be massive: the military moved in with flamethrowers to burn the wreckage; it was months before all the roads and bridges were repaired.
In the wake of the disaster, the city developed a groundbreaking new plan for flood control. They built dams and reservoirs and retaining walls, installed concrete channels, and redirected streams. Thousands upon thousands of acres of land were expropriated in order to turn Toronto’s floodplains into parkland. They didn’t want anyone living there the next time a big storm hit.
So that’s what happened to Raymore Drive. Those houses were never rebuilt: the blocks that were underwater are now home to Raymore Park. There’s an historical plaque there. And the ruins of a bridge that Hurricane Hazel destroyed. They’re the only signs of the horror that swept through the neighbourhood on that terrible night in October, 1954.
Cross-posted from The Toronto Dreams Project Historical Ephemera Blog.
In 1384, John Wycliffe made an important translation of the Bible into English
Latin words continued to be absorbed by such writers as John Wycliffe (also: Wyclif, Wiclif, et al.), an ardent reformer of the Church, who insisted that Holy Writ should be available in the vernacular, and produced his translation of the Bible.
Wycliffe and his associates are credited with more than a thousand Latin words not previously found in English. Since many of them occur in the so-called Wycliffe translation of the Bible and have been retained in subsequent translations, they have passed into common use.
Caxton helped to stabilize the language by standardizing spelling and using East Midland (London) dialect as the literary form which became the standard modern English of Britain.
Wycliffe's translation of the Bible has such words as "generation" and "persecution", which did not appear in the earlier Anglo-Saxon version. Anglo-Saxon compounds like "handbook" and "foreword" were dropped from the language in favor of the foreign "manual" and "preface" (many centuries later, they were reintroduced as neologisms, and objected to by purists unskilled in linguistic history).
Wycliffe is credited with making English a competitor with French and Latin; his sermons were written when London usage was coming together with the East Midlands dialect to form a standard language accessible to everyone, and he included scientific references, such as those referring to chemistry and optics.
Wycliffe was noted for criticizing the wealth and power of the Catholic Church and upheld the Bible as the sole guide for doctrine; his teachings were disseminated by itinerant preachers and are regarded as precursors of the Reformation.
William Tyndale, the man who first printed the New Testament in English
William Tyndale was born into a well-connected family in Gloucestershire, England, around 1494. We don't know much about his early life, but we know that he received an excellent education, studying from a young age under Renaissance humanists at Oxford.
By the time he left Oxford, Tyndale had mastered Greek, Latin, and several other languages (contemporary accounts say he spoke eight). He also had become an ordained priest and a dedicated proponent of church reform; a "protestant", before that word existed. All he needed now was a vocation. He found one, thanks in part to Desiderius Erasmus.
[Image caption: William Tyndale was executed.]
Sources of the Word
Erasmus, one of Europe's leading intellectual lights, had caused a stir in 1516 by publishing a brand-new Latin translation of the New Testament--one that departed significantly from the Vulgate, the "common" Latin translation the Catholic church had used for a millennium. Knowing that many readers saw the Vulgate as the immutable Word of God, Erasmus decided to publish his source text (a New Testament in Greek, compiled from sources older than the Vulgate) in a column right next to his Latin translation.
It was a momentous decision. For the first time, European scholars trained in Greek gained easy access to biblical "originals." Now they could make their own translations straight from the original language of the New Testament. In 1522, Martin Luther did just that, translating from the Greek into German. In England, Tyndale decided to publish an English Bible--one so accessible that "a boy that driveth the plough shall know more of the scripture" than a priest.
One problem: the Catholic church in England had forbidden vernacular English Bibles in 1408, after handwritten copies of a translation by John Wyclif (an earlier Oxford scholar) had circulated beyond the archbishop's control. Some of the manuscripts survived and continued to circulate, but they were officially off-limits. Translating the Bible into English without permission was a serious crime, punishable by death.
The Word of God made into English
Undeterred, Tyndale tried to win approval for his project from the bishop of London. When that didn't work, he found financial backers in London's merchant community and moved to Hamburg, Germany. In 1526, he finally completed the first-ever printed New Testament in English.
It was a small volume, an actual "pocket book," designed to fit into the clothes and life of that ploughboy. That made it fairly easy to smuggle. Soon Bible runners were carrying contraband scriptures into England inside bales of cloth. For the first time, English readers encountered "the powers that be," "the salt of the earth," and the need to "fight the good fight"--all phrases that Tyndale turned. For the first time, they read, in clear, printed English, "Why seek ye the living among the dead? He is not here, but is risen."
Infuriated, the bishop of London confiscated and destroyed as many copies of Tyndale's New Testament as he could. Meanwhile, English authorities called for Tyndale's arrest. He went into hiding, revised his New Testament, and (after learning Hebrew) began translating the Old Testament, too. Before long, copies of a small volume titled The First Book of Moses, called "Genesis," started showing up on English shelves.
Spreading the Word
Tyndale never finished his Old Testament. He was captured in Antwerp in 1535 and charged with heresy. The next year, he was executed by strangulation and burned at the stake. Yet others picked up his work, and Tyndale's version of the Word lived on. In fact, practically every English translation of the Bible that followed took its lead from Tyndale; including the 1611 King James Version. According to one study, 83 percent of that version's New Testament is unaltered Tyndale, even though a team of scholars had years to rework it.
The reason is simple. Tyndale's English translation was clear, concise, and remarkably powerful. Where the Vulgate had Fiat lux, et facta est lux, Wyclif's old version slavishly read "Be made light, and made is light". Tyndale's translation of the same passage is still familiar to nearly every reader of English: "Then God said: 'Let there be light', and there was light." Subsequent English writers may have been more original, but none wrote words that reached more people than these.
Geoffrey Chaucer (1340-1400) helped make English the dominant language of Britain
He is credited with combining the vocabularies of Anglo-Saxon, Scandinavian, French, and Latin into an instrument of precise and poetic expression.
William Caxton, in 1476, was the first to use Gutenberg's invention in England

Caxton helped to stabilize the language by standardizing spelling and using East Midland (London) dialect as the literary form which became the standard modern English of Britain.
“Mehr als das Gold hat das Blei in der Welt verändert.
Und mehr als das Blei in der Flinte das im Setzkasten.”
More than gold, it's lead that changed the world,
and more than the lead in a gun, it was the lead in the typesetter’s (printer's) case.
Gun Violence Prevention Strategies: Action Research
Researchers Make the Difference
Action research occurs when practitioners work alongside researchers to design, implement, evaluate, and revise intervention
programs. In the context of reducing gun violence, action research refers to law enforcement-researcher partnerships formed
to address a specific, local gun violence problem. These police-researcher partnerships usually involve other partners (see the figure, "The criminal justice action research model").

The action research model represents a process that had its first trial by fire with Operation Ceasefire; some version of this approach has been used by intervention programs, with varying degrees of success, ever since. The model relies heavily on collaboration, feedback, innovation, and compromise. It is illustrative, and no one model will apply for every situation.
The process generally starts out with a working group or task force made up of law enforcement, community partners and university
researchers. The working group meets to determine the crime problem, study the available data about it (crime reports, emergency
room gunshot victims, incident reports, known gang members, etc.), and devise a solution based on what the local practitioners
and community service groups know about the problem and what researchers know about best practices. As the researchers analyze
the findings, these are relayed back to the group. If findings reveal that an intervention is not reducing gun crime, the
researchers may suggest adjustments to the intervention. Sometimes, practitioners encounter an unanticipated reality in the
field that requires working with the researchers to redesign the intervention.
This process often requires innovation and can be difficult, as described by the researchers involved in designing Boston's Operation Ceasefire:
[W]orking with YVSF, probation, and Streetworker members of the Working Group, plus several other police and Streetworker
participants whom they recommended, the authors mapped gangs and gang turf and estimated gang size. This process identified
some 61 different crews with some 1,300 members (the map of gang turf coincided almost perfectly with the homicide map). … Another step in the mapping process produced a network map of gang "beefs" and alliances: who was feuding with whom and
who allied with whom. … Finally, working with the same group of practitioners, the authors systematically examined each of
the 155 homicides and asked the group members if they knew what had happened in each instance and whether there had been a
meaningful gang connection. This answer, too, was striking: Using conservative definitions and methods, at least 60 percent of the homicides were gang related. Most of these incidents were not in any proximate way about drug trafficking or other "business" interests; most were part
of relatively longstanding feuds between gangs.
None of these dimensions — the number of crews, their size, their relationships, or the connection of gangs and gang rivalries
to homicide — could have been examined from formal records; the relevant information simply was not captured either within
BPD or elsewhere. But the frontline practitioners in the Working Group had this knowledge, and obtaining it by qualitative methods was a straightforward
if laborious and time-consuming task.
Despite months of this kind of detailed fact-finding and analysis, the Boston working group still had not come up with a strategy that worked. Then an unexpected breakthrough made the difference: the researchers' persistence in gathering information from caseworkers and officers uncovered a strategy that had been successful but not widely used or recognized. When this strategy was crafted and implemented on a larger scale, gun-related homicides in Boston fell nearly 70 percent in one year.
Researchers also can help law enforcement partners develop and stick to an evidence-based approach, which calls for (1) testing and validating police activities to develop policy and program guidelines based on best practices,
and (2) careful monitoring of outcomes to ensure the program is working.
Police-researcher partnerships are vital for action research. Action research works best when police set aside their reservations or "turf issues" and researchers are flexible and sensitive
to law enforcement concerns and priorities.
From Reducing Gun Violence: The Boston Gun Project's Operation Ceasefire, by D. Kennedy, A. Braga, A. Piehl and E. Waring, September 2001, NCJ 188741: 22-23. YVSF was the Youth Violence Strike Force, a special police unit focused on gangs. Streetworkers were city social service workers who dealt with the city's most at-risk youth, trying to connect them with services and mediate disputes. This report describes in detail how the researchers worked with police and other partners to design and implement Ceasefire, continually adjusting it to fit the facts on the ground.
Golden Eagle
- Average life span in the wild: 30 years
- Size: 33 to 38 in (84 to 97 cm); wingspan: 6 to 7.5 ft (1.8 to 2.3 m)
- Weight: 6 to 15 lbs (3 to 7 kg)
This powerful eagle is North America's largest bird of prey and the national bird of Mexico. These birds are dark brown, with lighter golden-brown plumage on their heads and necks. They are extremely swift, and can dive upon their quarry at speeds of more than 150 miles (241 kilometers) per hour.
Golden eagles use their speed and sharp talons to snatch up rabbits, marmots, and ground squirrels. They also eat carrion, reptiles, birds, fish, and smaller fare such as large insects. They have even been known to attack full-grown deer. Ranchers once killed many of these birds for fear that they would prey on their livestock, but studies showed that the animal's impact was minimal. Today, golden eagles are protected by law.
Golden eagle pairs maintain territories that may be as large as 60 square miles (155 square kilometers). They are monogamous and may remain with their mate for several years or possibly for life. Golden eagles nest in high places including cliffs, trees, or human structures such as telephone poles. They build huge nests to which they may return for several breeding years. Females lay from one to four eggs, and both parents incubate them for 40 to 45 days. Typically, one or two young survive to fledge in about three months.
These majestic birds range from Mexico through much of western North America as far north as Alaska; they also appear in the east but are uncommon. Golden eagles are also found in Asia, northern Africa, and Europe.
Some golden eagles migrate, but others do not—depending on the conditions of their geographic location. Alaskan and Canadian eagles typically fly south in the fall, for example, while birds that live in the western continental U.S. tend to remain in their ranges year-round.
July 20, 2011
Polar Bear Cubs Face Death As Arctic Ice Melts
As their icy Arctic habitat melts, polar bear mothers and their cubs are forced to swim long distances, which expose the cubs to higher mortality rates than cubs who do not have to swim as far, a study shows.
"Climate change is pulling the sea ice out from under polar bears' feet, forcing some to swim longer distances to find food and habitat," co-author of the study, Geoff York of World Wildlife Fund (WWF), told Reuters.Polar bears are not naturally aquatic creatures. They rely on ice or land to hunt, feed and give birth, reports Reuters.
Previous studies found that individual animals have had to swim hundreds of miles to reach ice platforms or land, but this is the first to show how these long swims expose polar bear cubs to greater risks.
According to York, the current study is the first time these long swims have been quantitatively measured.
Researchers used satellites to track 68 female polar bears equipped with GPS collars over a six-year span, from 2004 to 2009. Data was gathered to find occasions when these bears swam for more than 30 miles at a time.

Over those six years, there were 50 long-distance swims involving 20 bears, ranging up to 426 miles in distance and lasting up to about 12.7 days, according to the study presented at the International Bear Association Conference in Ottawa, Canada, this week.
At the start of the study, when the bears were equipped with the GPS collars, 11 of the bears that swam long distances had young cubs. Five of those polar bear mothers lost their babies during the long swims, a 45% mortality rate, the study reports.
For cubs that didn't have to swim long distances with their mothers, the mortality rate was 18%.
"They can't close off their nasal passages in rough waters," York said in a telephone interview with Reuters. "So for old bears or young bears alike, if they're out in open water and a storm hits, they're going to have a tough time surviving."
Steve Amstrup, a former scientist at the U.S. Geological Survey and current chief scientist at Polar Bears International, told Reuters, "Young bears don't have very much fat and therefore they aren't very well insulated and cannot cope with being in cold water for very long."
Since they are leaner than their parents, Amstrup says that the cubs "aren't as buoyant (as adult polar bears) so in rough water they'll have more difficulty keeping their heads above water."
Polar bears were listed as a threatened species in 2008 under the Endangered Species Act, during the Bush administration. Reuters reports that the decision was upheld after being challenged last month, and Canada listed polar bears as a species at risk this month.
The accumulation of greenhouse gases in the atmosphere is warming the Arctic faster than lower latitudes, Reuters reports, and the melting of sea ice in summer is accelerating the warming effect.
"Unless we take action to curb climate change and transition to low-carbon energy sources like renewable energy, we will consign our planet to a very perilous path," York said.
(1931 - )
Audrey Flack is a Jewish American artist best known for her photorealist paintings and sculptures.
Born in New York in 1931 to a middle-class family, she attended the Music and Art High School in New York City before going on to graduate from Cooper Union in 1951. At this time, Flack identified as an Abstract Expressionist and found herself having to be "one of the boys" in order to fit in. Flack says that she was not treated differently as a student because she was a woman, but many artists, students, and visitors could relate to her only as a woman. They treated her as a sex object, and her goal of becoming a professional artist was not taken seriously.
Following her graduation from Cooper Union, Flack attended Yale University and studied under Josef Albers. It was there that Flack was influenced to move beyond abstract expressionism. Albers encouraged her to use realism instead of abstract expressionism to express her political messages. She graduated from Yale with a Bachelor of Fine Arts in 1953 and subsequently moved back to New York to study anatomy at the Art Students League. Flack's first solo exhibition was held at the Roko Gallery in New York in 1959.
While it was considered acceptable to use a photograph as the basis of a painting prior to the birth of Photorealism, it was not considered acceptable for the painting to look like the photograph. In 1965, Flack painted her first portrait based on a photograph, imitating its colors and appearance. Her use and outspokenness about the technique isolated her from the art community and other realists. Unlike many photorealists at the time who used masculine and often unemotional subjects, Flack's paintings concentrated on highly emotional social and political themes. She is known for her feminine color schemes, which were dominated by pastel colors. Many of her photographs came from documentary news and included numerous public figures. One of her most well-known and significant works depicts President Kennedy's motorcade moments before his assassination. In 1966, Flack became the first photorealist painter whose work entered the collection of the Museum of Modern Art.
During the 1970s, Flack worked on her well-known series of still-life paintings, and in 1972 she began to explore the role of women in society. Many of her paintings featured female religious statues and goddesses. Flack began sculpting in the 1980s. Her first sculpture, small enough to fit into the palm of a hand, was a cherub clasping a shield over his heart. She then began work on a series of much larger sculptures that embodied female strength. In 1988, Flack was commissioned to create her "Civitas" series, four twenty-foot-high bronze goddesses that guard the entrance to Rock Hill, South Carolina. She was also later commissioned to create Islandia, a nine-foot bronze sculpture for the New York City Technical College in Brooklyn, NY.
Consistent through Flack’s career is her emphasis on symbolism. She tries to make her work “universal,” something that all audiences can relate to and understand.
The University of South Florida in Tampa organized Flack’s first retrospective exhibit in 1981. Her work has since been exhibited at Cooper Union (1986), JB Speed Museum in Louisville, Kentucky (1990), The Parrish Art Museum in South Hampton (1991), the Wright Art Museum in Los Angeles, CA (1992), the Guild Hall Museum in East Hampton (1996), and the Miami University Art Museum in Oxford, Ohio (1997). Flack’s work is also part of the public collections at the Metropolitan Museum of Art, the Museum of Modern Art, the Whitney Museum of American Art, the Solomon R. Guggenheim Museum, the National Museum of American Art, the National Museum of Women in the Arts, the San Francisco Museum of Modern Art, the Walker Art Center, the Los Angeles County Museum of Art, and the National Museum of Art in Canberra, Australia.
Flack holds an honorary doctorate and was awarded the St. Gaudens Medal from Cooper Union and the honorary Albert Dome professorship from Bridgeport University. She is also an honorary professor at George Washington University, and has previously taught at the University of Pennsylvania, The Pratt Institute in New York, New York University, and The School of Visual Arts. Flack has also written two books.
Sources: Audrey Flack, Hofstra; Audrey Flack: Breaking the Rules, Humanities Web; ArtNet; Wikipedia
X. Injury Prevention and Control
- Youth Violence Prevention
- Intimate Partner Violence
- Bicycle Helmet and Head Injury Prevention
- Fire-Related Injury Prevention
Injury, the leading cause of death for Americans ages 1 to 44 years, is largely preventable. CDC leads federal efforts to prevent and control injuries with a program that addresses the main causes of death and disability from injury: fires and burns; poisoning; drowning; violence, including homicide and suicide; motor vehicle crashes; and failure to use bicycle helmets, seat belts, and child restraint seats. Injury has a disproportionate impact on children, youth, and young adults. Every day 60 children die from injury, almost 3 children every hour. Each year over 150,000 Americans die from injuries, and 1 in 3 persons suffers a nonfatal injury. Injuries, one of our most expensive health problems, cost $224 billion per year in total lifetime costs. While CDC and our public and private partners have made tremendous progress in injury prevention and control during the past several years, examples of the magnitude of the injury problem are highlighted below:
- Home fires and falls among older persons cause thousands of deaths and injuries each year and result in high medical costs and property losses;
- Violence continues to result in staggering numbers of lives lost, and frequently this is violence among intimate partners -- each year over 30% of women murdered in the U.S. are killed by a spouse or ex-spouse;
- The rates of homicide and suicide for young Americans, particularly men, are alarmingly higher than for any other Western industrialized nation;
- An estimated 2 million Americans suffer a traumatic brain injury (TBI) each year, of which about 50,000 die and another 50,000 to 70,000 are disabled;
- Approximately 4 million poisonings occur each year costing the health care system approximately $3 billion/year; and
- Each year about 153,000 children receive treatment in hospital emergency departments for bicycle-related head injuries.
Through the National Center for Injury Prevention and Control, CDC provides national leadership for designing programs to prevent premature death and disability and reduce human suffering and medical costs caused by injuries. CDC accomplishes its mission through: extramural and intramural research; developing, evaluating, and implementing prevention programs; assisting state and local health jurisdictions in their efforts to reduce injuries; and conducting prevention activities in partnership with other federal and private-sector agencies. Evaluation of intervention programs is a key component of CDC's overall strategy to discover what works and determine how to deliver programs to the American people.
As the lead federal Center for injury prevention and control NCIPC continues to discover and deliver proven interventions. For example:
- Funded five states to conduct three-year programs aimed at increasing the number of working smoke alarms in homes. During the project period, over 15,000 long life, lithium-powered smoke alarms were distributed and/or installed.
- Funded six states for programs aimed at increasing bicycle helmet use among riders of all ages. Measurable increases in helmet use has resulted from the implementation of this intervention.
- NCIPC and the National Institute of Justice co-sponsored a study that identified gaps in our knowledge of violence against women and developed a research agenda to better understand and control the problem. Violence against women research centers are being established.
- Community and school-based efforts to prevent youth violence have been launched, including evaluation of promising violence prevention strategies such as peer mediation and conflict resolution training, mentoring and role playing, and efforts to improve parenting skills.
- Work to prevent suicide among our Nation's elderly and youth continues, including taking steps to establish the first research center focused on suicide prevention.
- In an effort to develop a uniform reporting system for TBI, funding was provided to fifteen state health departments to conduct TBI surveillance. Data from these surveillance systems will enable NCIPC to estimate the magnitude and severity of TBI nationally and to assist states in TBI prevention programs.
- To ensure that data is available to study and improve trauma care, NCIPC is leading a national effort to develop uniform data elements for emergency department records.
- Registries that link persons with TBI to medical services are being established.
- Improving institutional and community living environments for elderly citizens as a means of reducing the risks and consequences of falls.
Focus of the FY 2000 Performance Plan
The performance measures for injury prevention and control best represent NCIPC's mission to provide leadership in preventing and controlling injuries through research, surveillance, implementation of programs, and communication. Priority areas for the FY 2000 Performance Plan include:
- Youth violence prevention
- Intimate partner violence prevention
- Bicycle helmet usage and head injury prevention
- Fire-related injury prevention
Links to DHHS Strategic Plan
Each of the NCIPC performance objectives and measures is related to DHHS Goal 1: Reduce major threats to the health and productivity of all Americans.
Validation/Verification - Data Source Descriptions
The following data collection sources will be utilized to verify baselines and to track performance measures.
The National Electronic Injury Surveillance System (NEISS): The system comprises a sample of hospitals that is statistically representative of hospital emergency rooms nationwide. From the data collected, estimates can be made of the numbers of injuries associated with consumer products and treated in hospital emergency departments. Data is collected on a broad range of injury-related issues, covering hundreds of product categories, and provides national estimates of the number and severity of product-related injuries. (Consumer Product Safety Commission).
National Vital Statistics System: The National Vital Statistics System is responsible for the Nation's official vital statistics. These vital statistics are provided through state-operated registration systems. The registration of vital events--births, deaths, marriages, divorces, fetal deaths, and induced terminations of pregnancy -- is a state function. However, standard forms for the collection of the data and model procedures for the uniform registration of the events are developed and recommended for state use through cooperative activities of the states and the National Center for Health Statistics (NCHS). (National Center for Health Statistics, CDC).
National Health Interview Survey: The National Health Interview Survey (NHIS) is the principal source of information on the health of the civilian noninstitutionalized population of the United States and is one of the major data collection programs of the National Center for Health Statistics (NCHS). NHIS data are used widely throughout the Department of Health and Human Services (DHHS) to monitor trends in illness and disability and to track progress toward achieving national health objectives. The data are also used by the public health research community for epidemiologic and policy analysis of such timely issues as characterizing those with various health problems, determining barriers to accessing and using appropriate health care, and evaluating federal health programs. (National Center for Health Statistics, CDC)
Youth Risk Behavior Surveillance System (YRBSS): The purpose of the YRBSS is to provide a framework that will: 1) focus the nation on risk behaviors among youth causing the most important health problems; 2) assess how risk behaviors change over time; and 3) provide comparable national, state, and local data. (National Center for Chronic Disease Prevention and Health Promotion, CDC)
Behavioral Frequency Scales: This instrument is used to measure aggressive and delinquent behavior among program participants in CDC-funded youth violence prevention programs. The inventory includes scales that assess the 30-day frequency of specific delinquent behaviors (10 items), violent behaviors (5 items), gateway drug use (6 items), and other drug use (4 items). An additional 16 items assess frequency of use for other drugs, concerns about safety, the use of conflict-resolution skills, and the use of the peer mediators. Reliability ranges from .64 to .87. (The Center for the Study and Prevention of Violence, Boulder, Colorado - Peter Tolan & Nancy Guerra).
BOB ABERNETHY, anchor: Here at home, another debate about testing medicine on children. Until recently, vaccines and drugs used on children were tested first on animals, adults, and, sometimes, on older children. But not on young children. That created enough uncertainty about what worked, and what the doses should be, that the government has now begun to encourage and in some cases require that drugs for children first be tested on children, even though testing also brings problems. Betty Rollin begins her report with a study testing a nasal spray that might replace shots as a means of preventing children’s flu.
BETTY ROLLIN: He’s too young to know, but 15-month-old Jack Metcalf is not just a patient; he’s a participant — one of 4,000 nationwide — in a study that will show the efficacy of a particular nasal spray to prevent flu in children.
Jack’s mother looks forward to one less a year of these.
TONI METCALF: I hope that the flu mist is approved and will become available for children to use, as opposed to a shot.
ROLLIN: One of the researchers for this study and for others is Dr. Richard Schwartz, a pediatrician in northern Virginia, who has struggled for years with the problem of inadequate information about children’s medicine.
Dr. RICHARD SCHWARTZ (Pediatrician and Researcher): There are a lot of medicines out there that have never been tested on children so it leaves the doctors high and dry in a legal quagmire, using them without FDA approval because they have evidence above 12, above 18, but not for younger children.
ROLLIN: Sometimes the medicine is appropriate for the child, but the dosage is wrong.
Dr. DIANNE MURPHY (FDA, Director of the Office of Pediatric Drug Development and Program Initiatives): That dose may be too high. If it’s too high, the child gets toxic and we have to take them off of that dose, and they are denied that medicine. Or the converse, give them the medicine at the low dose and that doesn’t work, and put them on another medicine and that medicine may be more toxic.
ROLLIN: Under the 1997 Modernization Act, which has recently been reauthorized by President Bush, drug companies have incentives to test existing drugs that are prescribed for children on children. In addition, the FDA encourages that new drugs to be used by children first be tested on children.
The FDA estimates more than 36,000 children are currently enrolled in clinical trials from medicine taste testing to cancer treatments.
Dr. MURPHY: We are finding out so many important things in products, some of which have been used on children for years. So we think it is very vigorous and healthy as long as we’re careful.
ROLLIN: The problem is weighing the possible benefits of the test against the dangers to the participants, who are too young to give informed consent.
Dr. MURPHY: We know that there are sometimes issues, questions about, should kids who don’t have a disease be enrolled? What about placebo-controlled trials in children? What about kids who are very vulnerable because they have some neurological problems, how do you deal. You can’t say don’t use drugs on those children, but how do you do it in the safest way?
ROLLIN: Safety is indeed the major issue. Federal guidelines designed to ensure the safety of children, require both parental consent and when, possible, the child’s assent. In addition, the research should be of some benefit to the child being tested. And risk should be minimal.
But Professor Adil Shamoo of the University of Maryland School of Medicine believes that children participating in trials are still at greater risk than they should be.
Professor ADIL SHAMOO (University of Maryland School of Medicine): By definition, research is non-therapeutic; it means you are testing something brand new and you don’t know its outcome, regardless of what they claim in their advertisement that this medicine is better than the existing medicine.
ROLLIN: Researchers like Dr. Schwartz are involved in testing medications with low risk, but other clinical trials involve more risk, and some children have been sickened and some have died.
Linda Smith (not her real name), a registered nurse, had a seven-year-old son, now 18, who suffers from HCM, hypertrophic cardiomyopathy, a condition that thickens the heart and can cause sudden death. Linda's cardiologist suggested that she take her son to the NIH, where Dr. Lameh Fananapazir was conducting a trial on pacemakers.
By implanting a pacemaker like this one in the hearts of children with HCM, Dr. Fananapazir hoped to find whether symptoms could be alleviated or the disease reversed. Dr. Fananapazir won approval from the NIH review board based on his research implanting pacemakers in adults. And from 1993 through 1996, at least 55 children, ages 5 to 15, participated.
But there were serious problems. Some families brought suit against the NIH — one for a wrongful death. At one point, Linda’s son almost died.
Ms. SMITH (Registered Nurse): When my son reached a critical point where we had to go somewhere else, I did start researching and said, “I am going somewhere else, he’s getting worse.” This particular physician told me, “No, you owe us six more months, the program is for five years.” And we were six months short of that. He didn’t want us to leave until the five years were up. Well, my son had a cardiac arrest three days later, and I ended up doing mouth to mouth on him.
ROLLIN: Linda believes that she was discouraged from pursuing other treatment options, like surgery, which turned out to be what her son needed.
Ms. SMITH: They were presented to me as such high-risk options that we wouldn’t want to even consider having them done.
ROLLIN: Dr. Schwartz feels that the vast majority of researchers behave differently.
Dr. SCHWARTZ: If you keep your ethics high and you report things that are adverse reactions that are happening and you are honest with your patients, I think that will benefit everybody.
ROLLIN: There are always risks in enrolling children in clinical trials. But there are also risks, big risks, doctors say, in treating children without knowing more about what works.
I’m Betty Rollin for Religion & Ethics NewsWeekly in Vienna, Virginia.
ABERNETHY: Recently, the NIH settled a lawsuit brought by several families who had participated in the pacemaker study. The NIH didn’t admit to any wrongdoing and declined to comment further.
FORDHAM UNIVERSITY, Fordham College Lincoln Center
CSEU 3500 -- Data Base Systems
Dept. of Computer & Info. Sciences, Spring, 2004
Homework for Chapter 4, Set 1
Due date: Wednesday, March 24
- [48 pts.] Consider the employee database of Figure 4.13 [as
modified for use with PostgreSQL. The
primary keys are underlined. Note that the key of company has
been enlarged to include city so that a company can be located
in more than one city. For brevity, the word ``Corporation'' has been
left off the names of the companies in the database.]
Give an expression in [standard] SQL for each of the following queries.
[Perform the queries using the PostgreSQL interactive query processor
on host erdos, or another SQL query processor available to you. If you use
another system, copy the data from the class web page to populate the
tables. Hand in a hard copy of both the queries and the results.
The questions marked modified are different from the
corresponding queries in the text. A sample solution for part (a),
illustrating the expected form, is sketched after the query list.]
employee (employee_name, street, city)
works (employee_name, company_name, salary)
company (company_name, city)
manages (employee_name, manager_name)

Figure 4.13 Employee database.
- Find the names of all employees who work for First Bank.
- modified Find the names and companies
of all employees who work for companies that are located in White Plains.
List alphabetically by company, then by name.
- modified Find the names and streets of residence
of all employees who live in Rye and earn more than $100,000.
- Find [the names and cities of residence of] all employees in the
database who live in [any of] the cities in which the companies for
which they work are located.
- Find [the names, streets and cities of] all employees in
the database who live in the same cities and on the same streets as
do their managers.
- Find [the names of] all employees in the database who do
not work for First Bank.
- modified Find the names of all employees in the
database who earn more than Johnson earns.
- modified List all the companies in the database
together with their
payrolls. (A company's payroll is the total of all the company's
employees' salaries.) Sort the list so it is printed with the largest
payroll first. Label the column containing payroll amounts ``payroll''.
- Find [the names of] all employees who earn more than the average
salary of all employees of their company.
- modified Find the name of the company located in
White Plains that has the fewest employees.
- Find [the name of] the company that has the smallest payroll.
- modified List all companies that have more
employees than Small Bank has.
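
A sample solution sketch for part (a) follows, to illustrate the
expected form; it is only a sketch. It assumes the table and column
names of Figure 4.13 exactly as shown, and that the company name
appears in the class data as the literal string 'First Bank' (string
comparison in standard SQL is case-sensitive, so adjust the literal if
your copy of the data spells it differently).

    -- (a) Find the names of all employees who work for First Bank.
    -- Assumes the Figure 4.13 schema and the literal 'First Bank'.
    SELECT employee_name
    FROM works
    WHERE company_name = 'First Bank';

Note that selecting only employee_name can return duplicate rows if the
same person appears in works more than once; write SELECT DISTINCT
employee_name if your data allows that case.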
About this Grantee
Ms. Geiersbach and Ms. Letter use quilting to enhance their students’ mathematics and social studies skills. Students read literature to understand the historical significance of quilting. Using pattern blocks, students compare and categorize shapes to design quilt squares. Based on their learning, students make and donate quilts to local shelters.
ARCHEOLOGICAL COVER-UPS
by David Hatcher Childress
Most of us are familiar with the last scene in the popular Indiana
Jones archeological adventure film RAIDERS OF THE LOST ARK in which
an important historical artefact, the Ark of the Covenant from the
Temple in Jerusalem, is locked in a crate and put in a giant
warehouse, never to be seen again, thus ensuring that no history
books will have to be rewritten and no history professor will have
to revise the lecture that he has been giving for the last forty
While the film was fiction, the scene in which an important ancient
relic is buried in a warehouse is uncomfortably close to reality for
many researchers. To those who investigate allegations of
archaeological cover-ups, there are disturbing indications that the
most important archaeological institute in the United States, the
Smithsonian Institute, an independent federal agency, has been
actively suppressing some of the most interesting and important
archaeological discoveries made in the Americas.
The Vatican has been long accused of keeping artefacts and ancient
books in their vast cellars, without allowing the outside world
access to them. These secret treasures, often of a controversial
historical or religious nature, are allegedly suppressed by the
Catholic Church because they might damage the church's credibility,
or perhaps cast their official texts in doubt. Sadly, there is
overwhelming evidence that something very similar is happening with
the Smithsonian Institution.
The cover-up and alleged suppression of archaeological evidence
began in late 1881 when John Wesley Powell, the geologist famous for
exploring the Grand Canyon, appointed Cyrus Thomas as the director
of the Eastern Mound Division of the Smithsonian Institution's
Bureau of Ethnology.
When Thomas came to the Bureau of Ethnology he was a
"pronounced believer in the existence of a race of Mound Builders,
distinct from the American Indians."
However, John Wesley Powell, the director of the Bureau of
Ethnology, a very sympathetic man toward the American Indians, had
lived with the peaceful Winnebago Indians of Wisconsin for many
years as a youth and felt that American Indians were unfairly
thought of as primitive and savage.
The Smithsonian began to promote the idea that Native Americans, at
that time being exterminated in the Indian Wars, were descended from
advanced civilisations and were worthy of respect and protection.
They also began a program of suppressing any archaeological evidence
that lent credence to the school of thought known as Diffusionism, a
school which believes that throughout history there has been
widespread dispersion of culture and civilisation via contact by
ship and major trade routes.
The Smithsonian opted for the opposite school, known as
Isolationism. Isolationism holds that most civilisations are
isolated from each other and that there has been very little contact
between them, especially those that are separated by bodies of
water. In this intellectual war that started in the 1880s, it was
held that even contact between the civilisations of the Ohio and
Mississippi Valleys were rare, and certainly these civilisations did
not have any contact with such advanced cultures as the Mayas,
Toltecs, or Aztecs in Mexico and Central America. By Old World
standards this is an extreme, and even ridiculous idea, considering
that the river system reached to the Gulf of Mexico and these
civilisations were as close as the opposite shore of the gulf. It
was like saying that cultures in the Black Sea area could not have
had contact with the Mediterranean.
When the contents of many ancient mounds and pyramids of the Midwest
were examined, it was shown that the history of the Mississippi
River Valleys was that of an ancient and sophisticated culture that
had been in contact with Europe and other areas. Not only that, the
contents of many mounds revealed burials of huge men, sometimes
seven or eight feet tall, in full armour with swords and sometimes
(Vangard note: Eastern Indian texts say that at one time men lived
thousands of years and grew very tall in direct proportion to their
age, as does the Bible with the comment "and there were GIANTS in
the earth in those days...")
For instance, when Spiro Mound in Oklahoma was excavated in the
1930's, a tall man in full armour was discovered along with a pot of
thousands of pearls and other artefacts, the largest such treasure
so far documented. The whereabouts of the man in armour is unknown,
and it is quite likely that it eventually was taken to the
Smithsonian.
In a private conversation with a well-known historical researcher
(who shall remain nameless), I was told that a former employee of
the Smithsonian, who was dismissed for defending the view of
diffusionism in the Americas (i.e. the heresy that other ancient
civilisations may have visited the shores of North and South America
during the many millenia before Columbus), alleged that the
Smithsonian at one time had actually taken a barge full of unusual
artefacts out into the Atlantic and dumped them in the ocean.
Though the idea of the Smithsonian' covering up a valuable
archaeological find is difficult to accept for some, there is,
sadly, a great deal of evidence to suggest that the Smithsonian
Institution has knowingly covered up and 'lost' important
archaeological relics. The STONEWATCH NEWSLETTER of the Gungywamp
Society in Connecticut, which researches megalithic sites in New
England, had a curious story in their Winter 1992 issue about stone
coffins discovered in 1892 in Alabama which were sent to the
Smithsonian Institution and then 'lost'. According to the
newsletter, researcher Frederick J. Pohl wrote an intriguing letter
in 1950 to the late Dr. T.C. Lethbridge, a British archaeologist.
The letter from Pohl stated, "A professor of geology sent me a
reprint (of the) Smithsonian Institution, THE CRUMF BURIAL CAVE by
Frank Burns, US Geological Survey, from the report of the US
National Museum for 1892, pp 451-454, 1894. In the Crumf Cave,
southern branch of the Warrior River, in Murphy's Valley, Blount
County, Alabama, accessible from Mobile Bay by river, were coffins
of wood hollowed out by fire, aided by stone or copper chisels.
Either of these coffins were taken to the Smithsonian. They were
about 7.5 feet long, 14" to 18" wide, 6" to 7" deep. Lids open.
"I wrote recently to the Smithsonian, and received a reply March
11th from F.M. Setzler, Head Curator of Department of Anthropology
(He said) 'We have not been able to find the specimens in our
collections, though records show that they were received."
David Barron, President of the Gungywamp Society was eventually told
by the Smithsonian in 1992 that the coffins were actually wooden
troughs and that they could not be viewed anyway because they were
housed in an asbestos-contaminated warehouse. This warehouse was to
be closed for the next ten years and no one was allowed in except
the Smithsonian personnel!
Ivan T. Sanderson, a well-known zoologist and frequent guest on
Johnny Carson's TONIGHT SHOW in the 1960s (usually with an exotic
animal such as a pangolin or a lemur), once related a curious story
about a letter he received regarding an engineer who was stationed
on the Aleutian island of Shemya during World War II. While
building an airstrip, his crew bulldozed a group of hills and
discovered under several sedimentary layers what appeared to be
human remains. The Alaskan mound was in fact a graveyard of
gigantic human remains, consisting of crania and long leg bones.
The crania measured from 22 to 24 inches from base to crown. Since
an adult skull normally measures about eight inches from back to
front, such a large crania would imply an immense size for a
normally proportioned human. Furthermore, every skull was said to
have been neatly trepanned (a process of cutting a hole in the upper
portion of the skull).
In fact, the habit of flattening the skull of an infant and forcing
it to grow in an elongated shape was a practice used by ancient
Peruvians, the Mayas, and the Flathead Indians of Montana. Sanderson
tried to gather further proof, eventually receiving a letter from
another member of the unit who confirmed the report. The letters
both indicated that the Smithsonian Institution had collected the
remains, yet nothing else was heard. Sanderson seemed convinced
that the Smithsonian Institution had received the bizarre relics,
but wondered why they would not release the data. He asks, "...is
it that these people cannot face rewriting all the textbooks?"
In 1944 an accidental discovery of an even more controversial nature
was made by Waldemar Julsrud at Acambaro, Mexico. Acambaro is in
the state of Guanajuato, 175 miles northwest of Mexico City. The
strange archaeological site there yielded over 33,500 objects of
ceramic and stone, including jade, and knives of obsidian (sharper than
steel and still used today in heart surgery). Julsrud, a prominent
local German merchant, also found statues ranging from less than an
inch to six feet in length depicting great reptiles, some of them in
ACTIVE ASSOCIATION with humans - generally eating them, but in some
bizarre statuettes an erotic association was indicated. To
observers many of these creatures resembled dinosaurs.
Julsrud crammed this collection into twelve rooms of his expanded
house. There startling representations of Negroes, Orientals, and
bearded Caucasians were included as were motifs of Egyptians,
Sumerian and other ancient non-hemispheric civilisations, as well as
portrayals of Bigfoot and aquatic monsterlike creatures, weird
human-animal mixtures, and a host of other inexplicable creations.
Teeth from an extinct Ice Age horse, the skeleton of a mammoth, and
a number of human skulls were found at the same site as the ceramic
artefacts.
Radio-carbon dating in the laboratories of the University of
Pennsylvania and additional tests using the thermoluminescence
method of dating pottery were performed to determine the age of the
objects. Results indicated the objects were made about 6,500 years
ago, around 4,500 BC. A team of experts at another university,
shown Julsrud's half-dozen samples but unaware of their origin, ruled
out the possibility that they could have been modern reproductions.
However, they fell silent when told of their controversial source.
In 1952, in an effort to debunk this weird collection which was
gaining a certain amount of fame, American archaeologist Charles C.
DiPeso claimed to have minutely examined the then 32,000 pieces
within not more than four hours spent at the home of Julsrud. In a
forthcoming book, long delayed by continuing developments in his
investigation, archaeological investigator John H. Tierney, who has
lectured on the case for decades, points out that to have done that
DiPeso would have had to have inspected 133 pieces per minute
steadily for four hours, whereas in actuality, it would have
required weeks merely to have separated the massive jumble of
exhibits and arranged them properly for a valid evaluation.
Tierney, who collaborated with the later Professor Hapgood, the late
William N. Russell, and others in the investigation, charges that
the Smithsonian Institution and other archaeological authorities
conducted a campaign of disinformation against the discoveries. The
Smithsonian had, early in the controversy, dismissed the entire
Acambaro collection as an elaborate hoax. Also, utilising the
Freedom of Information Act, Tierney discovered that practically the
entirety of the Smithsonian's Julsrud case files is missing.
After two expeditions to the site in 1955 and 1968, Professor
Charles Hapgood, a professor of history and anthropology at the
University of New Hampshire, recorded the results of his 18-year
investigation of Acambaro in a privately printed book entitled
MYSTERY IN ACAMBARO. Hapgood was initially an open-minded skeptic
concerning the collection but became a believer after his first
visit in 1955, at which time he witnessed some of the figures being
excavated and even dictated to the diggers where he wanted them to
dig.
Adding to the mind-boggling aspects of this controversy is the fact
that the Instituto Nacional de Antropologia e Historia, through the
late Director of PreHispanic Monuments, Dr. Eduardo Noguera (who,
as head of an official investigating team at the site, issued a
report which Tierney will be publishing), admitted "the apparent
scientific legality with which these objects were found." Despite
evidence of their own eyes, however, officials declared that because
of the objects 'fantastic' nature, they had to have been a hoax
played on Julsrud!
A disappointed but ever-hopeful Julsrud died. His house was sold
and the collection put in storage. The collection is not currently
open to the public.
Perhaps the most amazing suppression of all is the excavation of an
Egyptian tomb by the Smithsonian itself in Arizona. A lengthy front
page story of the PHOENIX GAZETTE on 5 April 1909 (follows this
article) gave a highly detailed report of the discovery and
excavation of a rock-cut vault by an expedition led by a Professor
S.A. Jordan of the Smithsonian. The Smithsonian, however, claims to
have absolutely no knowledge of the discovery or its discoverers.
The World Explorers Club decided to check on this story by calling
the Smithsonian in Washington, D.C., though we felt there was little
chance of getting any real information. After speaking briefly to
an operator, we were transferred to a Smithsonian staff
archaeologist, and a woman's voice came on the phone and identified
herself.
I told her that I was investigating a story from a 1909 Phoenix
newspaper article about the Smithsonian Institution's having
excavated rock-cut vaults in the Grand Canyon where Egyptian
artefacts had been discovered, and whether the Smithsonian
Institution could give me any more information on the subject.
"Well, the first thing I can tell you, before we go any further,"
she said, "is that no Egyptian artefacts of any kind have ever been
found in North or South America. Therefore, I can tell you that the
Smithsonian Institute has never been involved in any such
excavations." She was quite helpful and polite but, in the end,
knew nothing. Neither she nor anyone else with whom I spoke could
find any record of the discovery or of either G.E. Kinkaid or
Professor S.A. Jordan.
While it cannot be discounted that the entire story is an elaborate
newspaper hoax, the fact that it was on the front page, named the
prestigious Smithsonian Institution, and gave a highly detailed
story that went on for several pages, lends a great deal to its
credibility. It is hard to believe such a story could have come out
of thin air.
Is the Smithsonian Institution covering up an archaeological
discovery of immense importance? If this story is true it would
radically change the current view that there was no transoceanic
contact in pre-Columbian times, and that all American Indians, on
both continents, are descended from Ice Age explorers who came
across the Bering Strait. (Any information on G.E. Kinkaid and
Professor S.A. Jordan, or their alleged discoveries, that readers
may have would be greatly appreciated.....write to Childress at the
World Explorers Club at the above address.)
Is the idea that ancient Egyptians came to the Arizona area in the
ancient past so objectionable and preposterous that it must be
covered up? Perhaps the Smithsonian Institution is more interested
in maintaining the status quo than rocking the boat with astonishing
new discoveries that overturn previously accepted academic
Historian and linguist Carl Hart, editor of WORLD EXPLORER, then
obtained a hiker's map of the Grand Canyon from a bookstore in
Chicago. Poring over the map, we were amazed to see that much of
the area on the north side of the canyon has Egyptian names. The
area around Ninety-four Mile Creek and Trinity Creek had areas (rock
formations, apparently) with names like Tower of Set, Tower of Ra,
Horus Temple, Osiris Temple, and Isis Temple. In the Haunted Canyon
area were such names as the Cheops Pyramid, the Buddha Cloister,
Buddha Temple, Manu Temple and Shiva Temple. Was there any
relationship between these places and the alleged Egyptian
discoveries in the Grand Canyon?
We called a state archaeologist at the Grand Canyon, and were told
that the early explorers had just liked Egyptian and Hindu names,
but that it was true that this area was off limits to hikers or
other visitors, "because of dangerous caves."
Indeed, this entire area with the Egyptian and Hindu place names in
the Grand Canyon is a forbidden zone - no one is allowed into this
area.
We could only conclude that this was the area where the vaults were
located. Yet today, this area is curiously off-limits to all hikers
and even, in large part, park personnel.
I believe that the discerning reader will see that if only a small
part of the "Smithsoniangate" evidence is true, then our most
hallowed archaeological institution has been actively involved in
suppressing evidence for advanced American cultures, evidence for
ancient voyages of various cultures to North America, evidence for
anomalistic giants and other oddball artefacts, and evidence that
tends to disprove the official dogma that is now the history of
North America.
The Smithsonian's Board of Regents still refuses to open its
meetings to the news media or the public. If Americans were ever
allowed inside the 'nation's attic', as the Smithsonian has been
called, what skeletons might they find?
|
<urn:uuid:e084cdd6-07ad-4e7e-9a69-eea3880ae4fa>
|
CC-MAIN-2016-26
|
http://strangeworldofmystery.blogspot.com/2009/12/archeological-cover-ups.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397795.31/warc/CC-MAIN-20160624154957-00010-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.957215 | 3,935 | 3.140625 | 3 |
Many of us spend just as much time in cyberspace touring the electronic landscapes of the Internet as we spend offline. But for all of the time we spend in front of our computer monitors, this virtual world lacks many of the real world's most precious attributes. One of the biggest drawbacks of the cyber world is its lack of realism. Most of us are born with five senses, allowing us to see, hear, touch, smell and taste; yet the Internet takes advantage of less than half of these.
When you log onto your computer, what senses are you using? Sight is probably the most obvious of the senses we use to collect information. The Internet is almost completely vision-based. While audio technology, like MP3 music files, has made a lot of noise recently, the Internet is made up mostly of words and pictures. You can also throw in touch as a third sense used in computer interaction, but that is mostly in terms of interfacing by way of keyboard and mouse. Since the beginning of the Internet, software developers have chosen to ignore our senses of smell and taste. However, there are at least two American companies that are planning to awaken all of your senses by bringing digital odors to the Internet.
We have the ability to recognize thousands of odors, and some scientists believe that smell has the power to unlock memories. In this edition of How Stuff Will Work, you will learn how smells will be transmitted to your desktop and what other applications this technology could present.
|
<urn:uuid:824fdb2d-fde1-470d-8411-e8a659a5044c>
|
CC-MAIN-2016-26
|
http://computer.howstuffworks.com/internet-odor.htm
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783399385.17/warc/CC-MAIN-20160624154959-00139-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.964732 | 302 | 2.984375 | 3 |
Episode 110 – An Elementary Journey to the NGSS
Elementary science has been on our minds recently. So it is fitting that our guest this week has been working hard helping elementary teachers tackle the Next Generation Science Standards. As Coordinator for Elementary Science in Baltimore County Schools, Eric Cromwell has the task of moving a large number of schools and teachers into an NGSS-based curriculum. Listen to the show to hear about Eric's experience with this transition as we discuss how elementary schools can embrace the NGSS.
- An Elementary Journey to the Next Generation Science Standards (Blog on our transition process)
- Office of Science PreK-12, Baltimore County Public Schools
- Venn Diagram of Practices Among CCSS-Math, CCSS-ELA, and NGSS
- Baltimore County STAT (Students & Teachers Accessing Tomorrow) – Explains our digital conversion process
- Picture This: Increasing Math and Science Learning by Improving Spatial Thinking
|
<urn:uuid:52738d2c-5e7a-4725-9715-c41a6efa4a8c>
|
CC-MAIN-2016-26
|
http://laboutloud.com/2014/03/episode-110-en-elementary-journey-to-the-ngss/
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783398873.39/warc/CC-MAIN-20160624154958-00191-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.907117 | 194 | 3 | 3 |
Mother's Milk: The First Step to Good Health
The goal of the Alabama Department of Public Health Breastfeeding Promotion Program is to promote the physical and emotional well-being of childbearing families and their infants by increasing the rate and duration of breastfeeding in the state. Breastfed babies are generally healthier than formula-fed babies. One way to accomplish this goal is public education. The purpose of this curriculum guide is to help form positive attitudes toward breastfeeding.
Our attitudes are shaped early in our lives by cultural, emotional, and social forces. We develop our beliefs about what is or is not acceptable from our families, teachers, and friends. Clearly, we know that mother's milk contributes to the health and wellness of infants. Formula is an alternative, but it does not provide the same benefits as breast milk. We know that breastfeeding is best. Current research tells us that breastfeeding impacts health through adulthood.
This curriculum may be used in its entirety or you may choose to use any individual lesson as a "spot activity." Each lesson can be incorporated into a content area such as social studies, science, health education, and/or family and consumer science. The course of study content standards are identified in the above four subject areas according to grade levels; however, many of the lessons can be used at levels above or below the one designated in the lesson. Reading the entire curriculum will allow you to decide which grade level a particular lesson fits best: Mother's Milk Education Package for Grades K-12.
Developing Thinking Skills
For children to develop a positive attitude regarding mother's milk, we must help them learn from an early age why it is a good choice for mothers and their infants. We must also address the issue that there has been a significant worldwide decrease in breastfeeding, and we need to promote how breastfeeding contributes to a child's and mother's well-being.
We need to address affective learning to help students value good health. The learner needs to be aware of the pressure from advertisers to promote formula feeding, to analyze how the demands of a career can affect a woman's choice to breastfeed or formula-feed, and to develop a positive attitude regarding the importance of mother's milk.
Developing the thinking skills of our students is critical to fostering acceptance of breastfeeding. The lessons demonstrate practical, interesting examples that help students examine the benefits of breastfeeding and form their own positive values. Students can practice using their thinking skills while learning about other content areas. The curriculum guide is offered to help reach the goal of positive attitudes toward mother's milk and, ultimately, to increase the number of women who choose breastfeeding.
Gayle Whatley, RN, WHNP-BC, CLC
Region II & III Perinatal Coordinator
OSC 252 1500 6th Avenue South
Birmingham, AL 35294
Phone: (205) 934-6254
Fax: (205) 996-7999
|
<urn:uuid:c9a77a0a-bcaf-4d56-9d80-7b99c5c6e893>
|
CC-MAIN-2016-26
|
http://adph.org/perinatal/Default.asp?id=712
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397695.90/warc/CC-MAIN-20160624154957-00160-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.942239 | 588 | 3.671875 | 4 |
Theory into practice: The realities of shared decision-making
Many of the recent efforts to reform the public schools of the United States have called for teachers to be involved in decision making at the school site. While teacher participation in decision making has a great deal of popular support and intuitive appeal, the research supporting shared decision making is rather limited. A review of the literature reveals that no effective method of measuring shared decision making exists. The purpose of this study is to create a reliable and valid instrument for measuring the participation of teachers in shared decision making.
A review of the literature established the empirical grounding for the subscales that comprise the instrument. A panel of experts confirmed the content validity of the instrument by reaching agreement on each of the 50 statements that make up the eight subscales. The reliability and validity of the instrument were determined by analyzing the data obtained by having 109 teachers in five schools complete the instrument. The instrument had a Cronbach's alpha reliability coefficient of .96, and each subscale had a Cronbach's alpha that surpassed the criterion of .70 established for reliability in this study. An analysis of variance was conducted and confirmed the instrument's validity in discriminating shared decision making levels among schools. The construct validity of the instrument was determined by examining the intercorrelation coefficients of each of the subscales. The final test of the instrument's validity was an analysis of each school's scores on the instrument and a comparison of these scores with an independent appraisal of each school's use of shared decision making. Every one of these tests confirmed that the instrument provides a reliable and valid measure of teacher participation in decision making.
This instrument can be used by researchers, practitioners, and education officials to analyze teacher participation in decision making at the school site. The instrument provides a way of conceptualizing shared decision making, of measuring it, and of relating it to educational outcomes.
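For reference, the Cronbach's alpha statistic cited above is conventionally defined as follows for a k-item scale; this is the standard textbook formula, stated here for context rather than reproduced from the dissertation itself.

% Cronbach's alpha for a k-item instrument, where \sigma_i^2 is the
% variance of item i and \sigma_X^2 is the variance of the total score.
\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} \sigma_i^2}{\sigma_X^2}\right)

Values closer to 1 indicate greater internal consistency, which is why the .70 criterion and the observed .96 coefficient support the instrument's reliability.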
Education, Administration|Sociology, Industrial and Labor Relations
John Joseph Russell, "Theory into practice: The realities of shared decision-making" (January 1, 1992). ETD Collection for Fordham University.
|
<urn:uuid:b545cc1e-f2ef-43d0-9768-4ca8233cfacd>
|
CC-MAIN-2016-26
|
http://fordham.bepress.com/dissertations/AAI9328427/
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783399117.38/warc/CC-MAIN-20160624154959-00138-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.941445 | 424 | 2.84375 | 3 |
Odysseus Crater, with a size of epic proportions, stretches across a large northern expanse on Saturn's moon Tethys.
This view looks toward the leading hemisphere of Tethys (1,062 kilometers, or 660 miles across). Odysseus Crater is 450 kilometers, or 280 miles, across. North on Tethys is up and rotated 3 degrees to the right.
The image was taken in visible green light with the Cassini spacecraft narrow-angle camera on Feb. 14, 2010. The view was obtained at a distance of approximately 178,000 kilometers (111,000 miles) from Tethys and at a Sun-Tethys-spacecraft, or phase, angle of 73 degrees. Image scale is about 1 kilometer (about 3,485 feet) per pixel.
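As a rough consistency check on the figures above, the stated image scale follows from the spacecraft-to-target distance once a per-pixel angular resolution is assumed. The sketch below assumes roughly 6 microradians per pixel for the narrow-angle camera; that value is an illustrative approximation introduced here, not one quoted in the caption.

# Rough check: image scale ~ distance x per-pixel angular resolution.
distance_km = 178_000   # spacecraft-to-Tethys distance given in the caption
ifov_rad = 6e-6         # assumed per-pixel resolution (~6 microradians/pixel)
scale_km = distance_km * ifov_rad
print(f"~{scale_km:.2f} km/pixel, ~{scale_km * 3280.84:.0f} ft/pixel")
# Prints roughly 1.07 km/pixel (about 3,500 ft), consistent with the
# caption's "about 1 kilometer (about 3,485 feet) per pixel".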
The Cassini-Huygens mission is a cooperative project of NASA, the European Space Agency and the Italian Space Agency. The Jet Propulsion Laboratory, a division of the California Institute of Technology in Pasadena, manages the mission for NASA's Science Mission Directorate, Washington, D.C. The Cassini orbiter and its two onboard cameras were designed, developed and assembled at JPL. The imaging operations center is based at the Space Science Institute in Boulder, Colo.
|
<urn:uuid:1892d650-0cda-43fd-ad9d-418225ef9f0c>
|
CC-MAIN-2016-26
|
http://www.jpl.nasa.gov/spaceimages/details.php?id=PIA12588
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783399385.17/warc/CC-MAIN-20160624154959-00008-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.897154 | 266 | 3.359375 | 3 |