text (string, lengths 263–344k) | id (string, length 47) | dump (string, 23 classes) | url (string, lengths 16–862) | file_path (string, lengths 125–155) | language (string, 1 class) | language_score (float64, 0.65–1) | token_count (int64, 57–81.9k) | score (float64, 2.52–4.78) | int_score (int64, 3–5)
---|---|---|---|---|---|---|---|---|---|
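The rows below follow the column layout summarized above. As a minimal, purely illustrative sketch (the field names come from the header; the example values and thresholds are invented), a record with this schema could be represented and filtered in plain Python like this:

```python
# One record laid out with the columns listed in the header above.
# The values here are placeholders, not taken from the dataset.
record = {
    "text": "…full page text…",
    "id": "<urn:uuid:00000000-0000-0000-0000-000000000000>",
    "dump": "CC-MAIN-2023-23",
    "url": "https://example.org/page",
    "file_path": "s3://commoncrawl/crawl-data/…/example.warc.gz",
    "language": "en",
    "language_score": 0.97,  # float64, roughly in [0.65, 1.0]
    "token_count": 1240,     # int64, roughly in [57, 81900]
    "score": 3.0,            # quality score, roughly in [2.52, 4.78]
    "int_score": 3,          # rounded score in [3, 5]
}

def keep(rec, min_score=3.0, min_language_score=0.9):
    """Keep English records above chosen quality and language-confidence thresholds."""
    return (
        rec["language"] == "en"
        and rec["score"] >= min_score
        and rec["language_score"] >= min_language_score
    )

filtered = [r for r in [record] if keep(r)]
```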
Copyright: The National Archives / Image: Logan Zachary
“They may be portions of one of Sir John Franklin’s ships. God grant that the crews are safe.”
A rough note, and one written in haste by a tired man moving quickly across incredibly difficult land on a mission to save lives or to bear witness to a tragedy.
Dr John Rae of the Hudson’s Bay Company, an Orcadian traveller without equal, was on his third major journey of Arctic exploration. On this journey, he was crossing Victoria Land, and he was looking for traces of the lost Franklin Expedition.
A PIECE OF PINE-WOOD
In his report to the Royal Geographical Society the following April, Rae recalled how on the afternoon of 21 August 1851 the search party had “proceeded but a short distance when a piece of pine-wood was picked up which excited much interest.
“In appearance it resembled the butt end of a small flag-staff; was 5 feet 9 inches in length, and round except 12 inches at the lower end, which was a square of 2 ½ inches. It had a curious mark, resembling this (s c), apparently stamped on one side, and at 2 ½ feet distance from the step there was a bit of white line in the form of a loop nailed on it with two copper tacks. Both the line and the tacks bore the Government mark, the broad arrow being stamped on the latter.”
TWO MISSING SHIPS, 129 MISSING MEN
This far above the tree line, any searcher would rush to investigate any sighting of wood. And John Rae was looking for two missing ships and 129 missing men. This would have been an electrifying sight.
And it was only the beginning. He soon spotted something else close to the foreshore.
“I had not finished writing down the foregoing description, and we had not advanced more than half a mile, when another piece of wood was found lying in the water, but touching the beach.
“This was of oak, 3 feet 8 inches long. The lower part, to the height of 18 inches was a square of 3 ½ inches. Half of the square to the length of 6 inches at the end was cut off, apparently to fit into a clasp or band of iron, as there was a rust mark 3 inches broad across it.
“The remaining part of the stanchion (as I suppose it to have been) had been formed in a turning-lathe, and was 3 inches in diameter, with a hole about an inch wide in the upper end, or I should say part of a hole, as one side had been torn off. An iron chain had evidently been passed through this hole, as there were the marks both of the links and of rust.”
The stanchion did not bear the mark of the broad arrow on it. But Rae’s gut feeling was that both had come from a Royal Navy ship, possibly Erebus or Terror. And in this, their location was as critical as their appearance.
Rae writes: “The position where they were found is in Lat. 68 deg 52’ N., Long. 103 deg 20’W. As there may be a difference of opinion regarding the direction from which these pieces of wood came, it may not be out of place here to express what I think on the subject.
IMMENSE QUANTITIES OF ICE
“From the circumstances of the flood tide coming from the northward along the east shore of Victoria Land, there can be no doubt but there is a wide water channel dividing Victoria Land from North Somerset, and through this channel I believe these pieces of wood have been carried with the immense quantities of ice that a long continuance of Northerly and North-easterly wind and the flood tide had driven southward.
“The ebb-tide not having the power to carry it back again against the wind, the large bay on the main shore of America immediately south of Victoria Strait, became perfectly full of ice, even up to the south shores of Victoria Land.
“Both pieces of wood seem to have come to the shore about the same time, and they must have been carried in by the flood tide that was then rising, or during the previous ebb, for the simple reason that although they were touching the beach they did not rest upon it. Had they come in by a previous flood they would have been at high water mark instead of some distance below it.”
THE SEARCH GOES ON
Pragmatic Rae, veteran of countless treks across the Arctic landscape, would not have considered adding such unwieldy items to his already heavy load as he continued his search. So he made a detailed sketch of the stanchion, took small samples of the wood from both items, and continued on his journey.
When Rae’s 1851 search ended, the sketch and the samples of the flag-staff and stanchion were dutifully handed over to the Admiralty for investigation.
The mystery objects never left Victoria Land. But we have the sketch, which has been preserved among the Admiralty papers held in The National Archives.
There, Rae’s draughtsmanship allows us to picture the stanchion as it was when he first saw it lying in the freezing water with one end nudging the beach, having travelled far along an uncharted Strait from a ship as yet unknown.
One thought on “John Rae’s sketch of possible Erebus or Terror ship part”
Re the latitude and longitude where Rae saw these items, I looked at the map and wondered. That area is to the East of Cambridge Bay and near where Collinson searched.
Collinson’s expedition discovered a wooden door and took it back to England.
Certainly, the current of ice moved West from KWI.
|
<urn:uuid:ed9ab38c-c974-4816-b758-295a7c2d057c>
|
CC-MAIN-2023-23
|
https://finger-post.blog/2020/02/08/found-john-rae-sketch/
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224648911.0/warc/CC-MAIN-20230603000901-20230603030901-00335.warc.gz
|
en
| 0.978469 | 1,240 | 3.03125 | 3 |
Ox Cart Angel’s main character, Claire Dumont, comes from the town of Pembina, now part of North Dakota. In 1862, when the novel takes place, however, Pembina was part of Dakota Territory, which took up South and North Dakota, as well as much of what are now the states of Montana and Wyoming. The Dakota Territory was formed in 1861, only a year before the events in the novel take place, so for most of Claire’s life, Pembina was part of the Minnesota Territory. It’s a bit confusing, I know. However, despite which country, territory or state Pembina used to belong to (Canada once even thought it belonged to them in the early 1800s), it is still in the same place it always was: hugging the Red River and snug right up against the present-day Minnesota border on its eastern side, and almost touching Canada on its northern side. In fact, the Pembina State Museum has an observation tower from which you can see Canada.
It’s a small town, its population under 600 in 2010, but what it lacks in size, it certainly makes up for in its importance to our country’s history. I hope to make it up there this next summer. I’m pretty sure I’ll be able to find it. If I get lost on the way, I can always listen for and follow the ghostly squeaking of ox cart wheels that once plied the area. Surely they can still be heard if you listen hard enough.
* * * * *
|
<urn:uuid:f77505a3-9d7c-409b-bd1e-3b57fced66e1>
|
CC-MAIN-2017-47
|
http://oxcartangel.blogspot.com/2011/12/where-heck-is-pembina.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934807344.89/warc/CC-MAIN-20171124085059-20171124105059-00246.warc.gz
|
en
| 0.970912 | 327 | 3.078125 | 3 |
By Marcos Andre Goncalves, Rao Shen
In 1991, a group of researchers chose the term digital libraries to describe an emerging field of research, development, and practice. Since then, Virginia Tech has had funded research in this area, largely through its Digital Library Research Laboratory. This book is the first in a four-book series that reports our key findings and current research investigations. Underlying this book series are six completed dissertations (Gonçalves, Kozievitch, Leidig, Murthy, Shen, Torres), eight dissertations underway, and many masters theses. These reflect our experience with a long string of prototype or production systems developed in the lab, such as CITIDEL, CODER, CTRnet, Ensemble, ETANA, ETD-db, MARIAN, and Open Digital Libraries. There are hundreds of related publications, presentations, tutorials, and reports. We have built upon that work so that this book, and the others in the series, will address digital library related needs in many computer science, information science, and library science (e.g., LIS) classes, as well as the needs of researchers, developers, and practitioners.
Much of the early work in the digital library field struck a balance between addressing real-world needs, integrating methods from related areas, and advancing an ever-expanding research agenda. Our work has fit in with these trends, but simultaneously has been driven by a desire to provide a firm conceptual and formal basis for the field. Our goal has been to move from engineering to science. We claim that our 5S (Societies, Scenarios, Spaces, Structures, Streams) framework, discussed in publications dating back to at least 1998, provides a suitable basis. This book introduces 5S, and the key theoretical and formal aspects of the 5S framework.
While the 5S framework may be used to describe many types of information systems, and is likely to have even broader applicability and appeal, we focus here on digital libraries. Our view of digital libraries is broad, so further generalization should be straightforward. We have connected with related fields, including hypertext/hypermedia, information storage and retrieval, knowledge management, machine learning, multimedia, personal information management, and Web 2.0. Applications have included managing not only publications, but also archaeological information, educational resources, fish images, scientific datasets, and scientific experiments/simulations.
Table of Contents: Introduction / Exploration / Mathematical Preliminaries / Minimal Digital Library / Archaeological Digital Libraries / 5S Results: Lemmas, Proofs, and 5SSuite / Glossary / Bibliography / Authors' Biographies / Index
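As a loose illustration only – this is not the book's formal model, and the field names and example values below are invented for the sketch – the five S's can be pictured as the top-level ingredients of a digital library description:

```python
from dataclasses import dataclass, field
from typing import Dict, List

# Rough sketch of the five S's (Societies, Scenarios, Spaces, Structures,
# Streams). The book develops these formally; this sketch does not.
@dataclass
class DigitalLibrarySketch:
    streams: List[bytes] = field(default_factory=list)              # raw content: text, images, video
    structures: Dict[str, List[str]] = field(default_factory=dict)  # organization: catalogs, metadata, links
    spaces: List[str] = field(default_factory=list)                 # e.g. vector or browsing spaces used for retrieval
    scenarios: List[str] = field(default_factory=list)              # services and workflows: search, browse, annotate
    societies: List[str] = field(default_factory=list)              # actors: patrons, librarians, software agents

dl = DigitalLibrarySketch(
    streams=[b"an electronic thesis stored as a byte stream"],
    structures={"catalog": ["author", "title", "year"]},
    spaces=["tf-idf vector space over the collection"],
    scenarios=["search", "browse", "recommend"],
    societies=["students", "librarians", "harvesting agents"],
)
```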
|
<urn:uuid:a0050d9e-9752-497c-a753-0a2fe68c062b>
|
CC-MAIN-2017-39
|
http://memified.com/index.php/ebooks/category/library-management/page/3
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818685993.12/warc/CC-MAIN-20170919183419-20170919203419-00702.warc.gz
|
en
| 0.911883 | 587 | 2.53125 | 3 |
August 30, 2014
Recent successes in string theory have included a microscopic computation of the Bekenstein-Hawking entropy of black holes, and the discovery of holographic dualities between gauge theories and gravity. String theory is, however, still far from fully understood at a fundamental level; even the nature of the underlying degrees of freedom remains elusive. The emphasis of this workshop will be on combining the resources of physicists and mathematicians to better explore and interpret gravitational aspects of string theory.
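For reference, the Bekenstein-Hawking entropy mentioned here is the standard formula relating a black hole's entropy to the area A of its event horizon (this is textbook material rather than anything specific to the workshop):

```latex
S_{\mathrm{BH}} \;=\; \frac{k_{\mathrm{B}}\, c^{3} A}{4 G \hbar}
            \;=\; \frac{k_{\mathrm{B}}\, A}{4 \ell_{\mathrm{P}}^{2}},
\qquad
\ell_{\mathrm{P}} \;=\; \sqrt{\frac{G \hbar}{c^{3}}}
```

Here ℓ_P is the Planck length; the microscopic computations referred to above reproduce this value by counting the quantum states of certain black holes.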
Specific topics of interest for this focused workshop may include the role of black holes, the stringy resolution of curvature singularities (particularly those of a spacelike and cosmological nature) and of regions of bad chronology, warped compactifications and the route to constructing realistic cosmological models, the cosmological dark energy problem, gravity/gauge dualities, non-commutative and matrix theories, and holography.
To receive updates on the String Theory program please subscribe to
our mailing list at: www.fields.utoronto.ca/maillist
|
<urn:uuid:fb871e1b-2882-4e38-b898-9d0437e27ade>
|
CC-MAIN-2014-35
|
http://www.fields.utoronto.ca/programs/scientific/04-05/string-theory/gravitational/
|
s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500835699.86/warc/CC-MAIN-20140820021355-00233-ip-10-180-136-8.ec2.internal.warc.gz
|
en
| 0.846832 | 219 | 2.609375 | 3 |
Flooding and Flood Risks
What is a Flood?
Anywhere it rains, it can flood. A flood is a general and temporary condition where two or more acres of normally dry land or two or more properties are inundated by water or mudflow. Many conditions can result in a flood: hurricanes, broken levees, outdated or clogged drainage systems and rapid accumulation of rainfall.
Just because you haven't experienced a flood in the past, doesn't mean you won't in the future. Flood risk isn't just based on history, it's also based on a number of factors: rainfall, river-flow and tidal-surge data, topography, flood-control measures, and changes due to building and development.
Flood-hazard maps have been created to show different degrees of risk for your community, which help determine the cost of flood insurance. The lower the degree of risk, the lower the flood insurance premium.
A Flood for All Seasons
Flooding can happen any time of year. Below are some of the more frequent causes.
Tropical Storms and Hurricanes
Hurricanes pack a triple punch: high winds, soaking rain, and flying debris. They can cause storm surges to coastal areas, as well as create heavy rainfall which in turn causes flooding hundreds of miles inland. While all coastal areas are at risk, certain cities are particularly vulnerable and could have losses similar to or even greater than those caused by Hurricane Katrina in New Orleans and Mississippi in 2005.
When hurricanes weaken into tropical storms, they generate rainfall and flooding that can be especially damaging since the rain collects in one place. In 2001, Tropical Storm Allison produced more than 30 inches of rainfall in Houston in just a few days, flooding over 70,000 houses and destroying 2,744 homes.
Spring Thaw

During the spring, frozen land prevents melting snow or rainfall from seeping into the ground. Each cubic foot of compacted snow contains gallons of water and once the snow melts, it can result in the overflow of streams, rivers, and lakes. Add spring storms to that and the result is often serious spring flooding.
Heavy Rains

Several areas of the country are at heightened risk for flooding due to heavy rains. The Northwest is at high risk due to snow melts, heavy rains, and recent wildfires. And the Northeast is at high risk due to heavy rains produced by Nor'easters.
This excessive amount of rainfall can happen throughout the year, putting your property at risk.
West Coast Threats
The West Coast rainy season usually lasts from November to April, bringing heavy flooding and increased flood risks with it; however, flooding can happen at anytime.
A string of large wildfires have dramatically changed the landscape and ground conditions, causing fire-scorched land to become mudflows under heavy rain. Experts say that it might take years for vegetation to return, which will help stabilize these areas.
The West Coast also has thousands of miles of levees, which are meant to help protect homes and their land in case of a flood. However, levees can erode, weaken, or overtop when waters rise, often causing catastrophic results.
Levees & Dams
Levees are designed to protect against a certain level of flooding. However, levees can and do decay over time, making maintenance a serious challenge. Levees can also be overtopped, or even fail during large floods, creating more damage than if the levee wasn't even there.
Because of the escalating flood risks in areas with levees, especially in the mid-west, FEMA strongly recommends flood insurance for all homeowners in these areas.
Flash Floods

Flash floods are the #1 weather-related killer in the U.S. since they can roll boulders, tear out trees, and destroy buildings and bridges. A flash flood is a rapid flooding of low-lying areas in less than six hours, which is caused by intense rainfall from a thunderstorm or several thunderstorms. Flash floods can also occur from the collapse of a man-made structure or ice dam.
New Development

Construction and development can change the natural drainage and create brand new flood risks. That's because new buildings, parking lots, and roads mean less land to absorb excess precipitation from heavy rains, hurricanes, and tropical storms.
Understanding Flood Areas
Flooding can happen anywhere, but certain areas are especially prone to serious flooding. To help communities understand their risk, flood maps (Flood Insurance Rate Maps, FIRMs) have been created to show the locations of high-risk, moderate-to-low risk, and undetermined-risk areas. Here are the definitions for each:
High-risk areas (Special Flood Hazard Area or SFHA)
High-risk areas have at least a 1% annual chance of flooding, which equates to a 26% chance of flooding over the life of a 30-year mortgage. All homeowners in these areas with mortgages from federally regulated or insured lenders are required to buy flood insurance. They are shown on the flood maps as zones labeled with the letters A or V.
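The 26% figure follows from compounding the 1% annual chance over the 30 years of a mortgage, assuming each year is independent. A quick illustrative check (not part of FEMA's guidance):

```python
# Chance of at least one flood over a 30-year mortgage, given a 1% annual
# chance of flooding and assuming the years are independent.
annual_chance = 0.01
years = 30

p_no_flood = (1 - annual_chance) ** years   # about 0.74
p_at_least_one = 1 - p_no_flood             # about 0.26

print(f"{p_at_least_one:.0%}")              # prints "26%"
```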
Moderate-to-low risk areas (Non-Special Flood Hazard Area or NSFHA)
In moderate-to-low risk areas, the risk of being flooded is reduced, but not completely removed. These areas are outside the 1% annual flood-risk floodplain areas, so flood insurance isn't required, but it is recommended for all property owners and renters. They are shown on flood maps as zones labeled with the letters B, C or X (or a shaded X).
Undetermined-risk areas

No flood-hazard analysis has been conducted in these areas, but a flood risk still exists. Flood insurance rates reflect the uncertainty of the flood risk. These areas are labeled with the letter D on the flood maps.
Determining the Risk
To identify a community's flood risk, FEMA conducts a Flood Insurance Study. The study includes statistical data for river flow, storm tides, hydrologic/hydraulic analyses, and rainfall and topographic surveys. FEMA uses this data to create the flood hazard maps that outline your community's different flood risk areas.
Floodplains and areas subject to coastal storm surge are shown as high-risk areas or Special Flood Hazard Areas (SFHAs). Some parts of floodplains may experience frequent flooding while others are only affected by severe storms. However, areas directly outside of these high-risk areas may also find themselves at considerable risk.
Understanding Your Area
Changing weather patterns, erosion, and development can affect floodplain boundaries. FEMA is currently updating and modernizing the nation's Flood Insurance Rate Maps (FIRMs). These digital flood hazard maps provide an official depiction of flood hazards for each community and for properties located within it.
FEMA has published almost 100,000 individual Flood Insurance Rate Maps (FIRMs). See your map and learn how to read it so you can make informed decisions about protecting your property, both financially and structurally.
Flood Preparation & Recovery
Being prepared for a flood can not only help keep your family safe, it can also help minimize potential flood damage and accelerate recovery efforts.
Along with flood insurance, you can also protect yourself by safeguarding your home and possessions, developing a family emergency plan, and understanding your policy.
Learn how to deal with a flood, both before and after it happens, right now.
Before a Flood:
After getting flood insurance, there are several things you can do to minimize losses in your home and ensure your family's safety.
1. Safeguard your possessions.
Create a personal file containing information about all your possessions and keep it in a secure place, such as a safe deposit box or waterproof container. This file should have:
- A copy of your insurance policies with your agent's contact information.
- A room-by-room inventory of your possessions, including receipts, photos, and videos.
- Copies of all other critical documents, including finance records or receipts of major purchases.
2. Prepare your house.
- First make sure your sump pump is working and then install a battery-operated backup, in case of a power failure. Installing a water alarm will also let you know if water is accumulating in your basement.
- Clear debris from gutters and downspouts.
- Anchor any fuel tanks.
- Raise your electrical components (switches, sockets, circuit breakers, and wiring) at least 12 inches above your projected flood elevation.
- Place the furnace, water heater, washer, and dryer on cement blocks at least 12 inches above the projected flood elevation.
- Move furniture, valuables, and important documents to a safe place.
3. Develop a family emergency plan.
- Create a safety kit with drinking water, canned food, first aid, blankets, a radio, and a flashlight.
- Post emergency telephone numbers by the phone and teach your children how to dial 911.
- Plan and practice a flood evacuation route with your family. Know safe routes from home, work, and school that are on higher ground.
- Ask an out-of-state relative or friend to be your emergency family contact.
- Have a plan to protect your pets.
During a Flood:
Protect Yourself and Your Home
Here's what you can do to stay safe during a flood:
- If flooding occurs, go to higher ground and avoid areas subject to flooding.
- Do not attempt to walk across flowing streams or drive through flooded roadways.
- If water rises in your home before you evacuate, go to the top floor, attic, or roof.
- Listen to a battery-operated radio for the latest storm information.
- Turn off all utilities at the main power switch and close the main gas valve if advised to do so.
- If you've come in contact with floodwaters, wash your hands with soap and disinfected water.
After a Flood:
The Road to Recovery
As soon as floodwater levels have dropped, it's time to start the recovery process. Here's what you can do to begin restoring your home.
- If your home has suffered damage, call your insurance agent to file a claim.
- Check for structural damage before re-entering your home to avoid being trapped in a building collapse.
- Take photos of any floodwater in your home and save any damaged personal property.
- Make a list of damaged or lost items and include their purchase date and value with receipts. Some damaged items may require disposal, so keep photographs of these items.
- Keep power off until an electrician has inspected your system for safety.
- Boil water for drinking and food preparation until authorities tell you that your water supply is safe.
- Prevent mold by removing wet contents immediately.
- Wear gloves and boots to clean and disinfect. Wet items should be cleaned with a pine-oil cleanser and bleach, completely dried, and monitored for several days for any fungal growth and odors.
Additional Flood Preparation and Information:
- Flood Safety Tips and Resources
- New York Extension Disaster Education Network (NY EDEN) - Floods
- Repairing Your Flooded Home (.pdf)
- Flood Safety Tips
- Flood Safety Social Media Toolkit
- Turn Around Don't Drown, Flood Safety
- Safety Tips for Flooding
- Flood Tips
- Flood Basics and Safety
- Flooding - Iowa State University
- Emergency Planning and Preparedness - Flash Flood
By: Jim Olenbush
|
<urn:uuid:0315d8f9-933e-47bf-b3e5-289d664d732c>
|
CC-MAIN-2020-05
|
https://www.austinrealestate.com/flooding-and-flood-risks.php
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250603761.28/warc/CC-MAIN-20200121103642-20200121132642-00225.warc.gz
|
en
| 0.936243 | 2,358 | 3.5 | 4 |
by Ellen Weber and Jeremy Myers
Throughout this summer as we have gathered folks together around our work, the text from Ezekiel 47 continues to be a way to ground us before we begin. As our work shifts, taking time to remember these words re-grounds us in why public church matters through Ezekiel’s vision of God’s abundance.
Ezekiel’s Vision (Ezekiel 47:1–12, NRSV)
1 Then he brought me back to the entrance of the temple; there, water was flowing from below the threshold of the temple towards the east (for the temple faced east); and the water was flowing down from below the south end of the threshold of the temple, south of the altar. 2 Then he brought me out by way of the north gate, and led me round on the outside to the outer gate that faces towards the east; and the water was coming out on the south side.
3 Going on eastwards with a cord in his hand, the man measured one thousand cubits, and then led me through the water; and it was ankle-deep. 4 Again he measured one thousand, and led me through the water; and it was knee-deep. Again he measured one thousand, and led me through the water; and it was up to the waist. 5 Again he measured one thousand, and it was a river that I could not cross, for the water had risen; it was deep enough to swim in, a river that could not be crossed. 6 He said to me, ‘Mortal, have you seen this?’
Then he led me back along the bank of the river. 7 As I came back, I saw on the bank of the river a great many trees on one side and on the other. 8 He said to me, ‘This water flows towards the eastern region and goes down into the Arabah; and when it enters the sea, the sea of stagnant waters, the water will become fresh. 9 Wherever the river goes, every living creature that swarms will live, and there will be very many fish, once these waters reach there. It will become fresh; and everything will live where the river goes. 10 People will stand fishing beside the sea from En-gedi to En-eglaim; it will be a place for the spreading of nets; its fish will be of a great many kinds, like the fish of the Great Sea. 11 But its swamps and marshes will not become fresh; they are to be left for salt. 12 On the banks, on both sides of the river, there will grow all kinds of trees for food. Their leaves will not wither nor their fruit fail, but they will bear fresh fruit every month, because the water for them flows from the sanctuary. Their fruit will be for food, and their leaves for healing.’
Ezekiel’s vision becomes an invitation to follow God’s jubilee as it flows into the world and makes everything live where it flows. The Public Church Framework (below) provides faith communities with a way to do this, to become blessings for the entire land on which they are rooted rather than existing to serve their own purpose. We are Ezekiel, following the enigmatic divine tour guide along the river as we learn to see the breadth and depth of God’s love flowing away from the temple and into the world.
- Accompaniment: Mortal, Have You Seen This? (vs. 1–6a) — The river flows out from the temple and towards the desolate places. We are called out of our temples and our comfort zones to follow this river and to stop and notice how wide and deep it becomes. As we hear our neighbors’ stories, we become aware of how God’s deep and wide love and mercy are at work in their lives. We learn to hear and see so that when we are asked this question – Mortal, have you seen this? – we can answer with a yes. Accompaniment is the practice of learning to see and hear God’s love bringing life to our world.
- Interpretation: The Water Will Become Fresh (vs. 6b–8) — As the jubilee river flows it brings fresh water into salt water. This fresh water desalinates the salt water and makes it fresh. The jubilee water dwells in, with, and under the salt water and makes it able to support and create life. The same happens to us as the stream of God’s story flows into the streams of our stories and our neighbors’ stories. God’s story begins to dwell in, with, and under our stories and our realities. This brings hope to stories that were at one time hopeless. Interpretation is the practice of learning how God’s promises (the fresh water) change the way we look at suffering in our world (salt water) and how those sufferings change the way we look at God’s promises.
- Discernment: Fishing and Spreading Nets (v. 9–11)— The living water brings about diversity and abundance. The fishing is good along this riverside. We have now seen the fullness of this river and we now have some choices to make. Is it time to fish? Is it time to dry our nets? Is this a place to fish? Is this a place to gather salt? There is work to be done along this riverside and we are invited and equipped to do it. Discernment is the practice of learning to hear God’s call and to know when, where, how and why to act on that call.
- Proclamation: Fruit for Food, Leaves for Healing (v. 12) — Ezekiel walks the riverside and notices the trees on both sides of the river and the harvest they produce. The trees are growing fruit for food and leaves for healing. The gifts of these trees create a future for God’s people. These trees do not only produce seeds that ensure the future of the trees themselves, they produce leaves and fruit for the world. Proclamation is the practice of producing and presenting our world with our gifts for the sake of the world, not for the sake of our own propagation. Christian faith communities re-engage their neighborhoods with fruit for food and leaves for healing — gifts to be given away that create a future for God’s people.
God’s creative, life-giving, jubilee river flows out from the temple and into the world. Our call is not to dam up the river and keep it in the temple. Our call is not to expect our neighbors to come to the temple to experience the life giving water of the river. Our call is to follow the river as it deepens and widens and makes all things live. As we learn to do this — to see, to fish, to spread nets, to grow and harvest fruit for food and leaves for healing — we find ourselves in the midst of innovation. Of co-creating a future for God’s world with God and our neighbor along the riverside. People are not looking for the temple, but they surely are seeking what they can find at the riverside. They are looking for others who are eager to bring the fruit for food and the leaves for healing to their neighbors.
What would it look like for you to follow the river of God’s living water out into the neighborhood? Who are the guides that might accompany you on that journey? What might happen? Come join us and find out.
|
<urn:uuid:23288d02-34eb-4acd-92eb-6a0b46c4b9c1>
|
CC-MAIN-2023-14
|
https://www.augsburg.edu/ccv/2022/08/04/ezekiel-and-the-public-church-everything-will-live-where-the-river-goes-2/
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948932.75/warc/CC-MAIN-20230329023546-20230329053546-00769.warc.gz
|
en
| 0.969751 | 1,558 | 2.59375 | 3 |
The birds are chirping, the flowers are blooming, and citizen scientists… do citizen science! In case you’re not familiar with the concept, citizen science projects are activities supported (or sponsored) by universities, organizations, institutes or governments through which anyone can make meaningful scientific contributions. Activities vary greatly (from counting birds to analyzing galaxy clusters), and you can do them outside, in nature, or at home on your computer – some are actually quite fun to do while riding the subway. Right now, there’s a huge number of projects you – and anyone else interested in science – can participate in, so I’ll be sharing just a few we found interesting and fun.
Yep, citizen science is not only interesting and useful – it’s often times fun! Spring projects are usually the most active and interesting ones…so, here goes:
iSeeChange | Get involved [US]
This project combines citizen science with participatory public media and cutting edge satellite monitoring data. The goal is to go outside and share observations about the environment and monitor any change that takes place in your area. You can then go on social media and compare your results with your neighbors. This spring, the iSeeChange team is expanding its crowdsourced reporting platform, the iSeeChange Almanac, but unfortunately, it’s only available in the US.
Project BudBurst | Get involved [Worldwide]
Every plant tells a story about our changing climate. Through project BudBurst, you can share valuable ecological data – namely when trees and other plants in your area shed their leaves, flower and fruit. To participate, all you need to do is go outside, look at the plants, and write what you find. It’s interesting, you’ll learn a lot about plant lifecycles, plant species, and how plants are reacting to climate change (for example – are they blooming any sooner?). If you’re a professor or an educator, you can also engage the kids – it should make for a fantastic and interactive lesson! The scientific benefits are also huge.
Discover drugs from your soil | Get involved [US… I think]
The next generation of medical drugs can come from the soil in your back yard! Creating a new medicine is a team effort involving scientist and medical professionals from a wide variety of fields – but we often forget just how little of the natural world we’ve actually studied and understood. Through this project, people from all around the US can help investigators by sending them soil samples from their own backyards, which investigators can then analyze. Natural products are the source of many lifesaving drugs that are used today by doctors around the world – you can contribute to tomorrow’s cures!
ZomBeeWatch | Get Involved [North America]
OK, this one’s a bit trickier. The project involves tracking the honey bee parasite Apocephalus borealis and its spread throughout the continent. So what you have to do is put on a pair of gloves, maybe set up a light trap, and carefully capture a bee, see if it’s infected (the signs are really easy to figure out), and then let it go and report what you found. The project’s website does a good job at explaining how you can do this without getting hurt and without harming the bee. Not really recommended for children, but it’s a fun project, and with the many problems bees are facing these days – it’s extremely useful.
Nature’s Notebook | Get Involved [Worldwide]
Nature’s Notebook is a national plant and animal phenology observation program. Phenology is basically a fancy way of saying “plant and animal life cycle events”. Changes in climate are affecting plant and animal activity across, so scientists want to see how animals and plants are reacting – much like project BudBurst, except it’s also about birds and animals. Excellent for animal observations.
Snapshots in Time | Get Involved
Do you like photography and salamanders? Well… do you like photography and think salamanders are… OK? Then this might be the thing for you! Determining changes in the timing of breeding is very important, not just for these species, but also for others that use the same habitat. For this reason, biologists need people to go outdoors and take pictures of the Spotted Salamander (Ambystoma maculatum) and the Wood Frog (Lithobates sylvaticus). This effort will focus on populations found throughout their range in North America, but it’s available everywhere there’s a population of these two species. Both species are fairly distinctive, and you can see images and instructions on the Snapshots in Time website.
Age Guess | Get Involved [Worldwide]
If you don’t want to get out of the house, and also want to hone your age-guessing skills, this is the project for you. AgeGuess investigates the differences between perceived age (how old you look) and chronological age (your age) as a potential aging biomarker. Basically, you look at pictures of other people and guess how old they are, and if you want, you can also upload pictures of you and have other people guess your age.
Cities at night | Get Involved [Worldwide]
Astronauts have taken numerous pictures from the ISS of cities at night, in an attempt to study light pollution. The problem is that the pictures are taken automatically, and scientists don’t know exactly what they are looking at – so it’s up to us to identify the cities and locations and tell them!
Big butterfly count | Get Involved [UK]
You can’t really talk about citizen science without counting butterflies now can you? There are several butterfly counting projects out there, I’m just picking one of the good ones here. What you do here is go outside, in a park or a garden or just somewhere in a natural setting, and spot the butterflies and moths for some 15 minutes. Over 44,000 people took part in 2014, counting almost 560,000 individual butterflies and day-flying moths across the UK. This particular project is aimed at the UK, but there are other ones for other areas of the world. Another butterfly count project (more aimed at North America) is eButterfly.
Disk detective | Get Involved [Worldwide]
Several teams of astronomers are trying to identify dusty debris disks, similar to our own asteroid belt. These disks suggest that these stars are in the early stages of forming planetary systems, and learning more about them could help us understand more about our own solar system. The problem is that computers often confuse debris disks around stars with other astronomical objects. This project launched by NASA is a call for volunteers to help – you can be an amateur astronomer, from your own home!
Eyewire | Get Involved [Worldwide]
Throughout everything that we’ve discovered on our planet and in the Universe, the human brain remains one of the biggest mysteries. Eyewire wants you to help scientists map the human brain, through a fun and easy to use interface. Upon registering, players are automatically directed through a tutorial that explains the game. You are given a cube with a partially reconstructed neuron branch stretching through it, and you color in cross sections to generate 3D reconstructions, branch by branch. Multiple players work on the same cube, and advanced players then oversee the work.
Explore the seafloor | Get Involved [Worldwide]
Are you ready to map the seafloor from your own living room, or while you’re on the subway? Explore the seafloor provides you with tutorials and introductory images, and then you take a look at spectacular underwater pictures and tag what you see. You’ll take a look at great pictures, learn about underwater habitats and help scientists – the perfect mix.
Orchive | Get Involved [Worldwide]
Listen to the whale songs – namely, orca songs. Orcas are some of the most remarkable predators on the face of the Earth. The goal of the Orchive project is to digitize acoustic data that have been collected over a period of 36 years. The acoustic data have been recorded using a variety of analog media at the research station OrcaLab, and they currently have over 20,000 hours of orca songs. You can help digitize them. It gets a bit dull after a while, but for the first part, I actually found it extremely interesting.
CosmoQuest | Get Involved [Worldwide]
Map the craters of Mercury, but also of the Moon and Vesta. There is an interactive tutorial that will show you what to do and how to do it. It’s easy to do, and you get to look at pictures of the Moon, Mercury and Vesta.
Dear Professor Einstein | Get Involved [Worldwide]
Albert Einstein was certainly the most influential scientist in modern times; but he wasn’t only a brilliant mind, he was also very vocal about a number of issues. He left behind a hefty hand-written correspondence, and you can read it and transcribe it. Speaking of transcribing…
Ancient Lives | Get Involved [Worldwide]
For more than a century researchers have been unearthing known and unknown literary texts as well as the private documents and letters that could improve their understanding of the ancient lives of Graeco-Roman Egypt. Yet many of these papyri have remained unstudied – due to a lack of resources. This is where you step in. You can learn to read, understand and ultimately translate papyri from Egypt. Impress your friends, and learn a foreign language… sort of.
Be A Smithsonian Archaeology Volunteer | Get Involved [Local to the Smithsonian]
If you’ve ever dreamed of becoming an archaeologist… this is an amazing opportunity. Join the Smithsonian Environmental Research Center (SERC) Archaeology Lab for excavating two sites. You can volunteer for one or several days, and no previous experience is required, as on-site training will be provided. This opportunity is suitable for families with older children (13+ directly supervised by a parent/guardian, 16+ may be able to work without having a parent/guardian present) and groups.
Milky Way Project | Get Involved [Worldwide]
Basically, astronomers have tens of thousands of images gathered with the Spitzer Space Telescope – and they could use some help. Just look at the images and tell researchers what you see in infrared data.
So, these are just some of the citizen science projects you too can get involved in! Citizen science is all about learning interesting things, having fun, and helping scientists better understand our planet – and the universe! Some are great weekend activities, some are great to do while on the subway or on the bus – get your kids involved, tell your family, have butterfly scouting competitions in the park or count the sea urchins on the bottom of the seafloor. Or if you feel more daring, step outside our planet and take a look at other planets and galaxies – everybody can now access these amazing opportunities. So, which one (or which ones) will you embark on?
If these are still not enough for you, you can find more projects on Wikipedia, SciStarter and Scientific American.
|
<urn:uuid:892a7935-bd70-4bde-86a5-f3ab4c8cbf96>
|
CC-MAIN-2023-23
|
https://www.zmescience.com/science/spring-is-the-season-for-citizen-science-what-you-can-do-to-have-fun-and-help-science/
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224647810.28/warc/CC-MAIN-20230601110845-20230601140845-00419.warc.gz
|
en
| 0.943887 | 2,366 | 3.453125 | 3 |
H. PANICULATA.—Japan, 1874. This is one of the most distinct species, in which the flower-heads are elongated, not flat, as in most other species, and from which the finest form in cultivation has been obtained. This is H. paniculata grandiflora, in which the flowers are sterile and pure white, forming large panicles often a foot in length. It is a magnificent variety, and, being perfectly hardy, should be extensively planted for ornament. The flowers are produced in late summer, but remain in good form for fully two months, dying off a rich reddish hue.
H. QUERCIFOLIA.—Oak-leaved Hydrangea. Florida, 1803. This species has neatly lobed leaves, and terminal panicles of pinky-white, but partially barren, flowers.
H. SCANDENS.—Climbing Hydrangea. Japan, 1879. This is not very hardy, but with the protection of a sunny wall it grows freely.
The Hydrangeas require a rich, loamy soil, and, unless in maritime districts, a warm and sheltered situation. They are readily propagated by means of cuttings.
HYMENANTHERA CRASSIFOLIA.—A curious New Zealand shrub with rigid ashy-coloured branches, and small leathery leaves. The flowers are violet-like in colour, but by no means conspicuous. The small white berries which succeed the flowers are, in autumn, particularly attractive, and very ornamental. It is perfectly hardy and of free growth in light peaty earth.
HYPERICUM ANDROSAEMUM.—Tutsan, or Sweet Amber. Europe (Britain). A pretty native species, growing about 2 feet high, with ovate leaves having glandular dots and terminal clustered cymes of yellow flowers.
H. AUREUM.—South Carolina and Georgia, 1882. This soon forms a neat and handsome plant. The flowers are unusually large, and remarkable for the tufts of golden-yellow stamens with which they are furnished.
H. CALYCINUM.—Aaron’s Beard, or Rose of Sharon. South-east Europe. This is a well-known native species of shrubby growth, bearing large yellow flowers from 3 inches to 4 inches in diameter. It is a prostrate plant, with coriaceous glossy leaves with small pellucid dots, and of great value for planting in the shade.
H. ELATUM is a spreading species from North America (1762), growing to fully 4 feet in height, and bearing terminal corymbs of large, bright yellow flowers in July and August. Leaves rather large, oblong-ovate, and revolute. On account of its spreading rapidly from the root, this species requires to be planted where it will have plenty of room.
H. HIRCINUM.—Goat-scented St. John’s Wort. Mediterranean region, 1640. A small-growing and slender species, with oblong-lanceolate leaves 2 inches long, and producing small yellow flowers in terminal heads. There is a smaller growing form known as H. hircinum minus. The plant emits a peculiar goat-like odour.
|
<urn:uuid:980d76ea-185c-4c3a-8201-9fec3f953c15>
|
CC-MAIN-2017-34
|
http://www.bookrags.com/ebooks/10852/48.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886101966.48/warc/CC-MAIN-20170816125013-20170816145013-00194.warc.gz
|
en
| 0.926041 | 684 | 2.71875 | 3 |
With a detailed description of the assistant chemist responsibilities, you get to know what these individuals do at healthcare clinics, chemist shops, retail drug pharmacies, drugstore chains and pharmaceutical industries. Assistant chemists are also known as pharmacy assistants or pharmacy aides. Though the job title may differ, the designation remains the same and thus the job responsibilities are also similar for these individuals.
Pharmacy assistants or assistant chemists are employed to perform various administrative tasks. They work under the supervision of pharmacist or a pharmacy technician. In a healthcare clinic, these assistants are assigned with packing of medicines, antiseptics, and other inventory. They also maintain paperwork including patient insurance forms and admission records. They answer telephone calls and redirect the calls to the concerned patient or healthcare specialist.
Assistant chemists working in a chemist shop retrieve and pack medicines, prepare invoice, receive and arrange new stock, and dispose expired or damaged medicines.
On the other hand, an assistant chemist in a pharmaceutical industry has to mix ingredients, put it in machines, and pack the medicines.
Assistant chemist responsibilities are important for any of the organizations stated above because these individuals help the chemists or pharmacists in their routine activities.
As an assistant chemist, a person typically performs duties such as:
- Retrieving and packing medicines, antiseptics and other inventory
- Receiving and arranging new stock, and disposing of expired or damaged medicines
- Preparing invoices and maintaining paperwork, including patient insurance forms and admission records
- Answering telephone calls and redirecting them to the concerned patient or healthcare specialist
Employers prefer individuals with knowledge of basic computer applications and those skilled in communication and customer service. Familiarity with medical insurance documentation and billing is equally important for assistant chemists. An eye for detail and an inherent curiosity to learn about new medicines and products will help you to work more efficiently in this position. Assistant chemists need to be physically fit as they have to lift heavy boxes of stocks and arrange them on a regular basis.
A minimum of a high school diploma or an associate degree in any field will do if you want to work in this position. Even without work experience, you can get this job, as you will be trained on the job itself.
Assistant chemist is an entry-level job. You can learn many skills on the job and can also opt for a course to become a pharmacy technician in the future. Assistant chemists are an asset for any drugstore as they decrease the workload of the chemist, and they will be in demand in the future owing to an increase in the number of healthcare clinics and drugstores.
Assistant chemist responsibilities closely resemble the responsibilities of pharmacy technicians. These professionals are necessary for any chemist shop or a healthcare facility.
|
<urn:uuid:451ed066-c0f2-4d3e-8e25-bbe85c001c74>
|
CC-MAIN-2020-24
|
https://www.bestsampleresume.com/job-descriptions/assistant/assistant-chemist.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590348513321.91/warc/CC-MAIN-20200606124655-20200606154655-00562.warc.gz
|
en
| 0.94672 | 484 | 2.953125 | 3 |
Incorporate extra grit if your soil is poorly drained or if your plants require it – this is particularly important if you’re planting bulbs and alpines.
Add organic matter to improve the soil. High levels of organic matter are required by some plants but all will benefit.
Level off the site and break down large lumps of soil.
If you have not prepared the whole area dig a hole for the plant twice as wide and as deep as the original container and improve the soil as above. If the soil is low in nutrients – for example if your soil is sandy – add a light dressing of fertiliser in the planting hole and mix well. Add a further light top dressing after planting but not close to the stem. Do not use large amounts of fertiliser as this can damage plant roots.
Check the hole is large enough for the plant, ensuring all the roots will fit in when extended.
Soak the plant. Immersion for a short time is the best method – when bubbles stop coming to the surface, remove and allow the surplus water to drain away.
Remove the container and if the plant roots are dense gently loosen so they will branch out into new ground. Ideally, new roots will just be reaching the outside of the pot at planting time – in this case do not disturb at all.
Depth of planting is important: the plant should usually end up at the same depth in the ground as it was in the pot. Place a pole across the planting hole to enable you to judge this.
Planting depth can be important with some plants – for example plant peonies too deep and they will take years to come into flower. Plant clematis too shallow (the lower stem should be buried) and they are susceptible to clematis wilt. Check plant instructions for details on planting depth.
Water in well, thoroughly soaking the soil. For trees and shrubs, mound soil around the perimeter of the planting hole to retain water; fill this basin several times at planting and water daily thereafter for a few weeks. Trees and shrubs may need watering for up to a year. Tall trees may require staking – particularly in a windy site.
Finish off by mulching the ground with a 50 – 75mm (2 – 3in) layer of organic matter to discourage weeds and retain moisture. If you do not, you must scrupulously remove weeds. New weed seeds will germinate quickly, competing with your plants for water and food.
If the roots have been damaged (e.g. if you have moved the plant from another situation), balance the roots and the top of the plant by pruning. This will encourage the plant to grow away better. Many plants (never rhododendrons) benefit from pruning at planting time. See Pruning Tips for more information about this.
Tip: For alpines and plants susceptible to neck rots, mulch with grit immediately around the stems to keep dry.
|
<urn:uuid:8f9e2cd4-5384-4658-9b7b-f42fdcffdff3>
|
CC-MAIN-2020-16
|
http://www.evertonpark.org.uk/planting-tips/
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371829677.89/warc/CC-MAIN-20200409024535-20200409055035-00345.warc.gz
|
en
| 0.930964 | 600 | 2.703125 | 3 |
Side event at the 48th Session of the Human Rights Council: Digital technologies and human rights in the administration of justice
The use of digital technologies in the administration of justice has significantly increased over the recent years, a trend that has been further accelerated by COVID-19. Virtual courts, the use of algorithms and smart prisons have the potential to enhance efficiency and accessibility but may also carry adverse human rights impacts. This event will provide an overview of this emerging field and an opportunity to discuss the human rights implications, including concerns around the right to a fair trial, due process, non-discrimination, and equality and equal protection before the law.
Date: Thursday, 30 September 2021, 15.00-16.30 CEST (Geneva time)
Register at: https://us02web.zoom.us/webinar/register/WN_NqIyx_DdTlSDp6c0DGd8TQ
You can find the agenda here.
Recent years have seen rapid advances in the development and use of digital technologies in various aspects of criminal justice systems across the globe. In addition, the COVID-19 pandemic has prompted justice sector actors in many countries to increasingly turn to virtual solutions involving enhanced digital technology to facilitate criminal justice processes while reducing the risk of transmission. Virtual courts and remote hearings for criminal proceedings have been implemented in many countries, and the use of algorithms in criminal justice decision-making, including for sentencing and release decisions, is expanding. Digital communication technologies are already in use in many prisons for visitation, rehabilitation and health care provision, and some countries are testing ‘smart prisons’ which utilize biometric and digital technology to manage prison populations.
Digitalization and other technological advances have the potential to significantly enhance efficiency and accessibility in the administration of justice but may also carry adverse human rights impacts. The use of video conferencing technology to conduct criminal trials may infringe the rights of accused persons by effectively preventing them from presenting all evidence in their cases, for example marks of torture or other forms of ill-treatment. The use of algorithms carries the risk of perpetuating discrimination, including through profiling in the investigative phases of criminal justice. Furthermore, the digital divide may lead to a lack of equal protection of the law, particularly in countries where significant parts of the population live without access to the internet.
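One way such concerns are examined in practice is to compare how an automated risk score treats different groups of people. The sketch below is purely illustrative – the records, groups and threshold are invented assumptions, not a description of any real system or dataset:

```python
# Illustrative check of whether a risk-score threshold flags two groups at
# different rates among people who did not go on to reoffend (a comparison
# of false-positive rates). All numbers are made up.
records = [
    # (group, risk_score, reoffended)
    ("A", 0.82, False), ("A", 0.35, False), ("A", 0.91, True), ("A", 0.60, False),
    ("B", 0.40, False), ("B", 0.55, False), ("B", 0.88, True), ("B", 0.30, False),
]
THRESHOLD = 0.7  # scores above this are flagged as "high risk"

def false_positive_rate(group: str) -> float:
    did_not_reoffend = [r for r in records if r[0] == group and not r[2]]
    flagged = [r for r in did_not_reoffend if r[1] > THRESHOLD]
    return len(flagged) / len(did_not_reoffend) if did_not_reoffend else 0.0

for group in ("A", "B"):
    print(group, round(false_positive_rate(group), 2))
# A large gap between the two rates would be one signal of disparate impact.
```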
It is critical, therefore, that the design and use of such new technologies is undertaken with respect for the human rights of persons in contact with the justice system, including those deprived of their liberty. This event will provide an overview of this emerging field in the administration of justice and an opportunity to discuss the human rights implications, including concerns around the right to a fair trial, due process, non-discrimination, and equality and equal protection before the law.
|
<urn:uuid:3fc017d7-43ab-4e87-bfa4-6e8415fd885c>
|
CC-MAIN-2023-23
|
https://www.icj.org/side-event-at-the-48th-session-of-the-human-rights-council-digital-technologies-and-human-rights-in-the-administration-of-justice/
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224649177.24/warc/CC-MAIN-20230603064842-20230603094842-00298.warc.gz
|
en
| 0.925197 | 574 | 2.640625 | 3 |
Wondering how dental professionals repair damaged teeth? Dental veneers are one of the most common types of dental treatments used for those who have damaged teeth. According to WebMD, dental veneers are routinely used to fix teeth that are discolored, worn down, chipped, broken, misaligned, uneven, irregularly-shaped and gapped. Because dental veneers can be used to…
All About Dental Veneers
Dental veneers are also commonly known as porcelain veneers or dental laminates. Though dental veneers are quite common, not many people are aware of their applications, benefits and uses. So, read on to learn more about dental veneers in detail. To start with, what are dental veneers?
Dental veneers are an essential part of cosmetic dentistry. They are shell-like structures, usually custom-made from composite resin, porcelain or other tooth-colored materials, that are used to cover the front surface of a tooth to enhance its cosmetic appearance.
The dental veneers process
We usually bond the dental veneers to the front surface of the tooth using dental cement to attach the two parts together. With dental veneers, we can alter the size, length, and shape of the tooth.
While the primary purpose of dental veneers is to enhance the cosmetic appearance of the front surface of the teeth, we typically recommend dental veneers to cover up a discolored tooth after an extensive root canal treatment. In other cases, we will use dental veneers for stains from excessive use of fluoride or from medications like tetracycline.
Often, teeth also become discolored or change shape due to resin-based fillings. In those cases, dental veneers come in really handy to restore the perfect appearance of the teeth. Dental veneers are also extensively used and recommended to restore the appearance of broken or chipped teeth and uneven teeth (those with bulges or craters), and they can be applied to fill up the gap between two teeth. Dental veneers are also helpful in aligning misaligned teeth.
Materials and Type of Dental Veneers
We generally select or categorize dental veneers based on their material. Dental veneers can be broadly classified as:
Porcelain based dental veneers
Composite resin-based dental veneers, made from a material that resembles the natural color, texture and composition of original human teeth
Most of the time we prefer porcelain-based dental veneers due to their strong and resilient nature; they are usually more durable than their resin-based counterparts. On the other hand, they also come with some disadvantages: porcelain-based dental veneers take longer to install, have to be pre-ordered from the lab, cost more, require more extensive maintenance, and are hard to fix if something goes wrong.
By contrast, resin-based dental veneers can be prepared by us in our own office and take a single visit to install; they are easy to maintain and fix and, more importantly, budget-friendly, although they are not as strong and durable as the porcelain-based ones. Ultimately, we select the type of dental veneer based on the patient's diagnosis, treatment plan, and tooth condition.
A professional treatment for long-term use
In the United States, placing dental veneers can cost a patient anywhere from $500 to $2,000, and veneers usually last for 10-20 years. Just like any other dental restoration, veneers need their fair share of routine cleaning, care, and maintenance. Caring for dental veneers is much like caring for one's own natural teeth.
Daily brushing with an appropriate toothpaste recommended by the dentist, together with regular flossing, helps maintain the dental veneers. In addition, we recommend checking the veneers on a regular basis to make sure everything is in place and in good shape.
If you do not like your teeth, dental veneers can correct a variety of aesthetic challenges. As an Invisalign® dentist, we can use veneers to improve the appearance of your teeth or resolve many of these same issues using Invisalign®. The two solutions are wildly different, but can be used to achieve similar goals. Since…
You can alter the size, shape or color of a tooth with dental veneers and dental laminates. Call us for the details. By using dental veneers and dental laminates, we can create a beautiful and perfect looking smile. These thin shells are placed onto the surface of your teeth to cover dark stains, close gaps, and…
As a dentist, we strive to clean our patients' teeth while also ensuring that their teeth remain healthy. Our aim is for our patients to retain their natural teeth for as long as possible. Preventative care plays an important role in accomplishing this since regular teeth cleanings are essential for preventing infection and disease. It…
|
<urn:uuid:41ae64e4-2443-4f67-9377-753ae8d3aa27>
|
CC-MAIN-2020-34
|
https://impladentdentistry.com/blog/all-about-dental-veneers/
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738816.7/warc/CC-MAIN-20200811150134-20200811180134-00235.warc.gz
|
en
| 0.939881 | 1,052 | 2.984375 | 3 |
Finding materials harder than diamond, plus spintronic devices, wrinkle physics and more in this week's news
Devices that use the spin property of electrons to store and process information are one step closer to reality, thanks to a new MRI technique with a resolution tens of thousands of times better than an MRI brain scan. Researchers at Harvard University and the Harvard-Smithsonian Center for Astrophysics can now watch many electrons spin at the same time — and change the spin of a single electron without disturbing its neighbor. The device was tested on a piece of diamond but could be used to monitor many different kinds of materials, as reported in Nature Physics on May 15. —Devin Powell
Wrinklons, a new species of wrinkle
|
<urn:uuid:03473be3-0757-47d0-a05c-92584eb5b022>
|
CC-MAIN-2014-15
|
https://www.sciencenews.org/article/moleculesmatter-energy-14
|
s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00111-ip-10-147-4-33.ec2.internal.warc.gz
|
en
| 0.892728 | 211 | 3.3125 | 3 |
William Harvey (1578–1657). On the Motion of the Heart and Blood in Animals. The Harvard Classics. 1909–14.
X. The First Position: Of the Quantity of Blood Passing from the Veins to the Arteries. And That There Is a Circuit of the Blood, Freed from Objections, and Farther Confirmed by Experiment
SO FAR our first position is confirmed, whether the thing be referred to calculation or to experiment and dissection, viz., that the blood is incessantly poured into the arteries in larger quantities than it can be supplied by the food; so that the whole passing over in a short space of time, it is matter of necessity that the blood perform a circuit, that it return to whence it set out.
But if anyone shall here object that a large quantity may pass through and yet no necessity be found for a circulation, that all may come from the meat and drink consumed, and quote as an illustration the abundant supply of milk in the mammæ—for a cow will give three, four, and even seven gallons and more in a day, and a woman two or three pints whilst nursing a child or twins, which must manifestly be derived from the food consumed; it may be answered that the heart by computation does as much and more in the course of an hour or two.
And if not yet convinced, he shall still insist that when an artery is divided, a preternatural route is, as it were, opened, and that so the blood escapes in torrents, but that the same thing does not happen in the healthy and uninjured body when no outlet is made; and that in arteries filled, or in their natural state, so large a quantity of blood cannot pass in so short a space of time as to make any return necessary—to all this it may be answered that, from the calculation already made, and the reasons assigned, it appears that by so much as the heart in its dilated state contains, in addition to its contents in the state of constriction, so much in a general way must it emit upon each pulsation, and in such quantity must the blood pass, the body being entire and naturally constituted.
But in serpents, and several fishes, by tying the veins some way below the heart you will perceive a space between the ligature and the heart speedily to become empty; so that, unless you would deny the evidence of your senses, you must needs admit the return of the blood to the heart. The same thing will also plainly appear when we come to discuss our second position.
If a live snake be laid open, the heart will be seen pulsating quietly, distinctly, for more than an hour, moving like a worm, contracting in its longitudinal dimensions, (for it is of an oblong shape), and propelling its contents. It becomes of a paler colour in the systole, of a deeper tint in the diastole; and almost all things else are seen by which I have already said that the truth I contend for is established, only that here everything takes place more slowly, and is more distinct. This point in particular may be observed more clearly than the noonday sun: the vena cava enters the heart at its lower part, the artery quits it at the superior part; the vein being now seized either with forceps or between the finger and the thumb, and the course of the blood for some space below the heart interrupted, you will perceive the part that intervenes between the fingers and the heart almost immediately to become empty, the blood being exhausted by the action of the heart; at the same time the heart will become of a much paler colour, even in its state of dilatation, than it was before; it is also smaller than at first, from wanting blood: and then it begins to beat more slowly, so that it seems at length as if it were about to die. But the impediment to the flow of blood being removed, instantly the colour and the size of the heart are restored.
If, on the contrary, the artery instead of the vein be compressed or tied, you will observe the part between the obstacle and the heart, and the heart itself, to become inordinately distended, to assume a deep purple or even livid colour, and at length to be so much oppressed with blood that you will believe it about to be choked; but the obstacle removed, all things immediately return to their natural state and colour, size, and impulse.
Here then we have evidence of two kinds of death: extinction from deficiency, and suffocation from excess. Examples of both have now been set before you, and you have had opportunity of viewing the truth contended for with your own eyes in the heart.
|
<urn:uuid:0b107f6c-992c-4a57-a003-e1f6bf3a62be>
|
CC-MAIN-2014-41
|
http://bartleby.com/38/3/10.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657135549.24/warc/CC-MAIN-20140914011215-00338-ip-10-234-18-248.ec2.internal.warc.gz
|
en
| 0.960324 | 970 | 2.75 | 3 |
Northern Prairie Wildlife Research Center
Our difficulty in finding appropriate adjustments to EM for free-living and allometric error implies that species-specific values are needed to improve population food and habitat predictions for pintails (e.g., Weathers et al. 1984). Alternatively, direct estimation of DEE via doubly-labeled water techniques to measure field metabolic rates have proven useful for free-ranging birds (e.g., Obst and Nagy 1992, Piersma and Morrison 1994, Uttley et al. 1994). Similar investigations of pintails and other waterfowl would simplify and reduce uncertainty of population energetics analyses and facilitate integration with resource management; however, gains in precision need to be balanced against the cost for improvements and the scale of management needs.
In addition to improving and testing model input variables, managers need to field-test model predictions to adaptively guide management of wintering habitats for pintails. Managers could seasonally monitor pintail use of wetlands and rice fields to determine how distribution patterns and behavior change relative to estimated food availability at pintail foraging sites. Conservation programs in the Sacramento Valley (Central Valley Habitat Joint Venture 1990) are adding wetlands and flooded rice fields, so more food may be available than in 1980-82. As food supplies increase, previously documented body mass dynamics (Miller 1986b) may change, and pintails may forage less in dry rice. Managers could easily monitor these possible responses. If the continental pintail breeding population recovers to 1970's levels (U.S. Fish and Wildlife Service and Canadian Wildlife Service 1986), additional habitat may be particularly critical to maintain these populations. Flooded rice fields and wetlands provide primary foraging habitats for pintails in dry winters (Miller 1986b), and shortages of these habitats may contribute to declines in pintail breeding populations (Raveling and Heitmeyer 1989).
Managers must continue to integrate rice fields and wetlands in conservation planning to benefit energetic requirements of wintering pintails in the Sacramento Valley (Heitmeyer et al. 1989, Miller et al. 1989). Without commercial rice as a food source, about 10,000 ha of additional wetlands would be required to support the pintail use-days estimated for the wet winter of 1981-82. The needs of other species would increase this amount of wetlands markedly, because the wetland conservation goals of the Central Valley Habitat Joint Venture (1990) assume that rice lands are available to supplement foraging habitat for ducks. Therefore, managers should monitor economic trends that may affect the rice industry. For example, cotton has recently increased in the Sacramento Valley (Cline 1995), locally replacing rice in some areas, and the new strip-harvest technique (Bennett et al. 1993) directly reduces rice seed available to pintails, and the tall residual vegetation may present a physical barrier to foraging (Miller and Wylie 1997, Day and Colwell 1998).
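The habitat figures above rest on a population-energetics calculation: total energy demanded by the birds (use-days multiplied by daily energy expenditure, DEE) divided by the metabolizable food energy a hectare of habitat supplies. The sketch below illustrates the arithmetic only; the DEE, seed-biomass, and energy-density values are hypothetical placeholders, not the parameters used in this study.

```python
# Illustrative population-energetics calculation (hypothetical parameters,
# not the study's values): hectares of wetland needed to feed a given
# number of pintail use-days.

def wetlands_required_ha(use_days, dee_kj_per_bird_day, seed_kg_per_ha,
                         energy_kj_per_g, metabolizable_fraction):
    total_demand_kj = use_days * dee_kj_per_bird_day           # population energy demand
    supply_per_ha_kj = (seed_kg_per_ha * 1000.0                # kg of seed -> g
                        * energy_kj_per_g                       # gross energy per g
                        * metabolizable_fraction)               # fraction birds can use
    return total_demand_kj / supply_per_ha_kj

# Example with made-up numbers: 20 million use-days, 1,200 kJ/bird/day,
# 500 kg/ha of seed at 17 kJ/g, 60% metabolizable.
print(round(wetlands_required_ha(20e6, 1200.0, 500.0, 17.0, 0.6)))  # ~4706 ha
```

Substituting the study's actual estimates of DEE, use-days, and food availability is what yields figures on the order of the 10,000 ha cited above.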
|
<urn:uuid:7c636edd-a3c3-4458-9b14-e5cc05c5eb3e>
|
CC-MAIN-2014-35
|
http://www.npwrc.usgs.gov/resource/birds/popnrg/manage.htm
|
s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1409535921869.7/warc/CC-MAIN-20140901014521-00068-ip-10-180-136-8.ec2.internal.warc.gz
|
en
| 0.866257 | 589 | 2.9375 | 3 |
Lamotrigine is the main active ingredient in medications used in the treatment of bipolar disorder and epilepsy. It has antiepileptic and analgesic properties. This active component enhances the action of gamma-aminobutyric acid and inhibits neurotransmitter release, which may reduce pain-related transmission of signals along nerve fibers. The agent also inhibits voltage-gated sodium channels and suppresses glutamate release.
Class of Drugs – Antiepileptic drug
Molecular Formula – C9H7Cl2N5
Molecular Weight – 256.09 g/mol
Working: In the treatment of epilepsy, this active component decreases the release of a substance in the brain known as glutamate and keeps neurons from becoming overly active, which results in fewer seizures. In the treatment of bipolar disorder, the medication may affect certain receptors in the brain that help control mood, which reduces the number of mood swings.
Uses: This medicine is approved to prevent and control seizures. It can also be used to help prevent the extreme mood swings of bipolar disorder in adults. Lamotrigine is an anticonvulsant (antiepileptic) drug that works by restoring the balance of certain natural substances in the brain.
This medicine is not recommended for use in children younger than 2 years due to an increased risk of side effects.
Popular Brands and Dosages: This active ingredient is used in the popular brand medications listed below:
- Lamictal: 25 mg, 50 mg, 100 mg, 150 mg, and 200 mg
- Lamictal ODT: 25 mg, 50 mg, 100 mg, 150 mg, and 200 mg
Medicines composed of lamotrigine are not recommended for nursing mothers, as they may harm the nursing infant.
|
<urn:uuid:d7345620-42e7-4704-9cab-916e01c65604>
|
CC-MAIN-2023-40
|
https://thetopmedstore.com/lamotrigine/
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233511406.34/warc/CC-MAIN-20231004184208-20231004214208-00549.warc.gz
|
en
| 0.886506 | 386 | 3.015625 | 3 |
Overweight, obesity, and physical inactivity are smoking’s close cousins in preventable causes of mortality, leading to over 300,000 deaths annually. The societal costs of these conditions* may be as high as $400 billion per year. Therefore, upon reading the May 6 blog post from Sissela Bok and the May 20 post from Steven Schroeder, it seemed appropriate to address two questions that came to mind from their posts: 1) Do people over the age of 65 continue to fall prey to overeating, physical inactivity and weight gain?, and 2) Is 65 too late to tackle overweight, obesity, and physical inactivity?
To answer question 1, we observe prevalence estimates and weight trajectories over time. Among Americans 60+ years of age, nearly 74% of women and 77% of men are either overweight (body mass index of 25 or higher) or obese (BMI of 30 or higher). The rates are even higher for certain racial/ethnic groups in this age group; nearly 90% of African-American women or Hispanic men are overweight or obese. These numbers are staggering and rival or surpass the prevalence for any other age group.
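For reference, the overweight and obesity cut-points used in these prevalence figures come from the body mass index, weight in kilograms divided by the square of height in metres. The snippet below, with made-up numbers, simply shows how the thresholds are applied.

```python
# Body mass index (BMI) = weight (kg) / height (m)^2, with the standard
# adult cut-points used above: 25+ = overweight, 30+ = obese.

def bmi(weight_kg, height_m):
    return weight_kg / height_m ** 2

def bmi_category(value):
    if value >= 30:
        return "obese"
    if value >= 25:
        return "overweight"
    return "not overweight"

example = bmi(82.0, 1.70)                        # hypothetical 82 kg, 1.70 m adult
print(round(example, 1), bmi_category(example))  # 28.4 overweight
```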
But what about weight gain over time? When does it stop? Data from long-standing cohorts demonstrates that both men and women continue to gain weight, on average, through middle-age with some flattening of the trajectory compared to younger age groups. By the time men and women reach age 65, some weight loss thereafter is evident. However, slower weight gain and eventually some weight loss with aging probably reflect gradual loss of muscle mass over time, coexisting with rising body fat mass.
Consistent with this data are findings of increasing calorie consumption over the last several decades among those over the age of 60. Older people are drinking more soda, consuming more fast food, and snacking on high-calorie foods, just like everyone else. And, these behaviors are cumulative. Increases in weight and/or fat mass during older ages are on top of any prior gains, which were substantial in the over 65 group because they lived much of their adulthood during the heart of the obesity epidemic. Living for longer periods at unhealthy body weights only exacerbates the development of chronic disease and functional limitations that are associated with overweight and obesity. Coupled with weight gain, physical inactivity is typical and, if anything, has increased over time in all age groups.
But, on to question 2. Is older age too late to address body weight and physical inactivity? Are the harms already embodied and unlikely to be reversed? Prior observational research has been mixed, showing that weight loss after age 65 may be associated with higher morbidity and mortality. However, a 2011 study by Villareal, et al., in the New England Journal of Medicine provided critical evidence in support of the concept that it’s never too late. This study enrolled 107 patients who were 65+ years old, sedentary, mild-to-moderately frail, and obese in a randomized controlled trial comparing four treatment arms: 1) a weight management program that included weekly group meetings with dieticians seeking to achieve a daily calorie deficit of 500 to 750 calories, 2) a three times weekly, 90-minute, group exercise program, supervised by a physical therapist, that included aerobic exercises, resistance training, and flexibility and balance exercises; 3) a combination of the weight management and exercise programs, and 4) a control group that received monthly advice about a healthy lifestyle from research staff.
The results were impressive. All of the intervention groups had substantial improvements in physical functioning and exercise tolerance compared to the control group, with the greatest improvements for those in the combined exercise and diet group. Strength was improved only in the groups that participated in the exercise program, and weight loss was achieved (18 to 21 pounds) only in the groups enrolled in the weight management program. In other words, losing weight and improving physical activity can have incredibly important and positive influences on health after age 65. We need more studies like this, in older age groups, to make certain that these effects are consistent.
So, I think our answers to questions 1 and 2 are a resounding “yes” and “no”. Older people are vulnerable to weight gain, physical inactivity, and overeating, and it’s never too late to address these issues. Weight loss and starting exercise are achievable and clearly improve functional status. Treatments for obesity in this age group include all of the same treatments for younger people, including lifestyle changes for everyone and medications and bariatric surgery for selected patients.
Medicare has begun recognizing the potential benefits of lifestyle changes and now covers intensive behavioral therapy for obese patients. However, the documentation and billing process for this treatment is overly burdensome, leading to limited use among providers. Medicare does not cover weight loss medications but does cover weight loss surgery. The selection process for medications and bariatric surgery among older patients is particularly important because of a higher risk of morbidity and mortality when undergoing bariatric surgery and much uncertainty about the safety of using medications (really for any age group, but particularly geriatric groups). The medication labels for the primary medications FDA-approved for the treatment of obesity (orlistat, phentermine, lorcaserin, phentermine/topiramate) each recommend caution when using in geriatric populations because of inadequate samples of geriatric patients in trials.
*The AMA recently classified obesity as a disease. The effects of this classification are not yet clear but could open the doors to more widespread coverage of obesity treatment programs.
Jason Block, 37, is a general internal medicine physician, and Assistant Professor in the Department of Population Medicine at Harvard Medical School, where he is Associate Director of the Obesity Prevention Program. His research examines geographic influences on the obesity epidemic and interventions to promote weight loss and overall wellness, especially at the environmental level.
|
<urn:uuid:98fe4b79-979b-400e-ab66-cc7842f1b1de>
|
CC-MAIN-2023-23
|
http://over65.thehastingscenter.org/aging-weight-gain-and-weight-loss/
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224644915.48/warc/CC-MAIN-20230530000715-20230530030715-00617.warc.gz
|
en
| 0.9524 | 1,196 | 2.8125 | 3 |
The Archaeology of Abingdon
The Abingdon area was attractive to human settlers from the earliest times, and important remains of every period of prehistory have been found in and around the town. Palaeolithic, Mesolithic, Neolithic and Bronze Age sites and finds have all been discovered here. In the Iron Age (around 800 BC onwards) the area seems to have been well-populated by farming communities. The remains of their settlements have been excavated at a number of places: at ‘Ashville Trading Estate’ (Nuffield Way) and Wyndyke Furlong, ‘Barton Court Farm’ (Daisy Bank), and Thrupp (in Radley parish), for example.
Excavations on the site of the new District Council Offices in 1988 and 1989 revealed remains of another such settlement: some Early Iron Age pits and a number of round houses, along with grain storage pits and remains of craft activities, dating from the Middle Iron Age (around 300 BC onwards). These discoveries, proving that there is an Iron Age settlement underneath the modern centre, provide the basis for Abingdon’s claim to be the oldest continuously inhabited town in England.
The Iron Age settlement would have been based on farming, although its location at a ford of the Thames may have given it special importance. Later in the Iron Age, large defensive ditches and banks were constructed here, enclosing a wide area beside the Thames. Abingdon at this time seems to have become an ‘oppidum’ – a defended Iron Age ‘proto-town’ which was engaged in trade and craft manufacturing as well as agriculture.
Abingdon was clearly important at and just after the time of the Roman conquest of Britain in 43 AD. It continued to be a significant place throughout the Roman period, although its character at this time is rather unclear. It may well have been a local market centre. After the end of Roman rule in Britain in 410 AD, the Abingdon area became a focus for Anglo-Saxon settlers whose origins lay in northern Europe. There was a major Anglo-Saxon cemetery at Saxton Road, while traces of sunken-featured buildings, and pottery and other objects of this date, have been found in the town centre. Abingdon Abbey may well have been founded in the late seventh century AD, ensuring the town’s prominence for the next 800 years and more. Abingdon’s medieval market was established in the time of the abbey. It is recorded in the Domesday Book of 1086 AD, and continues to the present day.
Many other towns in England have very long histories. Colchester in Essex can justly claim to be the oldest recorded town in England, as coins bearing the letters ‘CAM’, an abbreviation of the town’s Celtic name of ‘Camulodunon’, were being minted around the time of the birth of Christ. Colchester, however, has produced little evidence of Early or Middle Iron Age settlement, of the kind that has been found at Abingdon. Abingdon’s claim to be ‘England’s oldest town’ is a fair one.
See Glossary for explanations of technical terms.
© AAAHS and contributors 2014
|
<urn:uuid:ce58bdba-8e71-4943-9d34-02bbe2978261>
|
CC-MAIN-2023-50
|
https://www.abingdon.gov.uk/feature-articles/archaeology-abingdon
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100327.70/warc/CC-MAIN-20231202042052-20231202072052-00048.warc.gz
|
en
| 0.974159 | 691 | 3.140625 | 3 |
Dakota County Public Health works with cities, townships, and school districts on developing and testing pandemic flu plans.
An influenza pandemic occurs when a new influenza virus appears for which there is little or no immunity in the human population, begins to cause serious illness and then spreads easily person-to-person worldwide.
Dakota County is planning to meet the needs of the most vulnerable people in the County and to assure care for those who are ill in the community.
Dakota County is prioritizing its services, determining how to protect staff to be able to continue essential services, and working with cities and townships to do the same.
If you have questions about the Pandemic Influenza Plan, please contact Gina Adasiewicz, Public Health Emergency Preparedness and Response Supervisor, at 891-952-7149 or email@example.com.
|
<urn:uuid:dc05f94a-7f8f-4961-9dfa-3bbcc92759a5>
|
CC-MAIN-2014-35
|
http://www.co.dakota.mn.us/HealthFamily/HandlingEmergencies/Planning/Pages/pandemic-flu-plan.aspx
|
s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500831565.57/warc/CC-MAIN-20140820021351-00208-ip-10-180-136-8.ec2.internal.warc.gz
|
en
| 0.929254 | 189 | 2.5625 | 3 |
From pv magazine Global.
An international research group has developed a perovskite solar cell with strong thermal stability and enhanced electron injection by using special nanotubes made of cesium-titanium dioxide (Cs-TiO2).
The scientists used titanium sheets with 99.4% purity, 1 mm thickness, and a length of 50 mm. The cell was fabricated with a two-step electrochemical anodization process and was then encapsulated with Cs nanoparticles, after being doped with a Cs-based solution. The Cs-TiO2 nanotubes were then annealed at 450 C. The solar cell is based on methylammonium lead triiodide (CH3NH3PbI3), which is a perovskite with high photoluminescence quantum yield.
The researchers fabricated the nanotubes with a regular, ordered structure, which they say is necessary to achieve high levels of power conversion efficiency in the solar cell. This efficiency is proportional to the length of the nanotubes themselves.
“If the nanotube length is in between 1 micrometer (μm) to 20 μm then the incident photon-to-current conversion efficiency (IPCE) increases and reaches up to 80% at 20 μm length resulting in an increase in the efficiency of perovskite solar cells,” they said, adding that 20 μm is a reasonable distance for an electron to travel and to achieve higher efficiency.
The academics said that the metal ions of the dopant material they used to produce the nanotubes have a better ability to accept electrons.
“The doped metal can easily trap the conduction electrons enabling the reduction in electron-hole pair recombination,” they said.
They used ultraviolet-visible spectroscopy (UV-Vis) to compare the performance of their solar cell with a similar cell designed with TiO2 nanotubes without Cs doping. The thermal performance of the two devices was measured through thermal gravimetric analysis (TGA). The thermal assessment showed that the doped nanotubes have excellent thermal stability under temperatures ranging up to 800 C. They also found that they lose roughly 1% of their weight at around 150 C.
The analysis showed that cesium atom doping effectively facilitates electron transport by reducing recombination reactions. The researchers said that the Cs-TiO2 based perovskite solar cell exhibited superior performance, resulting in an 18.67% jump in short-circuit current and a 22.28% increase in power conversion efficiency from the reference cell. “The doping process can be performed at a low cost, as we used an optimized concentration of cesium of only 0.05 M,” research principal author, H.M. Asif Javed, told pv magazine.
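For readers unfamiliar with the reported metrics, a cell's power conversion efficiency follows from its measured current-voltage parameters via the textbook relation PCE = (Jsc × Voc × FF) / Pin. The sketch below uses hypothetical numbers, not the team's data, to show how an 18.67% gain in short-circuit current feeds into efficiency.

```python
# Textbook solar-cell efficiency: PCE = (Jsc * Voc * FF) / Pin, with Pin the
# incident power density (100 mW/cm^2 under standard illumination).
# The J-V values below are hypothetical, not measurements from the paper.

def pce_percent(jsc_ma_cm2, voc_v, fill_factor, pin_mw_cm2=100.0):
    return jsc_ma_cm2 * voc_v * fill_factor / pin_mw_cm2 * 100.0

reference = pce_percent(20.0, 1.0, 0.70)             # undoped reference cell
doped = pce_percent(20.0 * 1.1867, 1.0, 0.70)        # 18.67% higher Jsc, all else equal

print(round(reference, 2), round(doped, 2))             # 14.0 16.61
print(round((doped - reference) / reference * 100, 2))  # 18.67 (% relative gain)
```

Since the reported efficiency gain (22.28%) exceeds the current gain alone, the open-circuit voltage and/or fill factor presumably improved as well.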
“The improvement in solar cell parameters can be attributed to enhanced extraction of the photo-generated charge carriers in the device,” the researchers concluded.
They described the cell in “Encapsulation of TiO2 nanotubes with Cs nanoparticles to enhance electron injection and thermal stability of perovskite solar cells,” which was recently published in Surfaces and Interfaces. The research team included scientists from Pakistan’s University of Agriculture Faisalabad and the National University of Sciences and Technology, China’s Xi’an Jiao Tong University and Jiangsu University, and King Saud University in Saudi Arabia.
|
<urn:uuid:fd4b8c5b-09fd-428a-b656-21ea9c7f1c4d>
|
CC-MAIN-2023-14
|
https://www.pv-magazine-australia.com/2021/03/02/perovskite-solar-cell-with-cesium-titanium-dioxide-nanotubes/
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943555.25/warc/CC-MAIN-20230320175948-20230320205948-00377.warc.gz
|
en
| 0.914536 | 920 | 3.09375 | 3 |
A new image of the disk of gas and dust around a sun-like star has spiral-arm-like structures. These features may provide clues to the presence of embedded but as-yet-unseen planets.
"Detailed computer simulations have shown us that the gravitational pull of a planet inside a circumstellar disk can perturb gas and dust, creating spiral arms. Now, for the first time, we're seeing these features," said Carol Grady, a National Science Foundation (NSF)-supported astronomer with Eureka Scientific, Inc.
The newly imaged disk surrounds SAO 206462, a star located about 456 light-years away in the constellation Lupus. Astronomers estimate that the system is only about 9 million years old. The gas-rich disk spans some 14 billion miles, which is more than twice the size of Pluto's orbit in our own solar system.
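To get a feel for the observational challenge, the quoted size and distance can be converted to an angular extent on the sky with the small-angle approximation. The numbers below are a back-of-the-envelope check, not figures from the study.

```python
# Small-angle estimate of the disk's apparent size:
# theta (radians) ~= physical diameter / distance.

MILES_TO_KM = 1.609344
LY_TO_KM = 9.4607e12
RAD_TO_ARCSEC = 206265.0

diameter_km = 14e9 * MILES_TO_KM      # "some 14 billion miles" across
distance_km = 456 * LY_TO_KM          # about 456 light-years away

theta_arcsec = diameter_km / distance_km * RAD_TO_ARCSEC
print(round(theta_arcsec, 2))         # ~1.08 arcseconds
```

An apparent extent of roughly one arcsecond is why resolving spiral structure in the disk calls for high-resolution near-infrared imaging of the kind Subaru provides.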
"The surprise," said Grady, "was that we caught a glimpse of this stage of planet formation. This is a relatively short-lived phase."
A near-infrared image from the National Astronomical Observatory of Japan shows a pair of spiral features arcing along the outer disk. Theoretical models show that a single embedded planet may produce a spiral arm on each side of a disk. The structures around SAO 206462 do not form a matched pair, suggesting the presence of two unseen worlds, one for each arm. However, the research team cautions that processes unrelated to planets may also give rise to these structures.
"What we're finding is that once these systems reach ages of a few million years, their disks begin to show a wealth of structure--rings, divots, gaps and now spiral features," said John Wisniewski, a collaborator at the University of Washington in Seattle. "Many of these structures could be caused by planets within the disks."
Grady's research is part of the Strategic Exploration of Exoplanets and Disks with Subaru (SEEDS), a five-year-long near-infrared study of young stars and their surrounding dust disks using the Subaru Telescope atop Mauna Kea in Hawaii. The international consortium of researchers now includes more than 100 scientists at 25 institutions.
"These arm-like structures have been predicted by models, but have never before been seen," said Maria Womack, program director for the division of Astronomical Sciences at NSF. "It is the first observation of spiral arms in a circumstellar disk, and an important test for models of planetary formation."
|
<urn:uuid:caf84f31-d763-453c-801e-673b323ae692>
|
CC-MAIN-2014-23
|
http://phys.org/news/2011-10-spiral-arms-hint-presence-planets.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510275463.39/warc/CC-MAIN-20140728011755-00059-ip-10-146-231-18.ec2.internal.warc.gz
|
en
| 0.92638 | 531 | 3.6875 | 4 |
Galileo originally intended to map the constellation of Orion, but he soon decided that it was far too complex a task, due to the vast number of stars he could see in the constellation through his telescope. He chose instead to include in Sidereus Nuncius a map of the portion of Orion around the sword, in addition to a map of the Pleiades.
Galileo's map of a portion of Orion, showing about 80 stars in addition to the ones which make up Orion's belt and sword. The tiny blue dot inside the red circle represents the field of view (15 arc minutes) of Galileo's telescope. The tiny field of view shows the enormous amount of difficulty Galileo must have had in mapping large regions of the sky.
A modern map of the entire constellation of Orion. The yellow dot within the blue circle shows the field of view.
Looking at Galileo's map of a portion of Orion (above) it can be clearly seen that he has not drawn the Orion nebula, the "fuzzy" star in the middle of Orion's sword which measures a full degree in extent.
A photograph of the Orion nebula. The black circle shows the field of view.
Why did Galileo not include the nebula in his map? Many historians have offered theories to explain this unusual oversight in a skilled astronomical observer. Galileo may have failed to observe the nebula altogether because it has changed in appearance since his lifetime, a theory espoused by Thomas G. Harrison. He may, as suggested by Albert Van Helden, have seen the Orion nebula but chosen not to publish it in his map of Orion, assuming that, like the milky way, the nebula could be resolved into a collection of individual stars given a telescope with sufficient magnification. Tom Williams proposes that Galileo failed to observe the Orion nebula because the field of view of his telescope (15 arc minutes) was so small that he would have been unable to detect the subtle changes in brightness which make up the nebula. We hope to provide some insight into these possibilities through observations of the Orion nebula region using Galilean-type telescopes.
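The field-of-view explanation can be put in numbers using only figures already given in the text: a 15-arcminute telescope field against a nebula roughly one degree (60 arcminutes) across. The tiling estimate below is a rough illustration that ignores overlap between fields.

```python
# Rough angular bookkeeping for the field-of-view hypothesis.
FOV_ARCMIN = 15.0        # field of view of Galileo's telescope
NEBULA_ARCMIN = 60.0     # Orion nebula spans about one degree

diameters_across = NEBULA_ARCMIN / FOV_ARCMIN   # 4 field-widths across the nebula
fields_to_tile = diameters_across ** 2          # ~16 fields, ignoring overlap

print(diameters_across, fields_to_tile)         # 4.0 16.0
```

Because any single field samples only a small patch of the nebula, its gentle brightness gradient could plausibly escape notice, which is exactly the possibility the observations described below are meant to test.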
The following picture is a sketch of the sword region of the constellation Orion. The dots represent stars, with the size of the dot being a rough representation of the magnitude of the star. The circles overlayed on the star map represent the field of view of the 10x Galilean telescope used to make the observation. The group member who made the drawing was unable to see the Orion nebula, which should appear on top of the two adjacent stars between the two empty circles in the third field of view from the top.
Using a twenty-powered telescope with a large field of view (>60 arcminutes) in the relative darkness of Big Bend National Park, it was not difficult to see the Orion nebula. (See sketch below.)
Although we have attempted to do so, no group member has as yet been able to see the Orion nebula either within or outside of Houston using the ten-powered Galilean telescope. However, Tom Williams claims that, outside of Houston, he was able to detect a slight increase in brightness in the field of view when he oriented the telescope on the region where he knew the nebula to be.
The data so far seem to support the theory that Galileo was simply unable to see the Orion nebula due to the small field of view of his telescope. There are, however, several variables that remain to be investigated before a conclusive judgement can be made on this issue. Our failure to see the Orion nebula within and near Houston could be due to any one of three possibilities: the magnification (10x) may not have been sufficient, the field of view may have been too small to permit detection of the light gradient, or light pollution from the city may have obscured the light gradient. Further research must be performed to determine which of these variables is most likely. The effect of magnification can be tested by using Galilean telescopes of varying magnifications. Unfortunately, the field of view of a Galilean telescope decreases as its magnification increases, so this will be a difficult problem to investigate. The effect of field of view can be tested by itself by looking for the nebula using a set of non-Galilean telescopes of the same magnification but differing fields of view (ranging from, say, 5 to 100 arcminutes). In order to best emulate Galileo's viewing conditions, the problem of light pollution should be eliminated to as great an extent as possible. This could be done by visiting an isolated observatory.
|
<urn:uuid:279e8a01-9094-493e-afd1-61a4717168d1>
|
CC-MAIN-2014-41
|
http://galileo.rice.edu/lib/student_work/astronomy96/rjbrown/orion.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1412037662882.4/warc/CC-MAIN-20140930004102-00361-ip-10-234-18-248.ec2.internal.warc.gz
|
en
| 0.962575 | 928 | 3.953125 | 4 |
The projects aim to reduce the impact of bush fires on communities and reduce public and private losses.
The NSW Government will provide $2.59 million for the projects, which is additional to the $2.7 million for 126 projects in 2015 through the NSW Bush fire Risk Management Grants Scheme.
Minister for Justice Michael Keenan said that while natural hazards like bushfires are a fact of life in Australia, communities are stepping up efforts to manage the risks.
“It is only by working together that we can reduce the potentially destructive impacts of future disasters such as bush fires.”
Nineteen fire trails and aerial firefighting supply tanks at 24 regional airstrips across NSW have been approved.
Fire trails provide easier access to fires and the aerial firefighting supply tanks will ensure bulk water or fire retardant is immediately available for loading into fixed-wing aircraft for a more rapid response.
|
<urn:uuid:e579d035-4d69-4a51-8038-c3216e8f6012>
|
CC-MAIN-2020-24
|
https://www.nsw.gov.au/news/extra-funding-for-bushfire-risks
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347441088.63/warc/CC-MAIN-20200604125947-20200604155947-00498.warc.gz
|
en
| 0.940246 | 182 | 2.515625 | 3 |
Evaluating your teaching is an iterative process of gathering evidence which you use to improve and enhance your teaching.
Common forms of evaluation include gathering student feedback to inform how you can improve, whether through student questionnaires or other ways of getting student feedback, and working with other teachers to evaluate and enhance your teaching, such as through peer review and Teaching and Learning Circles (see the Otago Teaching Profile for more information).
As a result of regularly evaluating and improving your teaching, you will also generate evidence that you can use for promotion and confirmation purposes. You can describe this evidence in your Otago Teaching Profile.
Using evaluations to improve teaching and demonstrate quality
Details about the Quality Forum 2017
In this quality forum there was an overview of:
- The Otago University guidelines for evaluating teaching
- The various methods you can use to evaluate and enhance your teaching, and
- How you can use the results of these methods to demonstrate the quality of your teaching
This was followed by a panel discussion where teachers shared how they evaluate their teaching and then used this process to demonstrate the quality of their teaching. Students shared their perceptions of evaluation and suggestions for how evaluation can be more effective for improving teaching and learning.
Facilitator: Associate Professor Clinton Golding, HEDC
- Associate Professor Karyn Paringatai, Prime Ministers teaching award winner 2013
- Brad Hurren, National teaching award winner 2017
- Professor Ruth Fitzgerald, National teaching award winner 2017
- Professor Vernon Squire, Deputy Vice-Chancellor (Academic) 2017
- Bryn Jenkins OUSA student representative 2017
View the video of the Quality Forum 2017
Evaluate to improve: Useful approaches to student evaluations
Here are a few articles to refer to.
Use student evaluation results to improve teaching, Clinton Golding (2012) Akoranga, 8, 8-10
Evaluate to improve: useful approaches to student evaluation. Clinton Golding and Lee Adam (2014)
Many teachers in higher education use feedback from students to evaluate their teaching, but only some use these evaluations to improve their teaching. One important factor that makes the difference is the teacher’s approach to their evaluations.
In this article, we identify some useful approaches for improving teaching. We conducted focus groups with award-winning university teachers who use student evaluations to improve their teaching, and we identified how they approach their evaluation data. We found that these teachers take a reflective approach, aiming for constant improvement, and see their evaluation data as formative feedback, useful for improving learning outcomes for their students. We summarise this as the improvement approach, and we offer it for other teachers to emulate. We argue that if teachers take this reflective, formative, student-centred approach, they can also use student evaluations to improve their teaching, and this approach should be fostered by institutions to encourage more teachers to use student evaluations to improve their teaching.
Other ways of getting student feedback
Student feedback is essential for evaluating your teaching. You need to know whether your teaching is ‘working’ for your students. Student evaluation questionnaires are often the best way to get feedback, because they give you a snapshot of all your students, but sometimes you might also want other forms, e.g. if you want to go deeper, or for small groups where questionnaires are not as useful.
More details about getting student feedback
Teaching and Learning Circles
Teaching and Learning Circles (TLCs) is a University-wide initiative, which combines observations of teaching with supportive peer conversations to provide insight into enhancing teaching. TLCs involve group-based, reciprocal peer observation of teaching with the ultimate goal of strengthening teaching culture and practice.
Teaching and Learning Circles provide “the opportunity to talk about teaching so that it gives you the mind space to reflect upon your own [teaching]. It’s that constructive alignment of seeing others and talking about the ways they get their class to work and being able to talk about your own process with them that is probably, what I think, is missing if I do it with peer review.” (TLCs participant, 2018)
The TLCs process
Each TLC consists of three or four members (preferably from different departments/disciplines). The average time commitment to participate in a TLC is four to five hours over a semester. This includes pre- and post-observation meetings as well as observing the teaching of each member.
There are six stages to the process as shown below.
The Teaching and Learning Circles Resource Pack details the process and provides helpful prompts for self-reflection and post-observation discussion.
Any data generated from your participation in TLCs can be used as evidence of effective teaching to complement student evaluations, and / or to support your teaching statement in the Otago Teaching Profile when applying for confirmation or promotion.
“This has been a really useful and positive experience for me. It has really helped to be able to engage with others’ teaching, especially from such different fields, and at a time when we were in danger of feeling isolated, this group provided the extra support I needed.” (TLCs member during COVID lockdown, 2020)
Teaching and Learning Circles was launched in the Division of Humanities in October 2017 by the then Associate Dean (Academic) Professor Tim Cooper who sought to enhance teaching practice and culture in the Division. The programme is now offered across all Divisions. Those who have participated in Teaching and Learning Circles have a range of teaching experience yet have all reported on the benefits of observing others teach and engaging in supportive conversations about teaching.
Read the Ako-funded research project about the beneficial outcomes for university teachers who have participated in TLCs.
The TLCs co-ordinator, Dr Tracy Rogers
Dr Rogers can help you form or join a Teaching and Learning Circle.
Email firstname.lastname@example.org for further information or to request to join a TLC.
|
<urn:uuid:0d8b9666-f951-4e3e-be31-63d1e619e97a>
|
CC-MAIN-2023-14
|
https://www.otago.ac.nz/hedc/evaluate/index.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945248.28/warc/CC-MAIN-20230324051147-20230324081147-00018.warc.gz
|
en
| 0.948212 | 1,248 | 2.75 | 3 |
Productivity growth—the rate at which we increase production with a given amount of work and resources—is at the heart of economic growth. It will be easier to address the myriad challenges facing the U.S. economy—increasingly fragile middle-class living standards, the long-term federal budget deficits, and increasing economic competition from rapidly developing countries—if productivity grows faster rather than slower and economic growth accelerates with it.
Productivity growth is critical to our national economic competitiveness. U.S. products and services are more competitive in the global marketplace when U.S. firms manage to produce more and better things with the same amount of inputs.
Productivity growth also boosts our future living standards. Simply put, productivity growth means we can have more goods and services available for a given amount of resources used—hours at work, in particular. Because productivity growth makes our work go further, the average standard of living can rise more quickly. To ensure broadly shared prosperity we still need to address how the gains from productivity growth are distributed between wages and profits, but we can’t forget that rising prosperity begins with strong and sustained productivity growth.
A number of key factors come into play in determining faster productivity growth in the future. These include the level of business investment, the availability of skilled workers, spending on research and development, and adequate financing for bringing new and innovative products to market. The indicators of U.S. productivity and innovation surveyed here raise a number of points of concern about the future of U.S. economic competitiveness and living standards:
- Productivity has increased at modest rates since the start of the current business cycle in December 2007. The U.S. economy may already be losing some of its competitive edge in world markets as the U.S. high-tech trade deficit continues to worsen. Domestic innovators are facing stronger international competition as overseas patent applications grow faster than domestic ones.
- Business investment, while recovering, remains historically low. Corporations are not directing their rebounding profits to productivity-enhancing activities, instead holding cash or spending the money on buying back their own shares and paying out dividends to shareholders.
- Spending on innovation is decelerating, in particular public investments in science and research that in the past have led to revolutionary technological advances and private venture capital investments in early-stage start-up companies.
- Workforce training in science, technology, engineering, and mathematics is lagging, such that U.S. employers may not be able to hire all of the skilled workers that they will need in the future.
Productivity growth doesn’t just fall from the sky; it requires sustained policy attention to create private incentives and to supply complementary public investments. One saving grace is the potential opportunity presented by the special congressional “super committee” tasked with presenting a plan for deficit reduction by Thanksgiving. Its 12 members—drawn equally from both parties and chambers of Congress—were given great leeway by Congress under the recently enacted Budget Control Act of 2011, including to recommend policies that can get the economy moving today. The committee can propose legislation that requires only a majority of votes in the House and Senate to boost employment, incomes, and investment while building the foundation for a more productive, competitive, and stable economy tomorrow. The 12 committee members and all members of Congress should seize this opportunity to focus public debate on real national economic priorities: growth, jobs, and competitiveness. Here’s why.
The numbers tell the tale
Soft productivity growth requires policy attention
Worker productivity, the amount of goods or services produced in an hour of work, fell 0.3 percent in the second quarter of 2011. Though productivity in the U.S. economy is up 6.5 percent since the start of the Great Recession—and the current business cycle—in December 2007, the growth in productivity is lagging the pace of previous business cycles. There have only been two business cycles since the Great Depression that lasted at least as long as the Great Recession and had equal or lower productivity growth during this period. All other six such business cycles yielded faster productivity growth.
Productivity growth is soft in part due to slow economic growth but also because productivity growth tends to follow business investment with a lag of one or two decades. The present slow growth could bode ill for future productivity increases in the U.S. economy. Business investment in equipment, software, and factories, as we discuss next, has been at or near historic lows for a decade now. Low investment today may likely constrain productivity increases tomorrow.
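The cumulative 6.5 percent figure can be restated as an average annual rate for comparison across business cycles. The calculation below assumes roughly 3.5 years between December 2007 and mid-2011; it is an illustration, not a number taken from the report.

```python
# Average annual productivity growth implied by a cumulative gain:
# (1 + total_growth) ** (1 / years) - 1

total_growth = 0.065    # productivity up 6.5% since December 2007
years = 3.5             # December 2007 to mid-2011, approximately

annual_rate = (1 + total_growth) ** (1 / years) - 1
print(round(annual_rate * 100, 1))   # about 1.8% per year
```

That works out to well under 2 percent a year over the cycle, consistent with the characterization above of productivity growth as modest.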
Some business investment recovering
Since March 2010, business investment grew faster than gross domestic product or GDP—the sum total of all goods and services produced by workers and equipment in the United States. Business investment stood at less than 10 percent of GDP in the second quarter of 2011, and has averaged just 10.2 percent of GDP for the current business cycle—the lowest average of any business cycle since the late 1960s.
The American Recovery and Reinvestment Act of 2009 helped pave the way for the recovery and increase of business spending on equipment such as trucks, machinery, and computers. Such business investment fell to a low of 6.4 percent of GDP in the second quarter of 2009. As the Recovery Act stimulus began kicking in, the demand it propelled for investment goods from private businesses helped boost equipment spending to 7.3 percent of GDP in the second quarter of 2011. While business equipment investment has nearly recovered to prerecession levels, business investment in buildings and factories remains nearly 33 percent below prerecession levels in the wake of the real estate bubble and mortgage and financial crisis.
Business investment, though on the upswing, will only gain momentum if businesses expect more sales in the future. Additional sales can come from stronger consumption at home and from more exports.
Rebounding profits fuel cash holdings, share repurchases—not productive activity
The slow recovery of business investment has little to do with business profitability. The corporate profit rate in nonfinancial businesses is up from a low of 1.5 percent of total assets in December 2008 to 2.8 percent in March 2011, the highest level since December 2006. But with these profits, corporations are prioritizing activities other than hiring and investing. First, corporations are stockpiling cash holdings, which stood at 6.8 percent of total assets in March 2011, the highest level since December 1965.
Second, corporations are using rebounding profits to prop up their stock prices—a key factor in executive compensation—by repurchasing shares and paying out dividends amounting to 118 percent of after-tax profits on average between December 2007 and March 2011. This means corporations are actually borrowing money to buy back their own shares and pay dividends, rather than putting that money into productivity-enhancing investments or hiring workers.
Venture capital investment neglecting early innovation
Investments by venture capital investors are also recovering slowly. In the four quarters through June 2011, venture capital investments amounted to more than $24.9 billion, up 5 percent over one year prior, but still 21 percent lower than before the financial turmoil in the second half of 2008—and less than one-fourth of the level at the end of the 1990s dot-com era, after adjusting for inflation.
The slow recovery of VC funding reflects less a lack of opportunities or resources and more a lack of risk appetite from VC investors. Financing for expansion and late-stage VC investments has grown at a robust 19 percent annual rate since the start of the current business cycle. Over the same time VC investments in seed-stage companies have fallen by 34 percent. In an environment of low overall investment and employment, creating uncertainty for economic growth, venture capitalists are seeking more proven investments over riskier start-up businesses and innovations.
The United States lags behind other countries in R&D spending
Spending on research and development in the United States amounted to 2.7 percent of GDP in 2007 (the most recent year of comparable data), ranking eighth in the world. Israel, Sweden, South Korea, Finland, Japan, Switzerland, and Iceland all dedicated larger shares of their economies to R&D investment. China, in comparison, dedicated 1.5 percent of its GDP to R&D and ranked 24th among all countries.
R&D figures include both private business and public investments in research and innovation. Though total R&D spending is led by private business—accounting for an average of two-thirds of all R&D—not all private-sector R&D investment yields improvements to productivity or general social well-being in obvious ways. A food manufacturer, for example, may spend on R&D leading to a new artificial flavoring. In contrast, public investments in R&D typically provide support for development of basic science, technological advances, and commercialization of innovations. Often, such public R&D spending provides resources critical for supporting activities of private businesses. But Federal government spending on R&D has lagged far behind private R&D, growing at only 1.5 percent annually through 2008 and has been targeted for severe cuts in recent budget negotiations.
STEM workforce in relative decline
All the capital in the world can’t be productive but for a skilled labor force to put it to work. Of course innovative ideas may come from anywhere, but training in science, technology, engineering, and mathematics—so-called STEM fields—creates a workforce with the requisite skills to be productive at innovating, adapting, and implementing new technological advances. Over the past decade educational attainment in the U.S. labor force increased steadily, with the share of the labor force earning associate degrees rising from 9 percent in 2000 to more than 10 percent in 2009, and with the share earning four-year college degrees rising from 20 to 22 percent.
Educational attainment is on the rise, yet students are increasingly moving away from education in STEM fields. In 2003, as much as 18 percent of all associate degree recipients earned degrees in STEM fields, but by 2009 only 11 percent did. Similarly, early in the decade 4.3 percent of all bachelor degree recipients were in STEM fields, but by 2009 only 3.8 percent graduated with STEM majors. The declining shares of workers graduating with STEM skills could impede adaptation and commercialization of innovations and technology into new business applications, even though U.S. universities continue graduating high shares of STEM Ph.D.s.
Advanced technology trade balance eroding further
The U.S. trade deficit in high-tech goods, such as aircraft, optical equipment, and medical devices, worsened 45 percent to $92 billion in the 12 months through June 2011, the last month for which we have data. For the past two years, U.S. exports of advanced technology goods have grown at a modest 2.7 percent annually. At the same time, U.S. high-tech imports—already larger than exports to begin with—grew 9.4 percent annually to $384 billion. On an annualized basis, the high-tech trade deficit now has worsened every month for the past 20 months.
Compared to other U.S. exports, high-tech exports are growing slowly. Overall U.S. exports grew 11.5 percent annually for the past two years. Lagging performance of advanced technology trade also weighs on the overall U.S. trade deficit. The share of the high-tech trade deficit in the total U.S. trade deficit increased from 11.8 percent one year ago to 13.6 percent in June.
The U.S. economy may be losing its competitive edge as the trade balance in high-tech products is widening, and a widening high-tech trade deficit contributes to a growing threat of economic instability emanating from a rapidly growing overall U.S. trade deficit.
Domestic innovation facing stronger international competition
Grants for utility patents from the U.S. Patent and Trademark Office grew markedly in 2010, up 31 percent over 2009 to nearly 220,000 grants. Utility patents are special property rights awarded to individuals or organizations for the invention of “new and useful” or material improvements of processes, machines, or materials. Not all patents represent productivity-enhancing innovation and the timing of patent grants may not coincide with the timing of invention. Nonetheless, the pace of patent awards provides a metric of the pace of innovation in the U.S. economy.
Even though patents overall were up in 2010, the share of patents awarded to domestic U.S. entities continued to decline. Under U.S. law, both Americans and foreigners can apply for patent rights. Of new patent awards in 2010, 51 percent were granted to foreign entities; in 1999 foreign entities earned only 45 percent of all patent awards.
Innovations from abroad can still confer substantial benefits on the U.S. economy. By making new technologies or practices available to domestic businesses and consumers, foreign innovations can enhance business productivity and boost living standards for U.S. households. But homegrown innovation remains critical to U.S. global science and technology leadership, and the rising awards of patents to foreigners signals an increasingly competitive international landscape for innovation.
With the broad authority vested in the congressional “super committee” under the Budget Control Act, lawmakers now have an opportunity to act on flagging U.S. productivity. At the very least, any deal should refrain from cutting fiscal support from an already fragile economy. Beyond this very low bar, Congress has an opportunity to refocus public debate on real national economic priorities—strong and sustained economic growth, jobs, and productivity. Here, there are two obvious and mutually reinforcing policy imperatives:
- Shore up the fragile economic recovery and boost employment by stabilizing aggregate demand growth, which will strengthen the private sector’s sales expectations that are key to increasing business investment.
- Support public investments in education, infrastructure, and scientific research that provide a foundation for innovation and productivity across all sectors of the U.S. economy.
Christian E. Weller is a Senior Fellow at the Center for American Progress and an associate professor of public policy at the University of Massachusetts Boston. Adam Hersh is an economist at the Center.
|
<urn:uuid:3feca624-90b6-4e14-8459-b92fb4a5ee88>
|
CC-MAIN-2020-10
|
https://www.americanprogress.org/issues/economy/news/2011/08/18/10068/the-state-of-american-productivity-growth/
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875147628.27/warc/CC-MAIN-20200228170007-20200228200007-00319.warc.gz
|
en
| 0.948412 | 2,900 | 2.875 | 3 |
A new study suggests that men who regularly eat soy products may be more likely to need infertility treatment.
A link between soy isoflavone compounds and infertility has previously been demonstrated in animal studies, but the latest study in the journal Human Reproduction is one of the first to indicate a similar link in humans.
Researchers studied 99 men and discovered that those who ate the most soy products - such as tofu, tempeh and soy milk - had 41 million fewer sperm per millilitre than men who did not eat any soy products.
Dr Jorge Chavarro, a research fellow in the department of nutrition at Harvard School of Public Health, US, revealed: "Men in the highest intake group had a mean soy food intake of half a serving per day.
"In terms of their isoflavone content that is comparable to having one cup of soy milk or one serving of tofu, tempeh or soy burgers every other day."
The expert also noted that some of the men involved in the study were eating nearly four whole servings of soy products per day.
At least 20 per cent of couples who are of reproductive age are affected by infertility, and around 30 per cent of men are estimated to be sub-fertile.
© Adfero Ltd
Infertility treatment news : 25/07/2008
|
<urn:uuid:897795d5-723e-4d1b-8cff-de3863828c78>
|
CC-MAIN-2013-48
|
http://www.privatehealth.co.uk/news/july-2008/study-links-soy-foods-with-male-infertility1323/
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386163052810/warc/CC-MAIN-20131204131732-00070-ip-10-33-133-15.ec2.internal.warc.gz
|
en
| 0.961385 | 270 | 2.515625 | 3 |
A gateway is a data communication device that provides a remote network with connectivity to a host network.
A gateway device provides communication to a remote network or an autonomous system that is out of bounds for the host network nodes. Gateways serve as the entry and exit point of a network; all data routed inward or outward must first pass through and communicate with the gateway in order to use routing paths. Generally, a router is configured to work as a gateway device in computer networks.
Any network has a boundary or a limit, so all communication placed within that network is conducted using the devices attached to it, including switches and routers. If a network node wants to communicate with a node/network that resides outside of that network or autonomous system, the network will require the services of a gateway, which is familiar with the routing paths of other remote networks.
The gateway (or default gateway) is implemented at the boundary of a network to manage all the data communication that is routed internally or externally from that network. Besides routing packets, gateways also possess information about the host network's internal paths and the learned path of different remote networks. If a network node wants to communicate with a foreign network, it will pass the data packet to the gateway, which then routes it to the destination using the best possible path.
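To make the routing decision concrete, here is a minimal Python sketch (not drawn from any particular vendor's implementation) of the logic described above: traffic addressed to nodes inside the local network boundary is delivered directly, while anything destined for a remote network is handed to the default gateway. The addresses, prefix length, and function name are illustrative assumptions.

```python
import ipaddress

# Illustrative host configuration (made-up addresses).
LOCAL_NETWORK = ipaddress.ip_network("192.168.1.0/24")
DEFAULT_GATEWAY = ipaddress.ip_address("192.168.1.1")

def next_hop(destination: str) -> str:
    """Decide where a packet for `destination` should be sent first."""
    dest = ipaddress.ip_address(destination)
    if dest in LOCAL_NETWORK:
        # The node is inside the network boundary: no gateway needed.
        return f"deliver directly to {dest}"
    # The node belongs to a remote network or autonomous system:
    # hand the packet to the gateway, which knows the routing paths.
    return f"forward to default gateway {DEFAULT_GATEWAY}"

print(next_hop("192.168.1.42"))  # local delivery
print(next_hop("203.0.113.7"))   # routed through the gateway
```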
|
<urn:uuid:048d711e-da63-4d1f-838f-840c17e20d2f>
|
CC-MAIN-2020-34
|
https://www.techopedia.com/definition/5358/gateway
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439736972.79/warc/CC-MAIN-20200806151047-20200806181047-00387.warc.gz
|
en
| 0.943456 | 296 | 4.3125 | 4 |
DALLAS (AP) – In the bowels of the Museum of Nature & Science in Fair Park, Tommy Diamond sat hunched over what looked like a huge boulder of plaster.
Yes, there was a lot of plaster there, but what was underneath was much more valuable: priceless, in fact.
Diamond spends eight hours (or more) a day chipping away at the vertebra of an Alamosaurus.
Ten vertebrae of the 65-million-year-old dinosaur, uncovered in 1997, are being prepared to go on display for the opening of the Perot Museum of Nature & Science in Victory Park in 2012.
“No one’s ever seen the neck of an Alamosaurus before,” said Tony Fiorillo, curator of earth sciences at the museum. “They’re not the biggest vertebrae ever found, but they’re certainly the biggest ever excavated in Texas.”
The cervical bones were found in Big Bend National Park in West Texas by accident. Fiorillo’s team was working about 500 feet away when University of Texas at Dallas student Dana Biasatti saw the dinosaur’s hip bones poking right above the surface.
They weren’t lifted out of the park until 2001, after extended negotiations with Big Bend officials. Because the vertebrae were so heavy (ranging from 100 to 800 pounds), some had to be airlifted out of the park by helicopter.
Now they sit in the museum’s basement in plaster casts, waiting to be uncovered by the museum’s fossil preparators. It takes six to seven months to chip away at all the plaster and uncover the bone (without damaging it) and get it museum ready.
“When you get past all the dirt and down to the bone, that’s when you get really sucked into it,” said Ron Tykoski, chief fossil preparator. “It gets really exciting, every little layer you peel away.”
The first Alamosaurus bones were discovered in 1922 in New Mexico. Contrary to one’s first thought, the dinosaur was not named after the San Antonio landmark, but rather the river bed where the fossils were found.
The dinosaur is classified as a sauropod, which includes some of the largest animals to live on land. The Alamosaurus, a herbivore, is estimated to have weighed 35 tons.
The Museum of Nature & Science is working with the Smithsonian Institution in Washington, D.C., and UTD, both of which have other parts of the dinosaur. The actual bones will be scanned and casts will be made and put on display in the museum. The real bones are too heavy to use in the full skeleton re-creation.
Three institutions working together on one animal is atypical, Fiorillo said. Usually one organization handles all of the preparation and set up for a museum display. Because the other institutions discovered the other bones decades ago, the three must work together to re-create the entire animal.
But there is one part of the Alamosaurus no one has ever seen — a skull.
“We were really excited because the neck was all articulated,” Fiorillo said. “It was going and going and we were hoping for a skull, but it stopped after the neck.”
Tools that wouldn’t look out of place in a dentist’s office are used to painstakingly chip away slivers of dirt of and plaster. Tykoski and Diamond said it’s not uncommon for their fingers to be numb after consecutive hours of scraping away at the bone.
The museum has two people dedicated solely to getting the Alamosaurus fossils ready. But volunteers have also played a huge part in preparing the bones.
Whittling away at the layers of plaster and dirt for hours on end doesn’t demand expert skill, but it does require quite a bit of patience.
Fiorillo said the volunteers come in when they can, working for a few hours at a time. They come from scientific and nonscientific backgrounds — from anthropology students to fossil enthusiasts to artists.
Marcie Keller, who started full time with the museum at the beginning of October, was an art major in college. She has no experience in archaeology or science at all, for that matter.
Fiorillo said those with an artistic background are some of their best volunteers, because of their good hand-eye coordination.
Keller agreed, adding: “I did a lot of etching and printmaking so this is pretty much doing the same thing. I felt very comfortable in the lab right away.”
The work is tedious, the fossil preparators admit, but there is almost always a huge payoff once the final layer of dirt is scraped away.
“You can be the first person in the world to see a particular bone or fossil,” Tykoski said. “It can look like a big chunk of rock, but maybe you’ll find something under all those layers.
“It’s quite the adrenaline rush.”
(Copyright 2010 by The Associated Press. All Rights Reserved.)
|
<urn:uuid:beb92209-baca-4a67-8c61-f429d4ed4b43>
|
CC-MAIN-2017-34
|
http://dfw.cbslocal.com/2010/11/21/dallas-researchers-chip-away-at-dinosaur-bones/
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886116921.70/warc/CC-MAIN-20170823000718-20170823020718-00216.warc.gz
|
en
| 0.956611 | 1,070 | 3.234375 | 3 |
- How do you find the diameter of a square foot?
- How many feet is 3 square feet?
- What is the diameter of 3 feet?
- How do you measure 3 square feet?
- How large is 5×5?
- What all can you fit in a 5×5 storage unit?
- Can a car fit in a 10×10 storage unit?
- How much is a 10×20 storage unit?
- How do I know what size storage unit I need?
- Do people live in storage units?
- How big is a 10×10 storage unit?
- How big is an 8×10 storage unit?
- How big is a 10×12 storage unit?
- What fits in a 10×10 storage?
- What can fit in a 4×4 storage unit?
- What fits in a 4×5 storage unit?
- What size storage do I need for a 1 bedroom apartment?
- How big is a 10×30 storage unit?
How do you find the diameter of a square foot?
Calculate the Diameter of a Circle, from Its Area
- Divide the area (in square units) by Pi (approximately 3.14159). Example: 303,000/3.14159 = 96447.98.
- Take the square root of the result (Example: 310.56). This is the radius.
- Now double the radius to get the diameter (Example: 621.12 meters).
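The same three steps can be written as a short Python function. This is only a restatement of the calculation above; the 303,000 figure reruns the worked example, and the result comes out in whatever length unit the area was squared in.

```python
import math

def diameter_from_area(area: float) -> float:
    """Diameter of a circle with the given area, in matching length units."""
    radius = math.sqrt(area / math.pi)  # steps 1-2: divide by pi, take the square root
    return 2 * radius                   # step 3: double the radius

print(round(diameter_from_area(303_000), 2))  # 621.12, matching the example above
```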
How many feet is 3 square feet?
Square feet to Feet Calculator
|1 ft2 =|1 feet|1 feet =|1 ft2|
|3 ft2 =|1.7321 feet|3 feet =|9 ft2|
|4 ft2 =|2 feet|4 feet =|16 ft2|
|5 ft2 =|2.2361 feet|5 feet =|25 ft2|
|6 ft2 =|2.4495 feet|6 feet =|36 ft2|
What is the diameter of 3 feet?
Diameter of a Circle
How do you measure 3 square feet?
Basic formula for square feet Multiply the length by the width and you’ll have the square feet. Here’s a basic formula you can follow: Length (in feet) x width (in feet) = area in sq. ft.
How large is 5×5?
A 5×5 self storage unit is 5 feet wide and 5 feet long, totaling 25 square feet — comparable to a large closet. Many of our 5×5 storage units have 8-foot ceilings, providing roughly a total of 200 cubic feet of packing space. Check with your local Public Storage facility for exact dimensions on 5×5 units.
What all can you fit in a 5×5 storage unit?
Typically, a 5×5 storage unit can hold:
- Twin or full size mattress.
- Seasonal home décor.
- Pool accessories and gardening equipment.
- Carpets, lamps, end tables and other small décor items.
- Dining room chairs or stools.
- Baby accessories.
- Handful of small to mid-size boxes or storage containers.
Can a car fit in a 10×10 storage unit?
For smaller vehicles, a 10×10 or 10×15 storage unit is your go-to because it’s very similar in size to a one-car garage. Ranging from $80-$105 per month, a 10×10 storage unit will fit your vehicle perfectly. If you want a little more space for extra equipment or supplies, a 10×15 ranges from $99-126 per month.
How much is a 10×20 storage unit?
What is the average cost of a 10×20 storage unit? The average cost of a 10×20 storage unit is $138.06 per month.
How do I know what size storage unit I need?
3. Calculate your space
- Square feet (sq. ft.). Multiply the length and width of your belongings. If they make a pile that’s 5 x 5 feet, you’d need a storage unit with at least 25 square feet.
- Cubic feet (cu. ft.). Multiply the length, width, and height of your belongings.
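As a rough illustration of the calculation steps above (the function name and the example pile are assumptions, not any storage company's tooling), this short Python helper returns both figures:

```python
from typing import Optional, Tuple

def storage_space(length_ft: float, width_ft: float,
                  height_ft: Optional[float] = None) -> Tuple[float, Optional[float]]:
    """Square feet of floor space needed, plus cubic feet if a height is given."""
    square_feet = length_ft * width_ft
    cubic_feet = square_feet * height_ft if height_ft is not None else None
    return square_feet, cubic_feet

# A 5 x 5 ft pile of belongings stacked 4 ft high needs at least
# 25 sq ft of floor space and 100 cu ft of packing space.
print(storage_space(5, 5, 4))  # (25, 100)
```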
Do people live in storage units?
Can You Live in a Storage Unit? No. Living in a storage unit is prohibited by various local and federal housing laws. Storage facilities must evict any person they find living on the premises to comply with the law and most insurance policies.
How big is a 10×10 storage unit?
100 square feet
How big is an 8×10 storage unit?
Our 8×8 Storage Unit, approximately 64 square feet, is a comparable, wider upgrade from the 7×10 unit. The 8×8 extra space allows for all kinds of storage options. Pack away a lot of your household items and personal belongings, wide furniture, moving boxes, etc.
How big is a 10×12 storage unit?
Our 10×12.5 units can house the full contents of a two-bedroom apartment. Measuring 125 square feet, they equal an average-sized bedroom. Some of the items that can fit in our 10×12.5 storage space include: Large appliances – refrigerator, washer, dryer, etc.
What fits in a 10×10 storage?
A 10×10 storage unit can hold:
- Large appliances.
- Coffee tables and end tables.
- Lamps, rugs and home décor items.
- Bicycles, garden equipment and seasonal home décor.
- Two queen size beds.
- Dressers and vanities.
- Office furniture.
What can fit in a 4×4 storage unit?
Store away your skate shoes, golf bags, skateboards, tennis racquets, cricket sets, fishing gear (hey, no fish!) and many other sporting goods in a 4×4 storage unit.
What fits in a 4×5 storage unit?
- 4×5. About the same size as a broom closet. This space is great for skis, boots, a tent, or hanging your bike vertically.
- 8×10. Larger than a walk in closet.
- 10×20. This size will accommodate the contents of a three or four bedroom home with all appliances or a compact vehicle, snowmobile trailer, or boat.
What size storage do I need for a 1 bedroom apartment?
A 5’x10′ (50 sq. ft.) storage unit is ideal for a 1 bedroom apartment. It’s great for furniture like a queen-sized mattress set, dresser, couch, TV, bike and several boxes.
How big is a 10×30 storage unit?
300 square feet
|
<urn:uuid:fce56f09-44cd-4e15-b407-da5a2ad85e91>
|
CC-MAIN-2023-23
|
https://www.archivemore.com/how-do-you-find-the-diameter-of-a-square-foot/
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224652494.25/warc/CC-MAIN-20230606082037-20230606112037-00444.warc.gz
|
en
| 0.821352 | 1,500 | 2.546875 | 3 |
A while ago I chose to use as my recession indicator a decline in annual real GDP per capita. The reason for this is twofold. Firstly, it can be derived from official figures (i.e. from the St. Louis Fed) as opposed to the more nebulous proclamations from the NBER. Secondly, simply measuring real GDP doesn't cut it in an annualised form since, by that measurement, there was no recession in 2001 and the 1970 recession wasn't too bad at all.
The thing is that population always affects economic growth. If an economy is expanding and population along with it, chances are that economic growth is being driven by an expanding consumer and producer base - more people means more potential consumers and more potential producers.
- Nominal GDP = the actual numbers. This ignores inflation and population.
- Real GDP = nominal GDP adjusted by inflation. This ignores population.
- Real GDP per capita = Real GDP per head of population.
As you can see there are very close correlations between the NBER pronouncements and annual declines in real GDP per capita. The differences (apart from the 1956 Q3 blip) seem to be in measuring the start date - the NBER measurements always start between 1-3 quarters before annual declines in real GDP per capita - and in measuring the length.
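A minimal Python sketch of the measures defined above, using made-up numbers rather than actual St. Louis Fed series; the deflator convention (base period = 1.0) and the helper that flags annual declines are assumptions of this illustration.

```python
def real_gdp_per_capita(nominal_gdp: float, deflator: float, population: float) -> float:
    """Real GDP per capita: nominal GDP with inflation stripped out, spread over the population."""
    real_gdp = nominal_gdp / deflator
    return real_gdp / population

def recession_years(per_capita_by_year: dict) -> list:
    """Years whose annual real GDP per capita fell below the previous year's level."""
    years = sorted(per_capita_by_year)
    return [y for prev, y in zip(years, years[1:])
            if per_capita_by_year[y] < per_capita_by_year[prev]]

print(round(real_gdp_per_capita(15e12, 1.05, 310e6)))  # about 46,083 per person (illustrative)
series = {2007: 49_300, 2008: 48_900, 2009: 46_900, 2010: 47_700}  # illustrative values
print(recession_years(series))  # [2008, 2009]
```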
|
<urn:uuid:eca6673e-decc-4f81-bb29-abd8c7350150>
|
CC-MAIN-2017-34
|
https://one-salient-oversight.blogspot.com/2011/07/recession-measurement.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886102967.65/warc/CC-MAIN-20170817053725-20170817073725-00317.warc.gz
|
en
| 0.951094 | 266 | 3.15625 | 3 |
Animal ABC + 123 Wooden Blocks Set
20 solid-wood blocks feature letters and numbers, plus counting dots and pictures of familiar objects to illustrate each one. It's a classic learning manipulative that will lead to hours of counting, sorting, and building fun, in a modern color palette that today's families will love!
Extension Activities: More Ways to Play and Learn:
- Ask the child to make a tall tower, stacking as many blocks as possible until the tower tumbles.
- Ask the child to sort the blocks by color. Then ask the child to find other ways to sort and group the blocks - such as gathering the letters in his or her own name, or sorting sea animals from land animals.
- Help the child to count all the blocks in a given group. Then help the child add two groups together, counting all the combined blocks
- Ask the child to select a number block. Help the child name the number, trace the number with a finger, count the counting dots, then make a tower of the same number of blocks.
- Gather a group of several blocks and arrange them so that all but one block has an animal facing up. Ask the child to identify the block that is different and turn it to make the group complete. Repeat with numbers, capital letters, lowercase letters, and counting dots.
Dimensions: 9.75" x 8" x 2.25" Packaged
|
<urn:uuid:ded4d482-3235-4dde-9d76-58ae13f9aca5>
|
CC-MAIN-2014-10
|
http://www.melissaanddoug.com/animal-abc-alphabet-numbers-123-wooden-blocks-set
|
s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1393999678747/warc/CC-MAIN-20140305060758-00002-ip-10-183-142-35.ec2.internal.warc.gz
|
en
| 0.896822 | 313 | 3.828125 | 4 |
The best carpentry tip for beginners is to remember the saying ‘measure twice, cut once’. Learning to accurately measure is essential and will save you a lot of time in the long run.
Apprenticeships are a tried and tested route into carpentry and is how many carpenters learn their trade. Apprenticeships offer full-time employment and pay whilst you learn.
A carpenter needs a few basic tools to get started. These include a hammer, screwdrivers (a set with various sizes), and a tape measure. It’s important to buy quality tools because they can last a long time, and if you don’t use them properly, they can be dangerous.
A hammer isn’t just for hitting home nails, it can also be used to remove nails, lever timber into position, blunt the end of nails so they don’t split wood and much more. You should aim for accuracy over power at first until you get the hang of it.
Similarly, screwdrivers are used to drive screws into wood as well as other materials like metal, and to tighten or loosen existing screws. Carpenters should have a few different types of screwdrivers, as some projects require a specific type.
All carpenters need to know how to use a tape measure accurately. This is because every project requires precise measurements. Getting even one measurement wrong can derail a whole project, so it’s important to learn how to do this correctly. A basic tape measure should have calibrations for both imperial (inches and feet) and metric.
Another tool that all carpenters need is a miter box. A miter box is a wooden jig with two raised sides and a groove in the middle to slot your saw into predetermined angles to make accurate cuts. This helps you avoid making mistakes when cutting pieces of timber to the same size, which could be disastrous.
You’ll also need a pencil or carpenter’s chalk to mark out lines and shapes on timber. This is so that you can see where you’re going to cut, and it makes it easy to go back and correct any errors. It’s also good for marking lines on areas that are difficult to reach, so you don’t have to strain when using power tools.
You’ll also need to have a suitable work space to practice your carpentry skills. This doesn’t have to be a huge garage or basement workshop, but it should be spacious enough to fit your tools and allow you to move around comfortably. It should also be clean, well-lit, and ventilated. This will help you stay focused and motivated, so you’ll be more likely to keep practising your carpentry skills.
Measurement is an essential skill for any carpenter, as precise measurements help ensure a quality finished product. Accurate measurements can also save time and money by reducing the number of mistakes made during project completion. The key to accurate measurement is using the right tools and following the correct procedures.
The most basic carpentry tool is a tape measure. Opt for a model with a locking mechanism and clear markings to ensure precision. A framing square is another must-have for woodworking projects, as it can help you determine whether a piece of wood is level or plumb and can be used to mark perpendicular lines. A combination square combines a ruler with a pivoting head to allow you to make measurements in a variety of angles, making it an indispensable tool for checking the accuracy of your work.
Other important measuring tools include a carpenter’s pencil, which has a flat rectangular shape to prevent it from rolling off surfaces and can be sharpened to a wide point for accurate markings on wood. You can also use a set of calipers to measure precise distances with high accuracy. Lastly, a carpenter’s level is useful for ensuring that surfaces and structures are level and plumb.
Regardless of the measurement tool you choose, it’s vital to maintain a steady grip when using them. Shaky hands can result in inaccurate measurements, so be sure to practice good hand hygiene and regularly clean your measuring tools to avoid contamination. Lastly, remember to account for the thickness of your measuring tool when taking measurements, as this can affect the results.
Measuring accurately is crucial for any carpentry project, but it’s not always easy to achieve. By following a few simple tips and using essential tools, you can improve your measuring skills and ensure precision in all your future carpentry projects.
Carpentry is all about joining pieces of wood together to make something functional. This is one of the hardest skills to master as it requires precise cutting and positioning of each piece. Even the slightest mistake can turn a carefully honed piece of wood into nothing more than a heap of scrap lumber.
To get the most from your woodwork you will need to be able to use various joining techniques. This means knowing how to use nails, screws, glues and a variety of other ways to connect wood together. This also includes being able to select the correct materials for each job. For example, short screws will fail to hold boards together while larger ones will pierce through and damage the wood.
Another part of joinery is woodworking techniques, such as planing (reducing the size of boards), routing (using a tool to create finished edges and shapes) and lathing (carving wooden materials on a rotating axis). You will also need to know how to choose and use power tools correctly. This is especially important when using a drill, as the wrong type can damage your work and hurt you.
When deciding on a power tool for your job, look for a model that has an ergonomic handle, is lightweight and has variable speed settings. Also, if you’re just starting out, consider choosing a cordless option, as this will be much safer to use.
Finally, it’s always a good idea to invest in a high quality tape measure, which is more durable than cheaper models and easier to read. A 25 feet model is a great choice as it’s long enough for most jobs but still compact and easily portable.
Lastly, don’t be afraid to learn from others. Many of the most successful carpenters have spent time watching or listening to experienced professionals. This is a great way to pick up new tips and tricks and see how other carpenters work. You can find plenty of videos online that cover everything from how to cut different types of wood to how to properly erect a shed.
As with any hobby or career, it’s important to practice safety techniques when first getting started in carpentry. This includes wearing proper clothing and using the right tools for the job, and it also means being aware of potential hazards like power tools and sharp objects. Being aware of these hazards can help prevent accidents and injuries from happening.
It’s also a good idea to make sure that your work area is clean and organized. A messy workstation can quickly lead to distractions and lost productivity. And finally, it’s important to always wear the appropriate PPE (personal protective equipment) like gloves, eye protection, and a mask when working with loud power tools or hazardous materials.
Another aspect of carpentry is keeping your tools in good condition. This means cleaning them on a regular basis, making sure that they have all of their safety features intact, and storing them properly. This will ensure that they continue to work well for you and that you’re able to use them effectively.
As you start to get more experience in carpentry, you may find that you need to hone your skills in certain areas. For example, you might want to learn how to read blueprints or become more proficient at laying out timber for assembly. These are both useful skills that can help you advance your career, so don’t be afraid to challenge yourself and take on new tasks. Just be sure to choose a project that is within your skill set so you don’t overwhelm yourself.
The best way to begin your journey in carpentry is by purchasing some basic tools and creating a safe workspace. From there, you can start exploring the many possibilities that this trade has to offer. It’s important to remember that carpentry can be physically demanding, so it’s a good idea to stretch and take breaks often. Also, be mindful of your posture and avoid taking on projects that will require you to stand or bend for extended periods of time. Overall, with some dedication and hard work, you can become a skilled carpenter in no time at all!
|
<urn:uuid:ef2b068f-4c84-47af-8744-135653f021b5>
|
CC-MAIN-2023-40
|
https://robfindlay.org/carpentry-techniques-you-need-to-know-for-beginners/
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510516.56/warc/CC-MAIN-20230929122500-20230929152500-00007.warc.gz
|
en
| 0.950616 | 1,794 | 2.890625 | 3 |
|The House of Commons, 1833-43, oil on canvas, 300 x 500 cm, National Portrait Gallery, London|
Painted to commemorate the passing of the first Parliamentary Reform Bill in England in 1832, the work took Sir George Hayter 10 years to complete. It depicts the opening session of the new House of Commons on 5 February 1833.
Of the 658 members in parliament at the time, 375 are present in the portrait and 323 can be definitively identified, including a self-portrait of the artist himself, kneeling in the bottom right corner. Highly figurative, the painting gives each sitter specific, painstaking attention; individual sittings took place in most instances, and many preparatory oils survive.
After completion, with interest in the Reform Bill having waned, Hayter had great difficulty in finding a buyer for the monumental work. It was 15 years later that he succeeded in selling it, ironically, to the then Tory government (which had originally opposed the commemorated reforms) for £2,000. It was presented to the newly founded National Portrait Gallery in London and was for many years hung in the Houses of Parliament, rebuilt following a fire in 1834, a year after Hayter completed preparatory sketches of the space.
While the painting remains a record of a moment in the extension of democracy in Britain, it must be remembered that it was not until 1919 that the first woman joined the House, and not until 1928 that women gained equal voting rights. Looked at today, the work is a stark reminder that global power has historically been held primarily by straight, well-off, white males.
|
<urn:uuid:eeb8fc97-0ded-4711-9b8b-ad8c4df08c43>
|
CC-MAIN-2014-23
|
http://realitybitesartblog.blogspot.com/2011/06/bite-127-sir-george-hayter-house-of.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510263423.17/warc/CC-MAIN-20140728011743-00009-ip-10-146-231-18.ec2.internal.warc.gz
|
en
| 0.968404 | 334 | 3.015625 | 3 |
How to Use Big Data to Your Advantage
Users have been generating ever greater amounts of data in the past few years, partly due to rapid digitalization since the pandemic. As a result, a growing number of analytics applications are capitalizing on these data assets.
However, building scalable systems is no trivial task, and incidents are inevitable. Complex systems generate data in the form of logs, traces, metrics, and more, which organizations often struggle to sift through. Such logs are a powerhouse of valuable information. When analyzed alongside third-party datasets, they can help the IT operations team identify the incident lineage.
Extracting Incident Management Insights From Big Data
Sophisticated AI algorithms and powerful infrastructure level the playing field; as a consequence, speed is what differentiates businesses with a competitive edge. The pace of innovation frequently calls for users to migrate centralized IT assets to modern microservice-based systems. However, these complex systems are just as unpredictable as centralized systems, and they require advanced tools in a changing environment.
Analyzing vast amounts of data is a herculean task. Data assets can quickly become a liability if they’re not well managed. This is where incident management systems come in. An effective incident management system improves the incident lifecycle and helps the IT operations team understand why outages occur.
The Challenges of Working With Big Data
The rapid increase in data from disparate sources makes it impractical for people to generate their own insights. According to Miller’s law, the average person can only hold about seven objects in their working memory. This means that the human brain cannot analyze vast datasets to generate actionable insights. However, artificial intelligence (AI) and machine learning (ML) algorithms can overcome this limitation. These algorithms can uncover hidden data patterns, empowering businesses with data-driven insights.
An incident signifies an outage or a negative impact on business services. Incident logs store all the relevant details, including the impacted application, its connected services, timestamp, severity, and so on. These logs generate large amounts of vital data from different IT infrastructures, services, applications, and networks to detect faults early. However, the low signal-to-noise ratio of such a data set critically limits its analysis.
First, if the data set has a low signal-to-noise ratio, it’s difficult for IT engineers to identify the subset of data relevant to troubleshoot the error logs. Secondly, such incident logs are largely imbalanced since most data capture the system’s healthy state. Only a small subset of data carries phenomena related to incidents that must be filtered and monitored for effective incident response management.
When incidents occur, engineers must expend precious brainpower to examine them manually. This makes a clear case for using AI and ML algorithms to manage large-scale datasets and generate actionable insights.
What Is Artificial Intelligence for IT Operations (AIOps)?
Observability will be the data industry's next big focus, allowing you to see the health of your data pipelines. It's a big step up from the black-box testing approach, which provides only limited information about how outputs deviate in response to changes at the input.
AIOps that use AI and ML algorithms empower IT operations. AIOps equips the IT team with the necessary support to efficiently and effectively manage systems at scale. For example, AIOps automatically analyzes data to maintain system health. This removes the risk of manual errors or delinquencies.
AIOps systems save significant time for IT teams by automatically resolving alerts. AIOps can aggregate and correlate patterns from data sources like storage, infrastructure, and network. ML algorithms generate data-driven inferences from large data sets by:
- Predicting when the next outage will be
- Identifying inefficiencies
- Predicting which type of outages are most likely to occur
- Analyzing the root cause of incidents
These insights give the operations team the details necessary to respond to incidents, leading to quicker incident resolution. AIOps breaks down the information silos restricting information to a limited group of engineers. This democratization of knowledge empowers more team members to address incidents.
AIOps is critical to faster incident management and reducing downtime, keeping customer and system expectations in check.
AIOps and ML Capabilities
Utilizing ML algorithms that use IT data and telemetry data, AIOps builds a predictive maintenance arm for the systems. Advanced techniques such as natural language processing (NLP) analyze unstructured data, like error logs, outage sources, defect types, incident categories, and event descriptions. As computers don’t understand raw text data, NLP converts the natural language data into a vectorized form. Computers then apply the processed data in many cases, such as clustering similar incidents using unsupervised algorithms. Such capabilities can detect issues in your services early, reducing the impact on consumers downstream.
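As a rough sketch of the vectorize-then-cluster idea (not the pipeline of any specific AIOps product), the snippet below turns a handful of made-up incident descriptions into TF-IDF vectors and groups them with k-means. The sample text, the scikit-learn choice, and the two-cluster setting are all assumptions.

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

# Made-up incident descriptions standing in for real log or ticket text.
incidents = [
    "payment service timeout after database connection pool exhausted",
    "checkout latency spike, database connections maxed out",
    "login page returns 500 after certificate expired",
    "TLS certificate expiry caused authentication failures",
]

vectors = TfidfVectorizer(stop_words="english").fit_transform(incidents)  # text -> vectors
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

for label, text in zip(labels, incidents):
    print(label, text)  # similar incidents should end up with the same cluster id
```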
As operations grow, incident logs can quickly proliferate and swarm the operations team with alerts. It’s challenging to filter out false alerts—also known as false positives—and irrelevant data wastes time. AIOps solutions cut through the noise. Sophisticated algorithms discover telemetry data patterns to identify the cause of outages and alert the site reliability engineering (SRE) engineer. Consequently, engineers can focus on what’s important, like text data, timestamps, and the source of incidents.
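One very simple way to cut through alert noise is to page only on telemetry points that sit far outside the series' typical range. The sketch below is a crude stand-in for the statistical filtering an AIOps platform performs, not a production-grade detector; the latency series and the 2-sigma threshold are assumptions.

```python
from statistics import mean, stdev

def unusual_points(values, threshold=2.0):
    """Return (index, value) pairs that deviate strongly from the series average."""
    mu, sigma = mean(values), stdev(values)
    if sigma == 0:
        return []
    return [(i, v) for i, v in enumerate(values) if abs(v - mu) / sigma > threshold]

# Made-up response-time telemetry (ms); only the spike should raise an alert.
latencies = [120, 118, 125, 122, 119, 121, 640, 123]
print(unusual_points(latencies))  # [(6, 640)]
```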
AIOps allows an autonomous workflow that resolves some bugs without human intervention. Engineers can either fix the other bugs themselves or follow a prescribed resolution.
Insights From Big Data
Static rules are insufficient for monitoring modern IT infrastructure’s complex, modular, and distributed nature. The operations team can only detect issues they’ve previously encountered. And since issues don’t always follow a repeated pattern, it’s difficult for the operations team to detect and resolve new issues swiftly with minimal business disruption.
AIOps can prevent outages by alerting the team to issues they’ve never encountered before. However, AIOps isn’t just about alerting the team to existing issues. It also promotes learning and recording fixes for every incident. Information on the nature of incidents, proposed resolutions, and final working solutions become training data for ML algorithms. A tightly-coupled feedback loop allows the algorithms to learn and identify underlying causes to help prevent similar incidents in the future.
To build trustworthy systems, you must be able to verify whether your system is behaving as intended. This requires you to analyze large data sets with the help of an AIOps system that leverages sophisticated algorithms. To increase efficiencies at every layer of the incident management system and tackle the causes of recurring incidents, your organization needs an AIOps component.
|
<urn:uuid:15026436-e1e1-437d-b8e1-d10ce3d65570>
|
CC-MAIN-2023-23
|
https://www.xmatters.com/blog/how-to-use-big-data-to-your-advantage/
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224655092.36/warc/CC-MAIN-20230608172023-20230608202023-00313.warc.gz
|
en
| 0.913549 | 1,387 | 2.65625 | 3 |
THURSDAY, Aug. 27 (HealthDay News) -- Practicing yoga regularly may help your eating habits so you can maintain a healthier weight, a new study says.
Researchers at the Seattle-based Fred Hutchinson Cancer Research Center reported a link between practicing yoga and "mindful eating": mindful eaters are better aware of their feelings of hunger and fullness and of why they eat. These mindful eaters, as opposed to those who ate regardless of hunger or to soothe anxiety or depression, tended to be less likely to be obese, the study found. Results are published in the August issue of the Journal of the American Dietetic Association.
"Mindful eating is a skill that augments the usual approaches to weight loss, such as dieting, counting calories and limiting portion sizes," study leader Alan Kristal, associate head of the Cancer Prevention Program at the Hutchinson Center, said in a news release from the center. "Adding yoga practice to a standard weight-loss program may make it more effective."
The study was based on analysis of a questionnaire about mindful eating habits (such as being distracted by other things while eating or responding to emotional situations with food) and other health- and exercise-related factors that was completed by more than 300 people at Seattle-area yoga, fitness and weight-loss facilities. Though the average weight of the participants was within normal ranges, people who practiced yoga tended to have a noticeably lower body mass index than those who didn't, with the average being 23.1 versus 25.8, respectively.
The findings support earlier research by Kristal that found that regular yoga helped middle-age people gain less weight over a 10-year period than non-yoga practitioners, regardless of other physical activity and eating patterns.
Although about half of the new study's participants also engaged in at least 90 minutes of walking or moderate and strenuous exercise, only regular yoga class participation was linked to mindful eating.
Kristal, himself a yoga enthusiast, said that yoga challenges people to focus and accept their surroundings without judgment, key teachings that might encourage better discipline about eating. "This ability to be calm and observant during physical discomfort teaches how to maintain calm in other challenging situations, such as not eating more even when the food tastes good and not eating when you're not hungry," he said.
Kristal hopes the questionnaire his team developed could have clinical and research applications that would help people understand their eating habits and promote better ones.
The U.S. National Center for Complementary and Alternative Medicine has more about yoga.
SOURCE: Fred Hutchinson Cancer Research Center, news release, August 2009
|
<urn:uuid:5ce25b0b-3f27-4329-9fae-5b2f73ceb541>
|
CC-MAIN-2017-43
|
http://abcnews.go.com/Health/Healthday/story?id=8427783
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187825700.38/warc/CC-MAIN-20171023054654-20171023074654-00229.warc.gz
|
en
| 0.970677 | 533 | 2.625 | 3 |
If you’re thinking about using the services of a child psychologist, or even becoming one yourself, you may wonder what does a child psychologist do on a daily basis?
A child psychologist uses therapeutic techniques to help children and their families address emotional, behavioral, and developmental issues. They may conduct diagnostic evaluations, provide individual and family therapy, and offer support and guidance to parents and other caregivers.
Working with kids and their parents to aid child development is basically the role of a child psychologist. They use knowledge of psychology and therapy methods to hopefully help kids overcome challenges and reach their potential.
Let’s further explore the role of child psychologists. Here are main duties of the job.
1. Assess Potential Disorders
A child psychologist may also be called a pediatric psychologist or developmental psychologist. One of their main duties is to assess mental, emotional, and behavioral disorders in children and adolescents.
As a professional in this field, a child may present to you after, for example, being referred by their treating physician or being brought in by a parent.
Among the first steps you may take is to conduct an initial assessment of the child. You could interview the child and their parents, observe the child, and administer some standardized tests. Key elements include the following.
- Interviews: Interview the child, their parents, and other significant people in the child’s life, such as teachers or caregivers. Gather information about the child’s symptoms, medical history, family history, and developmental history.
- Behavioral observation: Observe the child in different settings, such as home, school, or during therapy sessions, to gather information about behavior and symptoms.
- Standardized tests: Administer standardized tests, such as IQ tests, achievement tests, or personality tests, to assess the child’s cognitive functioning, academic skills, and personality traits.
- Other assessments: Other assessments may be applied as well, such as rating scales or checklists, to gather information about symptoms, behaviors, and functioning.
- Review medical records or previous assessments: Review any previous medical or psychological records, such as school or medical evaluations, to gather additional information about the child’s history and development.
A good child psychologist will also consider the child’s physical and social environment, including the family and school situation. These external factors can have a big impact on behavior and symptoms.
2. Diagnose Disorders
Based on the information gathered in assessing a child, a developmental psychologist may make a diagnosis of any mental, emotional, or behavioral disorders. The practitioner will evaluate the child’s symptoms, behaviors, and functioning in different areas of life. From this evaluation, it will be determined if the child meets the criteria for a specific disorder.
Making a diagnosis of a mental disorder requires knowledge, training, experience and clinical judgement. The diagnosis will be based on the criteria of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5); the standard classification of mental disorders. Note that diagnosis can be a complex process and may involve multiple evaluations over time to ensure the diagnosis is accurate.
A child psychologist may also reach other conclusions about a child’s emotional or behavioral issues. Examples of alternative conclusions are that the child’s difficulties:
- stem from a specific developmental stage and may improve with time and maturity;
- are related to environmental factors, such as stress or trauma, and can be addressed through therapy or interventions to improve the child’s living situation;
- are related to a medical condition, such as a brain injury or neurological disorder, and may require treatment or management from a medical specialist; or
- are related to poor parenting or other issues within the family system, and may require group therapy or other interventions to address underlying issues.
The psychologist may reach a conclusion that the child does not have a mental disorder, but that they are going through a difficult time and need additional support.
3. Provide Therapeutic Treatment
Depending on the particular diagnosis, a developmental psychologist has an arsenal of therapeutic treatment options available. These are needed to address the myriad of potential conditions.
Child psychologists encounter numerous mental health conditions and other issues, such as attention-deficit/hyperactivity disorder (ADHD), anxiety disorders, autism spectrum disorder, behavioral problems, depression, family problems, learning disorders, parent-child conflicts, social skills difficulties, and trauma and abuse.
In therapy, kids talk and learn how to work out their problems. Going to therapy helps them cope better, communicate better, and do better. (Kids Health)
Treatment may include a combination of the following:
- Cognitive-behavioral therapy (CBT) to help children identify and change negative thoughts and behaviors
- Behavioral therapy to address specific problems such as defiance or aggression
- Family therapy to improve communication and relationships within the family
- Play therapy to help children express themselves and work through their issues
- Medication may be prescribed to help manage symptoms of certain conditions such as ADHD or anxiety
- Social skills training to improve children’s ability to interact with others
- Support and guidance for parents to help them understand and support their child’s needs.
CBT, behavioral therapy, family therapy, and play therapy are all forms of talk therapy, which helps children and families work through emotional and behavioral issues. Medication may be prescribed as an adjunct therapy to help manage symptoms of certain conditions such as ADHD or anxiety. Social skills training and parent guidance are also considered therapeutic treatments.
A child psychologist typically works as part of a team. Examples of fellow professionals who may be part of the team and treatment plan include: occupational therapist, parents and caregivers, pediatrician or psychiatrist (who can prescribe medications), school counselor, social worker, special education teacher, and speech therapist.
4. Work with Parents and Caregivers
Child psychologists often work closely with parents and caregivers to help them understand and manage their child’s condition. The practitioner will educate them on the child’s diagnosis, and offer strategies for managing symptoms and behavior.
Child psychologists consult with parents to help them understand the child’s developmental needs. They offer guidance on how to support the child’s social, emotional and cognitive development. They may discuss how to communicate effectively with their child, and create a supportive home environment.
Psychologists may also work with parents to help them develop effective parenting skills and cope with the stress of caring for a child with a mental health condition. Parents sometimes also need help with personal issues that may be impacting their ability to parent effectively.
5. Collaborate with Other Professionals
A child psychologist often works as part of a multidisciplinary team of professionals. He or she may collaborate with teachers, physicians, social workers, and other mental health professionals to ensure that the child’s needs are being met in all areas of their life.
For example, a developmental psychologist may work with a child’s teacher to provide strategies for managing symptoms in the classroom setting, or may work with a physician to coordinate medication management.
Collaboration with fellow professionals allows for a more holistic approach to treatment, taking into account the child’s physical, emotional, and social needs.
A child psychologist may also consult with school counselors or school psychologists, to provide support and guidance on how to support the child academically and socially. School is a big part of a child’s life and these involvements help ensure the child receives appropriate services and accommodations.
Additional professionals may be involved too, such as occupational therapists, speech therapists, or other specialists. They may address any other concerns or issues that may be impacting the child’s overall well-being. A collaborative approach ensures that the child receives the most comprehensive and effective treatment possible.
6. Conduct Scientific Research
As a child psychologist, all or part of your job may be to conduct scientific research on child development and mental health. Child psychology is listed as a field of study in psychology. You may help further our understanding of the causes and treatments of mental health disorders in children.
Researchers work in settings such as universities, hospitals, or research institutions. They use research methods that include surveys, experiments, case studies, and observational studies, to collect data and analyze the information.
Research can take many forms, including basic research that aims to understand the underlying mechanisms of child development and mental health. There is also applied research that aims to evaluate the effectiveness of different interventions or treatments.
7. Train Related Professionals
Child psychologists often provide consultation and training to people who work with children, such as teachers, social workers, physicians, and other mental health professionals.
They may offer advice or instruction on how to recognize and address mental health issues, as well as strategies to manage symptoms and behavior. Training may also cover how to conduct assessments and evaluations, and use evidence-based treatments.
Child psychologists may also provide consultation to schools and other organizations that serve children, such as after-school programs or youth sports leagues, to help them promote health and well-being. The education efforts may extend to professionals in related fields, such as child welfare or juvenile justice.
8. Work on Early Intervention
Some child psychologists develop and implement prevention and early intervention programs to address mental health concerns in children. These programs aim to identify children at risk and provide early support.
Prevention activities include providing education and information to parents, teachers, and other adults who interact with children. A child psychologist may also develop and implement school-based programs to promote health and well-being, such as social-emotional learning programs or resilience-building programs.
Early intervention programs address mental health issues as soon as they’re identified, often before they become severe. This can include providing individual or group therapy, or counseling. Psychologists may also work with families and caregivers to provide support and guidance on how to manage and cope with their child’s condition.
9. Support Families in Crisis
Child psychologists guide families in crisis situations, such as when a child is experiencing a mental health crisis, or when a family is dealing with a traumatic event.
When working with families in crisis, a child psychologist can offer crisis counseling, trauma-focused therapy, and support groups. They may also work with professionals, such as crisis responders, to provide a coordinated response.
10. Do Community Outreach
Child psychologists may participate in community outreach programs to raise awareness about child mental health and promote access to services. This can include providing education and information to parents, caregivers, and community members on: (a) the signs and symptoms of psychological conditions in children; and (b) how to access services.
In this role, a psychologist may work with community organizations, such as schools, faith-based organizations, and youth sports leagues to provide information and resources. They may train and consult with professionals such as teachers, social workers, and community health workers.
Another service is to provide screening and assessment to children and families in underserved or under-resourced communities. This can help to identify children who are at risk of developing mental health problems and facilitate early intervention.
Myths and Facts
Here are some common misperceptions about child psychology and the truthful facts.
Myth 1: Child psychologists are only concerned with diagnosing and treating disorders.
Fact: Child psychologists are trained to work with children and adolescents to help them overcome emotional, behavioral, and mental health issues. This includes not only diagnosing and treating disorders, but also promoting healthy development and well-being.
Myth 2: Child psychology is only for children with severe mental health conditions.
Fact: Child psychology is not just for children with severe mental health conditions. Child psychologists work with diverse clients, including young people experiencing common challenges such as difficulty with school, friendship, or family dynamics. They also work with kids dealing with more serious issues such as abuse, trauma, or severe disorders.
Myth 3: Child psychology is only for children and adolescents.
Fact: Child psychology is not only for children and adolescents, but also for their families and caregivers. Child psychologists often work with parents, caregivers and other family members to help them understand and support their child’s needs. They may also provide guidance on how to manage and cope with the child’s condition, and may work with families to develop effective parenting skills.
|
<urn:uuid:b883b591-81d4-47bc-b5c8-eff6aaf6a373>
|
CC-MAIN-2023-40
|
https://www.babyandchildsafety.com/what-does-a-child-psychologist-do/
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510516.56/warc/CC-MAIN-20230929122500-20230929152500-00533.warc.gz
|
en
| 0.950177 | 2,520 | 3.640625 | 4 |
On this day in 1919, preeminent jazz pianist and musical personality Nat “King” Cole, was born. Known for his soft baritone, Cole was one of the first black Americans to host a TV variety show.
Born Nathaniel Adams Coles in Montgomery, Alabama, and raised in Chicago, Cole was reared on an eclectic musical smorgasbord. When his father became a Baptist minister, Cole learned to play the organ from his mother, the church organist. He began formal lessons at age 12, learning jazz, gospel, and Russian and European classical music. As a teen, he would sneak out of his room and hang out around clubs, soaking in the likes of Louis Armstrong, Earl Hines, and Jimmie Noone. In fact, Cole was so inspired by Hines, a pioneer of modern jazz, that he dropped out of school at 15 to pursue a performing career. He adopted the name "Nat Cole," invited brother Eddie, a bass player, to join the band, and made his first recording in 1936 under Eddie's name. He got his nickname, "King," inspired by the nursery rhyme "Old King Cole," while performing at a jazz club.
The following year, Cole quickly assembled the King Cole Trio, a touring group that landed on the charts in 1943 and 1944 with "That Ain't Right" and "Straighten Up and Fly Right." They also landed in American homes with pop hits like the holiday classic, "The Christmas Song." By the 1950s, Cole emerged as a popular solo performer with hits like "Nature Boy," "Mona Lisa," "Too Young," and "Unforgettable." He worked with greats like Louis Armstrong and Ella Fitzgerald and befriended the likes of Frank Sinatra.
The famous crooner with the honey voice made TV history in 1956 when he became the first black American to host his own national program, The Nat King Cole Show, a variety mix that featured top performers. Unfortunately, the series lasted just one year, going off air in 1957 because of a dearth of sponsors–which, some argue, reflected an unwillingness by businesses to back a program featuring a black American.
As a prominent black American in the civil rights era, Cole struggled to stake his place in the national conversation on race. He was harassed by supremacists during performances in the South, but black Americans also rebuked him for not taking a more public stance on racism and civil rights. Cole said he saw himself primarily as an entertainer and not an activist.
On 15 February 1965, Cole died from lung cancer. He was only 45. To this day, the velvet-voiced crooner remains a powerful influence on the music world, and his music has found its way into countless film and TV soundtracks.
Credit: © AF archive / Alamy
Caption: Jazz legend Nat “King” Cole in front of a piano.
|
<urn:uuid:fe96e847-1cfe-40be-a98b-ecde55860d11>
|
CC-MAIN-2020-16
|
https://www.historychannel.com.au/articles/nat-king-cole-is-born/
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370526982.53/warc/CC-MAIN-20200404231315-20200405021315-00237.warc.gz
|
en
| 0.97791 | 630 | 3.421875 | 3 |
Forgiveness and Judaism
Interview With Carmelite Father F. Millán
MADRID, Spain, JULY 28, 2006 (Zenit) - Judaism offers a positive challenge to the Christian idea of forgiveness, says Carmelite Father Fernando Millán Romeral.
Father Millán, professor at the faculty of theology of the Pontifical University of Comillas in Madrid, explains in this interview with us that "modern Judaism has kept some essential features of forgiveness that we Christians -- at certain times and in certain ways, even with the best intentions -- have tended to neglect."
Q: Could you explain the Jewish concept of forgiveness?
Father Millán: First, it must be said that the concept of forgiveness is very important, not only for Judaism, but for all religions. More than that, it is such an essential experience that, understood or misunderstood, it is present in every cultural expression, in the political debate, in family life, etc.
In Judaism, forgiveness is conceived in a very similar way to what we Christians practice -- not in vain did we also inherit from them, among many other things, the idea of forgiveness.
Perhaps -- and this is what I usually speak about -- modern Judaism has kept some essential features of forgiveness that we Christians -- at certain times and in certain ways, even with the best intentions -- have tended to neglect.
Because of this, I believe that thinkers like Vladimir Jankelevitch or experiences such as those recounted by Simon Wiesenthal in his work "The Sunflower" can help us to rethink our idea of forgiveness, the idea that sometimes is addressed in theology and catechesis.
Q: Do you think that Christianity -- or, rather, Christians -- have abandoned the concept of conversion and that the concept of forgiveness has become somewhat "juridical"?
Father Millán: I don't think so. The believer who takes his faith fairly seriously is constantly hearing talk of conversion and forgiveness.
What has perhaps happened, at least in certain milieus, is that by preaching a merciful God, which could not be otherwise, we have forgotten that forgiveness means a "return" to God, a conversion -- that God does not rain down forgiveness and does not distribute it indiscriminately.
God always forgives and he forgives everything. There is no sin so great that it cannot be forgiven or that God is not willing to forgive, but only for the one who wants to be forgiven, and this presupposes a series of elements such as the desire to repair, as far as possible, the evil committed; sincere repentance; careful attention to the victims of our sin; and so on. If it is not so, forgiveness becomes something else.
Of course all this makes sense when we speak of sin in the strong sense; otherwise this discussion becomes a caricature. Perhaps our trivialization of the concept of forgiveness comes from our trivialization of the concept of sin. When anything is called sin, in the end real sin is no longer taken seriously.
Q: Some theologians and pastors speak of a "crisis" of the confessional. Does that crisis exist? What are its causes?
Father Millán: It does exist, though it is also true that there are Christian groups, communities, movements, etc., of very different orientation that have included this element in their journey and in their living of the faith. But, in general terms, a crisis does exist.
The reasons are very varied and very complex: from a loss of a sense of sin in our society [...], to a loss of values and moral points of reference, as well as a certain disaffection and lack of appreciation for this sacrament in the pastoral program and in Christian practice.
Also of influence, perhaps, is the trivialization of which we spoke earlier. When forgiveness is granted in a routine way, with little meaning, without consequences on real life, etc., it ends up by being something trivial and, often, believers with a strong faith experience have abandoned this practice.
Likewise the liturgical and symbolic poverty of this sacrament is now something chronic, despite the efforts of the new rite of penance of 1974. ... However, I stress, the causes are very complex.
Q: Does Pope John Paul II's petition for forgiveness for the errors committed by Christians in history, especially toward the Jews, come close to the idea of "conversion"?
Father Millán: I believe that gesture of John Paul II is of enormous grandeur and it will take us centuries to appreciate it properly.
It is true that some Christians might have felt disconcerted, and there were even those who complained that no one asks for forgiveness, only we Christians acknowledge our faults --blessed be God! By asking for forgiveness, one does not lose stature or dignity -- on the contrary.
Nor does this gesture in any way imply looking negatively at 2,000 years of history. Above all the Jubilee was an act of thanksgiving for all that the Church has received in the course of the years and for what she has given the world, but there have also been enormous infidelities, persistent errors, lamentable negligence, and for this the Pope, in the name of the whole Church, asks God for forgiveness.
I think that any person from another religious tradition, if he looks at this gesture of John Paul II without prejudices, would see something beautiful and hopeful in it.
Q: What influence has the Holocaust had on contemporary Jews?
Father Millán: Jean Amery, a Jewish thinker who has written much on this topic, says that the experience of the Holocaust is not only a "shema" Israel, but a "shema" world.
The whole world looks overwhelmed at the experience of the Holocaust, an experience that --without ever attributing more value to the death of one human being over that of another -- had such special characteristics ....
Let us recall, for example, that it was about a systematic, cold and bureaucratic death and a persecution that had no possibility of redemption. Even if a Jew was tall and blond, even if he was a Christian, even if he was affiliated to the Nazi party, he was equally destined to extermination.
The Holocaust should make us all more cautious, more profound in our political analyses. Today when there is so much superficial talk in the political world, the Holocaust is a constant knock on our consciences and an inescapable ethical warning.
Q: Do you think the Holocaust has influenced the dialogue between Jews and Christians? In general, what stage have these relations reached?
Father Millán: It is a very delicate question. Let's not forget that the Holocaust took place in Christian countries, though carried out by a strongly anti-Christian ideology. On the other hand, Jewish thought is not unitary. There is no unique or official Jewish thought.
In this connection, I think that Christians and Jews of good-will look at the Holocaust with the same astonishment and horror. And we also look toward the future. John Paul II was a very positive Pope in this regard and Benedict XVI follows the same line.
If Christianity shows itself to be respectful and willing to dialogue with all religions -- without implying that all is accepted uncritically, especially in certain cases -- in the case of Judaism this is even clearer and easier.
Our relationship with Judaism is not simply the respectful relationship between two religions that are parallel. It is much more: Christianity loses its meaning if it forgets Judaism. Much repeated in this connection is John Paul II's phrase "the Jews are our elder brothers in the faith," and it really sums up well what we are saying.
|
<urn:uuid:bb4f696a-e585-43ea-9703-4057e5649c30>
|
CC-MAIN-2017-39
|
http://www.catholic.org/featured/headline.php?ID=3521
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818689874.50/warc/CC-MAIN-20170924044206-20170924064206-00011.warc.gz
|
en
| 0.923099 | 2,243 | 2.6875 | 3 |
By Jesse Buonanno
The purpose of using a vMAC (virtual media access control address) is to prevent tracking of mobile devices. However, due to poor implementation, or no implementation at all, many devices can still be tracked. Figure 1 shows how a vMAC is identified: if the universal/local bit is set, then a MAC is determined to be virtual.
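As a rough illustration of that check, the following Python sketch (the function name and the colon-separated address parsing are my own assumptions) tests the universal/local bit of the first octet:

def is_virtual_mac(mac: str) -> bool:
    """Return True if the universal/local (U/L) bit of a MAC address is set.

    The U/L bit is bit 0x02 of the first octet; when it is set, the address
    is locally administered, which is how randomized (virtual) MACs are flagged.
    """
    first_octet = int(mac.split(":")[0], 16)
    return bool(first_octet & 0x02)

# Android's reserved randomization prefixes have the bit set:
print(is_virtual_mac("da:a1:19:12:34:56"))  # True
print(is_virtual_mac("92:68:c3:12:34:56"))  # True
print(is_virtual_mac("ac:bc:32:12:34:56"))  # False (globally unique OUI)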
The most commonly exploited vector is the probe request. Probe requests are sent out periodically by the mobile device to look for access points in the area. This can be done through broadcast or targeted probe requests. To track a device, these requests are passively monitored and fingerprinted to isolate a single device from others that may also be using vMACs. Techniques for tracking and for fingerprinting will be validated and combined to bypass the protections provided by current vMAC implementations. Testing was done on an iPod (MKH22LL/A) running iOS 10.3.1, an LG Nexus 5X running Android 6.0, and a Google Nexus 6P running Android 7.1.2.
iOS vs Android
Currently, vMACs are implemented differently on iOS and Android. iOS uses vMACs more consistently. For example, on both Android devices, broadcast probe requests sent while not connected to an access point would occasionally alternate between vMACs and the real media access control addresses (rMACs), completely defeating the purpose of vMACs. Neither iOS nor Android uses vMACs for directed probe requests, allowing an attacker to monitor those as well.
iOS is a little peculiar in its MAC prefix. Organizations normally purchase MAC prefixes from the IEEE to ensure global uniqueness. However, when using vMACs, iOS does not use a unique prefix or even adhere to the IEEE standard. Instead, iOS creates a uniformly distributed random MAC address that is potentially not unique. Despite the randomness, the universal/local bit is still always set, establishing the MAC as virtual. This odd technique was validated in my testing. Android does adhere to the IEEE standard and has reserved the prefixes DA:A1:19 and 92:68:C3 for vMAC usage. Again, this was confirmed in my testing.
A sequence number is a piece of information that can be tracked from one packet to another in the 802.11 frame. For each packet sent out by the device, the sequence number is increased by 1. The field is 12 bits, so the counter runs up to 4095 and then resets to 0. If packets are fragmented, the same sequence number is used for all fragments, with the 4-bit fragment number field allowing up to 16 fragments.
Where sequence numbers become interesting is that they change predictability, with or without using a vMAC. If a device sends out a directed probe request with its rMAC and then sends out a global probe request using its vMAC, the sequence number still increases by one. The pattern can then be followed for all subsequent packets that use a vMAC. This observation was confirmed .
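The wrap-around bookkeeping this implies is simple. The sketch below (function names and the 25-count window are my own choices, matching the range discussed later in this write-up) shows the modular distance check used to decide whether an observed frame plausibly continues a known counter:

SEQ_MODULUS = 4096  # the 802.11 sequence number field is 12 bits

def seq_distance(baseline: int, observed: int) -> int:
    """Forward distance from the baseline counter to the observed one, modulo 4096."""
    return (observed - baseline) % SEQ_MODULUS

def matches_baseline(baseline: int, observed: int, window: int = 25) -> bool:
    """True if the observed sequence number falls within `window` counts ahead of
    the baseline, allowing for missed frames and for the wrap at 4095."""
    return 0 < seq_distance(baseline, observed) <= window

# A frame seen with the real MAC establishes the baseline; later randomized-MAC
# frames are attributed to the same device if they land inside the window, and
# the baseline is advanced on every match.
print(matches_baseline(4090, 6))    # True: the counter wrapped past 4095
print(matches_baseline(4090, 300))  # False: too far ahead to be the same device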
Global vs Directed Probe Requests
Probe requests are 802.11 management frames that probe the area for possible SSIDs to connect to. In a global probe request, a broadcast is sent out to all SSIDs in the area for information about what they provide. If a specific SSID needs to be checked to see whether it is in range, a directed probe request is used. The differences in how the two kinds of probe requests are handled were described in the iOS vs Android section. Directed probe requests can also enable a karma or evil twin attack. Although these are discussed in the paper, I deemed them too invasive for the type of tracking I was looking for.
Authentication and Association Frames
Authentication and association frames are like directed probe requests in that they are sent with the device’s rMAC. In combination with a deauthentication attack, the device would continue to try to reconnect to the access point. As with the evil twin and karma attacks, deauthing the device to provoke authentication/association frames would be too invasive for the level of stealth I was looking to achieve. It is worth noting that the claims made in the original paper regarding both of these frames are valid and confirmed.
RTS Frame Attack
The RTS frame attack was the one I was most excited to test. Upon being sent an RTS frame containing the target’s rMAC and a random SSID, the device should respond with a broadcast CTS for the specified random SSID. The existence of the CTS frame and the attacker-specified SSID would allow the device to be tracked with little effort: if the device was in the area, a CTS frame would be seen; otherwise it was not in the area. Although this attack is active, it is not nearly as invasive as the other attacks. Unfortunately, I was not able to replicate the attack. Even after capturing legitimate RTS frames and replaying them to the device on different channels with correct checksums, the devices did not respond. Future work would be reaching out to the authors to see how this attack was carried out, as it is the most interesting of the described attacks. As of now, it remains unverified.
vMAC Probe Request Fingerprinting
In addition to tracking sequence numbers, fingerprinting techniques can be used to further determine whether a packet belongs to the desired device. A strong fingerprinting technique is to look at the order of 802.11 frame parameters, which often varies greatly between device models. This method is claimed to reach roughly 80% accuracy when looking at broadcast probe requests. The content of some parameters must be treated with care: for instance, the content of the channel and SSID parameters varies between requests and thus does not allow useful fingerprinting of the device. Figure 2 shows the 802.11 parameters of a broadcast probe request from the testing iPod, and a sketch of extracting such an ordering fingerprint follows the figure.
Figure 2: Packet capture of 802.11 broadcast probe request using vMAC
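The following Python sketch, assuming Scapy is available and that a probe request has already been captured on an interface in monitor mode, pulls out the ordered tuple of information-element IDs while ignoring their contents (so the varying SSID and channel values do not affect the fingerprint); the function name is my own:

from scapy.all import Dot11Elt  # assumes Scapy is installed

def probe_fingerprint(pkt) -> tuple:
    """Return the ordered tuple of information-element IDs in a probe request.

    The ordering of the elements tends to be stable per device model, which is
    the fingerprinting signal described above; element contents such as the
    SSID or channel are deliberately ignored.
    """
    ids = []
    elt = pkt.getlayer(Dot11Elt)
    while elt is not None:
        ids.append(elt.ID)
        elt = elt.payload.getlayer(Dot11Elt)
    return tuple(ids)

Two probe requests from different vMACs that share both a plausible sequence-number delta and an identical fingerprint tuple can then be treated as coming from the same device.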
To track a device, the attack uses a couple of different techniques. The first is the sequence number tracking described previously. A baseline sequence number is created when a packet containing the target’s rMAC is seen. On every successful identification of the target, the baseline sequence number is updated. In testing, it was found to be more effective to allow a range of sequence numbers. This accounts for packets that were not sniffed, for a wide range of reasons, and for cases where the device temporarily moved out of attack range while still not associated with an access point. Implementing sequence number tracking this way did not come without tradeoffs, which will be discussed in the considerations section.
The second technique implemented was fingerprinting. Once a baseline is established with sequence numbers, it can be followed to the next packet with a vMAC, and that packet can be fingerprinted. Doing so improves accuracy going forward. However, there is again a tradeoff to using this technique, since a guess is made in linking the baseline sequence number to the packet being fingerprinted. This will be discussed further in the considerations section.
Assuming continually updated sequence number baselines and a correct initial fingerprint, the device will be tracked continuously. This implementation is passive only; there is no active interaction between the attacker and the tracked device.
This attack assumes the following:
- You know the MAC of the device you are looking for
- You are in range of your target
- You know whether the device is iOS or Android
- Not required, but this helps accuracy for the reasons specified in the iOS vs Android section
The paper claims roughly 80% accuracy for the fingerprinting techniques. However, it does not consider combining sequence numbers and 802.11 parameters, or how well 802.11 parameters distinguish separate devices of the same model. It is still not clear how much devices of the same type differ from one another in their probe requests; a larger sample of devices would need to be tested specifically for this.
Adjustments to the sequence number handling were needed to account for packet loss and for devices moving out of range. To determine how likely it is to pick the incorrect device after gaining an initial sequence number baseline, the following formula was used: (1 - (1 - 25/4096)^M) * 100. This gives the likelihood, as a percentage, that the wrong device will be picked after an initial baseline, for a sequence number window of 25 and M MACs in the receiving area. For a better understanding, Figure 3 extrapolates the number of devices in the area against the likelihood that the attack will not pick the desired target. Fewer than 113 devices need to be within range of the receiver for the first tracking attempt to remain at least 50% accurate. A short calculation reproducing this threshold follows Figure 3.
Figure 3: Likelihood of missing the target on the first try with M MACs in the area and a sequence number range of 25.
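A quick way to reproduce that figure (a minimal sketch; the window size and device counts are simply the values discussed above):

def p_wrong_pick(num_macs: int, window: int = 25, seq_space: int = 4096) -> float:
    """Percent chance that at least one other device's sequence number also lands
    inside the tracking window: (1 - (1 - window/seq_space)^M) * 100."""
    return (1 - (1 - window / seq_space) ** num_macs) * 100

for m in (10, 50, 113, 200):
    print(m, round(p_wrong_pick(m), 1))
# At around 113 nearby devices the chance of a wrong pick reaches roughly 50%,
# matching the threshold quoted above.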
Current implementations of vMACs are broken. It was shown that purely passive attacks allow device tracking even with best-case use of vMACs. Still, there is room for improvement in the script, such as combining more fingerprinting techniques to increase accuracy and adding RTS attack functionality. I still feel it is possible to validate the RTS attack despite the current failed attempts. The way the script operates allows tracking in fairly large areas where fewer than 100 devices are present. If the RTS attack were functional, an extremely large number of devices could be in the area and the attack would still give 100% accuracy; however, it would then become active. All other aspects of the paper were verified except UUID fingerprinting, as it only affects a small range of devices and requires a large amount of precomputation, similar to rainbow tables.
Martin, Jeremy, et al. “A Study of MAC Address Randomization in Mobile Devices and When it Fails.” arXiv preprint arXiv:1703.02874 (2017). [https://arxiv.org/pdf/1703.02874v1.pdf]
Vanhoef, Mathy, et al. “Why MAC Address Randomization is not Enough: An Analysis of Wi-Fi Network Discovery Mechanisms.” Proceedings of the 11th ACM on Asia Conference on Computer and Communications Security. ACM, 2016.
|
<urn:uuid:d5b2b8c5-4cad-43a2-b183-e641967140ca>
|
CC-MAIN-2017-47
|
https://ritcsec.wordpress.com/2017/05/16/mobile-phone-tracking-an-experiment-validation/
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806310.85/warc/CC-MAIN-20171121021058-20171121041058-00069.warc.gz
|
en
| 0.936679 | 2,101 | 3 | 3 |
In those days, the Judge Martial was independently responsible for military discipline.
In the 1630s, the phrase 'judge advocate’, or variants thereof, came increasingly into common use. In 1639, an Advocate served with the Army of King Charles I. On June 7, 1645, an Ordinance for constituting Commissioners and a Council of War for trial of all persons . . . appointed a Judge Advocate and a Provost Marshall.
The Ordinance enabled and authorized the Judge Advocate to receive all “... accusations, articles, complaints and charges against all or any of the offenders...” By 1659, the Office of the JAG was created to supervise ‘courts martial’.
Prior to 1893, in the U.K. the JAG was a Privy Councillor, a junior Minister in the government, usually a Member of Parliament and a spokesman for the Commander in Chief in Parliament. In those days, the appointment was regarded as a political office and the JAG had direct access to the Sovereign on matters pertaining to his office.
UNITED KINGDOM MAKES REFORMS IN 1948
In 1948, the UK Parliament decided to appoint a civilian member of the bench as the JAG and entrusted him with the exclusive role of chief magistrate of the penal military justice system accountable not to the military chain of command but to the Lord Chancellor.
BEHIND THE TIMES
In 1997, in the wake of the Somalia Inquiry, the Canadian JAG was divested of all judicial functions when full-time military judges were appointed to ensure, to the fullest extent possible, judicial independence from the chain of command.
Inexplicably, however, the JAG was allowed the continued use of the title of “Judge”, which not only misrepresents the factual reality but causes the office to be wrongly viewed and perceived, to this day, as the titular head of the military judicial apparatus, which it clearly is not.
Most curiously, in addition to acting as the military legal Advisor to a host of senior officials and the military chain of command, the Canadian JAG also retained supervision of both the military prosecutorial and defence functions.
Given that the current JAG, Major-General Blaise Cathcart, is expected to retire in June 2017, his replacement should be appointed as the CF Legal Advisor to the Armed Forces. He should simultaneously be divested of his supervisory role over the Prosecution and the Defence functions.
As head of the JAG branch and the JAG Command, the JAG not only determines the total number of personnel to be assigned to each such function, but decides which military lawyers will be posted to each of these two directorates. The JAG also has the authority to manage the careers of all legal officers serving in these two directorates, including their postings, assignments, appointments, selection for postgraduate training, performance evaluation and, eventually, promotion to higher rank. This is a matter of continued debate, raising very serious apprehension about the real independence and impartiality of those particular offices.
|Maj.-Gen. Blaise Cathcart|
|
<urn:uuid:e0353c6f-2228-49b4-a73c-74e5e5572b96>
|
CC-MAIN-2017-39
|
http://globalmjreform.blogspot.com/2017/04/the-canadian-jag-office-is-clearly.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818688103.0/warc/CC-MAIN-20170922003402-20170922023402-00698.warc.gz
|
en
| 0.972592 | 636 | 3.1875 | 3 |
Section 3: Archaic People
The Archaic era began about 7,500 years ago. The changes in climate led to changes in plants and animals. The Archaic (ar-kay-ik) was a new era because people made different spear points and used new tools. Some animals became extinct. Some animals, such as bison (sometimes called buffalo), became smaller. People of the Archaic era adapted to these changes.
People of the Archaic era were the descendants of the people who lived in the Paleo-Indian era. As their population increased, the people continued spreading throughout the continents of both North America and South America. They lived in small bands, or groups, and continued their nomadic way of life following herds of game animals and gathering plants for food.
Archaeologists have found remains of the Archaic people’s culture scattered throughout the plains. A hard stone called flint was mined by people along the Knife River in North Dakota. Spear points made of this flint were used by hunters in North Dakota and were also traded to people in other areas. Evidence of the use of flint has also been dated back to the time of the Paleo-Indians.
A weapon called an atlatl (at-lat-ul) was developed for hunting by the Archaic people. This weapon was a stick with a handle on one end and a hook on the other end. With the atlatl, hunters could throw darts much harder and farther than they could throw a spear. People of the Archaic era made knives, hammers, scrapers and other tools from flint or animal bones. Many of these stone and bone tools have been found at sites of ancient peoples’ activities in North Dakota.
|
<urn:uuid:663a0191-57f6-4838-8469-f521cc4f82e1>
|
CC-MAIN-2020-05
|
https://www.ndstudies.gov/gr4/american-indians-north-dakota/section-3-archaic-people
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251728207.68/warc/CC-MAIN-20200127205148-20200127235148-00399.warc.gz
|
en
| 0.982485 | 353 | 3.890625 | 4 |
Generate third-order polynomial trajectories
[q, qd, qdd, pp] = cubicpolytraj(wayPoints, timePoints, tSamples) generates a third-order polynomial trajectory that achieves a given set of input waypoints with corresponding time points. The function outputs positions, velocities, and accelerations at the given time samples, tSamples. The function also returns the pp form of the polynomial trajectory with respect to time.
Compute a cubic trajectory using the cubicpolytraj function with a given set of 2-D xy waypoints. Time points for the waypoints are also given.
wpts = [1 4 4 3 -2 0; 0 1 2 4 3 1]; tpts = 0:5;
Specify a time vector for sampling the trajectory. Sample at a smaller interval than the specified time points.
tvec = 0:0.01:5;
Compute the cubic trajectory. The function outputs the trajectory positions (q), velocities (qd), accelerations (qdd), and polynomial coefficients (pp) of the cubic polynomial.
[q, qd, qdd, pp] = cubicpolytraj(wpts, tpts, tvec);
Plot the cubic trajectories for the x- and y-positions. Compare the trajectory with each waypoint.
plot(tvec, q) hold all plot(tpts, wpts, 'x') xlabel('t') ylabel('Positions') legend('X-positions','Y-positions') hold off
You can also verify the actual positions in the 2-D plane. Plot the separate rows of the q vector and the waypoints as x- and y-positions.
figure plot(q(1,:),q(2,:),'-b',wpts(1,:),wpts(2,:),'or') xlabel('X') ylabel('Y')
wayPoints — Waypoints for trajectory
Waypoints for the trajectory, specified as an n-by-p matrix, where n is the dimension of the trajectory and p is the number of waypoints.
Example: [1 4 4 3 -2 0; 0 1 2 4 3 1]
timePoints — Time points for waypoints of trajectory
Time points for the waypoints of the trajectory, specified as a p-element vector.
Example: [0 2 4 5 8 10]
Specify optional comma-separated pairs of Name,Value arguments. Name is the argument name and Value is the corresponding value. Name must appear inside quotes. You can specify several name and value pair arguments in any order as Name1,Value1,...,NameN,ValueN.
Example: 'VelocityBoundaryCondition',[1 0 -1 -1 0 0; 1 1 1 -1 -1 -1]
'VelocityBoundaryCondition' — Velocity boundary conditions for each waypoint
zeros(n,p) (default) | n-by-p matrix
Velocity boundary conditions for each waypoint, specified as the comma-separated pair consisting of 'VelocityBoundaryCondition' and an n-by-p matrix. Each row corresponds to the velocity at all p waypoints for the respective variable in the trajectory.
Example: [1 0 -1 -1 0 0; 1 1 1 -1 -1 -1]
q — Positions of trajectory
Positions of the trajectory at the given time samples in tSamples, returned as an m-element vector, where m is the length of tSamples.
qd — Velocities of trajectory
Velocities of the trajectory at the given time samples in tSamples, returned as a vector.
qdd — Accelerations of trajectory
Accelerations of the trajectory at the given time samples in tSamples, returned as a vector.
pp — Piecewise polynomial
Piecewise polynomial, returned as a structure that defines the polynomial for each section of the piecewise trajectory. You can build your own piecewise polynomials using mkpp, or evaluate the polynomial at specified times using ppval. The structure contains the fields:
breaks: p-element vector of times when the piecewise trajectory changes forms. p is the number of waypoints.
coefs: n(p–1)-by-order matrix for the coefficients of the polynomials. n(p–1) is the dimension of the trajectory times the number of pieces. Each set of n rows defines the coefficients for the polynomial that describes each variable in the trajectory.
pieces: p–1. The number of breaks minus 1.
order: Degree of the polynomial + 1. For example, cubic polynomials have an order of 4.
dim: n. The dimension of the control vector.
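For readers working outside MATLAB, a rough Python equivalent of the same clamped cubic interpolation can be built with SciPy's CubicHermiteSpline. This is only a sketch of the underlying idea, not the toolbox implementation; it assumes the function's default boundary condition of zero velocity at every waypoint.

import numpy as np
from scipy.interpolate import CubicHermiteSpline  # assumes SciPy is available

# Same 2-D waypoints and time points as the MATLAB example above.
wpts = np.array([[1, 4, 4, 3, -2, 0],
                 [0, 1, 2, 4, 3, 1]], dtype=float)
tpts = np.arange(6, dtype=float)
vel = np.zeros_like(wpts)  # default: zero velocity at each waypoint

# One piecewise-cubic spline per trajectory dimension (rows of wpts).
splines = [CubicHermiteSpline(tpts, wpts[d], vel[d]) for d in range(wpts.shape[0])]

tvec = np.arange(0, 5.01, 0.01)
q = np.vstack([s(tvec) for s in splines])                  # positions
qd = np.vstack([s.derivative(1)(tvec) for s in splines])   # velocities
qdd = np.vstack([s.derivative(2)(tvec) for s in splines])  # accelerations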
|
<urn:uuid:467a4642-2df0-42fc-962f-cdaf7697483b>
|
CC-MAIN-2020-29
|
https://it.mathworks.com/help/robotics/ref/cubicpolytraj.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657145436.64/warc/CC-MAIN-20200713131310-20200713161310-00135.warc.gz
|
en
| 0.677044 | 978 | 2.671875 | 3 |
This study was conducted in order to investigate the effective language learning strategies employed by Cambodian students studying English as a second or foreign language. In the research literature, well-known researchers such as Oxford, Chamot, O’Malley, Stern and Rubin have all investigated “Language Learning Strategies”. The conceptual framework of this research was drawn from Oxford’s theory, which divides strategies into direct strategies (memory, cognitive and compensation strategies) and indirect strategies (metacognitive, affective and social strategies). The sample of the research is about 100 advanced-level students at one private English school who scored 80 or above on a proficiency test.
All selected participants are first required to answer questionnaires about their learning strategies, prepared in two languages, English and Khmer. Secondly, the researcher will interview some of the students in order to ensure that what they have answered in Oxford’s SILL questionnaire (1990) is accurate and trustworthy; moreover, the researcher also needs to interview the 10 teachers responsible for the advanced classes in order to ensure that the data from students in different classes are informative and reliable. All data obtained will be analyzed using SPSS and then transferred to Excel to find the means and standard deviations.
After analyzing the data, the researcher found that the majority of outstanding Cambodian students preferred compensatory strategies first and cognitive strategies second. Furthermore, it is anticipated that after the research project students will be able to work well with the English language by employing new learning strategies that are effective and reliable.
Keywords: Language Learning Strategies, Successful Language Learners, EFL, English as a Foreign Language (EFL) Learners
1.1 Background of the Study
English has come to be regarded as the predominant global language for people living in developing countries at the start of the twenty-first century because it is an essential tool for accessing international information and for global communication, and it can help them find well-paid jobs in locally and internationally recognized companies. In Cambodia, as elsewhere in the globalized world, English has gained a crucial role in different areas of life over the last decades. This is evidenced by the increasing number of English schools, serving children through adults, in Cambodia. At the moment, the market for learning and teaching English in Cambodia is vast. In fact, over the past decades, McLaren (2000) observed that Cambodian people have been motivated to study English because it is seen as worthwhile for achieving a better lifestyle (as cited in Koji Igawa, 2008). As a result, English is currently the most common language used for international business and education in Cambodia (Igawa, 2008). Igawa (2010) adds that English is now the most popular language in Cambodia and is regarded as a bridge to a well-paid job and to scholarships abroad. In addition, futurists Naisbitt and Aburdene (1990) described the prospects of English in glowing colors:
The dramatic increase of development of a single global lifestyle is because of the proliferation of English language and culture transmission. It is now about 1 billion and is probably to exceed to 1.5 billion English speakers in the world by 2000. Language has been taught around the world; English is not replacing the original language but it is supplementing the others (as cited in Koji Igawa).
Similarly, English is now an important means of communication in every area of life, including science, business, aviation, entertainment, television, the internet and diplomacy. A great proportion of published documents and materials around the world are written in English. According to Montgomery (2004), 80 to 90 percent of scholarly documents and journals, and (Graddol, 1999) much of the information stored on the internet, are written in or dominated by English (as cited in Aslan & Oktay, 2009). At the same time, Crystal (2003) estimated that native English speakers number about 400 million, while non-native speakers who use English as a second language (ESL) number more than 430 million and those who use it as a foreign language (EFL) around 730 million. Hence, the total number of English speakers reaches roughly one and a half billion worldwide (as cited in Aslan, 2009).
As mentioned above, it is clear that much of the world is studying English in order to attain a better standard of living. However, learning English remains difficult not just for Cambodians but also for other non-native speakers whose languages differ greatly from English. Cambodia, for example, is not an English-speaking country; therefore, English communicative competence is hard to acquire. In this context, language learning strategies can greatly assist language learners in Cambodia to learn English in more effective and efficient ways. Cohen (2005) argued that Language Learning Strategies (LLS) are essential for “language learning and teaching” for two main reasons: “(a) researcher can identify the metacognitive, cognitive, social and affective process involved in language learning by investigating the strategy use of second language learners, (b) less successful language learners can be assisted to be better language learners through effective strategy instruction. The second reason is more necessary for classroom pedagogy and some researchers (O’Malley & Chamot, 1990; Ozeki, 2002; Ikeda & Takeuchi, 2003) have claimed language learners can improve their language performance by using instructed learning strategy” (as cited in Kusumi, 2007).
1.2 Problem Statement
Having worked for years as an EFL teacher at a public institute where English is used as a support language, the researcher has observed that the majority of Cambodian students studying English there face obstacles in learning English because they live in a non-English-speaking country whose language is Khmer; therefore, they do not have many chances to sharpen and strengthen their English. Although the students change every year, their learning attitudes remain the same. Most of them do not have a successful educational history, and many of them are unaware of language learning strategies, according to the researcher’s observation. Furthermore, Cambodian English learners and teachers have a shortage of effective language learning and teaching strategies.
1.3 Aim and Objective of the Study
The purpose of this study is to investigate the effective language learning strategies used by learners, with specific emphasis on the number of strategies used and the differences across strategy domains, and to indicate the link between strategy use and level of success. In addition, the researcher aims to find out how well learning strategies help Cambodian students learn English. Brown (1987) illustrates the interaction between “what is learning and what is teaching”. Once effective language learning strategies have been identified, students can employ them to master the language. Also, the academic director can conduct workshops on how to learn the language successfully; moreover, teachers can better understand approaches that build on students’ abilities and needs.
1.4 Research Question
What are the language learning strategies that contribute to successful EFL learners?
1.5 Rationale of the Study
Cambodia is now moving through the “process of globalization” along with its membership in the ASEAN community (Igawa, 2008). Therefore, findings on the language learning strategies of successful EFL learners will be beneficial both for learners, who can make their learning strategies or styles more effective, and for educators, who can create more effective teaching methodologies to fulfill students’ needs. Furthermore, for a successful and professional educator, it is essential to understand clearly and deeply how to develop students’ awareness as well as their academic abilities. In addition, school principals or directors will be able to update the study program to match the job market. Consequently, the researcher sincerely hopes that this study will assist school principals and managers, educators and EFL learners in their teaching and studying respectively.
2.1 The Definition of Language Learning Strategies (LLS)
Educators and learners now pay much attention not only to ‘what to learn’ but also to ‘how to learn’. In order to achieve language learning fully, the focus is placed on the development of learning strategies generally; educators ought to be aware of the variety of techniques for transferring their knowledge of English to learners. With good language learning strategies, teachers can instruct their students well. However, the first step towards realizing this approach is to know what learning strategies are.
When discussing language learning strategies, most researchers focus mainly on how to use them more easily and more effectively and on offering useful help to language learners. Language learning strategies are therefore defined broadly by many researchers of second and foreign language learning, depending on the subject area, including pedagogy, psychology and linguistics.
From a linguistic perspective, LLS are generally defined as the ways learners acquire and use the English language effectively and correctly (Richards & Schmidt, 2002). Moreover, for Tsan (2008), LLS refers to the actions or techniques language students use in order to advance their language learning progress; for example, asking questions during teaching and giving feedback after reading.
Rubin (1975) defined “successful language learners” by just looking at their strategies and their learning performance. Also, Parrot defined a learning strategy as “a measure that the learner actively employs to assist or advance learning” (1993: 57).
Aslan (as cited in Rigny, 1985, and Oxford, 1990) defined “language learning strategies” as “operations employed by the learner to aid the acquisition, storage, retrieval, and use of information”. Scarcella and Oxford (1992, p.63) defined learning strategies as “steps, behaviors, specific actions or techniques”, such as making conversation or solving tough problems in language learning. Learning strategies, or learner strategies, are steps adopted by learners to achieve their learning and develop their language acquisition (Chamot & O’Malley, 1996; Nunan, 1996; Oxford, 1996; Lessard-Clouston, 1997).
In the meantime, Cohen (2007), agreeing with Oxford’s ideas, stated that the purpose of language learning strategies is to develop learning, to work on specific tasks, to cope with specific problems, to make learning easier, faster and more enjoyable, and to compensate for deficits in learning.
English as Foreign Language (EFL): is defined as “English language learning occurs in a nonnative English environment where the native language is spoken” (Tanveer. M &Yang. M, 2010)
Even though language learning strategies are not simple to define, the definitions all point to one common idea: helping learners make their language learning easier and acquire the language more effectively and successfully.
2.2. Importance of language learning strategies
Language learning strategies are special ways of helping people to understand, learn and memorize new knowledge. As an old Chinese saying goes, “Teaching a man how to fish is better than giving him a fish.” Teachers cannot stay at a learner’s side throughout life, so these strategies play an important role in developing learner autonomy. Learners can make the best use of these strategies to establish the ability for self-directed learning. If people use these strategies efficiently, they can learn by themselves and examine their own progress, and gradually they build up their self-confidence. Therefore, having proper learning strategies can improve learners and enhance their language abilities.
2.3 Taxonomy of Language Learning Strategies
In the early studies on language learning strategies, researchers involved in language learning held different concepts of how learning strategies should be classified. Several researchers (Cohen, 2002; Ellis, 1997; O’Malley, 1985) identify similar learning strategies categorized as metacognitive, cognitive, social and communication strategies. In one view, there are three main types of learning strategies: “metacognitive, cognitive and social strategies” (Ghani as cited in O’Malley and Chamot, 1990). Another researcher classified language learning strategies into three types: “learning strategies, communication strategies and social strategies” (Zare as cited in Rubin, 1987). In Oxford’s classification, language learning strategies are divided into two main categories, direct strategies and indirect strategies, which are subdivided into a total of six groups (Zare as cited in Oxford, 1990). Stern (1992), by contrast, stated that there are five types of language learning strategies: (1) management and planning strategies, (2) cognitive strategies, (3) communicative-experiential strategies, (4) interpersonal strategies and (5) affective strategies.
In researcher’s opinion, researcher would prefer to follow the Oxford’s taxonomy rather than the others’ views because its taxonomy clearly stated the procedures and steps of language learning strategies divided into two major parts named direct strategies and indirect strategies. Remarkably, these two strategies are not apart from each other but they are relatively interconnected. Most interestingly, a few main subdivisions for each strategy are also closely interrelated and help one another.
2.4 Conceptual framework of LLS
According to Oxford (1990), LLS are categorized into two major groups: direct strategies and indirect strategies. The first comprises activities that directly affect the learning process, including “memory, cognitive, and compensation strategies” that help learners use the target language despite communication gaps. The latter comprises indirect activities that influence the learning process and consists of “metacognitive, affective and social strategies”. The conceptual framework that informed this study uses this classification to explore the language learning strategies of successful EFL learners.
Figure 2.1 Interrelationships between Direct and Indirect Strategies and Among the Six Strategy Groups (Oxford, 1990)
2.4.1 Direct strategies
Direct strategies play a very important role because they help learners store and recover information easily and produce the language. According to Oxford (1990), direct strategies are the learning strategies that involve the target language directly, that is, the mental processing of the language (for example, remembering information and practicing the target language), and the three subdivisions of direct strategies each carry out this processing in a different way. Oxford (1990) classified direct strategies into three categories: memory strategies, cognitive strategies and compensation strategies. These categories are described as follows:
“Memory strategies, such as grouping or using imagery, have a highly specific function: helping students store and retrieve new information. Cognitive strategies, such as summarizing or reasoning deductively, enable learners to understand and produce new language by many different means. Compensation strategies, like guessing or using synonyms, allow learners to use the language despite their often large gaps in knowledge”(Oxford, 1990; p.37).
2.4.1.1 Memory strategies
Memory strategies are defined as the learning strategies used for storing, entering and retrieving information. Memory strategies help learners to connect one L2 concept to another, although the connections involved are not necessarily deep. Additionally, memory strategies help learners learn and organize things in order; with other strategies in this group, language learners learn and retrieve material through “sounds, images, the combination of sounds and images, body movement, mechanical means or a location” (Aslan as cited in Oxford, 1990). Stevick (1982), McCarthy (1990), Holden (1999) and Cohen (2002) mention similar methods for helping new language learners remember vocabulary and structures easily. “Memory strategies can contribute powerfully to language learning” (Aslan, 2009).
Oxford (1990) distinguishes memory strategies into another set of four: creating mental linkages, applying images and sounds, reviewing well and employing action. Here is the diagram of the memory strategies.
Creating Mental Linkages
Placing new words into a context
Applying images and sounds
Representing sounds in Memory
Using physical response or sensation
Using mechanical techniques
Figure: 2.2: Diagram of the memory strategies (Oxford, 1990, p.18)
2.4.1.2 Cognitive strategies
Cognitive strategies are essential for enhancing students’ critical thinking and enable learners to manipulate or transform the target language; for these reasons, cognitive strategies are very useful for learning a new language (Aslan, 2009). Cognitive strategies refer to the strategies learners employ to think critically, analyze information, take notes, summarize and so on (Oxford, 1990). Chamot (1989) stated that with these strategies learners acquire the target language and complete tasks directly by themselves. According to previous studies, cognitive strategies are prominently connected with L2 proficiency (Kato, 1996; Oxford & Ehrman, 1995; Oxford, Judd, and Giesen, 1998; Park, 1994). Moreover, cognitive strategies have become popular among language learners owing to the studies conducted by Oxford (1989, 1990).
Oxford (1990) illustrated that cognitive strategies comprise four sets: practicing; receiving and sending messages; analyzing and reasoning; and creating structure for input and output. Here is the cluster of cognitive strategies.
Formally practicing with sounds & writing system
Recognizing and using formulas and patterns
Receiving and sending messages
Getting the idea quickly
Using resources for receiving and sending messages
Analyzing and reasoning
Analyzing contrastively (across languages)
Creating structure for input and output
Figure: 2.3: Diagram of the cognitive strategies (Oxford, 1990, p.18-19)
2.4.1.3 Compensation strategies
“Compensatory” refers to the reduction of the bad effects of something, according to the Longman Dictionary of Contemporary English. For Oxford (1990, p.47), compensation strategies are strategies that allow learners to use the new language for either comprehension or production despite limitations in their knowledge; these strategies make up for an insufficient repertoire of grammar and, especially, of vocabulary. A dictionary of applied linguistics (2002) defines compensatory strategies as strategies which help learners, such as those who lack language experience, to comprehend missing information; for example, learners may be unable to read or understand a text easily and efficiently because of a few difficult vocabulary items. Nijmegen and Kasper (1983) stated that compensatory strategies are strategies which language learners use to get their intended meaning across during verbal communication.
As above mention, these strategies assist learners to produce spoken and written expressions in the target language even though they have inadequate knowledge. Aslan. O (2009) mentioned that “compensatory strategies for production serve as helper in carrying on employing language. Apart from these, some of these strategies help learners become more fluent in their prior knowledge”.
Oxford classified compensatory strategies into ten sub-strategies under two main sets. The ten strategies are guessing by linguistic clues, guessing by other clues, switching to the mother tongue, getting help, using mime or gesture, avoiding communication partially or totally, selecting the topic, adjusting or approximating the message, coining words, and using circumlocution or synonym. Below is the diagram of compensatory strategies taken from Oxford’s theory.
Overcoming limitations in speaking and writing
Switching to the mother tongue
Using mime or gesture
Avoiding communication partially or totally
Selecting the topics
Adjusting or approximating the message
Using a circumlocution or synonym
Using linguistic clues
Using other clues
Figure: 2.4: Diagram of the compensatory strategies (Oxford, 1990, p.18-19)
2.4.2 Indirect strategies
According to Oxford (1990), indirect strategies can be classified as metacognitive, affective, and social strategies. She added that all of these strategies are indirect, since they support language learning without being directly connected with the target language. She states the following:
“Metacognitive strategies enable language learners to manage their own cognition- that is, to coordinate the learning process by using functions such as centering, arranging, planning the organization either written and spoken discourse, and evaluating the comprehension of receptive language and language production. Affective strategies help to regulate emotions, motivations, and attitude. Social strategies help students learn through interaction with others using either interpersonal or intrapersonal communication” (p.135).
Similarly, Kozmonová (2008) agreed with Oxford that indirect strategies, unlike direct strategies, are not directly related to the target language. Indirect strategies are connected with the management of language learning, such as planning and organizing time for learning, evaluating the learner’s progress, paying attention to emotions, learning with other people, and so on.
More significantly, Oxford (1990) and Filiz (2005) claimed that direct and indirect strategies are interconnected and are beneficial in all language learning circumstances, helping language learners to improve and strengthen all four macro skills: listening, reading, speaking, and writing.
2.4.2.1 Metacognitive strategies
Tsan (2008) indicated that metacognitive strategies refer to the management of the overall learning process. In addition, Al-Buainain (2010) showed that metacognitive strategies involve planning, organizing, monitoring and evaluating one’s language learning, and that they help learners to “gain control over their emotions and motivations related to language learning through self-monitoring”. According to Oxford (1990), metacognitive strategies are vital methods which successful language learners need to employ to coordinate the process of language learning. For instance, students who are unfamiliar with vocabulary, confused by grammar rules, and so forth need these strategies.
Cohen (2002, p.3) accurately describes metacognitive strategies as those which “[…] deal with pre-assessment and pre-planning, on-line planning and evaluation, and post-evaluation of language learning activities and of language use events”. Researcher, Ellis (1997, p.77) asserted that “metacognitive strategies are those involved in planning, monitoring and evaluating learning. Additionally, O’ Malley and Chamot mentioned that these strategies involve in planning and thinking about learning, such as planning one’s learning, monitoring one’s own speech or writing, and evaluating how successful a particular strategy is (1990, p.44).
As the definitions above suggest, metacognitive strategies commonly refer to learners’ planning, organizing, and evaluating of their own learning. Hence, students can regain their focus by using metacognitive strategies and can use them to make good use of other necessary learning strategies for a successful outcome.
Metacognitive strategies comprise eleven sub-strategies grouped into three sets: centering your learning, arranging and planning your learning, and evaluating your learning (Oxford, 1990). Here is the diagram of metacognitive strategies.
Centering your learning
Overviewing & linking with already known material
Delaying speech production to focus on listening
Arranging and planning your learning
Finding out about language learning
Setting goals and objectives
Identifying the purpose of a language task
Planning for language task
Seeking practice opportunities
Evaluating your learning
Figure: 2.5: Diagram of the metacognitive strategies (Oxford, 1990, p. 20)
2.4.2.2 Affective strategies
The term “affective” refers to emotions, attitudes, motivations and values (Oxford, 1990; Cohen, 2002, p. 3). Tsan (2008) defined affective strategies as procedures that help learners identify their mood and anxiety level. Good language learners employ different types of affective strategies during the language learning process. Learning another language can at times be frustrating; it can arouse feelings of unfamiliarity and confusion, and in some cases learners may not hold a positive view of native speakers. Good language learners, however, are relatively aware of these emotions, and they try to build positive feelings towards the foreign language and its speakers as well as towards the learning activities. Positive feelings lead to better performance in language learning, and training can help students to face these difficult feelings and overcome them, by drawing attention to possible frustrations or addressing them as they come up (Chamot, 1992). Researchers classify affective strategies differently: Ellis and Sinclair (1989) group them with metacognitive strategies, whereas Ellis (1997) includes them among social strategies.
Therefore, through affective strategies, learners can gain control over the emotions, attitudes, motivations and values involved in learning a new language.
According to Oxford (1990), affective strategies comprise three sets of sub-strategies: “lowering your anxiety, encouraging yourself and taking your emotional temperature”. The diagram of the affective strategies is shown below.
A. Lowering your anxiety
- Using progressive relaxation, deep breathing and meditation
B. Encouraging yourself
- Making positive statements
- Taking risks wisely
C. Taking your emotional temperature
- Listening to your body
- Using a checklist
- Writing a language learning diary
- Discussing your feelings with someone else
Figure 2.6: Diagram of the affective strategies (Oxford, 1990, p. 20)
Social strategies
Social strategies are indirect strategies that involve communication with other learners or with native speakers. They are the activities through which language learners practice their existing knowledge of the language in interaction (Zare, as cited in Rubin, 1987). Rubin (1975) concluded that successful language learners have a strong drive to communicate and to learn from communication […]. In short, whenever learners interact with others they are employing these strategies; relationships with other people matter because they help us accomplish things we cannot finish by ourselves.
Social strategies therefore help learners to work with others and to understand the culture as well as the language; as Aslan (as cited in Oxford, 1990) states, “language is a form of social behavior.” Social strategies thus play a crucial role in communicative language learning for both second and foreign language learners.
According to Oxford (1990), six skills are listed under three sets of social strategies: “asking questions, cooperating with others and empathizing with others”. The diagram of the social strategies is shown below.
A. Asking questions
- Asking for clarification or verification
- Asking for correction
B. Cooperating with others
- Cooperating with peers
- Cooperating with proficient users of the new language
C. Empathizing with others
- Developing cultural understanding
- Becoming aware of others' thoughts and feelings
Figure 2.7: Diagram of the social strategies (Oxford, 1990, p. 21)
This study focuses on the language learning strategies used by Cambodian students. A survey questionnaire and interviews will be used to find out which effective strategies are employed in learning English as a second language (ESL) or foreign language (EFL). The participants are students and home class teachers studying and teaching at one private English school in Takhmau town, Kandal province. This school was chosen because it is close to the researcher's accommodation, which makes it convenient to travel to and to gather rich information. The school runs the same programs for full-time and part-time students and offers several levels of English education: beginner (levels 1-3), elementary (levels 4-6), pre-intermediate (levels 7-9), upper-intermediate (levels 10-12) and advanced (levels 13-15). According to information provided by the school principal, there are 120 students in the beginner classes, 250 in the full-time and part-time elementary classes, 200 in the pre-intermediate classes, 200 in the upper-intermediate classes and 170 in the advanced classes. In total, the school reports 820 students: 450 at the basic levels, 200 from pre-intermediate upwards and 170 at the advanced level. The sample for this study is therefore the 170 advanced-level students; they were selected because their English is stronger than that of students at the other levels.
The researcher will collect the data in two ways: first, through a questionnaire administered to the selected advanced-level students; and second, through interviews with selected students and teachers.
3.2 Sample and Sampling
This research will be conducted in one private English school in Takhmau town, Kandal province, and approval from the school principal and the other participants will be obtained before the study begins. Only 100 students, those whose proficiency test scores are around 80 or above, will be invited to complete the survey questionnaire based on Oxford's SILL (1990); the remaining students will not be invited because they do not meet this criterion. From these, 20 students, numbered consecutively, will be randomly selected to take part in structured interviews, along with 10 home class teachers.
With the principal's approval and the agreement of the home class teachers, the questionnaire will be given to the selected advanced-level students to complete during their study time in order to find out how they learn English smoothly and effectively. The questionnaire consists of closed-ended items that ask participants to circle a number from 1 to 5, so that the researcher can see clearly which effective language learning strategies these outstanding participants employ.
After the questionnaires are completed, and because of time constraints, 20 students randomly selected from the 100 will be invited to take part in a structured face-to-face interview in an assigned classroom. By interviewing these students, the researcher aims to confirm that their responses are accurate and reliable when compared with their answers to the closed-ended questionnaire items.
Additionally, to make the data more reliable and trustworthy, the researcher will conduct structured face-to-face interviews with 10 home class teachers. The teachers are interviewed because they can give dependable accounts of their students' academic performance; students' answers alone may not be sufficient, so the teachers' participation in the study is needed.
3.3 Research Design
Approval from the school principal, teachers and students is essential, because without their permission the data collection, and therefore the study, cannot be completed. The researcher will write a letter, accompanied by a letter from the Royal University of Phnom Penh (RUPP), asking the school principal for approval to conduct the research at his school. The letter will set out the details of the research, the aim and objectives of the study, and the rationale related to the research topic.
Teachers and students will be invited to participate in the study. The researcher will therefore send a formal permission letter to the 10 relevant home class teachers and another to the selected students; the students will receive full information about participating from their home class teachers. As with the principal's letter, these letters will state the details of the research, its aim and objectives, and its rationale.
Once the selected students have agreed to take part, the questionnaires will be sent to the home class teachers to distribute to the participating students, who will complete them in the last 20 minutes of class while the remaining students leave earlier. Before the students begin, the class teachers will explain what they are expected to do. After completing the questionnaires, the students will return them to their class teachers, who will then deposit the completed questionnaires, sealed in A4 envelopes, at the help desk.
3.3.1 Pilot Testing
To avoid unexpected problems in data collection, the researcher will carry out a pilot test to uncover any misunderstandings or unintentional mistakes before the questionnaire is delivered to the participants. The pilot test needs to be carried out about one week before the real data collection begins, to help ensure the validity and reliability of both the questionnaire and the structured interview questions. It will also help to improve the content of the questions and the quality of the resulting scores.
The SILL will be translated into Khmer to avoid any problems participants might have in understanding the items and response scales. The translation process will go through four stages: translation, assessment, editing and pretesting. First, the researcher will translate the SILL into Khmer, preserving the referential meaning of the words as far as possible. Second, the Khmer version will be assessed against the source version by an English-Khmer translator, who will be asked to judge the textual quality of the translation in terms of equivalency. Third, the revised version will be checked by a Khmer linguist for naturalness, clarity and smooth reading. Finally, the revised version will be pretested by asking some advanced learners to complete the survey; on completing the SILL they will be invited to comment on the wording and clarity of the items and the response scales, and in general they are expected to express satisfaction.
Similarly, the structured interview questions for teachers and students will be prepared and translated with the help of a skilled translator in order to avoid mistakes in translating from English into Khmer. The interview questions will also be tested before the real interviews to avoid unexpected problems. After the questions are prepared and translated, the researcher will interview a few teachers and selected students to identify any questions that might leave them doubtful or confused.
After the pilot testing, the researcher will revise any misunderstood or confusing aspects, including phrases, sentences and other points that are unclear to participants. Once again, the translation between English and Khmer will be double-checked with the help of an expert translator.
Because the pilot test will be conducted with both teachers and students, participants in the actual data collection should find the questions clear. In short, once the questions have been checked and edited, the interviews should run smoothly and comfortably.
3.3.2 Quantitative and Qualitative Methods
Both quantitative and qualitative methods will be employed to collect reliable and trustworthy data. The quantitative method is the questionnaire, on which the selected students circle a number from 1 to 5 for each item to indicate their language learning strategies. The qualitative method consists of face-to-face interviews with some of the selected students, conducted to confirm that their questionnaire answers are accurate and reliable, and with some of the teachers. During the interviews, the researcher will ask the participants' permission to audio-record their answers and will also take notes so that their responses can be analyzed.
3.4 Research Instrument
To carry out this research well, the researcher will use several instruments, the main one being Oxford's Strategy Inventory for Language Learning (SILL) questionnaire, to gather rich information from successful EFL learners at the advanced level. Various types of questions were developed for this questionnaire. Oxford's SILL (1990) consists of 50 items representing the six categories of strategies described above. First, memory strategies help learners to remember and retrieve information through creating mental linkages, applying images and sounds, reviewing well, and employing action. Second, cognitive strategies help learners to understand and produce new language through practicing, receiving and sending information, analyzing and reasoning, and creating structure for input and output. Third, compensatory strategies enable learners to use the language despite gaps in their knowledge, through guessing intelligently and overcoming limitations in speaking and writing. Fourth, metacognitive strategies allow learners to control their own learning through organizing, planning and evaluating it. Fifth, affective strategies help learners to gain control over their emotions, attitudes and motivations through lowering their anxiety, encouraging themselves and taking their emotional temperature. Sixth and finally, social strategies help learners to interact with others through asking questions, cooperating with others and empathizing with them.
Additionally, structured interview questions will be developed and refined. These purposeful questions will be used to collect information from both students and teachers, helping to ensure that the data collected are transparent and trustworthy.
3.5 Data Collection Procedure
The data collection procedure is described below.
3.5.1 Questionnaire completion
The selected students will be asked to complete the questionnaire, distributed through their home class teachers, in the last 20 minutes of the teaching hour before leaving class. Beforehand, the researcher will ask the relevant home class teachers to tell their students the date of the survey and how much time it will take, and to ask them not to be absent on that day. After completing the questionnaire, the participants will hand it back to their home class teacher, who will store it at the ground-floor help desk where it is easy to access. Questionnaires from different classes will be kept confidential in sealed envelopes.
3.5.2 Face-to-face interview
In addition to the questionnaire, the interviews with selected students are likely to take around 15-20 minutes each. The researcher has the selected participants' phone numbers and will contact them before the interview to agree by phone on the conditions of the research, and in particular to inform them that the interview will be recorded. After obtaining the students' permission, the researcher will arrange a suitable time to meet and interview them face to face. It will be made clear that all recorded answers will be kept confidential and destroyed some time after the study is finished.
Similarly, the ten teachers will be informed that their interviews will take no more than 20-25 minutes, since they will be very busy with their teaching. Before each interview, the researcher will read the research information and the consent form with the teacher to make sure the purpose of the interview is clearly understood, and the teacher will then sign the consent form to show their agreement. They will likewise be informed that all recorded answers will be kept confidential and destroyed some time after the study is finished.
3.5.3 Purpose of data collection
The purpose of the data collection is to gather rich information from varied sources so that the researcher can produce clear findings that highlight effective language learning strategies. The researcher seeks a fully informed view from the different categories of participants in this exploration of language learning strategies. To obtain truly informed answers, the researcher will use both Oxford's Strategy Inventory for Language Learning (SILL) questionnaire, completed by the selected students, and the more elaborated answers from interviews with the students and the teachers.
3.6 Limitations of the Research
The scope of this study is limited to EFL students at one private school, taught in English and with an English-speaking environment, located in Takhmau Town, Kandal Province. It involves a small number of participants: 115 top students studying at different levels, from pre-intermediate and intermediate to upper-intermediate and advanced. With effective language learning strategies, these students can use the four macro skills well with both native and non-native interlocutors, such as Cambodian, Vietnamese and Chinese speakers. The purpose of the study is not to provide broad generalizations: because the research focuses on a single private English school, the number of participants is small compared with the number of students studying English as a second or foreign language nationwide. The researcher's aim is simply to explore what the effective language learning strategies are.
3.7 Ethical Consideration
To conduct the research successfully, the researcher must take several safeguards into consideration. A letter of introduction from the M.Ed. program of the Royal University of Phnom Penh will be sent to seek the school principal's approval for the research study.
The consent and information forms are essential: they explain the aim and objectives of the study and the researcher's personal background. The researcher will introduce himself and the purpose of the study to all selected participants and invite them to take part. After reading the consent form, the participants can decide whether or not to take part voluntarily; it is clearly stated that participation is not compulsory. In addition, interviews will be scheduled around the participants' time, so the researcher will remain flexible for their convenience.
Confidentiality is very important. All questionnaires and interview documents will be stored in a safe place out of reach of other people, and the researcher will use code numbers rather than the names of the selected participants. These research documents will be destroyed once the researcher completes the study.
In addition, the volunteers who help conduct the research will be asked to keep confidential whatever questions and answers they handle. No pressure will be put on the respondents. The recorded data will be destroyed some time after the study is finished.
DATA ANALYSIS AND RESULTS
4.1 Data Analysis
To analyze the data reliably, the researcher will assign a code number to each participant's questionnaire responses and enter them into SPSS (Statistical Package for the Social Sciences), using descriptive statistics to find the mean and standard deviation. The qualitative data obtained from the interviews with the selected language learners and the small number of teachers will be analyzed according to the themes and patterns emerging from the participants' responses.
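As a minimal illustration of this kind of descriptive analysis (the study itself specifies SPSS, so the sketch below is only an outline in Python, not the actual procedure), the snippet computes a mean and standard deviation for each SILL strategy category from one set of hypothetical 1-5 responses. The item-to-category ranges follow the commonly cited layout of Oxford's 50-item SILL version 7.0, and both that grouping and the example answers are assumptions made for illustration rather than data from the study.

```python
import statistics

# Commonly cited item grouping for Oxford's SILL version 7.0 (assumed here).
CATEGORIES = {
    "memory":        range(1, 10),   # items 1-9
    "cognitive":     range(10, 24),  # items 10-23
    "compensation":  range(24, 30),  # items 24-29
    "metacognitive": range(30, 39),  # items 30-38
    "affective":     range(39, 45),  # items 39-44
    "social":        range(45, 51),  # items 45-50
}

def describe(responses):
    """responses maps item number (1-50) to a 1-5 Likert answer; returns
    the mean and standard deviation for each strategy category."""
    summary = {}
    for category, items in CATEGORIES.items():
        scores = [responses[i] for i in items]
        summary[category] = (statistics.mean(scores), statistics.stdev(scores))
    return summary

# Hypothetical respondent: mostly neutral answers, stronger compensation items.
example = {i: 3 for i in range(1, 51)}
example.update({24: 5, 25: 5, 26: 4})

for category, (mean, sd) in describe(example).items():
    print(f"{category:13s} M = {mean:.2f}  SD = {sd:.2f}")
```

In the real study these figures would come from SPSS across all respondents, with one row of 50 answers per participant rather than a single dictionary.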
4.2 Results of the study
From previous studies of language learning strategies in other countries, the researcher has learnt that some successful language learners employ the same strategies while others differ. Shu Chuan Tsan's study (January-December 2008) analyzed language learning strategies using descriptive statistics, namely the mean (M) and standard deviation (SD), and found that second or foreign language learners preferred, from most to least frequently used, compensatory, cognitive, metacognitive, social and affective strategies. Another study, Al-Buainain (December 2010), revealed that language students used metacognitive strategies the most (75.3%), followed by cognitive (70.9%), compensation (68.2%) and social strategies (65%), while memory and affective strategies were used the least (58.6%).
Furthermore, a third study of learning strategies showed that students most often used social and metacognitive strategies, with “compensation and cognitive in middle-range”, whereas the affective and memory strategies were rarely employed (Griffiths & Parr, July 2001).
Thus, this exploratory study is designed to answer the research question: “What language learning strategies contribute to successful EFL learners in Cambodia?” On the basis of this exploratory study, the researcher will draw the following conclusions.
Finally, having reviewed the results on language learning strategies from other countries, the researcher expects that learning strategies will be used differently according to learners' preferences. According to the article “Principles of Integrated Language Teaching and Learning”, language learners use a variety of language and learning strategies to expand learning beyond the classroom and become independent, lifelong learners. Ghani (2004) notes that successful language learners tend to employ learning strategies that are matched to the “materials, the tasks, and to their own objective, needs and stage of learning” (as cited in Skehan, 1989; Oxford, 1989; Oxford & Crockall, 1989), and adds that “good language learners” devote considerable attention and effort to language learning. A study of language learning strategies in Spain likewise indicated that strategies were used in relation to the learners' native language, such as translating (Pineda, November 2010). To sum up, the researcher anticipates that the majority of Cambodian language learners will use compensatory and cognitive strategies more than the other strategies.
5.1 Introduction to the Research Study
In fulfillment of the Master of Education program, all students are required to choose a topic for their research work. I am very interested in doing research because I want to learn something new and up to date, and to gather further information that can make my workplace more productive. In fact, after the long process of writing this research report, I have learnt a great deal about how to do research well.
A research study is a long process of revision and development. Each part needs to be linked smoothly so that the whole forms a single, coherent argument; if one part is updated or changed, the other parts are affected automatically, so good preparation and time management are crucial to completing the tasks successfully and effectively. Moreover, research requires the student to put in a great deal of effort to pull the ideas together, because it is independent work that demands in-depth understanding and builds critical thinking skills.
5.2 Literature Review Discussion
Writing the literature review was the hardest part of the work, and it gave me many learning experiences because I went through numerous texts, articles, published proposals and relevant books. Finding relevant sources and the right information is time-consuming and demands attention and effort, both in the library and on the internet. If a researcher does not know the right keywords to search for, it is much more difficult to find productive sources relevant to the proposed topic, since keywords are the main tool for gathering useful information. Through this master's course I have learnt about reliable, trustworthy websites and well-known scholars, so I can now find good sources on the internet. In reality, poor-quality information leads to poor research work, and limited or inadequate sources are a serious problem that affects the quality of the research. The process of finding sources has also helped me understand more about seeking information in a particular setting, about the real working environment and society, and about how to obtain data. In sum, developing a research report has been a demanding but valuable task, because I have learnt a great deal from it.
These sources show that many well-known scholars are interested in research on language learning strategies, which help second language learners and non-native speakers to acquire English more easily. In focusing on the proposed topic, the researcher found that Oxford's language learning strategies are categorized into two integral parts, direct strategies and indirect strategies, each of which is divided into sub-parts.
5.3 Methodology Discussion
Working through this section, I have learnt about both qualitative and quantitative research methodology. In my research report I employed the quantitative method, which helped me understand clearly how to conduct quantitative research. I have also learnt how to prepare questionnaires that collect data accurately and purposefully, without harm to the selected participants. Data collection must be trustworthy: if the researcher cannot build rapport with the participants, clear and reliable information or data cannot be obtained. The process also taught me how to identify the strengths and weaknesses of the data collection, and prompted me to consider the relationship with the school principal and the target participants.
Furthermore, while conducting individual interviews, I need to make sure that they cannot be construed as accusations of crime or as political intimidation. My research report focuses on only one area, language learning strategies, and therefore does not reflect other perceptions of learning.
Based on the findings synthesized from previous research on the language learning strategies applied by successful ESL or EFL learners, there are two main types of learning strategies: direct strategies, consisting of memory, compensatory and cognitive strategies; and indirect strategies, including metacognitive, social and affective strategies. These categories are drawn from Oxford's theory of language learning strategies. Successful language learners use these strategies differently according to their preferences, learning styles and objectives; for example, some learners frequently prefer compensatory strategies, while others are most likely to employ social and metacognitive strategies, as the results in the previous chapter show.
The results of this proposed research would be limited to one private school and therefore cannot be generalized to all stakeholders elsewhere. Through the process of developing the research, I have come to realize that my study is not yet ready to be applied directly by language learners; the following are therefore my personal critique of, and recommendations for, research on language learning strategies.
This study mainly aims to explore the language learning strategies used by successful ESL or EFL learners. If a future researcher takes up a similar topic, I would encourage them to extend the research to public high schools (grades 10-12) in order to see the differences between the strategies used in the private and public sectors, or to study other private English schools; doing so would make the data analysis more accurate and reliable. It would also be worthwhile to conduct the study at public or private universities with much larger numbers of participants, so as to identify the learning strategies used in higher education.
Moreover, the participants in this study were limited to students and teachers from one private English school: only about a hundred students and ten teachers. Future research should therefore involve a larger number of English language learners and teachers from different settings, take account of other strategies found to be effective in previous research, and use a wider variety of research methods, in order to better understand the effect of language learning strategies on achievement in the target language.
|
<urn:uuid:a6f7b1ff-466b-4c21-8d3b-e301da9818ce>
|
CC-MAIN-2017-47
|
https://www.enlunwen.org/mei-guo-da-xue-lun-wen-dai-xie-definition-of-language-learning-strategies-education-essay/
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934805362.48/warc/CC-MAIN-20171119042717-20171119062717-00463.warc.gz
|
en
| 0.933177 | 10,544 | 3.328125 | 3 |
Collarbone, or clavicle, pain is more than just an aching sensation in the shoulder region. It’s a signal from our body, hinting at potential underlying issues or the aftermath of a sudden injury. This bone, connecting the sternum to the shoulder blade, plays a pivotal role in our everyday motions. Understanding the causes and remedies for clavicle pain is vital. This article delves into the intricacies of collarbone discomfort, providing insights and solutions for those who seek relief.
When Should I Be Worried About Collarbone Pain?
Collarbone (clavicle) pain can arise from various causes, some more concerning than others. If you experience collarbone pain, it might be helpful to consider the following factors, and you should seek medical attention in the following scenarios:
- Trauma or Injury
If you’ve recently experienced a direct blow, fall, or accident that impacted the collarbone or shoulder area, it could be a sign of a fracture, dislocation, or another injury. Immediate pain, swelling, bruising, or a deformity could indicate a break or dislocation.
- Pain with Movement
Difficulty or pain when moving your arm, especially lifting it overhead, might indicate an issue with the collarbone or surrounding structures.
- Audible Sounds
Hearing a grinding or popping sound when moving the shoulder could be indicative of a joint issue or fracture.
- Visible Changes
Any noticeable changes in the appearance of the collarbone, like bumps, lumps, or deformities, especially if they appear suddenly, should be evaluated.
- Prolonged Pain
If the pain is persistent and doesn’t improve with rest or over-the-counter pain relief, it might be a sign of a more chronic issue or condition.
- Associated Symptoms
Signs of an infection (such as redness, warmth, fever), swelling, or radiating pain can be concerning. Similarly, if the pain is accompanied by chest pain, difficulty breathing, or any other alarming symptoms, it’s essential to seek medical attention immediately.
- No Known Cause
Sometimes, pain without a known cause (i.e., no injury or trauma) can be the most concerning, as it could be indicative of an underlying condition.
If you’re ever in doubt about the severity or cause of your collarbone pain, it’s always best to consult with a healthcare professional or orthopedic specialist. They can provide a proper assessment, diagnosis, and guidance on potential treatments or interventions.
What Causes Collarbone Pain Without Injury?
Collarbone (clavicle) pain without a direct injury can be puzzling. But there are several potential causes. Here are some reasons you might experience collarbone pain in the absence of trauma:
- Repetitive Strain: Overusing the shoulder, perhaps due to specific activities or occupations, can cause microtears in the muscles and tendons around the collarbone, leading to pain.
- Arthritis: The acromioclavicular (AC) and sternoclavicular (SC) joints, where the collarbone meets the shoulder blade and breastbone, respectively, can develop osteoarthritis. This degenerative condition can cause pain without a direct injury.
- Postural Issues: Chronic poor posture, especially a forward head and rounded shoulders, can place additional strain on the collarbone and surrounding musculature, resulting in pain.
- Thoracic Outlet Syndrome: This is a condition where the blood vessels or nerves between the collarbone and the first rib become compressed, leading to pain and sometimes tingling or numbness in the arm.
- Bone Infections: While rare, osteomyelitis (bone infection) can occur in the clavicle.
- Tumors: Again, while rare, benign or malignant tumors in or around the collarbone can cause pain.
- Costoclavicular Syndrome: This is a form of thoracic outlet syndrome where there’s compression between the clavicle and the first rib, leading to pain and vascular or neurological symptoms.
- Muscle Strains or Spasms: Muscles around the clavicle, like the pectoralis major or the trapezius, might spasm or become strained, causing pain.
- Heartburn or Acid Reflux: Sometimes, gastrointestinal issues can cause pain that’s felt in the chest or collarbone area.
If you’re experiencing unexplained collarbone pain, it’s essential to see a healthcare provider for a proper diagnosis and appropriate treatment.
Can Collarbone Pain Be Heart-Related?
Yes, collarbone pain can be heart-related in some cases. Chest discomfort or pain, which sometimes radiates to other areas like the arms, neck, jaw, stomach, or back, is a hallmark symptom of heart issues, including angina or a heart attack. The left side, in particular, can be more concerning as the heart is located on the left side of the chest. However, heart-related pain usually presents with additional symptoms like shortness of breath, sweating, dizziness, nausea, or palpitations.
It’s crucial to differentiate between musculoskeletal pain and heart-related pain. While musculoskeletal pain related to the collarbone or surrounding muscles might worsen with movement or touch, heart-related pain is often triggered by physical activity, emotional stress, or other factors and might not change with movement or touch. If there’s any suspicion that collarbone pain might be heart-related, it’s vital to seek medical attention immediately.
How Do You Stop Collarbone Pain?
If you experience persistent or severe collarbone pain, here are some general measures that might be recommended based on the cause of the pain:
- Rest: If the pain is due to an injury or strain, giving the area a break and avoiding activities that aggravate the pain can help in recovery.
- Cold Compress: Applying ice to the injured area during the first 24-48 hours can reduce swelling and provide pain relief. Use a cloth barrier between the ice and skin to prevent frostbite.
- Pain Relievers: Over-the-counter pain relievers like acetaminophen (Tylenol) or nonsteroidal anti-inflammatory drugs (NSAIDs) can help to reduce pain.
- Sling: In the case of fractures or significant injuries, a sling might be recommended to immobilize the area and promote healing.
- Physical Therapy: If the pain is due to postural issues, overuse, or after the acute phase of an injury, physical therapy can be beneficial. Therapists can provide exercises and stretches to strengthen the shoulder and improve mobility.
- Surgery: In some cases, such as severe fractures or dislocations, surgical intervention might be necessary.
- Posture Correction: Maintaining good posture can prevent and alleviate pain caused by muscle strain and tension.
- Warm Compress: For non-acute pain or stiffness, warm compresses can help to relax and loosen tissues and stimulate blood flow to the area.
- Protective Gear: If you’re involved in sports that have a higher risk of injury, using protective gear can prevent trauma to the collarbone.
If the cause of the pain is unknown, or if it doesn’t improve with rest and home remedies, always consult a healthcare provider.
What Are the Best Exercises and Stretches to Help?
If you’re experiencing collarbone pain due to muscle tension, gentle exercises and stretches may help improve mobility and reduce discomfort. Below are some of the best ones to consider:
Pendulum Swing:
- Stand beside a table or chair for support.
- Bend slightly at the waist and allow the affected arm to hang down.
- Gently sway your body to let the arm swing freely in small circles.
- Perform 10 circles in one direction, then switch to the other direction.
Wall Push-Up:
- Stand in front of a wall at arm’s length.
- Place your palms on the wall at shoulder height.
- Bend your elbows and bring your body towards the wall, then push back to the starting position.
- This exercise helps to strengthen the shoulder muscles without intense pressure.
Shoulder Blade Squeeze:
- Sit or stand with your arms by your sides.
- Squeeze your shoulder blades together as if trying to hold a pencil between them.
- Hold for a few seconds, then release.
Neck Side Stretch:
- Sit or stand upright.
- Tilt your head to one side, bringing your ear towards your shoulder. You should feel a stretch on the opposite side.
- Hold for 15-30 seconds, then repeat on the other side.
Neck Rotation Stretch:
- Sit or stand upright.
- Gently turn your head to one side until you feel a stretch.
- Hold for 15-30 seconds, then switch sides.
Chest Opener Stretch:
- Stand or sit upright.
- Clasp your hands behind your back, straighten your arms, and lift them slightly.
- Open up your chest, squeeze the shoulder blades together, and tilt your head back slightly.
- Hold for 15-30 seconds.
Shoulder Rolls:
- Sit or stand upright.
- Roll your shoulders up, back, and down in a circular motion.
- Perform 10 rolls, then switch direction.
Remember, it’s crucial to perform these exercises and stretches gently and without forcing any movement. If any exercise or stretch causes increased pain or discomfort, stop immediately. The goal is to enhance mobility and relieve pain, not exacerbate it.
In conclusion, collarbone pain, while often overlooked, can be a significant concern for many individuals. It can arise from a variety of causes, ranging from traumatic injuries to degenerative conditions, and it shouldn’t be ignored. Early diagnosis and treatment are key to ensuring optimal recovery and well-being.
Always consult healthcare professionals when experiencing such discomfort. If you’re experiencing shoulder pain, physical therapy for shoulder pain at PhysioMantra can help: book an online physical therapy session.
|
<urn:uuid:1163d333-82c0-43ba-810a-db208cb30b43>
|
CC-MAIN-2023-40
|
https://physiomantra.co/shoulder/collarbone-pain/
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233506480.35/warc/CC-MAIN-20230923062631-20230923092631-00630.warc.gz
|
en
| 0.880777 | 2,143 | 2.546875 | 3 |
Exposed concrete floors
Exposed concrete floors are durable, hard wearing and useful for all sorts of applications from warehouses to art galleries and garages to domestic living rooms. They are particularly useful for heavy loads or traffic or, when combined with underfloor heating, for continuous and efficient space heating. As a finish they can appear utilitarian or luxurious depending very much upon the chosen materials and installation techniques.
Technically, the upper face of a concrete floor is called an “unformed” face, in contrast to “formed” concrete faces which obtain their shape and texture from the moulds or formwork in which they are cast. Unformed faces are typically flat and require post-finishing techniques to achieve the required texture. The most prevalent texture or finish for internal exposed concrete floors is polished – either power-trowelled for a smooth solid appearance, or diamond-ground to expose the aggregates. A requirement for greater slip resistance is often a driver to explore other options, such as the inclusion of retardants or shot-blasting. Some of the techniques more commonly used for external surfaces, such as imprinting or brushing, can also be used to good aesthetic effect inside.
“Wearing screeds” (formerly known as “high strength concrete toppings”) are usually installed once the space is enclosed, unlike a structural floor slab. They are specifically designed to serve as a floor finish, often incorporating pipes for underfloor heating, and require care and expertise to execute correctly. Designers are advised to seek guidance from specialist installers at early stages of design development when specifying screeded floors.
Polished concrete floors
There are two ways to create a smooth, polished patina to concrete floors, but each achieves quite a different visual appearance.
Diamond polished, or ground and polished concrete
This technique creates a smooth finish, with varying degrees of shine and exposed cross sections of aggregate. The top millimetres of the concrete are mechanically ground away to expose the aggregate, often using diamond grinders. The surface is then polished with increasingly fine-grade buffers to achieve the desired finish. It can take on the appearance of terrazzo depending upon the colour of cement, pigments and type of aggregates used – specialist suppliers and installers offer a huge range of colours and aggregate combinations.
The depth of grind will determine the degree of aggregate exposure, ie larger aggregate sections will be revealed with a deeper grind. More commonplace mixes of in-situ concrete can also be finished in this way. This is often a technique used to level and finish existing floors, although the final result is unlikely to be as consistent as where a floor is designed and placed to be polished in the first place.
Floated and trowelled finish
Unlike the mechanically abraded system, this technique takes place before newly placed concrete is fully cured and timing is critical to achieve a successful finish. The surface is floated, often using a pan floater, and then trowelled progressively to produce a natural polished sheen. This is most commonly achieved using power tools but it is also feasible to trowel small spaces by hand.
Further degrees of polish can then be obtained by using different surface sealants if required. The resulting floor has a more solid colour, sometimes described as mottled, or a natural patina but with minimal visible aggregate. The colour of the floor is determined primarily by the fine content of the concrete mix, ie the cement, sand fines and any pigments in the mix. Alternatively, this technique can be used to install a pigmented dry-shake finish, creating the desired colour on the top surface of the concrete.
Texture and pattern
Exposing the aggregate in concrete can also create grain, texture and pattern in floor surfaces, which is particularly useful for areas requiring greater slip resistance, especially when wet. The uppermost layer of concrete, or surface mortar, can be removed in a number of ways, each with subtly different results.
A key consideration is the practicality of application and the degree of control required for the specific project. Appropriate specification of the concrete mix is essential for a high-quality exposed aggregate finish, since “normal” concrete is unlikely to contain sufficient aggregate at the surface once compacted. A range of proprietary mixes are available with different combinations of coloured aggregates and binder. Common techniques include: exposure by washing or brushing, the use of surface retardants or abrasive or shot blasting.
Exposure by washing or brushing
Washing or brushing away the surface mortar of the concrete is in principle the simplest method of exposing aggregate in newly poured concrete. However, skilled execution and timing are critical to ensuring that the concrete is sufficiently stiff to hold the coarse aggregate in place while the surface is sprayed or brushed, but not too hard so that it is difficult to remove the surface layer.
Use of surface retardants
Surface retardants offer a more controlled method of exposing the aggregate. Liquid surface retardant is sprayed onto the surface of the wet concrete, preventing the uppermost layer (known as mortar) from setting and allowing it to be washed or brushed away. The depth of mortar removed, and therefore amount of aggregate revealed, is determined by the choice of retardant.
Texture patterns can be created in the floor surface by using stencils to mask areas of the concrete before the retardant is applied. The protected areas remain smooth alongside the more grainy texture that is left where the aggregate is exposed. The effect is intensified when the pigment and/or cement colour contrasts with the sands and aggregates. As with all techniques, timing and skill is important for effective execution.
Shot or grit blasting
Shot or grit blasting is a simple technique for mechanically removing the top layer of hard concrete to create a stippled surface, often used to improve adhesion for supplementary finishing layers. If intended to be a final finish, careful control is required to ensure consistency. A test patch is recommended so that the degree of shot blast can be agreed. As with depth of polish, the heavier the abrasion, the larger the pieces of aggregate that will appear. There are many fine examples of patterns created using stencils with this technique.
Imprinting wet concrete
Imprinting wet concrete is a common technique for creating texture in external concrete and can be a cost-effective way of improving slip resistance. A tamping beam is progressively lowered and raised along the face of the floor to imprint freshly placed and levelled concrete. This creates a surface with ridges, with their frequency and width depending on the size of the beam. It can be specified as light, medium or heavy tamp but these descriptions are not clearly defined and a benchmark or test panel is advised.
Brushed or dragged concrete
Brushed or dragged concrete is a similarly cost-effective finish for adding slip resistance and is useful for utilitarian areas where aesthetics are less important, due to the difficulty of obtaining a uniform finish. It does offer opportunities for pattern and variety depending on the direction of drag and the material used. Stiff bristled brooms give a coarser texture than those with soft bristles, for example. While actual sweeping brooms can be used, a purpose-made brush with steel bristles or tines is more appropriate for commercial applications. Other dragged surface finishes include ‘Turf drag’, ‘Hessian drag’ or ‘tine finish’ using different materials to create the texture as their name suggests.
An alternative method for adding texture to wet concrete is stamped or pattern imprinting using mats or rollers. Most commonly used externally, standard systems are most likely to be in the form of patterns with stone, cobbles or herringbone for paths and patios, but the technique offers opportunities for unique textures and patterns through the creation of bespoke mats.
The use of tamped, brushed and imprinted concrete on internal spaces may be restricted by the ability to gain access to create the finish in the wet concrete. This therefore requires consideration of sequencing early on.
The colour of a concrete floor will be determined by the through colour of the mix, which will itself vary in appearance depending on the surface texture employed and how much of the aggregate is exposed. It is also possible to add different and permanent colour to the upper surface only of the concrete.
Dry shake toppings
Dry shake toppings are powder or granules broadcast onto the surface of the concrete before trowelling. Once trowelled into the floor, this results in a hard, durable surface. There is a wide range of coloured pigments available, including grey, and toppings can be selected to suit the specific performance requirements of the floor. Many include a surface hardener to improve the durability of the floor finish, but they can also provide abrasion resistance and are often used to improve the surface finish of large pours when steel fibres are included.
Colour stains can be applied to either trowelled or diamond-ground polished floors and offer the advantage of colour variety and a controlled application to create pattern, allowing the natural tonal variation of the concrete surface to show through.
For more information, see the Concrete Quarterly Visual Concrete special issue.
|
<urn:uuid:296d35a4-26d3-43d1-bd22-aa908f6ed4b8>
|
CC-MAIN-2023-14
|
https://www.concretecentre.com/Specification/Visual-Concrete/Tips-to-achieve-Visual-Concrete.aspx?feed=96acc294-3dbe-4b54-9763-6af751e47a25
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948708.2/warc/CC-MAIN-20230327220742-20230328010742-00744.warc.gz
|
en
| 0.929626 | 1,885 | 2.53125 | 3 |
UF/IFAS researcher using “precision breeding” to create disease-resistant grapes
Apopka, Fla. — Powdery mildew and black rot are two scourges of grape growers, but University of Florida researcher Dennis Gray is developing disease-resistant grapes, using what he calls “precision breeding” to create these super varieties.
Gray, a developmental biologist with UF’s Institute of Food and Agricultural Sciences, has successfully bred Thompson Seedless, Seyval Blanc and Syrah that resist mildew and fungus. Those are just three of only 35 grape varieties that accounted for 66 percent of the world grape acreage in 2014, he said.
“The importance of improving grape varieties cannot be overstated,” Gray said. “A majority of these are centuries old and maintained primarily through a stringently managed system of vegetative propagation. However, these varieties lack other very important traits, particularly durable disease and pest resistance, that are demanded by today’s intensive agricultural conditions.”
Producers currently rely on frequent use of pesticides and fungicides to control diseases of grape, particularly in areas of high humidity, such as northern Italy and northern California or Florida. However, the public is interested in alternatives that can decrease pesticide use, limiting its potential effects on health and environment.
Gray says that “precision breeding” is the answer to this by creating varieties that don’t need to be sprayed or sprayed less frequently, thus reducing the amount of pesticides and fungicides needed.
Now Gray hopes to develop a grape that is resistant to Pierce’s Disease, which needs “unsustainable mass spraying of pesticides” to stop the insect that carries it. Federal and state governments, mainly in California, have spent more than $50 million in the last 15 years to fight it with little to no success.
When a vine becomes infected with Pierce’s Disease, the bacterium causes a gel to form in the tissue of the vine, preventing water from being drawn through the vine. Leaves on vines will turn yellow and brown, and eventually drop off the vine. Shoots will also die. After one to five years, the vine itself will die.
Gray also wants to move away from what he calls the scientifically inaccurate and illogical term “genetic modification” to the more accurate “precision breeding,” and inform the public that it is less disruptive than conventional breeding and will finally allow the 35 ancient cultivars grown in most of the world to be genetically improved.
“Without exception, all crops used for food and fiber have been intentionally genetically modified by humankind,” Gray said. “It is a fact that every fruit, vegetable and grain, including all produce labeled ‘organic,’ that we purchase from the grocery store or farmers market are significantly and purposely genetically modified.
He points to earlier studies, showing that humans began choosing seeds from the best plants as early as the Neolithic period, some 12,000 years ago.
Soon after, people began crossing one plant with another to create a new plant. This technique uses half of the genes from one plant and half from the other plant. That is what is known as “conventional plant breeding.” Precision breeding, Gray says, simply takes one or two desirable traits from one plant and inserts that DNA into another plant to create the new, improved varieties.
“Precision breeding is significantly more ‘precise,’ accurate, and much less likely to produce unintended consequences,” Gray said.
A study on Gray’s precision breeding research was recently published in the journal Acta Horticulturae. Other researchers on the study include: Zhijian Li, a molecular biologist and researcher, Trudi Grant, a post-doctoral research associate, and Deborah Dean, a post-doctoral research associate – all with UF’s Mid-Florida Research and Education Center; Robert Trigiano, a professor and plant pathologist with the University of Tennessee’s Department of Entomology and Plant Pathology; and Sadanand Dhekney, an assistant professor and horticulturalist with the University of Wyoming-Sheridan’s Department of Plant Sciences at the Sheridan Research Center.
By Kimberly Moore Wilmoth, 352-294-3302, firstname.lastname@example.org
Source: Dennis Gray, 407-884-2034, ext. 126, email@example.com
For additional information on grapes, see: http://edis.ifas.ufl.edu/topic_grape
Photo caption: UF/IFAS developmental biologist Dennis Gray looks at the progress of grapevines in a vineyard. Photo by UF/IFAS
|
<urn:uuid:99ff1d73-5a40-4920-94c8-cac2bf39f54a>
|
CC-MAIN-2020-34
|
http://blogs.ifas.ufl.edu/news/2015/05/20/ufifas-researcher-using-precision-breeding-to-create-disease-resistant-grapes/
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439736902.24/warc/CC-MAIN-20200806091418-20200806121418-00285.warc.gz
|
en
| 0.935999 | 993 | 3.21875 | 3 |
MIT researchers have developed a method for printing electronic inks neatly and reliably on the nanoscale, on rigid or flexible substrates. They expect it will be used in turn to develop transparent, sensor-rich panels, stickers, and packaging.
Two groups at Lund university have come up with new technologies to enhance the performance of intra-neural electrode arrays. Making them compatible not just with the brain, but with each other, will be key to making better brain-machine interfaces.
For decades, the semiconductor industry has operated under the mantra of ‘Better, faster, cheaper.’ With Moore’s law scaling no longer functioning below 28nm, what would you be willing to pay for chips and products that satisfied ‘Better’ and ‘faster’ — but were more expensive than current hardware?
Shrinking technology down requires low-power, tightly packed components, but Moore’s Law can’t go on forever. A new technique that replaces transistors with bits of nanowire could make tiny processors and microcontrollers a reality.
According to its creators, the process that gave rise to this world first is leaps and bounds ahead of the competition and “can actually start to compete with silicon.” Carbon nanotubes appear to be the future of computing.
|
<urn:uuid:7e38157e-806c-41f9-9703-c3f3f8e404a0>
|
CC-MAIN-2017-26
|
https://www.extremetech.com/tag/nanotubes
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128320130.7/warc/CC-MAIN-20170623184505-20170623204505-00597.warc.gz
|
en
| 0.927658 | 284 | 2.609375 | 3 |
Other Names for this Disease
- Alopecia areata universalis
Alopecia universalis (AU) is a condition characterized by complete loss of hair on the scalp and body. It is a severe form of alopecia areata, which refers to hair loss of unknown cause characterized by round patches of complete baldness. Alopecia universalis is thought to be an autoimmune condition that occurs in some genetically predisposed people: hair loss occurs when a person's immune system mistakenly attacks the hair follicles. There are currently no treatments known to be effective for alopecia universalis, but sometimes hair regrowth occurs on its own, even after many years.
Last updated: 8/26/2014
- DermNet NZ is an online resource about skin diseases developed by the New Zealand Dermatological Society Incorporated. DermNet NZ provides information about this condition.
- MedlinePlus was designed by the National Library of Medicine to help you research your health questions, and it provides more information about this topic.
- The National Institute of Arthritis and Musculoskeletal and Skin Diseases (NIAMS) supports research into the causes, treatment, and prevention of arthritis and musculoskeletal and skin diseases, the training of basic and clinical scientists to carry out this research, and the dissemination of information on research progress in these diseases. Click on the link to view information on this topic.
- The National Organization for Rare Disorders (NORD) is a federation of more than 130 nonprofit voluntary health organizations serving people with rare disorders. Click on the link to view information on this topic.
- Medscape Reference provides information on this topic. Click on the link to view this information. You may need to register to view the medical textbook, but registration is free.
- The Online Mendelian Inheritance in Man (OMIM) is a catalog of human genes and genetic disorders. Each entry has a summary of related medical articles. It is meant for health care professionals and researchers. OMIM is maintained by Johns Hopkins University School of Medicine.
- PubMed is a searchable database of medical literature and lists journal articles that discuss Alopecia universalis. Click on the link to view a sample search on this topic.
- The American Hair Loss Association Web site lists resources for kids with alopecia. Click on American Hair Loss Association to view the page.
|
<urn:uuid:3654f4d2-33d2-4188-b500-4422204f9b56>
|
CC-MAIN-2014-35
|
http://rarediseases.info.nih.gov/gard/614/alopecia-universalis/resources/1
|
s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500829839.93/warc/CC-MAIN-20140820021349-00441-ip-10-180-136-8.ec2.internal.warc.gz
|
en
| 0.857973 | 754 | 3.015625 | 3 |
Article Published: June 11, 2014
Humans have stored more than 295 billion gigabytes (or 295 exabytes) of data since 1986, and the U.S. is home to about one-third of that total.
Every time anyone accesses the Internet, uses an ATM, watches television, listens to digital music, enjoys a movie with computer-generated special effects or uses a personal video recorder, he or she accesses and shares large amounts of digital information on a wide variety of storage devices.
Data storage requires the integration of physics, tribology (the study of surfaces in relative motion), aerodynamics, fluid mechanics, information theory, magnetics and other disciplines.
The fundamental goals of data storage, however, are simply to reliably place as much information as possible (called maximizing areal density) on a hard disk or tape, using magnetic or optical methods, and to provide the means for its rapid access.
Since the mid-1950s, when initial attempts at these fundamental goals were successful, the quest has been to get more and more magnetic storage bits crammed into smaller areas. Up until 2007, the holy grail of data storage had been to squeeze 1,000 gigabits per square inch on a disk, opening the possibility of a one-terabyte drive.
One terabyte is enough storage capacity to place a small black and white photo image of every man, woman and child on earth onto a CD-sized disk. To further place such large numbers in context, the entire print collection of the Library of Congress is thought to hold about 10 terabytes of text.
The hard drive turned 57 this year, and over the past five decades data capacity has increased at a fairly regular and rapid pace. The first drive, which came with the RAMAC computer, weighed about a ton and held 5MB of data. In 1980, the world's first gigabyte-capacity disk drive, the IBM 3380, was the size of a refrigerator, weighed 550 pounds and had a price tag of US $40,000.
By 2008, Seagate had announced the first 1.5 terabyte hard drive for use in desktop hardware.
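As a rough sanity check on these figures, the capacity jump from the 5 MB RAMAC in 1956 to a 1.5 TB desktop drive in 2008 can be turned into an implied compound annual growth rate. The short Python sketch below is illustrative only; the smooth, constant-rate growth it assumes is a simplification, not something claimed in the article.

```python
# Implied compound annual growth of hard-drive capacity, using only the
# figures quoted above (5 MB RAMAC in 1956, 1.5 TB Seagate drive in 2008).
RAMAC_BYTES = 5e6          # 5 MB, 1956
SEAGATE_BYTES = 1.5e12     # 1.5 TB, 2008
YEARS = 2008 - 1956

growth_factor = SEAGATE_BYTES / RAMAC_BYTES        # total growth, ~300,000x
annual_rate = growth_factor ** (1 / YEARS) - 1     # assumed constant yearly rate

print(f"Total capacity growth: {growth_factor:,.0f}x")
print(f"Implied compound annual growth: {annual_rate:.1%}")
# Roughly 300,000x overall, or about 27% per year sustained for five decades.
```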
When the Large Synoptic Survey Telescope comes online in 2016, it is estimated that it will acquire information about our universe at the rate of 140 terabytes of data every five days, which is roughly the equivalent of every book ever written, gathered every two days.
The hard drive has advanced about 65 million times in areal density since the RAMAC, and researchers estimate that we are three orders of magnitude from any truly fundamental limits.
Hard-drive scientists say that increases in capacity will continue because of technologies like heat-assisted recording, patterned media and nanostructures.
To continue to exceed the 1.5 terabyte goal with conventional data storage media, scientists must overcome the superparamagnetic effect, a phenomenon that is projected to limit the density of magnetically stored information: above that density, heat destroys the data. Going beyond that threshold will require significant changes in how data is recorded, the widths of tracks on discs, the protective coatings on components, the ultra-thin lubricants between them and the precision of the tiny suspension systems and sliders within the disk drive.
If cramming in the magnetic bits were the only requirement, the goal would be more easily met, but the magnets must be functional, too. Writing a digital “one” or “zero” in such small areas, then subsequently reading back the values requires extreme precision in discerning one bit from another. The bit also must be stable for long periods of time; if some “ones” switch to “zeros” the data will be filled with errors.
Data storage density and durability always have been mutually exclusive. The greater the density, the shorter the durability. One popular illustration of this is that information carved in stone is not dense but can last thousands of years, whereas previously silicon memory chips could hold their information for only a few decades.
Recently researchers at the U.S. Department of Energy have developed a new mechanism for digital memory storage that consists of a crystalline iron nanoparticle shuttle enclosed within the hollow of a multiwalled carbon nanotube, thereby creating a memory device that features both ultra-high density and ultra-long lifetimes. The new memory storage medium can pack thousands of times more data into one square inch of space than conventional chips and preserve this data for more than a billion years.
Current research ongoing at Carnegie Mellon University’s Data Storage Systems Center (DSSC) is aimed at developing the underlying science and technology to allow the demonstration of stable hard disk magnetic recording at areal densities of four terabits per square inch and 10 terabits per square inch by the end of 2015. This theoretically would enable home versions of the Library of Congress on perhaps fewer than a dozen CDs.
Pittsburgh’s Big Data
The term “Big Data” is used to describe an aggregate of information so large that it cannot be interpreted or processed simply by using a typical desktop computer. Data gathered by retailers, smart phones, GPS systems, Internet searches and a host of other sources require sophisticated and complex analysis software and equipment. Therein lies a huge market potential for the type of work and research that has been conducted in the Pittsburgh region for decades.
Established in 1985, Carnegie Mellon University’s DSSC is an interdisciplinary research and educational organization, established by the National Science Foundation, where faculty, students and researchers from a broad swath of academic disciplines collaborate in pioneering theory and experimental research that will lead to the next generation of information storage technology. Collaborating Carnegie Mellon Departments include chemical engineering, chemistry, electrical and computer engineering, mechanical engineering and physics.
More than 60 Carnegie Mellon researchers are working on a variety of projects designed to advance information storage technology beyond the current frontiers of magnetic recording, optical data storage, probe-based systems, holographic and solid-state memory. The DSSC has been helping industry design nanometer-scale technology that will ultimately lead to very fast, low-cost and compact information storage devices.
The center works closely with industry partners to define projects that will contribute to the expansion of what some experts estimate is already a $70-billion market and growing at 15 to 20 percent per year. The center’s goals, among others, are to cooperate with industry to identify future data storage systems, to develop the more promising applications and to transfer the technology to commercial businesses.
To that end, the DSSC maintains state-of-the-art facilities for its research programs including high-tech recording test stands, materials synthesis and characterization facilities and various scanned probe capabilities. In addition, the DSSC makes use of the extensive nanofabrication labs on Carnegie Mellon’s campus. Researchers at the DSSC are working closely with industry and the federal government to exploit nanotechnology in order to create a revolution, not only in information storage, but also in a wide variety of fields.
(See related article, “Nanotechnology in the Pittsburgh Region.”)
Component departments of the DSSC and its affiliates hold 22 patents in the field. Some of the center’s other aggressive projects span a wide range of disciplines, including thin film and particulate media, materials for magnetic and optical heads, tape dynamics, optical recording systems, magnetic disk systems, probe-based systems, holographic and solid-state memory.
Recent technical presentations have included the latest DSSC research on media and heads; channels, mechanics and memory; heat-assisted magnetic recording; bit-patterned media recording; high coercivity media; high resolution 2-D contact testing; spin torque oscillation; mobile dopant semiconductors; HAMR testing and modeling; methanol etching for magnetic film patterning; fine pattern transfer and self-organized structures.
Regional Industry Base
Seagate, one of the DSSC’s corporate partners, introduced the first 5.25-inch hard drives specifically for personal computers in 1979, helping to fuel the personal computer revolution.
In 2010, Seagate commanded one-third of the nearly $35 billion world-wide market for hard drives. Mark Kryder, who formerly served as head of Carnegie Mellon’s DSSC, was instrumental in attracting Seagate’s research facility to Pittsburgh by sheer virtue of his reputation in the field. Seagate has since closed its Pittsburgh research facility as a result of business strategy shifts; however, Kryder succeeded in attracting more than 100 Ph.D.s from 25 countries to Pittsburgh, which is a legacy that will continue to benefit the region far beyond the Kryder years. Kryder continues his work at the DSSC.
Beyond Seagate, the DSSC’s successes include some five or six invention disclosures, patents and software licenses per year. A succession of spin-off companies also has been a by-product. Among the spin-offs is Ansoft, which was formed in 1989 to develop software for the design of electromagnetic devices, including recording heads. The software is based on advanced computer algorithms developed at the DSSC. Ansoft’s HF product suite is a solution for system analysis, circuit design and electromagnetic simulation that go into developing wireless technology, broadband communication networks, antenna systems and aerospace electronics. Headquartered in Pittsburgh, Ansoft operated three other locations in the U.S. and five overseas, before it became a subsidiary of ANSYS in 2008.
Another spin-off is Advanced Materials Corporation, a premier manufacturer of permanent magnets and related materials. The company's facilities are located in the laboratories provided through a contractual relationship with the Carnegie Mellon. The facilities are fully equipped for the characterization of hard and soft magnetic materials and the exploitation of metal hydrides. They include equipment to fabricate alloys, ferrites and hydrides.
Meanwhile, Avere Systems previously had raised $32 million in venture funding from Menlo Ventures, Norwest Venture partners and Tenaya Capital to help ramp up sales of its proprietary technology. The company’s systems learn how customers’ data files are being used and enable users to move data to the most efficient media for storage and easier, faster retrieval. Avere’s customers include industries such as movie production and rendering, genomics and DNA sequencing and oil and gas exploration. One Avere oil and gas customer experienced data retrieval 2.5 times faster than before, with 90 percent of its storage capacity freed. Avere co-founder Ron Bianchini sold his previous data storage startup, Spinnaker Networks, to Network Appliances for $300 million.
Recently, a consortium of Pittsburgh companies and institutions, called Pittsburgh Dataworks, was founded in an effort to attract and retain data scientists, entrepreneurs and business to the region. Founding members include:
Among the name brand organizations that are part of Pittsburgh Dataworks, it is not surprising to find UPMC, since a McKinsey study has projected the big data market in the U.S. health care industry to reach $300 billion annually.
As these organizations continue to prosper, it will be a signal that the Pittsburgh region will be on the forefront of an emerging subcluster that places the most amount of information on the least amount of real estate.
|
<urn:uuid:d1bf5491-c617-4c8b-97fb-d52a6f0e8205>
|
CC-MAIN-2017-30
|
http://pghtech.org/news-publications/state-of-the-industry-report/data-storage-in-the-pittsburgh-region.aspx
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549424945.18/warc/CC-MAIN-20170725002242-20170725022242-00445.warc.gz
|
en
| 0.936753 | 2,337 | 3.796875 | 4 |
"Turns out when humans fight, the primary target is the face," Carrier said. "It's what people strike at. The vast majority of the injuries that occur in fractures [from interpersonal violence] are localized in the face."
The coauthors chiefly focused on the anatomy of skulls from a type of hominin that dates back 2 million to 4 million years ago in eastern Africa.
"The teeth were very big," Carrier said. "The mandible and the bones of the upper jaw become more stout, more robust. They're thicker; they're bigger."
The jaw muscles would have absorbed some of the energy of a punch and reduced the risk of fracturing or dislocating the upper and lower jaws, their study said.
"The activation of the jaw and neck muscles stiffens the connection between the head and body, decreases acceleration of the brain upon impact and therefore reduces the risk of concussion," their study said.
Morgan said it makes sense to develop structural traits to protect the most valuable human asset: the brain. And the skeletal structure of the face was too large to develop for diet alone, they contend.
"One of the things we address in our paper ... is that if you think the jaw and the structure of the jaw and the teeth were built for that diet, they're massively overbuilt," Morgan said.
If the bones formed to increase strength when consuming food, "bone strain produced by chewing would be relatively uniform through the facial skeleton," the researchers said.
Instead, they found bone strain in the nasal region, cheek bones and eye sockets was low during chewing.
Also, data uncovered in recent years suggests the hominins didn't have an exclusively nut and seed diet, they note. Studies analyzing microwear patterns on the teeth of early hominins and other evidence suggest they more often ate fruits and grasses.
The two theories are not mutually exclusive, Carrier and Morgan said. "In nature oftentimes we see coevolution of numerous traits that can serve multiple purposes," Morgan said.
Past findings that males are more violent than females, and more often attack other males, also support the theory, the study said.
The areas of the skull that differ most between the sexes are also the areas that most frequently fracture during fighting, the researchers found. And the mechanics of chewing can't explain the stronger neck and jaw muscles in males and other gender differences, they add.
Skulls became less robust as the most common threat, the fist, also decreased in size, Morgan said. The researchers speculate that upper body strength decreased as weapons were invented and developed.
Their 2013 study claiming the human hand evolved to form a fist for more effective fighting drew skepticism from some scientists.
Showing that "a closed fist is better buttressed for fighting" doesn't prove that hands evolved for it, biologist Brigitte Demes, of Stony Book University in New York, said after the earlier study was released.
Fossil evidence shows the human hand developing and growing more deft around the same time as the first tools came into use, she said.
Carrier is now studying the foot posture of great apes, continuing to explore the hypothesis that violence played a greater role in human evolution than previously believed. The emphasis on aggression is "very uncomfortable and off-putting to people," Morgan said.
But the researchers argue humans need to understand their violent nature.
"We're going to be better able to prevent violence in the future if we have a better capacity to react to fear and anger and hatred," Carrier said. "That's the reason why this work is important. It's addressing this debate about our history, our deep history."
Morgan added: "We hope that with our research we can hold the mirror up to our faces and say, 'Yeah, this is who we were. How do we change our nature?' "
|
<urn:uuid:7763c971-fb17-4b38-96ed-8adaf977687f>
|
CC-MAIN-2017-34
|
http://archive.sltrib.com/article.php?id=58027516&itype=CMSID
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886109525.95/warc/CC-MAIN-20170821191703-20170821211703-00112.warc.gz
|
en
| 0.972741 | 786 | 3.640625 | 4 |
I would now like to discuss reductionism, which I think is one of our society’s conceptual blockages to moral and social progress. Reductionism is a form of analysis based on taking things apart and examining the small pieces to determine their properties, in order to create a description of how larger things function in terms of their smaller components. It is also sometimes called the resolutive-compositive method: as when one disassembles an engine, resolving it down to its parts, and then re-assembles the engine into its composite form again, to see how the parts work both individually and together.
Reductionism is the predominant method of modern science, especially the physical sciences. Although the sciences do at times make use of more holistic analyses, and have shown somewhat more willingness to do so since the 1960s, reductionist approaches are, I think, generally considered to be necessary for doing science. This approach is a very valuable one for many purposes and has produced a great many benefits, and my aim is certainly not to attack science, or to claim that reductionism is methodologically wrong, but only to argue that it is a limited form of analysis that should only be understood as one of the analytical tools in our intellectual toolbox. Reductionism, when taken to extremes, can lead to a sort of atomistic fundamentalism: the belief that not only is our universe made up of particles (meaning not just atomic and sub-atomic particles but whatever the smallest quantum fluctuations), but that these particles are fundamental, definitive of reality itself. The universe, of course, consists of particles, but they are only units of analysis for one level of observing reality; one can also rationally say that reality consists of larger objects too. I would like to show that particles are not definitive of reality. More on levels of analysis in a moment.
There is another, different sense of the term “reductionism” which is often used in philosophical and intellectual debates, as when a critic claims a theory “reduces” something to one cause or explanation only. To take an example from the social sciences, mainstream economics can be faulted for reducing human psychological motivations to self-interest, thus neglecting other motivations such as common identity, group interest, or sympathy. This sense of the term “reductionism” thus basically positively contrasts multi-perspectivism with what we might call uni-perspectivism, and is different than the atomistic reductionism defined above. Sometimes these terms overlap, however, as when scientists and science-minded thinkers reduce everything to reductionism, that is, when they boil things down to the small, and take that to give a complete picture of reality. Thinking that the only true reality is the particulate is an extreme form of reductionism, and I have talked to some scientists who believe that; even when that extreme view isn’t held, the particulate is often privileged above other levels of analysis as somehow being “more real.”
Let me give an example at this point to clarify: before me as I write is a coffee table. That coffee table can be examined at many different levels, small and large. One can examine it at the atomic, subatomic, and quantum levels and find out many interesting and valuable things about its small bits. But we can already imagine some limitations of only sticking with this level of analysis. Just by looking at, say, the electrons or protons of its atoms we could not tell that they belong to a coffee table, rather than, say, a carpet or an elephant. Indeed, at that level a scientist wouldn’t be able to tell where the particles of the coffee table end and those of other objects begin. While examining the table’s atomic structure would help to know some of its properties such as hardness, that wouldn’t help us know its shape or dimensions or age or many other properties.
If we move up in our analysis to the level of molecules we can say that some of the table is made of organic material (wood, and the polymers of the paint) and some of is of metal (the nails and screws). Moving up from molecules to the cellular level we can start to see the structure and density of its cellulosic fibers. If we then look at the table from a larger but still precise mid-level mechanical level, a carpenter could measure (to various degrees of tolerance) the size of the nails and holes, the thickness of the paint, and the dimensions of the parts. We could also at this level define the boundaries that distinguish it from other objects.
Now if we take a step back and observe the whole table as a mid-level object, we can assess its size in relation to the other objects in the room, see its color under various lighting conditions, pick it up and measure its weight, and so on. At this point we have taken in the whole of the table as an individual object, but we are not finished, because the table exists in relation to other things. It is an object used by members of a conscious species for certain purposes, and they experience and judge the table in light of those purposes: does the table properly perform its function of supporting objects at a certain height? Does it look good as part of the room and in relation to the other objects in it? The table may also be part of the context of a family; perhaps it was an heirloom or a gift. The table is also part of a larger social system of production and consumption, i.e. it is an economic good: someone made it, and earned a living by doing so. Or, more accurately, many people helped make it, people organized into a particular form of economic structure (a corporation), which is itself part of a larger economic, political, and social system. And the table is also part of wider nature as a moment in a cycle of resource use: its components were once trees on the land and metal under the ground, and eventually it will go into a garbage pile, and the biosphere will do its work, causing it to age and decay and return to the earth. The table is thus part of the larger system of the biosphere. None of this, obviously, would be perceivable through a reductionist analysis alone.
Larger even still, the table is a part of Earth’s gravitational system and is thus stuck fast to the planet’s surface. Even the moon and the other bodies of the solar system exert a little gravitational tug on it, and so it is also a part of that larger system, and eventually all that the table once was will be swallowed up by our sun when it expands to become a red giant. Indeed, the table is part of the gravitational systems of the Milky Way galaxy and of the universe as a whole, and is touched by the cosmos: at night, when it is dark and I leave the window shades open, the light of distant suns billions of light years from here touches my table.
The table exists at all these levels, in all these ways, and the universe itself does not hold any of them to be more or less important than any other. The universe itself is not an intelligent being; it has no point of view on the matter, and so it does not set up any level as the privileged one, including the reductionist level. That is a fully human prejudice. The table exists as a composition of microscopic particles, true enough. But it also exists, really exists, as a mid-level physical object, and as a smaller part of larger social, global, planetary, and interstellar systems.* The same is the case for all the other things we experience: tables, chairs, houses, cars, animals, our individual selves, societies, language, or anything else. We have become so used to thinking in reductionist terms that it can be hard to grasp that mid- and large-scale objects are fully real.
There may be goods reasons to sometimes use reductionist analysis, but is no reason to ontologically privilege it. The universe doesn’t, so why should we? We humans might engage in or carry out an analysis of small things for certain purposes, as when a chemist wants to determine a better glue for holding the cellulose of future coffee tables together. But there are other purposes for which a reductionist analysis is altogether unnecessary and in many cases inappropriate, misleading, and wasteful. A carpenter or furniture maker usually need not use an electron microscope or conduct a chemical analysis to make a prize piece of furniture. Indeed, for a carpenter to think much, or at all, about the atoms of the thing is a waste of time and mental effort. A political theorist need not make a reductionist analysis either to assess the efficiency or the justice of an economic system of which the table is a part. Quantum physics is, in fact, completely irrelevant for such an analysis. But both the carpenter and the political philosopher are doing perfectly legitimate, perfectly necessary forms of thinking and analysis.
In December 2011 I attended a lecture/debate at the National Association for the Advancement of Science entitled “Can Science Explain Everything,” which included as one of the featured speakers Harvard physicist Lisa Randall, who made an excellent presentation overall. She laid out a view of the world as consisting of a scale from the quantum level of smallness to the cosmic level of largeness, and noted that we inhabit a middle position in which our normal scale of perception is in meters. She thought that these scales were sublime, that people use different approaches, including science, art and religion, for different scales, and she also acknowledged that wholes are not necessarily the mere sum of their parts. Her position, however, seemed to be that the components of things are fundamental, and she used the phrase “fundamental ingredients” repeatedly. The example that she gave was that of a soufflé: a soufflé is made of fundamental ingredients, and you can’t make a soufflé without them. The implication seemed to be that the ingredients were “fundamental” and should have ontological priority because, obviously, if you don’t have the parts of an object, then you don’t have the object. Take away the ingredients, and poof! – you don’t have a soufflé. And she is entirely right and that is not controversial. However, there is more to the story: while you can’t make a soufflé without ingredients, you also can’t make one without a cook. Or a soufflé pan. Or an oven. I’m not being facetious here, I’m being entirely serious: you need these things (and many more) to make a soufflé just as much as you need ingredients, and its existence depends on them. While the smaller ingredients are necessary, these larger aspects of context are necessary too for the existence of the soufflé. It’s the same for other, larger objects: there must be an economy that supplies the ingredients for the food and the items to cook it, a society that trains cooks, a culture that records soufflé recipes, the kind of planet and biosphere that provides the ingredients, etc. If the reason that the ingredients, the small bits, are ontologically fundamental is because the soufflé’s existence depends on them, then the larger aspects of context have to count as ontologically fundamental too. There is no reason to privilege small ingredients; all levels are real.
Thus it is no use to say that reductionism describes the universe merely because it describes the behavior of particles, for there is so much reality going on at larger levels that a reductionist approach does not explain. So it doesn’t describe the universe, or not completely, anyway, and leaving out other forms of analysis suitable to higher levels is a mistake that, I will argue later, leaves us unable to make necessary distinctions and evaluations of the qualitative aspects of the world, and therefore cripples the mental capacity called judgment. And if we are to solve our moral, social, and even our environmental problems then we need to develop new habits of thought based on a non-reductive materialism that facilitates the development of a better sense of that judgment.
* Social systems are just as physical and material as the other systems, although that claim will have to be elaborated upon elsewhere.
|
<urn:uuid:3bfc28df-a067-4e69-a34c-89f5f93071bc>
|
CC-MAIN-2017-43
|
http://www.dailydissident.com/non-reductionist-materialism-part-i-it/
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187823229.49/warc/CC-MAIN-20171019050401-20171019070401-00847.warc.gz
|
en
| 0.957784 | 2,541 | 2.5625 | 3 |
Did You Know Wyoming Has A Spring That ‘Breathes’? [VIDEO]
Located in Swift Creek Canyon near Afton, Wyoming, this rhythmic spring is one of only three features in the entire world that have this type of behavior. It is the largest of its kind and is known as 'The Spring That Breathes'.
The 'Intermittent Spring' (also known as the 'Periodic Spring') is not your average mountain spring. Water will flow for 10-18 minutes at a time, then will completely stop for many minutes before continuing to flow again.
Researchers believe this fluctuating flow pattern is due to a 'siphon effect': spring water flows into, and fills up, a cavity underground. When the water reaches the top of the cavity, it rushes out, creating a siphon that pulls all of the water out of the cavity. The flow then stops until the cavity fills up again and the whole process starts over.
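To make the periodic behavior easier to picture, here is a toy fill-and-siphon model in Python. The inflow rate, cavity size, and siphon drain rate are invented numbers chosen only to produce a cycle on a roughly similar scale to the one described above; they are not measurements of the actual spring.

```python
# Toy model of a siphon-fed spring: the cavity slowly fills, then empties
# quickly once the siphon engages, so the outflow switches on and off.
INFLOW = 1.0       # water entering the cavity per minute (assumed units)
DRAIN = 8.0        # water leaving per minute while the siphon runs (assumed)
CAPACITY = 60.0    # cavity volume at which the siphon starts (assumed)

level, siphoning = 0.0, False
for minute in range(180):
    level += INFLOW
    if siphoning:
        level -= DRAIN
        if level <= 0:
            level, siphoning = 0.0, False    # siphon breaks; spring goes quiet
    elif level >= CAPACITY:
        siphoning = True                     # cavity full; spring starts flowing
    if minute % 15 == 0:
        state = "flowing" if siphoning else "quiet"
        print(f"minute {minute:3d}: {state:7s} stored water = {level:5.1f}")
# The printout alternates between long quiet stretches and short bursts of flow.
```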
If you are looking to experience this phenomenon for yourself, late summer to early fall is the best time to see the Periodic Spring.
|
<urn:uuid:53242ef0-c9af-4f46-9aa2-a3813867cb83>
|
CC-MAIN-2023-23
|
https://jackfmcasper.com/did-you-know-wyoming-has-a-spring-that-breathes-video/
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224643784.62/warc/CC-MAIN-20230528114832-20230528144832-00699.warc.gz
|
en
| 0.934533 | 236 | 2.984375 | 3 |
According to the American Dietetic Association and the USDA, a diet rich in fruits and vegetables is associated with decreased risk for chronic disease, and eating them as part of a reduced-calorie diet can be beneficial for weight management.
Be sure to include the recommended amounts every day:
• Vegetables- 2 1/2 cups
One serving equals 1 cup raw or cooked vegetables or vegetable juice (low-sodium), or 2 cups of raw leafy greens
• Fruits- 2 cups
One serving equals 1 cup fruit or 100% fruit juice,1/2 cup dried fruit, or 1 small piece of fruit
Why are Fruits and Vegetables Important?
• Eating a diet rich in fruits and vegetables as part of an overall healthy diet may reduce risk for stroke and possible other cardiovascular disease, and type 2 diabetes
• Eating a diet rich in fruits and vegetables as part of an overall healthy diet may protect against certain cancers such as mouth, stomach, and colon-rectum cancer
• The fiber in fruits and vegetables may reduce the risk of coronary heart disease
• The potassium in fruits and vegetables may reduce the risk of developing kidney stones and may help to decrease bone loss
Tips on How to Enjoy More Fruits and Vegetables
• Cut up fruit and vegetables and have them ready to eat in the refrigerator
• Have a bowl of fresh fruit including apples, oranges, and bananas on the counter, ready to grab
• Stock up on fresh fruits and vegetables while they are in season and buy frozen when out of season to save money
• If buying canned vegetables, be sure to choose “No Salt Added” varieties. (rinsing does wash away some of the sodium otherwise)
• If buying canned or packaged fruit, be sure to choose varieties that are packed in water or their own fruit juice not syrup
• Add vegetables to your pizza for a delicious way to include your veggies
• Help kids make fruit kabobs for a fun after school snack
• Include a green salad with dinner most nights of the week
• Shred zucchini or carrots into meatloaf, casseroles, or muffins
• Include chopped vegetables in lasagna or pasta sauce
American Dietetic Association www.eatright.org
Cooking Light www.cookinglight.com
Preparation time: 10 minutes
1 english muffin
2 tablespoons whipped fat-free strawberry cream cheese
1/4 cup strawberries, sliced
1/4 cup red grapes, quartered
1/4 cup canned mandarin oranges, drained
Instructions: Toast the english muffin until golden brown. Spread cream cheese on toasted muffin. Arrange sliced strawberries, grapes, and orange slices on top of the cream cheese. Slice into quarters and “yummy – fruit pizza!”
1/2 Cup of Fruit per Serving
Fruit and/or Veggie Color(s): Red, Orange
Nutrition Information per serving: calories: 228, total fat: 1.3g, saturated fat: 0g, % calories from fat: 5%, % calories from saturated fat: 0%, protein: 10g, carbohydrates: 46g, cholesterol: 5mg, dietary fiber: 4g, sodium: 374mg
Yield: 6 servings
1 teaspoon olive oil
3/4 cup sliced mushrooms
3/4 cup chopped zucchini
1/2 cup sliced carrot
1/2 cup chopped red bell pepper
1/2 cup thinly sliced red onion
1 (26-ounce) bottle fat-free tomato basil pasta sauce
2 tablespoons commercial pesto
1 (15-ounce) carton part-skim ricotta cheese
6 hot cooked lasagna noodles (about 6 ounces uncooked), cut in half
3/4 cup (3 ounces) shredded part-skim mozzarella cheese
Preheat oven to 375º.
Heat oil in a medium saucepan over medium heat. Add mushrooms and the next 4 ingredients
(mushrooms through onion); cook for 5 minutes, stirring frequently. Add pasta sauce; bring to a boil.
Reduce heat, and simmer 10 minutes.
Combine pesto and ricotta in a small bowl. Spread 1/2 cup tomato mixture in the bottom of an 8-inch square baking dish or pan coated with cooking spray. Arrange 4 noodle halves over tomato mixture. Top noodles with half of ricotta mixture and 1 cup tomato mixture. Repeat layers, ending with noodles. Spread remaining tomato mixture over noodles; sprinkle with mozzarella.
Cover and bake at 375º for 30 minutes. Uncover and bake an additional 20 minutes. Let stand 10 minutes.
Note: To make ahead, assemble as directed; stop before baking. Cover and refrigerate overnight. Let stand 30 minutes at room temperature; bake as directed.
Calories: 328 (30% from fat), Fat: 10.9g (sat 5.4g, mono 3.8g, poly 0.9g), Protein: 18.2g, Carbohydrate: 39g, Fiber: 3.7g, Cholesterol: 31mg, Iron: 2.9mg, Sodium: 491mg, Calcium: 418mg
Cooking Light, JANUARY 2001
|
<urn:uuid:514a0731-be07-4a5c-bdbc-0356225943f3>
|
CC-MAIN-2017-34
|
http://www.hilliardschools.org/dcr/parents/wellness/fruits-and-vegetables/
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886103891.56/warc/CC-MAIN-20170817170613-20170817190613-00276.warc.gz
|
en
| 0.852383 | 1,087 | 3.140625 | 3 |
This report describes the competencies of 307 children in the Wellington region, just before they started school.
It shows how the children scored for 6 'being' competencies - communication, curiosity, perseverance, social skills with peers, social skills with adults, and individual responsibility - and for 4 'doing' competencies - literacy, mathematics, logical problem-solving, and motor skills.
The results show that low family income is the main factor associated with lower competency scores. Good-quality early childhood education experience clearly benefits all children. But to make a bigger difference for children from low-income families, ECE services need more resources, so that they can provide a higher standard of support.
5 years old and competent is a handy summary of this report.
|
<urn:uuid:ab5b106f-e3e9-412f-8d10-f49797add61b>
|
CC-MAIN-2020-16
|
https://www.nzcer.org.nz/research/publications/competent-children-5-families-and-early-education
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370528224.61/warc/CC-MAIN-20200405022138-20200405052138-00343.warc.gz
|
en
| 0.929 | 156 | 2.859375 | 3 |
Welcome to the world of smart homes, where cutting-edge technologies revolutionize the way we live and interact with our living spaces. Experience the transformative power of smart homes and become a pioneer in this digital revolution! Enroll in our Digital Disruption course and unlock the incredible benefits of connected devices, automation, and energy-saving strategies. Learn how to optimize your home’s energy usage, simplify daily tasks through automation, and embrace a more sustainable and cost-effective lifestyle. In this article, we will explore the incredible benefits of smart homes, focusing on the lessons we can learn from connected devices, automation, and energy-saving strategies. Discover how these advancements enhance our lifestyle while reducing our environmental impact.
Connected Devices: The Backbone of Smart Homes
Connected devices serve as the backbone of smart homes, enabling automation and control of various functions and appliances. These devices have the ability to communicate with each other and with homeowners, providing unprecedented convenience and efficiency. A prime example is the smart thermostat, which learns your schedule, preferences, and even weather conditions to automatically adjust your home’s temperature, optimizing both comfort and energy savings. According to a study by the U.S. Department of Energy, smart thermostats can save homeowners an estimated 10-15% on heating and cooling costs, resulting in significant energy and cost savings over time.
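To put that percentage in concrete terms, the sketch below multiplies the cited 10-15% reduction by an assumed annual heating-and-cooling bill. The $2,000 figure is purely an assumption for illustration, not a number from this article; substitute your own utility costs.

```python
# Back-of-the-envelope smart-thermostat savings estimate.
ANNUAL_HVAC_COST = 2000.0                # assumed yearly heating + cooling spend (USD)
SAVINGS_LOW, SAVINGS_HIGH = 0.10, 0.15   # 10-15% range cited above

low = ANNUAL_HVAC_COST * SAVINGS_LOW
high = ANNUAL_HVAC_COST * SAVINGS_HIGH
print(f"Estimated annual savings: ${low:.0f} to ${high:.0f}")
# With the assumed bill, that works out to roughly $200-$300 per year.
```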
Automation: Streamlining Tasks and Enhancing Experiences
Automation is a game-changer in smart homes, empowering homeowners to remotely control and schedule tasks and routines. Imagine being able to create custom automation scenarios that turn off lights, adjust thermostat settings, and lock doors when you leave home, or activate them when you arrive. Automation not only simplifies daily tasks but also enables smart devices to work together in sync, creating a seamless and intuitive experience. For instance, when you start streaming a movie on your smart TV, your smart lights can automatically dim, creating a cozy and immersive movie-watching ambiance.
Energy Savings: Optimizing Efficiency and Reducing Costs
One of the most compelling benefits of smart homes is the potential for significant energy savings. By leveraging connected devices and automation, homeowners can optimize their energy usage, reduce waste, and lower their energy bills. Let’s consider the example of smart plugs, which have the ability to automatically turn off power to idle devices such as gaming consoles or chargers. This simple action helps combat “vampire” energy consumption, resulting in notable energy savings. Moreover, smart lighting systems can adjust brightness and color temperature based on natural light levels and occupancy, not only saving energy but also enhancing comfort and convenience. Furthermore, smart homes can integrate with renewable energy sources like solar panels or smart grids, allowing homeowners to generate their own clean energy and reduce their carbon footprint, leading to a more sustainable future.
Smart homes offer a plethora of benefits by harnessing the power of connected devices, automation, and energy-saving strategies. From increased convenience and comfort to optimized energy usage and cost savings, smart homes are transforming the way we live and interact with our living spaces. Don’t miss out on the opportunity to revolutionize your living space and embrace the future of smart homes. Enroll in our Digital Disruption course and discover how connected devices, automation, and energy-saving strategies can transform your home into a cutting-edge sanctuary. Learn how to save energy, streamline tasks, and reduce your carbon footprint, all while enjoying the comfort and convenience of a smart home.
|
<urn:uuid:34f3f42b-fc30-42df-9457-899b660723d3>
|
CC-MAIN-2023-50
|
https://peteralkema.com/smart-homes-connected-devices-automation-and-energy-savings/
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679099942.90/warc/CC-MAIN-20231128183116-20231128213116-00591.warc.gz
|
en
| 0.900896 | 712 | 2.625 | 3 |
What is Gum Disease, and why does it matter?
I drive around Franklin County with a bumper sticker that reads, “Gum Disease can kill more than your smile.” I know it sounds like something a health geek would care about, but I’d like to share with you why all of us can live a little better and a little longer if we understand this simple phrase. Here’s the story…
Gum disease (periodontal disease) affects 46 percent of the U.S. population and is caused by bacteria that grow on the teeth under the gums. It’s something to know about even if you are a good brusher and have a beautiful smile, because it’s possible to have great teeth and yet terrible gum disease, but you might never know it. The people around you know it, though, because gum disease causes a kind of really bad breath that you can’t smell but others can, sometimes from three or four feet away.
It’s possible to have an ongoing low grade gum infection and lose half the bone that once held your teeth tightly in the gums. Sometimes the only sign is occasional bleeding gums when brushing or flossing.
The inflammation process involved in Gum disease is a “sneaky killer” like high blood pressure, diabetes and sleep apnea, because there are few symptoms until you experience some other major health failure. Just like high blood pressure can lead to stroke at any age, undiscovered diabetes can permanently effect your eyesight, and chronic sleep apnea worsens heart disease and emotional stability, gum disease causes irreversible changes that can be enormously hard to manage after the fact.
Doctors are learning more and more about how gum disease may be related to heart disease, unstable diabetes control and many other serious conditions. If you just google “gum disease and health risks” you will see why my bumper sticker is spot on. Here’s a short list:
Research finds that 80% of people who die of heart attacks also have gum disease. That's a very important correlation!
Research finds that men with gum disease are 49% more likely to develop kidney cancer, 54% more likely to develop pancreatic cancer, and 30% more likely to develop blood cancers.
Research suggests a link between osteoporosis and bone loss in the jaw that may lead to tooth loss because the density of the bone that supports the teeth may be decreased, which means the teeth no longer have a solid foundation.
Research has found that bacteria that grow in the oral cavity can be aspirated into the lungs to cause respiratory diseases such as pneumonia, especially in people with periodontal disease.
If you would like to learn more about your treatment options, please contact us to set up an appointment!
|
<urn:uuid:e263e69d-6915-46e1-9119-6b2537a08a47>
|
CC-MAIN-2017-34
|
http://www.fiddleheaddental.com/gum-disease
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886103270.12/warc/CC-MAIN-20170817111816-20170817131816-00616.warc.gz
|
en
| 0.952052 | 573 | 2.75 | 3 |
What is Depression?
Depression (major depressive disorder) is a common and serious illness that negatively affects how you feel, think, and act. Fortunately, it is also treatable. Depression causes feelings of sadness and loss of interest in activities you used to enjoy. It can lead to many emotional and physical problems and reduce your ability to function at work and home.
Symptoms of Depression can Array from Mild to Severe and may Include:
- Feeling the sad or depressed mood
- Loss of interest or wish in activities that used to be enjoyable
- Appetite changes: weight loss or gains unrelated to diet
- Sleep disturbance or excessive sleep
- Loss of energy or increased fatigue
- Increased aimless physical activity (e.g., inability to sit still, walk, wring hands) or slow movement or speech (these activities must be severe enough for others to notice)
- Feelings of worthlessness or guilt
- effort thinking, concentrating or making decisions
- Thoughts of death or suicide
Symptoms must last at least two weeks and reflect a change in your previous level of functioning for a diagnosis of depression.
In addition, medical conditions (such as thyroid problems, a brain tumour, or vitamin deficiencies) can mimic symptoms of depression, so it’s important to rule out common medical causes.
Depression affects about one in 15 adults (6.7%) in any given year, and one in six people (16.6%) will suffer from depression at some point in their lives. Depression can occur at any time, but it typically first appears between late adolescence and the early twenties. Women are more likely than men to suffer from depression; some studies show that one-third of women experience a major depressive episode during their lifetime. Heritability is high (about 40%) when first-degree relatives (parents/children/siblings) have had depression.
Depression is Different from Sadness or Grief
The death of a loved one, the loss of a job or the breakup of a relationship are difficult experiences for a person. It is normal for sadness or grief to arise in response to such situations. Those who are experiencing loss can often describe themselves as “depressed”.
But being sad is not the same as being depressed. The grieving process is natural and unique to each person, and it shares some features with depression. Both grief and depression can be accompanied by intense sadness and withdrawal from normal activities. They also differ in important ways:
- In grief, painful feelings come in waves, often interspersed with positive memories of the deceased. In major depression, mood and interest (pleasure) are diminished for most of two weeks.
- Even when grief is severe, self-esteem is usually preserved. Feelings of worthlessness and self-hatred are common in major depression.
- During grief, thoughts of death may arise when people think about "joining" a deceased loved one. In major depression, such thoughts are focused on ending one's life because the person feels worthless, undeserving of life, or unable to cope with the pain of depression.
Grief and depression can coexist. For some people, the death of a loved one, job loss, physical abuse, or a major disaster can trigger depression. When grief and depression coexist, the suffering is more severe and lasts longer than grief without depression.
The distinction between grief and it is important and can help people get the help, support, or treatment they need.
Depression Risk Factors
It can affect anyone, even someone who seems to live in relatively ideal conditions.
Several factors may play a role in it:
- Biochemistry: Differences in certain substances in the brain can contribute to symptoms of depression.
- Genetics: Depression can run in families. For example, if one identical twin suffers from depression, the other has a 70% chance of becoming ill at some point in her life.
- Personality: People with low self-esteem, who are easily overwhelmed by stress, or who are generally pessimistic are more prone to depression.
- Environmental Factors: Chronic exposure to violence, neglect, abuse, or poverty can make some people more vulnerable to it.
How Is Depression Treated?
Depression is one of the most treatable mental disorders. Between 80% and 90% of people with depression eventually respond well to treatment. Almost all patients experience some relief of symptoms.
Before making a diagnosis or starting treatment, a health professional should perform a thorough diagnostic evaluation, including an interview and physical examination. In some cases, a blood test may be done to make sure the depression isn't related to a medical condition, such as thyroid problems or vitamin deficiencies (treating the medical cause would relieve the depression-like symptoms). The evaluation will identify specific symptoms and examine medical and family history as well as cultural and environmental factors in order to arrive at a diagnosis and plan a course of action.
- Electroconvulsive Therapy (ECT)
|
<urn:uuid:28ca10a7-85a5-4c70-b6e7-f092add21733>
|
CC-MAIN-2023-40
|
https://www.fashionglee.com/depression/?noamp=mobile
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510259.52/warc/CC-MAIN-20230927035329-20230927065329-00464.warc.gz
|
en
| 0.94297 | 1,004 | 3.65625 | 4 |
Autism is an impairment of the growth and development of the central nervous system, which in turn affects learning ability, emotion and memory. Autism affects information processing in the brain by altering how nerve cells and their synapses connect and organize.
Autism is one of three types of disorder recognized in the autism spectrum (ASDs); the other two are Asperger syndrome (which lacks delays in cognitive development and language) and Pervasive Developmental Disorder - Not Otherwise Specified (PDD-NOS), which is diagnosed when the full set of criteria for autism or Asperger syndrome is not met.
Autism is believed to have a strong genetic basis, although the genetics of autism are complex, and it is unclear whether ASD is explained more by rare mutations or by rare combinations of common genetic variants. Parents usually notice signs in the first two years of their child's life. Some children develop signs gradually, but others develop normally and then regress.
Possible Indicators of Autism Spectrum Disorders:
Does not babble, point, or make meaningful gestures by 1 year of age
Does not speak one word by 16 months
Does not combine two words by 2 years
Does not respond to name
Loses language or social skills
Some Other Indicators
Poor eye contact
Doesn’t seem to know how to play with toys
Excessively lines up toys or other objects
Is attached to one particular toy or object
At times seems to be hearing impaired
From the start, typically developing infants are social beings. Early in life, they gaze at people, turn toward voices, grasp a finger, and even smile.
Although there is no known cure for autism, there is also no one-size-fits-all treatment plan. Life expectancy is normal for a child with autism.
Treatment options may include:
· Speech/Language Therapy
· Occupational Therapy
· Sensory Integration Therapy
· Cranio-sacral Therapy
· Music/Sound Therapy
· Play Therapy
Please keep in mind this is not a complete list of the therapies that we use for autism, just a list of common ones. Your child will be evaluated by one of our highly trained professionals, and from that evaluation a treatment plan will be implemented.
|
<urn:uuid:d55d63a2-db5b-42ce-889b-abd1b13f026a>
|
CC-MAIN-2017-43
|
http://kidsworktherapycenter.com/autism/
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187825147.83/warc/CC-MAIN-20171022060353-20171022080353-00279.warc.gz
|
en
| 0.913452 | 464 | 3.953125 | 4 |
An adhesive is a compound that adheres or bonds two items together. Adhesives may come from either natural or synthetic sources.
Phenol (C6H5OH), the primary component of the resin used for wood adhesives, is made from petroleum. Making adhesives and plastics from wood waste or other biomass uses a solvent to extract and concentrate a phenolics-and-neutrals fraction from the pyrolysis oil.
The global presentation identifies the Pressure Sensitive Adhesives demand in the USA, Western Europe and Japan which, when combined, represent about 80% of the world marketplace.
|
<urn:uuid:5c99cd62-b3cf-41d9-afcc-4313b7a231e7>
|
CC-MAIN-2017-34
|
http://smelink.blogspot.com/2011/11/adhesives.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886108268.39/warc/CC-MAIN-20170821114342-20170821134342-00039.warc.gz
|
en
| 0.837884 | 132 | 3.03125 | 3 |
Common motion-picture film sizes are shown above. Most professional theatrical movies are shot on 35-mm film, while amateur and nontheatrical films often use smaller sizes. The ratios, called aspect ratios, indicate the relationship between frame width and frame height. A ratio of 1.33:1 was most common until about the mid-1950s, when 1.85:1 became standard in the United States. The 35-mm Panavision wide-screen system uses a special lens to condense the image horizontally. When projected, the image is spread out so that it is 2.40 times as wide as it is tall.
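The arithmetic behind the Panavision figure can be sketched in a few lines. The frame dimensions used here (0.825 in by 0.690 in) are an assumed 35-mm projector aperture chosen for illustration, not values taken from the text; the 2x horizontal squeeze is the Panavision-style anamorphic factor.

```python
# How an anamorphic lens turns a nearly square film frame into a wide image:
# the lens squeezes the picture horizontally by 2x on the film, and the
# projector stretches it back out to full width.
SQUEEZE = 2.0                      # horizontal squeeze/stretch factor
frame_w, frame_h = 0.825, 0.690    # assumed 35-mm frame size in inches

projected_ratio = (frame_w * SQUEEZE) / frame_h
print(f"Projected aspect ratio: {projected_ratio:.2f}:1")
# Prints about 2.39:1, matching the roughly 2.40:1 wide-screen ratio above.
```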
|
<urn:uuid:80bef892-c4d4-4dc5-9d6b-d24af5654720>
|
CC-MAIN-2020-05
|
https://kids.britannica.com/students/assembly/view/90139
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250607407.48/warc/CC-MAIN-20200122191620-20200122220620-00050.warc.gz
|
en
| 0.966643 | 128 | 3.3125 | 3 |
1. How many axes of symmetry does a circle have?
An infinite number of mirror lines or axes of symmetry can be drawn on a circle.
2. What is the difference between a chord and a segment?
A chord is a line, whereas a segment is an area or region of a circle.
3. In how many places does a tangent meet a circle?
A tangent touches a circle in just ONE place.
4. How many circles can be drawn through two points?
An infinite number of circles can be drawn through two points.
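A quick numerical illustration of why the answer is "infinite": any point on the perpendicular bisector of the two points is equally far from both, so it can serve as the centre of a circle through them. The two sample points and the centre positions below are arbitrary choices.

```python
# Every centre on the perpendicular bisector of A and B gives a circle
# passing through both points, so infinitely many such circles exist.
from math import dist

A, B = (0.0, 0.0), (4.0, 0.0)
mid_x = (A[0] + B[0]) / 2            # the bisector here is the line x = 2

for t in (-3.0, 0.0, 1.5, 10.0):     # slide the centre along the bisector
    centre = (mid_x, t)
    print(f"centre {centre}: |centre-A| = {dist(centre, A):.3f}, "
          f"|centre-B| = {dist(centre, B):.3f}")
# The two distances always agree, so each centre defines a valid circle.
```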
|
<urn:uuid:94b29791-fe4c-4ad5-9d7e-3c65b435f590>
|
CC-MAIN-2020-10
|
http://bestmaths.net/online/index.php/year-levels/year-10/year-10-topics/circle-properties/faq/
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875146485.15/warc/CC-MAIN-20200226181001-20200226211001-00048.warc.gz
|
en
| 0.926871 | 118 | 3.453125 | 3 |
This is the Sunday Series, a peek at things in the art music world.
Music has had a close bond with architecture for quite some time. Concert halls both affect the music and are affected by it. They have also given birth to great architectural designs around the globe.
This series presents the beauty of the structures in which musical creativity flows abundantly.
Inaugurated in 1900, Boston Symphony Hall is considered one of the best symphonic halls in the world. It adopted the traditional shoebox shape that is very popular in Europe: a shoebox hall is long and relatively narrow, much like a shoe box, with narrow balconies running along its length.
It is 61 feet high, 75 feet wide, and 125 feet long from the lower back wall to the front of the stage. Stage walls slope inward to help focus the sound. With the exception of its wooden floors, the Hall is built of brick, steel, and plaster, with modest decoration. Side balconies are very shallow to avoid trapping or muffling sound, and the coffered ceiling and statue-filled niches along three sides help provide excellent acoustics to essentially every seat.
The hall’s leather seats are still original from 1900. The hall seats 2,625 people during Symphony season, 2,371 during the Pops season, and up to 800 for dinner.
This hall is the home of the Boston Symphony Orchestra, one of the seven best orchestras in the United States. The orchestra is now led by James Levine.
The Symphony Hall organ, a 4,800-pipe Aeolian-Skinner (Opus 1134) designed by G. Donald Harrison, installed in 1949, and autographed by Albert Schweitzer, is considered one of the finest concert hall organs in the world. It replaced the hall’s first organ, built in 1900 by George S. Hutchings of Boston, which was electrically keyed, with 62 ranks of nearly 4,000 pipes set in a chamber 12 feet deep and 40 feet high. The Hutchings organ had fallen out of fashion by the 1940s when lighter, clearer tones became preferred. E. Power Biggs, often a featured organist for the orchestra, lobbied hard for a thinner bass sound and accentuated treble.
The 1949 Aeolian-Skinner reused and modified more than 60% of the existing Hutchings pipes and added 600 new pipes in a Positiv division. The original diapason pipes, 32 feet in length, were reportedly sawed into manageable pieces for disposal in 1948.
~ all information is from Wikipedia
Have you been thinking that you need to exercise more but you don’t know where to start?
Participating in regular physical activity will help you:
– maintain your muscle mass
– increase your bone density
– improve your balance, posture and flexibility
– have better control of chronic disease symptoms
– decrease pain and depression
– prevent falls
The Centers for Disease Control and Prevention (CDC) states that 28% of the population over the age of 50 is physically inactive. This is a sad fact considering that 4 out of 5 of the most limiting chronic health conditions could be managed or prevented with physical activity.
As you age, your heart muscle and arteries can become stiffer. The ligaments surrounding your joints become less elastic, leading to increased pain and stiffness. Your body also metabolizes food more slowly, which can lead to weight gain.
Worldwide, the World Health Organization has linked 3.2 million deaths to insufficient physical activity. The Centers for Disease Control and Prevention (CDC) reports that falls are the number one cause of fatal and non-fatal injuries among people over the age of 65 in the United States.
Not only does exercise help you feel better, but you may also look better and can enjoy a higher quality of life. Exercise helps you continue to do many of the things you love and need to do.
Many seniors are afraid to exercise at home because they are worried they may injure themselves; that is a valid concern.
Exercise is meant to improve your health, not cause you to get hurt. As always, check with your physician before starting any new exercise programs.
Helpful Tip: If you are worried about your safety while trying new exercises, seek a healthcare/fitness professional ahead of time. You both can have fun learning new exercises and you will know somebody is there to help you if you need it.
Nurse Next Door has curated a list of exercises that may be beneficial for seniors. These six user-friendly exercises can be done at home and focus on three core areas: strength, balance and flexibility.
Exercises for Strength
Strength training is not just for bodybuilders! Stronger muscles help you to continue to do all the things you need to do in a day from walking up stairs to getting out of a chair.
Dean Maddalone, a certified strength and conditioning specialist, states that you can lose 3-8% of your muscle mass each decade. Strength training increases bone density by 1-3% and reduces your risk of death from heart disease by 41%.
Pretending that you are about to sit down in a chair can strengthen your entire lower body.
- Stand in front of a chair with your feet as far apart as your hips.
- Bend your knees while keeping your shoulders and chest upright.
- Lower your bottom so you sit down.
- Then push your body back up to return to a standing position.
Wall push-ups can strengthen your entire upper body, with a focus on your arms and chest. And you don't have to get down on the floor and worry about being stuck there!
- Stand in front of a sturdy wall, up to two feet away but as close as you need to.
- Place your hands up against the wall directly in front of your shoulders.
- Keep your body straight and bend your elbows to lean in towards the wall.
- Stop with your face close to the wall and then straighten your arms to push your body away from the wall.
Exercises for Balance
Falls are one of the leading causes of visits to the emergency room. About 30% of people over the age of 65 will fall each year. Often a fall can result in fractures and declining health. Balance helps you to keep yourself on your feet and recover from those accidental upsets.
Single Foot Stand
This exercise is similar to standing like a flamingo but less dangerous.
- Stand behind a steady, unmoveable chair and hold onto the back.
- Pick up your left foot and balance on your right foot as long as is comfortable.
- Place your left foot down, and then lift up your right foot and balance on your left foot.
You are aiming to be able to stand on one foot without holding the chair for up to a minute.
Tippy Toe Lifts
You can pretend to be a ballerina while strengthening your legs and improving your balance with this exercise.
- Stand beside or behind a chair or counter and place your hands on the surface for support.
- Push yourself up onto your tippy toes as high as is comfortable and then return back to a flat foot. Repeat.
Exercises for Flexibility
Tight and sore muscles make it difficult to do things that were once simple such as pulling up your socks or reaching for something high up. Improving your flexibility helps you maintain good posture and move more freely and easily.
A study published in the International Journal of Physical Therapy found that after 10 weeks of stretching 2-3 times a week, older adults had better spinal mobility, an increased ability to flex their hips and a more steady gait.
Wall Snow Angels
Do you remember plopping down on your back in a patch of freshly fallen snow, sliding your arms and legs up and down to form a perfect “snow angel”?
This exercise helps to open up your chest and to decrease that tightness in the middle of your back that develops as a result of looking down. But you don’t have to fall on your back in the snow to do this “wall angel”!
- Stand about 3 inches away from the wall and place your head and lower back flat against the wall.
- Put your hands at your sides with the palms out and the backs of your hands against the wall.
- Keeping your arms touching the wall, raise them up above your head (or as high as is comfortable).
Repeat a couple times to make some beautiful imaginary wings for your angel.
The Head Turn
One of the simplest and easiest stretches to do! This exercise involves a movement you do whenever you shake your head “no”.
- Stand or sit with your back straight and your shoulders relaxed.
- Turn your head slowly to the right until you feel a light stretch.
- Hold that position and then turn slowly to the left.
This exercise helps to keep your neck mobile, which is important for driving and being aware of your surroundings!
Consider going to a local gym for a personal trainer or sign-up for senior-specific exercise classes at your local senior and community center!
Did you know that Nurse Next Door’s caregivers can accompany you to these classes and observe or join the class right alongside you? Learn more about our Companionship services!
The 2 Main Types of Tafsir with Examples
Usama Ghumman
If you want to learn the Qur'an, you must also know the topics that explain the meaning of its verses, such as Islamic Tafsir and the types of Islamic Tafsir. Tafsir of the Qur'an is essential for a better and deeper understanding of its meaning. Reading the English translation of a verse is not enough to find its exact meaning.
Explaining the meaning of the verses was part of Muhammad's (PBUH) role after the revelation of the Qur'an, because his role was not only to convey God's message but also to clarify it.
Although the language of the Qur'an is Arabic, even the Muslims of the Arabian Peninsula could not always understand its full meaning, so they relied on the interpretation of Rasulullah (PBUH). This led to Tafsir in Islam.
Tafsir of the Qur’an helps Muslims understand the principles of Islam, the oneness of God, its obligations, and prohibitions.
The Companions appreciated the importance of the narrations of the Prophet Muhammad (SAW), which included his interpretation of the verses, so many of them, such as Abu Hurairah and Anas Ibn Malik, passed his teachings on to other Muslims.
The importance of Tafsir
Tafsir is required to know the true meaning of the Qur'an. Tafsir has two main types, illustrated below with examples from the Qur'an. Without knowing Tafsir, it isn't easy to interpret the Qur'an. The purpose of Tafsir is to clarify what God has revealed to humanity through the Holy Scriptures.
Allah said in the Qur’an, “Truly, this Qur’an leads to the right path.”
Anyone can find their way directly from this book. A person trained in Qur’anic explanation can discern the meaning of a verse better than the average believer.
Finally, learning how Tafsir is taught, and studying Tafsir with examples, can be applied in everyday life to know more about the Qur'an. Sheikh Yasir Qadhi explains the relationship between the Qur'an and Tafsir in his book An Introduction to the Sciences of the Qur'an. He put it this way:
“The Qur’an is like a treasure in a glass bottle; Men can see and benefit from it, but they need Interpretation, because Interpretation is the key that unlocks the treasure and enables mankind to make use of it. ”
Another scholar, Iyaas ibn Muawiyah, said that people who read the Qur'an but do not know its Tafsir are like a group of people who receive a message from their ruler in the middle of the night and have no light; they are left not knowing what the message contains. A person familiar with Tafsir is like someone who approaches them with a lamp and reads out what is written in the message.
Three reasons why learning Tafsir is important:
1. To perform daily acts of worship (prayer) with concentration and awareness.
A person who prays several times daily can easily slip into routine, with the mind wandering to personal interests, plans, and worries.
This can be avoided when a person knows the significance and meaning of each verse he recites in his prayer. It helps him attain khushu' (humility and focus) in prayer and builds a strong bond between him and Allah, and it can be achieved by studying Tafsir.
2. For a better understanding of the Qur'an by non-Arabs.
God uses the easiest, most straightforward, and most concise language in the Qur'an; with it, the meaning is clear to those who know Arabic or come from an Arabic-speaking background.
However, it is not easy for non-Arabs to understand the Qur’an. For this purpose, the study of Tafsir is necessary for all Muslims.
Iyaas bin Mu'awiyah (d. 122 AH) once said:
"A person who reads the Qur'an and does not know its explanation (Tafsir) is like a group of people to whom a letter from their ruler arrives at night, and there is no light to read it by.
Therefore, they have only a vague idea of what is in the letter. A person who knows Tafsir is like someone who comes to them with a light and tells them what is written in it."
3. To build an eternal bond with our Creator.
In the Qur'an, being close to God means that, whatever the situation and whatever you are doing, God is always with you. The term "close to God", however, carries a deeper meaning.
As the Qur’an says: “Truly I have created man, I know how his inner thoughts grow, and I am closer to him than his veins.” [Quran 50:16].
Learning Tafsir creates a more profound connection in the hearts of believers than reading or memorizing the Qur’an. It brings the heart and mind closer to the word of God and makes us stand on the straight and right path.
Types of Tafsir in Islam
Scholars divide the Science of Tafsir into two main types:
Tafsir bil Riwaya (or bil Athar)
This form of Tafsir began to appear after the science of Tafsir was established. Here the interpretation of the Qur'an is based on the primary, transmitted sources.
The Qur'an itself is the primary source of this type of Tafsir. The Companions also used the words of Muhammad, peace be upon him, in interpreting the Qur'an.
Therefore, the sources of Tafsir bil Riwaya are the Qur'an, the "Sunnah" (the traditions of Muhammad), and the interpretations of the Companions.
For example, one verse says that Allah revealed the Qur'an on a blessed night without naming that night. But in the verse of Al-Qadr, Allah identifies this night as the night of Al-Qadr:
{Verily, We have revealed this in Lailatul Qadr}
Another example is interpretation through the narrations of Prophet Muhammad (PBUH). He explained the following verse of Surah Al-Baqarah:
{Eat and drink until the white thread of dawn becomes distinct to you from the black thread}
Muhammad (PBUH) explained this verse, as narrated by Adi bin Hatim, who asked: "O Messenger of Allah! What is meant by the white thread and the black thread? Are they two threads?" He replied, "Your pillow must be very wide if you can see the two threads under it," and then explained, "No, it is the darkness of the night and the whiteness of the day."
Tafsir bil Al-Ra'y
This form of Tafsir is based on Ijtihad (independent reasoning). Scholars of this type of interpretation compare the verses of the Qur'an with one another to obtain an accurate interpretation.
They also compare the interpretations of the Companions, and so they build on the first type of knowledge: Tafsir bil Riwaya.
Books of Tafsir bil Riwaya
You can read many books about this type of explanation:
- Al-Jawahir al-Hisan fi Tafsir al-Qur'an, Abd al-Rahman al-Tha'alabi (876 H).
- Al-Kashf wa al-Bayan, Abu Isak Aḥmad al-Tha labi (427 H).
- Fatḥ al-Qadir, Muhammad b. Ali b. Shawkani (1250 H).
- Jalal al-Din Suyuti (911 H) Durr al-Manthur fi al-Tafsir bil al-Mathur.
- Baḥr al-Ulum (Samarqandī), Abu Layth Samarqan (373 H).
- Ma’alim al-Tanzil, Abu Muḥammad b. Mas’ud Baghawi (516 H).
- Tafsir Jami al-Bayan fi Ta'wil al-Qur'an, Abu Ja'far Muhammad b. Jarir al-Tabari (310 H)
Books of Tafsir bil Ra'y
There are many books on this type of Interpretation.
- Madarik al-Tanzil wa Haqa’iq al-Tawil, Ibn Maḥmud Nasafi (710 H).
- Ta’wilat al-Qur’an, Abu Mansur Muḥammad b. Muhammad Maturidi (333 H).
- Lubab al-Tawil fi Maani al-Tanzil, Ala al-Din al-Khazin (725 H).
- Tafsir Kabir (Mafatih al-Ghayb), Fakhr al-Din Razi (606 H).
- Ghara’ib al-Qur’an wa Ragha’ib al-Furqan, Ḥasan b. Muhammad Qummi Naishaburi (730 H).
- Taḥrir wa al-Tanwir, Ibn Ashur (1393 H).
Determining the meaning of the Qur'an ("Tafsir") is necessary for an accurate and complete understanding of its verses. The two types of Tafsir are Tafsir bil Riwaya and Tafsir bil Ra'y. The primary sources of interpretation of the Qur'an in Islam are the Qur'an, the Sunnah, and the interpretations of the companions of the Prophet Muhammad (SAW).
The science of interpretation serves not only to understand the verses of the Qur'an better but also to recognize the laws of Islam that Muslims must follow and to study the history of Islam. In addition, interpreting the verses of the Qur'an strengthens the relationship with God and shapes social norms.
As far as I am concerned, these are different forms of private, communal ownership or several property. I would tend to define property rights as private if the answer to the following question is “no”: “is there some entity that is not the state that has the right to exclude others from certain activities on the property?”. However, not everybody shares that vocabulary. Those of a more left-leaning disposition might prefer to use terms that have a more communitarian ring to describe these arrangements. That is fine, but the more we talk about these things, the more we might find we agree!
Highly relevant to this discussion about semantics is Elinor Ostrom. She was the first female winner of the Nobel Prize in economics in 2009. She died in 2012. When she won the Nobel Prize, articles appeared in The Guardian praising her from a left-leaning perspective and in The Telegraph from a somewhat different perspective at more or less the same time. The latter was put on the reading list for the then Conservative shadow cabinet. In her last appearances in the UK before she died, Elinor Ostrom gave the Hayek lecture at the free-market Institute of Economic Affairs and, earlier on the same day, she gave a presentation to a conference of several thousand environmentalists.
Ostrom managed to draw together people of goodwill who had very different political perspectives in debates about the management of environmental resources. To those who support private property, her research gave amazing insights into non-state solutions to resource management. To left-leaning people she came over as a communitarian who believed that communities acting together could solve their problems without creating markets that involved buying and selling of pre-packaged property rights.
So what was her contribution? And what has it got to do with Catholic social thought?
Ostrom discovered that, in practice, environmental resources were often managed effectively when property rights existed in what she described as a “polycentric” order with different organisations being responsible for different aspects of governance. Privately owned land and resources, as we would understand the idea in the West, would often not lead to sustainable management; state ownership would often be a disaster. However, by way of an alternative to both, she found that communities, from the bottom up, often develop methods of controlling the use of environmental resources – fish and forests in particular – that are remarkably stable and effective. Communities would develop their own systems of enforcement to restrict resource use to sustainable levels. Sometimes the systems that economists recommend (such as tradable quotas in fishing rights) don’t work in practice because they cannot be monitored. However, alternatives, such as limiting the time out at sea of each boat or restricting net designs might work. The community, she argued, is often remarkably effective at developing and enforcing rules that allow the exploitation of natural resources within sustainable limits.
In this framework, the main role of various levels of government, she argued, is to support those systems and not to take them over. For example, the government might provide information on resource levels.
The links between her work and the principles of Catholic social teaching are obvious, though I do not think she can have read any Catholic social teaching at all. The principle of subsidiarity is certainly at play – resource management is left to the lowest level and the higher levels of social organisation assist the lower levels. The focus is on the individual as somebody who needs to work and use environmental resources to sustain their family. However, the individual is clearly acting within the community.
Ostrom put a lot of emphasis on the importance of trust within communities and also reciprocity – key themes of Pope Benedict’s encyclical Caritas in veritate.
Ostrom argues that in traditional economics: “external officials are expected to impose an optimal set of rules on those individuals involved. It is assumed that the momentum for change must come from outside the situation rather than from the self-reflection and creativity of those within a situation to restructure their own patterns of interaction”. By way of contrast she argued in her Nobel Prize lecture: “Extensive empirical research leads me to argue that instead [of designing institutions that force individuals to achieve better outcomes], a core goal of public policy should be to facilitate the development of institutions that bring out the best in humans.” This certainly has a strong resonance with a Catholic view of the principle of subsidiarity and integral human development which is sceptical of the idea of development being done to the community from outside.
There are two lessons from this. The first is that, following the papal encyclical Laudato si, we should study how communities, especially in poor countries, actually manage environmental resources in practice. Ostrom’s work shows the way here. The second is to do with the way language gets in the way of agreement. Elinor Ostrom was widely admired by almost everybody (and mis-represented by nobody) because she was able to express herself in a language that resonated with all people of goodwill.
Mont Saint Michel
Mont Ste. Michel sits on an island just 600 metres from land, a position that made it accessible at low tide to the many pilgrims to its abbey, but defensible as an incoming tide stranded, drove off, or drowned would-be assailants. The Mont remained unconquered during the Hundred Years' War; a small garrison fended off a full attack by the English in 1433. The reverse benefits of its natural defence were not lost on Louis XI, who turned the Mont into a prison, and thereafter the abbey came to be used more regularly as a jail during the Ancien Régime.
November 20th, 2015
Unfortunately, much of what we understand about the Spiro site comes to us only in bits and pieces. This site remains the location of one of the largest and longest episodes of looting at any American archaeological site in history. First identified in 1914 by Joseph Thoburn, a professor at the University of Oklahoma, the mounds were part of the Choctaw Nation land allotment. Divided into at least four allotment partitions, the Spiro site was dispersed to Choctaw freedmen. At this time, the owners forbade all digging on the property, forcing Thoburn to abandon his attempted excavation. By 1933, however, this restriction was reversed. Needing the money, the owners now leased the land to a group of commercial diggers calling themselves the Pocola Mining Company (PMC). Having no respect for the site or the Caddoan people who created it, they dug with reckless abandon. Their only goal was to extract as much material from the mound as possible. Soon, the PMC discovered the most unusual feature ever revealed in North America. They had found a hollow chamber, or Spirit Lodge, in the mound's interior, containing thousands of freshwater pearls, eight hundred engraved and unengraved marine shell cups, stone and wooden statuary, basketry, feathered textiles, masks, large copper plates, and countless other items. "Moving swiftly, these men grabbed all the ancient relics they could sell and tossed the textiles, pot sherds, broken shell, and cedar elements onto the ground." As described by Forrest E. Clements, head of the Department of Anthropology at the University of Oklahoma, in 1945, "Sections of cedar poles lay scattered on the ground, fragments of feather and fur textiles littered the whole area; it is impossible to take a single step in hundreds of square yards around the ruined structure without scuffing broken pieces of pottery, sections of engraved shell, and beads of shell, stone, and bone."20
To save what remained of the site, Oklahoma passed the state’s first antiquities laws in 1935. They required all excavations to be licensed through Forrest Clements and the Department of Anthropology at the University of Oklahoma. Denying all other licenses to the site, Clements collaborated with the Works Progress Administration (WPA) to exhume what was left of the mound. From 1936 to 1941, a joint excavation was undertaken by the University of Oklahoma, the University of Tulsa, the Oklahoma Historical Society, and the Woolaroc Museum. Even after two years of looting, what was discovered at the Craig Mound still remains the largest assemblage of material ever revealed at a single Mississippian site.
Starting a new school year can be challenging for many families. Children often feel the stress of meeting new teachers and classmates and wondering what the upcoming school year will look like for them. Parents can play a very important role in the success of their child’s school year.
Here are some tips from Trina Gillam, who serves as one of the supervisors of Le Bonheur’s Healthy Families program.
Sleep Discipline — every child needs a proper sleep schedule before school starts. Develop and implement a schedule before school begins.
Focus on the Positive — discuss the positive aspects of school with your child. Set positive goals for the school year with your child.
Express Feelings — allow your child to express good or bad feelings about the upcoming school year.
Take a Tour — if your child will be attending a new school, ask if you can take a tour. Try to schedule your tour with a principal or school administrator. This will help your child become familiar with new surroundings before the first day.
Review the School’s Website — allow your child to view the school’s website with you. Many schools post information such as calendars, student policies and teacher web pages on their sites. This is an opportunity for your child to view the school virtually.
Motivate, Motivate — ignite excitement about the upcoming school year. Invite family and friends over for a back-to-school celebration. Use your child’s school mascot as a celebration theme.
Reassurance — let your child know you will be available to support his or her learning and that school is important. Attend school meetings and activities. Keep up with assignments and your child’s progress. Assist your child with homework. Remain in ongoing contact with your child’s teacher(s).
Resilience — encourage your child to be resilient. Children who are resilient can handle challenging situations, solve problems, focus and understand mistakes happen to everyone.
Here are a few websites that Trina recommends for families:
By Ixchel Rosal
April 14, 2014
This year the Gender and Sexuality Center (GSC) within the Division of Diversity and Community Engagement has begun a research project to examine the impact of its Peers for Pride program. The Gender and Sexuality Center provides opportunities for all members of the UT Austin community to explore, organize, and promote learning around issues of gender and sexuality. The center also facilitates a greater responsiveness to the needs of women and the lesbian, gay, bisexual, transgender, queer, questioning, and ally (LGBTQA) communities through education, outreach and advocacy.
The primary goal of the GSC's Peers for Pride (PfP) program is to train peer facilitators to lead workshops about sexual orientation and gender identity across the UT campus. Students in PfP earn academic credit for their participation in the year-long program. During the fall semester they take a course entitled, "Confronting LGBTQ Oppression: Exploring the Issues and Learning the Skills to Communicate Them," where students learn basic facilitation skills while taking an in-depth look at some issues facing lesbian, gay, bisexual, transgender and queer individuals. During the spring semester course, "Facilitating Dialogues on LGBTQ Oppression: Peers for Pride in Action," peer facilitators have the opportunity to fine-tune their facilitation skills and lead workshops across campus.
The research study begun this year explores the narratives of Peers for Pride peer facilitators. Through the lens of Harro’s Cycle of Socialization, the following research questions will guide the study:
- How do facilitators conceptualize their participation in Peers for Pride?
- Were facilitators able to develop a sense of agency and capacity? In what ways did participation as a facilitator provide students with a greater sense of confidence that they could impact change and be effective leaders?
- What implications emerge in terms of identity and peer leadership development?
Harro’s (2010) Cycle of Socialization articulates the process by which we are each born with a myriad of social identities (i.e. our social identity profile) related to gender, age, skin color, ethnicity, ability status, sexual orientation, etc, and are then socialized to play certain roles prescribed to those identities by an imbalanced social system of oppression.
This qualitative case study seeks to investigate the impact of participation in the Peers for Pride Program on peer educators by interviewing former Peers for Pride peer facilitators from the past five years. According to Merriam, qualitative researchers are “interested in understanding the meaning people have constructed, that is, how they make sense of their world and the experiences that they have in the world” (2001, p. 6).
Two primary data collection methods will be used: semi-structured interviews with former PfP facilitators and document review. One-hour semi-structured interviews will be conducted with the participants. The semi-structured interviews were designed specifically for this project and will focus on how the facilitators conceptualized participation in the program, as well as how participation was situated within the context of the institution. The interview will help establish rapport with the participant, collect base-line data, and address the study’s research questions.
SI research fellow Dr. Kiersten Ferguson and postdoctoral fellow Dr. Stella Smith, are working closely with GSC staff on the design and implementation of the study. Results from the study are anticipated to be available in fall, 2014. For more information about the study please contact Ixchel Rosal (Director, GSC) at email@example.com.
Keep Haralson Beautiful, located in Tallapoosa, GA, recently shared its perspective on a proposed landfill in its community. The potential landfill has been the topic of controversy among local residents. In fact, hundreds of people recently attended a Board of Commissioners’ meeting to lobby against the project, according to this Times-Georgian article.
“In Haralson County, where a passionate debate has erupted over the proposal to build a new landfill, the idea of a common ground existing between the two sides seems improbable,” Keep Haralson Beautiful writes on Facebook. “Yet, common ground is what Keep Haralson Beautiful has focused on for the last six years and it is where the group wants residents of the county to focus as well.”
Keep Haralson Beautiful continues:
Ultimately, both sides of the landfill debate make important arguments that are worth exploring and understanding. Those residents supporting the landfill point to the economic benefits that the county will likely accrue. Those against the building of the landfill express concerns about environmental safety and aesthetic value. Each objective is important and the conflicting priorities attached to them by our citizens has generated strong feelings.
But, as it turns out, the two sides have a lot more in common than we think and when it comes to the values in the county that we prioritize, we actually all share the same singular belief: Trash is bad. Actually, there is more to that statement. It should read: Trash is bad and trash is a problem.
Read the full post here.
What is Pupil Premium?
Introduced in 2011, the pupil premium is a sum of money given to schools each year by the Government to improve the attainment of disadvantaged children.
This is based on research showing that children from low income families perform less well at school than their peers. Often, children who are entitled to pupil premium face challenges such as poor language and communication skills, lack of confidence and issues with attendance and punctuality. The pupil premium is intended to directly benefit the children who are eligible, helping to narrow the gap between them and their classmates.
Schools can choose how to spend their pupil premium money, as they are best placed to identify what would be of most benefit to the children who are eligible.
Common ways in which schools spend their pupil premium fund include -
- Extra one-to-one or small-group support for children within the classroom.
- Employing extra teaching assistants to work with classes.
- Running catch-up sessions before or after school, for example for children who need extra help with maths or literacy.
- Running a school breakfast club to improve attendance.
- Providing extra tuition for able children.
- Providing music lessons for children whose families would be unable to pay for them.
- Funding educational trips and visits.
- Paying for additional help such as speech and language therapy or family therapy.
- Funding English classes for children who speak another language at home.
- Investing in resources that boost children's learning, such as laptops or tablets.
For more information on the strategy behind Pupil Premium Funding at Grimsdyke School please click the link below to read our strategy.
Please find below our most recent Pupil Premium Statement. Although the funding is allocated on a financial-year basis, the spend is shown per academic year; therefore the fund allocation and spend will not be a direct match.
Groundwater salinity increases with depth, from fresh and brackish in the alluvial aquifer (EC = 387 to 5810 μS/cm) to brackish and saline in the shallow bedrock (EC = 3867 to 9371 μS/cm), the inter-bedded sandstone-siltstone water-bearing zones (EC = 2395 to 6100 μS/cm) and the coal seams (EC = 3014 to 4999 μS/cm) (Parsons Brinckerhoff, 2012a; 2012c; 2013a). Similar salinity patterns were also observed for the Stratford, Duralie and Rocky Hill coal mines located in the Gloucester Basin (Australasian Groundwater and Environmental, 2013; Heritage Computing, 2009; 2012).
The stable isotope values of groundwater collected from all formations indicated that all tested water samples are of meteoric (rainfall) origin and that no enrichment has occurred due to evaporation. In terms of age, the oldest water was identified in the inter-bedded sandstone/siltstone water-bearing zone (mostly an aquitard) and the youngest in the alluvium. This corresponds with the EC/salinity trend. Together, the stable isotopes, age analysis and EC values indicate that the high salinity of the older groundwater is likely related to in-situ water mineralisation.
Iron (Fe), fluorine (F), phosphorus (P) and mercury (Hg) were identified as elements typical of coal seams, which can potentially assist analysis of aquifer interactions. Only Fe and P were included in water quality monitoring, and P concentrations were particularly elevated in all formations, which is not typical for groundwater. Total organic carbon levels increased with water age, and methane concentrations were, on average, 10 µg/L, 140 µg/L, 12,789 µg/L and 21,931 µg/L in the alluvium, shallow bedrock, inter-bedded sandstone-siltstone and coal seams respectively.
Product Finalisation date
- 1.1.1 Bioregion
- 1.1.2 Geography
- 1.1.3 Geology
- 1.1.4 Hydrogeology and groundwater quality
- 1.1.5 Surface water hydrology and water quality
- 1.1.6 Surface water – groundwater interactions
- 1.1.7 Ecology
- Contributors to the Technical Programme
- About this technical product
It’s not just fish populations shrinking, according to a new study. Fish themselves will be much smaller within a few decades.
Global warming linked to greenhouse-gas emissions will cause the body weight of more than 600 types of marine fish to dwindle up to 24% between 2000 and 2050, according to a report in the journal Nature Climate Change.
Additional factors, such as overfishing and pollution, will only make matters worse.
Ultimately, the changes “are expected to have large implications for trophic interactions, ecosystem functions, fisheries and global protein supply,” according to the study.
Aquatic creatures grow depending on the temperature, oxygen and resources available in water, according to researchers. Fish will struggle to breathe and develop as oceans become warmer and less oxygenated.
Fewer, smaller fish could result in a supply crunch, leading to higher prices of seafood down the line.
Food costs have soared of late, due in large part to this summer’s severe drought. Concerns from a British trade group of a “world shortage” of pig products have sparked fears of a so-called Aporkalypse of unaffordable bacon.
Web Exercise #1: Terror Management Theory and Research Methods
Terror management theory suggests that people find thoughts of death profoundly disturbing, so they cling to worldviews, like the just world hypothesis, that provide comfort and distraction. In the "Spotlight on Research Methods" section of Chapter 5, you read about the mortality salience manipulation. For this exercise, complete the mortality salience manipulation on your own by visiting the Terror Management Theory website that provides the actual Mortality Salience (MS) Manipulation.
- Complete the MS Manipulation.
- Complete the “Just World” survey in the Applying Social Psychology to Your Life section.
- Finally, journal your thoughts and observations on how completing the MS Manipulation may have influenced how you completed the “Just World” survey.
Web Exercise #2: False Consensus and Uniqueness Padlet
The false consensus effect is the tendency to think that most other people agree with our personal opinions. Conversely, the false uniqueness bias is the perception that our good and positive traits are fairly rare (i.e., we are special and above average). As a group, create an online, collaborative Padlet to share your favorite images, notes, and ideas related to these two self-serving biases. Make sure to allow comments so you and your classmates can share your thoughts with the rest of the class.
After more than four decades, Judy Chicago continues to be an influential feminist artist, author, and educator. Her pioneering work helped establish the Feminist Art Movement of the 1970s.
Born Judy Cohen in Chicago, Illinois, in 1939, Chicago attended the Art Institute of Chicago and the University of California, Los Angeles. Chicago’s early work was Minimalist, and she was part of the landmark Primary Structures exhibition in 1966 at The Jewish Museum in New York. She turned to feminist content in the late 1960s. At this time she changed her last name to Chicago, the location of her birth.
Believing in the need for a feminist pedagogy for female art students, Chicago began the first Feminist Art Program at California State University, Fresno, in 1970. The following year, with artist Miriam Schapiro, she co-founded the Feminist Art Program at California Institute of the Arts, Valencia. Womanhouse (1972), a collaborative installation the two artists created with their students, transformed an abandoned building into a house representative of women's experiences.
Chicago is perhaps best known for her iconic The Dinner Party (1974–1979), which celebrates women’s history through place settings designed for 39 important women. The monumental, collaborative project incorporates traditional women’s crafts such as embroidery, needlepoint, and ceramics.
Chicago’s work has continued to address themes from women’s lives with The Birth Project (1980-1985) and The Holocaust Project (1985-1993). She is a prolific lecturer and writer, and she has taught at Duke and Indiana Universities and the University of North Carolina at Chapel Hill. Her numerous awards include grants from the National Endowment for the Arts and the Getty Foundation and four honorary doctorates. She currently resides with her husband, photographer Donald Woodman, with whom she collaborates on artistic and teaching opportunities.
Demystifying terroir: maybe it’s the microbes making magic in your wine
Scientists and winemakers are now recognising that part of what makes a wine-growing region special may be its resident microbes.
A wine’s terroir is what makes it special, says Greg Allen. He’s a California winemaker who has studied and worked in the industry for 20 years.
“There’s a rush of emotion when I think of terroir,” he says. A wine’s terroir may recall the slope of the hill where lush grapes grow — and maybe the angle of sunlight that warmed those grapes on that hill, or the way water moves through the soil that nourished them.
But when Allen thinks of terroir, he also think about microbes — about bacteria and fungi.
See, for centuries now, vintners and wine enthusiasts alike have puzzled over what gives wines their unique flavour profiles, which come from grapes grown in a specific geographic location.
Scientists and winemakers like Allen are now recognising that part of what makes a wine-growing region special may be its resident microbes. A new study published in the journal mBio found that the collection of bacteria and fungi, or “microbiome,” on pressed grapes can help predict the flavor profile of a finished wine.
“We know that humans can detect something in wine that allows us to separate it out by region,” says David Mills, a microbiologist at the University of California, Davis and the study’s senior researcher. “What that something is, is something that we can apply science to figure out.”
Mills and his research team already knew from previous research that the microbes on grapes vary across broad growing regions. Cabernet sauvignon grapes grown in Sonoma, Calif, for example, carry different microbes than the same grapes grown about 150 miles south on the Central Coast.
The scientists also knew from other research that metabolites — the chemical compounds that shape the flavour and texture of wines — vary regionally.
What they did not know was whether microbes and metabolites differ at the finer scale of neighbouring vineyards, and whether a grape’s microbial community had any connection to the flavour metabolites in the finished wine.
To figure it out, Mills and his UC Davis colleagues teamed up with Far Niente and Nickel & Nickel wineries in Oakville, Calif, to study chardonnay and cabernet sauvignon grapes from across Napa and Sonoma counties.
Allen, a former graduate student of Mills and a winemaker for another winery in the Far Niente and Nickel & Nickel family, led efforts to collect more than 700 samples of wine in progress during the 2011 season.
The team then identified the groups of bacteria and fungi in each sample by sequencing the microbial DNA. They also used a chemical analysis to identify collections of flavour metabolites in the final products.
They discovered that neighbouring vineyards often harbour unique collections of microbes, as well as unique groups of flavour metabolites in the finished wines. They also found that they could use the microbial community of the pressed grapes to predict which collections of flavour metabolites would end up in the finished wine.
The scientists now want to know whether the microbes directly impact flavour metabolites or whether the two are merely correlated.
Either way, a winemaker in the future could test the microbiome of grapes to determine whether a particular growing site will produce desirable metabolites in the wine, or to identify possible microbial issues early in the fermentation process.
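To give a feel for the kind of analysis being described, the sketch below trains a generic regression model to predict a metabolite level from microbial relative abundances. It is only a hedged illustration on synthetic data: the feature setup, model choice (random forest) and settings are assumptions for demonstration, not the pipeline Mills' team actually used.

```python
# Illustrative sketch only: predicting a wine flavour metabolite from a
# grape-must microbiome profile. Synthetic data; model and settings are
# assumptions, not the methods used in the mBio study.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)

n_samples, n_taxa = 200, 50                            # fermentations x microbial taxa
X = rng.dirichlet(np.ones(n_taxa), size=n_samples)     # relative abundances sum to 1

# Pretend a handful of taxa drive one flavour metabolite, plus measurement noise.
weights = np.zeros(n_taxa)
weights[:5] = [3.0, -2.0, 1.5, 2.5, -1.0]
y = X @ weights + rng.normal(scale=0.05, size=n_samples)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

print("R^2 on held-out fermentations:",
      round(r2_score(y_test, model.predict(X_test)), 2))
```

A held-out R² well above zero in a setup like this would mirror the study's finding that the must microbiome carries a predictive signal about the finished wine's metabolite profile.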
“I think this is a really cool study,” says Anna Katharine Mansfield, a wine scientist at Cornell University. “Measuring as much as you can measure, even in a wine you are happy with, gives you a baseline and a way to guess what went wrong if the flavour changes,” she says. “Wine is an art and a science. But if the art can be informed by the science, then we have a broader palette to paint with.”
Daniele Tessaro, a vintner at Barboursville Vineyards in Virginia, agrees. But he says it is unlikely that he’ll be analyzing microbiomes any time soon. “You always have to decide whether testing something is worth your time and money,” he says.
Does demystifying terroir with science detract at all from the romantic qualities of a good wine? Greg Allen doesn’t think so: “I’ll have a little extra satisfaction in my next glass of wine knowing that the microbes that contributed to the final product were unique to a specific vineyard.”
The Montessori teaching-learning method is one of the most widely popular 'alternative' educational pedagogies today. The method is used extensively all over the world and is predominantly implemented in early childhood and primary school education. Highly trained teachers from a Montessori teacher training online program create a personalized learning environment shaped by a child's unique aptitudes, interests, and learning approach.
The Montessori teaching-learning technique comprises a prepared environment for young learners. Usually, a Montessori classroom brings together children of different ages, generally clustered in 3-year bands. This naturally encourages socialization and solidarity among them. Montessori education is a completely hands-on teaching-learning experience.
Let us have a look at the 5 basic principles of Montessori teaching-learning education —
The Montessori Method was developed by the Italian educationalist Dr. Maria Montessori. Here are its five basic principles:
- Respect for the Child
One of the major principles underlying the entire Montessori method is respect for the child. Dr Montessori strongly believed children should be respected. Montessori-trained teachers respect all students and must learn to observe young learners without judgement. Respect runs through the whole Montessori process, with teachers instilling respect and kindness in children.
Dr Montessori says, “As a rule, however, we do not respect children. We try to force them to follow us without regard to their special needs. We are overbearing with them, and above all, rude; and then we expect them to be submissive and well-behaved, knowing all the time how strong is their instinct of imitation and how touching their faith in and admiration of us. They will imitate us in any case. Let us treat them, therefore, with all the kindness which we would wish to help to develop in them.”
- The Absorbent Mind
Dr Montessori believed that children have absorbent minds: the young mind is prepared and eager to learn. Children learn from their surroundings and are always absorbing new information. All children are born with distinctive capacities and a desire to learn.
With the help of their senses, children continually absorb information from the world around them. Children educate themselves, and this has been a core belief of Montessori education. The Montessori setting helps young learners by providing them with educational experiences that encourage their sense of belonging, self-assurance, independence and agency.
- Sensitive Periods
The first six years of life are vital in a child's growth, as children form an understanding of themselves and of their surroundings. These sensitive periods are crucial for learning and education. Dr Maria Montessori believed children can learn certain information most easily during these sensitive periods of heightened learning.
Montessori educators watch for these sensitive periods to check whether children are being provided with enough resources and opportunities. Even though all children experience sensitive periods, their timing and sequence differ for every child.
- A Prepared Environment
Children learn best in a prepared environment, according to the Montessori principles. They require a place in which they can do things for themselves without restrictions. A well-prepared environment offers learning materials as well as experiences to children in a methodical format.
This also provides young learners with the freedom to select their own learning materials and try things for themselves. Montessori teachers can create the best teaching-learning environment for children by observing them. The key characteristic and aim of a prepared environment is self-reliance.
- Auto Education
Auto education is also known as self-education. This concept holds that children are capable of educating themselves, and it is one of the most important principles of the Montessori technique. Montessori instructors provide the learning environment and the guidance children need to educate themselves. Introducing children to new study materials in a prepared environment and encouraging exploration are core values of Montessori education.
What is the Montessori classroom setting like?
Well, one of the most broadly acknowledged features of a Montessori classroom setting is the multi-age classroom. Montessori schools believe multi-age classrooms empower young learners to work more effectively at their natural pace.
Montessori classrooms are typically set up in 3-year age bands. Montessori educators also think this allows children to develop better social skills, which eventually help them grow academically in a co-operative and non-competitive learning atmosphere.
Therefore, Montessori learning spaces are organized in such a way as to best encourage learning. Let us look at some of the considerations in a Montessori classroom arrangement:
- Montessori classroom is quiet, peaceful and well-organized.
- In these kinds of setups, walls are usually painted in neutral shades, with minimal furnishings and artwork.
- A Montessori classroom doesn't have rows of desks, and the teacher does not stand at the front of the classroom to deliver a lesson.
- Montessori classrooms also consist of child-sized equipment, tiny spaces for reading, accessible shelves, child-sized kitchen utensils, etc.
- Artwork is cautiouslyselected and exhibited at children’s eye level.
- Resources that appeal to all five senses – sight, touch, smell, taste and hearing – are used in Montessori classrooms.
- Montessori classrooms generally have enough space for the young learners to move around without troubling others.
- Living plants can also be seen in Montessori classrooms.
Montessori educators gently help learners to maintain the cleanliness of this organized environment, keeping it orderly and attractive.
Now, after knowing all of these, you must be wondering about the curriculum of the Montessori approach, right? Well, let’s have a quick read then –
The Montessori curriculum basically covers 5 key areas of learning, including:
Sensorial – Sensorial education generally helps children to categorize their environment and develop orientation. Various sensorial materials were designed by Dr Maria Montessori to help children refine their sensory experiences. The major aim of sensorial learning is to support the development of a child's intellectual senses. There are sensorial resources that focus on visual perception, tactile impressions, the auditory sense, and olfactory and taste sensitivities.
Practical Life – In Montessori classrooms, young learners learn daily-life skills such as how to get dressed, make snacks and care for plants and animals. Children also learn appropriate social interactions: being sympathetic and supportive, listening without interrupting, resolving conflicts calmly and so on. These practical life activities encourage independence and fine- and gross-motor coordination.
Language – Different language materials are designed in the Montessori approach to develop language by exploring both its written and spoken forms. Through various language-based activities, children learn phonetic sounds, how to build words phonetically and so on. Children progress by using tangible materials in their written work and also learn to communicate their opinions and feelings.
Culture – In Montessori education, cultural activities help the child experience music, stories, artwork and items from their community's cultural context. Geography, science, zoology and botany are all encompassed in this area. The culture area encourages children to develop their capacity for creativity while building fine motor skills. Children also develop awareness of and gratitude for the world around them.
Mathematics – Young learners learn to identify numerals through hands-on activities. Mathematical ideas are presented to the child with tangible sensorial materials. This helps children comprehend elementary maths concepts: number recognition, counting, sequencing of numbers and so on.
Trained Montessori teachers are in great demand all across the globe, and Montessori teacher training online makes that training accessible. If you choose to employ the Montessori method, getting educated in this teaching approach is your first step. It is important to keep in mind that no two learners are the same. Just like students, teachers also need to keep learning to keep growing.
|
<urn:uuid:77379369-2930-4f99-9fc8-35832e1c7b95>
|
CC-MAIN-2023-14
|
https://www.educationprogram.us/skills-development/integrating-child-centered-montessori-values-into-a-classroom.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943471.24/warc/CC-MAIN-20230320083513-20230320113513-00648.warc.gz
|
en
| 0.940615 | 1,780 | 3.890625 | 4 |
Friday, April 30, 2010
Reflection/Brainstorm - TED.com
Our initial experimentation was a free exploration of form generation. Combining digital form generation as a stimulus for a physical model output, we chose to explore the haptic side of modeling.
Through our experimentation we have explored modeling as:
Container: a hollow mass, essentially a facade construct. This provided little explanation of internal layout, but rather an exploration of faceted geometry to create unique form.
Stacking: skeletal models, dealing with the practicality of structure. Laser modeling enabled us to explore the intricacy of structural resolution. By massing within Revit and converting our buildings into laser-cut planes, models became an exploration of strengthening connections and stability.
Modular: creative exploration. Through the laser cutting of set shapes we were able to rapidly explore form in the physical setting. The shapes imposed limitations on our forms, but as a result the forms provided a syntax of pattern and rhythm generated from their components.
These explorations represent the exhaustion of physical modeling, and have left us asking ourselves: what other stimuli will enable us to generate new forms of a more rigorous composition, and so bring us closer to the realisation of a flexible architecture?
Taking the Seattle Library as a precedent, it can be seen that its form is generated from a direct interpretation of a creative programming arrangement.
This has led to our investigation in Experiment 1: what makes a building program, and how does that programming generate, or become influenced by, the building form?
|
<urn:uuid:13596915-c581-4858-9205-3ab71031c025>
|
CC-MAIN-2017-43
|
http://architecture4resolution.blogspot.com/2010/04/reflectionbrainstorm-tedcom.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187824775.99/warc/CC-MAIN-20171021114851-20171021134851-00567.warc.gz
|
en
| 0.904609 | 314 | 2.609375 | 3 |
University of North Carolina, Chapel Hill
Blended learning, or hybrid learning, used to be an innovative way of teaching; however, it has become the standard method of instruction delivery, and the one preferred by students (Pomerantz & Brooks, 2017). The term blended came from combining online assignments with in-class, face-to-face instruction.
Blended learning has shifted to synchronous and asynchronous instruction with integration of digital solutions to accomplish learning outcomes.
According to the Pomerantz and Brooks 2017 ECAR Study of Undergraduate Students and Information Technology, 95 percent of undergraduate students own a laptop or a smartphone and 30 percent own a laptop, a smartphone, and a tablet. With the rise of student access to mobile technology, it has become expected that instruction will be more blended, but faculty members often are not given the tools or training to help bridge this gap.
The 2017 Faculty and Information Technology study by Pomerantz and Brooks found that almost half of faculty disagreed or strongly disagreed that online learning helps students learn more effectively. Many faculty members have this belief despite the study from Barbara Means in 2014 that showed blended learning has stronger learning outcomes than face-to-face or online instruction alone (Means, Bakia & Murphy, 2014). So, how is one to start the journey to create an effective blended learning class?
A solution presented by the Instructional Designers at the Center for Digital Learning and Innovation at Seattle University may help solve the problem. Since it isn’t effective to just haphazardly insert technology into a class and call it blended, there needs to be a purpose for each assignment and the modality in which it is presented and interacted with by students. The team at Seattle University provided a guide for their faculty to create meaningful units that incorporate instructional technology tools appropriately to facilitate a successful blended learning environment. The designers created an interactive Blended Flow Toolkit workflow and the Blended Flow Planner to assist faculty in designing a learner-centered backwards design (Anthoney, Jacobson & Snare, 2018). Luckily, these tools are available for all to use and help faculty design a thoughtful blended learning unit.
The Blended Flow Planner steps the faculty members through each part of a unit and gives suggestions as to what readings, videos or interactive data can be used for each part of the unit.
When you click on each “+” it opens the suggested learning activity and then provides suggestions for activities students can complete, either online or in the classroom.
Each of the headings are linked to suggestions of how to accomplish each part of the unit with pros and cons about using that specific modality for the section.
Each section provides different online and classroom activities that would be helpful to accomplish the learning objectives for each of the sections, for example under “Set The Stage” in “Preparatory Exploration” there are many different types of activities that are suggested for this section to accomplish the learning objectives. Under each of the online topics, the detailed information about the activity provides suggested technology tools to accomplish the learning objective, such as using the Learning Management System (LMS) discussion tool, LMS Assignment tool, Padlet, or Wiki Pages.
Choosing the most effective tool is always a critical part of making an effective lesson or unit. One of the easiest ways to integrate technology that can be meaningful for students is to use the discussion tool embedded within your institution’s Learning Management System (LMS). Students can easily access the tool since it is associated with the class you are teaching and each student already has a log in to the system. Many times, the discussion board has a grading feature that allows for seamless interaction between the assignment and the gradebook feature. If you’re looking for something to meet learners where they are in a social media driven world and a bit more personal than a traditional discussion board, try a video discussion board such as Flipgrid or Voicethread.
Flipgrid is a video discussion board that is facilitated by the instructor, and students can respond to the topics or questions. Other students in the group are able to respond as well, but all through videos. One great benefit of Flipgrid is that it is a free tool. The faculty member must create a login to start a discussion board, but for students to respond, they just need the code linked to your grid. Students can download the app (Android or iOS) or can reply from any internet-connected device with a camera. Flipgrid even has its own lingo.
A great use for this tool would be to have students introduce themselves to the class at the beginning of the semester. Another option would be to gauge students' understanding of a topic from their responses. There are multiple ways that you can set up your grid for students to respond. A step-by-step guide from Sean Fahey and Karly Moura explains how to get started with Flipgrid. Flipgrid has also written about how it can help to build a community in the higher ed classroom.
Another option for video discussion is VoiceThread. VoiceThread allows students to interact asynchronously and leave comments on multimedia the instructor has posted. VoiceThread allows students to leave comments in multiple ways: voice (through a mic or via telephone), text, audio file, or video, including through the app. VoiceThread does have an associated cost: a single instructor license with 50 associated student accounts is $99 a year. Additional student accounts are $2 each. The single instructor account allows the instructor to edit access to all student accounts.
|
<urn:uuid:2d4de5b1-ae8f-4094-b607-cd2976c00896>
|
CC-MAIN-2023-14
|
https://www.scholarlyteacher.com/post/how-to-integrate-technology-tools-into-a-blended-learning-classroom-for-enhanced-student-learning
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949701.0/warc/CC-MAIN-20230401032604-20230401062604-00485.warc.gz
|
en
| 0.943714 | 1,141 | 3.046875 | 3 |
Susana's 5-page guide for using simple puppets to bring life into your language arts curriculum
Susana's 13 page lesson guide for using puppets in the context of conflict resolution
Timmy's Staff Development Workshop Proposal
Susana and Timmy offer a program focused on the links between folk music and literature. It is astounding how many folk songs have counterparts which can be found in the library. Timmy has compiled an extensive bibliography which includes books appropriate at every age level found in many school resource centers. Before the program, Timmy will contact the media specialist to design a specific repertoire of songs and stories which have links to specific books in that school's own library.
Students who participate in this presentation will leave with an excitement for reading that they have never felt before. This will be a program designed to draw out the richness of American folk music with the specific goal of drawing students into the library.
|
<urn:uuid:e08484bd-8d3a-463e-9051-88e94ea9ab03>
|
CC-MAIN-2017-43
|
http://www.timmyabellmusic.com/resources
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187822625.57/warc/CC-MAIN-20171017234801-20171018014801-00366.warc.gz
|
en
| 0.950574 | 193 | 3.078125 | 3 |
Understand PV in 3-digit numbers
In-depth Investigation: Round the Three Dice
Which 100s number is your 3-digit number closest to? Round the Three Dice from nrich.maths.org.
Revision of 2x, 5x and 10x tables: x and ÷
In-depth Investigation: Make the Multiples
Children draw on their knowledge of multiples of 2 and 5 to create these using digit cards 0 - 9.
Using place value to add/subtract
In-depth Investigation: Magic 147
Children arrange the ‘nearly numbers’ in a square so that each row and column adds to Magic 147.
Times tables; multiplication/division
In-depth Investigation: Crack the Code
Children work together to reason and think logically to crack a code. They practise mental multiple and division strategies.
Place value in money: add/subtract
In-depth Investigation 1: Three Coins
Using exactly three coins, children work out how many amounts can be made between £1 and £2.
In-depth Investigation 2: Money Bags
Children use their knowledge of counting in 10s to solve a problem involving money.
Using number facts to add/subtract
In-depth Investigation: Puzzling Squares
Children use reasoning skills to solve a number puzzle. They use numbers 0 to 7 to make a total of 10 on each side of a square.
|
<urn:uuid:ffe5a6f2-26f4-489a-9f67-260ee32d9b17>
|
CC-MAIN-2020-16
|
https://www.hamilton-trust.org.uk/resources/?query=&year=Y3&subject=&title=Mastery%3A+Reasoning+and+Problem-Solving
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370506870.41/warc/CC-MAIN-20200402080824-20200402110824-00177.warc.gz
|
en
| 0.840221 | 296 | 3.921875 | 4 |
In automotive engineering a multi-valve or multivalve engine is one where each cylinder has more than two valves. A multi-valve engine has better breathing and may be able to operate at higher revolutions per minute (RPM) than a two-valve engine, delivering more power.
Multi-valve engine design
A multi-valve engine design typically has three, four, or five valves per cylinder to achieve improved performance. Any four-stroke internal combustion engine needs at least two valves per cylinder: one for intake of air (and often fuel), and another for exhaust of combustion gases. Adding more valves increases valve area and improves the flow of intake and exhaust gases, thereby enhancing combustion, volumetric efficiency, and power output. Multi-valve geometry allows the spark plug to be ideally located within the combustion chamber for optimal flame propagation. Multi-valve engines tend to have smaller valves that have lower reciprocating mass, which can reduce wear on each cam lobe, and allow more power from higher RPM without the danger of valve bounce. Some engines are designed to open each intake valve at a slightly different time, which increases turbulence, improving the mixing of air and fuel at low engine speeds. More valves also provide additional cooling to the cylinder head. The disadvantages of multi-valve engines are an increase in manufacturing cost and a potential increase in oil consumption due to the greater number of valve stem seals. Some SOHC multi-valve engines (such as the Mazda B8-ME) use a single fork-shaped rocker arm to drive two valves (generally the exhaust valves) so that fewer cam lobes will be needed in order to reduce manufacturing costs.
- Three-valve cylinder head
This has a single large exhaust valve and two smaller intake valves. A three-valve layout allows better breathing than a two-valve head, but the large exhaust valve results in an RPM limit no higher than a two-valve head. The manufacturing cost for this design can be lower than for a four-valve design. The three-valve design was common in the late 1980s and early 1990s, and from 2004 it was the main valve arrangement used in Ford F-Series trucks and Ford SUVs. The Ducati ST3 V-twin had 3-valve heads.
- Four-valve cylinder head
This is the most common type of multi-valve head, with two exhaust valves and two similar (or slightly larger) inlet valves. This design allows breathing similar to a three-valve head, and as the small exhaust valves allow high RPM, it is very suitable for high power outputs.
- Five-valve cylinder head
Less common is the five-valve head, with two exhaust valves and three inlet valves. All five valves are similar in size. This design allows excellent breathing, and, as every valve is small, high RPM and very high power outputs are theoretically available. Although, compared to a four-valve engine, a five-valve design should have a higher maximum RPM, and the three inlet ports should give efficient cylinder-filling and high gas turbulence (both desirable traits), it has been questioned whether a five-valve configuration gives a cost-effective benefit over four-valve designs. The rise of direct injection may also make five-valve heads more difficult to engineer, as the injector must take up some space on the head. After making five-valve Genesis engines for several years, Yamaha has reverted to the cheaper four-valve design. Examples of five-valve engines include the various 1.8 L 20vT engines manufactured by Audi AG, later versions of the Ferrari Dino V8, and the relatively rare 1.6 L 20-valve Toyota 4A-GE.
- Beyond five valves
For a cylindrical bore and equal-area sized valves, increasing the number of valves beyond five decreases the total valve area. The following figures show the effective valve area for differing valve counts as a proportion of the cylinder bore area (a short calculation reproducing them follows the list). These percentages are based on simple geometry and do not take into account orifices for spark plugs or injectors, but these voids will usually be sited in the "dead space" unavailable for valves. Also, in practice, intake valves are often larger than exhaust valves in heads with an even number of valves per cylinder:
- 2 = 50%
- 3 = 64%
- 4 = 68%
- 5 = 68%
- 6 = 66%
- 7 = 64%
- 8 = 61%
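One arrangement that reproduces these figures is to assume the n valves sit in a single ring, each touching the bore wall and its neighbours, leaving the centre free for a spark plug or injector. The sketch below works through that geometry; it is an illustration of the percentages above, not a description of any particular cylinder head.

```cpp
// Total valve area as a fraction of bore area, assuming n equal circular
// valves arranged in one ring, each touching the bore wall and its
// neighbours. Truncated to whole percent, this reproduces the list above.
#include <cmath>
#include <cstdio>

int main() {
    const double pi = 3.141592653589793;
    for (int n = 2; n <= 8; ++n) {
        // For a ring of n equal circles inside a bore of radius R, the
        // valve radius r satisfies r / R = sin(pi/n) / (1 + sin(pi/n)).
        double s = std::sin(pi / n);
        double ratio = s / (1.0 + s);
        // Combined valve area relative to bore area: n * (r/R)^2.
        int percent = static_cast<int>(std::floor(n * ratio * ratio * 100.0));
        std::printf("%d valves: %d%% of bore area\n", n, percent);
    }
    return 0;
}
```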
Turbocharging and supercharging are technologies that also improve engine breathing, and can be used instead of, or in conjunction with, multi-valve engines. The same applies to variable valve timing and variable intake manifolds. Rotary valves also offer improved engine breathing and high rev performance but these were never very successful. Cylinder head porting, as part of engine tuning, is also used to improve engine performance.
Cars and trucks
The first motorcar in the world to have an engine with two overhead camshafts and four valves per cylinder was the 1912 Peugeot L76 Grand Prix race car designed by Ernest Henry. Its 7.6-litre monobloc straight-4 with modern hemispherical combustion chambers produced 148 bhp (19.5 hp per litre, or 0.32 bhp per cubic inch). In April 1913, on the Brooklands racetrack in England, a specially built L76 called "la Torpille" (torpedo) beat the world speed record of 170 km/h. Robert Peugeot also commissioned the young Ettore Bugatti to develop a GP racing car for the 1912 Grand Prix. This chain-driven Bugatti Type 18 had a 5-litre straight-4 with SOHC and three valves per cylinder (two inlet, one exhaust). It produced appr. 100 bhp (75 kW; 101 PS) at 2800 rpm (0.30 bhp per cubic inch) and could reach 99 mph (159 km/h). The three-valve head would later be used for some of Bugatti's most famous cars, including the 1922 Type 29 Grand Prix racer and the legendary Type 35 of 1924. Both Type 29 and Type 35 had a 100 bhp 2-liter SOHC 24-valve NA straight-8 that produced 0.82 bhp per cubic inch.
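Specific output is quoted throughout this article in a mix of units: bhp per cubic inch, hp per litre and kW per litre. The relationship is just a pair of constant factors; the small helper below, using the Peugeot L76 figures as its example, is only an illustration of the arithmetic.

```cpp
// Specific-output conversions: 1 bhp = 745.7 W, 1 cubic inch = 16.387 cc.
#include <cstdio>

// Convert bhp per cubic inch into kilowatts per litre.
double bhpPerCubicInchToKwPerLitre(double bhpPerCid) {
    const double kwPerBhp = 0.7457;
    const double litresPerCubicInch = 0.016387;
    return bhpPerCid * kwPerBhp / litresPerCubicInch;
}

int main() {
    // 1912 Peugeot L76: 148 bhp from 7.6 litres (about 464 cubic inches).
    double bhp = 148.0;
    double litres = 7.6;
    double cid = litres / 0.016387;

    std::printf("%.2f bhp per cubic inch\n", bhp / cid);   // ~0.32
    std::printf("%.1f hp per litre\n", bhp / litres);      // ~19.5
    std::printf("%.1f kW per litre\n",
                bhpPerCubicInchToKwPerLitre(bhp / cid));   // ~14.5
    return 0;
}
```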
Between 1914 and 1945
A.L.F.A. 40/60 GP was a fully working early racing car prototype made by the company now called Alfa Romeo. Only one example was built in 1914, which was later modified in 1921. This design of Giuseppe Merosi was the first Alfa Romeo DOHC engine. It had four valves per cylinder, 90-degree valve angle and twin-spark ignition. The GP engine had a displacement of 4.5-liter (4490 cc) and produced 88 bhp (66 kW) at 2950 rpm (14.7 kW/liter), and after modifications in 1921 102 bhp (76 kW) at 3000 rpm. The top speed of this car was 88-93 mph (140–149 km/h). It wasn't until the 1920s when these DOHC engines came to Alfa road cars like the Alfa Romeo 6C.
In 1916 US automotive magazine Automobile Topics described a four-cylinder, four-valve-per-cylinder car engine made by Linthwaite-Hussey Motor Co. of Los Angeles, CA, USA: "Firm offers two models of high-speed motor with twin intakes and exhausts.".
Early multi-valve engines in T-head configuration were the 1917 Stutz straight-4 and 1919 Pierce-Arrow straight-6 engines. The standard flathead engines of that day were not very efficient and designers tried to improve engine performance by using multiple valves. The Stutz Motor Company used a modified T-head with 16 valves, twin-spark ignition and aluminium pistons to produce 80 bhp (59 kW) at 2400 rpm from a 360.8 cid (5.8-liter) straight-4 (0.22 bhp per cubic inch). Over 2300 of these powerful early multi-valve engines were built. Stutz not only used them in their famous Bearcat sportscar but in their standard touring cars as well. In 1919 Pierce-Arrow introduced its 524.8 cid (8.6-liter) straight-6 with 24 valves. The engine produced 48.6 bhp (0.09 bhp per cubic inch) and ran very quietly, which was an asset to the bootleggers of that era.
Multi-valve engines continued to be popular in racing and sports engines. Robert M. Roof, the chief engineer for Laurel Motors, designed his multi-valve Roof Racing Overheads early in the 20th century. Type A 16-valve heads were successful in the teens, Type B was offered in 1918 and Type C 16-valve in 1923. Frank Lockhart drove a Type C overhead cam car to victory in Indiana in 1926.
Bugatti also had developed a 1.5-liter OHV straight-4 with four valves per cylinder as far back as 1914 but did not use this engine until after World War I. It produced appr. 30 bhp (22.4 kW) at 2700 rpm (15.4 kW/liter or 0.34 bhp/cid). In the 1920 Voiturettes Grand Prix at Le Mans driver Ernest Friderich finished first in a Bugatti Type 13 with the 16-valve engine, averaging 91.96 km/h. Even more successful was Bugatti's clean sweep of the first four places at Brescia in 1921. In honour of this memorable victory all 16-valve-engined Bugattis were dubbed Brescia. From 1920 through 1926 about 2000 were built.
Bentley used multi-valve engines from the beginning. The Bentley 3 Litre, introduced in 1921, used a monobloc straight-4 with aluminium pistons, pent-roof combustion chambers, twin spark ignition, SOHC, and four valves per cylinder. It produced appr. 70 bhp (0.38 bhp per cubic inch). The 1927 Bentley 4½ Litre was of similar engine design. The NA racing model offered 130 bhp (0.48 bhp per cubic inch) and the 1929 supercharged 4½ Litre (Blower Bentley) reached 240 bhp (0.89 bhp per cubic inch). The 1926 Bentley 6½ Litre added two cylinders to the monobloc straight-4. This multi-valve straight-6 offered 180-200 bhp (0.45-0.50 bhp per cubic inch). The 1930 Bentley 8 Litre multi-valve straight-6 produced appr. 220 bhp (0.45 bhp per cubic inch).
In 1931 the Stutz Motor Company introduced a 322 cid (5.3-liter) dual camshaft 32-valve straight-8 with 156 bhp (116 kW) at 3900 rpm, called DV-32. The engine offered 0.48 bhp per cubic inch. About 100 of these multi-valve engines were built. Stutz also used them in their top-of-the-line sportscar, the DV-32 Super Bearcat that could reach 100 mph (160 km/h).
The 1935 Duesenberg SJ Mormon Meteor's engine was a 419.6 cid (6.9-liter) straight-8 with DOHC, 4 valves per cylinder and a supercharger. It achieved 400 bhp (298.3 kW) at 5,000 rpm and 0.95 bhp per cubic inch.
The 1937 Mercedes-Benz W125 racing car used a supercharged 5.7-liter straight-8 with DOHC and four valves per cylinder. The engine produced 592-646 bhp (441.5-475 kW) at 5800 rpm and achieved 1.71-1.87 bhp per cubic inch (77.8-85.1 kW/liter). The W125 top speed was appr. 200 mph (322 km/h).
The 1967 Cosworth DFV F1 engine, a NA 3.0-liter V8 producing appr. 400 bhp (298 kW; 406 PS) at 9,000 rpm (101.9 kW/liter), featured four valves per cylinder. For many years it was the dominant engine in Formula One, and it was also used in other categories, including CART, Formula 3000 and Sportscar racing.
Debuting at the 1968 Japanese Grand Prix in its original 300 PS (221 kW; 296 hp) 3.0-liter version, the Toyota 7 engine participated in endurance races as a 5.0-liter (4,968 cc) non-turbo V8 with DOHC and 32 valves. It produced 600 PS (441 kW; 592 hp) at 8,000 rpm (88.8 kW/liter) and 55.0 kg⋅m (539 N⋅m; 398 lb⋅ft) at 6,400 rpm.
The British Jensen Healey of 1972 was among the first mass-produced cars using four valves per cylinder; it used a Lotus 907 belt-driven DOHC 16-valve 2-liter straight-4 producing 140 bhp (54.6 kW/liter, 1.20 bhp/cid). The Nissan S20 of 1969, a DOHC 24-valve straight-6 used in the Nissan Skyline GT-R and Nissan Fairlady Z432, preceded it in much smaller numbers.
The 1973 Triumph Dolomite Sprint used an in-house developed SOHC 16-valve 1,998 cc (122 ci) straight-4 that produced appr. 127 bhp (47.6 kW/liter, 1.10 bhp/cid).
The 1975 Chevrolet Cosworth Vega featured a DOHC multi-valve head designed by Cosworth Engineering in the UK. This 122-cubic-inch straight-4 produced 110 bhp (82 kW; 112 PS) at 5600 rpm (0.90 bhp/cid; 41.0 kW/liter) and 107 lb⋅ft (145 N⋅m) at 4800 rpm.
The 1976 Fiat 131 Abarth (51.6 kW/liter), 1976 Lotus Esprit with Lotus 907 engine (54.6 kW/liter, 1.20 bhp/cid), and 1978 BMW M1 with BMW M88 engine (58.7 kW/liter, 1.29 bhp/cid) all used four valves per cylinder. The BMW M88/3 engine was used in the 1983 BMW M635CSi and in the 1985 BMW M5.
The 1978 Porsche 935/78 racer used a twin turbo 3.2-liter flat-6 (845 bhp/630 kW@8,200 rpm; 784 Nm/578 ft.lbs@6,600 rpm). The water-cooled engine featured four valves per cylinder and output a massive 196.2 kW/liter. Porsche had to abandon its traditional aircooling because the multi-valve DOHC hampered aircooling of the spark plugs. Only two cars were built.
Ferrari developed their Quattrovalvole (or QV) engines in the 80s. Four valves per cylinder were added for the 1982 308 and Mondial Quattrovalvole, bringing power back up to the pre-FI high of 245 hp (183 kW). A very unusual Dino Quattrovalvole was used in the 1986 Lancia Thema 8.32. It was based on the 308 QV's engine, but used a split-plane crankshaft rather than the Ferrari-type flat-plane. The engine was constructed by Ducati rather than Ferrari, and was produced from 1986 through 1991. The Quattrovalvole was also used by Lancia for their attempt at the World Sportscar Championship with the LC2. The engine was twin-turbocharged and destroked to 2.65 litres, but produced 720 hp (537 kW) in qualifying trim. The engine was later enlarged to 3.0 litres, increasing power output to 828 hp (617 kW). The 1984 Ferrari Testarossa had a 4.9-liter flat-12 with four valves per cylinder. Almost 7,200 Testarossas were produced between 1984 and 1991.
The Mercedes-Benz 190E 2.3-16 with 16-valve engine debuted at the Frankfurt Auto Show in September 1983 after it set a world record at Nardo, Italy, recording a combined average speed of 154.06 mph (247.94 km/h) over the 50,000 km (31,000 mi) endurance test. The engine was based on the 2.3-liter 8-valve 136 hp (101 kW) unit already fitted to the 190- and E-Class series. Cosworth developed the DOHC light alloy cast cylinder head with four large valves per cylinder. In roadgoing trim, the 190 E 2.3-16 produced 49 hp (36 kW) and 41 ft•lbf (55 N•m) of torque more than the basic single overhead cam 2.3 straight-4 engine on which it was based offering 185 hp (138 kW) at 6,200 rpm (59.2 kW/liter) and 174 lb⋅ft (236 N⋅m) at 4,500 rpm. In 1988 an enlarged 2.5-liter engine replaced the 2.3-liter. It offered double valve timing chains to fix the easily snapping single chains on early 2.3 engines, and increased peak output by 17 bhp (12.5 kW) with a slight increase in torque. For homologation Evolution I (1989) and Evolution II (1990) models were produced that had a redesigned engine to allow for a higher rev limit and improved top-end power capabilities. The Evo II engine offered 235 PS (173 kW; 232 hp) from 2463 cc (70.2 kW/liter).
Saab introduced a 16-valve head to their 2.0-liter (1985 cc) straight-4 in 1984 and offered the engine with and without turbocharger (65.5 kW/liter and 47.9 kW/liter respectively) in the Saab 900 and Saab 9000.
The 2.0-liter Nissan FJ20 was one of the earliest straight-4 mass-produced Japanese engines to have both a DOHC 16-valve configuration (four valves per cylinder, two intake, two exhaust) and electronic fuel injection (EFI) when released in October 1981 in the sixth generation Nissan Skyline. Peak output was 148 hp (110 kW) at 6,000 rpm and 133 lb⋅ft (180 N⋅m) at 4,800 rpm. The FJ20 was also offered with a turbocharger, producing 188 hp (140 kW) at 6,400 rpm and 166 lb⋅ft (225 N⋅m) at 4,800 rpm.
Following Nissan's lead, the 1.6-liter (1,587 cc) Toyota 4A-GE engine was released in 1983. The cylinder head was developed by Yamaha Motor Corporation and was built at Toyota's Shimayama plant. While originally conceived of as a two-valve design, Toyota and Yamaha changed the 4A-GE to a four-valve after a year of evaluation. It produced 115-140 bhp/86-104 kW@6,600 rpm (54.2-65.5 kW/liter) and 148 Nm/109 lbft@5,800 rpm. To compensate for the reduced air speed of a multi-valve engine at low rpm, the first-generation engines included the T-VIS feature.
In 1986 Volkswagen introduced a multi-valved Golf GTI 16V. The 16-valve 1.8-liter straight-4 produced 139 PS (102 kW; 137 bhp) or 56.7 kW/liter, almost 25% up from the 45.6 kW/liter for the previous 8-valve Golf GTI engine.
The GM Quad 4 multi-valve engine family debuted early 1987. The Quad 4 was the first mainstream multi-valve engine to be produced by GM after the Chevrolet Cosworth Vega. The NA Quad 4 achieved 1.08 bhp per cubic inch (49.1 kW/liter). Such engines soon became common as Japanese manufacturers adopted the multi-valve concept.
The 1975 Honda Civic introduced Honda's 1.5-liter SOHC 12-valve straight-4 engines. Nissan's 1988–1992 SOHC KA24E engine had three valves per cylinder (two intakes, one exhaust) as well. Nissan upgraded to the DOHC after 1992 for some of the sports cars, including the 240SX.
In 1988, Renault released a 12 valve version of its Douvrin 4 cylinder 2.0l SOHC.
Mercedes and Ford produce three-valve V6 and V8 engines, Ford claiming an 80% improvement in high RPM breathing without the added cost of a DOHC valve train. The Ford design uses one spark plug per cylinder located in the centre, but the Mercedes design uses two spark plugs per cylinder located on opposite sides, leaving the centre free to add a direct-to-cylinder fuel injector at a later date.
The 1989 Citroën XM was the first 3-valve diesel-engined car.
The 1993 Mercedes-Benz C-Class (OM604 engine) was the first 4-valve diesel-engined car.
In April 1988 an Audi 200 Turbo Quattro powered by an experimental 2.2-liter turbocharged 25-valve straight-5 rated at 478 kW/650 PS@6,200 rpm (217.3 kW/liter) set two world speed records at Nardo, Italy: 326.403 km/h (202.8 mph) for 1,000 km (625 miles) and 324.509 km/h (201.6 mph) for 500 miles.
Yamaha designed the five-valve cylinder head for the Toyota 4A-GE 20V 1991 Silvertop and 1995 Blacktop engine used in some Toyota Corolla. Yamaha also developed five-valve Formula One engines, the 1989 OX88 V8, 1991 OX99 V12, 1993 OX10 V10 and 1996 OX11 V10, but none of these were very successful. For their YZ250F and YZ450F motocross bikes, Yamaha developed five-valve engines.
Although most multi-valve engines have overhead camshafts, either SOHC or DOHC, a multivalve engine may be a pushrod overhead valve engine (OHV) design. Chevrolet has revealed a three-valve version of its Generation IV V8 which uses pushrods to actuate forked rockers, and Cummins makes a four-valve OHV straight six diesel, the Cummins B Series (now known as ISB). Ford also uses pushrods in its 6.7L Power Stroke engine using four pushrods, four rockers and four valves per cylinder. The Harley-Davidson Milwaukee Eight engine, introduced in 2016, uses four-valves per cylinder driven by pushrods and a single in-block camshaft.
Examples of motorcycles with multivalve-engines include:
- 1914 Peugeot Grand Prix racer, 500 cc DOHC 8-valve parallel twin (top speed over 122 km/h).
- 1915 Indian board track racer, 61-cid (1.0-liter) OHV 8-valve V-twin.
- 1921 Triumph Ricardo 499 cc OHV 4-valve single-cylinder machine, copied by Rudge-Whitworth with their 1924 Rudge Four 350 cc OHV 4-valve single-cylinder machine, and 1929 Rudge Ulster 500 cc OHV 4-valve single-cylinder machine.
- 1923 British Anzani 1098cc OHV 8-Valve V-twin, used in Morgan three-wheelers and McEvoy motorcycles
- 1972 Honda XL250 "pent-roof" SOHC 4-valve single-cylinder machine (the first mass-produced 4-valve motorcycle).
- 1973 Yamaha TX500 "pent-roof" 500cc DOHC 8-valve parallel-twin (the first mass-produced DOHC 4-valve per cylinder motorcycle)
- 1977 Honda CB400 SOHC 6-valve parallel-twin.
- 1978 Honda CX500, a 498 cc SOHC, pushrod actuated OHV, 4-valve per cylinder V-twin; 1982 CX500 Turbo was the first factory multi-valve turbocharged motorcycle.
- 1978 Honda CBX1000, a 1,047 cc DOHC 24-valve straight-6 (105 bhp (78 kW; 106 PS)).
- 1979 -1992: Honda NR series, racing & production motorcycles with 8-valve-per-cylinder "oval-piston" V4 engines (actually 32-valve V8s with adjoining cylinders merged).
- 1985 Yamaha FZ750 motorcycle with DOHC 20-valve straight-4 Yamaha "Genesis" engine.
- 1991-2010 Yamaha TDM and TRX parallel twin motorcycles with 5 valves per cylinder
- 1998–2006 Yamaha YZF-R1 superbike with redesigned (more compact) "Genesis" engine. 2006 model delivered 180 bhp (134 kW; 182 PS) at 12,500 rpm (130.3 kW/liter).
The Yamaha XT660 single once had five valves per cylinder, but a subsequent redesign reduced the valve-count to four. The Aprilia Pegaso 650 single also started out with five valves, but current models only have four. The jointly developed BMW F650 single always had four valves.
Ettore Bugatti designed several multi-valve aircraft engines. The 1916 Bugatti U-16 1484.3 cid (24.32 L) SOHC 16-cylinder, consisting of two parallel 8-cylinder banks, offered 410 bhp (305 kW) at 2,000 rpm (12.5 kW/liter or 0.28 bhp/cid). Each cylinder had two vertical inlet valves and a single vertical exhaust valve, all driven by rocking levers from the camshaft. Other advanced World War I aircraft engines, such as the 1916 Maybach Mb.IVa that produced 300 bhp (224 kW; 304 PS) at altitude and the 1916 Benz Bz.IV with aluminium pistons and the 1918 Napier Lion (a 450 bhp 24-liter DOHC 12-cylinder), used two intake valves and two exhaust valves.
Long after the King-Bugatti "U-16" aviation engine used them, shortly before World War II, the Junkers aviation firm began production of the Third Reich's most-produced military aviation engine (68,000+ produced), the 1936-designed, 35-litre displacement, inverted-V12, liquid-cooled Junkers Jumo 211, which used a three-valve cylinder head design inherited from Junkers' first inverted V12 design, the 1932-origin Junkers Jumo 210 — this was carried through into the later, more powerful 1940-origin Junkers Jumo 213, produced through 1945, the production versions of which (the Jumo 213A and -E subtypes) retained the Jumo 211's three-valve cylinder head design.
An example of a modern multi-valve piston-engine for small aircraft is the Austro Engine AE300. This liquid-cooled turbocharged 2.0-liter (1,991 cc) DOHC 16-valve straight-4 diesel engine uses common rail direct fuel injection and delivers 168 bhp (125 kW; 170 PS) at 3,880 rpm (62.0 kW/liter). The propeller is driven by an integrated gearbox (ratio 1.69:1) with torsional vibration damper. Total power unit weight is 185 kg (408 lb).
In 1905 car builder Delahaye had experimented with a DOHC marine racing engine with six valves per cylinder. This Delahaye Titan engine was a massive 5190 cid (85.0-liter) four-cylinder that produced 350 bhp (0.07 bhp/cid). It allowed the motor boat Le Dubonnet piloted by Emile Dubonnet to set a new world's speed record on water, reaching 33.80 mph (54.40 km/h) on the lake at Juvisy, near Paris, France.
An example of modern multi-valve engines for small boats is the Volvo Penta IPS Series. These joystick-operated seawater-cooled inboard diesel engines use combined charging (turbo and supercharger, except IPS450) with aftercooler, common rail fuel injection and DOHCs with hydraulic 4-valve technology. Propshaft power ranges from 248 to 850 bhp (185 to 634 kW; 251 to 862 PS) (highest efficiency 59.7 kW/liter for IPS400 3.7-liter straight-4 diesel). Multiple units can be combined.
- Kevin Clemens. "An Echo of the Past: The history and evolution of twin-cam engines (European Car, February, 2009)". Archived from the original on 2013-07-02. Retrieved 2011-12-23.
- Dan McCosh. "Auto Tech 88: 4-valves (Popular Science, May 1988, pp. 24, 37-40)". Retrieved 2011-12-23.
- In direct injection engines - such as diesels and later petrol engines - fuel is delivered to the chamber directly via the injector rather than through a valve. In carburetted engines and indirect-injection engines the fuel is mixed with the air outside of the cylinder and both enter together via the intake valve.
- "Alfa Designers". velocetoday.com. Retrieved 2011-12-30.
- Mort Schultz (January 1985). Engines: A Century of Progress (Popular Mechanics, Jan 1985, pp. 95-97, 120, 122). Retrieved 2011-12-26.
- Sports Car Market. "1918 Stutz Series S Roadster (Sportscarmarket.com, Friday, 31 March 2000)". Archived from the original on 16 January 2012. Retrieved 2011-12-23.
- Classic Car Database. "1918 Stutz S Series Roadster Standard Specifications (Classic Car Database)". Retrieved 2011-12-23.
- PaulFreehill. "16-valve Stutz block (YouTube.com video, May 6, 2010)". Retrieved 2011-12-23.
- RM Auctions. "1919 Pierce-Arrow Model 48 Dual-Valve Four-Passenger (RM Auctions, Phoenix, AZ, USA)". Retrieved 2011-12-23.
- Classic Car Database. "1919 Pierce Arrow 48-B-5 Series Touring Standard Specifications (Classic Car Database)". Retrieved 2011-12-23.
- Conceptcarz.com. "1919 Pierce Arrow Model 48 Specifications (Conceptcarz.com)". Retrieved 2011-12-23.
- Northwest Vintage Speedsters. "Roof Alphabetical Index and Images (nwvs.org)". Retrieved 2011-12-23.
- Model T Ford Club of America. "Robert M. Roof (MTFCA.com)". Retrieved 2011-12-23.
- Sports Car Market. "1921 Peugeot 3-liter Racer (Sportscarmarket.com, 30 June 1999)". Archived from the original on 24 October 2011. Retrieved 2011-12-27.
- Donald Osborne. "Honoring the Original American Sports Cars (New York Times, August 12, 2011)". Retrieved 2011-12-23.
- Classic Car Database. "1932 Stutz CD DV 32 Series Super Bearcat Standard Specifications (Classic Car Database)". Retrieved 2011-12-23.
- Richard Owen. "1935 Duesenberg SJ Mormon Meteor (Supercars.net)". Retrieved 2011-12-22.
- Daniel Vaughan. "1935 Duesenberg SJ Special Mormon Meteor (Conceptcarz.com, March 2011)". Retrieved 2011-12-22.
- Wikimedia Commons. "1975 Cosworth Vega advertisement (Motor Trend Magazine, 1975)". Retrieved 2011-12-23.
- Mike Allen (February 1988). Quad 4: The Inside Story (Popular Mechanics, February 1988, pp.62-65). Retrieved 2011-12-23.
- D. Sherman (January 1990). Five valves for Audi (Popular Science, Jan 1990, pp. 35, 37). Retrieved 2011-12-30.
- Brunn Racing. "AUDI 200 N6000 - WORLD RECORD PROTOTYPE (Brunnracing.com, Dec 2011)". Retrieved 2011-12-30.
- "A baby that sprints: tiny Mitsubishi engine blasts off with five valves". Ward's Auto World (April 1989).
- Michael Knowling. "Mighty Minica ZZ-4 (Autospeed Issue 353, 19 October 2005)". Retrieved 2011-12-26.
- Ermanno Cozza & George Lipperts. "Maserati Sei Valvole (Enrico's Maserati Pages, 2002–2004)". Retrieved 2011-12-26.
- Bennett, Jay (2016-08-29). "Milwaukee Eight Multi-Valve". Popular Mechanics. HEARST DIGITAL MEDIA. Retrieved 16 August 2017.
- Cook, Marc. "HD Pushrods". Motorcyclist Online. Bonnier Corporation. Retrieved 16 August 2017.
- Yves J. Hayat & Bernard Salvat. "Peugeot Racers - Part 1 (The Best Motorcycle, Jan 26, 2010)". Retrieved 2011-12-27.
- Yesterdays.nl. "1915 Indian 8 Valve Boardtrack Racer (YouTube.com video, Mar 18, 2010)". Retrieved 2011-12-27.
- "YAMAHA TX500/750: A QUESTION OF BALANCE". Tobyfolwick.com. Retrieved 2015-12-23.
- German language illustration of Jumo 211 three-valve design
- "Flight Magazine, September 9, 1937". flightglobal.com. Flightglobal Archive. September 9, 1937. p. 265. Retrieved March 15, 2017.
At the recent international meeting at Zürich, several of the successful German machines were fitted with the new Junkers 210 petrol engine...Three valves per cylinder are provided, two inlets and one exhaust, operated by push rods and rockers from a single camshaft.
- Culy, Doug (April 4, 2012). "The Junkers Jumo 213 Engine". enginehistory.org. Aircraft Engine Historical Society. Archived from the original on December 21, 2016. Retrieved March 15, 2017.
The Jumo 213 had a three-valve head, but a four-valve head was in development for the “J” version. However, the Jumo 213A is documented as itself having superior high altitude performance at that particular point in time, although the DB 603 was later developed with equal or better features.
- Gérald Guétat (1998-01-10). Classic Speedboats 1916–1939 (Motorbooks International, 1997, p.16, ISBN 0-7603-0464-5). ISBN 9780760304648. Retrieved 2011-12-23.
- Kinematic Models for Design Digital Library (KMODDL) - Movies and photos of hundreds of working mechanical-systems models at Cornell University. Also includes an e-book library of classic texts on mechanical design and engineering.
|
<urn:uuid:4392225f-552a-4117-92c8-cf818e0d0fa0>
|
CC-MAIN-2020-29
|
http://prewar.mgcc.info/r-type/?rdp_we_resource=https%3A%2F%2Fen.wikipedia.org%2Fwiki%2FMulti-valve
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655897844.44/warc/CC-MAIN-20200709002952-20200709032952-00395.warc.gz
|
en
| 0.903739 | 7,574 | 3.609375 | 4 |
Treatment for gastroesophageal reflux disease (GERD) usually consists of three stages. The first two stages include taking medications and making diet and lifestyle changes. The third stage is surgery. Surgery is generally used only as a last resort in very severe cases of GERD that involve complications.
Most people will benefit from first-stage treatments by adjusting how, when, and what they eat. However, diet and lifestyle adjustments alone may not be effective for some. In these cases, doctors may recommend using medications that slow or stop acid production in the stomach.
Proton pump inhibitors (PPIs) are one type of medication that can be used to reduce stomach acid and relieve GERD symptoms. Other medications that can treat excess stomach acid include H2 receptor blockers, such as famotidine (Pepcid AC) and cimetidine (Tagamet). However, PPIs are usually more effective than H2 receptor blockers and can ease symptoms in the majority of people who have GERD.
PPIs work by blocking and reducing the production of stomach acid. This gives any damaged esophageal tissue time to heal. PPIs also help prevent heartburn, the burning sensation that often accompanies GERD. PPIs are one of the most powerful medications for relieving GERD symptoms because even a small amount of acid can cause significant symptoms.
PPIs help to decrease stomach acid over a four to 12-week period. This amount of time allows for proper healing of the esophageal tissue. It may take longer for a PPI to ease your symptoms than an H2 receptor blocker, which usually starts reducing stomach acid within one hour. However, symptom relief from PPIs will generally last longer. So PPI medications tend to be most appropriate for those with GERD.
PPIs are available both over-the-counter and by prescription. Over-the-counter PPIs include:
- lansoprazole (Prevacid 24 HR)
- omeprazole (Prilosec OTC)
Lansoprazole and omeprazole are also available by prescription, as are the following PPIs:
- esomeprazole (Nexium)
- pantoprazole (Protonix)
- rabeprazole (AcipHex)
- dexlansoprazole (Dexilant)
Another prescription drug known as Vimovo is also available for treating GERD. It contains a combination of esomeprazole and naproxen.
Prescription-strength and over-the-counter PPIs seem to work equally well in preventing GERD symptoms.
Talk to your doctor if GERD symptoms don’t improve with over-the-counter or prescription PPIs within a few weeks. You could possibly have a Helicobacter pylori (H. pylori) bacterial infection. This type of infection requires more complex treatment. However, the infection doesn’t always cause symptoms. When symptoms do develop, they’re very similar to GERD symptoms. This makes it hard to distinguish between the two conditions. Symptoms of an H. pylori infection may include:
- frequent burping
- loss of appetite
If your doctor suspects you have an H. pylori infection, they will run various tests to confirm the diagnosis. Then they will determine an effective treatment plan.
PPIs have traditionally been considered to be safe and well-tolerated medications. However, research now suggests that certain risks may be involved with long-term use of these drugs.
A recent study found that people who use PPIs long-term have less diversity in their gut bacteria. This lack of diversity puts them at an increased risk for infections, bone fractures, and vitamin and mineral deficiencies. Your gut contains trillions of bacteria. While some of these bacteria are “bad,” most of them are harmless and help in everything from digestion to mood stabilization. PPIs may disrupt the balance of bacteria over time, causing the “bad” bacteria to overtake the “good” bacteria. This can result in illness.
Additionally, the U.S. Food and Drug Administration (FDA) issued a public safety announcement in 2011 that stated long-term use of prescription PPIs might be associated with low magnesium levels. This can result in serious health problems, including muscle spasms, irregular heartbeat, and convulsions. In about 25 percent of the cases that the FDA reviewed, magnesium supplementation alone didn’t improve low serum magnesium levels. As a result, PPIs had to be discontinued.
Yet the FDA emphasizes that there’s little risk of developing low magnesium levels when using over-the-counter PPIs as directed. Unlike prescription PPIs, over-the-counter versions are sold at lower doses. They are also generally intended for a two-week course of treatment no more than three times a year.
Despite the potential side effects, PPIs are usually a very effective treatment for GERD. You and your doctor can discuss the potential risks and determine whether PPIs are the best option for you.
When you stop taking PPIs, you may experience an increase in acid production. This increase can last for several months. Your doctor may gradually wean you off these drugs to help prevent this from happening. They may also recommend taking the following steps to reduce your discomfort from any GERD symptoms:
- eating smaller portions
- consuming less fat
- avoiding lying down for at least two hours after eating
- avoiding snacks before bedtime
- wearing loose clothing
- elevating the head of the bed about six inches
- avoiding alcohol, tobacco, and foods that trigger your symptoms
Make sure to consult with your doctor before you stop taking any prescribed medications.
|
<urn:uuid:7a237673-cd17-4a8c-83f1-2d1e43c28dab>
|
CC-MAIN-2023-14
|
https://www.healthline.com/health/gerd/proton-pump-inhibitors
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948976.45/warc/CC-MAIN-20230329120545-20230329150545-00248.warc.gz
|
en
| 0.934258 | 1,207 | 2.609375 | 3 |
The cyst of Chilomastix mesnili, shown on the right, is pear-shaped and measures 4 to 6 µm wide and 6 to 10 µm long. There is a single nucleus and a curved cytostomal fibril called the shepherd's crook. The image at right is a trichrome stain (1000x).
The trophozoites of C. mesnili are also pear-shaped and measure from 6 to 24 µm in length and 4 to 8 µm wide. The single nucleus usually has a prominent karyosome. The anterior flagella are difficult to see. The oral groove (cytostome) is sometimes seen near the nucleus. The image on the left is an iron hematoxylin stain (1000x).
The image at right is a wet mount of a trophozoite viewed by phase-contrast microscopy. The image below left is an iodine-stained wet mount of a cyst. The image at right below is an unstained wet mount of a cyst; the plane of focus is on the nucleus. Images courtesy of Gustavo Gini.
|
<urn:uuid:4cc9eeeb-8946-4c66-b2b0-3a693dd6b69c>
|
CC-MAIN-2014-23
|
http://www.udel.edu/mls/dlehman/medt372/C-mesnili.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997883466.67/warc/CC-MAIN-20140722025803-00205-ip-10-33-131-23.ec2.internal.warc.gz
|
en
| 0.918521 | 251 | 2.515625 | 3 |
Germany on Friday legalized same-sex marriage, joining nearly two dozen governments around the world that have introduced legislation allowing gays and lesbians to marry.
This comes days after Chancellor Angela Merkel said she would allow her conservative lawmakers to follow their conscience in the vote.
Under the bill, which was strongly supported by leftist parties, the German legal code was changed to say “marriage is entered into for life by two people of different or the same sex”.
The reform grants full marital rights, including child adoption, to gay and lesbian couples, who in Germany have been allowed since 2001 to enter so-called civil unions.
The lower house passed the bill by a margin of 393-226. The upper house has already approved it, and the measure is expected to enter into force before the end of the year.
The election-year bill was pushed by Merkel’s leftist rivals who pounced on a U-turn she made in an on-stage interview on Monday. The manoeuvre left many of her conservative lawmakers fuming.
Merkel, who voted against the winning bill, said marriage should be between a man and a woman. She hoped, however, that the parliament’s approval would lead to more social cohesion.
“For me, marriage in the Basic Law is marriage between a man and a woman and that is why I did not vote in favour of this bill today,” she told reporters moments after the vote.
While fielding questions from newsmen, Merkel said, “I hope that the vote today not only promotes respect between different opinions but also brings more social cohesion and peace.”
|
<urn:uuid:4964b747-6f47-4f82-a86c-432b493f6d39>
|
CC-MAIN-2020-24
|
http://nationaldailyng.com/germany-latest-haven-for-gay-couples/
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347435987.85/warc/CC-MAIN-20200603175139-20200603205139-00028.warc.gz
|
en
| 0.979912 | 330 | 2.5625 | 3 |
Drake Music’s work on the Youth Music funded Exchanging Notes project is mostly based around classroom support and peri-style sessions at Belvue School (a SEN/D specialist school in Ealing).
Last summer I was delighted to be asked to incorporate a small research and development project, drawing on tech ideas from our DMLab hacker/maker community, and creating techniques that could feed into our other activities.
Fresh from working on the first version of the KellyCaster guitar with John Kelly, this was an exciting opportunity to bring some inspiration from our collaborative approach to the school environment.
The result was a musical Lazy Susan – a freestanding instrument with an embedded computer, incorporating sound, haptic feedback, light, and various sensors for interaction.
WATCH [Vine: the Lazy Susan in Action]
What is haptic feedback? Touch feedback to the end user. You know how your phone sometimes vibrates a tiny bit when you tap a button? That’s haptics at work.
Over the course of ten weeks, I worked with pupils from a KS3 class, alongside their teacher, to develop their own accessible instrument.
Starting with a foundation of play, and aiming to keep the young musicians’ ideas and musical identities at the heart of the sessions, we gradually built up an idea of preferences and ways that we could interact.
Most of our music-making developed out of some of the class’s favourite warm-up activities –
- passing a flashing ball around to work on turn-taking
- singing and signing with Makaton
- dancing to the school’s Sound Beam and sounds DJ’d from an iPad
- recording and playing animal sounds with switches to make our own guessing games
We gathered a small collection of instruments out of this process (some bespoke, some borrowed from the music cupboard), refining ideas in a running conversation based more on playing and moving in the moment than on verbal dialogue.
It was vital to create an inclusive atmosphere in the sessions as well as building accessibility into the instrument, recognising that everyone can experience and play music differently.
Some of the young musicians in our group benefitted from a multi-sensory approach, using light and vibrations to see and touch the music, rather than insisting that we listen with only our ears.
Our approach to technology began with resources that the school already had in place, such as iPads, switches, and the SoundBeam.
The school had already bought a Bare Conductive Touch Board with conductive paint, which gave us a good starting point to work with some custom interfaces, in the form of interactive pictures.
We did this by creating conductive “underlays” that we could disguise or decorate with a different picture each week – conductive paint and foil don’t rely on direct touch, so they still work well when hidden under thin card or laminated paper.
We built light into our sessions through a set of affordable disco lights linked to a computer, so that movement and touch created cues that were turned into colour.
Vibration speakers converted bassier sounds into buzzes and rumbles that could be felt by touching the surfaces.
The idea for approaching turn-taking in our design came through conversations with teachers within the school who said they had dreamed of setting up instruments and technology in ways that could be rotated and passed around; a quick trip to Ikea later, the first prototype was born!
Sounds could be played either by touching the instrument or leaning into it. We also incorporated a mode which required the player to keep their hand on the board while moving, so that they could choose notes with one hand while turning them on and off with another. Although the kind of coordination and reach required meant it wasn’t accessible to everyone, this added a dimension in terms of physical feedback. By making contact with the instrument while waving, it was possible to experience the benefits of the motion sensor while keeping a physical connection with the vibrations.
Since it was essential not to have any wires hanging out while the Lazy Susan was turned around, the project was compressed into a tupperware box, with battery power for the speaker and other components. We decorated the box together with bright colours and animals from our sessions.
We spent the next few weeks learning to play the instrument as one of the options in our group: devising songs, improvising, having musical conversations, and building in that all-important turn-taking moment.
Spinning the board around and passing it to the next person gradually became a familiar gesture to everyone, an essential action in playing the Lazy Susan.
The unconventional shape of the instrument helped establish that there was no right or wrong way to play, as long as the turn-taking element and mutual respect for others were present. The young musicians developed ways to play with little verbal guidance from adults in the room, with a focus on musical interactions, modelling, and responding with physical gestures.
Through this process we found many new ways of playing: touching the conductive areas and counting or singing, leading others in call and response, and leaning in to blow onto the sensor. Although in this latter case the blowing didn’t control the sound directly, the gesture made sense to the young musician in a more concrete way, and became a firm favourite!
So what did the group of young musicians think?
In general the instrument went down well, and was a popular request amongst our options during the sessions.
It was interesting to note that some of those in our group who played with the most focus and enthusiasm ended up giving the most critical feedback. One of our most rewarding moments came through getting to the bottom of why one of our most active players consistently said the instrument was “rubbish” during our feedback time, apparently contradicting himself.
Towards the end of the project, having remembered to ask the all important question “why”, we came to an understanding that the instrument “should play pop music”, and as a result had a great time playing together with drum sounds during our final session. In future versions we hope to incorporate samples as well for maximum flexibility.
The design aesthetic and sounds might not ultimately have been to everyone’s taste, but it felt clear from our interactions that everyone in the room felt a sense of ownership of the instrument and our music-making activities.
The routines and sense of communication that had developed felt just as important as any of the new technology. The class teacher noted that many of the young people involved had shown a marked improvement in turn-taking and listening, both in and out of sessions.
It’s been an interesting endeavour – not least striking a balance between trying new ideas in a reliable enough way to keep everyone engaged, and making sure that the young musicians’ opinions were always actively engaged in this process.
With hindsight (and having worked on a few similar projects since), it would have been good to prioritise ways that pupils and teaching staff could operate the technology independently between sessions, with more opportunities for play outside our allocated contact time.
Often the most effective workshops I’ve encountered in this area have been based on technology that can be thrown together in five minutes, with a focus on hands-on making, sculpting, and experimentation that would be equally at home in a messy art class.
Bespoke music technology is rapidly becoming affordable and accessible, with very limited knowledge required to make something exciting.
Sometimes, as in this case, the most rewarding work lies in finding a good point of entry with a focus on interaction, where hopefully the technology starts to feel transparent.
Bonus geek-out section: under the hood
The Lazy Susan was created with affordability in mind; the resources used totalled under £100. Our first prototype wasn’t exactly built to last, but it has served well for a term and is still available for anyone to play in the music classroom, with a more permanent version on its way.
- Cardboard owl or similar animal with holes cut out for the distance sensor (essential)
- A Lazy Susan from your favourite furniture store
- A Bare Conductive Touch Board set to onboard MIDI mode (plus a bit of custom code for the distance sensor, lights, and added bass notes; see the sketch after this list)
- Several NeoPixels connected to pins on the Touch Board to generate coloured lights
- A vibration speaker (using a headphone splitter, audible sounds can be panned to a regular speaker on one channel, while vibrations — the same notes duplicated in lower octaves — are panned to the other).
- An ultrasonic rangefinder (the HC-SR04 is a great budget sensor)
- A power bank designed for mobile phones (watch out – some recent models shut down after a while if unused!)
- Tupperware for the housing, with a combination of hot glue and Sugru to hold everything in place.
- A MIDI socket and cable to control an iPad (optional)
- A few weeks of exploration, dialog, and play!
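For anyone curious how little code the "custom" part actually needs, here is a minimal, illustrative Arduino-style sketch showing how a distance sensor such as the HC-SR04 could be mapped to MIDI notes and NeoPixel colours. To be clear, this is not the Lazy Susan's actual firmware: the pin numbers, note range, raw-serial MIDI output, and the octave-doubling for the bass/vibration layer are all assumptions made for the example, and the touch electrodes themselves are handled by the Touch Board's own onboard MIDI mode rather than by this code.

```cpp
// Illustrative sketch only: maps HC-SR04 distance readings to MIDI notes and a NeoPixel "cursor".
// Pin numbers and the choice of serial port for MIDI are assumptions and depend on the board/wiring.

#include <Adafruit_NeoPixel.h>

const int TRIG_PIN   = 11;  // HC-SR04 trigger (assumed wiring)
const int ECHO_PIN   = 12;  // HC-SR04 echo (assumed wiring)
const int PIXEL_PIN  = 6;   // NeoPixel data line (assumed wiring)
const int NUM_PIXELS = 8;

Adafruit_NeoPixel pixels(NUM_PIXELS, PIXEL_PIN, NEO_GRB + NEO_KHZ800);

// A small fixed scale so whatever distance the player chooses "sounds right"
// alongside the notes coming from the touch electrodes.
const byte SCALE[NUM_PIXELS] = {60, 62, 64, 67, 69, 72, 74, 76};
byte lastNote = 0;

long readDistanceCm() {
  // Standard HC-SR04 timing: a 10 us trigger pulse, then measure the echo pulse length.
  digitalWrite(TRIG_PIN, LOW);
  delayMicroseconds(2);
  digitalWrite(TRIG_PIN, HIGH);
  delayMicroseconds(10);
  digitalWrite(TRIG_PIN, LOW);
  long duration = pulseIn(ECHO_PIN, HIGH, 30000UL);  // give up after ~30 ms
  return duration / 58;                              // roughly 58 us per cm, round trip
}

void noteOn(byte note, byte velocity) {
  Serial.write(0x90);  // note-on, channel 1
  Serial.write(note);
  Serial.write(velocity);
}

void noteOff(byte note) {
  Serial.write(0x80);  // note-off, channel 1
  Serial.write(note);
  Serial.write((byte)0);
}

void setup() {
  pinMode(TRIG_PIN, OUTPUT);
  pinMode(ECHO_PIN, INPUT);
  Serial.begin(31250);  // standard MIDI baud rate
  pixels.begin();
}

void loop() {
  long cm = readDistanceCm();
  if (cm > 0 && cm < 40) {  // only react to a hand (or cardboard owl) nearby
    int idx = constrain(map(cm, 0, 40, 0, NUM_PIXELS - 1), 0, NUM_PIXELS - 1);
    byte note = SCALE[idx];
    if (note != lastNote) {
      if (lastNote) { noteOff(lastNote); noteOff(lastNote - 24); }
      noteOn(note, 100);
      noteOn(note - 24, 100);  // doubled two octaves down for the bass/vibration layer
      lastNote = note;
    }
    pixels.clear();
    pixels.setPixelColor(idx, pixels.Color(0, 40, 80));  // the light follows the player's hand
    pixels.show();
  } else if (lastNote) {
    noteOff(lastNote);
    noteOff(lastNote - 24);
    lastNote = 0;
    pixels.clear();
    pixels.show();
  }
  delay(50);
}
```

In practice, the fun part is not the code but deciding, together with the young musicians, what each gesture should mean.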
|
<urn:uuid:ebcc61d5-7287-4df1-a21a-ccfd206a25df>
|
CC-MAIN-2020-29
|
https://www.drakemusic.org/blog/charles-matthews/a-musical-lazy-susan-research-and-development-at-belvue-school/
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657147917.99/warc/CC-MAIN-20200714020904-20200714050904-00228.warc.gz
|
en
| 0.965523 | 1,868 | 2.6875 | 3 |
From the deck of the world's largest container ship, Tracey Logan asks how tomorrow's vessels will compare with the ocean-going behemoths of today in the BBC World Service's Discovery programme.
The current expansion in world trade, particularly trade with China, is causing a rethink of the way goods are transported across the world's oceans.
The amount of goods coming from China is booming
Almost all goods traded worldwide travel by sea, and experts predict that trade will triple by the year 2020. This will require bigger ships, faster ships and greener ships.
Professor Chris Hodge, chairman of next year's World Maritime Technology Conference in London, explained the driving forces behind the search for new shipping technologies.
"Shipping is already huge. Perhaps 90% of the world's trade goes by sea and it's only going to increase," he said.
"Perhaps we'll see doubling of world shipping in our lifetime. The needs of the environment, the need to reduce manpower and costs, all present technological challenges."
The world's largest container ship, the MSC Pamela, launched earlier this year. It carries 9,200 container boxes between China and Western Europe.
Ships this big would have been unthinkable just a few years ago, according to David Tozer of Lloyd's Register, which keeps records of all merchant shipping.
But ultra-large container ships are rapidly becoming the norm. By the year 2020, 30% of the world's shipping fleet will be too big to pass through the Panama Canal.
Mr Tozer predicted a maximum ship capacity of 12,500 boxes per ship. Beyond that, he said, even the biggest container ports would be unable to load and unload them.
But perhaps size is not everything. Transporting goods by container is cheap, around a dollar for a refrigerator, but slow at just 25km/hr.
At the moment, a European car ordered in the US could take up to 25 days to be delivered. This includes the time taken to cross the Atlantic and the time it takes to unload and distribute the cargo from today's giant container ships in port.
The naval architect Nigel Gee believes there is a gap in the market for an ocean-going courier service that fits somewhere between air freight and conventional shipping.
His patented Pentamaran design would be a much smaller craft, travelling much faster, so he predicts that the same car could be delivered to the same customer within a week.
Pentamarans would carry around 1,000 boxes at a maximum speed of around 85km/hr.
They would achieve such speeds without vastly increasing fuel consumption thanks to a long, thin torpedo-shaped main hull. Such a design is naturally unstable in the water and requires side hulls to stop it sinking - in this case, four are used.
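As a rough, illustrative calculation (the distance and times here are approximations, not figures from the programme): a North Atlantic crossing of about 6,000km takes roughly ten days at 25km/hr but only around three days at 85km/hr, which is why a smaller, faster vessel could plausibly bring a 25-day door-to-door delivery down to about a week once port handling is added back in.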
Its designers claim great interest among ship builders worldwide.
But shipping is notoriously cost-conscious. As oil prices rise, so ships will need to find ways of minimising their fuel consumption.
In an anonymous-looking test hall in the heart of England where, back in the 1930s, Frank Whittle tested the concept of a jet engine to power aircraft, engineers are testing similar techniques to power ships.
In the absence of ocean, the All Electric Ship Demonstrator project needs no hull or rudder to show the benefits of propelling a ship electrically rather than mechanically.
The programme is jointly funded by the British and French defence ministries to test the operation of an advanced naval electric propulsion system before applying the technology to a warship.
A gas turbine produces electricity to drive the system's virtual propeller via a vastly shortened propeller shaft.
It goes like a rocket and yet handles like a dream, the propeller changing direction with the click of a computer mouse in the test facility's virtual bridge or control room.
That is just one of the features of the all-electric ship that makes it so appealing, according to Professor Chris Hodge, also a chief engineer with the marine engineering consultancy BMT.
"One of the biggest problems with a mechanical engine is it can't turn backwards very easily," he said.
"An electric motor really doesn't know which way it's turning, once it's started. It's much more flexible, much more versatile and because of all of that much cheaper to run."
Nigel Gee believes that in the short term, the all electric ship will be limited to certain shipping sectors that may change in the future.
"Cruise liners have an enormous demand for electricity for services to their passengers, be it heating, water for baths, cooking or entertainment.
"And these occur at certain times of the day. At other times of the day, when the passengers are all asleep, they need to propel the ship fast.
"If you produce a lot of electricity you can choose where you divert it to, for services or weaponry, depending on need.
"In the longer term, when the oil runs out, the all electric ship may play a wider role in shipping as, indeed, may nuclear power."
|
<urn:uuid:e4dcb80c-85a0-4ccd-ab70-7e92c4a0c7f8>
|
CC-MAIN-2014-23
|
http://news.bbc.co.uk/2/hi/technology/4503686.stm
|
s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997889314.41/warc/CC-MAIN-20140722025809-00148-ip-10-33-131-23.ec2.internal.warc.gz
|
en
| 0.950007 | 1,045 | 2.78125 | 3 |
Short, five-minute exercises and case studies will be scattered throughout the two-day session. Longer exercises are detailed below. Time spent on each topic will vary depending on the composition of the class and the interest in particular areas.
1. Agile Thinking
In order for us to understand the benefits of Scrum and the nuances behind its framework, we begin with the history of agile methods and how relatively new thoughts in software development have brought us to Scrum.
- a. How manufacturing has influenced software development
- b. The origins of agile thinking
- c. The Agile Manifesto
- d. The complexity of projects
- e. Theoretical Vs. Empirical processes overview
- f. The “Iron Triangle” of Project Management
EXERCISE: The “Art of the Possible.” This is an opportunity to understand how small changes in behavior can have a large impact on productivity. This also turns our thinking towards new ideas and a willingness to change for the better.
2. The Scrum Framework
Here we’ll ensure that we’re all working from the same foundational concepts that make up the Scrum Framework.
- a. The different Scrum roles
- b. Chickens and Pigs
- c. Iterative Development vs. Waterfall
- d. Self Management concepts
- e. Full disclosure and visibility
- f. The Scrum Framework Overview
3. Implementation Considerations
Moving beyond Scrum’s foundational concepts, we’ll use this time to dig deeper into the reasons for pursuing Scrum. We’ll also use this time to begin a discussion of integrity in the marketplace and how this relates to software quality.
- a. Traditional vs. Agile methods overview
- b. Scrum: The Silver Bullet
- c. The Agile Skeleton
- d. A Scrum launch checklist
EXERCISE: Integrity at a fast-food restaurant. During this exercise we’ll review various options regarding an employee faced with a difficult situation. The importance of providing high quality products to our customers will be explored.
EXERCISE: Understanding customer expectations. This exercise is the beginning of an extended exercise involving agile estimating and planning. During this first portion of the exercise, we’ll work with a fictional customer who has a very demanding schedule and understand how our assessment of project work plays a significant role in customer satisfaction.
EXERCISE: The 59-minute Scrum Simulation. This popular exposure to Scrum asks us to work on a short project that lasts for just 59 minutes! We’ll walk through all of the key steps under the Scrum framework as we work in project teams to deliver a new product.
4. Scrum Roles
Who are the different players in the Scrum game? We’ll review checklists of role expectations in preparation for further detail later in our session.
- a. The Team Member
- b. The Product Owner
- c. The Scrum Master
5. The Scrum Team Explored
Since the ScrumMaster is looking to protect the productivity of the team, we must investigate team behaviors so we can be prepared for the various behaviors exhibited by teams of different compositions. We’ll also take a look at some Scrum Team variants.
- a. The Agile Heart
- b. Bruce Tuckman’s team life cycle
- c. Patrick Lencioni’s Five Dysfunctions of a Team
- d. Team ground rules
- e. Getting Human Resources involved
- f. The impact of project switching
- g. The MetaScrum
- h. The Scrum of Scrums
- i. The importance of knowing when software is “done”
- “Done” for multiple team integrations divided by function
- “Done” for multiple team integrations divided by skill
- “Done” for unsynchronized technologies
- j. Internal Outsourcing*
6. Agile Estimating and Planning
Although agile estimating and planning is an art unto itself, the concepts behind this method fit very well with Scrum, offering an agile alternative to traditional estimating and planning. We’ll break into project teams that will work through decomposition and estimation of project work, and then plan out the project through delivery. (A brief worked example follows the topic list below.)
- a. Product Backlog Features
- b. Relative Weighted Prioritization
- c. Prioritizing Our Time
- d. User Stories
- e. Relative Effort
- f. Velocity
- g. Planning Poker and Story Points
- h. Ideal Team Days
- i. Team Capacity
- j. Projecting a Schedule
- k. Why Plan in an Agile Environment?
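As a quick worked example of how these pieces fit together (the numbers are invented for illustration and are not part of the workshop materials): if a Product Backlog is estimated at 120 story points and the team's measured velocity settles at around 20 points per two-week Sprint, a first-cut projection is 120 ÷ 20 = 6 Sprints, or roughly 12 weeks to delivery. The team would then revisit that projection as capacity, drag factors, and reprioritization change the picture.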
7. The Product Owner: Extracting Value
The driving force behind implementing Scrum is to obtain results, usually measured in terms of return on investment or value. How can we help ensure that we allow for project work to provide the best value for our customers and our organization? We’ll take a look at different factors that impact our ability to maximize returns.
- a. The Priority Guide
- b. Product Backlog Refactoring
- c. Productivity Drag Factors
- d. Fixed Price/Date Contracts
- e. Release Management
- f. Earned Value Management
8. The ScrumMaster Explored
It’s easy to read about the role of the ScrumMaster and gain a better understanding of their responsibilities. The difficulty comes in the actual implementation. Being a ScrumMaster is a hard job, and we’ll talk about the characteristics of a good ScrumMaster that go beyond a simple job description.
- a. The ScrumMaster Aura
- b. Characteristics of a ScrumMaster Candidate
- c. The Difficulties of Being a ScrumMaster
- d. A Day in the Life of a ScrumMaster
- e. The Importance of Listening
- f. Common Sense
9. Meetings and Artifacts Reference Material
While most of this material was discussed in previous portions of class, more detailed documentation is included here for future reference.
- a. A Chart of Scrum Meetings
- b. The Product Backlog
- c. Sprint Planning
- d. The Sprint Backlog
- e. The Sprint
- f. The Daily Scrum
- g. The Sprint Demo/Review
- h. Why Plan?
- i. The Ideal Team Day
- j. Scrum Tools
10. Advanced Considerations and Reference Material
This section is reserved for reference material. Particular interests from the class may warrant discussion during our class time together.
- a. Conflict Management
- b. Different Types of Sprints
- c. The ScrumMaster of the Scrum-of-Scrums
- d. Metrics
- e. Dispersed Teams
- f. Scaling
- g. Developing Architecture
- h. Stage Gate/Milestone Driven Development
- i. Inter- and Intra-Project Dependencies
- j. Task Boards, Project Boards
- k. Scrum and CMM, “Traditional” XP
|
<urn:uuid:3e142c71-f2c4-4aca-94ab-b87c53aadda4>
|
CC-MAIN-2013-20
|
http://www.dwwtc.com/outline/agile/certified_scrummaster_workshop
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706470784/warc/CC-MAIN-20130516121430-00095-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.839359 | 1,515 | 2.78125 | 3 |
CGI and video games: computer generated images (or, at least, computer interpreted images) are, by definition, the visual recipe for every video game we play and part of what makes games one of the most complex and captivating forms of entertainment on the planet. From great stories and characters to awesome gameplay and sound design, there are numerous reasons why video games are a part of more people’s lives than ever before. But if there’s one aspect of games that has evolved the most over time, it’s the element many care about most — visuals.
For as long as video games have been around, people have gravitated towards games that are not only fun to play but also look amazing. In fact, even the film industry is now studying how game developers create realistic graphics and movement to tell a story. Of course, much like movies themselves, games have gone through an evolution in becoming the visually jaw-dropping experiences they are today. And CGI has played a major role in the evolution of game visuals.
The Early Days
In the beginning, or the early ‘70s, all you had was a few white pixels over a black screen. Although Pong wasn’t officially the first video game ever made, it was one of the earliest arcade games to become popular across the globe. Other games like Midway’s Boot Hill and Gotcha only used black and white computer-generated images, but this was enough at the time to fill arcades.
The success of these black-and-white titles led to a desire for more attractive visuals and shapes. Namco’s Galaxian astonished gamers everywhere in 1979 with its brightly colored ships, and a year later the enormously popular Pac-Man arrived. Developers would continue pushing the limits of the video game consoles at the time to deliver games that were a joy both to view and play.
The Sprite Era
In 1985 a little game called Super Mario Bros. jumped onto the scene, almost single-handedly resurrecting the video game industry after a devastating market crash. In the years that followed, games like Street Fighter II, Teenage Mutant Ninja Turtles, and Strider revived arcades as a social and game hub. Revolutions in memory, storage capacity, and graphics cards/screen resolution allowed these games to offer more vibrant colors and diverse shapes than ever, leading to improved user experiences.
The increased hardware power of systems like the Super NES and Sega Genesis also inspired developers to create jaw-dropping visuals for their time. Games like Chrono Trigger, Sonic The Hedgehog, and Super Metroid are to this day considered masterpieces of an era when designers were able to craft charming worlds and atmospheric places with sprites alone. While 2D graphics still have their fans to this day, the mid-‘90s are arguably the period of greatest CGI advancement in video games.
The 3D Takeover Unfolds
Increased power in the average home computer gave developers the freedom to use tricks to simulate 3D. One of the games to do this best was the critically praised Doom, a pioneer in perhaps the most popular genre today: first-person shooter. True 3D graphics finally took over in the mid-’90s with the release of the Nintendo 64 and PlayStation.
With these consoles, gamers could truly begin exploring fully-3D worlds. There was nothing more incredible than seeing Mario jump, fly, and slide in Super Mario 64, the first successful 3D platformer. Games like PlayStation’s Crash Bandicoot and PC-favorite Quake continued pushing CGI in games until developers needed better hardware to take things further.
The Modern Age
The jump from 2D to 3D still stands as the most significant advancement of CGI in video games. Ever-improving technology in the early 2000s opened the door to head-turning games like Halo: Combat Evolved, Grand Theft Auto III, and Metroid Prime. Never before were video game visuals so capable of creating environments that sucked players in and made them feel like part of the virtual worlds.
Today, 3D continues dominating the industry as games become more and more realistic. The latest video game consoles allow for the best cinematic realism ever to grace the industry, while computer users are able to constantly boost their system’s graphics capabilities. With the advent of virtual and augmented reality, there’s no telling where video game CGI will go next.
|
<urn:uuid:ddb27585-af6e-4f93-8be6-b8607235bc52>
|
CC-MAIN-2020-24
|
https://www.nyfa.edu/student-resources/tag/cgi/
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347388012.14/warc/CC-MAIN-20200525063708-20200525093708-00333.warc.gz
|
en
| 0.955534 | 898 | 2.8125 | 3 |
An automotive master technician is an automobile technician who received additional training or certification to obtain the credential of master technician. This credential can be obtained through automobile manufacturers or the National Institute for Automotive Service Excellence. Salary for this occupation can vary based on the master technician's experience and employer.
Although there are no specific educational requirements for this occupation, many employers prefer candidates who received vocational training in automotive service technology. This training can be obtained in high school or through trade and technical schools. Master technicians are required to receive master-level certification from automobile manufacturers or the National Institute for Automotive Service Excellence credential of certified master automobile technician.
In May 2009, the Bureau of Labor Statistics estimated that 606,990 automotive service technicians and mechanics were employed in the United States. Annual wages ranged from $19,840 to $59,920. The 25th percentile earned $25,970, the median earned $35,420 and the 75th percentile earned $47,240 per year.
Master Technician Salaries
Along with general responsibilities, master technicians may coordinate the activities of other automobile mechanics. In January 2011, CBSalary.com reported a median annual salary of $53,399 for master auto technicians. The 25th percentile earned $40,178 and the 75th percentile earned $73,764 per year.
The Bureau of Labor Statistics suggests automotive service technicians and mechanics with specialized skills and certifications are more likely to find employment opportunities because of the increased use of advanced technology and automobiles. It also cites, “By becoming skilled in multiple auto repair services, technicians can increase their value to their employer and their pay.”
|
<urn:uuid:f8916e40-5b15-4708-8125-020ab634ebb9>
|
CC-MAIN-2017-39
|
http://www.ehow.com/info_7845251_automotive-master-technician-salaries.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818689471.25/warc/CC-MAIN-20170923033313-20170923053313-00406.warc.gz
|
en
| 0.947409 | 334 | 2.59375 | 3 |
You will be given journal assignments throughout the semester.
Narrative Elements: Setting. What is it? Why is it important? How do I create it?
The setting is the environment in which a story or event takes place. Setting can include specific information about time and place (e.g. Boston, Massachusetts, in ...) or can simply be descriptive (e.g. ...). Often a novel or other long work has an overall setting (e.g. ...).
Geographical location, historical era, social conditions, weather, immediate surroundings, and time of day can all be aspects of setting.
Careful portrayal of setting can convey meaning through interaction with characters and plot. For example, in Jack London's Call of the Wild, the setting for Buck's adventures changes frequently, moving from a civilized environment to a wild and dangerous environment.
These changes of setting are crucial to Buck's development as a character and to the events in the tale. To create setting, provide information about time and place and use descriptive language to evoke vivid sights, sounds, smells, and other sensations. Pay close attention to the mood a setting conveys.
To portray setting in both fiction and non-fiction:
Refer specifically to place and time: Darwin, ...
Provide clues about the place and time by using details that correspond to certain historical eras or events: "With its quilted liner, the poncho weighed almost 2 pounds, but it was worth every ounce. In April, for instance, when Ted Lavender was shot, they used his poncho to wrap him up, then to carry him across the paddy, then to lift him into the chopper that took him away."
"Empty benches rose on either side of him, but ahead, in the highest benches of all, were many shadowy figures. They had been talking in low voices, but as the heavy door swung closed behind Harry an ominous silence fell." (Rowling, Harry Potter and the Order of the Phoenix)
Describe the weather and the natural surroundings: "They could not have had a more perfect day for a garden-party if they had ordered it. Windless, warm, the sky without a cloud. Only the blue was veiled with a haze of light gold, as it is sometimes in early summer. The gardener had been up since dawn, mowing the lawns and sweeping them, until the grass and the dark flat rosettes where the daisy plants had been seemed to shine."
The gardener had been up since dawn, mowing the lawns and sweeping them, until the grass and the dark flat rosettes where the daisy plants had been seemed to shine.17 thoughts on “ Use Word Choice to Set the Mood ” annbrown11 May 4, at pm Hello good day, i will like to meet you in person, am miss Anna, am from France and am leaving in London, please contact me on my email id at ([email protected]), for more information about me.
The term sense of place has been used in many different ways. It is a characteristic that some geographic places have and some do not, while to others it is a feeling or perception held by people (not by the place itself).
Do you get why this sentence doesn’t make logical sense? The writer didn’t compare like with like; he compared Picasso’s art to Van Gogh the artist. You know what he means to say, but you might well have to go back and re-read the sentence to understand what the author really wants to say.
The best writers re-write and re-write. New writers tend to think that editing merely means a brief read through for typos and spelling errors. That's the very last thing to do. I’ve been a reader of authors who have a strong sense of place, because in my own life I’ve been somewhat placeless.
I always traveled as a kid, and went to a new school every year. Continuing my reflections on living as if I only have a year to go. And joining in on a ‘Blog Tour’ too. A friend has contacted me this week and asked me to take part in a sort of blogging chain letter.
|
<urn:uuid:5d2e66e5-50cb-40e9-989e-09177e9c47e2>
|
CC-MAIN-2020-05
|
https://jihygytucykuvo.iridis-photo-restoration.com/why-do-authors-write-about-a-sense-of-place-14526qo.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250607596.34/warc/CC-MAIN-20200122221541-20200123010541-00076.warc.gz
|
en
| 0.969177 | 839 | 4.1875 | 4 |
Asthma is a serious medical condition that can affect anyone, from a teenager to a senior citizen. You must go to the right doctors, and you should also watch out for a lot of different warning signs that can make your symptoms worse. This article will show you some simple ways you can reduce your asthma symptoms, thus lowering your chances of a serious attack.
There are many different types of asthma. Being fully informed about the specific type of asthma you have is very important. People who suffer from exercise-induced asthma will need to make sure that they have an inhaler with them inside of their gym bag. Knowing symptom patterns will help you prevent emergencies.
Cleaning products should be avoided when you have asthma. Asthma sufferers are often sensitive to chemical cleaners; using these products can sometimes trigger asthma attacks. Instead of relying on harsh cleaners, check out some organic solutions. They might cost a few dollars more, but the difference is well worth it.
If you have asthma, you need to avoid any kind of tobacco smoke. Asthma creates breathing problems by constricting airways, and cigarette smoking only exacerbates the problem. Stay away from vapors and all chemical fumes so you are not able to breathe them in. This can cause your asthma to flare up, causing an attack that may be uncontrollable. Avoid secondhand smoke by leaving physical distance between yourself and the smoker.
If you suffer from asthma and allergies that result in attacks, you can get injections of long-lasting medication for relief. Omalizumab is an antibody medication that is used to control these allergic reaction symptoms and may be recommended by your allergist.
Think about buying a dehumidifier if your asthma symptoms are bad. Cutting down the humidity in the house will lower the amount dust mites in the air, and that will mean fewer asthma flare-ups. Dehumidifiers work by taking the humidity out of the air.
When you travel, your rescue medication should be with you all the time. Traveling can make you more likely to suffer from attacks due to the extra strain and stress on your body. It’s also hard to control the environment you’re in when traveling, which is another reason you might experience more symptoms or have an attack.
Many of the most common asthma triggers are found in the home. Such irritants include mold spores, dust, smoke and chemical fumes. To reduce asthma attacks and stay healthy, have an inspector remove any harmful agents yearly. In addition, regularly cleaning the home can stop these things from building up.
If you are cleaning, you should use a mop that is damp instead of a dry broom. Sweeping stirs up irritants that can trigger an asthma attack. A damp rag should be used when dusting because a feather duster can cause dust to kick up and lead to an asthma attack.
For those struggling with their asthma, avoiding regular contact with pets is important to control symptoms. Even people that do not have allergies are prone to suffer an asthma attack from the pollen and dust on animals.
You want to make certain you visit more than just one doctor. While your primary care physician should be your go-to source for asthma help, consider making an appointment with a specialist or two. Allergists, asthma centers, pulmonologists, and even nutritionists can work with you to make sure you are taking advantage of all avenues of treatment.
Asthma can be a very life threatening problem and should always be taken seriously. Because asthma attacks can cause death, you should take steps to keep your asthma under control. So, carry an emergency inhaler with you at all times, or you can take precautions like making sure your house is always free of dirt and dust. You are likely to see some improvement in these asthma-related symptoms if you take the time and follow the advice given in the following tips.
|
<urn:uuid:8f2f7315-f744-496c-ac65-868c711034c1>
|
CC-MAIN-2023-23
|
https://asthmaattacksymptom.com/asthma/treating-your-asthma-has-never-been-this-easy-before/
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224655092.36/warc/CC-MAIN-20230608172023-20230608202023-00085.warc.gz
|
en
| 0.947343 | 837 | 2.703125 | 3 |
The remains of the 6th Century Church of St. Polyeuktos
It all started with peacocks... those dazzling creatures that represented the eternity of the spirit and had associations with Roman empresses... There once was a church, built by a royal princess who was the noblest in the land, that had a golden dome and was lavishly decorated with the most precious marbles from the empire, colors of green, blue and gold glittering everywhere, with peacock motifs decorating column capitals and niches around the church, it was known to have been inspired from the temple of Solomon. No one knew of its existence till half a century ago when magnificent marble fragments appeared as if by magic in the middle of Istanbul... This, was the Church of St. Polyeuktos in Constantinople, built in the 6th century by Anicia Juliana, a descendant of both Eastern and Western emperors ...
Dome of the Rock, Jerusalem
The first time I had every heard of the Church of St Polyeuktos was when I was doing a book review for this blog on And Diverse are their Hues, exploring the significance of color in Islamic art and culture. In his article "Blue Behind Gold: Inscription of the Dome of the Rock and its Relatives" Dr. Lawrence Nees gave the example of St. Polyeuktos as being one of the fore-bearers for the color scheme and type of decoration used on Dome of the Rock. Dr. Nees painted a radiant picture of a Byzantine church with peacock blue and green used behind gold inscriptions that left me enthralled with this mysterious, magnificent structure and its founder, I had to find out more...
It was during my semester in Istanbul last fall, at a visit to Istanbul Archaeology Museum, I recalled Anicia Juliana and her church once again, upon seeing a capital with peacock motifs. I found the section set aside for the Church of St. Polyeuktos among the Istanbul galleries of the museum, identified by the name of the present day neighborhood where they had been excavated, Saraçhane. There were some beautiful, intricately carved late antique marble works, capitals and columns with a few colored glass still intact, but visualizing the splendor and comprehending the significance of this church required more research.
After inquiring, I learned that Saraçhane was the neighborhood directly across from the Istanbul municipality, near the aqueduct of Valens and the fragments leading to the discovery of this church were found during grading operations in the area in the 1960's. As a matter of fact, I had probably passed by the site, which is labeled an "Archaeological Park," thousands of times in my lifetime. It had been right there, where I took the right turn to go to the antique flea-market, Horhor, one of my favorite haunts in Istanbul. This was just another one of the remains of the Byzantine past buried beneath that great capital of Empires, Istanbul, it's fate, sadly recalling the neglected history of Byzantium.
Ruins of St. Polyeuktos with the view of Aqueduct of Valens
Ruins of St. Polyeuktos with the view of the Istanbul municipality
So, what was so special about this church and how did we compile a whole picture from the bits of marble strewn about? And where did Solomon's temple fit into all of this? This, is the great wonder of archaeology and art historical research, they tell us the tales of the people and places of the past, bringing to life a culture, a way of life that has been long gone and sometimes even forgotten for centuries but still relevant to us today. From the myth surrounding it about being an exact replica of the Temple of Solomon to its extraordinary decoration program which appropriated distinctly Sassanian motifs next to classical Roman ones, the most ostentatious structure of it's day, Church of St. Polyeuktos, standing in the middle of the processional way, between the Forum Taori and the Church of the Holy Apostles, was waiting to tell us the story not only of its remarkable patron, who dared to challenge Justinian, but also the establishment of signs and symbols of divine and heavenly kingship in Late Antiquity.
One of the primary finds that helped to identify St. Polyeuktos was the inscriptions found around the fragmentary niches, the apses of which were filled with the outspread tail feathers of a peacock. The inscription was identified as the seventy-six-line epigram recorded in a 10th century source, the Palatine Anthology, which stated that it belonged to the church of the martyr Polyeuktos built by Anicia Juliana, the great-granddaughter of the empress Eudocia (wife of Theodosius II) "who built a structure to rival the temple of Solomon." Dr. Martin Harrison, who led the excavation, also discovered that the unit of measurement used in the church was the royal cubit as opposed to the Roman foot, and that the church measured 100 royal cubits square, the measurement of the Temple at Jerusalem, built by King Solomon in the tenth century BCE. Princess Anicia Juliana had not only planned and executed one of the predominant signs of kingship in every detail of her church but had also brazenly declared it like a manifesto in one of the most visible elements of the structure, the decorative inscription.
The Church of St. Polyeuktos appears in the Byzantine Book of Ceremonies, which mentions the emperor stopping here during imperial procession for Easter Mondays where he changed his candle before continuing along the Mese to the Church of the Holy Apostles. Another literary source that mentions the church of St. Polyeuktos and adds an interesting dimension to the whole construct is the sixth century story by Gregory of Tours. In this story, Juliana's confrontation with the "upstart" Justinian (he was of peasant stock while she had the blood of the Theodosian dynasty coursing through her veins) and her victory over him is told. Justinian looking for more revenue to fund his defense and building projects requests Juliana make a contribution to public funds. She humbly asks for time to gather her treasure and meanwhile instructs her workers to plate the roof of the church using all of her gold. When Justinian comes back, she takes him to the church where they kneel in prayer and when they are done, she points to their surroundings, and tells him to take what he likes. Since he is not about to take apart a house of God, Justinian is about to leave when Juliana gives him her emerald ring saying, "Accept, most sacred Emperor, this tiny gift from my hand, for it is deemed to be worth more than this gold." The passing on of the ring has been interpreted by scholars as the last member of the Theodosian dynasty passing on the right to rule to her successor. It is also assumed that Juliana was a disillusioned, old woman by now, neither her husband nor her son attaining what she deemed was rightfully theirs, the position of emperor.
From what can be gathered from the archaeological evidence and the remaining artifacts, St. Polyeuktos had been pillaged during the fourth crusade, some parts making their way to Europe and finally collapsed around the 13th century. Except for a few marble fragments in museums and the two pillars gracing the Piazzetta outside of the south walls of the San Marco under the auspices of Pillars of Acre, all that remains in place of this once magnificent church whose influence can be seen from the Hagia Sophia in Istanbul to the S. Vitale in Ravenna is the substructure. Some scholars believe the workers that created the splendid, marble decorations for the church were recruited by Bishop Ecclesius on his visit to Constantinople to go to work in S. Vitale in Ravenna while others probably worked on Justinian's two churches, SS. Sergius and Bachus, and Hagia Sophia, that bear a striking resemblance in the quality and type of marble sculpture to St. Polyeuktos. This is why Justinian's legendary quote upon completion of the Hagia Sophia, "Solomon I have outdone thee!" is believed actually to be addressing Juliana.
Pillars of Acre, Venice, from the Church of St. Polyeuktos, Constantinople (source)
San Marco, Venice, taken from the Church of St. Polyeuktos, Constantinople (source)
Today, when I stand in Saraçhane Square and look around me, instead of seeing another busy Istanbul thoroughfare with a sea of bustling humanity and throngs of vehicles passing through on their way to some other destination, I imagine the imperial procession approaching St. Polyeuktos, with large tapers in their hands, or members of the factions, the demarche of the Blues with the deme of the Whites, receiving the emperor and the criers saying "Your divine Majesty is welcome" and the acclamations of eulogy being chanted by the criers and the people... It's almost as if a silent, black and white movie just became a technicolor movie with sound. As I venture around the ruins, I try to imagine and draw courage from Anicia Juliana, a woman who was bold enough to compare herself to King Solomon 1500 years ago. When I consider the decorative cycle with those unique Sassanian-inspired plant motifs, I get a glimpse of the far-reaching interaction between the two rival courts and cultures, and can't help but wonder how much more there is to know that we still haven't discovered or been able to see through our Greco-Roman-centric point of view. I liken the site to a memento mori for the inevitable abandonment and destruction of even so significant a structure once it has outlived its usefulness.
Standing amidst the ruins of St. Polyeuktos
Art history is the visual manifestation of the common link that connects us to the past, the present day as well as the future. Every work of art I encounter has a story to tell about the time, the people, the culture and the world at large which broadens my perspective about the world I am living in and my place in it. My journey in the world of art history is a very personal one that has given me a sense of belonging. By studying the history of art, I not only discover the details of different cultures from history, I also find the common link that bonds us all together as human beings. St. Polyeuktos was obviously not the first work of art that I encountered to get me interested in this type of study and Byzantine art is not the only concentration I am interested in, when it comes to art history, I am not discriminating, I will take it all in. I have favorite artists and favorite time periods of course but as I progress further into my research, I realize that the two fundamental aspects that has me so enthralled every single time is how each work touches my soul and my intellect.
I would like to thank Hasan for giving me this opportunity to share my thoughts on 3 Pipe Problem. One of the greatest benefits of blogging about art history has been the connections I have made with like minded individuals and I feel I owe a special thanks to Hasan for this as well. His generosity of spirit has been a connecting force for a lot of art history bloggers from all the corners of the world.
1. Nees, Lawrence. Blue Behind Gold: Inscription of the Dome of the Rock and its Relatives. Video hosted by islamicartdoha.org (link)
2. Canepa, Matthew P. Two Eyes of the Earth - Art and Ritual of Kingship between Rome and Sasanian Iran. University of California Press. 2010. Preview available at Google Books (link)
3. De Cerimoniis (ed. A. Vogt, pp. 68 and 43-4; ed. Reiske, pp. 75-6 and 50) Translated by M. Harrison. Excavations at Sarachane in Istanbul. Vol. 1. Princeton University Press. 1986. pp.9-10
4. Gregory of Tours - Glory of the Martyrs. Translated by R. Van Dam. Liverpool University Press. 1988. Preview available at Google Books (link)
5. Harrison, RM Excavations at Sarachane in Istanbul. Vol.1 The Excavations, Structures, Architectural Decorations, Small Finds, Coins, Bones, and Mollusc. Princeton University Press, 1986
6. Harrison, M. A Temple for Byzantium. University of Texas Press. 1989.
7. Mango, C and Sevcenko, I. Remains of the Church of St. Polyeuktos at Constantinople. Dumbarton Oaks Papers. Vol. 15. 1961.
8. Palatine Anthology. Book I - Christian Epigrams. pp.7-11. Available online at The Internet Archive (link)
Sedef Piker is an independent researcher traversing between Istanbul and New Jersey dedicated to advancing a greater understanding of the cross-cultural interactions between the 'East' and the 'West'. After years of being subsumed in the culture of the Ottomans during her work as a student of manuscript illumination at the Nakkashane (Ottoman academy of painting) of the Topkapi palace and later Grup Gulefsan atelier in Ankara, she is currently trying to augment her knowledge in the art and history of Western civilizations. When she isn't taking classes at Rutgers University, she rejoices in sharing her new discoveries in the world of art and its history on her blog Sedef's Corner (link).
|
<urn:uuid:7a88d320-b51e-4dc2-a772-a06378aa5055>
|
CC-MAIN-2013-48
|
http://www.3pipe.net/2013/08/saintpolyeuktos.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386164018116/warc/CC-MAIN-20131204133338-00058-ip-10-33-133-15.ec2.internal.warc.gz
|
en
| 0.961082 | 2,894 | 3.0625 | 3 |
Watts Station, May 2008
Location: 1686 E. 103rd Street, Watts, Los Angeles, California
Architectural style: Late Victorian
NRHP Reference #: 74000523
Added to NRHP: March 15, 1974
(Not to be confused with Watts Towers/103rd Street Station, which lies immediately to the south and is currently in use.)
Watts Station is a train station built in 1904 in Watts, Los Angeles, California. It was one of the first buildings in Watts, and for many years, it was a major stop for the Pacific Electric Railway's "Red Car" service between Los Angeles and Long Beach. It was the only structure that remained intact when stores along 103rd Street in Watts were burned in the 1965 Watts Riots. Remaining untouched in the middle of the stretch of street that came to be known as "Charcoal Alley", the station became a symbol of continuity, hope, and renewal for the Watts community. It has since been declared a Historic-Cultural Monument and is listed on the National Register of Historic Places.
Construction and operation as a "Red Car" station
Watts was built on the old Rancho La Tajuata. In 1902, the family of Charles H. Watts, for whom the community was later named, sought to spur development of the rancho by donating a 10-acre (40,000 m²) site to the Pacific Electric Railway. Watts Station was built on the site in 1904, serving for more than 50 years as a major railway depot and stop for the Pacific Electric's "Red Car" service between Los Angeles and Long Beach. The station is a single-story, 2,200-square-foot (200 m²), wood-frame structure divided into three rooms. It was one of the first buildings erected in Watts and is one of the few remaining from its early years. It also served as a model for later depots built in La Habra, Covina and Glendora.
When the station opened, it drew people to the area, so much so that the community that grew in the area was initially known as "Watts Station." A vintage 1906 photograph of the station from the USC Digital Archives can be viewed here. Another classic image of the station from the collections of the Los Angeles Public Library can be seen here.
The building remained an active depot until passenger rail service was discontinued in 1961.
Colorful and violent history
From its beginning, Watts Station had a colorful and violent history. Incidents occurring around Watts Station in its early years including the following:
- In 1904, shortly after the station opened, a woman described as "an illiterate Mexican woman" was killed trying to save her six-year-old son who had wandered onto the tracks. She was the first person to die in a train accident at Watts Station, and the Los Angeles Times proclaimed her a heroine who "dashed upon the track" and threw her son out of the path of an oncoming train. The paper also noted: "Children are numerous among the Mexican families who live in the vicinity of Watts Station, and with that childish disregard for danger that marks the age of indiscretion these children play on the tracks of the electric railroad."
- In December 1904, a 40-year-old "maiden lady" living at Watts Station deliberately stepped in front of a rapidly approaching Long Beach "flyer." The Los Angeles Times reported that "when the body was finally removed from under the wheels it was so badly crushed and torn that it could hardly be recognized as that of a human being."
- In May 1905, a confrontation at Watts Station prompted a remarkable ethnic commentary by the Times. A man known as the "Duke of Watts" had been hired by the Pacific Electric to watch over the "village of child-like laborers" at Watts station and protect them from "the ways of a wicked gringo world." The "Duke" became angered when wagons driven by "some of thim [sic] damn dagoes" rolled into the right of way raising a cloud of dust. When confronted by the "Duke," who the Times described as "a beneficent sovereign," the paper reported that "one of the Italians went up in the air, the way they do" and the other Italians "jumped down from their wagons and began gesticulating like crazy men, ringing their arms and howling like agitated monkey." The Italians claimed the Duke pulled a large pistol on them and forced them to leave Watts.
- In July 1905, Jose Bustos, "a Mexican employed by the Pacific Electric railway" was struck and almost instantly killed by a Long Beach car at Watts station. After exiting a train, "intending to go to the Mexican camp," he was struck while crossing the south bound tracks. Another Mexican laborer, C. Medal, sustained fatal injuries after being struck by a train at Watts Station six months later. And in December 1906, a motorman, Charles Vaughn, was pinched between two cars at Watts Station, suffering possibly fatal injuries.
- In January 1906, a shooting at Watts Station shocked "Red Car" passengers. A woman named Mrs. Henry Welsh, while in the midst of divorce proceedings, drew a revolver and shot at her husband, who was waiting for the Red Car. When her husband grabbed the gun, Mrs. Welsh walked to her husband's pool room near the station where she "smashed in the windows and generally demolished things." She then secured another revolver and returned to the station, firing two more shots at her husband. Both shots missed her husband and went into a waiting Red Car, sending passengers into a panic.
- In August 1919, a group of ex-employees of Pacific Electric attacked a Red Car at the Watts Station, breaking windows and throwing stones. When A. W. Moon, a Pacific Electric guard, tried to defend the derailed car, he was arrested and charged with "fighting and threatening to fight." The crowd followed the guard to the Watts jail, jeering at him, "calling him opprobrious names," and threatening him. At his trial, it was alleged that the crowd cried, "String them up," prompting Mr. Moon's attorney to request that the case be moved to a venue out of Watts.
Symbol of hope along "Charcoal Alley"
In August 1965, the Watts Riots resulted in the destruction of buildings up and down 103rd Street—the main commercial thoroughfare in Watts. Watts Station was situated in the center of the one-mile (1.6 km) stretch of 103rd Street between Compton and Wilmington Avenue that came to be known as "Charcoal Alley" due to the widespread destruction. One observer recalled: "Both sides of 103rd Street were ablaze now. The thoroughfare was a sea of flames that emitted heat so unbearable that I believed my skin was being seared off." Another account of the riots along "Charcoal Alley" states: "On the third day of the Watts Riots, 103rd St. was burned to the ground." In the middle of the rubble and widespread destruction along "Charcoal Alley", the Los Angeles Times reported that "the train station was the only structure that remained intact when stores along 103rd Street burned during the Watts riots." The survival of the old wood-framed Watts Station, whether an intentional omission or a mere coincidence, resulted in the station becoming "a symbol of continuity, hope and renewal" for the Watts community.
Historic designation and restoration
Four months after the riots, the station was declared a Historic-Cultural Monument (HCM #36) by the Los Angeles Cultural Heritage Commission. It was also listed on the National Register of Historic Places in 1974. In the 1980s, after the station had been vacant for many years, the Community Redevelopment Agency spent $700,000 to restore the structure to its original exterior design. The station was re-opened in 1989 as a customer service office for the Los Angeles Department of Water and Power and a small museum of Watts history. Mayor Tom Bradley attended the dedication ceremony and said: "Those days of glory are going to return, and we are going to be at the heart of the action right here at the Watts train station."
In 1990, the Metro Blue Line resumed train service from Los Angeles to Long Beach along the old Pacific Electric right of way. Though the old Watts Station does not serve as a passenger platform or ticket booth for the new Blue Line, the trains do stop at a new "Watts Station", 103rd Street-Kenneth Hahn, on 103rd Street, at a location next to the old Watts Station. Unfortunately, the Blue Line has brought back the fatalities that plagued the Red Cars, with more than 87 motorists and pedestrians having been killed at Blue Line crossings since 1990, making it the deadliest and most accident-prone light rail line in the country.
- List of Los Angeles Historic-Cultural Monuments in South Los Angeles
- List of Registered Historic Places in Los Angeles
- Watts Riots
- "National Register Information System". National Register of Historic Places. National Park Service. 2008-04-15.
- "Watts Station Declared: 'Of Historic Significance'". Los Angeles Sentinel. 1965-12-09.
- "Historic Train Depot in Watts Set For $310,000 Restoration". Los Angeles Times. 1986-11-09.
- "Dies Awful Death To Save Her Child". Los Angeles Times. 1904-05-19.
- "Ground To Death: Miss Mary Ryan Steps Before Pacific Electric Flyer to Shocking Fate". Los Angeles Times. 1904-12-28.
- "Duke of Watts Was 'Pinched': Scared Italian Grocers Had No Passports; Terrance Mulligan Ordered Them Out of His Dominions, and They Flew to the City to 'Get the Law on Him' - War Busted Loose on the Pacific Electric". Los Angeles Times. 1905-05-14.
- "Stepped to His Death: Laborer Employed by Pacific Electric Killed by Long Beach Car at Watts Station". Los Angeles Times. 1905-07-30.
- "Cannot Recover: Mexican Struck by Car Near Watts Station Sustains Injuries Which Will Prove Fatal". Los Angeles Times. 1906-02-19.
- "Motorman May Die: He Is Pinched Between Two Cars of Work Train at Watts Station; Taken to Hospital". Los Angeles Times. 1906-12-09.
- "Shoots Into Car Window: Woman's Bad Aim Endangers Many Passengers; Mrs. Welsh Fires on Mate at Watts Station; Climax to Numerous Stormy Domestic Quarrels". Los Angeles Times. 1906-01-27.
- "Says He Can't Get Fair Trial: Trolley Car Guard Wants a Change of Venue; Declares Gangsters Menace City Court at Watts; Faces Disturbing of Peace Charge for Doing 'Duty'". Los Angeles Times. 1919-09-16.
- Ray Hebert (1966-02-27). "Hope Brightens for Riot Areas: Action Promises Revitalization of Forgotten Slum". Los Angeles Times. ("In Watts, for example, a mall is being discussed for a stretch of 103rd Street -- the riot's infamous 'charcoal' alley between Compton and Wilmington Ave.")
- Mitchell Landsberg and Valerie Reitman (2005-08-11). "Watts Riots, 40 Years Later". Los Angeles Times. ("They had just secured one of the hardest-hit areas of Watts, a stretch of 103rd Street that had been dubbed 'Charcoal Alley.'")
- Art Berman (1965-12-06). "Watts Scars Heal Slowly: Businessman's New Store Looted". Los Angeles Times. ("Along a mile of 103rd Street in Watts -- dubbed 'Charcoal Alley' after 41 commercial buildings were destroyed by fire during the riot -- block after block is dotted with bare or rubble-filled lots or blackened shells.")
- Betty Pleasant (2005-08-03). "Eyewitness Account of the Watts Riots". The Wave Newspapars.
- "Charcoal Alley". Community Walk.
- Paul Feldman (1989-03-17). "Watts New? Reopening of Historic Red Car Station as Museum and DWP Office Seen as Symbol of Hope, Renewal". Los Angeles Times.
- Los Angeles Department of City Planning (2007-09-07). Historic - Cultural Monuments (HCM) Listing: City Declared Monuments. City of Los Angeles. Retrieved 2008-07-08.
- "Blue Line Train Kills Pedestrian at Watts Station". Los Angeles Times. 1999-06-25.
- "Summary of Blue Line Train/Vehicle and Train/Pedestrian Accidents". Los Angeles County Metropolitan Transportation Authority. June 2007.
- "Light rail fatalities, 1990-2002". American Public Transportation Association. 2005-05-20.
|
<urn:uuid:747b4c79-4ad4-49b0-a815-2f5e453b8d24>
|
CC-MAIN-2014-23
|
http://en.wikipedia.org/wiki/Watts_Station
|
s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1404776435842.8/warc/CC-MAIN-20140707234035-00093-ip-10-180-212-248.ec2.internal.warc.gz
|
en
| 0.950477 | 2,689 | 2.703125 | 3 |
William Carlos Williams' "The Red Wheelbarrow" is a cool little poem to visually brainstorm with a visible thinking activity like sketchnoting. What's even cooler is to get a whole class engaged in putting together their own sketchnotes on the same poem and then taking a look at the final results, either with an installation that's spread across the classroom bulletin boards, a repository of the scanned doodles saved as a Google doc, or the completed sketchnotes tweeted out for all to see.
Williams was influenced by so many different developments that were happening not just in American poetry but throughout the visual and performing arts. Looking at what he, Stevens, cummings, H.D., and Stein were doing in those first three decades of the 20th-century shows just how impactful someone like Pablo Picasso was on the development of their writing.
One has no idea what someone like William Carlos Williams would have thought about visual brainstorming, visible thinking, or sketchnotes, but something tells me that he and others like him would have been rather fond of the practice!
If you enjoyed this little post, you might also enjoy:
Dr. Glen Downey is an award-winning children's author, educator, and academic from Oakville, Ontario. He works as a children's writer for Rubicon Publishing, a reviewer for PW Comics World, an editor for the Sequart Organization, and serves as the Chair of English and Drama at The York School in Toronto.
If you've found this site useful and would like to donate to Comics in Education, we'd really appreciate the support!
|
<urn:uuid:1e31c7b4-6449-4fa7-8ef7-1ae55ae63e1d>
|
CC-MAIN-2017-47
|
https://www.comicsineducation.com/home/the-visible-thinking-project-sketchnoting-the-red-wheelbarrow
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934809229.69/warc/CC-MAIN-20171125013040-20171125033040-00282.warc.gz
|
en
| 0.975428 | 327 | 3.046875 | 3 |
Sara, 11, explains her religious practice and why fasting is good: for devotion to Allah, for self-discipline and for sympathy for the poor.
One of the 5 Pillars of Islam, fasting during the month of Ramadan is a time to be devoted to Allah, to ask for forgiveness, to develop self discipline and to be spiritually aware of how hard life might be for people who are much poorer. The festival of Eid Al Fitr at the end of the month includes the first lunch for weeks, a trip to a funfair and giving presents. It is also a time to reflect on life and to be thankful to Allah for the good things of life. Sara, 11, a Muslim from London explains and shows us what the fasting and the festival mean to her.
The rules for fasting say that no food is to be taken from sunrise until after sunset each day during Ramadan. As the Muslim calendar follows the moon, there are 354 days in the Muslim year, so Ramadan falls around 11 days earlier each year. At the moment, it falls during the British summer, so the fasting time is long.
Sara helps her mum get the evening meal ready: a chicken casserole to her Algerian dad’s favourite recipe, but she notes that the discipline of fasting is there to stop you thinking too much about food. Instead, Ramadan is time to remember Allah’s gift of the Holy Qur’an, to be prayerful, thankful and full of generosity.
The film follows Sara and some friends to the funfair in London that marks the end of Ramadan: if you have had no lunch for over 4 weeks, then you can take special pleasure and satisfaction in the midday meal of Eid al Fitr.
|
<urn:uuid:e1a638eb-b76b-4600-a190-2889155d6bdd>
|
CC-MAIN-2020-34
|
https://www.bbc.co.uk/programmes/p02mwdxf
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738777.54/warc/CC-MAIN-20200811115957-20200811145957-00197.warc.gz
|
en
| 0.947363 | 352 | 2.828125 | 3 |
Nature has always been generous with us. It provided us with land to live on, water to drink, and fresh air to breathe. It gifted us the warmth of the sun and the shade of trees. It gave us abundant resources so that we could lead comfortable lives.
But instead of thanking nature for these blessings, we went on exploiting it for our selfish needs. We kept cutting trees and made the forests barren. We kept polluting the water and made the water bodies toxic. We kept wasting resources, so much so that we have only a handful left to pass down to the coming generation. And now, when we look back, we realize how badly we have treated nature. But it is not too late. By monitoring our actions, we can still make up for the damage we have done.
If you too wish to lead an eco-friendly, greener life, here’s what you should do:
- Use electronics wisely
Today’s world is driven by technology, and it is next to impossible to live even a day without it. But by using devices wisely, you can minimize their negative impact on nature. Always turn devices off when you are not using them, as they draw power even when left in standby mode.
Also, invest in eco-friendly technology. More and more companies are launching energy-efficient devices to minimize energy loss. These devices reduce your energy consumption and save you money.
- Use energy-efficient light
Instead of using old incandescent bulbs, replace them with energy-efficient LEDs. Go a step further and install solar panels. It may sound like a big investment now, but it will save a lot of energy and money in the long run (see the rough savings sketch after this list).
- Use public transport
Driving a car adds to your carbon footprint by leaps and bounds. Shun driving and use public transport. You can also consider biking or walking to attain fitness goals while doing your bit for the environment.
If you have no other option than driving, ensure that you keep the speed down and keep your engine running smoothly to minimize its impact on nature.
- Plant some trees
Climate change has hit us hard. Although there are several reasons behind it, deforestation is a prime one. To stabilize the ecosystem, plant some trees. Of course, you cannot plant a forest, but even planting a single tree makes a difference.
Also, consider growing your own food. Start with planting herbs and then move on to some vegetables. It won’t just save you money but will also cut down on the carbon emissions caused by food transportation.
- Start composting
Rather than throwing waste food in the trash can, start composting it. With composting, you can create a natural fertilizer and reduce the waste that goes to landfill.
- Cut down your plastic use
Plastic is toxic to nature. All the plastic that goes to landfills keeps polluting the environment for years. So cut down your plastic use as much as you can. Instead of using plastic bags, switch to reusable cloth bags. Swap your plastic water bottles for glass bottles and you’ll feel the difference.
If you have a lot of plastic waste at your home, call a junk removal service to take it away. The biggest benefit of using a junk removal service is that it disposes of the waste in an environmentally friendly way.
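As a rough illustration of those long-run lighting savings (not from the original article; the wattages, daily hours and electricity price below are assumed round figures), here is a quick estimate of what swapping a single old bulb for an LED can save in a year:

```python
# Hypothetical round numbers for illustration only.
INCANDESCENT_W = 60      # typical incandescent bulb (assumed)
LED_W = 9                # LED with similar light output (assumed)
HOURS_PER_DAY = 4        # average daily use (assumed)
PRICE_PER_KWH = 0.15     # electricity price in USD (assumed)

# Energy saved = wattage difference * hours of use, converted to kWh.
saved_kwh_per_year = (INCANDESCENT_W - LED_W) * HOURS_PER_DAY * 365 / 1000
saved_money_per_year = saved_kwh_per_year * PRICE_PER_KWH

print(f"Energy saved per bulb: {saved_kwh_per_year:.0f} kWh/year")
print(f"Money saved per bulb:  ${saved_money_per_year:.2f}/year")
```

Multiply that by every bulb in the house and the payback on LEDs, and eventually on a modest solar setup, becomes easy to see.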
The bottom line
All the natural elements and resources that we have on this planet are the biggest blessing bestowed upon us by nature. Instead of depleting them mindlessly, we should use them wisely. Our careless habits harm the planet by leaps and bounds, and we should try to minimize the damage we subject our environment to.
|
<urn:uuid:29ddf839-2ed2-4e0b-825e-1be9cd4a554e>
|
CC-MAIN-2023-40
|
https://savedelete.com/news/technology/looking-forward-to-adopting-an-eco-friendly-life-here-is-what-you-need-to-do/390072/
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510412.43/warc/CC-MAIN-20230928130936-20230928160936-00557.warc.gz
|
en
| 0.943137 | 749 | 2.859375 | 3 |
Turboprop cargo aircraft
Transall C-160 – Armstrong Whitworth Argosy – Lockheed L-100
The turboprop engine is a variation on the jet engine in which most of the gas turbine’s energy is used to drive a propeller rather than to produce jet thrust directly. It is an efficient power plant at speeds below about 725 km/h and has several advantages over turbojet and piston engines. It has a good power-to-weight ratio, it uses readily available jet fuel, and its propellers deliver plenty of thrust at low speeds for short take-offs, while reverse pitch shortens the landing roll. These features make turboprop engines very attractive for use in military cargo aircraft, and the three we are looking at today are all civil versions of military cargo aircraft.
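A rough back-of-the-envelope sketch (not from the original post; the power, efficiency and speed figures below are assumed round numbers, not data for any real engine) helps show why propeller thrust is so generous at low speed: in steady flight, thrust is roughly propeller efficiency times shaft power divided by airspeed.

```python
# Illustrative only: T ~= prop_efficiency * shaft_power / airspeed.

def propeller_thrust_n(shaft_power_w: float, prop_efficiency: float, airspeed_ms: float) -> float:
    """Approximate thrust in newtons from shaft power (W), efficiency and airspeed (m/s)."""
    return prop_efficiency * shaft_power_w / airspeed_ms

SHAFT_POWER_W = 3_000_000   # a 3,000 kW-class turboprop (assumed)
PROP_EFFICIENCY = 0.85      # typical cruise propeller efficiency (assumed)

for speed_kmh in (300, 500, 725):
    speed_ms = speed_kmh / 3.6  # convert km/h to m/s
    thrust_kn = propeller_thrust_n(SHAFT_POWER_W, PROP_EFFICIENCY, speed_ms) / 1000
    print(f"{speed_kmh} km/h -> roughly {thrust_kn:.0f} kN of thrust")
```

The same shaft power yields far more thrust at approach speeds than at high cruise, and in practice propeller efficiency itself falls off as the blade tips approach the speed of sound, which is the main reason the turboprop’s advantage fades above roughly 725 km/h.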
|
<urn:uuid:1ef5cdc0-e481-4278-8487-902422aad13b>
|
CC-MAIN-2023-14
|
https://leighedmondslittleboxofstuff.com/2023/02/08/the-curators-choice-50/
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948765.13/warc/CC-MAIN-20230328042424-20230328072424-00043.warc.gz
|
en
| 0.942229 | 145 | 3.0625 | 3 |
2000 State Fair Exhibit
Technologies for a Sustainable Future
From August 19 to September 4, 2000, CU-BAC, other exhibitors, planners, and volunteers presented "Technologies for a Sustainable Future," an exhibit filling 80 feet of glass display case space at the Colorado State Fair in Pueblo, Colorado. The exhibit showcased pollution prevention activities and new environmental technologies, products and information.
Sponsored by the Colorado Department of Public Health and Environment, the exhibit was produced and coordinated by CU-BAC with the Pollution Prevention Advisory Board.
For 17 days various volunteers welcomed some 10,000 visitors to the Natural Resources building on the state fair grounds to view 13 different exhibits, read close to one hundred environmental flip tips, sign up for a free Samsonite Swirl suitcase, which is made out of recycled plastic, and to spin the "wheel" to test their pollution prevention knowledge.
Listed below are all of the exhibitors and members of the planning team.
Planning and Exhibit Organizers:
Colorado Environmental Business Alliance
Colorado Renewable Energy Society
Colorado Water Protection Project/League of Women Voters of Colorado Education Fund
Recycling Development Incubator
Clean Air Colorado
Colorado State University
Cooperative Extension Service
Colorado School of Mines, Colorado Advanced Material Institute
Corn Growers Association
Coleman Natural Beef
Great Plains Oil
Northern Colorado Water District
Colorado Pollution Prevention Program
Scroll down to view all of the exhibits or choose a link to see a particular exhibit.
Pollution Prevention | Renewable Energy/Solar Wind
Consumer Home Choices | Natural Building Material
Organic Meats | Recycled Tire Research/Reuse
Samsonite | Motor Oil from Sunflower Seeds
Products from Recycled Material | Products Made From Corn
Water Pollution and Conservation | Advanced Vehicles
Pollution Prevention--It's Your Choice
This display highlighted that pollution prevention, or P2, is something that all of us can do at home, at work, at play and on the road. We all make choices in our daily lives, and some of those choices can be made to benefit the environment and ourselves. The Colorado Pollution Prevention Program, part of the Colorado Department of Public Health and Environment, set up this display not only to promote better environmental choices but also to inform the public that the program provides services to assist Colorado companies in choosing cleaner production practices.
Consumer Home Choices
This display, prepared by the Pueblo Master Gardeners of CSU Cooperative Extension, provided information on how individuals can save money and prevent pollution by composting and by making cleansers from common natural ingredients.
Coleman Natural Beef Company, out of Boulder, Colorado, exhibited the many benefits of buying organic foods. Organic beef comes from cattle that were never given hormones or antibiotics and whose feed and grass were raised without pesticides or other chemicals.
Samsonite Worldproof donated both of these suitcases to the exhibit. At the end of the fair, CU-BAC drew two winners out of thousands to receive the suitcases. "The swirl design is the result of an environmentally friendly manufacturing process. The swirl occurs when additional steps are taken to reduce air emissions and plastic waste."
Recycling Business Development
The Recycling Development Incubator, in coordination with Amazing Recycled Products Inc., Hard Copy Recycling, Westword and the City of Fort Collins Natural Resources Department, designed this display out of recycled products. These products included a recycled computer clock and a book cover, to name just a few. The exhibit presented the many ways that we as consumers can reuse our used electronics and other products.
"We All Live Downstream"
The Colorado Water Protection Project and the League of Women Voters designed and displayed the Water Pollution & Conservation exhibit. This exhibit emphasized the fact that we all live downstream from each other, so whatever pollutants we create, someone else endures. We are all impacted by water pollution.
The exhibit included photographs of areas that are impacted by water pollution, and also of the sources of those pollutants.
Where does that water come from? This water faucet intrigued both children and adults. The display made the point that our water must come from somewhere, and therefore we must conserve it at our own faucets. (The water from this faucet is sent up the tube by a pump, and then back down by gravity.) The water faucet display was provided by the Northeast Water Conservation District.
"More Power to You"
Out of Boulder, Planetary Solutions is a unique company that provides flooring, countertops and other materials for home and building projects, all made strictly from recycled products. Among the products displayed here were a corkboard floor that has the appearance of a wooden floor and several colors of carpet that both feel and look like traditional carpet.
The Colorado School of Mines is conducting research into the recycling of old tires. Piles of tires are not only unsightly but also dangerous to the environment. There are many potential uses for recycled tires, such as playground and track surfaces and sound barriers. This exhibit highlighted the research currently being performed by the Colorado Advanced Materials Institute in coordination with three Colorado companies.
Motor oil, from sunflowers?
Yep, that's right! Motor oil and other petroleum-based products can be made from sunflower seed oil. Great Plains Oil, LLC designed this intriguing display that explains the uses and benefits of their products, which are functional and environmentally safe. The process, developed by researchers at Colorado State University, is being commercialized by Great Plains Oil through a partnership with Kiowa County Growers.
Corn, America's Renewable Resource
The Colorado Corn Growers Association decorated their exhibit space with only a few of the hundreds of products that can be produced from corn. Some of those products include windshield washer fluid, laundry detergent, packing peanuts and diapers.
Energy Sense created this display of various models of electric and hybrid-electric automobiles now entering the market from major automakers. A Colorado-produced electric scooter and a sample of advanced vehicle components designed and produced by Unique Mobility highlight Colorado's participation in this industry.
|
<urn:uuid:e5203165-794b-44ca-b952-67d9e05f0a4a>
|
CC-MAIN-2013-20
|
http://www.colorado.edu/cubac/exhibitors.htm
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705575935/warc/CC-MAIN-20130516115935-00033-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.937713 | 1,316 | 2.609375 | 3 |
Minimalist design is about keeping a space simple and uncluttered while complementing its appealing architectural features. The palette is mostly monochromatic, and colour is used as an accent. In simple terms, minimalist architecture relies on reductive design elements, without ornamentation or decoration.
While minimalism grew out of the modern movement, its definition has expanded as the style has been used across different interior design movements, even though it is usually associated with a modern and contemporary look.
Components of Minimalist Interior Design
Many people today are familiar with the idea of minimalism, which involves stripping things down to their most essential form.
- Essential elements:
The minimalist approach uses only the essential elements: light, form, and beautiful materials, usually in an open-plan layout, to create a sense of freedom and relaxation. There is no excessive decoration or ornamentation.
- Straight lines:
Minimalist designs are built with clean, straight lines and minimal ornamentation. Flat, smooth surfaces and strong lines make bold statements that emphasize the essential nature of each element.
- Making a minimalist space look warm and inviting:
Minimalist spaces are distinctive for their fresh, clean, clutter-free, monochromatic look. To make sure your minimalist space also feels warm and inviting, there are a few small touches you can make.
- Mix different shades and textures:
A great way to bring warmth to the space is to use soft wool fabrics, linen wallpaper and rugs, which add a calming warmth to the room.
|
<urn:uuid:70b5b628-f5de-4183-80a4-0110d0cbd468>
|
CC-MAIN-2023-23
|
https://www.nakshewala.com/minimalist-interior.php
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224652207.81/warc/CC-MAIN-20230606013819-20230606043819-00640.warc.gz
|
en
| 0.921889 | 348 | 2.578125 | 3 |