Viruses can cause diseases in herbaceous plants, vegetables, and even in woody plants. They rarely cause damage to plant health in the last group but can be of great concern to those who propagate plants. Roses and ash are the most common woody hosts of concern. Herbaceous plants in greenhouses can have many virus problems, as can annuals, perennials, and vegetables grown in polyhouses or moved to outdoor production.
The symptoms of viruses include some general problems such as poor plant vitality, stunted growth, and reduced leaf size, as well as more specific symptoms such as a mottled or mosaic pattern in the foliage, ringspots, distorted growth, or odd color patterns. Sometimes these symptoms are not visible at first (latent) but become obvious with temperature changes or time. The images show rose and pepper samples with symptoms of virus.
Viral pathogens do not have fruiting bodies or spores to help with identification. They cannot be cultured from infected plant material onto artificial media. They do not cause oozing of material from the infected site. Serological methods are necessary to confirm the presence of a virus in infected plant material. The generally accepted serological methods usually involve ELISA (enzyme-linked immunosorbent assay). Molecular diagnostics can also be used, but costs become much higher with molecular testing.
The U of I Plant Clinic can test for some viruses but not all possible plant viruses. We have tests for viruses that we see most often in our lab and for which there are detection kits available. At present, we can test for cucumber mosaic virus, impatiens necrotic spot virus, cymbidium mosaic virus, Odontoglossum ringspot virus, potato virus Y, squash mosaic virus, tobacco mosaic virus, and tomato spotted wilt virus. We can make referrals to specialty labs for other virus tests. Additionally, we can order reagents for other virus tests if we know a large sample request for a virus is coming to us. Refer to our Web site, http://plantclinic.cropsci.uiuc.edu/, for information about fees and sample preparation. In most cases, plant material submitted for virus testing must arrive fresh so that sap can be extruded for testing. Fees are required with the testing request.
A team of researchers at MIT has designed one of the lightest and strongest materials ever using graphene.
They made it by compressing and fusing flakes of graphene, a two-dimensional form of carbon.
The new material has just five per cent of the density of steel but ten times its strength, making it useful for applications where lightweight, strong materials are required.
The key factor that makes this new material strong is its geometrical 3-D form rather than the material itself, suggesting that other similar strong, lightweight materials could be made from a range of other substances by creating similar geometric structures.
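As a rough illustration of what those two figures imply for strength-to-weight ratio, here is a back-of-the-envelope check. The steel reference values below are typical textbook numbers, "strength" is treated loosely as a single figure, and none of these constants come from the MIT study.

```python
# Back-of-the-envelope check of the "5% of the density, 10x the strength" claim.
# Steel reference values are typical textbook figures (assumed, not from the study).
steel_density = 7850.0        # kg/m^3
steel_strength = 400e6        # Pa, nominal strength of structural steel (assumed)

gyroid_density = 0.05 * steel_density    # five per cent of steel's density
gyroid_strength = 10.0 * steel_strength  # ten times steel's strength

steel_specific = steel_strength / steel_density      # J/kg
gyroid_specific = gyroid_strength / gyroid_density   # J/kg

print(f"Steel : {steel_specific / 1e3:.0f} kJ/kg specific strength")
print(f"Gyroid: {gyroid_specific / 1e3:.0f} kJ/kg "
      f"(~{gyroid_specific / steel_specific:.0f}x steel, weight for weight)")
```

Under these assumptions the geometric structure comes out roughly 200 times stronger than steel per unit of weight, which is why the geometry itself is the interesting result.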
The research, published in the journal Science Advances, follows attempts by other research groups to make similar structures, but their experiments had failed to match predictions, with some results exhibiting less strength than expected.
The MIT team decided to analyze the material’s behavior down to the level of individual atoms within the structure.
Two-dimensional materials – flat sheets that are just one atom in thickness but can be indefinitely large in the other dimensions – are very strong and have unusual electrical properties.
But because of their thinness, ‘they are not very useful for making 3-D materials that could be used in vehicles, buildings, or devices,’ said Dr Markus Buehler, head of MIT’s department of Civil and Environmental Engineering (CEE) and one of the lead authors of the research.
‘What we’ve done is to realize the wish of translating these 2-D materials into three-dimensional structures,’ said Dr Buehler.
To make the material, the team compressed small flakes of graphene using heat and pressure.
This produced a strong, stable structure that resembles that of some corals and a tiny type of algae called a diatom.
‘Once we created these 3-D structures, we wanted to see what’s the limit — what’s the strongest possible material we can produce,’ said Zhao Qin, a CEE research assistant and one of the authors of the study.
To test how strong the material was, the researchers created a variety of 3-D models and then subjected them to various tests.
The new 3-D graphene material, which is composed of curved surfaces under deformation, reacts to force in a similar way to sheets of paper.
Paper doesn’t have much strength along its length and width, and can be easily crumpled up.
But when it's folded into certain shapes, for example rolled into a tube, the strength along the length of the tube is much greater and it can support more weight.
In a similar way, the geometric arrangement of the graphene flakes naturally forms a very strong structure.
The structures used for testing were made using a high-resolution, multimaterial 3-D printer.
Tests conducted by the MIT team ruled out a possibility proposed previously by other teams: that 3-D graphene structures could be made lighter than air and used as a replacement for helium in balloons.
Instead, the material would not have enough strength and would collapse from the surrounding air pressure.
The researchers say that the material could have many applications in situations that require strength and light weight.
‘You could either use the real graphene material or use the geometry we discovered with other materials, like polymers or metals,’ Dr Buehler said, to gain similar advantages of strength combined with advantages in cost, processing methods, or other material properties (such as transparency or electrical conductivity).
‘You can replace the material itself with anything,’ Dr Buehler says.
‘The geometry is the dominant factor. It’s something that has the potential to transfer to many things.’
The unusual geometric shapes that graphene naturally forms under heat and pressure look something like a Nerf ball — round, but full of holes.
These shapes, known as gyroids, are so complex that ‘actually making them using conventional manufacturing methods is probably impossible,’ Dr Buehler said.
The team used 3-D-printed models of the structure, enlarged to thousands of times their natural size for testing.
To actually make the material, the researchers suggest that one possibility would be to use polymer or metal particles as templates, coat them with graphene by chemical vapor deposition before heat and pressure treatments, and then remove the polymer or metal phases to leave 3-D graphene in the gyroid form.
The same geometry could even be applied to large-scale structural materials.
For example, concrete for a structure such as a bridge might be made with this porous geometry, providing comparable strength with a fraction of the weight.
The material would also provide the added benefit of good insulation because of the huge amount of enclosed airspace within it.
Because the shape has very tiny pore spaces, the material might have applications in filtration systems for either water or chemical processing.
‘This is an inspiring study on the mechanics of 3-D graphene assembly,’ says Dr Huajian Gao, a professor of engineering at Brown University, who was not involved in this work.
This work, Dr Gao says, ‘shows a promising direction of bringing the strength of 2-D materials and the power of material architecture design together.’
The three emission bands: in the UV region at > 3 eV, in the visible between 1.7 and 3 eV, and in the IR at < 1.7 eV.
(Answer options: 1, 2, 3)
A plot of these three emission bands is shown below [plot not reproduced]. Which of the three peaks corresponds to the transition from E3 to E1?
(Answer options: 1, 2, 3)
Will they have the same intensity of color in a given pathlength?
A, B, C, D
Complete the equilibrium reaction shown below for this dopant:
The charge on the product ion is:
The ionization of N leads to the production of a(n):
Which arrow shown below represents the electronic transition corresponding
to the ionization of N?
According to LeChatelier's Principle, adding boron (an acceptor) to N-doped
diamond will shift the equilibrium of equation 1 toward the:
(A similar set of questions could be asked about doping with B, which can produce blue diamonds, such as the Smithsonian …)
A, B, C
233. When intense red light strikes a sample of Cs metal in a photoelectric effect experiment, no electrons are ejected from the surface. In contrast, exciting the sample with weak blue light ejects electrons, whose kinetic energy can be measured.
These results are consistent with which of the following conclusions?
Red photons have more energy than blue photons.
Red photons have less energy than blue photons.
Red photons have the same energy as blue photons.
A plot of the kinetic energy of the ejected electron vs. incident photon frequency, ν, would look like
An equation describing this experiment is
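The answer choices and the equation itself did not survive in this copy. As a hedged reconstruction, the experiment is usually summarized by the photoelectric relation KE = hν − φ. A minimal numerical sketch, assuming a commonly cited cesium work function of about 2.1 eV (an assumption, not given in the question):

```python
# Photoelectric effect: KE = h*f - phi; electrons are ejected only when KE > 0.
# The cesium work function below is a commonly cited value (an assumption here).
h_ev = 4.136e-15      # Planck constant in eV*s
c = 3.0e8             # speed of light in m/s
phi_cs = 2.1          # approximate work function of Cs in eV (assumed)

for color, wavelength_nm in [("red", 700.0), ("blue", 450.0)]:
    photon_ev = h_ev * c / (wavelength_nm * 1e-9)   # photon energy E = h*c/lambda
    ke = photon_ev - phi_cs
    print(f"{color:4}: photon {photon_ev:.2f} eV, KE {ke:+.2f} eV, "
          f"ejects electrons: {ke > 0}")
```

With these assumed numbers, red photons (~1.8 eV) fall below the work function while blue photons (~2.8 eV) exceed it, consistent with the observations described in the question.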
235. (Nuclear chemistry, periodic properties) Researchers are trying to prepare element 114 (Science, 24 October 1997, 278, 571-572.). If a compound could be made from only this element and hydrogen, based on its position in the periodic table, how many H atoms would you expect to bond to element 114?
1, 3, 4, 5
239. Consider two LEDs, one red and one green, both emitting at their band gap energy. Which operating LED can cause a photovoltage in the other LED?
The red LED causes photovoltage in the green LED
The green LED causes photovoltage in the red LED
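A short way to see the reasoning: an LED emits photons at roughly its band gap energy, and those photons can generate a photovoltage in another LED only if they meet or exceed the receiver's band gap. The band-gap values below are typical for red and green LEDs and are assumptions, not values from the question.

```python
# An LED emits photons at roughly its band gap energy; those photons can generate
# a photovoltage in another LED only if they meet or exceed the receiver's band gap.
# Band-gap values are typical for red and green LEDs (assumed).
band_gap_ev = {"red": 1.9, "green": 2.3}

for emitter, e_photon in band_gap_ev.items():
    for receiver, e_gap in band_gap_ev.items():
        if emitter != receiver:
            print(f"{emitter} LED shining on {receiver} LED -> "
                  f"photovoltage: {e_photon >= e_gap}")
```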
More questions are needed for this section. Please submit contributions to email@example.com.
FOOD SAFETY FOR CAMP STAFF
Acute gastroenteritis is the most common infectious disease encountered when camping. Symptoms of acute gastroenteritis include abrupt onset of vomiting, diarrhea and abdominal cramps. The majority of these cases are transmitted from one person to another or through contaminated foods and water. The most common cause of acute gastroenteritis is a virus, norovirus, that can spread very rapidly in a camp setting. Other types of foodborne illness that have occurred in camps include:
- Shiga Toxin-Producing E. coli
Top 11 Food Safety Tips
Following some basic food safety guidelines when cooking outdoors can help prevent foodborne illness. Here are some basic tips for camp staff on food safety:
- Insist that campers and staff wash their hands after using the bathroom, before preparing food, before eating, and after handling foods like meat, poultry and eggs. Make sure that soap, water and towels are available in camp. While wilderness camping, or at other times when soap and water are not available, make sure hand sanitizer is available. Dirty hands can spread disease from one camper to another and can contaminate food that will be eaten by other people.
- Supervise campers to assure that clean cooking surfaces are used to prepare food. Always wash cutting boards and other utensils with detergent and clean water after contact with raw meat, poultry or eggs. Eggs, meat and poultry can carry Salmonella and other bacteria.
- Tell campers and staff not to drink water from rivers, lakes, creeks or streams (i.e., ‘surface water’). Also, do not use surface water for cleaning cooking and eating utensils. Surface water is often contaminated with bacteria and parasites from animal feces. Make sure campers know to use only properly treated water for cooking and drinking.
- Remind your campers and staff not to share cups or utensils with others. Saliva can spread a variety of viruses and bacteria, including those that cause colds, influenza and meningitis.
- Instruct campers and staff to pack food in tight, waterproof bags and containers and explain to them the importance of keeping the containers sealed. This will keep bacteria and insects out of the food. Food, even snack food, should never be stored in tents because it can attract wildlife. There is a lot of wildlife in West Virginia.
- Teach campers and staff to keep food in an insulated cooler to help keep foods at desired temperatures. Cold foods should be kept cold and hot foods should be kept hot. Storing foods at the right temperature can inhibit bacterial growth.
- Supervise campers to make sure they keep raw foods separate from cooked food to prevent cross contamination. Use separate areas for preparing food that will be cooked and food that will not be cooked. Cutting boards, plates and other utensils used for meat, poultry and eggs should be cleaned with water and detergent after use. Never use cooking surfaces that were used for raw meat or chicken to prepare or store cooked food. By the same token, only use clean cutting boards and utensils for foods that will be eaten raw, such as raw fruits and vegetables or salads.
- Assure that campers and staff clean their dishes and utensils properly with clean water and detergent. Do not use water from rivers, creeks or streams for cleaning as it may be contaminated.
- Assure that campers and staff cook foods to proper internal temperatures. Meat and poultry must be cooked to the proper internal temperature to destroy the germs that cause foodborne illness. You must use a food thermometer to be sure it is done; you can’t tell just by looking! Color is not a reliable indicator of doneness, and it can be especially tricky to tell the color of a food if you are cooking in a wooded area in the evening. Hamburger must be cooked to an internal temperature of 160 degrees Fahrenheit (F) and chicken or other poultry must reach 165 degrees Fahrenheit (F) to be safe. See the USDA Minimum Internal Temperature Chart for a complete list of proper cooking temperatures.
- Use meat and fish as little as possible because they are hard to keep fresh in the outdoors. If your campers catch fish to eat, they should keep the fish chilled or alive until they are ready to cook it.
- Make sure campers and staff wash fruits and vegetables thoroughly in clean water before eating raw.
Please Use the Links Provided For Additional Information:
- United States Department of Agriculture (USDA)
- National Institutes of Health
- WV Bureau For Public Health
Oxygen-23 loses its halo
Jaguar enables researchers to choose between contradictory experiments
The oxygen-23 isotope is rare and ephemeral.
This is not the oxygen that keeps your body running. It exists at the edge of the nuclear landscape, where isotopes are always fleeting and most commonly found within exploding stars.
Nevertheless, if we are to understand how the universe is put together, we must understand exotic isotopes such as oxygen-23. A research team from ORNL, the University of Tennessee and the University of Oslo in Norway recently contributed to this understanding with intense calculations of the oxygen-23 nucleus performed on ORNL's Jaguar supercomputer. In doing so the researchers also demonstrated that supercomputer simulation has become an indispensable tool for scientific discovery, on par with physical experiment. Their work is discussed in a recent edition of the journal Physical Review C.
The isotope that makes up nearly 100 percent of naturally occurring oxygen is oxygen-16, whose nucleus has eight positively charged protons and eight uncharged neutrons. It doesn't decay and is especially important to sustaining life, both as the stuff that keeps us breathing and as the heavier of water's two elements. When you step on a scale, oxygen-16 is nearly two-thirds of the weight that stares back up at you.
With eight protons and 15 neutrons, oxygen-23 does decay—and quickly. The oxygen-23 nucleus has a half-life of 82 milliseconds, meaning that if you have 10,000 atoms now you'll be down to two or three within a second. Its neighbor, oxygen-24, is believed to be the heaviest an oxygen isotope can get; beyond it lies the so-called neutron drip line, where neutrons will no longer attach to a nucleus. While they may be rare, these and other exotic isotopes are important, at least in part because they challenge current theories of how a nucleus—and therefore the universe—is constructed.
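A quick check of the half-life arithmetic above (the 82 ms half-life and the 10,000-atom starting count come from the text; this is just the standard decay formula):

```python
# Exponential decay: N(t) = N0 * 2**(-t / t_half)
n0 = 10_000        # starting number of atoms (from the text)
t_half = 0.082     # oxygen-23 half-life in seconds (from the text)
t = 1.0            # elapsed time in seconds

remaining = n0 * 2 ** (-t / t_half)
print(f"Atoms left after {t:.0f} s: about {remaining:.1f}")   # roughly 2
```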
"Our goal is to explain the origin of the elements," explained Gaute Hagen of ORNL, who did the calculations with Øyvind Jensen of the University of Oslo and Thomas Papenbrock of ORNL and UT. "The only way to go beyond oxygen on the nuclear chart is to go via paths that are along the very drip line of the nuclear chart. On a bigger scale, astrophysics is also nuclear physics. What happened in the first milliseconds after the Big Bang, that's nuclear physics as well."
Nuclear physicists have worked for more than 60 years within an approach known as the nuclear shell model, akin to the atomic shell model that governs electron orbits. The nuclear model notes that as you add protons or neutrons to a nucleus, it becomes especially stable at certain numbers such as 2, 8, and 20. These are known as magic numbers. Calculated for protons and neutrons separately, they indicate that a "shell" within the nucleus has been filled. In addition, there are "subshells" in between these numbers where the nucleus is relatively stable, but less so than with a full shell.
This model does well describing stable isotopes, but it becomes problematic as you move toward unstable, exotic nuclei. For instance, the model would suggest that oxygen-28, with eight protons and 20 neutrons, is magic, yet no such isotope has been observed and researchers believe oxygen-24 is the heaviest possible oxygen isotope. In fact, a recent paper from Hagen and colleagues in the journal Physical Review Letters supports the contention that oxygen-24 itself is magic, although existing theory would say otherwise.
By exploring exceptions to the shell model as it has been understood, researchers seek a deeper understanding of all nuclei, stable and unstable alike.
"The idea is that these naïve shell model pictures of the nucleus do not hold when you go to the very extreme of the nuclear chart," Hagen noted, "where you have very neutronrich or unstable or fragile systems."
Enter oxygen-23. The oxygen nucleus reaches a relatively stable subshell with 14 neutrons at oxygen-22. The twenty-third neutron is essentially left over. Experimental data from a decade ago suggested that the twenty-third neutron did not even touch the others, but rather hovered over the nucleus as a halo. That conclusion grew from observation that the nucleus had an especially large cross section; in other words, it was very wide.
Data from more recent experiments by Rituparna Kanungo of Saint Mary's University in Halifax, Nova Scotia, disagreed. Working at Germany's GSI Helmholtz Centre for Heavy Ion Research, Kanungo concluded that the cross section of oxygen-23 was far smaller, meaning that it did not have a neutron halo.
The question, then: Which experiment was right?
Kanungo invited Hagen to explore the question computationally, which he did with a first-principles approach known as coupled cluster theory. To ensure the most accurate possible answer, he did the calculations for several oxygen isotopes, from oxygen-21 with 13 neutrons to oxygen-24 with 16. Each calculation ran on about 100,000 processors and used several million processor-hours.
A tough calculation
Oxygen-23 is a very neutron-rich nucleus, which until very recently could not be accurately analyzed microscopically with existing supercomputers. Hagen noted that the calculations necessary to handle 23 strongly interacting particles were very complex and required a system of Jaguar's power.
"This work could not have been done without Jaguar," he said. "It would not be possible. It's putting us at the forefront of our field just being at this location." The calculations showed that Kanungo's conclusion was right and oxygen-23 indeed does not have a halo.
"It's not a halo nucleus, our calculations verified," Hagen said. "We computed the ground state, the radius of the system, and the density profile of the neutrons to see where the neutrons really are. By using those results and the reaction calculation, Kanungo was able within the uncertainty of the experiment to put our calculations in there, and there was very nice agreement between theory and experiment."
The study of this isotope is a piece in the puzzle of nuclear physics. As supercomputers grow in power, they will tackle ever-heavier isotopes and, possibly, help us understand how matter is put together.
"It affects our understanding of the universe, its evolution, and why we're here," Hagen said. "All those questions relate to what we do, and they are really not located at the stable part of the nuclear chart. They're at the extremes." —Leo Williams |
The eye is the organ that collects light and sends messages to the brain to form a picture. The three main parts of the eye are:
1) Eyeball (globe)
2) Orbit (eye socket)
3) Adnexal (accessory) structures (such as the eyelid and tear glands)
- The eyeball is made up of the retina, sclera and uvea.
- The sclera is the outer wall of the eyeball. The retina is a thin-layered structure that lines the eyeball and sends information from the eye to the brain.
- The uvea nourishes the eye. Both the retina and the uvea contain blood vessels.
- The uvea consists of the following:
Iris: The colored part of the eye that controls the amount of light entering the eye.
Ciliary body: Muscular tissue that produces the watery fluid in the eye and helps the eye focus.
Choroid: The layer of tissue underneath the retina that contains connective tissue and melanocytes and nourishes the inside of the eye; the choroid is the most common site for a tumor.
Symptoms of eye cancer:
- Bulging of one eye
- Complete or partial loss of sight
- Pain in or around the eye (rare with eye cancer)
- Blurred vision
- Change in the appearance of the eye
Eye cancer can also cause:
- Seeing spots or flashes of light or wiggly lines in front of your eyes
- Blinkered vision (loss of peripheral vision) – you can see what is straight ahead clearly, but not what is at the sides
- A dark spot on the colored part of the eye (the iris) that is getting bigger
Pain is quite rare unless the cancer has spread to the outside of the eye.
Why Do the Leaves Change Color?
by Michelle Wallace
Society is profoundly impacted by autumn. Every year as the days get shorter, nights grow longer, and the temperature outdoors becomes cooler the leaves in our abundant deciduous forests across the state and country begin to turn bright hues of gold and crimson. Literally millions of tourists every year come to visit our national and state forests to experience fall’s brilliance. Perhaps the warm colors are nature’s way of warming our spirits in preparation for the cold temperatures that follow.
The changing of color in leaves is largely connected to the change in the length of day. As nights grow longer and days grow shorter, photosynthesis and chlorophyll production in the leaves slows down until it eventually comes to a stop. Chlorophyll in a leaf is what gives leaves their green color. When chlorophyll is absent the other pigments present inside the leaf begin to appear. These pigments are known as carotenoids. They produce colors of yellow, orange, and brown. In addition to the change in the length of day, a plant's fall color is influenced by the weather and the intensity of light. Anthocyanin pigments (reds and purples) are produced when there are excessive amounts of sugar in the leaves in combination with bright light. It is hypothesized that the anthocyanin pigments in leaves help to protect the photosynthetic system as plants prepare to go dormant and nutrients are being transferred to other areas of the plant. The anthocyanin pigment produced in some leaves is largely dependent on the pH level of the cell sap (sugar) in the leaf. Leaves with highly acidic cell sap produce very red hues, while foliage with less acidic (higher pH) cell sap produces purple hues.
The weather is what causes a corky membrane to develop between the branch and the leaf stem. This membrane reduces the flow of nutrients into the leaves and begins this whole process which is completed when a layer of cells at the base of each leaf is clogged, sealing the tree from the environment and finally causing the leaf to fall off.
Nature creates this magical canvas every fall which is an inspiration to gardeners and outdoor enthusiasts. Consider incorporating a few tree specimens into your landscape that have stunning fall foliage. The United States National Arboretum has a wonderful list of plants listing their fall foliage colors that range from yellow to brilliant red. Go to http://www.usna.usda.gov/PhotoGallery/FallFoliage/FallColorList.html. To find out more information on planting trees and shrubs in North Carolina, go to http://www.ces.ncsu.edu/depts/hort/hil/hil-601.html.
Collect and organize data using observations, surveys, and experiments (0306.5.1)
Links verified on 4/9/2013
- Coin Flip - this coin flipper builds a column graph one flip at a time - let your students see the progression as data is generated and collected [works best in Internet Explorer]
- Coin Toss - toss enough coins to make a prediction about probability (maximum number of tosses 1000, but you can keep tossing to get a larger data set)
- Data Picking - Students collect data, enter tally marks or numbers and then select which graph is appropriate.
- Every Breath You Take - Students estimate, experiment, and display real-life data. Students use the number of breaths taken during a specified time period for this exploration.
- Ken White's Coin Flipping Page - select the type of coin to flip and then enter a number of times to flip
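For classrooms that prefer to script their own coin flips, here is a minimal simulation in the same spirit as the tools above (plain Python; no particular site or applet is assumed):

```python
import random
from collections import Counter

def flip_coins(n_flips, seed=None):
    """Simulate n_flips fair coin tosses and tally the results."""
    rng = random.Random(seed)
    return Counter(rng.choice(["heads", "tails"]) for _ in range(n_flips))

tally = flip_coins(1000)
for side, count in sorted(tally.items()):
    print(f"{side}: {count} ({count / 1000:.1%})")
```

Students can vary the number of flips to see how the observed proportions settle toward the expected 50/50 split as the data set grows.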
Dumped in Space
A simple new way to track Earth-orbiting trash
By Josie Glausiusz
A few days ago, my three-year-old daughter spontaneously cleaned the breakfast table. She scraped cold leftover oatmeal into the rubbish bin, dropped the dishes into the kitchen sink, and wiped down the table. If a three-year-old can tidy up the breakfast mess without being asked, I wonder, why are we humans incapable of cleaning up the masses of space junk we insist on tossing into Earth’s orbit?
Granted, the problem is bigger: items numbering in the hundreds of thousands have been lost—or dumped—into orbit, including an astronaut’s glove, a spatula, and a 1,400-pound tank of ammonia. This trash cache orbiting Earth poses a very real danger to satellites and to the International Space Station. Not only do satellites orbit in different planes, but the direction of the debris’ orbits will change over time, meaning that satellites and trash can collide from almost any direction at velocities of several miles per second. In 2009, for example, the communications satellite Iridium 33 collided with the defunct Russian Kosmos 2251 satellite, generating a giant cloud of more than one thousand pieces of debris, and in April 2012 NASA’s Fermi Gamma-Ray Telescope narrowly missed a crash with another obsolete Russian reconnaissance satellite, Cosmos 1805.
Now, Australian astronomer Steven Tingay hopes to remedy that situation with a space-junk-tracking system based in part on radio waves emitted by local radio stations. Tingay is director of the Murchison Widefield Array (MWA), a low-frequency telescope in Western Australia. The MWA, a highly sensitive telescope exploring the Cosmic Dawn—the era when stars and galaxies first formed in our universe—is constantly surveying vast swathes of sky. Its enormous field of view makes the MWA particularly useful for monitoring space junk as well, since the telescope can simultaneously track hundreds of pieces of debris and follow them to figure out their orbits.
He and Ph.D. candidate Ben McKinley realized that radio waves beamed into space by local FM transmitters bounce off the moon, the International Space Station, and small pieces of junk, which reflect some of them back to Earth, where they can be monitored. The telescope can receive reflected signals from objects smaller than a meter and as far as 1,000 kilometers away. According to NASA, most space junk orbits within 2,000 kilometers of Earth’s surface, with the greatest concentrations found near 750 to 800 kilometers.
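To get a rough sense of why collisions at these altitudes are so energetic, here is a circular-orbit speed estimate for the debris-heavy bands quoted above. It uses standard gravitational constants and is purely illustrative; it is not part of Tingay's analysis.

```python
import math

# Circular orbital speed: v = sqrt(G * M_earth / (R_earth + altitude))
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24     # kg
R_EARTH = 6.371e6      # m

for altitude_km in (400, 800, 2000):
    r = R_EARTH + altitude_km * 1e3
    v = math.sqrt(G * M_EARTH / r)
    print(f"{altitude_km:>4} km altitude: ~{v / 1e3:.1f} km/s (~{v / 1609:.1f} miles/s)")
```

Objects near 800 km circle the planet at roughly 7.5 km/s, so two objects in crossing orbits can easily close at the "several miles per second" relative speeds mentioned above.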
The most dangerous pieces of debris, as Tingay wrote to me in an email, are “those that are small enough that they are not well monitored, yet big enough and numerous enough to pose large risks. Even a bit of junk the size of a marble will take out a satellite when it is traveling at a relative speed of [about] 10 kilometers per second.” By tracking the positions of space junk, Tingay and his team can predict when collisions are likely to occur, alerting operators to maneuver their satellites out of the path of the treacherous debris.
The method is “very simple and efficient,” Tingay wrote, because there is no need to coordinate with local radio stations. “We just utilize whatever they are broadcasting,” he wrote. “That is part of the beauty of this technique.”
Josie Glausiusz has written about every topic known to science, from physics to furry animals, for magazines that include Nature, National Geographic, Scientific American Mind, Discover, New Scientist, and Wired. She is the co-author of Buzz: The Intimate Bond Between Humans and Insects.
Image: This is what the smallpox virus looks like under a microscope. Before its eradication, smallpox killed millions around the world. Now the virus exists only in labs in Russia and at the Centers for Disease Control and Prevention in Atlanta. However, some scholars fear that not all supplies are secure. (Credit: Dr. Fred Murphy, Sylvia Whitfield)
The immune system is constantly changing. It matures throughout childhood, then declines as the body ages. It responds to the environment as it encounters new threats and recognizes old, familiar enemies. It can be weakened by disease, medications, or environmental toxins. As scientists learn more about how the immune system functions and changes, they can devise strategies to treat or prevent disease by manipulating the immune system. These strategies are called immune therapies.
Many diseases, such as mumps and measles, are once-in-a-lifetime events. The reason: After you have recovered from an infection, your immune system remembers. As mentioned in the Introduction, Edward Jenner developed a vaccine for smallpox using a similar disease (cowpox) and exploiting our immune system’s memory. While Jenner based his vaccine only on observation, we now know that vaccines work by taking advantage of the adaptive immune system’s memory cells. A vaccine is made from a microbe that resembles a disease-causing one to prompt the immune system to recognize it as an invader, but the microbe is weakened so that it will not cause a disease. The disabled microbe attracts the attention of phagocytes, which sound the alarm to summon lymphocytes. The resulting battle is brief—but long enough for memory B and T cells to recognize the real invader later on. That is why Jenner’s cowpox vaccine protected people against smallpox two centuries ago—and why the vaccinations you have received since childhood, for other diseases, are so important. The next time you get a flu shot or some other vaccination, remember: your own internal defensive army is being activated on your behalf, and when that germ shows up, your army can eliminate it before it has a chance to cause disease.
Dead or Alive
Unlike Jenner’s, most vaccines today must be created in the laboratory. Louis Pasteur created the first artificial vaccines by weakening the microorganisms that cause rabies and anthrax. Like Pasteur’s vaccines, modern “live” vaccines are weakened versions of disease-causing organisms. The vaccines for mumps, measles, German measles (rubella), and chicken pox are such vaccines. Inactivated (killed) vaccines are also very common. To make them, large amounts of the target organism are grown in the laboratory, then killed with heat, radiation, or chemicals. Most polio and influenza vaccines used in the United States today are inactivated vaccines. “Subunit” vaccines, such as the vaccine for whooping cough, contain only a part of the harmful organism. “Toxoid” vaccines, such as the tetanus vaccine, do not contain the organism itself, but an inactivated and attenuated version of the disease-causing toxin it produces.
Although traditional vaccines have controlled many diseases, effective vaccines against some of the biggest killers in many countries—AIDS, malaria, and tuberculosis—have not yet been discovered. New techniques in molecular biology and genetic engineering may be the key to making safe, effective, and affordable vaccines to meet these challenges.
The vaccines and natural infections described above illustrate adaptive immunity because they stimulate the body’s own immune system to adapt, or learn, to fight diseases. One type of adaptive immunity occurs when the body makes antibodies, proteins that can recognize and destroy germs. This active immunity contrasts with passive immunity, which occurs when antibodies are made somewhere else (in another person, an animal, or even in the laboratory) and introduced or “passed” into the body.
We all start life with the gift of natural passive immunity. Newborn babies are protected from many diseases by the antibodies they received from their mothers through the placenta before birth. They can get even more of these passive antibodies from their mothers’ milk, especially from the antibody-rich milk (colostrum) produced immediately after birth.
Artificial passive immunization, with antibodies from humans or animals, can be used to treat or prevent diphtheria, tetanus, hepatitis, and rabies. People bitten by poisonous spiders or snakes are often treated by passive immunization.
The Monoclonal Revolution
In the 1970s a new technique for producing antibodies in the laboratory changed the world of immunology forever. Georges Köhler and César Milstein won the 1984 Nobel Prize in physiology or medicine for discovering a way to mass-produce monoclonal antibodies. In an animal’s body, antibodies are produced by many different cells and can bind to many sites on an antigen; these are called polyclonal antibodies. In contrast, monoclonal antibodies are produced in laboratory dishes by cells derived from a single antibody-producing mouse cell. A monoclonal antibody binds to a single, tiny site on an antigen and can recognize very subtle differences between cells, molecules, or germs. Because of this specificity, and because they can be produced in unlimited amounts, monoclonal antibodies have wide-ranging applications in science and medicine—from basic research to the diagnosis and treatment of diseases.
At first, monoclonal antibodies were less useful for treating disease because the human immune system sees antibodies made by mouse cells as foreign and rapidly eliminates them from the body. With the help of genetic engineering, researchers continue to investigate ways to “humanize” monoclonal antibodies to make them more effective against human diseases. Hundreds of monoclonal antibodies are now being tested in clinical trials.
Image: Mothers pass antibodies to their babies through the placenta before birth and through their milk after birth. These antibodies help bolster the babies’ immune systems at a time when they are more vulnerable to infection. (Credit: Peggy Greb)
Image: César Milstein (© The Nobel Foundation)
Image: Georges Köhler (© The Nobel Foundation)
Monthly injections of humanized monoclonal antibodies against respiratory syncytial virus (RSV) during the RSV season can protect premature babies and young children with chronic lung disease from serious respiratory infection. Monoclonal antibody therapy also shows promise against West Nile virus and some antibiotic-resistant bacteria.
The usefulness of monoclonal antibodies is not limited to infectious diseases. Many are under investigation or approved for use against allergic and autoimmune diseases (including rheumatoid arthritis, inflammatory bowel disease, asthma, and diabetes) and to prevent organ transplant rejection. These antibodies can suppress the immune system by blocking important molecules on the cell surface that recognize attack signals.
Monoclonal antibodies can also fight cancer. They can block receptors that the cancer cell needs to survive or—when bound to drugs, toxins, or radioactive particles—deliver cancer-killing weapons precisely to their targets.
A fascinating new mechanism of monoclonal antibodies is that they can coat tumor cells so that the immune system’s dendritic cells then detect the tumor and present numerous other tumor antigens to the lymphocytes, thus triggering adaptive immunity.
Stem Cell Transplants
Stem cell transplants are a kind of immune therapy that transfers cells instead of antibodies. Blood-forming stem cells are immature cells that will become red and white blood cells and platelets. The bone marrow (the spongy material in the center of some bones) contains blood-forming stem cells. Blood-forming stem cells can also be isolated from the circulating blood and from the umbilical cords of newborn babies (after the cord is cut, cells are harvested from the discarded portion of the cord).
These are not the controversial embryonic stem cells, cells from human embryos or fetal tissue that can develop into other types of cells and tissues. For the most part, immunologists have not been drawn into the debate surrounding embryonic stem cells because the discovery of blood stem cells in bone marrow revealed the secret of rebuilding the immune system via bone marrow transplants.
Regardless of how the controversy over embryonic stem cells plays out, transplants of the stem cells found in bone marrow (also called bone marrow transplants) are a viable, noncontentious treatment for some immune deficiency diseases and certain cancers. Cells can be obtained from a living donor, or the patient’s own stem cells can be removed and later introduced back into the patient. In the case of tumors, the patient’s tumor cells are first destroyed with drugs and/or radiation, but this also destroys the body’s immune system, leaving the patient with a weakened immune defense. The beauty of stem cells is that they regenerate the full immune system. After the stem cells are injected into the patient’s blood, they migrate to the bone marrow, where they mature into blood cells, including all the different cells of the immune system. This is an intricate process involving, among other things, the action of cytokines. Cytokines are a large family of proteins, each of which regulates distinct steps in the life of stem cells and their immune descendents. Only very small amounts of cytokines are required to regulate stem cells and other immune cells.
The patient and stem cell donor must be a good genetic match for the transplant to succeed. If the match is poor, transplanted cells may recognize the patient’s body as foreign and attack it. This is called graft-versus-host disease. In some cases, transplanted immune cells may attack the skin, causing rashes, or the intestine, causing severe inflammation. In other cases, the patient’s body may attack and reject the donor cells. Successful matches, however, make possible a promising treatment for a variety of immune problems.
On the Horizon
Many other kinds of immune therapy are under investigation. Some of these techniques involve removing cells from the patient’s body and treating them in the laboratory to make them more aggressive disease fighters. Other techniques use cytokines or other substances to stimulate the immune system. Some new immune therapies are described in “Research Advances: Recent and Prospective”.
Mount Ontake, Japan's second-highest volcano, erupted on September 27, killing at least 31 people. Since then, there has been feverish speculation about why tourists were on an active volcano and why the eruption wasn't predicted.
Mount Ontake (also known as Ontakesan) is a stratovolcano which last erupted in 1979-80 and 2007 (there was also a possible, unconfirmed eruption in 1991). Before this, there were no recorded historical eruptions at Mount Ontake.
Since the eruption in 1980, Ontake has been monitored by the Japan Meteorological Agency (JMA). It has seismometers around the volcano to record volcanic tremors and instruments to measure any changes around the volcano. This would provide the JMA with signs that there was magma movement underneath the volcano and that perhaps an eruption was imminent. There had been a slight increase in volcanic tremors starting at the beginning of September. Why then, was this eruption not predicted?
Firstly, the ability to predict volcanic eruptions is an ambition that volcanologists are far from realizing. Magma movement under a volcano will cause volcanic tremors, make the ground rise and fall and release gases such as sulphur dioxide. If these signs are monitored closely, then it may be possible to forecast that an eruption may be imminent.
However, all of these things can also happen without any volcanic eruption. Knowing what these signs mean for an individual volcano relies on data collected during previous eruptive episodes, as each volcano behaves differently. Mount Ontake has only had two known historical eruptions and was not monitored previous to the 1979 eruption, so scientists had no previous data to work with. Volcanic tremors are very common at active volcanoes and often occur without being associated with an eruption.
Secondly, the type of eruption that volcanologists think occurred at Ontake is one that does not produce the signals typically monitored at volcanoes. The images and videos captured by hikers on the volcano show that the ash cloud was mostly white, which can be interpreted to mean that the eruption was mostly steam.
The effects of the pyroclastic density currents, the flows of ash and gas that flowed over the ground from the summit, suggest that they were low-temperature and low-concentration. Both of these point to there being no magma directly involved in the eruption. Instead, it is likely that water had seeped into the volcano, was superheated by magma, and flashed to steam, causing what is known as a phreatic eruption. Phreatic eruptions occur without magma movement, hence the lack of precursor signals. The 2007 eruption was also phreatic and also occurred with little warning.
Power of nature
So, if an eruption like the one in Japan could not be predicted, should tourists have been allowed up Mount Ontake? Ontake is a place for religious pilgrimage, as well as a popular destination for hikers and climbers. This is quite common for volcanoes around the world; tourists flock to Kilauea, Hawaii, to watch the lava flows, climb volcanoes in the U.S. Cascade Range and even ski at volcanoes such as Ruapehu in New Zealand. A phreatic explosion such as the one seen at Ontake on Saturday is possible at all of these places.
There is something compelling about the power of nature, and the beauty of a volcano that draws people to them. Volcanoes are inherently dangerous places and there will always be risks to those who visit them. However, events like that at Ontake are thankfully rare. Laying the blame at the foot of either the hikers, or the authorities that allow tourists to visit active volcanoes would be misplaced.
The events at Ontake were tragic. It's my opinion that it was a tragedy that could not have been predicted or prevented, given our current level of knowledge. It highlights the need to understand volcanic systems better. My thoughts are with the survivors, and the families of those who didn't make it.
This article was originally published at The Conversation.
August 20, 2012
Researchers Study The Lowdown On The Shakedown
Brett Smith for redOrbit.com - Your Universe Online
In continuing a trend that has seen scientists looking to the mechanics of nature for inspiration, researchers at the Georgia Institute of Technology are studying the ways in which furry mammals shake themselves dry. The study – which involved 33 different animals, including 16 species and five dog breeds – found that furry mammals can shake 70 percent of the water off their bodies in just a fraction of a second. They also saw that smaller mammals shook at higher frequencies to compensate for their smaller radius, according to the research team's report published recently in the Journal of the Royal Society Interface.
Equipped with a hose and high-speed video camera, Georgia Tech mechanical engineering professor David Hu, along with his colleagues Andrew Dickerson and Zachary Mills, teamed up with Zoo Atlanta to study the drying technique of furry mammals that likely played a role in the evolution and survival of these animals.
“What would you do on a cold day if you were wet and could not towel off or change clothes? Every warm-blooded furry creature faces this dilemma often,” Dickerson said. “It turns out that oscillatory shaking exhibited by mammals is a quite efficient way to dry.”
The report cited two factors that play an important role in determining how these furry mammals shake off the water that is held close to their bodies by the forces of surface tension. Smaller animals had to oscillate their bodies and heads rapidly to overcome this attractive force. For example, a tiny mouse swings its body 27 times per second, but a grizzly bear does the same thing four times per second.
The skin of furry mammals, which tends to be loose against their bodies, also plays a role in getting these animals dry during a shakedown. As furry mammals oscillate, this loose skin “whips the fluid around much faster than if the skin was tight”, said Hu. This whip action increases the acceleration of water droplets on the skin–sending them flying around the animal.
Mammals use this combination of factors to generate forces between 10 and 70 times that of gravity, yet they do so while trying to expend the least amount of energy – a crucial balance that needs to be kept during the colder months.
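A rough check of the "10 to 70 times gravity" figure, using the shake frequencies quoted above; the body radii here are rough guesses for illustration only and are not taken from the study.

```python
import math

# Centripetal acceleration at the skin during shaking: a = (2*pi*f)^2 * r
# Shake frequencies are from the article; body radii are rough guesses (assumed).
g = 9.81  # m/s^2
animals = {
    "mouse":        {"freq_hz": 27.0, "radius_m": 0.01},
    "grizzly bear": {"freq_hz": 4.0,  "radius_m": 0.40},
}

for name, p in animals.items():
    accel = (2 * math.pi * p["freq_hz"]) ** 2 * p["radius_m"]
    print(f"{name:12}: roughly {accel / g:.0f} g at the skin")
```

Even with these crude radii, both animals land in the tens of g, which is consistent with the range reported above.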
While wetting down animals and filming them with a high speed camera provides a potentially great viral video, the researchers said they hope the study will provide a window into technology that could remove liquid or dust from machines, like the Curiosity rover that recently landed on the surface of Mars.
"We hope the findings from our research will contribute to technology that can harness these efficient and quick capabilities of drying seen in nature,” Dickerson said.
“In the future, self-cleaning and self-drying may arise as an important capability for cameras and other equipment subject to wet or dusty conditions,” Hu added.
In addition to observing live animals, the scientists built and studied a 'robotic wet-dog-shake simulator' that was designed to eject water from its surface.
Hu and Dickerson said their future research will center on other ways animals interact with water in the natural world.
Towards Solving the Problem
This page continues working through the stages of problem solving as laid out in: Problem Solving - An Introduction.
This page concludes our problem solving series with a brief overview of the final stages of the problem solving framework.
Stage Four: Making a Decision
Once a number of possible solutions have been arrived at, they should be taken forward through the decision making process.
Decision Making is an important skill in itself and you may want to read our Decision Making articles for more information.
For example, information on each suggestion needs to be sought, the risks assessed, each option evaluated through a pros and cons analysis and, finally, a decision made on the best possible option.
Stage Five: Implementation
Making a decision and implementing it are two different things. Implementation involves:
- Being committed to a solution.
- Accepting responsibility for the decision.
- Identifying who will implement the solution.
- Resolving to carry out the chosen solution.
- Exploring the best possible means of implementing the solution.
Stage Six: Feedback
The only way for an individual or group to improve their problem solving is to look at how they have solved problems in the past. To do this, feedback is needed and, therefore, it is important to keep a record of problem solving, the solutions arrived at and the outcomes. Ways of obtaining feedback include:
- Follow-up phone calls
- Asking others who may have been affected by your decisions.
It is important to encourage people to be honest when seeking feedback, regardless of whether it is positive or negative.
Conclusions to Problem Solving
Problem solving involves seeking to achieve goals and overcoming barriers. The stages of problem solving include identification of the problem, structuring the problem through the use of some forms of representation, and looking for possible solutions often through techniques of divergent thinking. Once possible solutions have been arrived at, one of them will be chosen through the decision making process.
The final stages of problem solving involve implementing your solution and seeking feedback on the outcome; this feedback can be recorded to help with future problem solving scenarios.
Tasmanian devils live in forests in Tasmania.
They eat birds, small animals and reptiles.
They have dark fur with some white fur.
They screech loudly.
Their strong jaws bite through bones.
The Tasmanian devil is the largest of the carnivorous (meat-eating) marsupials. They were once found all over Australia, but are now found only in Tasmania, Australia's island state.
They were probably driven south by the dingo when it came to Australia, at a time when Tasmania was joined to the mainland.
Habitat and Distribution (where they are found)
They are found in northern, eastern and central Tasmania. Their habitat is wooded countryside, in forests and near the outer suburbs of many towns.
Appearance and Behaviours
Devils are black with a white mark on the chest and rump, and look similar to a medium sized solid dog. Their tracks are in a diamond pattern: a single paw print, followed by two paw prints side by side, and then another single print. Their front legs are longer than their back legs, which gives them a rocking movement when they run, at a top speed of about 13 kilometres per hour.
Tasmanian devils are nocturnal: they hunt at night and spend the day in a burrow. They have powerful jaws that can bite through bones. They can open their mouths in a very wide gape. When several gather at one carcass, they growl and screech loudly, but rarely injure each other. This bone-chilling screeching gave the devil its name: early settlers thought that they sounded like devils in the night.
Generally Tasmanian devils eat dead animals they find. This is an important activity because it means dead carcasses are cleaned up, which keeps the bush clean and on farms prevents sheep getting infected by maggots. However, they can also hunt and kill birds, reptiles and small mammals. Like many marsupials, devils can retain fat in their tails to keep up nutrition when there is little food around.
After mating in March, a female gives birth in April to 2-4 young, called pups, after a pregnancy of about 21 days. After they are born, the tiny pups make their way to their mother's pouch. Inside the pouch, there are four teats to feed milk to the young devils.
The pouch opens towards the female's back legs. They stay in the pouch for about 16 weeks, and when they are too big to fit, the female moves them to a nest. They stay there for about 16 weeks, generally starting to follow her around at the end of that time. By the age of about 40 weeks, the young are on their own, and their mother leaves them in a den.
Conservation Status and Threats
Tasmanian devils are classified as Endangered, and are completely protected.
An epidemic called Devil Facial Tumour Disease is now having a terrible effect on the Tasmanian devil population. It started in the north-east of Tasmania in the mid-1990s but has now spread to other areas of the state.
Because of their habit of biting each other, particularly when sharing a carcass, devils spread the disease to each other.
Scientists and vets are working to find out how the disease can be stopped. Meanwhile, groups of healthy Tasmanian devils have been moved to zoos on the mainland to breed in captivity. When Tasmania is once again disease free, those devils will be released into the wild.
How do we measure UVR?
The ultraviolet index or UV Index is an international standard measurement of the strength of UVR from the sun on the ground at a particular time. The UV Index is an important vehicle to raise public awareness of the risks of excessive exposure to UV radiation, and to alert people about the need to adopt protective measures. Encouraging people to reduce their sun exposure can decrease harmful health effects and significantly reduce health care costs.
As the UV Index increases the hazard increases. There are a number of categories ranging from low exposure to extreme as shown in the table.
|UV Index||Exposure Category|
|2 or less||Low|
|3 to 5||Moderate|
|6 to 7||High|
|8 to 10||Very High|
|11 or more||Extreme|
The exposure categories are based on the response to fair-skinned people exposed to UVR. The UV Index may be either a prediction or a measurement.
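For readers who want to automate this, here is a minimal sketch mapping a UV Index value to the exposure categories in the table above. The "11 or more: Extreme" band follows the standard UV Index scale.

```python
def uv_exposure_category(uv_index):
    """Map a UV Index value to the exposure category bands in the table above."""
    if uv_index < 3:
        return "Low"
    if uv_index <= 5:
        return "Moderate"
    if uv_index <= 7:
        return "High"
    if uv_index <= 10:
        return "Very High"
    return "Extreme"

for value in (1, 4, 7, 9, 12):
    print(value, uv_exposure_category(value))
```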
ARPANSA obtains the measured UV Index from a detector that responds to UV radiation in much the same way as human skin does. The measurements take into account cloud cover and other environmental factors that computations can only approximate. The Bureau of Meteorology (BOM) calculate the predicted value from a radiative transfer model using parameters of date, time, latitude, temperature and ozone concentration. The skin’s response to UV radiation is required for calculating the predicted solar UV Index.
What is your skin type?
Skin is classified by sensitivity to UV radiation. If you are very fair skinned (white skin) and tend to burn easily in the summer sun and find it difficult to achieve a tan, you have skin type 1. People with skin type 1 have the highest risk of premature skin aging and greatest risk of developing some form of skin cancer. If you are of this type then you should limit your exposure to the sun and always dress to minimise sun exposure, wear a hat and use sunscreen. For other skin types from very fair to dark please refer to the Skin Chart based on the research of Fitzpatrick.
How can you reduce your UVR exposure?
Even for very sensitive fair-skinned people, the risk of short-term and long-term UV damage below a UV Index of 2 is limited, and under normal circumstances no protective measures are needed. If sun protection is required, this should include all protective means, i.e. clothing and sunglasses, shade and sunscreen.
For best protection, we recommend a combination of sun protection measures:
- Slip on some sun-protective clothing that covers as much skin as possible.
- Slop on broad spectrum, water resistant SPF30+ (or higher) sunscreen. Put it on 20 minutes before you go outdoors and reapply every two hours afterwards.
- Slap on a hat – broad brim or legionnaire style to protect your face, head, neck and ears.
- Seek shade.
- Slide on some sunglasses – make sure they meet Australian Standards.
A balance is required between avoiding an increase in the risk of skin cancer by excessive sun exposure and achieving enough exposure to maintain adequate vitamin D levels. Cancer Council Australia provides further advice on vitamin D.
Quote from ARPANSA
True or false: Areas near rivers, lakes and mountains are safe from tornadoes?
True or false: Windows should be cracked open to equalize air pressure during a tornado?
True or false: The southwest corner of a structure is the safest place during a tornado.
Question: How long are tornadoes on the ground?
Question: How high do wind speeds reach inside a tornado?
Question: Which is more intense, a small tornado or a very large one?
False--Tornadoes have been known to strike almost any location.
False--Open windows allow damaging winds to enter the structure. The most important thing to do is seek shelter.
Probably false--About 85 percent of tornadoes move from the southwest to the northeast. You want to put as many walls as possible between you and the approaching tornado. Get as low as you can, close to a wall. Collapsing floors cause many injuries, so don't choose a spot under heavy appliances, such as a refrigerator. If you are in a car, get out of it and lie flat in a ditch.
Answer--The typical tornado will produce a damage path two to six miles long in 5 to 15 minutes. Most, but not all, tornadoes touch down only momentarily.
Answer--Wind speeds in tornadoes have rarely been measured directly, and then usually by an indirect method. One practical way to estimate wind speed is to inspect the damage after the tornado passes, using a scale called the Fujita-Pearson Scale. An F0 (40-72 mph) is a tornado doing light damage. An F1 (73-112 mph) is a moderate tornado with wind speeds comparable to most hurricanes. An F2 (113-157 mph) is a significant tornado and can cause considerable damage. An F3 (158-206 mph) is a severe tornado and can tear off roofs and some walls of well-constructed houses. An F4 (207-260 mph) is a devastating tornado. An F5 (261-318 mph) can rip the pavement from streets and tear buildings off their foundations.
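For readers who prefer the ranges in compact form, here is a small, purely illustrative Python helper; the ranges are taken directly from the answer above, while the function itself is a hypothetical convenience:

```python
# Original Fujita scale ranges (mph), as quoted in the answer above.
FUJITA_RANGES = [
    ("F0", 40, 72), ("F1", 73, 112), ("F2", 113, 157),
    ("F3", 158, 206), ("F4", 207, 260), ("F5", 261, 318),
]

def fujita_category(wind_mph: float) -> str:
    """Return the F-scale category for an estimated wind speed."""
    for label, low, high in FUJITA_RANGES:
        if low <= wind_mph <= high:
            return label
    return "outside the quoted F0-F5 ranges"

print(fujita_category(180))  # F3
```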
Answer--The size of a tornado is not an indication of its intensity. A very large tornado can actually be weak. A small tornado, on the other hand, can be violent and cause extreme damage.
For more info, visit the website of the Storm Prediction Center. |
NETW320 – Converged Networks with Lab
Lab #7 Title: CODEC Selection for a WAN
A codec is a device capable of performing encoding and decoding on a digital signal. Each codec provides a different level of speech quality because codecs use different types of compression in order to require less bandwidth. The more the compression, the less bandwidth you will require; however, this ultimately comes at the cost of sound quality, as high-compression/low-bandwidth algorithms will not have the same voice quality as low-compression/high-bandwidth algorithms. The following table shows three standard codec types along with their corresponding Coder Type, Bit Rate, Frame Size Delay, and Look Ahead Delay. These particular three were chosen because they are the ones we will use in this lab, but there are others.

ITU Recommended Delay Values/G.114 (table columns: Codec, Coder Type, Bit Rate, Frame Size Delay in milliseconds, Look Ahead Delay in milliseconds)
A common scale used to determine the quality of sound produced by various codecs is the mean opinion score (MOS). This standard is based on a scale from 1 to 5, where 1 is very poor (no meaning understood) and 5 is excellent.
In this lab, you will build a wide-area network that will consist of several LANs spread across the country and connected via the Internet with two types of traffic: data and voice. You will configure three scenarios in which the data and voice traffic generated will be held constant. The only parameter that you will alter will be the type of codec used. In the first scenario, the voice traffic will use G.711 (PCM); in the second scenario, the voice traffic will use G.729 (CS-ACELP); and in the third scenario, the voice traffic will use G.723.1 (ACELP). By holding the traffic generated for the data and voice users constant, and only changing the codec, we will be able to see how the varying bit rate of each codec affects the end-to-end delay for the data traffic, the end-to-end delay for the voice traffic, and the packet delay variation for the voice traffic in a wide-area environment.

Motivation
Deciding on what codec to use is a tradeoff between bandwidth and speech quality; the more bandwidth the codec uses, the better the speech quality will be. Here are the codecs we will use, along with their corresponding MOS values.

Codecs
However, this is not the only factor that must be considered. Delay will greatly affect the quality of voice traffic because it is real-time traffic. Words and responses to words must be received in a coherent and timely manner for the voice conversation to be tolerated by the users. Here is a table with the ITU-T recommendations for overall one-way delay times for voice traffic.

ITU-T Recommended Acceptable One-Way Overall Delay Times/G.114

| Range in Seconds | Range in Milliseconds | Assessment |
| --- | --- | --- |
| 0 to 0.150 | 0 to 150 | Acceptable for most user applications. |
| 0.150 to 0.400 | 150 to 400 | Acceptable, provided that administrators are aware of the transmission time and the impact it has on the transmission quality of user applications. |
| Above 0.400 | Above 400 | Unacceptable for general network planning purposes; it is recognized that in some exceptional cases this limit is exceeded. |
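To see how codec choice feeds into the delay budget, the sketch below adds a codec's frame size and look-ahead delays to an assumed network delay and rates the total against the G.114 bands above. The per-codec figures are typical published values, not numbers from this lab handout, so treat them as placeholders.

```python
# Rough one-way delay budget per codec (all values in milliseconds).
# Frame-size and look-ahead figures are typical published values and are
# assumptions here -- substitute the values from the lab's codec table.
CODECS = {
    "G.711 (PCM)":      {"frame": 0.125, "look_ahead": 0.0},
    "G.729 (CS-ACELP)": {"frame": 10.0,  "look_ahead": 5.0},
    "G.723.1 (ACELP)":  {"frame": 30.0,  "look_ahead": 7.5},
}

ASSUMED_NETWORK_DELAY_MS = 120.0  # placeholder for propagation + queuing delay

def rate_one_way_delay(total_ms: float) -> str:
    """Apply the ITU-T G.114 bands from the table above."""
    if total_ms <= 150:
        return "acceptable for most user applications"
    if total_ms <= 400:
        return "acceptable if administrators are aware of the impact"
    return "unacceptable for general network planning"

for name, delays in CODECS.items():
    total = delays["frame"] + delays["look_ahead"] + ASSUMED_NETWORK_DELAY_MS
    print(f"{name}: {total:.1f} ms -> {rate_one_way_delay(total)}")
```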
The table shown in the introduction lists a Frame Size Delay and a Look Ahead Delay. The frame size refers to the time it takes the sender to transmit a frame. For complex compression algorithms that reduce bandwidth, this delay can...
|
While it’s known that a mother’s diet can affect her unborn child’s development in utero, a new study found that her eating habits around the time of conception can also alter her child’s lifelong risk of cancer.
A new study published in the journal Genome concluded that a gene affecting a person’s risk of cancer can be permanently altered in utero depending on a mother’s diet. While a child’s genes are directly inherited from his parents, how the genes are expressed is controlled through modifications to the DNA, which occur during embryonic and fetal development, according to the report.
Modifications can occur when gene regions are tagged with chemical compounds called methyl groups that silence genes. The compounds require specific nutrients, which means that a mother’s eating habits before and during pregnancy can permanently affect the “setting” of these tags, the report said.
Researchers at the Baylor College of Medicine, the Children’s Nutrition Research Center at Baylor and Texas Children’s Hospital, the London School of Hygiene & Tropical Medicine, and the MRC Unit The Gambia split into two groups and targeted specific regions of the genome called metastable epialleles that are particularly sensitive to these effects.
The research groups both found that the tumor suppressor gene VTRNA2-1— which helps prevent cells from becoming cancerous— was the most sensitive to the environment created by the mother around the time of conception.
“There are around 20,000 genes in the human genome,” study leader Dr. Robert Waterland, an associate professor of pediatrics and nutrition at Baylor, said, according to the report. “So for our two groups, taking different approaches, to identify this same gene as the top epiallele was like both of us digging into different sides of a gigantic haystack containing 20,000 needles… and finding the exact same needle.”
Typically, aside from genes on sex chromosomes, mammals inherit two copies of every gene, both of which function equally. Researchers found that VTRNA2-1 belongs to a special class of genes that are expressed from only the maternal or the paternal copy. These genes are called imprinted genes because they are imprinted with epigenetic marks inherited from either the sperm or the egg, the report said. What further sets VTRNA2-1 apart is that it is the first example of an imprinted metastable epiallele.
“Our results show that the methylation marks that regulate VTRNA2-1 imprinting are lost in some people, and that this ‘loss of imprinting’ is determined by maternal nutrition around the time of conception,” Andrew Prentice, professor at the London School of Hygiene & Tropical Medicine and head of the MRC International Nutrition Group, said, according to the report. “These are large changes in gene methylation that affect a substantial subset of individuals.”
Three previous studies showed that an increase in these methylation marks is a risk factor for acute myeloid leukemia, lung and esophageal cancer. However, a decrease in these marks— VTRNA2-1 loss of imprinting— led to individuals with a double-dose of the anti-cancer gene, according to the report.
“The potential implications are enormous,” Prentice told The Guardian. “In this particular example, the gene involved is really crucial – it lies at the center of the immune system so it affects our susceptibility to viral infection. At the very beginning of fetal growth, the way it is labeled is going to affect the baby’s health for the rest of its life,” he said.
“If a mother’s diet is poor then it causes a whole lot of damage to the genome which has a shotgun effect, so a baby might have possible adverse outcomes,” Prentice told the news site. “This general phenomenon might explain preterm births, problems in pregnancy, brain defects, or why some babies are born too small.”
“We could potentially clean up a lot of adverse pregnancy outcomes by getting the diet right,” he told The Guardian.
Researchers also showed the loss of VTRNA2-1 affects all cells of the body, and that the loss of imprinting is stable from childhood to adulthood. Researchers say more studies are under way to test whether methylation at VTRNA2-1 can be used as a screening test to predict risk of cancer. |
Information identified as archived on the Web is for reference, research or recordkeeping purposes. It has not been altered or updated after the date of archiving. Web pages that are archived on the Web are not subject to the Government of Canada Web Standards. As per the Communications Policy of the Government of Canada, you can request alternate formats on the Contact Us page.
- The Facts
- What causes acid rain?
- What does acid mean?
- What is pH?
- Where is acid rain a problem?
- Where do sulphur dioxide emissions come from?
- Have SO2 emission levels changed at all?
- Where do NOx emissions come from?
- Have NOx emission levels changed at all?
- What is the difference between a target load and a critical load?
- Would acid rain remain a problem without further controls?
- Air Quality
- Your Health
- Case Studies
Questions and Answers
What causes acid rain?
Acid deposition is a general term that includes more than simply acid rain. Acid deposition primarily results from the transformation of sulphur dioxide (SO2) and nitrogen oxides into dry or moist secondary pollutants such as sulphuric acid (H2SO4), ammonium nitrate (NH4NO3) and nitric acid (HNO3). The transformation of SO2 and NOx to acidic particles and vapours occurs as these pollutants are transported in the atmosphere over distances of hundreds to thousands of kilometers. Acidic particles and vapours are deposited via two processes - wet and dry deposition. Wet deposition is acid rain, the process by which acids with a pH normally below 5.6 are removed from the atmosphere in rain, snow, sleet or hail. Dry deposition takes place when particles such as fly ash, sulphates, nitrates, and gases (such as SO2 and NOx), are deposited on, or absorbed onto, surfaces. The gases can then be converted into acids when they contact water.
What does acid mean?
An acid is a substance with a sour taste that is characterized chemically by the ability to react with a base to form a salt. Acids turn blue litmus paper (also called pH paper) red. Strong acids can burn your skin.
What is pH?
A pH scale is used to measure the amount of acid in a liquid, such as water. Because acids release hydrogen ions, the acid content of a solution is based on the concentration of hydrogen ions and is expressed as "pH." This scale is used to measure the acidity of rain samples.
- 0 = maximum acidity
- 7 = neutral point in the middle of the scale
- 14 = maximum alkalinity (the opposite of acidity)
The smaller the number on the pH scale, the more acidic the substance is. Rain measuring between 0 and 5 on the pH scale is acidic and therefore called "acid rain." Small number changes on the pH scale actually mean large changes in acidity.
For example, a change in just one unit from pH 6.0 to pH 5.0 would indicate a tenfold increase in acidity. Clean rain usually has a pH of 5.6. It is slightly acidic because of carbon dioxide which is naturally present in the atmosphere. Vinegar, by comparison, is very acidic and has a pH of 3.
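Because the scale is logarithmic, the relative acidity of two samples can be worked out directly from their pH values. A minimal sketch, using the values quoted above:

```python
def relative_acidity(ph_a: float, ph_b: float) -> float:
    """How many times more acidic sample A is than sample B.

    pH is the negative base-10 logarithm of the hydrogen ion concentration,
    so each one-unit drop in pH means a tenfold increase in acidity.
    """
    return 10 ** (ph_b - ph_a)

print(relative_acidity(5.0, 6.0))   # 10.0 -- one unit lower, ten times more acidic
print(relative_acidity(3.0, 5.6))   # ~400 -- vinegar compared with clean rain
```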
Where is acid rain a problem?
Acid rain is a problem in eastern Canada because many of the water and soil systems in this region lack natural alkalinity - such as a lime base - and therefore cannot neutralize acid naturally. Provinces that are part of the Canadian Precambrian Shield, like Ontario, Quebec, New Brunswick and Nova Scotia, are hardest hit because their water and soil systems cannot fight the damaging consequences of acid rain. In fact, more than half of Canada consists of susceptible hard rock (i.e., granite) areas that do not have the capacity to effectively neutralize acid rain. If the water and soil systems were more alkaline - as in parts of western Canada and southeastern Ontario - they could neutralize or "buffer" against acid rain naturally.
In western Canada, there is insufficient information at this time to know whether acid rain is affecting these ecosystems. Historically, lower levels of industrialization - relative to eastern Canada - combined with natural factors such as eastwardly moving weather patterns and resistant soils (i.e., soils better able to neutralize acidity), have preserved much of western Canada from the ravages of acid rain.
However, not all areas in western Canada are naturally protected. Lakes and soils resting on granite bedrock, for instance, cannot neutralize precipitation. These are the conditions found in areas of the Canadian Shield in northeastern Alberta, northern Saskatchewan and Manitoba, parts of western British Columbia, Nunavut and the Northwest Territories. Lakes in these areas are as defenseless to acid rain as those in northern Ontario. If sulphur dioxide and nitrogen oxide emissions continue to increase in western Canada, the same sort of harmful impacts that have happened in eastern Canada could occur.
Visit The NatChem Website for information on how to obtain deposition data and maps.
Where do sulphur dioxide emissions come from?
Sulphur dioxide (SO2) is generally a byproduct of industrial processes and the burning of fossil fuels. Ore smelting, coal-fired power generators and natural gas processing are the main contributors. In 2000, for instance, U.S. SO2 emissions were measured at 14.8 million tonnes - more than six times greater than Canada's 2.4 million tonnes. But the sources of SO2 emissions in the two countries are different. In Canada, 68% of emissions come from industrial sources and 27% come from electric utilities (2000). In the U.S., 67% of emissions are from electric utilities (2002).
Canada cannot win the fight against acid rain on its own. Only reducing acidic emissions in both Canada and the U.S. will stop acid rain. More than half of the acid deposition in eastern Canada originates from emissions in the United States. Areas such as southeastern Ontario (Longwoods) and Sutton, Quebec receive about three-quarters of their acid deposition from the United States. In 1995, the estimated transboundary flow of sulphur dioxide from the United States to Canada was between 3.5 and 4.2 million tonnes per year.
Have SO2 emission levels changed at all?
Initiated in 1985, the Eastern Canada Acid Rain Program committed Canada to cap SO2 emissions in the seven provinces from Manitoba eastward at 2.3 million tonnes by 1994, a 40% reduction from 1980 levels. By 1994, all seven provinces had achieved or exceeded their targets. In 1998, the provinces, territories and the federal government signed The Canada-Wide Acid Rain Strategy for Post-2000, committing them to further actions to deal with acid rain. Progress under both the Eastern Canada Acid Rain Program and the Post-2000 Strategy, including data on emissions, is reported in the respective annual reports of these two programs. Between 1980 and 2001, emissions of SO2 declined by approximately 50% to 2.38 million tonnes. In eastern Canada, emissions of SO2 declined by approximately 63% between 1980 and 2001.
Where do NOx emissions come from?
The main source of NOx emissions is the combustion of fuels in motor vehicles, residential and commercial furnaces, industrial and electrical-utility boilers and engines, and other equipment. In 2000, Canada's largest contributor of NOx was the transportation sector, which accounted for approximately 60% of all emissions. Overall, NOx emissions amounted to 2.5 million tonnes in 2000. By comparison, U.S. NOx emissions for 2000 amounted to 21 million tonnes - 8 times more than Canada's emissions.
The influence of transboundary flows of air pollutants from the United States into Canada is significant. Overall about 24% of the regional-scale ozone episodes that are experienced in the United States also affect Ontario. An analysis of ozone concentrations at four sites in extreme southwestern Ontario taking wind factors into account provides an estimate that 50 to 60% of the ozone at these locations is of U.S. origin (Multi-stakeholder NOx/VOC Science Program 1997b).
Have NOx emission levels changed at all?
In Canada, total NOx emissions have been relatively constant since 1985. As of 2000, stationary sources of NOx emissions have been reduced by more than 100,000 tonnes below the forecasted level at power plants, major combustion sources and metal smelting operations. In 2000, as part of the Ozone Annex to the Canada-US Air Quality Agreement, Canada committed to an annual cap on NOx emissions from fossil-fuel power plants of 39,000 tonnes in central and southern Ontario and 5,000 tonnes in southern Quebec. It also committed to new stringent emission reduction standards for vehicles and fuels and measures to reduce NOx emissions from industrial boilers. These commitments are estimated to reduce annual NOx emissions from the Canadian transboundary region (defined as central and southern Ontario and southern Quebec) by approximately 39% from 1990 levels by 2010.
What is the difference between a target load and a critical load?
The critical load is a measure of how much pollution an ecosystem can tolerate; in other words, the threshold above which the pollutant load harms the environment. Different regions have different critical loads. Ecosystems that can tolerate acidic pollution have high critical loads, while sensitive ecosystems have low critical loads.
Critical loads vary across Canada. They depend on the ability of each particular ecosystem to neutralize acids. Scientists have defined the critical load for aquatic ecosystems as the amount of wet sulphate deposition that protects 95% of lakes from acidifying to a pH level of less than 6. (A pH of 7 is neutral; less than 7 is acidic; and greater than 7 is basic.) At a pH below 6, fish and other aquatic species begin to decline.
A target load is the amount of pollution that is deemed achievable and politically acceptable when other factors (such as ethics, scientific uncertainties, and social and economic effects) are balanced with environmental considerations. Under the Eastern Canada Acid Rain Program, Canada committed to cap SO2 emissions in the seven provinces from Manitoba eastward at 2.3 million tonnes by 1994. The program's objective was to reduce wet sulphate deposition to a target load of no more than 20 kilograms per hectare per year (kg/ha/yr), which our scientists defined as the acceptable deposition rate to protect moderately sensitive aquatic ecosystems from acidification.
Under the Canada-Wide Acid Rain Strategy for Post-2000, signed in 1998, governments in Canada have adopted the primary long-term goal of meeting critical loads for acid deposition across the country. Recently, maps that combine critical load values for aquatic and forest ecosystems have been developed. These maps indicate the amount of acidity (reported as acid equivalents per hectare per year (eq/ha/yr)) that the most sensitive part of the ecosystem in a particular region can receive without being damaged.
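The comparison behind such maps is straightforward: exceedance is deposition minus critical load, in matching units. A minimal sketch with illustrative numbers only (not values taken from the maps described here):

```python
def exceedance(deposition: float, critical_load: float) -> float:
    """Amount by which acid deposition exceeds an ecosystem's critical load.

    Both values must be in the same units (e.g. kg/ha/yr of wet sulphate,
    or eq/ha/yr of total acidity). A result of zero or less means the
    ecosystem is within its critical load.
    """
    return deposition - critical_load

# Illustrative numbers only.
print(exceedance(deposition=28.0, critical_load=20.0))   # 8.0 -> over the limit
print(exceedance(deposition=12.0, critical_load=20.0))   # -8.0 -> within the limit
```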
The maximum amount of acid deposition that a region can receive without damage to its ecosystems is known as its critical load. It depends essentially on the acid-rain neutralizing capacity of the water, rocks, and soils and, as this map of Canada shows, can vary considerably from one area to another. Critical loads were calculated using either water chemistry models (i.e., "Expert" or "SSWC") or a forest soil model (i.e., "SMB"). The index map (lower left) indicates the model selected for each grid square: red = Expert (aquatic), yellow = SSWC (aquatic), green = SMB (upland forest soils).
Would acid rain remain a problem without further controls?
Yes. Scientists predicted in 1990 that a reduction in SO2 emissions from Canada and the U.S. of approximately 75% beyond commitments in the 1991 Canada-U.S. Air Quality Agreement (AQA) would be necessary to eliminate the acid deposition problem in Canada. This science was based on the effect of sulphur-derived acids in wet deposition on aquatic ecosystems. New science presented in the 2004 Acid Deposition Science Assessment assesses the capacity of aquatic and terrestrial ecosystems to receive acids derived from both sulphur and nitrogen in wet and dry deposition. Improved estimates of dry deposition (the sum of gaseous SO2, particle sulphate, nitric acid, particle nitrate and other nitrogen species) indicate that past estimates of critical loads for aquatic ecosystems are too high, implying that past predictions of the impact of proposed control strategies have been overly optimistic. In some regions, the critical loads for forest ecosystems are even more stringent than those for aquatic ecosystems. Canada still needs to evaluate the sustainability of forest ecosystems for various levels of acid deposition given the new critical loads for terrestrial ecosystems. It is likely that new science will continue to support the need for further SO2 emission reductions of this scale or somewhat greater.
That is why The Canada-Wide Acid Rain Strategy for Post-2000 calls for further emission reductions in both Canada and the United States. Without further controls beyond those identified in the 1991 Canada-U.S. Air Quality Agreement, areas of southern and central Ontario, southern and central Quebec, New Brunswick and Nova Scotia would continue to receive mean annual sulphate deposition amounts that exceed their critical loads. The critical load would be exceeded by up to 10 kg/ha/yr of wet sulphate in parts of central Ontario and central and southern Quebec. As a result, about 95,000 lakes would remain damaged by acid rain. Lakes in these areas have not responded to reductions in sulphate deposition as well as, or as rapidly as, those in less sensitive regions. In fact, some sensitive lakes continue to acidify.
In total, without further controls, almost 800,000 km2 in southeastern Canada-an area the size of France and the United Kingdom combined-would receive harmful levels of acid rain; that is, levels well above critical load limits for aquatic systems.
Predicted wet sulphate deposition in excess of critical loads in 2010, without further controls (in kg/ha/yr).
Is rain getting more or less acidic?
One measure of the acidity of acid rain is the pH. The pH of rain depends on two things: the presence of acid-forming substances such as sulphates, and the availability of acid-neutralizing substances such as calcium and magnesium salts. Clean rain has a pH value of about 5.6. By comparison, vinegar has a pH of 3.
Although the acidity of acid rain has declined since 1980, rain is still acidic in eastern Canada. For example, the average pH of rain in Ontario's Muskoka-Haliburton area is about 4.5 - about 40 times more acidic than normal.
Reductions in the acidity of acid rain are due to reductions in emissions of SO2.
How does acid rain affect lakes, rivers and streams?
Lakes that have been acidified cannot support the same variety of life as healthy lakes. As a lake becomes more acidic, crayfish and clam populations are the first to disappear, then various types of fish. Many types of plankton (minute organisms that form the basis of the lake's food chain) are also affected. As fish stocks dwindle, so do populations of loons and other water birds that feed on them. The lakes, however, do not become totally dead. Some life forms actually benefit from the increased acidity. Lake-bottom plants and mosses, for instance, thrive in acid lakes. So do blackfly larvae.
Not all lakes that are exposed to acid rain become acidified. In areas where there is plenty of limestone rock, lakes are better able to neutralize acid. In areas where rock is mostly granite, the lakes cannot neutralize acid. Unfortunately, much of eastern Canada-where most of the acid rain falls-has a lot of granite rock and therefore a very low capacity for neutralizing acids.
What happens to the fish, frogs, birds and bugs that live there?
There are many ways in which the acidification of lakes, rivers and streams harms fish. Mass fish mortalities occur during the spring snow melt, when highly acidic pollutants that have built up in the snow over the winter begin to drain into common waterways. Such events have been well documented for salmon and trout in Norway.
More often, fish gradually disappear from these waterways as their environment slowly becomes intolerable. Some kinds of fish such as smallmouth bass, walleye, brook trout and salmon, are more sensitive to acidity than others and tend to disappear first.
Even those species that appear to be surviving may be suffering from acid stress in a number of different ways. One of the first signs of acid stress is the failure of females to spawn. Sometimes, even if the female is successful in spawning, the hatchlings or fry are unable to survive in the highly acidic waters. This explains why some acidic lakes have only older fish in them. A good catch of adult fish in such a lake could mislead an angler into thinking that all is well.
Other effects of acidified lakes on fish include: decreased growth, inability to regulate their own body chemistry, reduced egg deposition, deformities in young fish and increased susceptibility to naturally occurring diseases.
Here are the effects of an acidified ecosystem on the natural environment:
As water pH approaches 6.0:
- crustaceans, insects, and some plankton species begin to disappear.
- major changes in the makeup of the plankton community occur.
- less desirable species of mosses and plankton may begin to invade.
- the progressive loss of some fish populations is likely, with the more highly valued species being generally the least tolerant of acidity.

At pH less than 5.0:
- the water is largely devoid of fish.
- the bottom is covered with undecayed material.
- the nearshore areas may be dominated by mosses.
- terrestrial animals, dependent on aquatic ecosystems, are affected. Waterfowl, for example, depend on aquatic organisms for nourishment and nutrients. As these food sources are reduced or eliminated, the quality of habitat declines and the reproductive success of birds is affected.
Are the lakes recovering?
Some acidified lakes are recovering, but many more are not. Of 202 lakes that have been studied since the early 1980s, 33% have reduced levels of acidity while 56% have shown no change and 11% have actually become more acidic. The greatest improvements have been seen in the Sudbury area, where local emissions of acid-causing pollutants have declined by 90% in the last three decades. Here, fish populations have rebounded and fish-eating birds, such as loons, have increased. However, no substantial wildlife recovery has been seen beyond the Sudbury area. The least improvement has been seen in Atlantic Canada, even though lakes in this region were never as highly acidified as those in some parts of Ontario and Quebec. Since 1990, scientists have confirmed that maintaining lake pH at 6.0 or more is the most appropriate criterion for calculating critical loads. This pH level encourages healthy aquatic systems in lakes, rivers and streams.
What does acid rain do to trees?
The impact of acid rain on trees ranges from minimal to severe, depending on the region of the country and on the acidity of the rain. Acid rain, acid fog and acid vapour damage the surfaces of leaves and needles, reduce a tree's ability to withstand cold, and inhibit plant germination and reproduction. Consequently, tree vitality and regenerative capability are reduced.
Acid rain also depletes supplies of important nutrients (e.g. calcium and magnesium) from soils. The loss of these nutrients is known to reduce the health and growth of trees (see below).
How else does acid rain affect forests?
Prolonged exposure to acid rain causes forest soils to lose valuable nutrients. It also increases the concentration of aluminum in the soil, which interferes with the uptake of nutrients by the trees. Lack of nutrients causes trees to grow more slowly or to stop growing altogether. More visible damage, such as defoliation, may show up later. Trees exposed to acid rain may also have more difficulty withstanding other stresses, such as drought, disease, insect pests and cold weather.
The ability of forests to withstand acidification depends on the ability of the forest soils to neutralize the acids. This is determined by much the same geological conditions that affect the acidification of lakes. Consequently, the threat to forests is largest in those areas where lakes are also seriously threatened - in central Ontario, southern Quebec, and the Atlantic provinces. These areas receive about twice the level of acid rain that forests can tolerate without long-term damage. Forests in upland areas may also experience damage from acid fog that often forms at higher elevations.
Are these effects reversible?
Acid rain has caused severe depletion of nutrients in forest soils in parts of Ontario, Quebec and the Atlantic provinces, as well as in the northeastern United States. While this may be reversible, it would take many years - in some areas hundreds of years - for soil nutrients to be replenished to former levels through natural processes such as weathering, even if acid rain were eliminated completely. For now, forests in affected areas - where acid rain exceeds the critical loads - are using the pool of minerals accumulated during post-ice age times, although some monitoring sites are already deficient in minerals and visual damage to forests has appeared. The loss of nutrients in forest soils may threaten the long-term sustainability of forests in areas with sensitive soils. If current levels of acid rain continue into the future, the growth and productivity of approximately 50% of Canada's eastern boreal forests will be negatively affected.
The maximum amount of acid deposition that a region can receive without damage to its ecosystems is known as its critical load. It depends essentially on the acid-rain neutralizing capacity of the water, rocks, and soils. This map, of the upland forest soil Steady-state critical load exceedances for southeastern Canada (eq/ha/yr), shows areas of eastern Canada where the levels of acid deposition exceed the capacity of the soils to neutralize the acid without harming the long-term sustainability of the environment. The Steady-state exceedance calculations assume that the forests are not harvested.
Are there connections to other air pollution problems?
Yes. Burning fossil fuel also creates urban smog, climate change and releases mercury into the air.
SO2 can react with water vapour and other chemicals in the air to form very fine particles of sulphate. These airborne particles form a key element of smog and are a significant health hazard. Fine particles lodge deep within the lungs and can cause inflammation and tissue damage. Seniors and persons with heart and respiratory diseases are particularly vulnerable. Recent studies show strong links between high levels of airborne sulphate particles and increased hospital admissions and higher death rates.
Urban smog also forms a haze in the air that reduces the visibility of distant objects. The areas that are most affected are the Windsor-Quebec corridor in eastern Canada and British Columbia's Lower Fraser Valley where scenery and buildings are often obscured.
It is expected that with climate change, there will be accompanying higher temperatures and drought. Climate change can also cause harmless sulphur compounds that have built up in wetlands and soils to become acid-forming sulphates. When the wet weather returns, the sulphates are flushed into the surrounding lakes and increase their acidity.
Higher concentrations of mercury commonly found in acid lakes can cause reproductive problems in birds.
Ultraviolet (UV) radiation
Plankton and other organisms that live near the surface of an acid lake are more vulnerable to increased UV levels that result from a thinner ozone layer. This happens because acidity reduces the amount of dissolved organic matter in the water, making it clearer and allowing the UV to penetrate to greater depths.
What is the link between acid rain and human health?
Sulphur dioxide can react with water vapour and other chemicals in the air to form very fine particles of sulphate. These airborne particles form a key component of urban smog and are now recognized as a significant health hazard.
What are the health effects of particulate matter (PM)?
Fine particles, or particulate matter (PM), can lodge deep within the lungs, where they cause inflammation and damage to tissues. These particles are particularly dangerous to the elderly and to people with heart and respiratory diseases. Recent studies have identified strong links between high levels of airborne sulphate particles and increased hospital admissions for heart and respiratory problems, increased asthma-symptom days, as well as higher death rates from these ailments.
The air pollution health effects pyramid is a diagrammatic presentation of the relationship between the severity and frequency of health effects, with the mildest and most common effects at the bottom of the pyramid, e.g., symptoms, and the least common but more severe at the top of the pyramid, e.g., premature mortality. The pyramid demonstrates that as severity decreases, the number of people affected increases.
What are the costs to Canadians of these health effects?
By using computer models, scientists and economists can estimate the costs of these health effects to Canadians. They do this by computer simulations, where they eliminate SO2 emissions in increasing amounts to predict how the cases of heart and respiratory problems and premature mortality would decline. This decline in health effects represents a significant potential benefit to Canadians; however, it also represents the cost to Canadians of living with current SO2 emission levels.
For example, the expected health benefits to Canada of a 50% SO2 reduction in both eastern Canada and the U.S. (i.e., reductions above and beyond the current commitments in the Eastern Canada Acid Rain Program and U.S. Acid Rain Program) are:
- 550 premature deaths per year would be avoided;
- 1,520 emergency room visits per year would be unnecessary; and
- 210,070 asthma symptom days per year would be avoided.
Economists estimate that society values these health benefits in a range from just under $500 million per year up to $5 billion per year.
The U.S. has also estimated the health benefits of their current Acid Rain Program, both to their citizens as well as to Canadians. The average total annual estimated health benefit (in 1994 dollars) for 1997 in the United States is US$10.6 billion, and rises to US$40.0 billion by the year 2010, when the U.S. Acid Rain Program is fully implemented.
The estimated benefits for Canada occur primarily in the Windsor-Quebec corridor, where the greatest share of the Canadian population likely to be affected by transboundary transport of SO2 emissions from the eastern U.S. is located. The average total annual estimated health benefit for Canada is US$955 million, or well over a billion Canadian dollars by the year 2010.
The Sudbury region has a well-known history of very high local SO2 emissions and associated acid deposition, and it is broadly sensitive to acid rain. The degree of historical damage to the landscape, combined with the efforts of the Ontario government and industry to improve conditions, makes the Sudbury area an unintentional but important "experiment" in whole-ecosystem acidification and recovery.
Of the 7000 lakes estimated to have been damaged by smelter emissions, most are located in the hilly forested areas, underlain by granite bedrock, northeast and southwest of Sudbury. As a result, sport fish losses from acidification in this area have also been heavy. In fact, most of Canada's well-documented cases of fisheries losses from acid rain are in the Sudbury area (not forgetting, of course, the losses of Atlantic salmon from Nova Scotia rivers and some sports fish losses in areas of Quebec).
Over 35 years ago, scientists began studying the lakes and ponds near Sudbury. Since then, a vast amount of information has been collected that has clearly established the damaging effect of smelter emissions on the chemistry and biology of water bodies. This information has since been widely used throughout Canada and the rest of the world in the debate for cleaner air. Dramatic chemical improvements in Sudbury area lakes have been observed following substantial reductions in local smelter emissions. Between 1980 and 1997, Inco and Falconbridge, the two major producers of smelter emissions in the Sudbury area, reduced their SO2 emissions by 75% and 56% respectively.
Overall, the widespread chemical and biological improvements seen in lakes of the Sudbury area demonstrate the resiliency of aquatic systems and provide strong support for the use of emission controls to combat aquatic acidification. However, many area lakes are still acidic and contaminated with metals.
Major Sulphur Dioxide (SO2) sources in Sudbury, Ontario (kilotonnes)
| Source | 1980 | 1990 | 1997 | SO2 cap |
| --- | --- | --- | --- | --- |
| INCO (Copper Cliff) | 812 | 617 | 200 | 265 |
| FALCONBRIDGE (Sudbury) | 123 | 70 | 54 | 100 |
|
Discover the cosmos! Each day a different image or photograph of our fascinating universe is featured, along with a brief explanation written by a professional astronomer.
2006 November 24
Explanation: Runaway stars are massive stars traveling rapidly through interstellar space. Like a ship plowing through cosmic seas, runaway star Alpha Cam has produced this graceful arcing bow wave or bow shock - moving at over 60 kilometers per second and compressing the interstellar material in its path. The bright star above and left of center in this wide (3x2 degree) view, Alpha Cam is about 25-30 times as massive as the Sun, 5 times hotter (30,000 kelvins), and over 500,000 times brighter. About 4,000 light-years away in the long-necked constellation Camelopardalis, the star also produces a strong wind. The bow shock stands off about 10 light-years from the star itself. What set this star in motion? Astronomers have long thought that Alpha Cam was flung out of a nearby cluster of young hot stars due to gravitational interactions with other cluster members or perhaps by the supernova explosion of a massive companion star.
|
Biology 9 Lesson 63: Review biology and environment
1. Theoretical Summary
1.1. Environment and ecological factors
1.2. The division of groups of organisms based on ecological limits
1.3. Same-species and different-species relationships
1.4. Systematization of concepts
1.5. The characteristics of the population
1.6. Typical signs of a biome
2. Illustrated exercise
Lesson 1: In what ways is a human population different from the populations of other organisms? What is the meaning of the population pyramid?
– A human population differs from other biological populations in that it has socio-economic characteristics such as law, marriage, education and culture. Because humans can work and think, they are able to regulate the ecological characteristics of their populations and, at the same time, improve nature.
– Meaning of the population pyramid: the population pyramid represents the demographic characteristics of each country. There are two basic forms, the young population pyramid and the old population pyramid.
– Young population pyramid: a pyramid with a wide base, because the number of children born each year is high. Its strongly sloping sides and pointed top indicate a high mortality rate and therefore a low life expectancy.
– Old population pyramid: a pyramid with a narrow base, a top that is not pointed and sides that are almost vertical, indicating both low birth and death rates and therefore a high life expectancy.
Lesson 2: Why is it said that environmental pollution is mainly caused by human activities? List measures to reduce pollution.
– Environmental pollution is said to be caused mainly by human activities because pollution from natural processes is comparatively small: for example, volcanoes spewing lava and ash, or natural disasters and floods that create favorable conditions for the growth of many pathogenic microorganisms. Most other causes of pollution arise from human activities.
Measures to limit pollution:
- Installation of air purification equipment for factories.
- Make greater use of new energy sources that do not generate emissions (wind, solar).
- Create a settling tank and filter wastewater.
- Construction of a waste treatment plant.
- Bury or incinerate waste using scientifically sound methods.
- Promote scientific research to predict and find preventive measures.
- Build a factory to recycle waste into raw materials, utensils, etc.
- Build green parks, plant trees.
- Educating to raise people’s awareness about pollution and how to prevent it.
- Building a place to strictly manage highly hazardous substances.
- Treat animal manure before use, for example by producing biogas.
- Produce food hygienically and safely.
3.1. Essay exercises
Question 1: Is it possible, based on morphological characteristics, to distinguish the effects of ecological factors from the adaptations of organisms?
Question 2: What are the differences between same-species and interspecies relationships?
Question 3: In what basic ways do communities and populations differ from each other?
Question 4: Describe human activities that affect the environment negatively and positively.
3.2. Multiple choice exercises
Question 1: What is the most favorable position within the ecological limit for organisms to grow and develop?
A. Near the lower lethal point.
B. Near the upper lethal point.
C. At the extreme pole.
D. At the midpoint between the lower and upper lethal points.
Question 2: Why is the human factor placed in a separate group of ecological factors?
A. Because humans can think and work.
B. Because humans are the most evolved compared with other animals.
C. Because human activities are different from those of other organisms: humans have intelligence, so they can both exploit natural resources and improve nature.
D. Because humans have the ability to master nature.
Question 3: When does the density of animal populations increase?
A. When living conditions change suddenly, for example during floods, forest fires or epidemics.
B. When the population's living area expands.
C. When some individuals separate from the population.
D. When the population's food supply is abundant.
Question 4: Which of the following is not an example of a population?
A. The penguins living on the Antarctic coast.
B. The hamsters living in a rice field.
C. The cobras living on three widely separated islands.
D. The resin pine forests distributed in the Northeast region of Vietnam.
Question 5: If more than 30% of a country's population is under 15 years of age, less than 10% is elderly, and average life expectancy is low, then that country's population pyramid is classified as:
A. Relatively stable population pyramid
B. Declining population pyramid
C. Stable population pyramid
D. The pyramid of population growth
After completing this lesson, you should meet the following requirements:
- Systematize basic knowledge about organisms and the environment, and know how to apply theory to production and everyday life.
- Practice skills in comparing, synthesizing and generalizing knowledge, and in working in groups.
- Develop a love of nature and an awareness of the need to protect nature and the living environment.
Encouraging children to work through problems independently
Schools are increasingly looking at ways of developing a broad skills base in children that will equip them with the necessary qualities to succeed in later life.
According to a collection of academic research published on the London Metropolitan University website recently, students feel they learn best under the following conditions:
- at their own pace;
- at times and places of their own choosing;
- often with other people around, especially fellow-learners;
- when they feel in control of their learning.
In addition, the data outlines that the role of the teacher is more as a facilitator to learning than a direct instructor. Teachers are then charged with supporting children in the learning process in the following ways:
- providing learners with resource materials;
- whetting learners' appetites to learn;
- providing learners with chances to test out their learning;
- giving learners feedback on their progress;
- helping learners to make sense of what they have learned.
Children who are left to work through problems and tasks independently are given ownership of their learning; they gain the ability to self-motivate and develop independent problem-solving strategies.
By giving pupils regular opportunities to work by themselves, without the constant support of an adult, parents and schools are aiding children to develop the skills that they will need to draw on at every stage of their education.
In the majority of schools, children are already being asked to plan their own investigations in science, identify what they would like to find out in a history/geography topic and to set their own progress targets in numeracy and literacy.
Speaking at the annual conference for the Association of Teachers and Lecturers, Jon Overton, a teacher from inner-London, argued that children now need a broad range of independent skills:
'What we need to equip our young people with are skills; enquiry skills, the ability to innovate. That is what universities are saying is lacking, that is what employers say is lacking; transferrable skills that ultimately will make a difference in the life of a young person.' |
Language arts teachers, jazz up your classroom and help students navigate some of English’s most confusing words with this colorful grammar poster!
There are many confusing words in the English language that look and sound similar. Keep them straight with this invaluable poster that teaches students the difference between commonly confused homophones. It also gives easy-to-understand examples of how to use each word in a sentence. Brightly colored with a layout that makes it easy for kids to see, read and understand, this educational chart is much more than your average classroom decoration; it's a must-have for any language arts, English, or creative writing classroom!
- EDUCATIONAL POSTER lists 10 sets of commonly confused words and homophones and gives the definition and examples of each
- MUST HAVE FOR YOUR CLASSROOM - This is the perfect, colorful decoration for middle school or high school classrooms
- DURABLE - Laminated with 3 mil thick laminate with edge protected corners to ensure durability
- EXCELLENT QUALITY - 100# cover gloss paper
- FULL SIZE POSTER - Measures 17"x22"
- Made in the USA
|
According to researchers, millions of people worldwide are affected by epilepsy. Although people of any age can develop the condition, it most often affects the very young and the very old.
Epilepsy is a neurological condition caused by recurring disruption of the brain's usual activity. The outward signs of epilepsy are seizures, which can differ in appearance depending on the part of the brain affected and on how far the disruption spreads. Seizures can arise in any part of the brain, and most last for one to two minutes.
The brain consists of tiny cells that carry electrical charges, known as neurons. Normally, electrical signals pass between nerve cells, and from them to all parts of the body, in a controlled way. When an individual has epilepsy, the nerve cells in the brain are overactive and send out powerful, rapid electrical discharges that disrupt the normal functioning of the brain. During this time, brain cells can be four times more active than normal, causing problems with behavior, movement, and thinking.
A seizure is associated with several phases:
- Preictal – the time before the seizure. This phase can begin days in advance and make people feel different. Many people do not experience anything during this phase; those who do often treat it as a warning sign. Some experience an aura before a seizure, which makes them see, smell, hear or taste something for no reason, or simply feel strange.
- Ictal – the phase in which the seizure itself occurs. Many physical changes take place during this phase, and many people also experience cardiovascular and metabolic changes.
- Interictal – the time between seizures. During this phase, many people experience emotional changes, ranging from mild fear to anxiety and depression.
- Postictal – the final phase, a recovery period whose length depends on the duration and severity of the seizure.
Some of the common symptoms of seizure include:
- Extreme tiredness
- Unusual sensations
- Sore muscles
There are many factors that can play a role in the development of epilepsy, including abnormal brain development, an infection of the brain, loss of oxygen to the brain, a brain tumor, and stroke.
Types of seizure
There are two broad categories of seizures – generalized seizures and partial seizures.
In a generalized seizure, a large area of both parts of the brain is affected. A Generalized seizure is categorized into several types:
- Absence seizure – This type of seizure lasts only a few seconds and can occur many times in a day. It is often mistaken for daydreaming, and afterwards the patient does not remember anything from the seizure.
- Generalized tonic-clonic seizure – This type of seizure has two phases: the tonic phase and the clonic phase. In the tonic phase, patients experience stiffness in their arms and legs. In the clonic phase, the limbs and head begin to jerk. Afterwards, patients are confused and do not remember what happened.
- Myoclonic seizures – A myoclonic seizure causes the patient's body to jerk. It can affect part or all of the body, and the jerks can be forceful.
- Atonic seizures – This type of seizure causes difficulty in walking and can cause a sudden drop to the ground. It is a serious type of seizure, and patients who have it often wear protective headgear.
Partial seizures are the most common type of seizure. They begin on one side of the brain and affect the functions controlled by that part of the brain. For example, when a seizure affects the part of the brain that controls speech, the person's ability to speak is impaired. A partial seizure is categorized into two main types:
1. Simple partial seizure – Most often, patients having a simple partial seizure stay awake throughout the seizure and are aware of what is happening, but are unable to speak or move until the seizure is over. If the part of the brain that controls the senses is affected, patients might hear, smell or see things that are not actually present; sometimes they even re-experience things that happened in the past.
2. Complex partial seizures – Complex partial seizures are more serious and affect a larger part of the brain. In addition to the symptoms caused by impairment of the affected area, patients experience problems with consciousness. During a complex partial seizure, patients typically stop what they are doing and stare blankly, often breaking off conversations and making disorganized movements. Many times, patients having a complex partial seizure look normal because they simply stare blankly.
|
Function for displaying math formula with component variable names – xlv
=xlv(CELLREF, Optional Rounder(INTEGER), Optional ColumnNo(Integer))
This function displays the variable names of a mathematical formula defined in another cell. It updates automatically with a change in the values in the formula or a change in the formula.
A formula (=C3+((C4/C5)^(C6+C8)*C7)*C9) is entered in Cell C11 that references the values in cells C3 to C9. Corresponding variable names are located in cells B3 to B9. Each variable name must be located in the same row as the value it is representing.
The variables of the formula in Cell C11 are shown in the cell where the xlv function is entered. In the example below, xlv is entered in Cell C13.
Does the variable name need to be in the cell directly to the left of the value it is representing?
No: by default, the xlv function looks for the first cell to the left of the value that is not a numeric value.
How to use the Optional Parameters “Rounder” and “ColumnNo”
In the example above, the xlv function has been used without the Rounder parameter and so has defaulted to its adaptive formatting. The adaptive formatting parses each value in the formula as follows (see the sketch after this list):
- If Value > 10000000, the number is expressed in scientific notation.
- If 10000000 > Value > 100, the number is expressed in standard notation to zero decimal places.
- If 100 > Value > 0.001, the number is expressed in standard notation to 3 significant figures.
- If Value < 0.001, the number is expressed in scientific notation.
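The same rules can be expressed in a few lines of plain Python. This is only an illustration of the logic described above, not the add-in's own code; applying the thresholds to the absolute value is an assumption for negative numbers.

```python
def adaptive_format(value: float) -> str:
    """Illustration of the adaptive number formatting rules listed above."""
    magnitude = abs(value)  # assumption: thresholds apply to the magnitude
    if magnitude > 10_000_000:      # very large -> scientific notation
        return f"{value:.3e}"
    if magnitude > 100:             # large -> zero decimal places
        return f"{value:,.0f}"
    if magnitude > 0.001:           # mid-range -> 3 significant figures
        return f"{value:.3g}"
    return f"{value:.3e}"           # very small -> scientific notation

for v in (123456789.0, 4321.5, 3.14159, 0.00042):
    print(adaptive_format(v))
```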
Generally, the xlv function will display the variables of the formula. However, in an instance where a numeric value (rather than a CellRef) is entered in the formula that the “xlv” function is referencing, the rounder parameter can be used to determine the number of decimal places to be displayed.
The formula in cell M11 : =M3+((M4/M5)^(M6+M8)*M7)*M9-101.23589
The last term of the formula is 101.23589 and is a numeric value (and not a CellRef). By default, xlv uses the adaptive number formatting (as described above) and displays the result as shown below in Cell M13; Entry in Cell M13 is: =xlv(M11)
Cell M15 displays the result when setting the rounder parameter to 2 (i.e. 2 decimal places); Entry in Cell M15 is: =xlv(M11,2)
By default, the xlv function returns the first cell to the left of the value that is not a numeric value.
In the example below, a formula is entered in cell E10 that references the values in Column E.
=xlv(E10) is entered in cell E12. As can be seen by the output of the xlv function (in cell E12), the xlv function returns the variables located in Column B, since that is where the first non-numeric values are located.
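The default lookup can be pictured as a right-to-left scan of the same row. The sketch below is a plain-Python illustration of that behaviour, assuming the row is available as a list of cell values; it is not the add-in's own code.

```python
def find_variable_name(row, value_col):
    """Return the first non-numeric, non-empty cell to the left of the value.

    `row` is a list of cell values for one worksheet row and `value_col`
    is the zero-based index of the referenced value. Returns the
    "XL_Viking_NoVar" placeholder when no variable name is found.
    """
    for cell in reversed(row[:value_col]):
        if cell is None or cell == "":
            continue                      # skip empty cells
        if not isinstance(cell, (int, float)):
            return str(cell)              # first non-numeric cell wins
    return "XL_Viking_NoVar"

row = ["", "Pressure", "", 101.3]   # name in column B, value in column D
print(find_variable_name(row, 3))   # Pressure
```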
But what if I have more than one column with variable names to the left of the value I am referencing?
What if my variable name is located to the right of the value I am referencing?
Use the second Optional Parameter “ColumnNo” to specify in which column the variable is located. See Below.
In the following example variables are located in Columns B, C and I. (A formula is located in Cell F33 and references values in Column F).
The xlv function is used in Cells F35 to F39 to demonstrate how the second optional parameter is used to specify in which column the variables are located.
The column number can be specified in various ways; e.g., “Column(B1)” can also be used to specify column number 2.
Note 1: If there is no variable defined in the column that is specified, the xlv Function returns “XL_Viking_NoVar” or “xlvikingnovar” to indicate that no variable is defined.
Note 2: The second optional parameter is not used in the above example (hence the double commas), which means any numeric values entered in the formula will default to the XL-Viking adaptive number formatting (see the "Rounder" parameter above).
An ecological concept called “The Pyramid of Numbers” shows the relative abundance of the various organisms that form a food chain. The organisms that far outstrip all the others in terms of total weight (biomass) are the green plants, which form the base of the pyramid. They are known as “producers,” reflecting the fact that they constitute the first and most significant component on which all the other members of the pyramid depend, either directly or indirectly. Without green plants, there would be no life on earth.
Plants are so important because they “capture” the energy of sunlight and use it, through photosynthesis, to combine carbon dioxide and water into carbohydrates, one of the basic foods that all living things require. Although animals must obtain carbohydrates, they cannot produce them as plants and some bacteria do. Lacking the ability to photosynthesize, animals must therefore become consumers.
Herbivores obtain carbohydrates by consuming plants directly. Carnivores and piscivores consume other animals, including some that consume plants. And so it goes on up the pyramid, with each level generally being represented by a progressively lower number of consumers.
A typical example from the fish world would be as follows. Free-floating green algae (a producer) could be eaten by a water flea (Daphnia). Thus, the Daphnia becomes the primary consumer (the first to eat the producer). If a small fish eats the Daphnia, the fish is a secondary consumer (it consumes the producer’s nutrients through a primary consumer). That fish could later be eaten by a tertiary consumer, such as a pike. The pike, if small, could be consumed by a heron, or if large, could be fished by an angler and later eaten — and so on. Clearly, it takes numerous free-floating algae to sustain a single Daphnia. Fewer (but still quite numerous) Daphnia would be required by a mosquitofish. It would take even fewer mosquitofish to feed a pike, etc.
It is this relationship that gives the Pyramid of Numbers its characteristic shape, particularly when biomass is substituted for actual numbers; for example, a single Daphnia weighs less (has a lower biomass) than the total number of algae it eats over a lifetime. We are all part of this intricate relationship.
Since herbivores only have a single level below them (the green plants), it seems logical that any feeding adaptations they possess should be finely adapted to plant consumption. The same obviously applies to secondary, tertiary and other consumers. Some of these adaptations are internal and cannot be easily observed. However, the external ones can.
By far, the best-known herbivorous fishes available in the hobby include the various plecos (Hypostomus, Glyptoperichthys and Liposarcus spp.), the bristlenosed catfishes (e.g. Ancistrus spp.), and the panaques or sucker catfishes (Panaque spp.).
A considerably rarer fish is Euchilichthys guentheri. Yet, despite its rarity, there can be little doubt concerning the level at which it feeds (and lives) or the nature of its diet. In common with all its better-known counterparts, the Euchilichthys has a downward-pointing, sucker-type mouth which clearly indicates that it is predominantly a “solid substrate feeder.” I hesitate to use the term “bottom feeder” because the orientation of the mouth has evolved to eat off of various solid surfaces (such as rocks, leaves, exposed underwater roots or branches, or any other submerged objects) — not just surfaces parallel to the bottom.
Obviously, not all feeding sites have surfaces which are parallel to the bottom of the stream or river in which the fish are found. To maximize these fish’s total feeding areas, the suckermouth has adapted to allow a fish to eat not only off the top of a submerged branch, for example, but also off its sides, underside and other protuberances. The same adaptation serves algae-eaters in home aquaria, which not only eat algae off the bottom of the tank but also cling to the sides of the aquarium in search of food. Without the suckerlike mouth, they would not be able to attach themselves; they would simply fall off.
“Padders” and Croppers
A suckerlike mouth does not, however, tell us what diet a fish has. To find essential clues concerning this, we have to look at other features, particularly the teeth. Their number, shape and distribution all carry important messages. The fish mentioned all exhibit several shared characteristics. They all possess numerous small teeth arranged in padlike groups which can be made to lie flat against a chosen surface simply by resting the mouth against the surface and operating the sucker mechanism.
The vast majority of encrustations scraped off by these fish are algae. However, many microscopic organisms, such as certain species of protozoans (single-celled animals), are known to live among encrusting algae. Algae-scraping species may therefore be passively supplementing their vegetable diet with a small, but regular, input of animal matter. The same could be said of other predominantly herbivorous species, such as the various mollies (Poecilia sphenops, P. latipinna), which spend a great deal of time cropping algae in much the same way as their terrestrial equivalents: cows, sheep, zebras and the like.
Despite the huge, obvious differences between a cow and a molly, they both exhibit remarkable similarities in the form of their plant-cropping “equipment.” Basically, what is required is a system consisting of two straight edges which can be brought together, almost like a pair of pincers, at right angles to the vegetation that needs to be cut. If you look at the feeding arrangements possessed by both cows and mollies, this is precisely what you find, and the similarities don’t end there, either.
Internally, herbivores must have a digestive system capable of extracting the maximum amount of nourishment from their relatively poor-quality food, with its hard-to-digest cellulose cell walls. Such a situation demands that ingested plant matter be made to travel the longest possible distance along the digestive tract.
It will come as no surprise, therefore, to discover that herbivores, be they horses or algae-eaters, possess long, convoluted guts. This is so typical that it is quite possible to make statements concerning the diet of a fish, even if the only evidence available is a preserved specimen of unknown identity.
Although many people are aware of the food pyramid, most people think of it in terms of terrestrial animals. Understand where your plant-eating fish fall in the pyramid and their corresponding biology, and you will be a step closer to understanding the world of fish in general. A few fish tank accessories also might enhance the environment.
The previous eruption of Agung in 1963 killed nearly 2,000 people and was followed by a small eruption at its neighboring volcano, Batur.
In September 2017, a distal seismic swarm triggered the evacuation of around 140,000 people from Agung volcano, Bali. Because the volcano's 1963 eruption was among the deadliest volcanic eruptions of the 20th Century, scientists are putting great effort into monitoring and understanding the re-awakening of Agung.
Now, using satellite technology and 3D numerical models, scientists at the University of Bristol have uncovered why the Agung volcano in Bali erupted in November 2017 after 50 years of dormancy. They have shown that seismicity was associated with a deep, sub-vertical magma intrusion between Agung and its neighbor Batur.
Scientists used Sentinel-1 satellite imagery provided by the ESA to monitor the ground deformation at Agung. They then detected uplift of about 8-10 cm on the northern flank of the volcano during the period of intense earthquake activity.
Dr. Juliet Biggs said, “From remote sensing, we are able to map out any ground motion, which may be an indicator that fresh magma is moving beneath the volcano.”
Dr. Fabien Albino, also from Bristol’s School of Earth Sciences, added: “Surprisingly, we noticed that both the earthquake activity and the ground deformation signal were located five kilometers away from the summit, which means that magma must be moving sideways as well as vertically upwards.”
“Our study provides the first geophysical evidence that Agung and Batur volcanoes may have a connected plumbing system.”
“This has important implications for eruption forecasting and could explain the occurrence of simultaneous eruptions such as in 1963.”
Their findings, published today in the journal Nature Communications, could have important implications for forecasting future eruptions in the area. |
- Format:Mixed media product
- Qualification:Cambridge Primary
- Author(s):Emma Low
- Available from: August 2014
This series is endorsed by Cambridge International Examinations and is part of Cambridge Maths.
This teacher's resource for stage 6 will fully support teachers to get the best from their learners and effectively use the learner's book and games book. Detailed lesson plans based on the course objectives are offered, along with additional activity ideas. Teachers will be guided to formatively assess their learners' understanding. They will have the confidence to engage the class in mathematical discussion and encourage learners to justify answers and make connections between ideas. Answers to the learner's book and all photocopiable sheets required are provided. All book content, plus more, is included on the CD for convenience. Mac users, please note that CD-ROMs do not autostart when used with Macs; they will need to be started manually. If you have any queries, please visit our customer support page or contact customer services.
Detailed lesson plans based on the syllabus objectives are offered.
Additional activities are suggested so you can adapt the lessons to the needs of your learners.
Strategies on encouraging mathematical dialogue and advice on formative assessment, differentiation, vocabulary and prior knowledge, and a clear objective mapping grid are provided to help you plan your teaching.
Answers to the questions in the learner’s book are included.
All photocopiable sheets required are provided.
All book content, plus more, is included on the CD for convenience.
- 1. The Number system (whole numbers)
- 2. Number: factors, multiples and primes
- 3. Multiplication
- 4. More on number
- 5. Measures: length, mass and capacity (1)
- 6. Measures: time (1)
- 7. Measures: area and perimeter (1)
- 8. Shapes and geometric reasoning: 2D and 3D shape (1)
- 9. Shapes and geometric reasoning: angles (1)
- 10. Shapes and geometric reasoning: position and direction (1)
- 11. The number system
- 12. Decimals
- 13. Positive and negative numbers
- 14. Multiples and factors and mental strategies using them
- 15. Multiplication and division
- 16. Special numbers
- 17. Measures: length, mass and capacity (2)
- 18. Measures: time (2)
- 19. Measures: area and perimeter (2)
- 20. Handling data: data displays
- 21. Handling data: statistics
- 22. Handling data: probability
- 23. The number system
- 24. Number: mental strategies
- 25. Number: addition and subtraction
- 26. Number: multiplication and division
- 27. Number: fractions
- 28. Number: fractions, decimals and percentages
- 29. Number: ratio and proportion
- 30. Measures: length, mass and capacity (3)
- 31. Measures: time (3)
- 32. Measures: area and perimeter (3)
- 33. Shapes and geometric reasoning: 2D and 3D shape (2)
- 34. Shapes and geometric reasoning: locating 2D shapes
- 35. Shapes and geometric reasoning: angles (2).
Emotions Activities for Preschoolers – Kindergarten worksheets present an engaging way for kindergarten children to learn and reinforce basic concepts. Since children learn best by doing, and since they get bored very easily, giving them well-designed, illustrated worksheets makes it easier and more fun for them to learn. Completing a worksheet also gives children a great sense of achievement.
How to use worksheets for the greatest impact:
1. Give children worksheets appropriate to their level. Give a simple worksheet for a concept immediately after you teach that concept.
2. The worksheets should require a child to think just a little. If a child finds an exercise too difficult, give him an easier one. It is important that the child does not get frustrated. Remember that different children have widely varying levels of comprehension and speeds of learning.
3. It helps if the worksheets are well-illustrated. Using cartoon characters makes them more interesting for a child. Building in familiar situations encountered at home, at school, in the marketplace and so on, and using common objects known to children, makes emotions activities for preschoolers more relevant.
4. Try to supplement every worksheet with a real-life activity. For example, after a worksheet on counting, you can ask the child to pick out 3 biscuits and 2 carrots from a larger group.
5. Keep in mind that a child is learning many new things at once. A child of this age has a tremendous capacity to learn many new things fast, and may also forget them equally fast. Doing many interesting worksheets, with cartoons and so on, is fun and helps continually reinforce what has been learned.
6. Give positive feedback and encourage the child. His finer motor skills are still developing, so do not expect or strive for perfection. Do not give any writing exercise too early, i.e. until he is totally comfortable holding a pencil. Spend enough time and regularly reinforce the learning in day-to-day situations. Most importantly, it should be enjoyable for both the teacher and the taught!
The FUNCTION of a cooling tower is to reduce the temperature of an open recirculating cooling water stream so that the water can be reused. The main PURPOSE of a cooling tower is to conserve water.
A cooling tower is a heat rejection device that cools water by evaporation. This is accomplished by recirculating the water and passing ambient air through it. Heat energy is removed through this evaporation process, thus dropping the temperature of the remaining recirculating water. Approximately 970 BTUs are removed from the recirculating water for every pound of water that is evaporated. As the water evaporates in a cooling tower system, it leaves behind all of the dissolved solids that were present in the fresh makeup water used to replenish system losses (due to evaporation and system bleed). Remember that only water evaporates - any dissolved solids or suspended solids that remain behind in the water become more concentrated as additional water is evaporated. As the concentration of dissolved solids builds up in the recirculating water, it is said to be "cycling up". Here is an example: our makeup water has a dissolved solids concentration of 500 ppm. If we allow the dissolved solids in this water to concentrate up in an open recirculating water loop to 1500 ppm, we are running at 3 cycles of concentration.
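The arithmetic behind cycles of concentration and evaporative heat removal is simple enough to sketch. The short Python example below only restates the figures quoted in the paragraph above (500 ppm makeup, 1500 ppm recirculating water, roughly 970 BTU removed per pound evaporated); the function names are illustrative.

```python
# Back-of-the-envelope cooling tower arithmetic using the figures quoted above.

LATENT_HEAT_BTU_PER_LB = 970        # approx. heat removed per pound of water evaporated

def cycles_of_concentration(recirc_tds_ppm, makeup_tds_ppm):
    """Cycles of concentration = dissolved solids in the recirculating water
    divided by dissolved solids in the fresh makeup water."""
    return recirc_tds_ppm / makeup_tds_ppm

def heat_removed_btu(pounds_evaporated):
    """Heat rejected by evaporating a given mass of water."""
    return pounds_evaporated * LATENT_HEAT_BTU_PER_LB

print(cycles_of_concentration(1500, 500))   # 3.0 cycles, as in the example above
print(heat_removed_btu(1000))               # 970,000 BTU removed per 1,000 lb evaporated
```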
What happens when some of the minerals that make up the dissolved solids reach their saturation point? We will investigate that situation in our blog next week. |
CBSE NCERT Solutions for Class 10 Science, Chapter 14: Sources of Energy
In Text Questions
Page No: 243
1. What is a good source of energy?
A good source of energy fulfils the following criteria:
→ It produces a lot of heat per unit mass.
→ It does a huge amount of work per unit mass.
→ It is easily accessible.
→ It is easy to store and transport.
→ It is economical.
→ It produces less amount of smoke.
2. What is a good fuel?
A good fuel produces a huge amount of heat on burning, does not produce a lot of smoke, and is easily available.
3. If you could use any source of energy for heating your food, which one would you use and why?
Natural gas can be used for heating and cooking food because it is a clean source of energy. It does not produce a huge amount of smoke on burning. Although it is highly inflammable, it is easy to use and transport, and it produces a huge amount of heat on burning.
Page No: 248
1. What are the disadvantages of fossil fuels?
The disadvantages of fossil fuels are:
→ Burning of coal and petroleum produces a lot of pollutants causing air pollution.
→ Fossil fuels release oxides of carbon, nitrogen, sulphur, etc. that cause acid rain, which affects the soil fertility and potable water.
→ Burning of fossil fuels produces gases such as carbon dioxide that cause global warming.
2. Why are we looking at alternate sources of energy?
Fossil fuels, which are traditionally used by human beings everywhere as energy sources, are non-renewable sources of energy. These sources of energy are limited and will disappear after some time. They are being consumed at a rapid rate. Therefore, we should conserve these energy sources and look for alternate sources of energy.
3. How has the traditional use of wind and water energy been modified for our convenience?
Earlier, windmills were used to harness wind energy to do mechanical work such as lifting or drawing water from a well. Today, windmills are used to generate electricity. In a windmill, the kinetic energy of the wind is harnessed and converted into electricity.
Water energy, which was earlier used for transportation, is now a good source for generating electricity. Dams have been constructed on rivers for generating electricity. Waterfalls were used as a source of potential energy, which was converted into electricity with the help of turbines.
Page No: 253
1. What kind of mirror – concave, convex or plain – would be best suited for use in a solar cooker? Why?
A concave mirror is used in a solar cooker as it uses heat of the sunlight to cook food. The mirror focuses all the incident sunlight at a point. The temperature at that point increases, thereby cooking and heating the food placed at that point.
2. What are the limitations of the energy that can be obtained from the oceans?
The forms of energy that can be obtained from the ocean are tidal energy, wave energy, and ocean thermal energy. There are several limitations in order to harness these energies.
→ Tidal energy depends on the relative positioning of the Earth, moon, and the Sun.
→ High dams are required to be built to convert tidal energy into electricity.
→ Very strong waves are required to obtain electricity from wave energy.
→ To harness ocean thermal energy efficiently, the difference in the temperature of surface water (hot) and the water at depth (cold) must be 20ºC or more.
3. What is geothermal energy?
Geothermal power plants use heat of the Earth to generate electricity. This heat energy of the Earth is known as geothermal energy.
4. What are the advantages of nuclear energy?
The advantages of nuclear energy are:
→ Large amount of energy is produced per unit mass.
→ It does not produce smoke. It is a clean energy.
→ Fission of one atom of uranium produces 10 million times the energy released by burning of one atom of carbon.
→ Fusion of four hydrogen atoms produces a huge amount of energy, approximately equal to 27 MeV (a rough check of these figures is sketched below).
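As a rough check on the figures above, the mass-defect arithmetic can be done in a few lines. The Python sketch below uses standard atomic masses and the conversion 931.5 MeV per atomic mass unit; it reproduces the roughly 27 MeV figure for hydrogen fusion, while the fission-versus-carbon comparison is only an order-of-magnitude estimate consistent with the "10 million times" statement.

```python
# Rough check of the energy figures quoted above, using E = (mass defect) x c^2
# expressed as 931.494 MeV per atomic mass unit (u).

U = 931.494            # MeV released per u of mass defect
M_H1 = 1.007825        # atomic mass of hydrogen-1, in u
M_HE4 = 4.002602       # atomic mass of helium-4, in u

# Fusion: 4 hydrogen atoms -> 1 helium-4 atom
mass_defect = 4 * M_H1 - M_HE4
fusion_energy_mev = mass_defect * U
print(f"Fusion of 4 H -> He releases about {fusion_energy_mev:.1f} MeV")   # ~26.7 MeV

# Fission vs. burning carbon (order-of-magnitude only)
fission_energy_mev = 200.0          # typical energy per uranium-235 fission
carbon_combustion_ev = 4.0          # roughly a few eV per carbon atom burned
ratio = fission_energy_mev * 1e6 / carbon_combustion_ev
print(f"Fission releases roughly {ratio:.0e} times the energy of burning one carbon atom")
```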
1. Can any source of energy be pollution-free? Why or why not?
No source of energy can be pollution-free. Every source of energy causes some type of pollution; for example, the wastes of a nuclear reaction are very dangerous to the environment.
2. Hydrogen has been used as a rocket fuel. Would you consider it a cleaner fuel than CNG? Why or why not?
Hydrogen gas is cleaner than CNG. CNG contains hydrocarbons and therefore carbon, which is released as a pollutant when CNG is burned. On the other hand, burning hydrogen produces only water and leaves no harmful waste. Hence, hydrogen is cleaner than CNG.
Page No: 254
1. Name two energy sources that you would consider to be renewable. Give reasons for your choices.
Two renewable sources of energy are:
→ Sun: The energy derived from the Sun is known as solar energy. Solar energy is produced by the fusion of hydrogen into helium, the fusion of helium into other heavier elements, and so on. A large amount of hydrogen and helium is present in the Sun, which has billions of years more to burn. Therefore, solar energy is a renewable source of energy.
→ Wind: Wind energy is derived from fast-blowing air and is harnessed by windmills in order to generate electricity. Air blows because of the uneven heating of the Earth. Since this heating of the Earth will continue, wind energy will also be available forever.
2. Give the names of two energy sources that you would consider to be exhaustible. Give reasons for your choices.
Two exhaustible energy sources are as follows:
→ Coal: It is produced from the dead remains of plants and animals that have remained buried under the Earth’s crust for millions of years. It takes millions of years to produce coal. Industrialization has increased the demand for coal, but coal cannot be replenished within a short period of time. Hence, it is a non-renewable or exhaustible source of energy.
→ Wood: It is obtained from forests. Rapid deforestation has caused a reduction in the number of forests on the Earth. It takes hundreds of years to grow a forest. If deforestation continues at this rate, there will be no wood left on the Earth. Hence, wood is an exhaustible source of energy.
1. A solar water heater cannot be used to get hot water on
(a) a sunny day
(b) a cloudy day
(c) a hot day
(d) a windy day
► (b) a cloudy day
Page No: 255
2. Which of the following is not an example of a bio-mass energy source?
(b) gobar gas
(c) nuclear energy
► (c) nuclear energy
3. Most of the sources of energy we use represent stored solar energy. Which of the following is not ultimately derived from the Sun’s energy?
(a) Geothermal energy
(b) Wind energy
(c) Nuclear energy
► (c) Nuclear energy
4. Compare and contrast fossil fuels and the Sun as direct sources of energy.
Fossil fuels are energy sources, such as coal and petroleum, obtained from underneath the Earth’s crust. They are directly available to human beings for use; hence, fossil fuels are direct sources of energy. They are limited in amount and are non-renewable sources of energy because they cannot be replenished in nature on a human timescale. Fossil fuels take millions of years to form; if the present reserves are exhausted, forming new ones will again take millions of years. Fossil fuels are also very costly.
On the other hand, solar energy is a renewable and direct source of energy. The Sun has been shining for billions of years and will continue to do so for about the next five billion years. Solar energy is available free of cost to all in unlimited amount, and it is replenished in the Sun itself.
5. Compare and contrast bio-mass and hydro electricity as sources of energy.
Bio-mass and hydro-electricity both are renewable sources of energy. Bio-mass is derived from dead plants and animal wastes. Hence, it is naturally replenished. It is the result of natural processes. Wood, gobargas, etc. are some of the examples of bio-mass.
Hydro-electricity, on the other hand, is obtained from the potential energy stored in water at a height. Energy from it can be produced again and again. It is harnessed from water and obtained from mechanical processes.
6. What are the limitations of extracting energy from –
(a) the wind? (b) waves? (c) tides?
(a) A windmill requires wind speeds of more than 15 km/h to generate electricity from wind energy. In addition, a large number of windmills are required to obtain a feasible output, and these cover a large area.
(b) Very strong ocean waves are required in order to extract energy from waves.
(c) Very high tides are required in order to extract energy from tides. Also, occurrence of tides depends on the relative positions of the Sun, moon, and the Earth.
7. On what basis would you classify energy sources as
(a) renewable and non-renewable?
(b) exhaustible and inexhaustible?
Are the options given in (a) and (b) the same?
(a) The source of energy that replenishes in nature is known as renewable source of energy. Sun, wind, moving water, bio-mass, etc. are some of the examples of renewable sources of energy.
The source of energy that does not replenish in nature is known as non-renewable source of energy. Coal, petroleum, natural gas, etc. are some of the examples of non-renewable sources of energy.
(b) Exhaustible sources are those sources of energy, which will deplete and exhaust after a few hundred years. Coal, petroleum, etc. are the exhaustible sources of energy.
Inexhaustible resources of energy are those sources, which will not exhaust in future. These are unlimited. Bio-mass is one of the inexhaustible sources of energy.
Yes. The options given in (a) and (b) are the same.
8. What are the qualities of an ideal source of energy?
An ideal source of energy must be:
→ Easily accessible
→ Smoke/pollution free
→ Easy to store and transport
→ Able to produce huge amount of heat and energy on burning
9. What are the advantages and disadvantages of using a solar cooker? Are there places where solar cookers would have limited utility?
A solar cooker uses the Sun’s energy to heat and cook food. It is an inexhaustible, clean and renewable source of energy, available to all free of cost and in unlimited amount. Hence, operating a solar cooker is not expensive.
A disadvantage of a solar cooker is that it is very expensive to buy. It also does not work without sunlight; hence, on a cloudy day, it becomes useless.
Places where the days are too short, or which have cloud cover all year round, offer limited utility for a solar cooker.
10. What are the environmental consequences of the increasing demand for energy? What steps would you suggest to reduce energy consumption?
Industrialization increases the demand for energy. Fossil fuels are easily accessible sources of energy that fulfil this demand. The increased use of fossil fuels has a harsh effect on the environment. Too much exploitation of fossil fuels increases the level of greenhouse gases in the atmosphere, resulting in global warming and a rise in the sea level.
It is not possible to completely reduce the consumption of fossil fuels. However, some measures can be taken such as using electrical appliances wisely and not wasting electricity. Unnecessary usage of water should be avoided. Public transport system with mass transit must be adopted on a large scale. These small steps may help in reducing the consumption of natural resources and conserving them. |
Last month, the European Space Agency's Rosetta space probe arrived at the comet known as 67P/Churyumov–Gerasimenko, thus becoming the first spacecraft to ever rendezvous with a comet. As it continues on its way into the inner Solar System, Rosetta’s sensing instruments have been studying the surface in detail in advance of the attempted landing of its Philae probe.
Because of this, Rosetta has been able to render a map of the various areas on the surface of the comet, showing that it is composed of several different regions created by a range of forces acting upon the object. Images of the comet’s surface were captured by OSIRIS, the scientific imaging system aboard the Rosetta spacecraft, and scientists analyzing them have divided the comet into several distinct regions, each characterized by different classes of features.
All told, areas containing cliffs, trenches, impact craters, rocks, boulders and parallel grooves have been identified and mapped by the probe. Some of the areas that have been mapped appear to be caused by aspects of the activity occurring in and around the nucleus of the comet, such as where particles from below the surface are carried up by escaping gas and vapor and strewn around the surface in the surrounding area.
So detailed are these images that many have been captured at a resolution of one pixel being equal to an area of 194 square centimeters (30 square inches) on the comet surface. Dr. Holger Sierks, OSIRIS’ Principal Investigator from the Max Planck Institute for Solar System Science, puts it into perspective:
Never before have we seen a cometary surface in such detail. It is a historic moment – we have an unprecedented resolution to map a comet… This first map is, of course, only the beginning of our work. At this point, nobody truly understands how the surface variations we are currently witnessing came to be.
The newly-generated comet maps and images captured by the instruments on Rosetta will now provide a range of detail on which to finalize possible landing sites for the Philae probe to be launched to the surface. As such, the Rosetta team will meet in Toulouse, France, on September 13 and 14 to allocate primary and backup landing sites (from a list of sites previously selected) with much greater confidence.
At the same time, Rosetta has revealed quite a bit about the outward appearance of the comet, and it isn't pretty! More often than not, comets are described as “dirty snowballs” because of their peculiar composition of ice and dust. But Rosetta’s Alice instrument, which was supplied by NASA, has sent back preliminary scientific data showing that the comet is more akin to a lump of coal.
Alice is one of eleven instruments carried aboard Rosetta and one of three instrument packages supplied by NASA for the unmanned orbiter. Essentially, it’s a miniature UV imaging spectrograph that looks for thermal markers in the far ultraviolet part of the spectrum in order to learn more about the comet’s composition and history. It does this by looking specifically for the markers associated with noble gases, such as helium, neon, argon, and krypton.
The upshot of all this high-tech imaging is the surprising discovery of what 67P/Churyumov-Gerasimenko looks like. According to NASA, the comet is darker than charcoal. And though Alice has detected oxygen and hydrogen in the comet’s coma, the patches of barren ice that NASA scientists had expected aren’t there. Apparently, this is because 67P/Churyumov-Gerasimenko is too far away from the warmth of the sun to turn the ice into water vapor.
We’re a bit surprised at just how unreflective the comet’s surface is and how little evidence of exposed water-ice it shows.
Launched in 2004, Rosetta reached 67P/Churyumov-Gerasimenko by a circuitous route involving three flybys of Earth, one of Mars, and a long detour out beyond Jupiter as it built up enough speed to catch up to the comet. Over the coming months, as the Rosetta spacecraft and comet 67P move further into the solar system and approach the sun, the OSIRIS team and other instruments on the payload will continue to observe the comet’s surface for any changes.
This is why the mission is of such historic importance. Not only does it involve a spacecraft getting closer to a comet than at any time in our history, it also presents a chance to examine what happens to a comet as it approaches our sun. And if indeed it does begin to melt and break down, we will get a chance to peer inside, which will be nothing less than a chance to look back in time, to a point when our Solar System was still forming.
Swine flu is caused by influenza A virus (H1N1). We first need to explain what type A is and what H and N are. Influenza viruses are divided into 3 main categories: A, B and C. The A virus causes worldwide epidemics (pandemics) of influenza, the B virus causes major outbreaks, while the C virus causes only mild respiratory tract infection. The pandemics caused by influenza A virus occur almost every 10-20 years, but major outbreaks caused by this virus occur annually in various countries.
The key to the persistence of the influenza virus is its genetic material and antigenic composition. Its major surface antigens are hemagglutinin (H) and neuraminidase (N). The H antigen is used to bind to host cells, while the N antigen cleaves budding viruses from infected cells. Hemagglutinin has 4 subtypes (H1, H2, H3 and H5) and the N antigen has 2 (N1 and N2) that have caused human disease. The surface antigens can change, or “shift”, over time. H and N antigens change continuously, reflecting mutations in their genetic material. Antigenic shift occurs when a major change takes place in the antigens, and this shift often triggers a pandemic, because humans often have little or even no pre-existing immunity to the new strain.
Mode of infection
The swine flu virus spreads like any other influenza virus, by droplets or aerosols.
Incubation period: 1-4 days (the period before symptoms appear).
Unfortunately, swine flu has the same symptoms as normal human flu, and these include:
Sudden increase of body temperature, cough, malaise, anorexia, headache, muscle aches, rhinitis and respiratory distress (it may or may not be accompanied by vomiting and diarrhea).
Swine flu is diagnosed from nasopharyngeal swabs, washes or aspirates taken early in the course of the disease. The viruses are fragile and need to be handled carefully; specimens should never be frozen.
As there is no specific treatment available yet, the only way to avoid being infected, or to enhance our immunity, is to follow these measures:
– Avoid crowded places (you can wear a mask)
– Avoid kisses if you are not sure of your kisser
– Wash your hands frequently with water and soap (especially if you sneeze)
– Always provide good airing
– Drink plenty of water and fluids, particularly ginger (to increase immunity)
– Useful foods to increase immunity are cantaloupe, apple, guava, honey, lettuce and radish
– Use vitamins, specifically vitamins A and C, and antioxidants.
I hope this information will help you to pass this hard period. |
British Values Statement
Helping children to understand and appreciate the British Cultural Heritage they live within, alongside developing and celebrating their own identity, place and contribution within this society.
Pentland Infant and Nursery School is a Rights Respecting School. We aim for our children to become valuable and fully rounded members of society, who treat all others with respect and tolerance. We value the community that we are part of and celebrate the diversity of the country and world we live in. We actively promote the basic British values of democracy, the rule of law, individual liberty and mutual respect and tolerance of all regardless of their faith and beliefs.
Our school reflects these British values in all that we do. We aim to nurture our children and help them grow into safe, caring, democratic, responsible and tolerant adults who make a positive difference to British society and to the wider world. We believe that it is our responsibility to help children to be creative, open-minded and independent individuals who can contribute successfully to the society in which they live.
The Department for Education has introduced a statutory duty for schools to promote British Values more actively from September 2014, and to ensure they are taught in schools. They define these values as follows:
- Respect for democracy and support or participation in the democratic process
- Respect for the basis on which the law is made and applies in England
- Support for equality of opportunity for all
- Support and respect for the liberties of all within the law
- Respect for and tolerance of different faiths and religious and other beliefs
At Pentland, our assemblies, PSHE and RE lessons in particular provide excellent opportunities to deepen and develop children’s understanding of these values. However, all curriculum areas provide a vehicle for furthering children’s knowledge of the concepts and providing them with the skills to be able to put them into practice in everyday life.
Below are just some examples of how these values are taught and interwoven into the ethos, work and life of our school:
We value the voice of the child and promote democratic processes. A good example of this is our thriving School Council. Each year each class, from Reception to Year 2, has two councillors voted for and elected by the children in their class. This process helps the children to learn about how adults elect councils and members of parliament in order to represent their interests and give them a voice.
The School Council are given responsibility for gathering children’s views in order to inform and help to shape the work and focus of the school. In the last year our School Council have:
- Helped revise the Dinner Menus
- Evaluated the School Grounds and Environment and suggested changes that needed to happen
- Visited Books Plus to choose books for the Classroom and School Library
- Helped to organise and promote a ‘Safe Parking and Travel to School’ Awareness Day and poster competition
- Hosted a visit to our school from the Unicef Councillors from Haveley Hey Academy Primary School in Manchester
School Council gives children the opportunity to debate issues, put forward different ideas and viewpoints and arrive at an agreed consensus.
Throughout the day, in different lessons and at times such as playtimes, children are encouraged to share ideas, offer their opinions and listen to those of others. There are many opportunities for children to be helped to understand that sometimes an agreement must be reached, that takes in the opinions of the majority and that this then shapes the activity of the class or school.
The children’s opinions are vital and we seek to gather their views on an on-going basis. This is done in many informal ways, as well as formally through our pupil survey, which we undertake once a year. The feedback from this and school council play an important part in school improvement planning.
Rule of Law
The importance of Laws, whether they be those that govern the class, the school, or the country, are consistently reinforced throughout regular school days, as well as when dealing with behaviour and through school assemblies.
The school has a very clear Positive Behaviour Policy, which clearly sets out our expectations in terms of positive and acceptable behaviour. Our ‘Top Ten Behaviour Tips’ help children to understand exactly what actions they need to take to demonstrate good behaviour and to act in such a way as to maintain a happy and safe learning environment for all within school.
The policy has a very clear and structured approach to both rewards and sanctions. These are reviewed with children on a regular basis.
The school is actively working towards the Unicef Rights Respecting School Award and as such has placed the rights of the child at the centre of all we do. Children are taught about their own and other’s rights and their responsibility to both respect the rights of others and what they should personally do in order to help uphold these.
Regular assemblies link our expectations for behaviour to laws which apply to both adults and children. We welcome visitors from authorities within the community, such as the Police and Fire Service, who help to reinforce these messages.
Within school, pupils are actively encouraged to make choices, knowing that they are in a safe and supportive environment. As a school we educate and provide boundaries for young pupils to make choices safely, through provision of a safe environment, effective teaching and empowering education. Pupils are encouraged to know, understand and exercise their rights and personal freedoms and are advised how to do this safely, for example through our E-Safety and PSHE lessons.
Throughout the day, children are given opportunities in lessons and at break times to make choices; this could be a choice of learning or games activity, a method of recording their work or a choice from the menu on offer at lunchtime. Whether it be big or small, children are given the opportunity to exercise their right to the freedom of choice.
Our school is built on the strong belief that everyone has the right to be their own person; our policies uphold every child’s right to equal opportunities, regardless of gender, sexuality, ethnicity, faith, political or financial status. Through all our work in school, we help children to understand that no one should be subject to intimidation and that they should have strength in their own choices and beliefs and not be unduly influenced by the thoughts and actions of others, who may seek to guide their choices inappropriately.
Respect is one of the core values of Pentland School. Children learn to develop respect for themselves and for others. Everyone is invested with the responsibility for promoting and demonstrating respect for each other. The adults in school are role models to the children and our School Councillors understand that they are the key representatives for their peers.
The principle of respect is the foundation of our Behaviour policy, class rules and charters. Our Top Ten Behaviour Tips clearly explain to the children what respectful behaviour looks like on an everyday level. These messages are reinforced in day to day teaching and assemblies.
Children are also encouraged to respect and celebrate their own and other’s achievements and we celebrate these weekly in a special Celebration Assembly.
Tolerance of those of Different Faiths and Beliefs
Pentland is situated in an area which is not particularly culturally diverse; therefore, we place a great emphasis on promoting diversity with the children.
The school makes considerable efforts to ensure children have exposure to opportunities beyond their local community and introduces them to concepts and ideas beyond their everyday experiences, thus helping them to develop tolerance, appreciation and respect for all people within the multicultural society in which we live.
We place a high value on sporting and the arts activities, as well as ensuring every child has the opportunity to experience an educational trip at some point during the year. These experiences are used as platforms for linking with people beyond their own community and embracing the idea of diversity being a positive concept.
Our RE and PSHE teaching teaches tolerance of those of different faiths and beliefs and promotes a greater understanding of religious diversity and practices for those religions represented in the UK. Planning for RE is directed by the Kirklees Agreed Syllabus for Religious Education.
Assemblies are regularly planned to address this issue through the inclusion of moral stories and celebrations from a variety of faiths and cultures. We use national and international events, such as Comic Relief, to help children respect and appreciate people in Britain and internationally. Members of different faiths or religions are encouraged to share their knowledge to enhance learning within classes and the school.
Just a few examples of learning about and celebrating British Values at Pentland:
- Our curriculum topics offer children the chance to reflect on our core values and British values in many ways. For example, in Year 2, there is a strong focus on the life and work of Florence Nightingale and how she contributed to developing the work of nursing in this country. Pupils also learn about Guy Fawkes, with a focus on London and Houses of Parliament.
- As a school we encourage knowledge of current affairs that are significant to us as a nation. The Golden Jubilee was a huge event for the school, with a 'street party' lunch held in the playgrounds and a parade around the streets of Savile Town, with children dressed as Queens and Kings.
- At Christmas, Year One visit a neighbouring school to watch their performance of a traditional Nativity play and all children in school listen to a story from Father Christmas, thereby encouraging the children to understand and respect the Christian faith and celebrations. |
Malayalam is the language of Kerala and is one of the 22 official languages of India. About 35 million people speak Malayalam, which ranks eighth by total number of speakers. People who speak Malayalam are called Malayalees. The natives of Lakshadweep, a union territory off the western coast, also speak Malayalam. The meaning of the word ‘Malayalam’ is ‘land of mountains’; it is derived from the two words ‘Mala’, meaning mountain, and ‘Alam’, meaning land. The spelling of ‘Malayalam’ in English forms a palindrome. The Malayalam language has 53 characters, of which 37 are consonants and 16 are vowels. In writing, there are two lipis (scripts), the old and the new; the new lipi replaced the old one in 1981 and has fewer characters.
History of Malayalam
The five main languages that form the South Dravidian family are Malayalam, Tamil, Kota, Kodagu and Kannada. Malayalam is not an ancient language; rather, it is the youngest of the languages in this family. Malayalam is very similar to Tamil, and it took around four to five centuries for it to become a distinct language from the common Tamil-Malayalam variant. The languages used for administration and scholarship were Tamil and Sanskrit, and both influenced the development of Malayalam. Some Indo-Aryan features were also adopted by Malayalam after the coming of the Brahmins.
The Malayalam vocabulary has taken words from a large number of languages, beginning with Tamil and Sanskrit. The coming of the Europeans also influenced the language, and so it has taken words from English, Portuguese, Dutch and others. Of the words borrowed from other languages, Sanskrit contributes the most, followed by English. At the same time, some Malayalam words have been adopted into other languages; words such as coir, copra and catamaran are originally Malayalam words.
Malayalam Scripts and Writing Malayalam
The Vazhappalli inscription is the oldest among Malayalam records. Kolezhuthu is a Malayalam script, derived from the Grantha script. The Malayalam language has 37 consonants, called vyanjanam, and 16 vowels, called swaram. The new lipi, with its smaller number of characters, was used to type Malayalam words on a typewriter. Malayalam also has its own numerals, but they are not used nowadays.
The pronunciation of Malayalam also varies from region to region; the accent, grammar and vocabulary of a person from one region differ from those of a person from another region. Malayalam is a regional language spoken by the people of Kerala, and the total population that speaks it is small compared with other languages. Even so, there are 170 newspapers, 235 weeklies and more than 550 monthly periodicals published in Malayalam, and a Malayalam newspaper is the most widely circulated regional-language daily newspaper in India.
Today there is a strong trend for people from Kerala to migrate to other countries, which has helped spread the language abroad as well. Malayalam is now taught in educational institutions outside Kerala.
5 Key Nutrients that Protect a Child’s Brain Function!
Did you know that no single food can aid brain development on its own? A healthy, well-balanced diet is what actually does good for a child’s brain. Nutrients can affect not only neuroanatomy, but also neurochemistry and neurophysiology. Lack of proper nutrition can have a significant impact on all of these levels, including healthy brain function, and below-par brain function can affect our children in many ways, including poor school performance. Here are the critical nutrients that play a key role in brain function:
Omega-3 fatty acids: Omega-3 fatty acids are called essential fatty acids and are important for a child’s brain function. Since omega-3 fatty acids are anti-inflammatory, they have the ability to promote healthier brain cells. Good sources of omega-3 include oily fish such as salmon, flax seed, pumpkin seed and walnuts.
Vitamin C: Diets low in vitamin C may affect children’s mental and physical development. Vitamin C is a potent antioxidant nutrient, slowing the deterioration of brain function. Foods rich in vitamin C include capsicum, oranges, kiwi, broccoli, strawberries, cabbage, cauliflower, potato, tomatoes, spinach and green peas.
Zinc: Zinc aids DNA synthesis and neurotransmitter release in the cerebellum. Good sources of zinc include beans, yogurt, cashews, chickpeas, oatmeal, almonds, peas and organ meats.
Choline: Choline is a precursor to acetylcholine, a substance that helps stimulate the brain to make new connections and is also an important part of memory. Foods high in choline include eggs, liver, soybeans, potatoes, cauliflower, lentils and oats.
Iron: Iron plays a key role in the development of myelin, found in the white matter of the brain. Iron sources include green leafy vegetables, dates, figs and organ meats.
Apart from these foods and nutrients, play also seems to have a wonderful role in a toddler’s brain development.
Writing in EYFS and KS1
Early writing in EYFS and KS1 is taught through Read Write Inc. The children are taught to use the sounds that they have learnt in reading to write words, and then progress onto writing sentences. The children’s writing is very closely linked to their reading level to allow them time to embed the sounds that they have been learning.
Talk for Writing Across the Whole School
At Illogan, we use the Talk for Writing approach to teach writing across the whole school. This writing approach focuses on learning a story and immersing the children in rich vocabulary and sentence structure before using this story as a base for inventing their own stories. We also use this approach for teaching and learning how to write non-fiction texts. In EYFS, we focus on speaking the story and then offer writing opportunities when the children are ready through continuous provision. Please see the link below to find out a little more about Talk for Writing.
Below are some examples of story maps that the children have used to learn and then write their own stories.
Each half term, the children learn poems linked to their class topics. The children also have the opportunity to create their own poetry based upon these topics.
In KS1 and KS2 the children have spellings to practise and learn in school and at home. Each week, the children will learn a new spelling rule and their weekly spellings fit this rule. We use the Spelling Shed scheme and all children from Y1-Y6 will have logins for this site.
How to help my child with spellings.
To help your child learn their spellings, please login to Spelling Shed and have a go at some of the spelling activities every week. New spelling activities will be set by your child’s class teacher weekly. See below for a link to the Spelling Shed website. We also send home a spelling sheet every week. You could support your child by discussing the spelling rule with them and helping them to fill in the spelling sheet.
As well as learning spellings that link to a spelling rule, the children will learn common exception words. These words don’t fit a spelling rule and therefore need to be learnt independently. Please see below for the common exception words that the children will learn to spell by the end of the year for each year group. You can help your children at home by learning to spell a few words per week.
The children start learning to form letters through Read Write Inc. lessons in EYFS. They have the opportunity to practise forming letters daily during these lessons. The children also enjoy practising letter formation through continuous provision. Please see the picture below for the Read Write Inc. letter formation rhymes that we teach the children.
In Year 1 up to Year 6, the children have regular handwriting lessons. In Year 1, the children build upon the letter formation that they have learnt in EYFS. From Year 2 to Year 6, the children learn to join their handwriting.
How to help my child with their handwriting.
Practise letter formation or joins with your child. In EYFS, practise starting letters in the correct place and saying the rhyme as you form the letter. From Year 1 up to Year 6, have a look at the handwriting progression below and practise the letter formation or joins for your child’s year group. When writing, please remember that your child needs to be sitting on a chair with their feet on the floor and practising writing on a line rather than on plain paper.
If you have any questions about handwriting, please ask your child’s class teacher. They will be more than happy to help. |
Rome from the modern era to the 20th Century
From the church state to the capital of modern Italy
The fall of the Vatican's Pontifical State began with the occupation of Rome by Napoleon and the abolition of the Papal States. In the revolutionary period and in the context of the independence movement, Rome was occupied several times. In 1870 Pope Pius IX was forced to finally give up and Rome became the capital of the new Italian Kingdom. The Vatican regained its autonomy only in 1929 under Mussolini. Since 1946 Rome has been the capital of the modern Republic of Italy.
The end of the medieval church-state
In the glorious years between the 15th and 18th centuries, shaped by the art of the Renaissance and Baroque periods, many of the magnificent buildings were built that characterize the Italian capital to this day. In 1798 French revolutionary troops occupied the city, and in 1809 Napoleon abolished the Papal States by decree. Pope Pius VII was deported to France and Napoleon's son was given the title King of Rome. With the defeat of Napoleon in 1815, the Congress of Vienna restored the church state, and the Pope returned to Rome as a sovereign. In the turmoil of the revolutionary years between 1848 and 1850, riots broke out in Rome and Pope Pius IX had to flee from Giuseppe Mazzini and Giuseppe Garibaldi. However, in 1850 he returned to Rome with the help of French troops and banished 20,000 supporters of the Risorgimento.
Monument of Vittorio Emanuele II.
Kingdom of Italy and Fascism
When the national Risorgimento movement proclaimed the united Kingdom of Italy under King Victor Emmanuel II in 1861, the Vatican successfully refused to join. It was not until 1870, after the French protective forces had left Rome because of the Franco-Prussian War, that Pope Pius IX gave up. The church state was finally incorporated into the Italian kingdom in 1870. Rome became the new capital and the pope remained in voluntary exile in today's Vatican. In 1922 Mussolini marched on Rome and forced his appointment as Prime Minister, establishing a fascist dictatorship. In 1929 Mussolini eventually resolved the ongoing dispute with the Pope: in the Lateran Pacts, Italy granted the Vatican state autonomy and a say in Italian family and matrimonial law.
Rome until today
As early as 1940 Italy entered the war on the German side; Mussolini was an important ally of Hitler. In 1943 Mussolini was deposed and arrested by the king. However, he was freed by German troops, which occupied Rome shortly thereafter. In 1944 the German army withdrew and the Allied forces liberated the city of Rome. In 1946 the King finally abdicated and Rome became the capital of the Republic of Italy. Today, Rome has nearly 2.8 million residents, and a large civil service and bureaucracy administer the entire country from the capital.
Using detailed computer simulations, Oxford University research has revealed why falcons dive at their prey using the same steering laws as man-made missiles.
In a study published today in PLOS Computational Biology, researchers from Oxford’s Department of Zoology used computer simulations of peregrine falcon attacks to show that the extreme speeds reached during dives from high altitudes enhance the raptors’ ability to execute the manoeuvres needed to successfully attack agile prey that would otherwise escape.
Professor Graham Taylor and PhD student Robin Mills, alongside colleagues from the University of Groningen, built a physics-based computer simulation of bird flight that pits falcons against prey. The team had previously shown that falcons attack their prey using the same steering rules as man-made missiles.
The simulation incorporated the aerodynamics of bird flight, how birds flap and tuck their wings, how falcons perceive their prey and react to it with delay and how falcons target their prey like a missile. It showed that high-speed dives enable peregrines to manoeuvre faster, producing much higher aerodynamic forces, thereby maximising their chance of seizing agile prey.
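The “steering rules” referred to here are of the kind used in missile guidance, where the attacker turns in proportion to how quickly its line of sight to the target is rotating. The toy two-dimensional chase below is only a generic illustration of that idea, with an arbitrary navigation gain and reaction delay; it is not the researchers’ simulation, which also modelled aerodynamics, flapping and wing-tucking in far more detail.

```python
import math

# Toy 2D pursuit using a proportional-navigation-style steering rule:
# commanded turn rate is proportional to the rotation rate of the line of sight.

DT = 0.01          # time step, s
N_GAIN = 3.0       # navigation gain (arbitrary choice)
DELAY_STEPS = 5    # crude visual reaction delay, in time steps (arbitrary choice)

falcon = {"x": 0.0, "y": 300.0, "speed": 60.0, "heading": -0.5}   # fast, diving
prey   = {"x": 200.0, "y": 0.0, "speed": 15.0, "heading": 0.0}    # slower, level

los_history = []
for step in range(3000):
    # Line-of-sight angle from falcon to prey
    los = math.atan2(prey["y"] - falcon["y"], prey["x"] - falcon["x"])
    los_history.append(los)

    # Steer using a delayed estimate of the line-of-sight rotation rate
    if len(los_history) > DELAY_STEPS + 1:
        los_rate = (los_history[-1 - DELAY_STEPS] - los_history[-2 - DELAY_STEPS]) / DT
        falcon["heading"] += N_GAIN * los_rate * DT

    # Move both birds forward along their current headings
    for bird in (falcon, prey):
        bird["x"] += bird["speed"] * math.cos(bird["heading"]) * DT
        bird["y"] += bird["speed"] * math.sin(bird["heading"]) * DT

    # Count a pass within 2 m as a catch
    if math.hypot(prey["x"] - falcon["x"], prey["y"] - falcon["y"]) < 2.0:
        print(f"Intercept after {step * DT:.2f} s")
        break
```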
In addition the simulation showed that high-speed dives require very precisely tuned steering for a falcon to attack successfully, revealing that the stoop is a highly specialist hunting technique. The research team found that optimal tuning of the mathematical laws that control steering in the simulation corresponded closely to measurements of steering for real falcons.
The team is now extending its simulation to explore other unique and specialised attack strategies as well as escape tactics employed by different raptor species.
‘Ultimately,’ says Mills, ‘we aim to understand the arms race between aerial predators and their prey that has led raptors to become some of the fastest and most agile animals on Earth.’ |
On the eastern coast of Gibraltar, steep cliffs conceal a network of caves containing artwork and engravings that were created by Neanderthals more than 39,000 years ago.
However, this world heritage site, along with dozens of others found across the Mediterranean coast, is now at risk of being damaged or destroyed as a result of sea level rise, a study finds.
Using data taken from the study, Carbon Brief has produced an interactive map showing the flood risk faced by 49 world heritage sites found in southern Europe and northern Africa at different levels of sea level rise.
The findings show that, today, 37 out of the 49 sites are already at risk and, by the end of the century, the average flood risk across the region could increase by a further 50%.
Where possible, it may be necessary to move these iconic sites further inland in order to protect them from climate change, the lead author tells Carbon Brief.
What does the map show?
The map shows 49 coastal sites across the Mediterranean, all of which are located at less than 10m above sea level.
Each point on the map represents one of the United Nations Educational, Scientific and Cultural Organisation’s (Unesco) world heritage sites. For a landmark to be considered a world heritage site, it must be of “outstanding universal value”, according to Unesco. In total, there are 263 world heritage sites across the Mediterranean.
Sites on the map include the iconic ancient cities of Venice and Naples in Italy, as well as lesser known sites including Gorham’s Cave Complex, Gibraltar, which contains Neanderthal artwork, and the ruins of Butrint, Albania, a prehistoric settlement which was occupied by ancient Greeks and Romans.
The researchers quantify “flood risk” by developing an index, from zero to 10, that combines potential flood depth and area. The index is calculated using digital maps of topography, data on storm surge heights and projections of sea level rise.
The map shows the flood risk faced by each site from 2000 to 2100, according to this index. The slider can be used to show the risk at 10-year intervals. On the map, yellow indicates the lowest flood risk, while dark purple indicates the highest.
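As a purely illustrative aside, the sketch below shows one way an index from zero to 10 combining flood depth and flooded area could be constructed. It is not the index defined in Reimann et al.; the maximum depth, the weighting between depth and area, and the example numbers are all assumptions made for demonstration.

```python
# Purely illustrative stand-in for a 0-10 flood-risk index that combines flood
# depth and flooded area, in the spirit described above. The actual index in
# Reimann et al. (2018) is defined differently; the depth cap, the weights and
# the example values here are assumptions for demonstration only.
def flood_risk_index(flood_depth_m, flooded_area_fraction,
                     max_depth_m=5.0, depth_weight=0.5):
    """Map flood depth (m) and flooded-area fraction (0-1) onto a 0-10 scale."""
    depth_score = min(flood_depth_m / max_depth_m, 1.0)       # 0-1
    area_score = min(max(flooded_area_fraction, 0.0), 1.0)    # 0-1
    combined = depth_weight * depth_score + (1 - depth_weight) * area_score
    return round(10 * combined, 1)

# Example: a 100-year storm surge of 1.2 m flooding 30% of a site's area
print(flood_risk_index(1.2, 0.30))   # -> 2.7 on the 0-10 scale
```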
RCP2.6: The RCPs (Representative Concentration Pathways) are scenarios of future concentrations of greenhouse gases and other forcings. RCP2.6 (also sometimes referred to as “RCP3-PD”) is a “peak and decline” scenario where stringent mitigation and carbon dioxide removal technologies mean atmospheric CO2 concentration peaks and then falls during this century. By 2100, CO2 levels increase to around 420ppm – around 20ppm above current levels – equivalent to 475ppm once other forcings are included (in CO2e). By 2100, global temperatures are likely to rise by 1.3-1.9C above pre-industrial levels.
Sea level rise increases coastal flood risk by raising water levels, which means that, during high tides or a storm, coastal defences are more likely to become overwhelmed, says Dr Lena Reimann, a researcher at the City University of New York and Kiel University, Germany and lead author of the study published in Nature Communications.
Sea level rise also increases the average height of a “storm surge” – a rising of the sea above the normal tide level during a storm, which causes coastal flooding.
The map can be toggled between different future scenarios of climate change and, thus, sea level rise. These include “RCP2.6”, a relatively low emissions scenario where warming is limited to below 2C above pre-industrial levels; “RCP4.5”, a moderate emissions scenario and “RCP8.5”, a comparatively high emissions scenario.
The map shows how, at present, 37 out of the 49 sites are already at risk of being flooded by the size of storm expected once every 100 years.
The risk of flooding has already reached the maximum of 10 in several sites, including Venice and its surrounding lagoons. Venice is particularly vulnerable to sea level rise because it is low-lying, built over seawater, and the weight of its buildings is causing the land beneath it to sink.
The flood risk across the region during the 2000s averaged 3.7. If warming is limited to 2C (RCP2.6), this is projected to rise by a quarter (to 4.6) and if little action is taken to tackle climate change (RCP8.5), this could increase by almost half to 5.5.
Two sites that are expected to face considerably higher risks if little is done to tackle climate change are Tipasa, Algeria, a former Roman military base which also plays host to palaeochristian and Byzantine ruins, and the Old Town of Corfu, off the coast of Greece, which has its roots in the eighth century BC.
The study also estimated how sea level rise could increase the risk of coastal erosion faced by each site.
Coastal erosion occurs when the action of waves, winds and tides eats away at the land, causing the shoreline to retreat. Sea level rise can worsen coastal erosion by causing the tide to move closer to the land and allowing waves to reach further up and into the coastline.
The study finds that, at present, 42 out of the 49 sites are at risk from coastal erosion. This number increases to 46 under the high emissions scenario.
The average erosion risk increases from 6.2 out of 10 in 2000 to 6.4 in 2100 under RCP2.6 and RCP4.5. Under RCP8.5, the average risk increases to 6.4.
The site facing the highest risk of coastal erosion is Tyre, Lebanon – an ancient Phoenician city where, according to legend, purple dye was invented.
The findings show that, even under relatively low sea level rise, the Mediterranean’s world heritage sites are likely to face increasing risks, says Reimann:
“Adaptation will be essential for those sites to maintain their outstanding universal value, but common adaptation measures are not likely to be adequate for this purpose.”
Common measures to combat coastal flooding and erosion, such as flood barriers and sea dikes, could be inadequate because they are unsightly, she says:
“Different disciplines will need to contribute to design non-conventional, innovative solutions. Such disciplines include architecture, arts, and engineering.”
In some cases, the only option could be to move the site of interest further inland and away from the threats posed by sea level rise, she says:
“Relocation – technically feasible in some cases – may be the last resort for world heritage sites at very high risk and where other adaptation measures cannot be implemented. As the outstanding universal value of each site is bound to its location, relocation must be assessed on a case-by-case basis.”
Reimann, L. et al. (2018) Mediterranean UNESCO World Heritage at risk from coastal flooding and erosion due to sea-level rise, Nature Communications, doi:10.1038/s41467-018-06645-9
Map by Ros Pearce and Tom Prater for Carbon Brief
In July, the world celebrated the 50th anniversary of the historic Apollo 11 moon landing. MIT played an enormous role in that accomplishment, helping to usher in a new age of space exploration. Now MIT faculty, staff, and students are working toward the next great advances — ones that could propel humans back to the moon, and to parts still unknown.
“I am hard-pressed to think of another event that brought the world together in such a collective way as the Apollo moon landing,” says Daniel Hastings, the Cecil and Ida Green Education Professor and head of the Department of Aeronautics and Astronautics (AeroAstro). “Since the spring, we have been celebrating the role MIT played in getting us there and reflecting on how far technology has come in the past five decades.”
“Our community continues to build on the incredible legacy of Apollo,” Hastings adds. Some aspects of future of space exploration, he notes, will follow from lessons learned. Others will come from newly developed technologies that were unimaginable in the 1960s. And still others will arise from novel collaborations that will fuel the next phases of research and discovery.
“This is a tremendously exciting time to think about the future of space exploration,” Hastings says. “And MIT is leading the way.”
Sticking the landing
Making a safe landing — anywhere — can be a life-or-death situation. On Earth, thanks to a network of global positioning satellites and a range of ground-based systems, pilots have instantaneous access to real-time data on every aspect of a landing environment. The moon, however, is not home to any of this precision navigation technology, making it rife with potential danger.
NASA’s recent decision to return to the moon has made this a more pressing challenge — and one that MIT has risen to before. The former MIT Instrumentation Lab (now the independent Draper) developed the guidance systems that enabled Neil Armstrong and Buzz Aldrin to land safely on the moon, and that were used on all Apollo spacecraft. This system relied on inertial navigation, which integrates acceleration and velocity measurements from electronic sensors on the vehicle and a digital computer to determine the spacecraft’s location. It was a remarkable achievement — the first time that humans traveled in a vehicle controlled by a computer.
Today, working in MIT’s Aerospace Controls Lab with Jonathan How, the Richard Cockburn Maclaurin Professor of Aeronautics and Astronautics, graduate student Lena Downes — who is also co-advised by Ted Steiner at Draper — is developing a camera-based navigation system that can sense the terrain beneath the landing vehicle and use that information to update the location estimation. “If we want to explore a crater to determine its age or origin,” Downes explains, “we will need to avoid landing on the more highly-sloped rim of the crater. Since lunar landings can have errors as high as several kilometers, we can’t plan to land too closely to the edge.”
Downes’s research on crater detection involves processing images using convolutional neural networks and traditional computer vision methods. The images are combined with other data, such as previous measurements and known crater location information, enabling increased precision vehicle location estimation.
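To make the two ideas above concrete (inertial dead reckoning plus camera-based corrections from recognised craters), here is a toy one-dimensional predict/update loop in the style of a Kalman filter. It is only a sketch of the general technique, not the MIT/Draper system: the noise levels, the update interval, and the descent profile are invented for illustration.

```python
# Toy sketch of terrain-relative navigation: dead-reckon from inertial
# measurements, then correct the estimate whenever a mapped landmark (e.g. a
# known crater) is recognised in a camera image. A 1-D Kalman-style filter
# with made-up noise values; not the actual flight software.
import numpy as np

def predict(x, v, P, accel, dt, accel_var=0.04):
    """Inertial prediction: integrate acceleration to update position/velocity."""
    x_new = x + v * dt + 0.5 * accel * dt**2
    v_new = v + accel * dt
    P_new = P + accel_var * dt**2          # uncertainty grows while dead-reckoning
    return x_new, v_new, P_new

def update(x, P, z, meas_var=25.0):
    """Correction from a camera fix on a crater whose map position is known."""
    K = P / (P + meas_var)                 # Kalman gain
    x_new = x + K * (z - x)                # pull the estimate toward the landmark fix
    P_new = (1 - K) * P
    return x_new, P_new

# Descend for 100 s, with a crater fix every 20 s (all numbers illustrative)
x_est, v_est, P = 0.0, 50.0, 1.0
true_x, true_v = 0.0, 50.0
rng = np.random.default_rng(0)
for step in range(1000):
    dt, accel = 0.1, -0.5
    true_x += true_v * dt + 0.5 * accel * dt**2
    true_v += accel * dt
    x_est, v_est, P = predict(x_est, v_est, P, accel + rng.normal(0, 0.2), dt)
    if step % 200 == 199:                  # periodic crater-relative position fix
        z = true_x + rng.normal(0, 5.0)
        x_est, P = update(x_est, P, z)
print(f"position error after descent: {abs(x_est - true_x):.1f} m")
```

Without the periodic landmark updates, the dead-reckoned error grows steadily; with them, it stays bounded, which is the basic argument for adding camera-based terrain-relative navigation to an inertial system.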
“When we return to the moon, we want to visit more interesting locations, but the problem is that more interesting can often mean more hazardous,” says Downes. “Terrain-relative navigation will allow us to explore these locations more safely.”
“Make it, don’t take it”
NASA also has its sights set on Mars — and with that objective comes a very different challenge: What if something breaks? Given that the estimated travel time to Mars is between 150 and 300 days, there is a relatively high chance that something will break or malfunction during flight. (Just ask Jim Lovell or Fred Haise, whose spacecraft needed serious repairs only 55 hours and 54 minutes into the Apollo 13 mission.)
Matthew Moraguez, a graduate student in Professor Olivier L. de Weck’s Engineering Systems Lab, wants to empower astronauts to manufacture whatever they need, whenever they need it. (“On the fly,” you could say).
“In-space manufacturing (ISM) — where astronauts can carry out the fabrication, assembly, and integration of components — could revolutionize this paradigm,” says Moraguez. “Since components wouldn’t be limited by launch-related design constraints, ISM could reduce the cost and improve the performance of existing space systems while also enabling entirely new capabilities.”
Historically, a key challenge facing ISM is correctly pairing the components with manufacturing processes needed to produce them. Moraguez approached this problem by first defining the constraints created by a stressful launch environment, which can limit the size and weight of a payload. He then itemized the challenges that could potentially be alleviated by ISM and developed cost-estimating relationships and performance models to determine the exact break-even point at which ISM surpasses the current approach.
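As a rough illustration of what such a break-even analysis looks like, the sketch below compares the cumulative cost of launching finished spare parts against the up-front cost of in-space manufacturing equipment plus launched feedstock. Every figure in it (launch cost per kilogram, part and feedstock masses, equipment cost, packaging overheads) is an assumption chosen for demonstration, not a value from Moraguez's models.

```python
# Illustrative back-of-the-envelope break-even comparison between launching
# spare parts from Earth and manufacturing them in space (ISM). All cost
# figures and masses are invented for demonstration; the actual
# cost-estimating relationships are far more detailed.
def launch_spares_cost(n_parts, part_mass_kg=2.0, launch_cost_per_kg=20_000,
                       packaging_overhead=1.5):
    return n_parts * part_mass_kg * packaging_overhead * launch_cost_per_kg

def ism_cost(n_parts, equipment_cost=1_200_000, feedstock_mass_kg=1.2,
             launch_cost_per_kg=20_000, feedstock_overhead=1.1):
    feedstock = n_parts * feedstock_mass_kg * feedstock_overhead * launch_cost_per_kg
    return equipment_cost + feedstock

def break_even_parts(max_parts=500):
    """Smallest number of replacement parts for which ISM is cheaper."""
    for n in range(1, max_parts + 1):
        if ism_cost(n) < launch_spares_cost(n):
            return n
    return None

print(break_even_parts())  # with these assumed numbers, ISM pays off at 36 parts
```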
Moraguez points to Made in Space, an additive manufacturing facility that is currently in use on the International Space Station. The facility produces tools and other materials as needed, reducing both the cost and the wait time of replenishing supplies from Earth. Moraguez is now developing physics-based manufacturing models that will determine the size, weight, and power required for the next generation of ISM equipment.
“We have been able to evaluate the commercial viability of ISM across a wide range of application areas,” says Moraguez. “Armed with this framework, we aim to determine the best components to produce with ISM and their appropriate manufacturing processes. We want to develop the technology to a point where it truly revolutionizes the future of spaceflight. Ultimately, it could allow humans to travel further into deep space for longer durations than ever before,” he says.
Partnering with industry
The MIT Instrumentation Lab was awarded the first contract for the Apollo program in 1961. In one brief paragraph on a Western Union telegram, the lab was charged with developing the program’s guidance and control system. Today the future of space exploration depends as much as ever on deep collaborations.
Boeing is a longstanding corporate partner of MIT, supporting such efforts as the Wright Brothers Wind Tunnel renovation and the New Engineering Education Transformation (NEET) program, which focuses on modern industry and real-world projects in support of MIT’s educational mission. In 2020, Boeing is slated to open the Aerospace and Autonomy Center in Kendall Square, which will focus on advancing enabling technologies for autonomous aircraft.
Just last spring the Institute announced a new relationship with Blue Origin in which it will begin planning and developing new payloads for missions to the moon. These new science experiments, rovers, power systems, and more will hitch a ride to the moon via Blue Moon, Blue Origin’s flexible lunar lander.
Working with IBM, MIT researchers are exploring the potential uses of artificial intelligence in space research. This year, IBM’s AI Research Week (Sept. 16-20) will feature an event, co-hosted with AeroAstro, in which researchers will pitch ideas for projects related to AI and the International Space Station.
“We are currently in an exciting new era marked by the development and growth of entrepreneurial private enterprises driving space exploration,” says Hastings. “This will lead to new and transformative ways for human beings to travel to space, to create new profit-making ventures in space for the world’s economy, and, of course, lowering the barrier of access to space so many other countries can join this exciting new enterprise.” |
The Great Smog [smoke + fog] of ’52, or Big Smoke, was a severe air pollution event that affected London during December 1952. It was not thought to be a significant event at the time, as London had experienced many smog events in the past, the so-called “pea soupers”. However, government medical reports in the following weeks estimated that up until 8th December, 4,000 people had died prematurely and 100,000 more were made ill because of the smog’s effects on the human respiratory tract. More recent research suggests that the total number of fatalities was considerably greater, at about 12,000. (Wikipedia)
A study carried out between 1996 and 2004 at the University of Birmingham suggested that high levels of pollution may have contributed to the deaths of thousands of people in England from pneumonia in recent years: 386,374 people died of pneumonia during the eight years examined, linked to a range of pollutant levels, including engine exhaust emissions. “High mortality rates were observed in areas with elevated ambient pollution levels,” said Professor George Knox, who wrote the report. “The strongest single effect was an increase in pneumonia deaths.”
The benefits of a reduction in air pollution alone justify action on climate change, say the authors of a new report covered in the Guardian, as action on climate change “would save millions of lives a year by the end of the century purely as a result of the decrease in air pollution”.
“It is pretty striking that you can make an argument purely on health grounds to control climate change,” said Dr. J. Jason West, Assistant Professor at the Department of Environmental Sciences and Engineering, University of North Carolina. His team concluded that millions of premature deaths a year would be avoided, as well as making considerable savings, if fossil fuel emissions were reduced. |
Developed in the early 1900s, the Montessori method of teaching is now practised globally in Montessori schools. Focusing on freedom within structure, natural child development and specialist education materials, this "Montessori approach" has a huge number of educational advantages but has also split opinions.
Read on as we outline the method, its origins and its principles as you consider whether it might be the right programme for your child.
👉🏼 What is the Montessori method of teaching?
The Montessori method is based on the teaching principles of Maria Montessori, who started her first school in an apartment building in Italy in 1907. 📏 This first school was simple and basic, but full of materials that would end up forming the foundations of the Montessori learning method.
Maria Montessori realised that her students understood complex ideas much better when they engaged all of their senses. Some of the first activities she focused on were basic (like dressing, sleeping, gardening and caring for the environment). Montessori allowed her students to roam free and play with the materials available, which she oversaw.
She realised that children concentrated deeply when given free choice and showed much more interest in the activities. Over time, this developed into spontaneous self-discipline.
She concluded that by working independently, children reached new levels of autonomy and positivity and became self-motivated. This understanding came to form the basis of the Montessori philosophy.
Montessori decided that a teacher’s role was as a facilitator, carefully arranging and overseeing an environment where students could become independent and responsible adults who shared a love of learning. ⭐
Maria Montessori and the Montessori method of education began to travel the world and inspire many thinkers in other countries (for example Thomas Edison and Alexander Graham Bell, and the more modern-day Google founders Larry Page and Sergey Brin!). 🚀
The term ‘Montessori’ now stands as its own term beyond the school itself and is used within a large number of different schools and communities that share the Montessori theory ethos. This includes Kindergartens, Primary schools, special-needs schools and homeschooling, too!
What do Montessori teaching methods look like in practice?
🌏 Students choose what to learn from a carefully curated programme
🌏 Open classrooms (sometimes outdoors) to allow for free movement
🌏 A safe, engaging and nurturing environment
🌏 Specialised Montessori materials expertly organised and placed
🌏 Mixed-age classes (e.g. 0-3, 3-6, 6-12) so children can learn from each other
🌏 Uninterrupted blocks of study time (up to 3 hours)
🌏 No grading or homework
🌏 A trained teacher-specialist
🌏 Using specific Montessori toys that are designed to integrate into the method
👉🏼 What are the five principles of the Montessori method?
The Montessori method abides by five principles, so understanding these is a great first step to figuring out if you want to learn more about the method, and introduce it into your family’s lives:
- Principle 1: Respect for the child
Children should be respected and their concentration not interrupted; respect is also shown by giving children the freedom to make choices.
- Principle 2: The Absorbent Mind
Especially at a young age, children are constantly absorbing. This makes it all the more important to create a carefully designed, safe and stimulating learning space.
- Principle 3: Sensitive Periods
Children aren’t able to learn to the best of their ability all of the time! Montessori believed that these periods varied for each child.
- Principle 4: The Prepared Environment
Environments should be child-centred, focusing on stimulating all of the senses.
- Principle 5: Auto Education
Students are capable of educating themselves with the facilitation of a teacher to guide them.
👉🏼 What are the advantages and disadvantages of Montessori education?
Montessori is based on hands-on learning, self-directed activity and collaborative play. Does that mean students can do whatever they want all of the time? Absolutely not! ✋🏼
There are boundaries and environments created and set up by teachers for students to be creative. Everything is careful and safe, and a lifelong love of learning is created.
Encouraging independence in children is never easy, especially when there are risks involved like the potential to fall over or get hurt.
The Montessori method offers a safe environment where children can experience natural consequences whilst developing strong motor skills.
Rushing to focus on writing and numeracy before children are developmentally ready has not been shown to be any more helpful. Motivating children and engaging their senses creates long-lasting retention of trickier knowledge. 😊
Maria Montessori believed that the educator should ‘follow the child’, but this can be incredibly frustrating and time-consuming when a child might not be playing ball.
And what about when there is a classroom of other students to help, too? Some students who attend Montessori might find the transition to mainstream school (with more rules and boundaries) much harder.
👉🏼 How can you bring Montessori into your home?
Not quite sure about the method? Try these simple home tricks which align with its principles, and see how your child responds over time:
🌲 Keep living plants in your child’s learning space, and show them how to look after them. This helps to create happy emotions around learning.
🎨 Allow for the child’s independence in every room by lowering hooks and paintings to eye level, allowing for children to enjoy things themselves. You can also encourage them to put away their own clothes!
✋🏼 Declutter and create a home for everything to have its place - this appeals to a child’s natural love of order.
🎒 Play with toys on rotation rather than leaving everything in a big toy box. This allows the child to be stimulated by different things at different times, rather than spoiled.
💡 Do not interrupt play! In Montessori, play is the work of a child and a time when they can deeply concentrate. |
The Gallipoli campaign of World War I is perhaps Australia’s best known military endeavour. In 2015, it is 100 years since Australian and New Zealand troops landed at what is now known as ‘ANZAC Cove’ and faced the formidable Turkish troops.
Here you will find a five-lesson unit of work commemorating this memorable and important event.
The lessons are aimed at students in years 5 to 7, but are flexible enough to be adapted for lower or higher year levels. While designed to be taught as a complete unit, they can be used individually to suit your students and classroom program.
Each lesson has an introduction, links to the Australian curriculum, learning outcomes, a list of resources, lesson steps and accompanying activity sheets (presented as PDFs and IWB files as appropriate). There are also additional curriculum-linked activities to extend your students’ learning.
Each lesson has a distinct focus:
School children from across Australia are invited to capture their individual reflections about those Australians who have sacrificed their lives for us in conflicts by writing their individual thoughts upon a Commemorative Cross.
Volume strain: Due to the application of external force, if the volume of an elastic body is changed without changing its shape, the strain is called volume strain. It is measured by the unit change of volume.
Explanation: It is the ratio of the change in volume of a body to its original volume. If V is the original volume of a body and V + ΔV is the volume of the body under the action of normal stress, the change in volume is ΔV.
Let the initial volume of an object = V and due to application of external force the change of volume = ΔV.
So, according to the definition, volume strain = change of volume / original volume = ΔV/V
So, the volumetric strain is the unit change in volume, i.e. the change in volume divided by the original volume. |
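A short worked example, with assumed numbers, makes the definition concrete:

```latex
% Worked example (assumed numbers): a body of original volume
% V = 2.0 x 10^{-3} m^3 is compressed so that its volume decreases by
% Delta V = 4.0 x 10^{-6} m^3.
\[
  \text{volume strain} = \frac{\Delta V}{V}
  = \frac{4.0 \times 10^{-6}\,\mathrm{m^3}}{2.0 \times 10^{-3}\,\mathrm{m^3}}
  = 2.0 \times 10^{-3}
\]
% i.e. the volume changes by 0.2% of its original value. Being a ratio of two
% volumes, the volumetric strain is a dimensionless (unitless) quantity.
```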
Supporters of the biographical approach to analysing his literary legacy characterise this period as one of idealistic faith in the better sides of life, and note that a short biography essay on William Shakespeare may include research and analysis of works such as the tragedy Titus Andronicus. William Shakespeare's parents were John Shakespeare and Mary Shakespeare. No accurate birth date is known. William began school at the age of 7 in Stratford, his hometown, and at the age of 13 he began working with his father in the gloving business. The early life and literary achievements of William Shakespeare: he was a great English playwright, dramatist and poet who lived during the late sixteenth and early seventeenth centuries. Shakespeare is considered to be the greatest playwright of all time, and no other writer's plays have been produced so often. William Shakespeare was a playwright, poet and actor. He wrote 37 plays in his lifetime; his most famous works are Hamlet, King Lear and Romeo and Juliet.
Personal background: only a few documents chronicle William Shakespeare's life, and thus scholars have been forced to attempt a reconstruction of the playwright's biography. William Shakespeare was an English poet, playwright and actor, widely regarded as the greatest writer in the English language and the world's pre-eminent dramatist. He is often called England's national poet and the Bard of Avon.
William Shakespeare short biography: this is a short biography of William Shakespeare that includes the major facts about his life and work. Born: April 23, 1564, Stratford-upon-Avon, England; died: April 23, 1616, Stratford-upon-Avon, England. The English playwright, poet, and actor William Shakespeare was a popular dramatist. He was born six years after Queen Elizabeth I (1533–1603) ascended the throne.
William Shakespeare is arguably the most famous writer in the English language, known for both his plays and sonnets, though much about his life remains uncertain. "Sweet are the uses of adversity, which, like the toad, ugly and venomous, wears yet a precious jewel in his head" (William Shakespeare, As You Like It).
This is the definitive collection of Shakespeare's plays and sonnets: think of it as an entire course on Shakespeare squished into one single volume. The notes on Shakespeare's life and times are invaluable, and the introductory essays to each play are pretty much the best overview you can get. Along with being translated into every language, Shakespeare's words reach and are accepted by multiple races and cultures (McMillan). Family background: William Shakespeare was born in 1564 in Stratford-upon-Avon, England, a small town of about 1,500 people northwest of London.
The book Thinking with Shakespeare: Essays on Politics and Life, by Julia Reinhard Lupton, is published by the University of Chicago Press. William Shakespeare (1564–1616), English poet and playwright, is widely considered to be the greatest writer in the English language; he wrote 38 plays and 154 sonnets. William Shakespeare was born in Stratford-upon-Avon on 23rd April 1564.
The Kitchen Pantry Scientist: Chemistry for Kids: Homemade Science Experiments and Activities Inspired by Awesome Chemists, Past and Present
Liz Lee Heinecke (Author)
Description
Replicate a chemical reaction similar to one Marie Curie used to purify radioactive elements! Distill perfume using a method created in ancient Mesopotamia by a woman named Tapputi! Aspiring chemists will discover these and more amazing role models and memorable experiments in Chemistry for Kids, the debut book of The Kitchen Pantry Scientist series.
This engaging guide offers a series of snapshots of 25 scientists famous for their work with chemistry, from ancient history through today. Each lab tells the story of a scientist along with some background about the importance of their work, and a description of where it is still being used or reflected in today's world. A step-by-step illustrated experiment paired with each story offers kids a hands-on opportunity for exploring concepts the scientists pursued, or are working on today. Experiments range from very simple projects using materials you probably already have on hand, to more complicated ones that may require a few inexpensive items you can purchase online. Just a few of the incredible people and scientific concepts you'll explore:
Galen b. 129 AD
Make soap from soap base, oil and citrus peels.
Modern application: medical disinfectants
Joseph Priestley b. 1733
Carbonate a beverage using CO2 from yeast or baking soda and vinegar mixture.
Modern application: soda fountains
Alessandro Volta b. 1745
Make a battery using a series of lemons and use it to light an LED.
Modern application: car battery
Tu Youyou b. 1930
Extract compounds from plants.
Modern application: pharmaceuticals and cosmetics
People have been tinkering with chemistry for thousands of years. Whether out of curiosity or by necessity, Homo sapiens have long loved to play with fire: mixing and boiling concoctions to see what interesting, beautiful, and useful amalgamations they could create. Early humans ground pigments to create durable paint for cave walls, and over the next 70 thousand years or so as civilizations took hold around the globe, people learned to make better medicines and discovered how to extract, mix, and smelt metals for cooking vessels, weapons, and jewelry. Early chemists distilled perfume, made soap, and perfected natural inks and dyes.
Modern chemistry was born around 250 years ago, when measurement, mathematics, and the scientific method were officially applied to experimentation. In 1869, after the first draft of the periodic table was published, scientists rushed to fill in the blanks. The elemental discoveries that followed gave scientists the tools to visualize the building blocks of matter for the first time in history, and they proceeded to deconstruct the atom. Since then, discovery has accelerated at an unprecedented rate. At times, modern chemistry and its creations have caused heartbreaking, unthinkable harm, but more often than not, it makes our lives better.
With this fascinating, hands-on exploration of the history of chemistry, inspire the next generation of great scientists.
May 05, 2020
8.4 X 0.5 X 10.9 inches | 1.1 pounds
About the Author
Liz Lee Heinecke has loved science since she was old enough to inspect her first butterfly. After working in molecular biology research for 10 years and earning her master's degree, she left the lab to kick off a new chapter in her life as a stay-at-home mom. Soon, she found herself sharing her love of science with her three kids as they grew, chronicling their science adventures on her KitchenPantryScientist website. Her desire to share her enthusiasm for science led to regular television appearances, an opportunity to serve as an Earth Ambassador for NASA, and the creation of an iPhone app. Her goal is to make it simple for parents to do science with kids of all ages, and for kids to experiment safely on their own. Liz graduated from Luther College and received her master's degree in bacteriology from the University of Wisconsin, Madison. She is the author of Kitchen Science Lab for Kids, Kitchen Science Lab for Kids: Edible Edition, Outdoor Science Lab for Kids, STEAM Lab for Kids, Little Learning Labs: Kitchen Science for Kids, and The Kitchen Pantry Scientist: Chemistry for Kids. |
Modern computer chips are a marvel of human engineering. With billions of transistors, they are among the most complex devices we’ve ever created, yet they operate with precision and digital accuracy. How did this become possible? Certainly not by a person laying down each transistor by hand. Instead, a software pipeline and ecosystem of tools — built around electronic design automation (EDA) — gives people superpowers to build things that we couldn’t otherwise do.
With software, we can design and program a whole world of applications on top of chips and computers. When it comes to engineering biology this way though, we’re way behind. Despite the fact that cells are the original computers — processing information, encoding computational operations, communicating with each other, organizing spatially, and so on — we’re still at the very beginning of biological circuit design, even though the ideas have been around for decades. Biological circuits are sets of biochemical reactions that have similar components (logic gates, memory, etc.) as computer circuits, but that are driven by the chemistry of life — with molecules taking the role of electrons and electricity.
Designing biological circuits is therefore in many ways conceptually analogous to designing microprocessors. Until now, however, it’s been a manual and expensive process; how do you drive beyond the limits of complexity, the point at which human engineers can’t go beyond? A new startup, Asimov, is tackling this very problem by applying software concepts and many aspects of the EDA toolchain to engineer living cells. Inspired by the trajectory of electronic design automation — and drawing on deep expertise in both biology and computer science — they’re making the engineering of biology follow the same workflow of engineering a computer chip. With Asimov, a biological circuit design starts in the very same way that a computer chip design would start: by programming it in Verilog, the language used to design electronic circuits for decades.
This approach allows the same tools and mindsets learned from software to be applied to the biological realm. Take hierarchy, a key aspect to software design: One does not design every transistor in a modern microprocessor by hand, but instead designs it in modular parts (e.g. circuits to do memory, arithmetic, logic, control, etc.) that are then combined. With hierarchy, you don’t need to understand the inner workings of those parts; they are abstracted away, allowing people to design high-level goals without needing to get stuck in the details. In biological circuit design, a modular approach that decouples the genetic context and interactions allows bioengineers to build more reliable systems. It also allows recent software engineering practices — such as DRY (don’t repeat yourself) and agile programming — to play an important role in biological engineering.
But even if we can design complex biological circuits this way, the question is… would they work?
The next step — in both the electronic and biological context — is to predict the outcomes of circuits, because making new chips as well as new cells is expensive at the prototype stage. EDA tools include powerful simulators of circuits, so engineers can debug them virtually, resulting in a low-cost, high-turnaround process. Asimov’s custom tools include a powerful simulator that can predict whether or not a biological circuit will work with up to 90% accuracy. This is a huge improvement over doing it with the physical device where readouts are more limited… Or where thousands to potentially millions of designs must be brute-force tested to find the one that works — as is the case when you only have empiricism, and not engineering, on your side.
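To give a flavour of what simulating a simple biological circuit involves, here is a toy model of a genetic inverter (a NOT gate), in which an input transcription factor represses production of an output protein via a Hill function. This is a generic textbook-style sketch, not Asimov's simulator or its level of detail, and every parameter value is invented for illustration.

```python
# Toy simulation of a genetic NOT gate (inverter): an input transcription
# factor represses production of an output protein, modelled with a Hill
# repression function plus first-order degradation. A minimal sketch of the
# kind of biochemical circuit simulation discussed above; all parameters are
# invented for illustration.
def simulate_not_gate(input_level, beta=10.0, K=1.0, n=2.0, gamma=0.5,
                      dt=0.01, t_max=40.0):
    """Return the near-steady-state output protein level for a given input."""
    output = 0.0
    for _ in range(int(t_max / dt)):
        production = beta / (1.0 + (input_level / K) ** n)  # Hill repression
        output += (production - gamma * output) * dt        # Euler step
    return output

for u in (0.0, 0.5, 1.0, 2.0, 5.0):
    print(f"input {u:4.1f} -> output {simulate_not_gate(u):6.2f}")
```

Running it shows the inverter behaviour: a low input gives a high output and a high input gives a low output, which is the kind of input-output relationship a circuit simulator has to predict reliably before larger designs are composed from such parts.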
The hallmark of Asimov’s ability to engineer biological circuits is that it avoids a slow, expensive, empirical trial-and-error approach.
Not only do such biological circuit design automation tools give bioengineers the ability to debug biological circuits much like we debug software — with complete detail of what the simulated circuit is doing — but Asimov engineers have also developed modular biological circuit components that don’t have adverse reactions to other parts of the cell. Why does this matter? It’s akin to a computer programmer designing code that is then injected into a running program or existing operating system. These biological building blocks can be easily used downstream by circuit designers — the bio advance in turn facilitates the computer science advance, namely the accurate simulation of biological circuits.
With Asimov’s approach, high-accuracy simulation, and circuit building-blocks, we can greatly speed the development of biological circuits — decreasing their cost, and greatly increasing their sophistication and complexity. Continuing with our analogy of computers here, we’re still in the “transistor phase” of things, so are not yet at the point where the full complexity of a modern microprocessor can be realized into the circuits of cells. But there are many initial applications where this technology can make major advances — much like how early microprocessors, as simple as they were, became a dramatically enabling technology.
Because biology is everywhere, living cells have applications in everything from food and materials to agriculture to healthcare. In fact, 7 of the top 10 drugs today are biologics, i.e., proteins that have therapeutic properties. These proteins are manufactured in cells at the cost of billions of dollars. Asimov’s technology could drive a dramatic reduction in cost to patients — enabling these drugs to be in the hands of more and more people.
Looking even further out, a bolder application for designing biological circuits is one where new cellular therapies could sense disease in the body, perform logic, and drive a precise curative response — like therapeutic, microscopic “bio-robots”. Sounds like science fiction, but what EDA brought us for electronic circuits could also have once been considered science fiction: It helped bring the reality today of massive cloud supercomputers and smartphones that would make even Captain Kirk jealous.
To help realize the vision of bringing this biological-circuits tech platform to broader applications, I’m excited to announce Andreessen Horowitz’ investment in Asimov, led by co-founders Alec Nielsen, Raja Srinivas, Chris Voigt, and Doug Densmore. Alec, Chris, and Doug co-authored the seminal paper, “Genetic Circuit Design Automation” in Science describing the initial technology. Alec and Raja met while doing their PhDs in Biological Engineering at MIT, and Raja brings both a computational biology background along with previous entrepreneurial experience. Chris is a professor of Biological Engineering at MIT where he leads the Voigt Lab in pioneering synthetic biology research, and Doug is a professor of Electrical and Computer Engineering at Boston University where he directs a cross-disciplinary group on design automation for engineering biology. With this combination of both scientific and computer science expertise cutting across disciplines, they’re the best team to bring this technology to fruition… and I’m honored to join the Asimov board. |
Aphasia facts medical author: Charles Patrick Davis, MD, PhD
- Aphasia, a disturbance in the formulation and comprehension of language, is due to damage to brain tissue areas responsible for language; aphasia may occur suddenly or develop over time, depending on the type and location of brain tissue damage.
- Strokes are a common cause of aphasia.
- Causes of aphasia are mainly due to strokes, severe head trauma, brain tumors, and brain infections; however, any brain tissue damage for whatever reason that occurs in the language centers of the brain may cause aphasia.
- Two broad categories of aphasia are fluent and non-fluent (also termed Broca's aphasia), but there are subtypes of these categories.
- Aphasia, especially a subtype, is diagnosed by tests given to people to determine the individual's ability to communicate and understand, using language skills; neurologists most frequently diagnose the type of aphasia.
- Aphasia is mainly treated by speech and language therapy and therapy methods are based on the extent and locale of the brain damage.
- Aphasia research is ongoing; studies include revealing underlying problems of brain tissue damage, the links between comprehension and expression, rehabilitation methods, drug therapy, speech therapy, and other ways to understand and treat aspects of aphasia.
What is aphasia?
Aphasia is a neurological disorder caused by damage to the portions of the brain that are responsible for language. Primary signs of the disorder include difficulty in expressing oneself when speaking, trouble understanding speech, and difficulty with reading and writing. Aphasia is not a disease, but a symptom of brain damage.
Who has aphasia?
Most commonly seen in adults who have suffered a stroke, aphasia can also result from a brain tumor, infection, head injury, or dementia that damages the brain. It is estimated that about 1 million people in the United States today suffer from aphasia. The type and severity of language dysfunction depends on the precise location and extent of the damaged brain tissue.
Difficulty with speech can be the result of problems with the brain or nerves that control the facial muscles, larynx, and vocal cords necessary for speech. Likewise, muscular diseases and conditions that affect the jaws, teeth, and mouth can impair speech.
What are the types of aphasia?
Generally, aphasia can be divided into four broad categories: (1) Expressive aphasia involves difficulty in conveying thoughts through speech or writing. The patient knows what he wants to say, but cannot find the words he needs. (2) Receptive aphasia involves difficulty understanding spoken or written language. The patient hears the voice or sees the print but cannot make sense of the words. (3) Patients with anomic or amnesia aphasia, the least severe form of aphasia, have difficulty in using the correct names for particular objects, people, places, or events. (4) Global aphasia results from severe and extensive damage to the language areas of the brain. Patients lose almost all language function, both comprehension and expression. They cannot speak or understand speech, nor can they read or write.
What is the treatment for aphasia?
In some instances, an individual will completely recover from aphasia without treatment. In most cases, however, language therapy should begin as soon as possible and be tailored to the individual needs of the patient. Rehabilitation with a speech pathologist involves extensive exercises in which patients read, write, follow directions, and repeat what they hear. Computer-aided therapy may supplement standard language therapy.
What is the prognosis for aphasia?
The outcome of aphasia is difficult to predict given the wide range of variability of the condition. Generally, people who are younger or have less extensive brain damage fare better. The location of the injury is also important and is another clue to prognosis. In general, patients tend to recover skills in language comprehension more completely than those skills involving expression.
What research is being done for aphasia?
The NINDS and the National Institute on Deafness and Other Communication Disorders conduct and support a broad range of scientific investigations to increase our understanding of aphasia, find better treatments, and discover improved methods to restore lost function to people who have aphasia.
Medically reviewed by Jon Glass, MD; American Board of Psychiatry and Neurology
"NINDS Aphasia Information Page." National Institute of Neurological Disorders and Stroke. 14 Feb. 2014. |
Study shows sending microbes to Earth’s stratosphere, to test their endurance to Martian conditions, can reveal their potential use and threats to space travel.
Some microbes on Earth could temporarily survive on the surface of Mars, finds a new study by NASA and German Aerospace Center scientists. The researchers tested the endurance of microorganisms to Martian conditions by launching them into the Earth’s stratosphere, as it closely represents key conditions on the Red Planet. Published in Frontiers in Microbiology, this work paves the way for understanding not only the threat of microbes to space missions, but also the opportunities for resource independence from Earth.
“We successfully tested a new way of exposing bacteria and fungi to Mars-like conditions by using a scientific balloon to fly our experimental equipment up to Earth’s stratosphere,” reports Marta Filipa Cortesão, joint first author of this study from the German Aerospace Center, Cologne, Germany. “Some microbes, in particular spores from the black mold fungus, were able to survive the trip, even when exposed to very high UV radiation.”
Understanding the endurance of microbes to space travel is vital for the success of future missions. When searching for extra-terrestrial life, we need to be sure that anything we discover has not just traveled with us from Earth.
“With crewed long-term missions to Mars, we need to know how human-associated microorganisms would survive on the Red Planet, as some may pose a health risk to astronauts,” says joint first author Katharina Siems, also based at the German Aerospace Center. “In addition, some microbes could be invaluable for space exploration. They could help us produce food and material supplies independently from Earth, which will be crucial when far away from home.”
Mars in a box
Many key characteristics of the environment at the Martian surface cannot be found or easily replicated at the surface of our planet. However, above the ozone layer in Earth’s middle stratosphere, the conditions are remarkably similar.
“We launched the microbes into the stratosphere inside the MARSBOx (Microbes in Atmosphere for Radiation, Survival and Biological Outcomes experiment) payload, which was kept at Martian pressure and filled with artificial Martian atmosphere throughout the mission,” explains Cortesão. “The box carried two sample layers, with the bottom layer shielded from radiation. This allowed us to separate the effects of radiation from the other tested conditions: desiccation, atmosphere, and temperature fluctuation during the flight. The top layer samples were exposed to more than a thousand times more UV radiation than levels that can cause sunburn on our skin.”
“While not all the microbes survived the trip, one previously detected on the International Space Station, the black mold Aspergillus niger, could be revived after it returned home,” explains Siems, who highlights the importance of this ongoing research.
“Microorganisms are closely-connected to us; our body, our food, our environment, so it is impossible to rule them out of space travel. Using good analogies for the Martian environment, such as the MARSBOx balloon mission to the stratosphere, is a really important way to help us explore all the implications of space travel on microbial life and how we can drive this knowledge towards amazing space discoveries.”
Reference: “MARSBOx: Fungal and Bacterial Endurance From a Balloon-Flown Analog Mission in the Stratosphere” by Marta Cortesão, Katharina Siems, Stella Koch, Kristina Beblo-Vranesevic, Elke Rabbow, Thomas Berger, Michael Lane, Leandro James, Prital Johnson, Samantha M. Waters, Sonali D. Verma, David J. Smith and Ralf Moeller, 22 February 2021, Frontiers in Microbiology.
Education and Immigrant Integration
Many newcomers come to the United States seeking economic opportunities for their families as well as educational opportunities for their children. Like most families, they recognize the critical role that education plays in the long-term success of their children.
From their earliest years, education helps children and their parents integrate into communities. Indeed, all levels of education set the stage for ongoing language learning, career and workforce development and civic engagement. Additionally, of all locations, schools are where immigrants tend to interact most with the community. Many immigrants do have young families, and the school can be a strong force for helping families and children with developing a sense of belonging.
In particular, immigrant integration and education can be studied at four levels:
The preschool context provides an opportunity to work with families of younger children, who may be earlier in their acculturation process, at a critical time in their children's development and when families would strongly benefit from broader community support and interaction.
Equally important is providing opportunities for children from immigrant families to benefit from the positive long-term effects of high quality early learning opportunities. As studies demonstrate, children from linguistically, culturally and socially isolated families who attend early childhood education programs are better prepared to attend elementary school. Academically and socially, these environments serve as a valuable bridge to the K-12 experience and pave the way for immigrant children's long-term success.
However, while the children of immigrants are more likely to face economic hardships, to enter formal schooling less prepared and to experience a subsequent achievement gap throughout K-12, fewer are enrolled in pre-K programs than children from native-born families. Barriers to early childhood education opportunities for children in immigrant families include their parents' lack of awareness of programs, a lack of affordable and accessible programs, and programs that are not always responsive to the needs of diverse families.
Key components of early childhood education that help incorporate integration include:
· Community Outreach: What strategies are employed to reach out to the immigrant community in order to make diverse parents aware of early childhood education opportunities and their role in U.S. education?
· Enrollment: To what extent is the enrollment process accessible and flexible to support families with diverse needs?
· Provider culture: How does the classroom environment portray a climate that appreciates and values diverse cultures?
· Instruction: How are the curriculum and classroom instruction tailored to fit with cultural differences and to promote sharing across cultural groups?
· Integration services: What family support services are available, such as parenting classes, English language instruction, health literacy, citizenship classes and workforce training?
· Parent involvement: How are parents empowered to be part of the school? How are they communicated with about their child? Is there a give-and-take between their needs and the school's expectations for parent involvement? Are their skills used? Are there opportunities to prepare them for future interaction with the K-12 system?
· Community Building: Are there meaningful opportunities for parents and families from diverse cultures to interact with each other and with longer-term families?
· Staff professional development: What are the requirements for teachers to demonstrate skills in cultural competency? How are teachers' cross-cultural skills enhanced over time? How are diverse providers recruited into the profession and supported?
· Language Access: Are there linguistically diverse providers who are able to meet the needs of linguistically diverse children? If so, what is their fluency level? If not, how are language interpretation needs addressed? To what extent are dual language programs in place and supported?
Ensuring that all children achieve in school, both academically and socially, is the hallmark of educational reform efforts. Surprisingly, such efforts have not always focused on the unique needs of children from immigrant backgrounds.
Perhaps the most significant piece of policy for immigrant children and education is No Child Left Behind (NCLB), which federally mandates the ongoing testing of all children in order to measure academic progress. Not surprisingly, many limited English proficient (LEP) students do not perform as well as their peers on these state tests. However, as they gain English proficiency, their scores increase and they come close to their U.S.-born counterparts in both math and reading.
Schools and school districts should comprehensively examine how they support integration. In particular, they should examine their policies and practices as they relate to:
- School enrollment
- School culture and climate
- Family and community outreach
- Classroom instruction
- Student assessment
- School-based Adult ESL and family literacy
Many youth from immigrant backgrounds need to be made aware of their options regarding higher education and should be encouraged to pursue advanced degrees. Many will be the first in their families to pursue a college education in the U.S., so they may need extra support in understanding higher education financing and the skills needed to succeed in college. While some may go on to pursue advanced degrees through community colleges or universities, higher education remains out of reach for many youth, particularly those who are undocumented. Many are very strong students, but without any avenues to regularize their status, they will be unlikely to be able to afford the out-of-state tuition that public universities would require of them. Because they may not find options that allow them to attend college, many may be more inclined to drop out of high school. |
Termites play an important role in the natural ecological cycle. They feed on cellulose, the principal ingredient of wood, and help to break down dead trees in forests and other wooded areas, thus enriching the soil. Termites began attacking houses when the wooded areas were cleared for building construction and there was no other available source of food near their nest. Subterranean termites are found in every state except Alaska. Their overall distribution within the continental United States is shown in FIG. 8-1. As their name implies, subterranean termites live in a colony (nest) that is usually located in the ground below the frost line. Even when a house is infested with termites, they usually do not have a nest in the house. They are there only to gather food. The only condition under which a nest might exist in a house (a rare occurrence) is a constant source of moisture such as a leaky waterpipe or drainpipe that wets the surrounding area.
Termites are social insects. Within each colony, there is a rigid caste system consisting of a queen and king, workers, soldiers, and reproductives. Each member of the colony instinctively performs its special task. The function of the queen and king is to propagate the colony. The fertilized queen lays the eggs and might live for as long as twenty-five years. The workers care for the eggs, feed the young and the queen, and generally maintain the colony. They also forage from the nest to the wood supply and return with food. The soldiers defend the colony against attack by other insects, mostly ants. The average worker and soldier live only two or three years. The function of the reproductives is to replace the queen and king in the event of their injury or death. They also lay eggs that rapidly increase the termite population.
When a colony matures, reproductives leave the nest (swarm) to set up a new colony. Although thousands of reproductives leave the nest, only a handful survive to establish a new colony. The remainder die because of adverse conditions in the soil or attacks by other insects. Reproductive termites sprout wings for the swarm. With their wings, they are only about 1⁄2 inch long. They are considered poor fliers and generally flutter around before falling to the ground. Some, however, might be picked up in the wind and carried great distances. Once the reproductives land, they shed their wings, pair off in couples, and return to the soil in search of a suitable place to build a nest.
In most parts of the country, swarming generally occurs in the spring, sometimes in the fall. However, swarming termites have been found in January in some heated houses. In the warm, humid parts of the country, swarming can occur at any time. Even if there are no other outward signs of termite activity, termite swarming in a house is an indication that there is a healthy established colony nearby from which worker termites are coming in their search for food.
Swarming termites do not attack wood. Their only function is to start a new colony. Even if a swarm is in your house, you might not see it. A swarm might last from fifteen minutes to one hour, and if you are not in the right place at the right time, it can be over by the time you enter the room. However, if there was a swarm, you can tell by the discarded wings. They are often found on windowsills and light fixtures, and beneath doors. Do not confuse swarming termites with swarming ants. To the untrained eye, they appear similar, but there are distinctive differences. (See FIG. 8-2.) The most obvious difference is that termites have a thick waist and ants have a pinched (hourglass) waist. |
Mathematics is integral to every aspect of daily life. Mathematical skills are essential for solving problems in most areas of life and are part of human history. All peoples have used and continue to use mathematical knowledge and competencies to make sense of the world around them.
Mathematical values and habits of mind go beyond numbers and symbols; they help us connect, create, communicate, visualize, and reason, as part of the complex process of problem solving. These habits of mind are valuable when analyzing both novel and complex problems from a variety of perspectives, considering possible solutions, and evaluating the effectiveness of the solutions. When developed early in life, mathematical habits of mind help us see the math in the world around us and help to generate confidence in our ability to solve everyday problems without doubt or fear of math.
Observing, learning, and engaging in mathematical thinking empowers us to make sense of our world. For example, exploring the logic of mathematics through puzzles and games can foster a constructive mathematical disposition and result in a self-motivated and confident student with unique and individualized mathematical perspectives. Whether students choose to pursue a deeper or broader study in mathematics, the design of the Mathematics curriculum ensures that they are able to pursue their individual interests and passions while establishing a strong mathematical foundation. |
When working with our young readers, the way we design our instruction and assessment has a significant impact on the development of proficiency. Without a doubt, the strategies that we implement in our planning and development will need to be based on assessed student need and interest, and framed within the context of universal design and differentiated instruction. Our theoretical foundations, including theories of exceptionality, will affect how we adapt our programs and have a particularly significant impact on how our diverse learners become better readers.
Cross-curricular instruction, real-life experiences, and personal interests all become part of our action plans for supporting struggling readers. However, we also need to ensure that we are using evidence-based practices to create learning environments that reflect the ethical standards of our profession. Our instructional and assessment strategies must be proven to support struggling readers, including students who are English Language Learners (ELL).
The following instructional and assessment strategies are essential to supporting struggling readers, including ELL readers. How you design and implement these strategies for your own instruction and assessment needs will reflect your own contexts and learning environments. In addition, these strategies work best when implemented in a safe and supportive learning environment that contributes to equity of reading outcomes for all students.
They are based on information from the EduGAINS website.
Please keep in mind that there are multiple forms of assessment that can be used in reading, and that language learning is developmental. It involves experimentation and approximation. You need to trust your professional judgement and seek out professional consultation where necessary. Collaboration and moderation are key to providing reliable instructional and assessment methods.
- A lot of pre-reading discussion
- Graphic organizers before, during and after reading
- Scaffolding comprehension texts – preview and discuss text features first
- Daily read-alouds and think alouds with a variety of media and texts
- Opportunities to make predictions and discuss them in shared reading
- Explicitly teaching semantic, syntactic and graphophonic cueing systems
- Language-experience texts
- Subject-specific and cross-curricular reading materials
- Time for students to read each day
- Help students choose a just-right book
- Small group work with English speaking peers
- Anticipation guides to assess pre-reading beliefs
- Make predictions in pre-reading based on visuals
- Make predictions based on the first sentence, first paragraph, or key text
- Adopt the roles of different characters while reading Readers' Theater texts
- Create a story map or timeline as a visual representation of main features of the story
- Introduce music, chants, poems etc. to reinforce expressions and patterned speech. Keep a collection of them for re-reading.
- Read first language or dual-reading books
- Model how to skim and scan texts for pre-reading
- Jigsaw reading, where each student becomes an expert on one section of the reading and then shares
- Literature circles for opportunities for a student to share about a book
- Deepen understanding of a text by taking on the role of a character in the hot seat
I am sure that you already have key design ideas for incorporating more cross-curricular connections as well.
Possible Assessment Strategies:
- Portfolios: Help students to see progress over time, recognize quality work and share with parents.
- Create Goal-setting checklists
- Opportunities for assessment for, as and of learning
- Provide assessment in students' first language if necessary – access experts for that
- Use effective rubrics, but provide differentiated opportunities for students to express their comprehension, decoding, and metacognition orally and in writing
- Google Forms for Assessment:
- When I run guided reading groups, I also like to fill out Google Forms. I change them based on the expectations and skills I am looking at, but the most important thing that comes of this is the anecdotal comments I make. I end up with amazing spreadsheets of comments that I can find patterns in. It is also amazing how often I forget the nuances, but then I can achieve a much clearer picture. :)
How do you take your reading strategies and design your Instruction and Assessment to improve reading skills? |
A list of student-submitted discussion questions for Echinoderms.
To predict the main ideas of the text based on context clues, to generate questions about a given topic and to organize and review the knowledge learned about the topic using the SQ3R strategy.
This covers rapid speciation, punctuated equilibrium and evolution by means other than natural selection. It covers how scientists use molecular techniques.
Covers plants endemic to Antarctica, their relationship to fossilized penguin guano, and the effects of climate change on them. Illustrates the importance of bottom up forces in establishing ecosystems.
Covers population biology through invasive species. It looks at what makes a species a successful invader and methods of transporting invasive species around the world.
This study guide summarizes the ecology, body structure, and reproduction of sponges, cnidarians, flatworms, roundworms, mollusks, annelids, arthropods, insects, and echinoderms. It also includes further classification of arthropods and echinoderms.
These flashcards help you study important terms and vocabulary from Echinoderms. |
Respiration is much more than just breathing; in fact, the term refers to two separate processes, only one of which is the intake and outflow of breath. Cellular respiration, the process by which organisms convert food into chemical energy, typically requires oxygen; some forms of respiration, however, are anaerobic, meaning that they require no oxygen. Such is the case, for instance, with some bacteria, such as those that convert ethyl alcohol to vinegar. Likewise, an anaerobic process can take place in human muscle tissue, producing lactic acid, something so painful that it feels as though vinegar itself were being poured on an open sore.
Respiration can be defined as the process by which an organism takes in oxygen and releases carbon dioxide, one in which the circulating medium of the organism (e.g., the blood) comes into contact with air or dissolved gases. Either way, this means more or less the same thing as breathing. In some cases, this meaning of the term is extended to the transfer of oxygen from the lungs to the bloodstream and, eventually, into cells or the release of carbon dioxide from cells into the bloodstream and thence to the lungs, from whence it is expelled to the environment. Sometimes a distinction is made between external respiration, or an exchange of gases with the external environment, and internal respiration, an exchange of gases between the body's cells and the blood, in which the blood itself "bathes" the cells with oxygen and receives carbon dioxide to transfer to the environment.
This is just one meaning—albeit a more familiar one—of the word respiration. Respiration also can mean cellular respiration, a series of chemical reactions within cells whereby food is "burned" in the presence of oxygen and converted into carbon dioxide and water. This type of respiration is the reverse of photosynthesis, the process by which plants convert carbon dioxide and water, with the aid of solar energy, into complex organic compounds known as carbohydrates. (For more about carbohydrates and photosynthesis, see Carbohydrates.)
Later in this essay, we discuss some of the ways in which various life-forms breathe, but suffice it to say for the moment—hardly a surprising revelation!—that the human lungs and respiratory system are among the more complex mechanisms for breathing in the animal world. In humans and other animals with relatively complex breathing mechanisms (i.e., lungs or gills), oxygen passes through the breathing apparatus, is absorbed by the bloodstream, and then is converted into an unstable chemical compound (i.e., one that is broken down easily) and carried to cells. When the compound reaches a cell, it is broken down and releases its oxygen, which passes into the cell.
On the "return trip"—that is, the reverse process, which we experience as exhalation—cells release carbon dioxide into the bloodstream, where it is used to form another unstable chemical compound. This compound is carried by the bloodstream back to the gills or lungs, and, at the end of the journey, it breaks down and releases the carbon dioxide to the surrounding environment. Clearly, the one process is a mirror image of the other, with the principal difference being the fact that oxygen is the key chemical component in the intake process, while carbon dioxide plays the same role in the process of outflow.
In humans the compound used to transport oxygen is known by the name hemoglobin. Hemoglobin is an iron-containing protein in red blood cells that is responsible for transporting oxygen to the tissues and removing carbon dioxide from them. In the lungs, hemoglobin, known for its deep red color, reacts with oxygen to form oxyhemoglobin. Oxyhemoglobin travels through the bloodstream to cells, where it breaks down to form hemoglobin and oxygen, and the oxygen then passes into cells. On the return trip, hemoglobin combines with carbon dioxide to form carbaminohemoglobin, an unstable compound that, once again, breaks down—only this time it is carbon dioxide that it releases, in this case to the surrounding environment rather than to the cells.
In other species, compounds other than hemoglobin perform a similar function. For example, some types of annelids, or segmented worms, carry a green blood protein called chlorocruorin that functions in the same way as hemoglobin does in humans. And whereas hemoglobin is a molecule with an iron atom at the center, the blood of lobsters and other large crustaceans contains hemocyanin, in which copper occupies the central position. Whatever the substance, the compound it forms with oxygen and carbon dioxide must be unstable, so that it can break down easily to release oxygen to the cells or carbon dioxide to the environment.
Both forms of respiration involve oxygen, but cellular respiration also involves a type of nutrient—materials that supply energy, or the materials for forming new tissue. Among the key nutrients are carbohydrates, naturally occurring compounds that consist of carbon, hydrogen, and oxygen. Included in the carbohydrate group are sugars, starches, cellulose, and various other substances.
Glucose is a simple sugar produced in cells by the breakdown of more complex carbohydrates, including starch, cellulose, and such complex sugars as sucrose (cane or beet sugar) and fructose (fruit sugar). In cellular respiration, an organism oxidizes glucose (i.e., combines it with oxygen) so as to form the energy-rich compound known as adenosine triphosphate (ATP). ATP, critical to metabolism (the breakdown of nutrients to provide energy or form new material), is the compound used by cells to carry out most of their ordinary functions. Among those functions are the production of new cell parts and chemicals, the movement of compounds through cells and the body as a whole, and growth.
In cellular respiration, one molecule of glucose (C6H12O6) reacts with six molecules of oxygen (O2) to form six molecules of carbon dioxide (CO2), six molecules of water (H2O), and about 36 molecules of ATP. This can be represented by the following chemical equation:
C6H12O6 + 6 O2 → 6 CO2 + 6 H2O + 36 ATP
The process is much more complicated than this equation makes it appear: some two dozen separate chemical reactions are involved in the overall conversion of glucose to carbon dioxide, water, and ATP.
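As a quick sanity check on the balanced equation, the short sketch below (illustrative only; the molecular formulas and coefficients are simply those given above) tallies the carbon, hydrogen, and oxygen atoms on each side and confirms that they match.

```python
# Illustrative sketch: check that C6H12O6 + 6 O2 -> 6 CO2 + 6 H2O balances.
# Coefficients and atom counts are taken directly from the equation above.
reactants = {
    "C6H12O6": (1, {"C": 6, "H": 12, "O": 6}),  # one glucose molecule
    "O2":      (6, {"O": 2}),                   # six oxygen molecules
}
products = {
    "CO2": (6, {"C": 1, "O": 2}),               # six carbon dioxide molecules
    "H2O": (6, {"H": 2, "O": 1}),               # six water molecules
}

def total_atoms(side):
    """Sum each element's atoms over all molecules on one side of the equation."""
    totals = {}
    for count, atoms in side.values():
        for element, n in atoms.items():
            totals[element] = totals.get(element, 0) + count * n
    return totals

print(total_atoms(reactants))  # {'C': 6, 'H': 12, 'O': 18}
print(total_atoms(products))   # {'C': 6, 'O': 18, 'H': 12}
assert total_atoms(reactants) == total_atoms(products)
```

(The ATP term in the equation is an energy bookkeeping entry rather than part of the mass balance, so it is left out of the atom count.)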
All animals have some mechanism for removing oxygen from the air and transmitting it into the bloodstream, and this same mechanism typically is used to expel carbon dioxide from the bloodstream into the surrounding environment. Types of animal respiration, in order of complexity, include direct diffusion, diffusion into blood, tracheal respiration, respiration with gills, and finally, respiration through lungs. Microbes, fungi, and plants all obtain the oxygen they use for cellular respiration directly from the environment, meaning that there are no intermediate organs or bodily chemicals, such as lungs or blood. More complex organisms, such as sponges, jellyfish, and terrestrial (land) flatworms, also breathe through direct diffusion. The latter term describes an exchange of oxygen and carbon dioxide directly between an organism, or its bloodstream, and the surrounding environment.
More complex is the method of diffusion into blood, whereby oxygen passes through a moist layer of cells on the body surface and then diffuses into the blood beneath, which carries it to the rest of the body.
In tracheal respiration air moves through openings in the body surface called spiracles. It then passes into special breathing tubes called tracheae that extend into the body. The tracheae divide into many small branches that are in contact with muscles and organs. In small insects, air simply moves into the tracheae, while in large insects, body movements assist tracheal air movement. Insects and terrestrial arthropods (land-based organisms with external skeletons) use this method of respiration.
Much more complicated than tracheae, gills are specialized tissues with many infoldings. Each gill is covered by a thin layer of cells and filled with blood capillaries. These capillaries take up oxygen dissolved in water and expel carbon dioxide dissolved in blood. Fish and other aquatic animals use gills, as did the early ancestors of humans and other higher animals. A remnant of this chapter from humans' evolutionary history can be seen in the way that an embryo breathes in its mother's womb, not by drawing in oxygen through its lungs but through gill-like mechanisms that disappear as the embryo develops.
Lungs are composed of many small chambers or air sacs surrounded by blood capillaries. Thus, they work with the circulatory system, which transports oxygen from inhaled air to all tissues of the body and also transports carbon dioxide from body cells to the lungs to be exhaled. After air enters the lungs, oxygen moves into the bloodstream through the walls of these capillaries. It then passes from the lung capillaries to the different muscles and organs of the body.
Although they are common to amphibians, reptiles, birds, and mammals, lungs differ enormously throughout the animal kingdom. Frogs, for instance, have balloon-like lungs that do not have a very large surface area. By contrast, if the entire surface of an adult male human's lungs were spread flat, it would cover about 750 sq. ft. (70 m2), approximately the size of a handball court. The reason is that humans have about 300 million gas-filled alveoli, tiny protrusions inside the lungs that greatly expand the surface area for gas exchange.
Birds have specialized lungs that use a mechanism called crosscurrent exchange, which allows air to flow in one direction only, making for more efficient oxygen exchange. They have some eight thin-walled air sacs attached to their lungs, and when they inhale, air passes through a tube called the bronchus and enters posterior air sacs—that is, sacs located toward the rear. At the same time, air in the lungs moves forward to anterior air sacs, or ones located near the bird's front. When the bird exhales, air from the rear air sacs moves to the outside environment, while air from the front moves into the lungs. This efficient system moves air forward through the lungs when the bird inhales and exhales and makes it possible for birds to fly at high altitudes, where the air has a low oxygen content.
Humans and other mammals have lungs in which air moves in and out through the same pathway. This is true even of dolphins and whales, though they differ from humans in that they do not take in nutrition through the same opening. In fact, terrestrial mammals, such as the human, horse, or dog, are some of the only creatures that possess two large respiratory openings: one purely for breathing and smelling and the other for the intake of nutrients as well as air (i.e., oxygen in and carbon dioxide out).
Activity that involves oxygen is called aerobic; hence the term aerobic exercise, which refers to running, calisthenics, biking, or any other form of activity that increases the heart rate and breathing. Activity that does not involve oxygen intake is called anaerobic. Weightlifting, for instance, will increase the heart rate and rate of breathing if it is done intensely, but that is not its purpose and it does not depend on the intake and outflow of breath. For that reason, it is called an anaerobic exercise—though, obviously, a person has to keep breathing while doing it.
In fact, a person cannot consciously stop breathing for a prolonged period, and for this reason, people cannot kill themselves simply by holding their breath. A buildup of carbon dioxide and hydrogen ions (electrically charged atoms) in the bloodstream stimulates the breathing centers to become active, no matter what we try to do. On the other hand, if a person were underwater, the lungs would draw in water instead of air, and though water contains air, the drowning person would suffocate.
Some creatures, however, do not need to breathe air but instead survive by anaerobic respiration. This is true primarily of some forms of bacteria, and indeed scientists believe that the first organisms to appear on Earth's surface were anaerobic. Those organisms arose when Earth's atmosphere contained very little oxygen, and as the composition of the atmosphere began to incorporate more oxygen over the course of many millions of years, new organisms evolved that were adapted to that condition.
The essay on paleontology discusses Earth's early history, including the existence of anaerobic life before the formation of oxygen in the atmosphere. The appearance of oxygen is a result of plant life, which produces it as a byproduct of the conversion of carbon dioxide that takes place in photosynthesis. Plants thus generate oxygen rather than depending on an external supply of it, whereas the term anaerobic usually refers to types of bacteria that neither inhale nor exhale oxygen. Anaerobic bacteria still exist on Earth and serve humans in many ways. Some play a part in the production of foods, as in the process of fermentation. Other anaerobic bacteria have a role in the treatment of sewage. Living in an environment that would kill most creatures—and not just because of the lack of oxygen—they consume waste materials, breaking them down chemically into simpler compounds.
Even in creatures, such as humans, that depend on aerobic respiration, anaerobic respiration can take place. Most cells are able to switch from aerobic to anaerobic respiration when necessary, but they generally are not able to continue producing energy by this process for very long. For example, a person who exercises vigorously may be burning up glucose faster than oxygen is being pumped to the cells, meaning that cellular respiration cannot take place quickly enough to supply all the energy the body needs. In that case, cells switch over to anaerobic respiration, breaking down glucose without oxygen and producing lactic acid as a byproduct.
Eventually, the buildup of lactic acid is carried away in the bloodstream, and the lactic acid is converted to carbon dioxide and water vapor, both of which are exhaled. But if lactic acid levels in the bloodstream rise faster than the body can neutralize them, a state known as lactic acidosis may ensue. Lactic acidosis rarely happens in healthy people and, more often than not, is a result of the body's inability to obtain sufficient oxygen, as occurs in heart attacks or carbon monoxide or cyanide poisoning or in the context of diseases such as diabetes.
The ability of the body to metabolize lactic acid is diminished significantly by alcohol, which impairs the liver's ability to carry out normal metabolic reactions. For this reason, alcoholics often have sore muscles from lactic acid buildup, even though they may not exercise. Lactic acid also can lead to a buildup of uric acid crystals in the joints, in turn causing gout, a very painful disease.
Lactic acid is certainly not without its uses, and it is found throughout nature. When lactose, or milk sugar, is fermented by the action of certain bacteria, it causes milk to sour. The same process is used in the manufacture of yogurt, but the reaction is controlled carefully to ensure the production of a consumable product. Lactic acid also is applied by the dairy industry in making cheese. Molasses contains lactic acid, a product of the digestion of sugars by various species of bacteria, and lactic acid also is used in making pickles and sauerkraut, foods for which a sour taste is desired.
A compound made from lactic acid is used as a food preservative, but the applications of lactic acid extend far beyond food production. Lactic acid is important as a starting material for making drugs in the pharmaceutical industry. Additionally, it is involved in the manufacturing of lacquers and inks; is used as a humectant, or moisturizer, in some cosmetics; is applied as a mordant, or a chemical that helps fabrics accept dyes, to textiles; and is employed in tanning leather.
In almost any bodily system, there are bound to be disorders, or at least the chance that disorders may occur. This is particularly the case with something as complex as the respiratory system, because the more complex the system, the more things that can go wrong. Among the respiratory disorders that affect humans is a whole range of ailments from the common cold to emphysema, and from the flu to cystic fibrosis.
Colds are among the most common conditions that affect the respiratory system, though what we call the common cold is actually an invasion by one of some 200 different types of virus. Thus, it is really not one ailment but 200; though these are virtually identical, the large number of viral causative agents has made curing the cold an insurmountable task.
When you get a cold, viruses establish themselves on the mucous membrane that coats the respiratory passages that bring air to your lungs. If your immune system is unsuccessful in warding off this viral infection, the nasal passages become inflamed, swollen, and congested, making it difficult to breathe.
Coughing is a reflex action whereby the body attempts to expel infected mucus or phlegm. It is essential to removing infected secretions from the body, but of course it plays no role in actually bringing a cold to an end. Nor do antibiotics, which are effective against bacteria but not viruses (see Infection). Only when the body builds up its own defense to the cold—assuming the sufferer has a normally functioning immune system—is the infection driven away.
Influenza, a group of viral infections that can include swine flu, Asian flu, Hong Kong flu, and Victoria flu, is often far more serious than the common cold. A disease of the lungs, it is highly contagious, and can bring about fever, chills, weakness, and aches. In addition, influenza can be fatal: a flu epidemic in the aftermath of World War I, spread to far corners of the globe by returning soldiers, killed an estimated 20 million people.
Respiratory ailments often take the form of allergies such as hay fever, symptoms of which include sneezing, runny nose, swollen nasal tissue, headaches, blocked sinuses, fever, and watery, irritated eyes. Hay fever is usually aggravated by the presence of pollen or ragweed in the air, as is common in the springtime. Other allergy-related respiratory conditions may be aggravated by dust in the air, and particularly by the feces of dust mites that live on dust particles.
Allergic reactions can be treated by antihistamines (see The Immune System for more about allergies), but simple treatments are not available for such complex respiratory disorders as asthma, chronic bronchitis, and emphysema. All three are characterized by an involuntary constriction in the walls of the bronchial tubes (the two divisions of the trachea or windpipe that lead to the right and left lungs), which causes the tubes to close in such a way that it becomes difficult to breathe.
Emphysema can be brought on by cigarette smoking, and indeed some heavy smokers die from that ailment rather than from lung cancer. On the other hand, a person can contract a bronchial illness without engaging in smoking or any other activity for which the sufferer could ultimately be blamed. Indeed, small children may have asthma. One treatment for such disorders is the use of a bronchodilator, a medicine used to relax the muscles of the bronchial tubes. This may be administered as a mist through an inhaler, or given orally like other medicine.
More severe is tuberculosis, an infectious disease of the lungs caused by bacteria. Tuberculosis attacks the lungs, leading to a chronic infection with such symptoms as fatigue, loss of weight, night fevers and chills, and persistent coughing that brings up blood. Without treatment, it is likely to be fatal. Indeed, it was a significant cause of death until the introduction of antibiotics in the 1940s, and it has remained a problem in underdeveloped nations. Additionally, thanks to mutation in the bacteria themselves, strains of the disease are emerging that are highly resistant to antibiotics.
Another life-threatening respiratory disease is pneumonia, an infection or inflammation of the lungs caused by bacteria, viruses, mycoplasma (microorganisms that show similarities to both viruses and bacteria), and fungi, as well as such inorganic agents as inhaled dust or gases. Symptoms include pleurisy (chest pain), high fever, chills, severe coughing that brings up small amounts of mucus, sweating, blood in the sputum (saliva and mucus expelled from the lungs), and labored breathing.
In 1936, pneumonia was the principal cause of death in the United States. Since then, it has been controlled by antibiotics, but as with tuberculosis, resistant strains of bacteria have developed, and therefore the number of cases has increased. Today, pneumonia and influenza combined are among the most significant causes of death in the United States (see Diseases).
Respiratory ailments may also take the form of lung cancer, which may or may not be a result of smoking. Cigarette smoking and air pollution are considered to be among the most significant causes of lung cancer, yet people have been known to die of the disease without being smokers or having been exposed to significant pollution.
One particularly serious variety of respiratory illness is cystic fibrosis, a genetic disorder that causes a thick mucus to build up in the respiratory system and in the pancreas, a digestive organ. (For more about genetic disorders, see Heredity; for more on role of the pancreas, see Digestion.) In the United States, the disease affects about one in every 3,900 babies born annually. No cure for cystic fibrosis exists, and the disease is invariably fatal, with only about 50% of sufferers surviving into their thirties.
Lung complications are the leading cause of death from cystic fibrosis, and most symptoms of the disease are related to the sticky mucus that clogs the lungs and pancreas. People with cystic fibrosis have trouble breathing and are highly susceptible to bacterial infections of the lungs. Coughing, while it may be irritating and painful if you have a cold, is necessary for the expulsion of infected mucus, but mucus in the lungs of a cystic fibrosis sufferer is too thick to be moved. This makes it easy for bacteria to inhabit the lungs and cause infection.
Adenosine triphosphate, an energy carrier formed when a simpler compound, adenosine diphosphate (ADP), combines with a phosphate group.
A very small blood vessel. Capillaries form networks throughout the body.
Naturally occurring compounds, consisting of carbon, hydrogen, and oxygen, whose primary function in the body is to supply energy. Included in the carbohydrate group are sugars, starches, cellulose, and various other substances. Most carbohydrates are produced by green plants in the process of undergoing photosynthesis.
A process that, when it takes place in the presence of oxygen, involves the intake of organic substances, which are broken down into carbon dioxide and water, with the release of considerable energy.
The parts of the body that work together to move blood and lymph. They include the heart, blood vessels, blood, and the lymphatic glands, such as the lymph nodes.
A substance in which atoms of more than one element are bonded chemically to one another.
A process, involving enzymes, in which a compound rich in energy is broken down into simpler substances.
A monosaccharide (sugar) that occurs widely in nature and which is the form in which animals usually receive carbohydrates. Also known as dextrose, grape sugar, and corn sugar.
An iron-containing protein in human red blood cells that is responsible for transporting oxygen to the tissues and removing carbon dioxide from them. Hemoglobin is known for its deep red color.
The portion of the blood that includes white blood cells and plasma but not red blood cells.
Masses of tissue, at certain places in the body, that act as filters for blood.
The chemical process by which nutrients are broken down and converted into energy or are used in the construction of new tissue or other material in the body. All metabolic reactions are either catabolic or anabolic.
The simplest type of carbohydrate. Monosaccharides, which cannot be broken down chemically into simpler carbohydrates, also are known as simple sugars.
Materials that supply energy or the materials to form new tissue for organisms. They include proteins, carbohydrates, lipids (fats), vitamins, and minerals.
A group (that is, a combination of atoms from two or more elements that tend to bond with other elements or compounds in certain characteristic ways) involving a phosphate, or a chemical compound that contains oxygen bonded to phosphorus.
The biological conversion of light energy (that is, electromagnetic energy) from the Sun to chemical energy in plants. In this process carbon dioxide and water are converted to carbohydrates and oxygen.
A term that can refer either to cellular respiration (see definition) or, more commonly, to the process by which an organism takes in oxygen and releases carbon dioxide. Sometimes a distinction is made between external respiration, or an exchange of gases with the external environment, and internal respiration, an exchange of gases between the body's cells and the blood.
A monosaccharide, or simple carbohydrate.
A group of cells, along with the substances that join them, that forms part of the structural materials in plants or animals. |
Key Stage 1 and 2 Curriculum
The School Curriculum in England
Section 78 of the Education Act 2002 requires us to provide a balanced and broadly based curriculum which promotes spiritual, moral, cultural, mental and physical development and which prepares our pupils for the opportunities, responsibilities and experiences of later life.
The school curriculum comprises all learning and other experiences that we plan for our pupils. The national curriculum forms part of the school curriculum.
We are required to make provision for a daily act of collective worship and must teach religious education to pupils in every key stage.
We are legally required to follow the statutory national curriculum which sets out in programmes of study, on the basis of key stages, subject content for those subjects that should be taught. We must publish online our school curriculum by subject and academic year.
We should make provision for personal, social, health and economic education (PSHE), drawing on good practice. We are also able to include other subjects or topics of our choice when planning and designing our own programme of education.
The National Curriculum in England
The national curriculum provides pupils with an introduction to the essential knowledge that they need to be educated citizens. It introduces pupils to the best that has been thought and said; and helps engender an appreciation of human creativity and achievement.
The national curriculum is just one element in the education of every child. There is time and space in the school day and in each week, term and year to range beyond the national curriculum specifications. The national curriculum provides an outline of core knowledge around which teachers can develop exciting and stimulating lessons to promote the development of pupils’ knowledge, understanding and skills as part of the wider school curriculum.
We set high expectations for every pupil. We will plan stretching work for pupils whose attainment is significantly above the expected standard. We have an even greater obligation to plan lessons for pupils who have low levels of prior attainment or come from disadvantaged backgrounds. We will use appropriate assessment to set targets which are deliberately ambitious.
We will take account of our duties under equal opportunities legislation.
A wide range of pupils have special educational needs, many of whom also have disabilities. Lessons will be planned to ensure that there are no barriers to every pupil achieving. In many cases, such planning will mean that these pupils will be able to study the full national curriculum. The SEN Code of Practice includes advice on approaches to identification of need which can support this. A minority of pupils will need access to specialist equipment and different approaches. The SEN Code of Practice outlines what needs to be done for them.
With the right teaching, that recognises their individual needs, many disabled pupils may have little need for additional resources beyond aids which they use as part of their daily life. We will plan lessons so that these pupils can study every national curriculum subject. Potential areas of difficulty should be identified and addressed at the outset of work.
We will also take account of the needs of pupils whose first language is not English. Monitoring of progress will take account of the pupil’s age, length of time in this country, previous educational experience and ability in other languages.
The ability of pupils for whom English is an additional language to take part in the national curriculum may be in advance of their communication skills in English. We will plan teaching opportunities to help pupils develop their English and aim to provide the support pupils need to take part in all subjects.
Numeracy and Mathematics
We will use every relevant subject to develop pupils’ mathematical fluency. Confidence in numeracy and other mathematical skills is a precondition of success across the national curriculum.
We will develop pupils’ numeracy and mathematical reasoning in all subjects so that they understand and appreciate the importance of mathematics. We will teach pupils to apply arithmetic fluently to problems, understand and use measures, make estimates and sense check their work. Pupils will apply their geometric and algebraic understanding, and relate their understanding of probability to the notions of risk and uncertainty. They should also understand the cycle of collecting, presenting and analysing data. We will teach them to apply their mathematics to both routine and non-routine problems, including breaking down more complex problems into a series of simpler steps.
Language and Literacy
We will develop pupils’ spoken language, reading, writing and vocabulary as integral aspects of the teaching of every subject. English is both a subject in its own right and the medium for teaching; for pupils, understanding the language provides access to the whole curriculum. Fluency in the English language is an essential foundation for success in all subjects.
Pupils will be taught to speak clearly and convey ideas confidently using Standard English. They will learn to justify ideas with reasons; ask questions to check understanding; develop vocabulary and build knowledge; negotiate; evaluate and build on the ideas of others; and select the appropriate register for effective communication. We will teach pupils to give well-structured descriptions and explanations and develop their understanding through speculating, hypothesising and exploring ideas. This will enable them to clarify their thinking as well as organise their ideas for writing.
We will develop pupils’ reading and writing in all subjects to support their acquisition of knowledge. We will teach pupils to read fluently, understand extended prose (both fiction and non-fiction) and encourage them to read for pleasure. We will promote wider reading. We will provide library facilities and set ambitious expectations for reading at home. Pupils will develop the stamina and skills to write at length, with accurate spelling and punctuation. We will teach the correct use of grammar. Pupils will build on what they have been taught to expand the range of their writing and the variety of the grammar they use. The writing they will do includes narratives, explanations, descriptions, comparisons, summaries and evaluations: such writing supports them in rehearsing, understanding and consolidating what they have heard or read.
Pupils’ acquisition and command of vocabulary are key to their learning and progress across the whole curriculum. We will therefore develop vocabulary actively, building systematically on pupils’ current knowledge. We will increase pupils’ store of words in general while also making links between known and new vocabulary and discussing the shades of meaning in similar words. In this way, pupils will expand the vocabulary choices that are available to them when they write. In addition, it is vital for pupils’ comprehension that they understand the meanings of words they meet in their reading across all subjects, and older pupils will be taught the meaning of instruction verbs that they may meet in examination questions. It is particularly important to induct pupils into the language which defines each subject in its own right, such as accurate mathematical and scientific language.
Statutory Requirements and Non-Statutory Guidance
We will teach all statutory requirements of the curriculum. In regard to the non-statutory notes and guidance we will be sensitive to the religious and cultural beliefs of all our staff and pupils. Therefore we will…
- focus upon teaching the Y6 evolution and inheritance programmes of study through natural selection.
- ensure all food products are vegetarian or Halal.
- not play music during the holy month of Ramadan.
- be sensitive to the types of music the pupils experience.
- introduce dance as movement to music.
The Local Agreed Syllabus for Religious Education in Kirklees and Calderdale 2014-2019 is the statutory curriculum for maintained schools in Calderdale and Kirklees. It is authorised by the Standing Advisory Councils (SACREs) in Calderdale and Kirklees for five years from 1st September 2014.
The syllabus requires us to teach about Christianity and another five world faiths: Buddhism, Hinduism, Islam, Judaism and Sikhism. However, there is enormous diversity within these traditions and this will be recognised in curriculum planning. The syllabus also encourages us to study faiths and traditions not included in the six world religions defined in guidance. We have discretion in this and should reflect the community and context within which we work.
In addition, we are required to include other world views throughout the study of RE. This recognises that one of RE’s most important contributions to education is enabling all learners to explore questions of meaning, purpose and value. This is important from a perspective of faith or non-religious understanding and recognises that most people do not adhere to formal religious structures.
The syllabus is supported by an extensive range of units of work which have been written by teachers from within Kirklees and Calderdale and by RE Today Services. The units of work are non-statutory and we are free to use, adapt or change these in line with our needs. Other world views is taken to mean beliefs, arguments or philosophies that approach questions of meaning and purpose without reference to belief in a deity. This may include a structured, named philosophy such as Humanism, or a more general argument or approach relevant to the questions studied. Exemplar materials are provided within the units of work. |
Global warming is irreversible, U.S. study concludes
Washington, D.C. – The U.S. Climate Change Science Program, a research project led by the National Oceanic and Atmospheric Administration (NOAA) and the U.S. Department of Commerce, has published a dramatic new report on global climate change and its impact on the United States. It is the starkest report published by the U.S. government so far, indicating that the pace of global warming can be slowed but not stopped or reversed. The advice is to adjust better to the effects of global warming.
“No matter how aggressively heat-trapping emissions are reduced, the world will still experience some continued climate change and resulting impacts,” the conclusion of the report (13 MB PDF file) states.
The scientists believe that some of the gases emitted today are long-lived and will keep atmospheric levels of heat-trapping gases high for hundreds of years. Additionally, the world’s oceans already have absorbed much of the heat added to the climate system, and it is expected that they will retain that heat and in fact sustain the global warming process “for many decades,” even if mankind is able to reduce human-induced emissions. And, of course, there are natural sources of greenhouse gas emissions, such as volcanic eruptions, as well as effects of climate change that may push the climate system past thresholds beyond which a virtually unstoppable chain reaction of ecological changes would lead to further climate change.
The agency outlines such expected changes in a different report entitled “Thresholds of Climate Change in Ecosystems”: For example, predicted warmer, drier conditions in the semiarid forests and woodlands of the southwestern United States would place those forests under more frequent water stress, resulting in the potential for shifts between vegetation types and distributions, and could trigger rapid, extensive, and dramatic forest dieback.
Alaska is often described as a key example of observed climate-related threshold change. Warming has caused a number of effects, including earlier snowmelt in the spring, reductions in sea-ice coverage, warming of permafrost, and resultant impacts to ecosystems including dramatic changes to wetlands, tundra, fisheries, and forests, including increases in the frequency and spatial extent of insect outbreaks and wildfire, scientists for the U.S. Climate Change Science Program said.
Observations indicate that the global average temperature since 1900 has risen by about 1.5° F. By 2100, it is projected to rise another 2 to 10°F, according to the report. And while temperatures in the U.S. have risen in line with the rest of the world, scientists now believe that temperatures in the U.S. are “very likely” to rise much faster than the global average down the road. “Increases at the lower end of this range are more likely if global heat-trapping gas emissions are cut substantially, and at the upper end if emissions continue to rise at or near current rates,” the report states. Human influence on the climate system is believed to remain the key factor that affects the range of temperature changes.
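For readers more used to Celsius, a change in temperature converts from Fahrenheit to Celsius by multiplying by 5/9 (the 32-degree offset does not apply, since these are differences rather than absolute readings). The small sketch below, included purely as an arithmetic illustration, applies that conversion to the figures quoted above.

```python
# Convert temperature *changes* from Fahrenheit to Celsius: delta_C = delta_F * 5/9.
# The figures 1.5, 2, and 10 degrees F are the changes quoted in the report summary above.
def delta_f_to_c(delta_f):
    return delta_f * 5.0 / 9.0

for delta_f in (1.5, 2.0, 10.0):
    print(f"a change of {delta_f} F is a change of about {delta_f_to_c(delta_f):.1f} C")
# 1.5 F ~ 0.8 C, 2 F ~ 1.1 C, 10 F ~ 5.6 C
```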
While the report states that reducing emissions of carbon dioxide is imperative, it also says that doing so would only “reduce” warming over this “century and beyond”. As a result, we will have to adapt to inevitable changes, which will “impact human health, water supply, agriculture, coastal areas, and many other aspects of society and the natural environment.” Additional expected changes include “more intense hurricanes and related increases in wind, rain, and storm surges (but not necessarily an increase in the number of storms that make landfall), as well as drier conditions in the Southwest and Caribbean.”
The report does not recommend specific actions in response to climate change but outlines two general categories of response. The first is “mitigation”, which would reduce the pace and magnitude of global warming. The second is “adaptation”, which would help humans respond better to changing climatic conditions.
Mitigation and adaptation are both essential parts of a climate change response strategy, says the report, adding that effective mitigation reduces the need for adaptation. The scientists admit that there are limits to how much adaptation can be achieved, but say there are encouraging elements here too:
“Adaptation involves deliberately adjusting to observed or anticipated changes to avoid or reduce detrimental impacts or to take advantage of beneficial ones. For example, a farmer might switch to growing a different crop variety better suited to warmer or drier conditions. A company might relocate key business centers away from coastal areas vulnerable to sea-level rise and hurricanes.” |
Benefits of Coloring and Drawing for Children
One of the benefits of coloring pages is teaching children to distinguish different colors. While every child should know the essential colors, such as red, green, blue, pink, and yellow, there is good reason to teach them the names of more obscure hues as well. Recent research shows that vocabulary helps people tell colors apart. Comparing different linguistic groups, scientists have shown that when a language has no name for a color, its speakers have a harder time differentiating similar shades of that color. If a child is never taught to recognize the difference between brilliant white and eggshell (or rose and pink, fuchsia and red, and so on), then as an adult they may never be able to tell the two apart. So discussing the subtle differences among the colors in a big box of crayons is a significant cognitive opportunity.
Knowing The Names Of Colors
There is another important aspect of teaching colors that all parents should be aware of. A recent study reported in Scientific American suggests that the position of the descriptive adjective relative to the object word makes a big difference to children's understanding. For example, in English we say the "red crayon," and it turns out this structure can be harder for children's brains to process than the structure used in a number of other languages, such as Spanish, in which one would say "the crayon is red." When teaching colors or any other important property to small children, first identify the object, then identify the property. While everyday English is not usually spoken this way, and the usual order is faster to say or write, young brains cannot process information presented in that order as effectively.
Educational Content of Activity Pages
Further consideration should be given to the subject matter depicted on the coloring page itself. While kids may be perfectly happy coloring a picture of a princess or an animal, whenever feasible adults should choose activity pages for their educational value. Pages introducing new ideas and concepts are always a good choice, and activity sheets featuring numbers and letters are ideal for growing young minds.
A statistical hypothesis is a hypothesis that can be tested by observing a process that is modeled by a set of random variables; testing such a hypothesis is sometimes called confirmatory data analysis. More generally, the primary trait of any hypothesis is that it can be tested and that those tests can be replicated. To be considered testable, a hypothesis must meet two criteria: a counter-example must be conceivable (in other words, the hypothesis must be falsifiable), and it must be analyzable with current technology. Testability is the bedrock of theory: whenever you create a hypothesis to support part of a theory, it must be testable.
A testable hypothesis is one that can serve as the basis for an experiment. It predicts a relationship between two variables and can be tested by varying one of them (the manipulated variable) and observing the effect on the other (the responding variable). For example, the testable question "What is the effect of the amount of light a corn plant gets on its growth rate?" identifies light as the manipulated variable and growth rate as the responding variable, and it leads directly to a hypothesis that can be checked. Similarly, "How does the amount of sunlight in my room affect the time at which I wake?" and "Are bass more active in the daytime than at night?" are testable questions, whereas "When whales jump out of the water, are they happy?" is not. A statement such as "dogs are more social than cats" is testable; statements that cannot be disproved are not, although many can be modified so as to become testable. A research hypothesis is the statement created by researchers when they speculate on the outcome of a research study or experiment, and it, too, must be testable, repeatable, and capable of producing reasonably clear-cut results that can be used to support or disprove it. Formulating a testable hypothesis to explain a phenomenon is the significant first step in the scientific method, which makes progress by attempting to contradict, or falsify, hypotheses. A related idea is the null hypothesis, a testable statement of no effect or no difference (for example, regarding the distribution of genetic variation) against which sample data can be compared.
Once a hypothesis has been formulated, the process of hypothesis testing becomes important. Testing proceeds in four steps: state the hypotheses, formulate an analysis plan, analyze the sample data, and interpret the results, as sketched below.
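As an illustration of those four steps, here is a minimal sketch of a one-sample t-test in Python using SciPy. The sample values, the hypothesized mean of 50, and the significance level of 0.05 are all invented for the example rather than taken from the text.

```python
# Illustrative only: a one-sample t-test following the four steps named above.
# The data, the hypothesized mean (popmean=50), and alpha=0.05 are assumptions for this sketch.
from scipy import stats

# Step 1: state the hypotheses.
#   Null hypothesis H0: the population mean equals 50.
#   Alternative H1:     the population mean differs from 50.
sample = [52.1, 49.8, 53.4, 51.2, 50.9, 52.7, 48.9, 51.5]

# Step 2: formulate the analysis plan (two-sided test at a 0.05 significance level).
alpha = 0.05

# Step 3: analyze the sample data.
t_stat, p_value = stats.ttest_1samp(sample, popmean=50)

# Step 4: interpret the results.
if p_value < alpha:
    print(f"p = {p_value:.3f} < {alpha}: reject the null hypothesis.")
else:
    print(f"p = {p_value:.3f} >= {alpha}: fail to reject the null hypothesis.")
```

The same four steps apply whatever statistical test the analysis plan calls for; only the test statistic and the way the p-value is computed change. |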
Human pluripotent stem cells are ‘master cells’ that have the ability to develop into almost any type of tissue, including brain cells. They hold huge potential for studying human development and the impact of diseases, including cancer, Alzheimer’s, multiple sclerosis, and heart disease.
In a human, it takes nine to twelve months for a single brain cell to develop fully. It can take between three and 20 weeks using current methods to create human brain cells, including grey matter (neurons) and white matter (oligodendrocytes) from an induced pluripotent stem cell – that is, a stem cell generated by reprogramming a skin cell to its ‘master’ stage. However, these methods are complex and time-consuming, often producing a mixed population of cells.
The new platform technology, OPTi-OX, optimises the way of switching on genes in human stem cells. Scientists applied OPTi-OX to the production of millions of nearly identical cells in a matter of days. In addition to the neurons, oligodendrocytes, and muscle cells the scientists created in the study, OPTi-OX holds the possibility of generating any cell type at unprecedented purities, in this short timeframe.
To produce the neurons, oligodendrocytes, and muscle cells, the team altered the DNA in the stem cells. By switching on carefully selected genes, they reprogrammed the stem cells and created a large and nearly pure population of identical cells. The ability to produce as many cells as desired combined with the speed of the development gives an advantage over other methods. The new method opens the door to drug discovery, and potentially therapeutic applications in which large amounts of cells are needed.
Study author Professor Ludovic Vallier from the Wellcome Trust-Medical Research Council Stem Cell Institute at the University of Cambridge says: “What is really exciting is we only needed to change a few ingredients – transcription factors – to produce the exact cells we wanted in less than a week. We over-expressed factors that make stem cells directly convert into the desired cells, thereby bypassing development and shortening the process to just a few days.”
OPTi-OX has applications in various projects, including the possibility to generate new cell types which may be uncovered by the Human Cell Atlas. The ability to produce human cells so quickly means the new method will facilitate more research.
Joint first author, Daniel Ortmann from the University of Cambridge, adds: “When we receive a wealth of new information on the discovery of new cells from large scale projects, like the Human Cell Atlas, it means we’ll be able to apply this method to produce any cell type in the body, but in a dish.”
Dr Mark Kotter, lead author and clinician, also from Cambridge, says: “Neurons produced in this study are already being used to understand brain development and function. This method opens the doors to producing all sorts of hard-to-access cells and tissues so we can better our understanding of diseases and the response of these tissues to newly developed therapeutics.”
The research was supported by Wellcome, the Medical Research Council, the German Research Foundation, the British Heart Foundation, The National Institute for Health Research UK and the Qatar Foundation.
Adapted from an article published by the University of Cambridge. This text is licensed under a Creative Commons Attribution 4.0 International License. |
1. Introduction to Climate Change
Climate change looms as a defining issue of the 21st century because it pits the potential disruption of our global climate system against the future of a fossil-fuel-based economy. Climate change refers to the response of the planet's climate system to altered concentrations of "greenhouse gases" in the atmosphere. If all else is held constant (e.g., cloud cover, the capacity of the oceans to absorb carbon dioxide, etc.), increases in greenhouse gases will lead to "global warming" (an increase in global average temperatures) as well as other changes in the earth's climate patterns.
The expected impacts of climate change (an increase in global temperature, a rise in the "energy" of storms, and a consequent rise in sea level) could have significant environmental and social ramifications. Weather patterns could become more extreme and unpredictable, and the intensity and frequency of floods, as well as the duration and severity of droughts, are expected to increase in many regions. These conditions, coupled with warmer temperatures, could fan the spread of water- and insect-borne diseases, such as typhoid, dengue and malaria. Areas currently facing food or water shortages could face increased shortages in the future. Forests and other ecosystems might not be able to adapt to the rate of change in temperature, leading to substantial loss of biodiversity and natural resources. The range of possible impacts is so broad and severe that many observers believe climate change to be the most significant environmental problem facing the planet.
Concern about climate change and calls for international action began in the 1970s and continued throughout the 1980s. In 1990, the United Nations authorized an Intergovernmental Negotiating Committee on Climate to begin discussions of a global treaty. These negotiations culminated in the 1992 Framework Convention on Climate Change ("the Climate Change Convention") signed at UNCED. The Climate Change Convention established a general framework, but delineated few specific and substantive obligations to curb climate change. Ongoing scientific research, however, continued to support the need for binding "targets and timetables" for the reduction of greenhouse gases. In December 1997, the Parties responded by negotiating the Kyoto Protocol to the Climate Change Convention, which established binding reduction targets for the United States and other developed countries. Despite the growing scientific urgency, the potential economic costs of limiting fossil fuel use in both developing and developed countries have led to substantial opposition to the climate change regime. Indeed, in 2001 President George W. Bush unilaterally announced that the United States would not proceed with ratification of the Protocol. This announcement led to an acrimonious split with the European Union and indeed the rest of the world. As a result, the climate regime remains complex and controversial, and it will undoubtedly evolve substantially in the years to come.
In 1683 Jan Sobieski's Polish army crushed an Ottoman army besieging Vienna, and Christian forces soon began the slow process of driving the Turks from Europe. In 1688 the Transylvanian Diet renounced Ottoman suzerainty and accepted Austrian protection. Eleven years later, the Porte officially recognized Austria's sovereignty over the region. Although an imperial decree reaffirmed the privileges of Transylvania's nobles and the status of its four "recognized" religions, Vienna assumed direct control of the region and the emperor planned annexation. The Romanian majority remained segregated from Transylvania's political life and almost totally enserfed; Romanians were forbidden to marry, relocate, or practice a trade without the permission of their landlords. Besides oppressive feudal exactions, the Orthodox Romanians had to pay tithes to the Roman Catholic or Protestant church, depending on their landlords' faith. Barred from collecting tithes, Orthodox priests lived in penury, and many labored as peasants to survive.
The Uniate Church
Under Habsburg rule, Roman Catholics dominated Transylvania's more numerous Protestants, and Vienna mounted a campaign to convert the region to Catholicism. The imperial army delivered many Protestant churches to Catholic hands, and anyone who broke from the Catholic church was liable to receive a public flogging. The Habsburgs also attempted to persuade Orthodox clergymen to join the Uniate Church, which retained Orthodox rituals and customs but accepted four key points of Catholic doctrine and acknowledged papal authority. Jesuits dispatched to Transylvania promised Orthodox clergymen heightened social status, exemption from serfdom, and material benefits. In 1699 and 1701, Emperor Leopold I decreed Transylvania's Orthodox Church to be one with the Roman Catholic Church; the Habsburgs, however, never intended to make the Uniate Church a "received" religion and did not enforce portions of Leopold's decrees that gave Uniate clergymen the same rights as Catholic priests. Despite an Orthodox synod's acceptance of union, many Orthodox clergy and faithful rejected it.
In 1711, having suppressed an eight-year rebellion of Hungarian nobles and serfs, the empire consolidated its hold on Transylvania, and within several decades the Uniate Church proved a seminal force in the rise of Romanian nationalism. Uniate clergymen had influence in Vienna; and Uniate priests schooled in Rome and Vienna acquainted the Romanians with Western ideas, wrote histories tracing their Daco-Roman origins, adapted the Latin alphabet to the Romanian language, and published Romanian grammars and prayer books. The Uniate Church's seat at Blaj, in southern Transylvania, became a center of Romanian culture.
The Romanians' struggle for equality in Transylvania found its first formidable advocate in a Uniate bishop, Inocentiu Micu Klein, who, with imperial backing, became a baron and a member of the Transylvanian Diet. From 1729 to 1744 Klein submitted petitions to Vienna on the Romanians' behalf and stubbornly took the floor of Transylvania's Diet to declare that Romanians were the inferiors of no other Transylvanian people, that they contributed more taxes and soldiers to the state than any of Transylvania's "nations," and that only enmity and outdated privileges caused their political exclusion and economic exploitation. Klein fought to gain Uniate clergymen the same rights as Catholic priests, reduce feudal obligations, restore expropriated land to Romanian peasants, and bar feudal lords from depriving Romanian children of an education. The bishop's words fell on deaf ears in Vienna; and Hungarian, German, and Szekler deputies, jealously clinging to their noble privileges, openly mocked the bishop and snarled that the Romanians were to the Transylvanian body politic what "moths are to clothing." Klein eventually fled to Rome where his appeals to the pope proved fruitless. He died in a Roman monastery in 1768. Klein's struggle, however, stirred both Uniate and Orthodox Romanians to demand equal standing. In 1762 an imperial decree established an organization for Transylvania's Orthodox community, but the empire still denied Orthodoxy equality even with the Uniate Church.
Source: U.S. Library of Congress |
Spot the Batteries
This page provides you with a snapshot of this activity, which allows students to learn about the different types of batteries that power our everyday lives. You can find batteries almost everywhere. They help us speak on the phone, get us places on bikes and in cars, and give us light when the power is off. Because batteries are so widely used and do not last forever, many batteries are thrown away. Batteries contain heavy metals like lead, mercury, nickel and cadmium, which are dangerous to the environment. This is why we need to take special care when we have finished using them. Spot everyday items that use batteries and colour them in, while learning about the different types of batteries and how you can make conscious changes to save the environment from pollution.
We recommend that you use the ‘Introduction to Solid Waste’ activity before any of the other activities, as it is a good starting point for students to learn about waste and provides a comprehensive overview of how waste is generated and its impacts on animals, the environment and us. If you have already done so and wish to continue, please click on the download icon below to download the rest of this free activity. Enjoy!
- Identify items that use batteries
- Learn about the different types of batteries
- Discover that batteries contain dangerous chemicals (like heavy metals)
- Understand that batteries need to be disposed of carefully
- Photocopies of the ‘Spot the Batteries’ A4 handout for students
- Colouring pens or pencils
1 class period
(approximately 60 minutes)
- Photocopy the ‘Spot the Batteries’ A4 handout and read through the activity information. You could bring an example of a primary battery (non-rechargeable), for example an AA battery, and a secondary battery (rechargeable) such as a mobile phone battery.
1 Introduction (20 minutes):
Explain the activity to the students.
Give the class a short introduction on batteries:
- What are batteries and how do they work?
- Why do we need batteries? Imagine a life without batteries.
- Discuss the difference between non-rechargeable (primary) and rechargeable (secondary) batteries.
Downloading the full activity is absolutely free.
Click the link, fill the form and we’ll send the download links per mail.
Lesson plan (English)
Teacher’s cheat sheet (English)
Students Handout (English)
Lesson plan (Tamil)
Teacher’s cheat sheet (Tamil)
Students Handout (Tamil) |
What is a Stroke
A stroke is the rapidly developing loss of brain functions due to a disturbance in the blood vessels supplying blood to the brain. This can be due to ischemia (lack of blood supply) caused by thrombosis or embolism, or due to a hemorrhage. As a result, the affected area of the brain is unable to function, leading to an inability to move one or more limbs on one side of the body, an inability to understand or formulate speech, or an inability to see one side of the visual field.
A stroke occurs when the supply of blood to the brain is suddenly interrupted. There are two types of strokes. When the arteries carrying blood to the brain are abruptly blocked, it is called an ischemic stroke. When a blood vessel bursts and blood seeps into the brain tissue it is known as a hemorrhagic stroke.
Types of Strokes Include:
Ischemic stroke - In an ischemic stroke, blood supply to part of the brain is decreased, leading to dysfunction of the brain tissue in that area.
Hemorrhagic stroke - Intracranial hemorrhage is the accumulation of blood anywhere within the skull vault.
Thrombotic stroke - In thrombotic stroke, a thrombus (blood clot) usually forms around atherosclerotic plaques. Since blockage of the artery is gradual, onset of symptomatic thrombotic strokes is slower.
Embolic stroke - An embolic stroke refers to the blockage of an artery by an embolus, a traveling particle or debris in the arterial bloodstream originating from elsewhere.
According to neurologists, if a stroke can be immediately recognized and medical attention made available within three hours, it is normally possible to reverse the effects, often completely. The problem is that strokes are often unrecognized since most people are unaware of the symptoms.
A severe stroke, if not treated in time, can result in death. Even if the stroke is not fatal, it may cause neurological damage that will leave the patient incapacitated for life. The brain is one of the most complex organs in the body. Even if other organs fail, the brain may continue to keep functioning. But when the brain stops functioning completely - brain death - the other organs have nothing to control them and gradually die also.
The effects of a stroke depend on the location of the obstruction - which part of the brain is deprived of blood - and the amount of tissue damage.
One side of the brain controls the opposite side of the body and also specific organs, so a stroke occurring in the right side of the brain could result in, among others:
- Paralysis of the left side of the body
- Problem with vision
- A sudden change in behavior, usually rapid erratic movement
- Loss of memory
A stroke occurring in the left side of the brain could result in, among others:
- Paralysis to the right side of the body
- Problem in speaking, incoherent speech
- Memory loss
- Slow uncertain body movements
Regardless of which side of the brain is affected, general warning signs of a stroke include:
- Any sudden weakness or numbness of the face or the limbs, especially on one side of the body
- Sudden severe headaches with no discernible cause
- A sudden onset of confusion
- being unable to talk, speaking in an unclear or garbled manner, speaking illogically
- Inability to understand what is being said
- Trouble with seeing or focusing, with both or just one eye
- A sudden onset of dizziness, loss of balance, uncoordinated physical movements or trouble in walking.
There is a simple and medically approved way to see if a person has suffered a stroke. It is called STR and is worth remembering.
STR stands for:
Smile - ask the person suspected of having had a stroke to smile.
Talk - ask the person to speak a simple sentence: describe what kind of car he owns or where he lives.
Raise - ask the person to raise his arms above his head.
Doctors suggest one other way to know if a person has suffered a stroke - ask the person to stick out his tongue. If his tongue is not straight or droops or slants to one side rather than coming straight out of his mouth, it is an indication of a stroke.
If the person has difficulty in performing any one of these tasks, it is more than likely he has suffered a stroke and medical help should be IMMEDIATELY called for.
Until help arrives, caring for a stroke victim is limited to offering support. But this is important and may prevent further deterioration of the condition while waiting for medical help. If there is someone available with CPR training, the victim's circulation, breathing and airway should be checked as per standard CPR procedure.
The paramedics should be briefed, when they arrive, on symptoms observed and action taken.
- Lay the victim down flat with the head and shoulder slightly raised to reduce the blood pressure in the brain
- If the victim is unconscious, gently roll him so he is lying on his left side and pull the chin forward. This will help to keep the airway open and allow any vomit to drain and not hamper the breathing.
- If the victim is conscious speak reassuringly and offer all the positive support you can. Keep saying that help is on the way.
- Never give a stroke victim any thing to eat or drink. The throat may be paralyzed and they may choke.
There is a relationship between high blood pressure, snoring and strokes.
Various systems have been proposed to increase recognition of stroke by patients, relatives and emergency first responders. Sudden-onset face weakness, arm drift, and abnormal speech are the findings most likely to lead to the correct identification of a case of stroke.
Stroke Awareness Information
The red ribbon is the symbol of stroke awareness as well as heart disease awareness. February is American Heart Month and May is American Stroke Month.
Quick Facts: Stroke
Hypertension (high blood pressure) accounts for 35 to 50% of stroke risk. Blood pressure reduction of 10 mmHg systolic or 5 mmHg diastolic reduces the risk of stroke by ~40%. Lowering blood pressure has been conclusively shown to prevent both ischemic and hemorrhagic strokes. It is equally important in secondary prevention. Even patients older than 80 years and those with isolated systolic hypertension benefit from antihypertensive therapy. The available evidence does not show large differences in stroke prevention between antihypertensive drugs; therefore, other factors, such as protection against other forms of cardiovascular disease and cost, should be considered. The routine use of beta-blockers following a stroke or TIA has not been shown to result in benefits.
- Dysfunctions correspond to areas in the brain that have been damaged.
- The results of stroke vary widely depending on size and location of the lesion.
- Stroke can affect people physically, mentally, emotionally, or a combination of the three.
- A stroke sufferer may be unaware of his or her own disabilities, a condition called anosognosia.
- If a stroke is severe enough, or in a certain location such as parts of the brainstem, coma or death can result.
- Post-stroke emotional difficulties include anxiety, panic attacks, flat affect (failure to express emotions), mania, apathy and psychosis.
- Cognitive and psychological outcome after a stroke can be affected by the age at which the stroke happened, pre-stroke baseline intellectual functioning, psychiatric history and whether there is pre-existing brain pathology.
- Overall two thirds of strokes occurred in those over 65 years old.
- Disability affects 75% of stroke survivors enough to decrease their employability.
- Stroke was the second most frequent cause of death worldwide in 2011, accounting for 6.2 million deaths (~11% of the total).
- Approximately 17 million people had a stroke in 2010 and 33 million people have previously had a stroke and were still alive.
- Between 1990 and 2010 the number of strokes decreased by approximately 10% in the developed world and increased by 10% in the developing world.
- 30 to 50% of stroke survivors suffer post-stroke depression, which is characterized by lethargy, irritability, sleep disturbances, lowered self-esteem and withdrawal.
- Up to 10% of people following a stroke develop seizures, most commonly in the week subsequent to the event; the severity of the stroke increases the likelihood of a seizure.
Article Source: disabled-world.com |
Rotation of the midgut happens during the second month of intra-uterine life. This is the gastrointestinal tract, consisting of the foregut, the hindgut, and the midgut. The midgut is continuous with the vitelline duct or yolk stalk, which later becomes obliterated.
Here's the aorta, here are the three arteries that supply the GI tract: the celiac for the foregut, the inferior mesenteric for the hindgut, and the superior mesenteric for the midgut.
As the midgut develops it protrudes into the body stalk forming a loop, with the superior mesenteric artery forming the axis of the loop.
As it protrudes, the midgut loop makes a quarter turn counter-clockwise, so its distal part is to the left and its proximal part is to the right. The distal part of the loop develops a bulge that will become the cecum, and the proximal part of the loop becomes quite convoluted.
During the time these changes are happening, the body continues to grow, and the abdominal cavity becomes large enough to allow the midgut to return.
The proximal part of the loop returns first. It passes under the distal part, and over to the left, that’s towards us in this view. The distal part of the loop returns last. It passes in front of the proximal part, and ends up over to the right.
Let's look at the same sequence of events from in front and somewhat to the left, so that we can understand how these changes produce a rotation of the midgut.
Here's the midgut loop protruding towards us, and making its first quarter turn counter clockwise. A bulge appears for the cecum, and the proximal part of the loop becomes convoluted.
The abdomen becomes larger, and the proximal limb of the loop returns. It passes under the distal limb, in effect making another quarter turn counter clockwise. Then the distal limb returns, completing the third quarter turn. This proximal part of the midgut, the distal duodenum, ends up behind this distal part of the midgut, the proximal transverse colon.
Understanding those developmental changes helps us understand not only where the duodenum lies, but also why the colon is where it is. |
Feldspars crystallize from magma in both intrusive and extrusive rocks, and they can also occur as compact minerals, as veins, and are also present in many types of metamorphic rock. Rock formed entirely of plagioclase feldspar (see below) is known as anorthosite. Feldspars are also found in many types of sedimentary rock.
Feldspar is derived from the German Feld, field, and Spat, a rock that does not contain ore. "Feldspathic" refers to materials that contain feldspar. The alternative spelling, felspar, has now largely fallen out of use.
This group of minerals consists of framework or tectosilicates. Compositions of major elements in common feldspars can be expressed in terms of three endmembers: the potassium feldspar (K-feldspar) endmember KAlSi3O8, the albite endmember NaAlSi3O8, and the anorthite endmember CaAl2Si2O8.
Solid solutions between K-feldspar and albite are called alkali feldspar. Solid solutions between albite and anorthite are called plagioclase. Only limited solid solution occurs between K-feldspar and anorthite, and in the two other solid solutions, immiscibility occurs at temperatures common in the crust of the earth.
Sanidine (monoclinic), orthoclase, and microcline (triclinic) refer to polymorphs of K-feldspar. Sanidine is stable at the highest temperatures, and microcline at the lowest. Perthite is a typical texture in alkali feldspar, due to exsolution of contrasting alkali feldspar compositions during cooling of an intermediate composition. The perthitic textures in the alkali feldspars of many granites are coarse enough to be visible to the naked eye.
Compositions of the plagioclase series have been labeled as follows (percent anorthite in parentheses):
- albite (0 to 10)
- oligoclase (10 to 30)
- andesine (30 to 50)
- labradorite (50 to 70)
- bytownite (70 to 90)
- anorthite (90 to 100)
Intermediate compositions of plagioclase feldspar also may exsolve to two feldspars of contrasting composition during cooling, but diffusion is much slower than in alkali feldspar, and the resulting two-feldspar intergrowths typically are too fine-grained to be visible with optical microscopes. The immiscibility gaps in the plagioclase solid solution are complex compared to the gap in the alkali feldspars. The play of colors visible in some feldspar of labradorite composition is due to very fine-grained exsolution lamellae.
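As a simple illustration of the anorthite-percentage ranges listed above (not part of the original article), a short function can map a composition to its plagioclase name:

    def plagioclase_name(anorthite_percent: float) -> str:
        """Return the plagioclase series name for a given anorthite percentage (0-100)."""
        ranges = [
            (10, "albite"),
            (30, "oligoclase"),
            (50, "andesine"),
            (70, "labradorite"),
            (90, "bytownite"),
            (100, "anorthite"),
        ]
        # Boundary values (e.g. exactly 10) are assigned to the lower-numbered name here.
        for upper_bound, name in ranges:
            if anorthite_percent <= upper_bound:
                return name
        raise ValueError("anorthite percentage must be between 0 and 100")

    print(plagioclase_name(55))  # labradorite
    print(plagioclase_name(5))   # albite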
- Feldspar is a common raw material in the production of ceramics.
- Feldspars are used for thermoluminescence dating and optical dating in earth sciences and archaeology
- Feldspar is an ingredient in Bon Ami brand household cleaner. |
Ratified by the required three-fourths of states on July 9, 1868. Reprinted on GPO Access: Constitution of the United States (Web site)
Ex-slaves are granted citizenship and afforded civil liberties
"No State shall make or enforce any law which shall abridge the privileges or immunities of citizens of the United States; nor shall any State deprive any person of life, liberty, or property, without due process of law.…"
The rise of "Black Codes"—discriminatory local laws subjecting African Americans to harsher penalties or forced labor for certain crimes, among other restrictions—prompted the U.S. Congress to pass the Civil Rights Act of 1866 (see Chapter 8). The bill stated that all African Americans born in the United States were citizens entitled to the "full and equal benefit of all laws" enjoyed by whites. It also outlawed providing "different punishment, pains, or penalties" for ex-slaves than for whites.
Congress gathered the two-thirds majority necessary to pass the bill over the veto of President Andrew Johnson (1808–1875; served 1865–69). But the Northerners still had two problems. U.S. representative Thaddeus Stevens (1792–1868) of Pennsylvania, a leader of the antislavery Radical Republicans, feared the Civil Rights Act could be overturned by a future Congress less sympathetic to African Americans' rights. Congress also needed to establish the ground rules for Reconstruction, the process of bringing the Southern states back into the Union after the American Civil War (1861–65).
The Fourteenth Amendment to the U.S. Constitution, introduced April 30, 1866, tried to answer both concerns. The first section echoed the key points of the Civil Rights Act: The states could not "deprive any person of life, liberty, or property, without due process of law" or deny any person "equal protection of the laws." Placing those rights in the Constitution would make it much harder for a future Congress to take them away, as it would require passing a new amendment, a process that needs the approval of at least two-thirds of the members of Congress and at least three-fourths of the states.
The second section of the Fourteenth Amendment offered a compromise in the heated debate over African American suffrage (voting rights). The Radical Republicans had pushed for a measure granting African American men the right to vote, arguing that ex-slaves would not be truly free without a voice in the political process. But some moderate congressmen (many of them facing reelection in 1866) feared the idea would be unpopular even in the North, where many whites who opposed slavery still viewed African Americans as inferior. White voters in Connecticut, Minnesota, and Wisconsin had defeated proposals in 1865 to give African Americans the ballot in those states. There was also a legal question of whether the decision on African American suffrage belonged to the federal government or the individual states.
The amendment offered a compromise: Any state that denied some men the right to vote would not be able to count those men in the congressional districts, which are defined by population. Moderates hoped that measure would give Southern states an incentive to give African American men the ballot, as the South would lose up to one-third of its congressional seats if it failed to do so. But leading abolitionists (opponents of slavery) such as Frederick Douglass (1817–1895), who was African American, criticized the compromise. "To say that I am a citizen to pay taxes … obey the laws, support the government, and fight the battles of the country, but, in all that respects voting and representation [in Congress], I am but as so much inert [powerless] matter, is to insult my manhood," said Douglass, as quoted in The Struggle for Equality.
The third section of the Fourteenth Amendment also reflected a compromise, this time over the political rights of ex-Confederates. To prevent the former "rebels" from having a hand in building the new Southern state governments, the Radical Republicans wanted to bar all former supporters of the Confederacy from voting until 1870. But the moderates thought the measure was too extreme. After some debate, Congress changed the section to allow ex-Confederates to vote, but to exclude some of the higher-ranking officials from elected office. Specifically, anyone who had taken an oath before the war to support the Constitution (as most elected officials do), then participated in the "rebellion" against the Union, was not allowed to hold elected office again.
The fourth section rejected any responsibility for the debt accumulated by the Confederacy during the war. Finally, the fifth section gave Congress the power to enforce these measures by passing laws.
In most points, the Fourteenth Amendment reflected a compromise between the ideals of the abolitionists and the reality of what most whites were willing to accept. As quoted in Reconstruction: America's Unfinished Revolution, U.S. senator James W. Grimes (1816–1872) of Iowa said, "It is not exactly what any of us wanted, but we were each compelled to surrender some of our individual preferences in order to secure anything." Some abolitionists supported the amendment, hoping it would lay the groundwork for a future amendment granting African Americans the right to vote. Others opposed it for not going far enough. They particularly opposed the idea of allowing Southern states back into the Union if they approved the Fourteenth Amendment. They feared this would bring an end to Reconstruction without placing the ballot in the hands of African American men.
Congress was split along party lines—Republicans in favor of the amendment, Democrats against it—for months of passionate debate. U.S. representative Andrew J. Rogers (1828–1900), a Democrat from New Jersey, said the amendment was an attempt to legitimize (or legally justify) the Civil Rights Act, which he believed to be unconstitutional, according to a speech reprinted in Reconstruction: Opposing Viewpoints. The Civil Rights Act and the amendment both stepped on states' rights to determine how African Americans should be treated, Rogers said.
Take the State of Kentucky, for instance. According to her laws, if a negro commits a rape upon a white woman he is punished by death. If a white man commits that offense, the punishment is imprisonment. Now, according to this proposed amendment, the Congress of the United States is to have the right to repeal [undo] the law of Kentucky and compel that State to inflict the same punishment upon a white man for rape as upon a black man.
But U.S. representative John A. Bingham (1815–1900), an Ohio Republican who helped draft the amendment, said the measure simply allowed Congress to enforce the rights already outlined in the Constitution. Specifically, the Fifth Amendment states no person shall be "deprived of life, liberty, or property without due process of the law." The Fourteenth Amendment would allow the federal government to step in if one of the states so deprived its citizens, Bingham argued in a speech reprinted in Reconstruction: Opposing Viewpoints.
The adoption of the proposed amendment will take from the States no rights that belong to the States. They elect their Legislatures; they enact their laws for the punishment of crimes against life, liberty, or property; but in the event of the adoption of this amendment, if they conspire together to enact laws refusing equal protection to life, liberty, or property, the Congress is thereby vested with power to hold them to answer before the bar of the national courts for the violation of their oaths and of the rights of their fellow-men. Why should it not be so?
Stevens, one of the leading Radical Republicans in the House, summed up the amendment this way: "Whatever law punishes a white man for a crime shall punish the black man precisely in the same way and to the same degree. Whatever law protects the white man shall afford 'equal' protection to the black man." The House of Representatives approved the amendment in May 1866; the Senate did the same the following month. Now the measure needed at least three-fourths of the states to pass it.
Things to remember while reading the Fourteenth Amendment:
- In response to the discriminatory Black Codes that popped up in the South after the Civil War, Congress passed the Civil Rights Act of 1866, which prohibited states from creating different laws or criminal penalties for African Americans and whites. But some Republicans feared a future Congress that was less sympathetic to African Americans could overturn the Civil Rights Act. So they put similar civil rights protections in the first section of the Fourteenth Amendment, knowing it would be very difficult for a future Congress to undo an amendment to the Constitution.
- Congress, like the nineteenth-century public, was divided on whether African American men should have the right to vote, and whether the federal government or the states should make the decision. The second section of the Fourteenth Amendment offered a compromise: States that refused to grant some men the ballot could lose some of their congressional seats, which are based on population. Some people hoped this would give states the incentive to provide African American men the ballot.
- After the Civil War, many Northerners were wary of seeing former Confederate leaders appear in the new Southern state governments—a possible sign that the "rebellion" against the Union had not been extinguished. The third section of the Fourteenth Amendment addresses that concern by barring certain high-ranking ex-Confederates from elected office.
Section 1. All persons born or naturalized in the United States and subject to the jurisdiction thereof, are citizens of the United States and of the State wherein they reside. No State shall make or enforce any law which shall abridge the privileges or immunities of citizens of the United States; nor shall any State deprive any person of life, liberty, or property, without due process of law; nor deny to any person within its jurisdiction the equal protection of the laws.
Section 2. Representatives shall be apportioned among the several States according to their respective numbers, counting the whole number of persons in each State, excluding Indians not taxed. But when the right to vote at any election for the choice of electors for President and Vice President of the United States, Representatives in Congress, the Executive and Judicial officers of a State, or the members of the Legislature thereof, is denied to any of the male inhabitants of such State, being twenty-one years of age, and citizens of the United States, or in any way abridged, except for participation in rebellion, or other crime, the basis of representation therein shall be reduced in the proportion which the number of such male citizens shall bear to the whole number of male citizens twenty-one years of age in such State.
Section 3. No person shall be a Senator or Representative in Congress, or elector of President and Vice President, or hold any office, civil or military, under the United States, or under any State, who, having previously taken an oath, as a member of Congress, or as an officer of the United States, or as a member of any State legislature, or as an executive or judicial officer of any State, to support the Constitution of the United States, shall have engaged in insurrection or rebellion against the same, or given aid or comfort to the enemies thereof. But Congress may by a vote of two-thirds of each House, remove such disability.
Section 4. The validity of the public debt of the United States, authorized by law, including debts incurred for payment of pensions and bounties for services in suppressing insurrection or rebellion, shall not be questioned. But neither the United States nor any State shall assume or pay any debt or obligation incurred in aid of insurrection or rebellion against the United States, or any claim for the loss or emancipation of any slave; but all such debts, obligations and claims shall be held illegal and void.
Section 5. The Congress shall have power to enforce, by appropriate legislation, the provisions of this article.
What happened next …
Congress discussed the possibility of requiring the Southern states to adopt the Fourteenth Amendment in order to rejoin the Union. The state of Tennessee jumped at the idea and passed the Fourteenth Amendment during the summer of 1866, then asked to be readmitted to the Union. In the spirit of cooperation, Congress agreed, but some abolitionists were upset this was done without giving African American men in Tennessee the ballot. "Tennessee is permitted to deny to her blacks a voice in the state, while she herself is permitted to resume her voice in the nation," wrote social reformer Theodore Tilton (1835–1907), as quoted in The Struggle for Equality. "The spectacle is a national humiliation."
The Fourteenth Amendment would face a rocky road to approval. The rest of the ex-Confederate states initially rejected it, although some of the new Southern state governments formed under the Reconstruction Acts of 1867 (see Chapter 10) later approved it. Complicating matters, two Northern states that approved it—New Jersey and Ohio—later passed resolutions withdrawing their support. On July 20, 1868, Secretary of State William Seward (1801–1872) announced that the amendment received the necessary three-fourths support from the states, but his count included the withdrawn states of New Jersey and Ohio. In a rare step, Congress passed a resolution declaring the Fourteenth Amendment to be part of the Constitution. In order to reach the three-fourths state approval requirement, Congress's count excluded the Southern states that had not yet been readmitted to the Union. According to The Reconstruction of the Nation, despite this unusual route to approval, "the amendment has been completely validated by practice and judicial decree [court decisions]."
As for the question of African American suffrage, Congress would offer a separate amendment in 1869 granting African American men the right to vote. The Fifteenth Amendment would be approved the following year (see Chapter 16). But it is worth noting: Another half-century would pass before women of either color would get the ballot under a separate constitutional amendment.
Did you know …
- Several leaders of the women's suffrage movement, including Susan B. Anthony (1820–1906) and Elizabeth Cady Stanton (1815–1902), also supported the abolitionist movement to end slavery and give African Americans the ballot (see box). Both causes were based on the idea that all people are equal. But the leaders of the two movements parted ways over the Fourteenth Amendment, as noted in Reconstruction: America's Unfinished Revolution. For the first time in the Constitution, the amendment used the word "male" to describe people entitled to voting rights. The women's suffrage leaders felt "a deep sense of betrayal."
- During the 1866 campaign, some Northern Congressmen painted the amendment as a way to keep the South from attaining too much political power—not as a measure to help African Americans.
- New Jersey, one of two Northern states that withdrew its support for the amendment, later reversed itself again. The state made a symbolic announcement on November 12, 1980, supporting the Fourteenth Amendment. Other states offered their support to the amendment years after it was ratified: Delaware in 1901, Maryland and California in 1959, and Kentucky in 1976.
Consider the following …
- Why didn't the Fourteenth Amendment guarantee voting rights for African American men?
- What political rights, if any, do you think former Confederates should have had after the Civil War?
- Why did the majority in Congress press for the Fourteenth Amendment, after they already passed the Civil Rights Act with some of the same provisions?
For More Information
Flexner, Eleanor. Century of Struggle: The Woman's Rights Movement in the United States. Cambridge, MA: Harvard University Press, 1975.
Foner, Eric. Reconstruction: America's Unfinished Revolution. New York: Harper & Row, 1988.
McPherson, James M. The Struggle for Equality. Princeton, NJ: Princeton University Press, 1964.
Patrick, Rembert W. The Reconstruction of the Nation. New York: Oxford University Press, 1967.
Stalcup, Brenda, ed. Reconstruction: Opposing Viewpoints. San Diego: Greenhaven Press, 1995.
United States Government Printing Office. "Fourteenth Amendment—Rights Guaranteed: Privileges and Immunities of Citizenship, Due Process, and Equal Protection." GPO Access: Constitution of the United States. http://www.gpoaccess.gov/constitution/html/amdt14.html (accessed on September 20, 2004). |
Lesson 3: How Computers Work With Pictures
Picture this. A computer is made up of millions of electronic switches (transistors). They're either on or off, open or closed.
Now picture this. Your computer screen has hundreds of thousands of dots arranged in rows and columns. Each dot is a picture element or pixel. Each of these pixels displays some combination of red/green/blue according to a device called a Video Graphics Array (VGA). The VGA translates binary-coded information (0s and 1s) into the color combinations required to make up an image on your computer screen.
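As a simplified illustration of the idea (not a literal description of VGA hardware), the sketch below stores one pixel as three 8-bit red/green/blue values and shows the 0s and 1s behind them; the colour values and the tiny 2x2 image are arbitrary examples.

    # One pixel stored as three 8-bit red/green/blue values (the colour is an arbitrary example).
    red, green, blue = 255, 165, 0

    # Each channel fits in 8 bits (one byte), so this pixel takes 24 bits in total.
    for name, value in [("red", red), ("green", green), ("blue", blue)]:
        print(f"{name:>5}: {value:3d} -> {value:08b}")

    # A tiny 2x2 "image" is then just rows and columns of such pixels.
    image = [
        [(255, 0, 0), (0, 255, 0)],
        [(0, 0, 255), (255, 255, 255)],
    ]
    print(f"{len(image)} rows x {len(image[0])} columns of pixels")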
No. 62: Mar-Apr 1989
Most of the Martian surface is thought to be more than 3.8 billion years old. This portion is densely cratered from a period of heavy meteorite bombardment. It is also carved by many channels that are thought to have been cut in ancient times by flowing water, water which quickly escaped into space or combined chemically with Martian minerals. The present atmosphere of Mars, in consequence, contains little water vapor.
But some of the Martian landscape, notably Alba Patera, raises questions about the above scenario. The anomalous characteristic of Alba Patera is its relative smoothness and scarcity of impact craters. This Martian real estate is believed to be 2 billion years younger than the rest of the planet. Even so, it, too, is marked by "fluvial" features that resemble stream beds.
Question #1. How did Alba Patera get smoothed out or "reworked"? In other words, what happened to the ancient craters that must have pocked its surface, as they do everywhere else?
Question #2. Where did the water come from to cut Alba Patera's stream beds if all of the Martian water disappeared 2 billion years earlier?
One line of thought maintains that "fluvial" does not mean "pluvial," and that Martian water has come from below rather than as rain from the atmosphere. Both fluvial episodes, in this view, occurred when something caused the Martian crust to release huge quantities of stored water. Hydrothermal activity is mentioned as a possibility.
(Eberhart, J.; "The Martian Atmosphere: Old Versus New," Science News, 135:21, 1989.)
Comment. Another speculation is that immense quantities of Martian water are tied up in methane hydrate and are released when the ambient temperature is somehow increased or perhaps by seismic activity.
Reference. Anomalous characteristics of the Martian surface are cataloged in chapters AME and AMO in our catalog: The Moon and the Planets. Details here. |
Their technique can both dramatically speed up and improve the accuracy of the most precise and delicate nanoscale measurements done with atomic force microscopy (AFM).
If you’re trying to measure the contours of a surface with a ruler that’s crumbling away as you work, then you at least need to know how fast and to what extent it is being worn away during the measurement.
This has been the challenge for researchers and manufacturers trying to create images of the surfaces of nanomaterials and nanostructures. Taking a photo is impossible at such small scales, so researchers use atomic force microscopes. Think of a device like a phonograph needle being used, on a nanoscale, to measure the peaks and valleys as it’s dragged back and forth across a surface. These devices are used extensively in nanoscale imaging to measure the contours of nanostructures, but the AFM tips are so small that they tend to wear down as they traverse the surface being measured.
Today, most researchers stop the measurement to “take a picture” of the tip with an electron microscope, a time-consuming method prone to inaccuracies.
NIST materials engineer Jason Killgore has developed a method for measuring in real time the extent to which AFM tips wear down. Killgore measures the resonant frequency of the AFM sensor tip, a natural vibration rate like that of a tuning fork, while the instrument is in use. Because changes to the size and shape of the tip affect its resonant frequency, he is able to measure the size of the AFM’s tip as it works—in increments of a tenth of a nanometer, essentially atomic scale resolution. The technique is called contact resonance force microscopy.
The potential impact of this development is considerable. Thousands of AFMs are in use at universities, manufacturing plants and research and development facilities around the world. Improving their ability to measure and image nanosized devices will improve the quality and effectiveness of those devices.
Another benefit is that developing new measurement tips—and studying the properties of new materials used in those tips—will be much easier and faster, given the immediate feedback about wear rates.
COMPAMED.de; Source: National Institute of Standards and Technology (NIST) |
Japanese terms that do not belong to any of the inflected grammatical word classes, often lacking their own grammatical functions and forming other parts of speech or expressing the relationship between clauses.
- Category:Japanese interrogative particles: Japanese particles that indicate questions.
The Japanese part of speech called the particle (or relational) relates the preceding word to the rest of the sentence, or affects the mode of the preceding sentence. Japanese particles include everything described by the term 助詞 (じょし, joshi), as the concept is much better and more consistently defined in Japanese than in English.
This category has only the following subcategory.
- ► Japanese interrogative particles (0 c, 2 e)
Pages in category "Japanese particles"
The following 89 pages are in this category, out of 89 total. |
Mathematician - An individual who studies and explores such topics as numbers, quantities, and space, finding patterns, solving problems, and forming conjectures. Those solving problems outside of pure mathematics are called applied mathematicians and tackle many problems in related scientific fields.
Symmetric - A geometric term that describes a regular, balanced and repeated pattern.
Old Glory - A common nickname for the flag of the United States of America. The term was coined by William Driver, an early 19th century sea captain. |
An aneurysm is a blood-filled dilatation of a local area of a blood vessel. The cause of an aneurysm is generally unknown. Aneurysms can be present before birth or may be caused by disease or weakening of a blood vessel wall. Most aneurysms occur in adults, but they may also occur in children. Some people have a family history of aneurysm. Aneurysms usually don't cause symptoms until they burst or rupture.
Ruptured aneurysms can be fatal; therefore, a quick and accurate diagnosis is essential. Symptoms may include: a sudden and severe headache, coma or loss of consciousness, eye movement problems, photophobia (eyes are sensitive to the light), seizures, stiff neck, vomiting, and weakness. A child with symptoms of a burst aneurysm will be seen by a pediatric neurologist, neurosurgeon, and/or cardiologist for more definitive diagnosis using angiography, CT scan, and/or MRI scan. Almost all aneurysms must be treated. In most cases, aneurysms can be successfully treated through surgery.
How a child does after a ruptured aneurysm depends on how much bleeding occurred and how function and consciousness were initially affected by the ruptured aneurysm.
These symptoms may include:
- Weakness in the large and small muscles
- Difficulty eating and feeding
- Difficulty with thinking skills
- Difficulty learning new things
- Difficulty with speech and/or language
- Difficulty with sensory processing
- Many other accompanying difficulties.
The first six months after diagnosis of an aneurysm are critical to recovery.
After recovery from surgery, your child will be referred for rehabilitation evaluation and therapy. A child may need an evaluation with the Speech Language Pathologist (SLP), the Physical Therapist (PT) and the Occupational Therapist (OT).
At Intermountain Pediatric Rehabilitation, we work with each family to set goals that best meet your child’s needs, and provide the best evidence-based treatment for each symptom.
Your child will likely be treated both at the hospital and after they go home at an outpatient clinic. Your child may need help with school as well. |
Essay Topic 1
Discuss the structure and role of the classical Greek play, being sure to address the following: Is there a particular format a tragedy was to follow; what did audiences expect to receive when going to see a play in ancient Greece; were the plays used primarily for storytelling, or was there also another purpose?
Essay Topic 2
The chorus plays an integral role within "Medea" and many classic Greek plays. Discuss this role, considering the following: What was the primary function of the chorus; how were the lines of the chorus delivered, and why was that delivery used; what is the interaction between the audience and the chorus?
Essay Topic 3
Medea has been a reviled character as long as the myth has been known, and yet Euripides clearly attempts to make the character sympathetic. Discuss the various methods the author uses to elicit audience sympathy for Medea...
This section contains 934 words
(approx. 4 pages at 300 words per page) |
Management Information System
Syed. Sajid. Hussain.
MINHAJ UNIVERSITY Gulberg Campus Lahore.
A protocol is a set of rules that governs the communications between computers on a network. These
rules include guidelines that regulate the following characteristics of a network: access method, allowed
physical topologies, types of cabling, and speed of data transfer.
• IP - Internet Protocol
IP assigns a unique number to every network device in the world, which is called an IP address.
• TCP - Transmission Control Protocol
TCP provides a reliable stream delivery and virtual connection service to applications through the use of
sequenced acknowledgment with retransmission of packets when necessary.
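As a brief, hedged illustration of these two protocols in practice (not part of the original notes), the sketch below uses Python's standard ipaddress and socket modules; the address, host name and port are placeholder example values.

    import ipaddress
    import socket

    # IP: every device on the network is identified by a numeric address.
    addr = ipaddress.ip_address("192.0.2.10")   # an address from the documentation/example range
    print(addr.version, addr in ipaddress.ip_network("192.0.2.0/24"))

    # TCP: a reliable byte stream on top of IP (host and port are placeholder values).
    with socket.create_connection(("example.com", 80), timeout=5) as conn:
        conn.sendall(b"HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n")
        print(conn.recv(1024).decode(errors="replace"))  # TCP handles sequencing and retransmission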
Several physical data-transmission media are available to connect together the various devices on a
network. One possibility is to use cables.
There are many types of cables, but the most common are:
1. Coaxial cable
2. Twisted pair
3. Optical fibre
1. Coaxial Cable:
Coaxial cable has long been the preferred form of cabling, for the simple reason that it is inexpensive
and easily handled (weight, flexibility, ...).
A coaxial cable is made up of a central copper wire (called a core) surrounded by an insulator, and then a
braided metal shield.
2. Twisted Pair Cable:
In its simplest form, twisted-pair cable consists of two copper strands woven into a braid and covered with insulation.
• Twisted Pair Connectors:
Twisted pair cable is connected using an RJ-45 connector. This connector is similar to the RJ-11 used in
telephony, but differs on a few points: RJ-45 is slightly larger and cannot be inserted into an RJ-11 jack.
In addition, the RJ-45 has eight pins while the RJ-11 has no more than six, usually only four.
3. Fibre Optics:
Optical fibre is a cable with numerous advantages:
• Immune to noise
• Low attenuation
• Tolerates data rates on the order of 100 Mbps
• Bandwidth from tens of megahertz to several gigahertz (monomode fibre)
Fibre optic cabling is particularly suited to use as a central link between several buildings, as it allows connections over long distances of several kilometres. Furthermore, this type of cable is very secure, as it is extremely difficult to tap into such a cable.
The physical topology of a network refers to the configuration of cables, computers, and other peripherals.
Types of Network Topologies:
• Star Topology
• Ring Topology
• Bus Topology
• Tree Topology
• Mesh Topology
Many home networks use the star topology. A star network features a central connection point called a
"hub" that may be a hub, switch or router. Devices typically connect to the hub with Unshielded Twisted
Pair (UTP) Ethernet. A star network generally requires more cable, but a failure in any star network cable
will only take down one computer's network access and not the entire LAN.
Advantages of a star topology:
• Easy to install and wire.
• No disruptions to the network when connecting or removing devices.
• Easy to detect faults and to remove parts.
Disadvantages of a star topology:
• Requires more cable length than a linear topology.
• If the hub or concentrator fails, nodes attached are disabled.
• More expensive than linear bus topologies because of the cost of the concentrators.
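One way to picture the star layout in code is as a toy adjacency list in which every device links only to the hub; this is an illustrative sketch, not part of the original notes, and the device names are made up.

    # Toy adjacency list for a five-node star network: every device talks only via the hub.
    star = {
        "hub": ["pc1", "pc2", "pc3", "printer"],
        "pc1": ["hub"],
        "pc2": ["hub"],
        "pc3": ["hub"],
        "printer": ["hub"],
    }

    # If the hub fails, no other node has an alternative link -- the weakness noted above.
    nodes_with_other_paths = [n for n, links in star.items() if n != "hub" and links != ["hub"]]
    print(nodes_with_other_paths)  # [] -> every non-hub device depends entirely on the hub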
In a ring network, every device has exactly two neighbors for communication purposes. All messages
travel through a ring in the same direction (either "clockwise" or "counterclockwise"). A failure in any
cable or device breaks the loop and can take down the entire network.
Ring topologies are found in some office buildings or school campuses.
A single cable, the backbone, functions as a shared communication medium that devices attach or tap
into with an interface connector. A device wanting to communicate with another device on the network
sends a broadcast message onto the wire that all other devices see, but only the intended recipient
actually accepts and processes the message.
Ethernet bus topologies are relatively easy to install and don't require much cabling compared to the
alternatives. However, bus networks work best with a limited number of devices. If more than a few
dozen computers are added to a network bus, performance problems will likely result. In addition, if the
backbone cable fails, the entire network effectively becomes unusable.
Advantages of a bus topology:
• Easy to connect a computer or peripheral to a linear bus.
• Requires less cable length than a star topology.
Disadvantages of a bus topology:
• Entire network shuts down if there is a break in the main cable.
• Terminators are required at both ends of the backbone cable.
• Difficult to identify the problem if the entire network shuts down.
• Not meant to be used as a stand-alone solution in a large building.
Tree topologies integrate multiple star topologies together onto a bus. In its simplest form, only hub
devices connect directly to the tree bus, and each hub functions as the "root" of a tree of devices.
Advantages of a tree topology:
• Point-to-point wiring for individual segments.
• Supported by several hardware and software vendors.
Disadvantages of a tree topology:
• Overall length of each segment is limited by the type of cabling used.
• If the backbone line breaks, the entire segment goes down.
• More difficult to configure and wire than other topologies.
Mesh topologies involve the concept of routes. Unlike each of the previous topologies, messages sent on
a mesh network can take any of several possible paths from source to destination.
A mesh network in which every device connects to every other is called a full mesh. Partial mesh networks also exist, in which some devices connect only indirectly to others.
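A quick way to see why a full mesh becomes expensive is to count the links: with n devices, a full mesh needs n(n-1)/2 point-to-point connections. The short sketch below, added as an illustration, prints this count for a few example network sizes.

    def full_mesh_links(n: int) -> int:
        """Number of point-to-point links needed to connect n devices in a full mesh."""
        return n * (n - 1) // 2

    for devices in (4, 10, 50):
        print(f"{devices:3d} devices -> {full_mesh_links(devices):5d} links")
    # 4 devices need 6 links, 10 need 45, and 50 already need 1,225 links,
    # which is why large meshes are usually only partial meshes.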
►Local Area Network◄
A local area network (LAN) is a network used for connecting a business or organisation's computers to
one another in order to:
• Exchange information
• Access various services
A local area network usually links computers (or resources such as printers) using a wired transmission
medium (most frequently twisted pairs or coaxial cables) over a circumference of about a hundred metres.
Hardware components of a local area network
A local area network is made of computers linked by a set of software and hardware elements. The
hardware elements used for connecting computers to one another are:
• Network Card:
This is a card connected to the computer's motherboard, which interfaces with the
physical medium, meaning the physical lines over which the information travels.
• Transceiver:
This is used to transform the signals travelling over the physical medium into
logical signals that the network card can manipulate, both when sending and receiving data.
• Connector (socket):
This is the element used to mechanically connect the network card with the physical medium.
• Physical Medium:
This is the support (generally wired, meaning that it's in the form of a cable) used
to link the computers together. The main physical support media used in local area networks are:
• Coaxial cable
• Twisted pair
• Fibre optics
►Wide Area Network◄
A Wide Area Network (WAN) is a computer network covering a broad geographic area, which may
spread across the entire world. WANs often connect multiple smaller networks, such as local area
networks (LANs) or metro area networks (MANs). The world's most popular WAN is the Internet. Some
segments of the Internet are also WANs in themselves.
The key difference between WAN and LAN technologies is scalability. A WAN must be able to grow as
needed to cover multiple cities, even countries and continents. A set of switches and routers are
interconnected to form a Wide Area Network.
WANs are either point-to-point, involving a direct connection between two sites, or operate across
packet-switched networks, in which data is transmitted in packets over shared circuits. Point-to-point
WAN service may involve either analog dial-up lines, in which a modem is used to connect the
computer to the telephone line, or dedicated leased digital telephone lines, also known as "private lines."
The user of a WAN usually does not own the communications lines that connect the remote computer
systems; instead, the user subscribes to a service through a telecommunications provider. Unlike LANs,
WANs typically do not link individual computers, but rather are used to link LANs. WANs also transmit
data at slower speeds than LANs. WANs are also structurally similar to metropolitan area networks
(MANs), but provide communications links for distances greater than 50 kilometers.
►Metropolitan Area Network◄
A MAN is a relatively new class of network; it serves a role similar to an ISP, but for corporate users
with large LANs. There are three important features which distinguish MANs from LANs and WANs:
1. The network size falls intermediate between LANs and WANs. A MAN typically covers an area of
between 5 and 50 km diameter. Many MANs cover an area the size of a city, although in some cases
MANs may be as small as a group of buildings.
2. A MAN (like a WAN) is not generally owned by a single organization. The MAN, its
communications links and equipment are generally owned by either a consortium of users or by a
single network provider who sells the service to the users. This level of service provided to each user
must therefore be negotiated with the MAN operator, and some performance guarantees are normally
specified.
3. A MAN often acts as a high-speed network to allow sharing of regional resources (similar to a large
LAN). It is also frequently used to provide a shared connection to other networks using a link to a WAN. |
Newly analyzed tooth samples from the great apes of the Miocene indicate that the same dietary specialization that allowed the apes to move from Africa to Eurasia may have led to their extinction, according to results published May 21, 2014, in the open access journal PLOS ONE by Daniel DeMiguel from the Institut Català de Paleontologia Miquel Crusafont (Spain) and colleagues.
Apes expanded into Eurasia from Africa during the Miocene (14 to 7 million years ago) and evolved to survive in new habitats. Their diet closely relates to the environment in which they live, and each type of diet wears the teeth differently. To better understand the apes' diet during their evolution and expansion into new habitats, scientists analyzed the wear on 15 newly discovered upper and lower molars belonging to apes from five extinct taxa found in Spain, dating from the mid- to late Miocene (which overall comprise a time span between 12.3–12.2 and 9.7 Ma). They combined these analyses with previously collected data for other Western Eurasian apes, categorizing the wear on the teeth into one of three ape diets: hard-object feeders (e.g., hard fruits, seeds), mixed food feeders (e.g., fruit), and leaf feeders.
Previous data collected elsewhere in Europe and Turkey suggested that the great apes' diet evolved from hard-shelled fruits and seeds to leaves, but those findings only included samples from the early-Middle and Late Miocene, and lacked data from the epoch of highest hominoid diversity in Western Europe.
In their research, the scientists found that, in contrast with the diet of hard-shelled fruits and seeds at the beginning of the movement of great apes into Eurasia, soft and mixed fruit-eating coexisted with hard-object feeding in the Late Miocene, and a diet specializing in leaves did not evolve. The authors suggest that a progressive dietary diversification may have occurred due to competition and changes in the environment, but that this specialization may have ultimately led to their extinction when more drastic environmental changes took place.
Materials provided by PLOS. Note: Content may be edited for style and length.
There are a lot of interesting things to learn about blood. We grasped the meaning and parts of this body fluid in Science class and further learned what it looks like under the microscope in Biology class. Some of us have even tasted blood after getting wounded, and that too is a learning experience.
Blood is a body fluid that serves as a means to deliver important substances to and from the body. One of its functions is to carry nutrients and oxygen to various cells, as well as to transport wastes that the cells no longer need. Human beings need a normal amount of blood for optimal function. And speaking of blood levels, what comprises normal blood?
Blood makes up about 8% of an adult's body weight, and adults normally have about 5 liters of blood circulating in their system. Blood consists of plasma and formed elements (corpuscles): erythrocytes, popularly known as red blood cells (RBCs); leukocytes, or white blood cells (WBCs); and thrombocytes, or platelets.
The ability of blood to carry oxygen lies in hemoglobin, a protein present in every red blood cell. This protein circulates through the lungs and picks up oxygen to be utilized by various parts of the body, such as the cells and tissues. After oxygen has been distributed, the hemoglobin collects cellular waste in the form of carbon dioxide and brings it back to the lungs to pick up more oxygen, and the process starts all over again.
What does blood taste like? We often meet this question, as many people have tasted blood at some point in their lives. Blood tastes like iron because of the hemoglobin it contains, though others have claimed it tastes more like copper. Hemoglobin contains iron, and the amount of hemoglobin in the blood is measured by a laboratory test called a Complete Blood Count (CBC).
This laboratory exam usually does not need special preparations prior to the test. A small amount of blood is drawn from our vein and this is then submitted for laboratory analysis. The normal values for adult hemoglobin levels are as follows:
Male: 13.8- 17.2 gm/dL
Female: 12.1- 15.1 gm/dL
The result of this test is also one way to determine whether we have anemia. When hemoglobin values are below normal, this can indicate anemia due to insufficient iron in the diet, or it may point to further health problems. To be sure about our current health condition, it is best to consult a medical doctor so that any problems found can be treated promptly.
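As a rough illustration of how the reference ranges quoted above might be applied, here is a short sketch that flags a below-range hemoglobin result. The thresholds are simply the values listed above, and the function name is a made-up example; this is not a clinical tool.

def hemoglobin_below_normal(value_g_dl, sex):
    # Adult reference ranges quoted above, in g/dL (illustrative only, not medical advice).
    ranges = {"male": (13.8, 17.2), "female": (12.1, 15.1)}
    low, high = ranges[sex.lower()]
    return value_g_dl < low

print(hemoglobin_below_normal(11.5, "female"))  # True -> below range, worth discussing with a doctor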
For health professionals: West Nile virus
Get detailed information on West Nile virus, its diagnosis, clinical assessment and prognosis.
What health professionals need to know about West Nile virus
West Nile virus belongs to a family of viruses known as Flaviviridae. It was first identified in Africa in 1937. In Canada, West Nile virus disease is nationally notifiable and provincially and territorially reportable.
Health care providers are encouraged to monitor their patients for symptoms of the virus and request laboratory tests. All probable and confirmed cases of West Nile virus must be reported to local and provincial or territorial health authorities.
West Nile virus disease should be considered in any patient with:
- febrile or acute neurological illness
- recent exposure to:
- blood transfusion
- organ transplantation
This is especially true during the summer months.
The diagnosis should also be considered in any infant born to a mother infected with the virus during pregnancy or while breastfeeding.
West Nile virus should be considered in the differential diagnosis when the following diseases are suspected:
- encephalitis and aseptic meningitis, such as:
- herpes simplex virus
- other arboviruses, such as:
- La Crosse
- St. Louis encephalitis
- Eastern equine encephalitis
- Powassan virus
West Nile virus neurological syndrome
West Nile virus meningitis is clinically indistinguishable from viral meningitis.
West Nile virus encephalitis is a more severe clinical syndrome that usually manifests with:
- fever and altered mental status
- focal neurological deficits
- movement disorders, such as tremor or parkinsonism
This manifestation is more commonly seen in:
- older individuals (particularly those over the age of 50)
- immunocompromised persons
West Nile virus acute flaccid paralysis is usually clinically and pathologically identical to poliovirus-associated poliomyelitis, with:
- damage of anterior horn cells
- possible progression to respiratory paralysis requiring mechanical ventilation
West Nile virus poliomyelitis often presents as isolated limb paresis or paralysis. It can occur without fever or apparent viral prodrome.
Rarely, the following symptoms have been described in patients with West Nile virus disease:
- cardiac dysrhythmias
- optic neuritis
Patients have also been described as having:
- kidney disease
West Nile virus disease during pregnancy and in children
Most women known to have been infected with West Nile virus during pregnancy have delivered infants without evidence of infection or clinical abnormalities.
Most children with symptomatic West Nile virus infection present with fever.
Of those who develop West Nile virus Neurological Syndrome (or neuroinvasive disease), it most frequently manifests as meningitis. However, children with West Nile virus have been described as having:
- severe and fatal encephalitis
Similar to adults, immunocompromised children may be more susceptible to more severe illness.
Routine clinical laboratory studies are generally nonspecific. In patients with West Nile virus Neurological Syndrome, examination of cerebrospinal fluid generally shows lymphocytic pleocytosis. However, neutrophils may predominate early in the course of illness.
Brain magnetic resonance imaging is frequently normal, but signal abnormalities may be seen in patients with encephalitis. These abnormalities are found in the:
- basal ganglia
In patients with poliomyelitis, the abnormalities are found in the anterior horn cells of the spinal cord.
Laboratory diagnosis is generally accomplished by testing of serum or cerebrospinal fluid to detect West Nile virus-specific IgM antibodies.
West Nile virus-specific IgM antibodies are usually detectable 3 to 8 days after onset of illness. They usually persist for 30 to 90 days, but longer persistence has been documented. Therefore, positive IgM antibodies occasionally may reflect a past infection.
If serum is collected within 8 days of illness onset, the absence of detectable virus-specific IgM does not rule out West Nile virus infection. The test may need to be repeated on a later sample.
Most patients with mild symptoms of West Nile virus Non-Neurological Syndrome (or non-neuroinvasive West Nile virus disease) recover completely.
Some symptoms, however, can linger for weeks or months.
Patients who recover from West Nile viral encephalitis or poliomyelitis often have residual neurologic deficits.
Among patients with West Nile virus Neurological Syndrome, the overall case-fatality ratio is approximately 10%. It is significantly higher for patients with West Nile viral encephalitis and poliomyelitis than for West Nile viral meningitis.
For more information
- West Nile virus: Pathogen Safety Data Sheet – Infectious substances
- Canada Communicable Disease Report
Hearts, like doors, will open with ease,
To very, very little keys,
And don't forget that two of these
Are "Thank you, sir" and "If you please!"
Similes and Metaphors
When I teach this poem, I talk about similes and metaphors. The first line relates hearts to doors using the word "like." This is a straightforward simile. A simile is a figure of speech that rhetorically transfers aspects of one word to another, using "like," "as," or another similar word. A metaphor, like a simile, compares or relates unlike words, but it doesn't necessarily utilize a comparing word such as "like." You could say that all similes are metaphors, but not all metaphors are similes.
The other metaphor in "Hearts Are Like Doors" is harder to pick out. "Thank you, sir" and "If you please" are two of the little keys that can open a heart.
Young children can learn about similes and metaphors, but have a hard time using them skillfully at first. My little guy started with "That wall is white like this table." Not bad, but few people reading this sentence know how white our table or wall is.
To help your child understand these figures of speech, try starting a simile and asking your child to finish it: "As cold as ___." "As fast as ___." "The clouds are like ___." If you need some inspiration, here is a pdf simile worksheet. You have to register to get rid of the nag screen, but you can see enough to get some ideas.
Other Teaching Points
- Manners: Brainstorm other polite "keys" that can open people's hearts.
- Punctuation and Phrasing: Observe the punctuation provided in the poem. "To very (pause) very little keys." There is no punctuation at the end of the third line. The "these" rhyme is enough to indicate the end of the line. There is no need to pause.
- Bible Connection: In Revelation 3:20, Jesus says, "Behold, I stand at the door and knock. If anyone hears my voice and opens the door, I will come in to him and eat with him, and he with me." The door in this passage is often thought of as a person's heart door. |
When the diameter of a piece changes uniformly from one end to the other, the piece is said to be tapered. Taper turning as a machining operation is the gradual reduction in diameter from one part of a cylindrical workpiece to another part. Tapers can be either external or internal. If a workpiece is tapered on the outside, it has an external taper; if it is tapered on the inside, it has an internal taper. There are three basic methods of turning tapers with a lathe. Depending on the degree, length, location of the taper (internal or external), and the number of pieces to be done, the operator will either use the compound rest, offset the tailstock, or use the taper attachment. With any of these methods the cutting edge of the tool bit must be set exactly on center with the axis of the workpiece or the work will not be truly conical and the rate of taper will vary with each cut.
The compound rest is favorable for turning or boring short, steep tapers, but it can also be used for longer, gradual tapers providing the length of taper does not exceed the distance the compound rest will move upon its slide. This method can be used with a high degree of accuracy, but is somewhat limited due to lack of automatic feed and the length of taper being restricted to the movement of the slide.
The compound rest base is graduated in degrees and can be set at the required angle for taper turning or boring. With this method, it is necessary to know the included angle of the taper to be machined. The angle of the taper with the centerline is one-half the included angle and will be the angle the compound rest is set for. For example, to true up a lathe center which has an included angle of 60°, the compound rest would be set at 30° from parallel to the ways Figure 1.
If there is no degree of angle given for a particular job, then calculate the compound rest setting by finding the taper per inch and then finding the angle whose tangent is one-half the taper per inch (this angle is the compound rest setting).
For example, the compound rest setting for the workpiece shown in Figure 2 would be calculated in the following manner:
TPI = (D – d)/L and angle = arctan(TPI/2)
Where TPI = taper per inch
D = large diameter,
d = small diameter,
L = length of taper
angle = compound rest setting
The problem is actually worked out by substituting numerical values for the letter variables:
TPI = (1.000 – 0.375)/0.750 = 0.833
Apply the formula to find the angle by substituting the numerical values for the letter variables:
angle = arctan(TPI/2)
angle = arctan(0.833/2) = arctan(0.41650)
Using the trig charts, the angle whose tangent is 0.41650 is found to be 22°37′. This angle is referred to as 22 degrees and 37 minutes.
To machine the taper shown in Figure 2, the compound rest will be set at 22°37′. Since the base of the compound rest is not calibrated in minutes, the operator will set the base to an approximate degree reading, make trial cuts, take measurements, and readjust as necessary to obtain the desired angle of taper. The included angle of the workpiece is double the compound rest setting; in this case, doubling 22°37′ gives an included angle of 45°14′.
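For readers who want to check this arithmetic numerically, the short sketch below reproduces the calculation just shown (taper per inch, then the compound rest angle). It is only an illustrative check of the formula, not part of the original procedure.

import math

def compound_rest_setting(D, d, taper_length):
    """Return (taper per inch, compound rest setting in degrees)."""
    tpi = (D - d) / taper_length              # taper per inch
    angle = math.degrees(math.atan(tpi / 2))  # angle whose tangent is TPI/2
    return tpi, angle

tpi, angle = compound_rest_setting(1.000, 0.375, 0.750)
print(round(tpi, 3), round(angle, 2))  # 0.833 and about 22.62 degrees (22 deg 37 min)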
To machine a taper by this method, the tool bit is set on center with the workpiece axis. Turn the compound rest feed handle in a counterclockwise direction to move the compound rest near its rear limit of travel to assure sufficient traverse to complete the taper. Bring the tool bit into position with the workpiece by traversing and cross-feeding the carriage. Lock the carriage to the lathe bed when the tool bit is in position. Cut from right to left, adjusting the depth of cut by moving the cross feed handle and reading the calibrated collar located on the cross feed handle. Feed the tool bit by hand, turning the compound rest feed handle in a clockwise direction.
Offsetting the Tailstock
The oldest and probably most used method of taper turning is the offset tailstock method. The tailstock is made in two pieces: the lower piece is fitted to the bed, while the upper part can be adjusted laterally to a given offset by use of adjusting screws and lineup marks Figure 3.
Since the workpiece is mounted between centers, this method of taper turning can only be used for external tapers. The length of the taper is from headstock center to tailstock center, which allows for longer tapers than can be machined using the compound rest or taper attachment methods.
The tool bit travels along a line which is parallel with the ways of the lathe. When the lathe centers are aligned and the workpiece is machined between these centers, the diameter will remain constant from one end of the piece to the other. If the tailstock is offset, as shown in Figure 4, the centerline of the workpiece is no longer parallel with the ways; however, the tool bit continues its parallel movement with the ways, resulting in a tapered workpiece. The tailstock may be offset either toward or away from the operator. When the offset is toward the operator, the small end of the workpiece will be at the tailstock with the diameter increasing toward the headstock end.
The offset tailstock method is applicable only to comparatively gradual tapers because the lathe centers, being out of alignment, do not have full bearing on the workpiece. Center holes are likely to wear out of their true positions if the lathe centers are offset too far, causing poor results and possible damage to centers.
The most difficult operation in taper turning by the offset tailstock method is determining the proper distance the tailstock should be moved over to obtain a given taper. Two factors affect the amount the tailstock is offset: the taper desired and the length of the workpiece. If the offset remains constant, workpieces of different lengths, or with different depth center holes, will be machined with different tapers Figure 5.
The formula for calculating the tailstock offset when the taper is given in taper inches per foot (tpf) is as follows:
Offset = TPF x L/24
Where: Offset = tailstock offset (in inches)
TPF = taper (in inches per foot)
L = length of the workpiece (in inches), measured along its axis
For example, the amount of offset required to machine a bar 42 inches (3.5 feet) long with a taper of 1/2 inch per foot is calculated as follows:
OFFSET = TPF x L/24
OFFSET = 1/2 X 42/24
OFFSET = 21/24
OFFSET = 0.875 inch
Therefore, the tailstock should be offset 0.875 inch to machine the required taper. The formula for calculating the tailstock offset when the taper is given in taper per inch (TPI) is as follows:
OFFSET = TPI X L/2
Where OFFSET = tailstock offset
TPI = taper per inch
L = length of the workpiece in inches
For example, the amount of offset required to machine a bar 42 inches long with a taper of 0.0416 TPI is calculated as follows:
OFFSET = TPI X L/2
OFFSET = 0.0416 x 42/2
OFFSET = 1.7472/2 or rounded up 1.75/2
OFFSET = .875 inch
Therefore, the tailstock should be offset 0.875 inch to machine the required taper.
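A small numerical sketch of these two offset formulas is given below; the function names are illustrative only, and the figures simply reproduce the worked examples above.

def offset_from_tpf(tpf, workpiece_length_in):
    # Offset = TPF x L / 24, with L the workpiece length in inches.
    return tpf * workpiece_length_in / 24

def offset_from_tpi(tpi, workpiece_length_in):
    # Offset = TPI x L / 2, with L the workpiece length in inches.
    return tpi * workpiece_length_in / 2

print(offset_from_tpf(0.5, 42))               # 0.875 inch
print(round(offset_from_tpi(0.0416, 42), 3))  # about 0.874 inch (0.875 after rounding as in the text)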
If the workpiece has a short taper in any part of its length and the TPI or TPF is not given, use the following formula:
OFFSET = L X (D – d)/(2 X L1)
D = Diameter of large end
d = Diameter of small end
L = Total length of workpiece (in inches)
L1 = Length of taper
For example, the amount of tailstock offset required to machine a bar 36 inches (3 feet) in length for a distance of 18 inches (1.5 feet) when the large diameter is 1 3/4 (1 .750) inches and the small diameter is 1 1/2 (1.5) inches is calculated as follows:
OFFSET = L X (D-d)/2XL1
OFFSET = 36 X (1.750 – 1.5)/(2 X 18)
OFFSET = 9/36
OFFSET = 0.25 inch
Therefore, the tailstock would be offset (toward the operator) 0.25 inch to machine the required taper.
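The same calculation can be expressed as a short illustrative sketch (the function name is hypothetical):

def offset_short_taper(D, d, workpiece_length_in, taper_length_in):
    # OFFSET = L x (D - d) / (2 x L1)
    return workpiece_length_in * (D - d) / (2 * taper_length_in)

print(offset_short_taper(1.750, 1.5, 36, 18))  # 0.25 inch, toward the operator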
Metric tapers can also be calculated for taper turning by using the offset tailstock method. Metric tapers are expressed as a ratio of 1 mm per unit of length. Figure 6 shows how the work would taper 1 mm in a distance of 20 mm. This taper would be given as a ratio of 1:20 and marked that way on the drawing; at one unit length (20 mm) from the small diameter (d), the large diameter (D) will be 1 mm greater (d + 1). Refer to the following formula for calculating the dimensions of a metric taper. If the small diameter (d), the unit length of taper (k), and the total length of taper (l) are known, then the large diameter (D) may be calculated. The large diameter (D) will be equal to the small diameter plus the amount of taper. The amount of taper for the unit length (k) is (d + 1) – (d), or 1 mm. Therefore, the amount of taper per millimeter of length is 1/k, and the total amount of taper is the taper per millimeter (1/k) multiplied by the total length of taper (l).
Total Taper = l/k
D = d + total amount of taper
For example, to calculate the large diameter D for a 1:30 taper having a small diameter of 10 mm and a length of 60 mm, do the following:
Since the taper is the ratio 1:30, then k = 30, since 30 is the unit of length.
D = d + (l/k)
D = 10 + (60/30) = 12 mm
Tailstock offset is calculated as follows:
Tailstock Offset = [(D – d)/(2 X l)] X L
D = large diameter
d = small diameter
l = length of taper
L = length of the workpiece
Thus, to determine the tailstock offset in millimeters for the taper in Figure 7, substitute the numbers and solve for the offset. Calculate the tailstock offset required to turn a 1:50 taper 200 mm long on a workpiece 800 mm long. The small diameter of the tapered section is 49 mm.
D = d + (l/k)
D = 49 + (200/50) = 53 mm
Tailstock offset = [(53 – 49)/(2 X 200)] X 800 = 8 mm
The tailstock would be moved toward the operator 8 mm.
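A compact numeric check of the metric example above (an illustrative sketch only; the function name is made up):

def metric_taper_offset(d, k, taper_length_mm, workpiece_length_mm):
    # D = d + l/k, then offset = (D - d)/(2 x l) x L
    D = d + taper_length_mm / k
    offset = (D - d) / (2 * taper_length_mm) * workpiece_length_mm
    return D, offset

print(metric_taper_offset(49, 50, 200, 800))  # (53.0, 8.0) -> large diameter 53 mm, offset 8 mm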
Another important consideration in calculating offset is the distance the lathe centers enter the workpiece. The length of the workpiece (L) should be considered as the distance between the points of the centers for all offset computations.
Therefore, if the centers enter the workpiece 1/8 inch on each end and the length of the workpiece is 18 inches, subtract 1/4 inch from 18 inches and compute the tailstock offset using 17 3/4 inches as the workpiece length (L).
The amount of taper to be cut will govern the distance the top of the tailstock is offset from the centerline of the lathe. The tailstock is adjusted by loosening the clamp nuts, shifting the upper half of the tailstock with the adjusting screws, and then tightening them in place.
There are several methods the operator may use to measure the distance the tailstock has been offset depending upon the accuracy desired Figure 8.
One method is to gage the distance the lineup marks on the rear of the tailstock have moved out of alignment. This can be done by using a 6-inch rule placed near the lineup marks or by transferring the distance between the marks to the rule’s surface using a pair of dividers.
Another common method uses a rule to check the amount of offset when the tailstock is brought close to the headstock.
Where accuracy is required, the amount of offset may be measured by means of the graduated collar on the cross feed screw. First compute the amount of offset; next, set the tool holder in the tool post so the butt end of the holder faces the tailstock spindle. Using the cross feed, run the tool holder in by hand until the butt end touches the tailstock spindle. The pressure should be just enough to hold a slip of paper placed between the tool holder and the spindle. Next, move the cross slide to bring the tool holder toward you to remove the backlash. The reading on the cross feed micrometer collar may be recorded, or the graduated collar on the cross feed screw may be set at zero. Using either the recorded reading or the zero setting for a starting point, bring the cross slide toward you the distance computed by the offset. Loosen and offset the tailstock until the slip of paper drags when pulled between the tool holder and the spindle. Clamp the tailstock to the lathe bed.
Another and possibly the most precise method of measuring the offset is to use a dial indicator. The indicator is set on the center of the tailstock spindle while the centers are still aligned. A slight loading of the indicator is advised since the first 0.010 or 0.020 inches of movement of the indicator may be inaccurate due to mechanism wear causing fluctuating readings. Load the dial indicator as follows: set the bezel to zero and move the tailstock toward the operator the calculated amount. Then clamp the tailstock to the way.
Whichever method is used to offset the tailstock, the offset must still be checked before starting to cut. Set the dial indicator in the tool post with its spindle just barely touching the far right side of the workpiece. Then, move the carriage toward the headstock exactly 1 inch and take the reading from the dial indicator. One inch is easily accomplished using the thread chasing dial. It is 1 inch from one number to another.
Alternatively, 1 inch can be drawn out on the workpiece. The dial indicator will indicate the taper for that 1 inch and, if needed, the tailstock can be adjusted as needed to the precise taper desired. If this method of checking the taper is not used, then an extensive trial and error method is necessary.
To cut the taper, start the rough turning at the end which will be the small diameter and feed longitudinally toward the large end Figure 4. The tailstock is offset toward the operator and the feed will be from right to left. The tool bit, a right-hand turning tool bit or a round-nose turning tool bit, will have its cutting edge set exactly on the horizontal centerline of the workpiece, not above center as with straight turning.
The taper attachment Figure 9 has many features of special value, among which are the following:
- The lathe centers remain in alignment and the center holes in the work are not distorted.
- The alignment of the lathe need not be disturbed, thus saving considerable time and effort.
- Taper boring can be accomplished as easily as taper turning.
- A much wider range is possible than by the offset method. For example, to machine a 3/4-inch-per-foot taper on the end of a bar 4 feet long would require an offset of 1 1/2 inches, which is beyond the capabilities of a regular lathe but can be accomplished by use of the taper attachment.
Some engine lathes are equipped with a taper attachment as standard equipment and most lathe manufacturers have a taper attachment available. Taper turning with a taper attachment, although generally limited to a taper of 3 inches per foot and to a set length of 12 to 24 inches, affords the most accurate means for turning or boring tapers. The taper can be set directly on the taper attachment in inches per foot; on some attachments, the taper can be set in degrees as well.
Ordinarily, when the lathe centers are in line, the work is turned straight, because as the carriage feeds along, the tool is always the same distance from the centerline. The purpose of the taper attachment is to make it possible to keep the lathe centers in line, but by freeing the cross slide and then guiding it (and the tool bit) gradually away from the centerline, a taper can be cut or, by guiding it gradually nearer the centerline (Figure 10), a taper hole can be bored.
A plain taper attachment for the lathe is illustrated in Figure 9. A bed bracket attaches to the lathe bed and keeps the angle plate from moving to the left or the right. The carriage bracket moves along the underside of the angle plate in a dovetail and keeps the angle plate from moving in or out on the bed bracket. The taper to be cut is set by placing the guide bar, which clamps to the angle plate, at an angle to the ways of the lathe bed. Graduations on one or both ends of the guide bar are used to make this adjustment. A sliding block which rides on a dovetail on the upper surface of the guide bar is secured during the machining operation to the cross slide bar of the carriage, with the cross feed screw of the carriage being disconnected. Therefore, as the carriage is traversed during the feeding operation, the cross slide bar follows the guide bar, moving at the predetermined angle from the ways of the bed to cut the taper. It is not necessary to remove the taper attachment when straight turning is desired. The guide bar can be set parallel to the ways, or the clamp handle can be released permitting the sliding block to move without affecting the cross slide bar, and the cross feed screw can be reengaged to permit power cross feed and control of the cross slide from the apron of the carriage.
Modern lathes use a telescopic taper attachment. This attachment allows for using the cross feed, and set up is a bit faster than using a standard taper attachment. To use the telescopic attachment, first set the tool bit for the required diameter of the work and engage the attachment by tightening the binding screws, the location and number of which depend upon the design of the attachment. The purpose of the binding screws is to bind the cross slide so it may be moved only by turning the cross feed handle, or, when loosened, to free the cross slide for use with the taper attachment. To change back to straight turning with the telescopic attachment, it is necessary only to loosen the binding screws.
When cutting a taper using the taper attachment, the direction of feed should be from the intended small diameter toward the intended large diameter. Cutting in this manner, the depth of cut will decrease as the tool bit passes along the workpiece surface and will assist the operator in preventing possible damage to the tool bit, workpiece, and lathe by forcing too deep a cut.
The length of the taper the guide bar will allow is usually not over 12 to 24 inches, depending on the size of the lathe. It is possible to machine a taper longer than the guide bar allows by moving the attachment after a portion of the desired taper length has been machined; then the remainder of the taper can be cut. However, this operation requires experience.
If a plain standard taper attachment is being used, remove the binding screw in the cross slide and set the compound rest perpendicular to the ways. Use the compound rest graduated collar for depth adjustments.
When using the taper attachment, there may be a certain amount of “lost motion” (backlash) which must be eliminated or serious problems will result. In every slide and every freely revolving screw there is a certain amount of lost motion which is very noticeable if the parts are worn. Care must be taken to remove lost motion before proceeding to cut or the workpiece will be turned or bored straight for a short distance before the taper attachment begins to work. To take up lost motion when turning tapers, run the carriage back toward the dead center as far as possible, then feed forward by hand to the end of the workpiece where the power feed is engaged to finish the cut. This procedure must be repeated for every cut.
The best way to bore a taper with a lathe is to use the taper attachment. Backlash must be removed when tapers are being bored with the taper attachment, otherwise the hole will be bored straight for a distance before the taper starts. Two important factors to consider: the boring tool must be set exactly on center with the workpiece axis, and it must be small enough in size to pass through the hole without rubbing at the small diameter. A violation of either of these factors will result in a poorly formed, inaccurate taper or damage to the tool and workpiece. The clearance of the cutter bit shank and boring tool bar must be determined for the smaller diameter of the taper. Taper boring is accomplished in the same manner as taper turning.
To set up the taper attachment for turning a taper, the proper TPF must be calculated and the taper attachment set-over must be checked with a dial indicator prior to cutting. Calculate the taper per foot by using the formula:
TPF = [(D – d)/L] x 12
TPF = taper per foot,
D = large diameter (in inches),
d = small diameter (in inches),
L = length of taper
After the TPF is determined, the approximate angle can be set on the graduated TPF scale of the taper attachment. Use a dial indicator and a test bar to set up for the exact taper. Check the taper in the same manner as cutting the taper by allowing for backlash and moving the dial indicator along the test bar from the tailstock end to the headstock end. Check the TPI by using the thread-chasing dial, or using layout lines of 1-inch size, and multiply by 12 to check the TPF. Make any adjustments needed, set up the work to be tapered, and take a trial cut. After checking the trial cut and making final adjustments, continue to cut the taper to required dimensions as in straight turning. Some lathes are set up in metric measurement instead of inch measurement. The taper attachment has a scale graduated in degrees, and the guide bar can be set over for the angle of the desired taper. If the angle of the taper is not given, use the following formula to determine the amount of the guide bar set over:
Guide Bar Set Over (in millimeters) = [(D – d)/2] X (L/l)
D = large diameter of taper (mm)
d = small diameter of taper (mm)
l = length of taper (mm)
L = length of guide bar (mm)
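As an illustrative sketch of these two relations (the example numbers, including the 500 mm guide bar, are assumed for demonstration only and are not taken from the text):

def taper_per_foot(D, d, taper_length_in):
    # TPF = [(D - d)/L] x 12, all dimensions in inches.
    return (D - d) / taper_length_in * 12

def guide_bar_set_over(D, d, taper_length_mm, guide_bar_length_mm):
    # Set over (mm) = [(D - d)/2] x (guide bar length / taper length).
    return (D - d) / 2 * (guide_bar_length_mm / taper_length_mm)

print(taper_per_foot(1.500, 1.250, 4.0))     # 0.75 inch per foot (assumed dimensions)
print(guide_bar_set_over(53, 49, 200, 500))  # 5.0 mm for an assumed 500 mm guide bar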
Reference lines must be marked on the guide bar an equal distance from the center for best results.
A metric dial indicator can be used to measure the guide bar set over, or the values can be changed to inch values and an inch dial indicator used.
Checking Tapers for Accuracy
Tapers must be checked for uniformity after cutting a trial cut. Lay a good straight edge along the length of the taper and look for any deviation of the angle or surface. Deviation is caused by backlash or a lathe with loose or worn parts. A bored taper may be checked with a plug gage Figure 11 by marking the gage with chalk or Prussian blue pigment. Insert the gage into the taper and turn it one revolution. If the marking on the gage has been rubbed evenly, the angle of taper is correct. The angle of taper must be increased when there is not enough contact at the small end of the plug gage, and it must be decreased when there is not enough contact at the large end of the gage. After the correct taper has been obtained but the gage does not enter the workpiece far enough, additional cuts must be taken to increase the diameter of the bore.
An external taper may be checked with a ring gage (Figure 11). This is achieved by the same method as for checking internal tapers, except that the workpiece will be marked with the chalk or Prussian blue pigment rather than the gage. Also, the angle of taper must be decreased when there is not enough contact at the small end of the ring gage and it must be increased when there is not enough contact at the large end of the gage. If no gage is available, the workpiece should be tested in the hole it is to fit. When even contact has been obtained, but the tapered portion does not enter the gage or hole far enough, the diameter of the piece is too large and must be decreased by additional depth of cut.
Another good method of checking external tapers is to scribe lines on the workpiece 1 inch apart Figure 12; then, take measurements with an outside micrometer. Subtracting the small reading from the large reading will give the taper per inch.
Duplicating a Tapered Piece
When the taper on a piece of work is to be duplicated and the original piece is available, it may be placed between centers on the lathe and checked with a dial indicator mounted in the tool post. When the setting is correct, the dial indicator reading will remain constant when moved along the length of taper.
This same method can be used on workpieces without centers provided one end of the workpiece can be mounted and held securely on center in the headstock of the lathe. For example, a lathe center could be mounted in the lathe spindle by use of the spindle sleeve, or a partially tapered workpiece could be held by the nontapered portion mounted in a collet or a chuck. Using either of these two methods of holding the work, the operator could use only the compound rest or the taper attachment for determining and machining the tapers
There are various standard tapers in commercial use, the most common ones being the Morse tapers, the Brown and Sharpe tapers, the American Standard Machine tapers, the Jarno tapers, and the Standard taper pins.
Morse tapers are used on a variety of tool shanks, and exclusively on the shanks of twist drills. The taper for different numbers of Morse tapers is slightly different, but is approximately 5/8 inch per foot in most cases. Dimensions for Morse tapers are given in Table 1.
Brown and Sharpe tapers are used for taper shanks on tools such as end mills and reamers. The taper is approximately ½ inch per foot for all sizes except for taper No 10, where the taper is 0.5161 inch per foot.
The American Standard machine tapers are composed of a self-holding series and a steep taper series. The self-holding taper series consists of 22 sizes which are given in Table 2. The name “self-holding” has been applied where the angle of the taper is only 2° or 3° and the shank of the tool is so firmly seated in its socket that there is considerable frictional resistance to any force tending to turn or rotate the tool in the holder. The self-holding tapers are composed of selected tapers from the Morse, the Brown and Sharpe, and the ¾-inch-per-foot machine taper series. The smaller sizes of self-holding tapered shanks are provided with a tang to drive the cutting tool. Larger sizes employ a tang drive with the shank held by a key, or a key drive with the shank held with a draw bolt. The steep machine tapers consist of a preferred series and an intermediate series as given in Table 3. A steep taper is defined as a taper having an angle large enough to ensure the easy or self-releasing feature. Steep tapers have a 3½-inch taper per foot and are used mainly for aligning milling machine arbors and spindles, and on some lathe spindles and their accessories.
The Jarno taper is based on such simple formulas that practically no calculations are required when the number of taper is known. The taper per foot of all Jarno tapers is 0.600 inch per foot. The diameter at the large end is as many eighths, the diameter at the small end is as many tenths, and the length as many half-inches as indicated by the number of the taper. For example: A No 7 Jarno taper is 7/8 inch in diameter at the large end; 7/10 or 0.7 inch in diameter at the small end; and 7/2, or 3 ½ inches long. Therefore, formulas for these dimensions would read:
Diameter at large end = No. of taper/8
Diameter at small end = No. of taper/10
Length of taper = No. of taper/2
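The Jarno rule is easy to verify with a few lines of code (an illustrative sketch; the function name is made up):

def jarno_dimensions(taper_no):
    # Large-end diameter in eighths, small-end diameter in tenths, length in half-inches.
    large = taper_no / 8
    small = taper_no / 10
    length = taper_no / 2
    return large, small, length

print(jarno_dimensions(7))  # (0.875, 0.7, 3.5) -> matches the No. 7 example above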
The Jarno taper is used on various machine tools, especially profiling machines and die-sinking machines. It has also been used for the headstock and tailstock spindles on some lathes.
The Standard taper pins are used for positioning and holding parts together and have a ¼-inch taper per foot. Standard sizes in these pins range from No 7/0 to No 10 and are given in Table 4. The tapered holes used in conjunction with the tapered pins utilize the processes of step-drilling and taper reaming.
To preserve the accuracy and efficiency of tapers (shanks and holes), they must be kept free from dirt, chips, nicks, or burrs. The most important thing in regard to tapers is to keep them clean. The next most important thing is to remove all oil by wiping the tapered surfaces with a soft, dry cloth before use, because an oily taper will not hold. |
The terms micro and macro were first used by the Norwegian economist Ragnar Frisch in the 1920s to represent the "level of aggregation" of economic variables analyzed in an economic problem. The word micro means "small", while macro stands for "large".
In microeconomics, the variables under discussion are "not aggregated" and pertain to individual economic units or their "small" groups. In contrast, in macroeconomics, the variables under discussion are "aggregated" and relate to "large" groups of economic units.
However, the level of aggregation can vary from case to case and depends upon the nature of the problem at hand and the purpose of analyzing it. The aggregation may extend to the entire economy, particularly when we are studying issues such as inflation, cyclical fluctuations in national income, the balance of payments and so on.
In microeconomics, individual economic units are considered components of a big organism, namely, the economy as a whole. In it, we study the behavior of individual economic units or their small groups in their alternative capacities, such as consumers, producers, investors, suppliers of labor and other inputs, and the like. For example, we study the way in which a typical consumer decides whether or not to buy a consumption good, or the manner in which to divide his total expenditure among alternative consumption goods. Similarly, we take a single good and study the determination of its demand in the market. In the same manner, we study the behavior pattern of a typical firm, or an industry, in response to alternative cost and demand conditions faced by it.
In microeconomics, we adopt the approach of breaking a big problem into small parts and studying one bit at a time. This bit-by-bit approach makes the task of analysis simpler and easier, and it enables us to extend our findings to the study of the economy as a whole. In the absence of microeconomics, the analysis of simultaneous interactions involving a large number of variables and the responses of individual economic units becomes next to impossible. Therefore, conclusions drawn from microeconomics are suitably adjusted for use in the analysis of problems at the macro level.
It is noteworthy that the field of study of microeconomics is not a narrow one. Its coverage is quite wide and comprehensive. Some of its leading components are:
- The entire theory of consumer behavior;
- The theory of a firm;
- The theory of an industry;
- Theory of production;
- Theory of product pricing;
- Theory of pricing of factors of production;
- Theories of welfare of individuals as compared with each other; and so on. |
Location and General Description
A wide variety of substrates comprise this ecoregion, which runs along the central coast of eastern Australia, from near Sydney in New South Wales in the south to central Queensland and inland to Great Dividing Range. In coastal parts there are extensive sand deposits including high dunes and the great sandmass of Fraser Island. There are several major occurrences of Mesozoic sedimentary rocks, the most notable being the Sydney Basin in the south of the ecoregion. It is characterized by sandstones and shales, which were laid down by riverine sediments from the Late Permian to the Mid Triassic. Dissected plateaus are prominent. Much of the region is geologically complex with hills and ranges formed on acid to basic volcanics and metamorphic rocks, interspersed with well-developed stream valleys. Volcanic activity during the Tertiary resulted in some extensive areas of basalt. The Border Ranges which form the boundary between Queensland and New South Wales are remnants of two ancient shield volcanoes, which are 20.5 to 23.5 million years old. The erosion caldera of the Tweed River Volcano is one of the world’s largest. It is renowned for its size, its central mountain mass (Mt. Warning), and its erosion patterns, which have worn the caldera floor down to basement Mesozoic and Palaeozoic rocks (McDonald and Adams 1995, WCMC 1996). In southeast Queensland there are also localised remnant Tertiary surfaces with duricrust and laterite. An elevated area dominated by a granite batholith lies on the western edge of the ecoregion south of the Queensland-New South Wales border. It is known as the New England Tableland.
The climate in the Blue Mountains region near Sydney is warm-temperate, with summer maximum temperatures of 28°C in the lowlands, and winter minimum temperatures of 3°C recorded at approximately 1,000 m in elevation. In the central areas of the Blue Mountains, rainfall averages from 1,100 to 1,400 mm per year (Ingwersen 1994). Climate in the coastal regions is humid, with high rainfall (1,200 mm to 1,600 mm per annum). Rainfall decreases as one moves inland to the New England region, and Armidale receives approximately 800 mm of rain each year on average. Winters here are cold and wet, and higher elevations receive snowfall most years. Further north in the Border Ranges, monthly summer temperatures vary from 21.5°C maximum to 19.7°C minimum. Corresponding winter temperatures from Mount Tamborine in the Border Ranges vary from 17.8°C maximum to 12.3°C minimum. Throughout the ecoregion, rainfall is concentrated in the summer (McDonald and Adams 1995). Towards the north of the ecoregion rainfall is lower (750 mm to 1,100 mm per annum) and more seasonal.
The ecoregion contains a wide variety of vegetation types, as a result of the varied substrates, altitudinal gradients and microclimates. Temperate eucalypt forest dominates most of the region, with important rainforest communities found in the Border Ranges region as well as other parts of the region. Eucalypt communities along the coast are normally tall ‘wet’ forests, ranging from 30 percent to 70 percent closed canopy cover. Common species in wet eucalypt forests of southern Queensland and northern New South Wales include tallowwood (E. microcorys), blackbutt (E. pilularis), brush box (Lophostemon confertus), flooded gum (E. grandis), and Gympie messmate (E. cloeziana), which is restricted to southern Queensland. The complex understorey contains small broadleaved trees, vines, ferns and shrubs. Wet eucalypt forests are relatively restricted in southeast Queensland, with drier forms of eucalypt forest predominant. Major species in these forests include spotted gum (Corymbia citriodora), bloodwoods (C. trachyphloia, C. intermedia), white mahogany (Eucalyptus acmenoides) and ironbarks (E. siderophloia, E. crebra). Queensland blue gum (E. tereticornis) is predominant in alluvial areas away from the coast. In the Queensland part of the region a chain of topographic isolates supports taxa more characteristic of southern parts of the ecoregion (Nix 1993), while lower altitude areas contain many taxa at the northern and southern limits of their geographic distribution.
Further south in New South Wales, Sydney blue gum (E. saligna), ironbark (E. paniculata), and blackbutt are common (Ashton and Attiwill 1994). Thin-leaved stringybark (E. eugenioides) is also found in southern coastal regions of this ecoregion. The New England Tableland region is dominated by ash, stringybark, peppermint, and box species, including E. andrewsii, E. caliginosa, E. nova-anglica, E. melliodora, and E. blakleyi. Further west, white box (E. albens) woodlands dominate. Rainforest vegetation is normally found in sheltered, well-watered sites with good soils (often derived from basic igneous rocks). The transition from eucalypt to rainforest vegetation is often complex, and ‘mixed’ eucalypt forests with rainforest elements in the understory may occur (Floyd 1990a).
The rainforest communities of this ecoregion have been extensively researched and described (Floyd 1990a, 1990b), and are included in the Central Eastern Rainforest Reserves World Heritage Site. Four distinct communities are found here: subtropical rainforest, dry rainforest, warm temperate rainforest, and cool temperate rainforest. Subtropical rainforest is the best developed community in New South Wales, growing in warm, fertile sites with rainfall greater than 1,300 mm per annum. Forest ranges from 30 m to 45 m in height, with two to three tree strata forming an uneven canopy. Emergent tree species include booyong (Argyrodendron trifoliolatum), black booyong (A. actinophyllum), figs (Ficus spp.), yellow carrabeen (Sloanea woollsi), and the red cedar (Toona ciliata), which is highly prized for its timber. Stranglers, palms, plank buttressing, woody vines, and large epiphytes are characteristic. A coastal variant of subtropical rainforest known as littoral rainforest is capable of withstanding high levels of airborne salt (Floyd 1990a). On the sandmass of Fraser Island littoral rainforest overtopped by brush box and satinay (Syncarpia hillii) grows in the swales of giant sand dunes.
Dry rainforest is found in sites with lower rainfall, ranging from 600 mm to 1,100 mm annual rainfall. Scattered emergents include hoop pine (Araucaria cunninghamii), lacebark tree (Brachychiton discolor), and crow’s ash (Flindersia australis). Woody vines and stranglers may be common, but there are no palms, large epiphytes, and plank butresses are uncommon. Sapindaceae, Euphorbiaceae, Rutaceae, and Myrtaceae are all well represented in dry rainforest. Dry rainforest was widespread in southeastern Queensland where it occupied about half a million hectares (Young and Dillewaard 1999). It has been extensively cleared for agriculture and hoop pine plantations.
Warm temperate rainforest is less diverse than dry or subtropical communities and grows on low-nutrient soils. It is largely restricted to the southern half of the ecoregion. This community grows in cool, moist areas where lichens and ground ferns are common. Typical tree species include coachwood (Ceratopetalum apetalum), sassafras (Doryphora sassafras), and lillypilly (Acmena smithii). Cool temperate rainforest is also found in areas with rainfall in excess of 1,750 mm per year and more fertile soil. Only several tree species are common here, including Eucryphia moorei and Antarctic beech (Nothofagus moorei), which can form extensive stands (Floyd 1990a).
Shrublands, shrubby woodlands (heaths), and associated sandplain vegetation are characteristic of coastal parts of the region. They are species-rich, with the families Epacridaceae, Myrtaceae, Rutaceae, Fabaceae, Proteaceae, and Cyperaceae well-represented. Banksia spp. and Eucalyptus racemosa form woodlands in places, and paper-barked tea-tree (Melaleuca quinquenervia) is present in swampy areas. The coastal heaths and low fertility substrates further inland, such as the Sydney sandstones and elevated areas of rhyolite and granite, share many genera and even some species.
This ecoregion contains two outstanding areas for plant endemism and diversity, the sandstone cliffs around Sydney (Ingwersen 1995) and the Border Ranges region (McDonald and Adams 1995). The Border Ranges harbor more than 1,200 vascular plants, reflecting the variety of local habitats and the refugia role this region likely played during the continental aridity of the late Tertiary and the climatic fluctuations of the Quaternary. The rainforest communities found in this ecoregion demonstrate floristic links to other locations: the cool temperate rainforest is allied to that found in Tasmania, the warm temperate rainforest has links to the North Island of New Zealand, and the subtropical and dry communities are also found further north in the Queensland Tropical Rainforest ecoregion (Floyd 1990a).
In the Border Ranges, approximately 140 dicotyledon genera are Gondwanan in origin, including rainforest genera (Nothofagus, Ceratopetalum, Akania) and non-rainforest genera such as Cassinia, Bauera, Hibbertia, and Leucopogon. More than 70 plant species are restricted to the Border Ranges region, and the area is also rich in mammals, reptiles, amphibians, birds, and invertebrates (McDonald and Adams 1995). The Border Ranges are the center of distribution for the pouched frog (Assa darlingtoni) and harbor a number of restricted-range birds, including the black-breasted buttonquail (Turnix melanogaster VU) (Hilton-Taylor 2000, Stattersfield et al. 1998). Rainforests outside of the Border Ranges include dry rainforest types which also contain many taxa with highly localised distributions. For example, the only known population of the southern Queensland dry rainforest species Alectryon ramiflorus consists of approximately 40 individuals (Barry 2000).
The Blue Mountains World Heritage Area contains 90 eucalypt taxa, or 13 percent of the global distribution. Nearly 130 nationally threatened plants are found in the Blue Mountains World Heritage Area, and 115 taxa are exclusively or predominantly found only within the World Heritage Area. Many of the rare and endemic plants have small ranges, restricted to specialized habitats such as clifftops and heathlands. Several relic taxa are represented (Wollemia, Microstrobos, Acrophyllum), including the recently discovered Wollemi pine (Wollemia nobilis). A wide variety of Australian fauna occurs here, although few species are endemic. The broad-headed snake (Hoplocephalus bungaroides VU) is an exception, largely restricted to the later, quartz-rich Hawkesbury sandstone. In total, over 60 reptiles, 65 mammals, and 275 birds have been recorded in the Blue Mountains. Among the birds, honeyeaters are especially well-represented, with 25 species found in the World Heritage Area.
The coastal sandplains and montane shrublands support a large number of taxa endemic to the region (McDonald and Elsol 1984). The sandmasses of Fraser Island and Cooloola in Queensland, known collectively as the Great Sandy region, are contained within a World Heritage Area in recognition of the area’s unique landscapes and biodiversity values (Commonwealth Department of the Arts, Sport, the Environment, Tourism and Territories 1991).
A number of globally threatened species inhabit this ecoregion. The avifauna includes red goshawk (Erythrotriorchis radiatus VU), swift parrot (Lathamus discolor EN), regent honeyeater (Xanthomyza phrygia EN), Albert's lyrebird (Menura alberti EN), and eastern bristlebird (Dasyornis brachypterus EN). The superb lyrebird (Menura novaehollandiae) also inhabits this ecoregion and may have drastically affected vegetation and erosion rates: it turns over an estimated 63,000 kg of debris per hectare each year looking for food or nest-mound building materials (WCMC 1998). Threatened mammals include the brush-tailed rock-wallaby (Petrogale penicillata VU) and the Hastings River mouse (Pseudomys oralis EN). Among the herpetofauna, the broad-headed snake (Hoplocephalus bungaroides VU) and the stuttering frog (Mixophyes balbus VU) are both present in this ecoregion.
When Europeans first arrived, the Border Ranges area contained one of the largest expanses of rainforest in Australia. The 750 km2 ‘Big Scrub’ comprised the largest stand of lowland subtropical rainforest in Australia and one of the biggest in the world. Today the Big Scrub has been reduced to mere fragments. Important timber species found in this ecoregion include red cedar, hoop pine, and white beech (Gmelina leichardtii). The Border Ranges are also vital for water catchment. Rainforests in this region are largely protected within the 3,700 km2 Central Eastern Rainforest Reserves World Heritage Site. The Blue Mountains area is also protected in a 2,500 km2 World Heritage Site. An extensive network of National and State Parks is spread throughout New South Wales and Queensland, although the representation of habitats varies throughout the ecoregion.
Sustainable logging continues in state-held eucalyptus forests and woodlands, with tallowwood, Sydney blue gum, spotted gum, blackbutt, and flooded gum harvested (McDonald and Adams 1995). Logging of eucalyptus forest in state forests in southeast Queensland is gradually being phased out. Eucalypt woodlands and dry forests have also been cleared for development or to enhance grazing. This ecoregion contains several large population centers, most notably Sydney and Brisbane.
Types and Severity of Threats
Major threatening processes are continuing clearing and fragmentation of native vegetation, introduced species, and altered fire regimes. Water pollution and schemes for water use are also threats. Coastal development in New South Wales has greatly intensified over the last 20 years, and nearly the entire coastline is inhabited. Coastal development in southeastern Queensland has continued at a similar pace, with all coastal lowland vegetation affected by rapid urban expansion (Glanznig 1995). One study found that a third of all bushland cover in coastal southeast Queensland had been lost, and predicted that if clearance continued unabated all vegetation would be gone by 2019 (Catterall and Kingston 1993 in Glanznig 1995).
Even within protected areas, there are a number of threats to native flora and fauna, including trampling by tourists, altered fire regimes, problems of sewage disposal, and the continued spread of weeds and exotic animals. The Central Eastern Rainforest Reserves comprises disjunct and fragmented sites, which presents management challenges. The Blue Mountains World Heritage Area actually includes an estimated 80,000 inhabitants, living in residential, tourist, and small farm development along the Great Western Highway which bisects the site. These inhabitants further contribute to erosion, waste disposal problems, the spread of exotic plants, and development pressures (WCMC 1998). Invasive plant species include privet (Ligustrum spp.), camphor laurel (Cinnamomum camphora), and Lantana camara (McDonald and Adams 1995).
Justification of Ecoregion Delineation
The Eastern Australian Temperate Forests ecoregion is a transition zone between temperate southeastern Australia and the tropical climate of north and northeastern Australia. It includes five full IBRAs: ‘South Eastern Queensland’, ‘New England Tableland’, ‘Nandewar’, ‘NSW North Coast’, and ‘Sydney Basin’ (Thackway and Cresswell 1995). This ecoregion includes the ‘Border Ranges’ Centre of Plant Diversity and part of the ‘Sydney Sandstone Region’ CPD (Ingwersen 1995, McDonald and Adams 1995).
Ashton, D.H. and P.M. Attiwill. 1994. Tall open-forests. Pages 157 – 196 in R.H. Groves, editor. Australian Vegetation. Cambridge University Press, Cambridge, United Kingdom.
Barry, S.J. 2000. Recovery plan for the endangered vascular plant Alectryon ramiflorus Reynolds. Queensland Environmental Protection Agency and the Natural Heritage Trust.
Commonwealth Department of the Arts, Sport, the Environment, Tourism and Territories. 1991. Nomination of Fraser Island and the Great Sandy Region by the Government of Australia for Inclusion in the World Heritage List. Canberra.
Floyd, A.G. 1990a. Australian Rainforests in New South Wales. Volume 1. Surrey Beatty & Sons, Chipping Norton, Australia.
Floyd, A.G. 1990b. Australian Rainforests in New South Wales. Volume 2. Surrey Beatty & Sons, Chipping Norton, Australia.
Glanznig, A. 1995. Native vegetation clearance, habitat loss, and biodiversity decline: an overview of recent native vegetation clearance in Australia and its implications for biodiversity. Biodiversity Series, Paper No.6. Biodiversity Unit, Department of the Environment, Sport, and Territories, Canberra, Australia.
Hilton-Taylor, C. 2000. The IUCN 2000 Red List of Threatened Species. IUCN, Gland, Switzerland and Cambridge, United Kingdom.
Ingwersen, F. 1995. Sydney Sandstone Region. Pages 490 – 494 in S.D. Davis, V.H. Heywood and A.C. Hamilton, editors. Centres of Plant Diversity. Volume 2. Asia, Australasia, and the Pacific. WWF/IUCN, IUCN Publications Unit, Cambridge, UK.
McDonald, W.J.F., and P. Adams. 1995. Border Ranges. Pages 462 – 466 in S.D. Davis, V.H. Heywood and A.C. Hamilton, editors. Centres of Plant Diversity. Volume 2. Asia, Australasia, and the Pacific. WWF/IUCN, IUCN Publications Unit, Cambridge, UK.
McDonald, W.J.F., and J.A. Elsol, 1984. Moreton Region Vegetation Map series, Summary report for Caloundra, Brisbane, Beenleigh, Murwillumbah sheets. Botany Branch, Queensland Department of Primary Industries.
Nix H.A. 1993. Bird distributions in relation to imperatives for habitat conservation in Queensland. Pages 12 – 21 in C.P. Catterall, P.V. Driscoll, K. Hulsman, D. Muir, A. Taplin, editors. Birds and their habitats. Conference Proceedings, Queensland Ornithological Society Inc., Brisbane.
Stattersfield, A.J., M.J. Crosby, A.J. Long, and D.C. Wege. 1998. Endemic Bird Areas of the World. Priorities for biodiversity conservation. BirdLife Conservation Series No. 7. BirdLife International, Cambridge, United Kingdom.
Thackway, R., and I.D. Cresswell. editors. 1995. An Interim Biogeographic Regionalisation for Australia: a framework for establishing the national system of reserves, Version 4.0. Australian Nature Conservation Agency, Canberra.
WCMC. 1996. Central Eastern Rainforest Reserves. In WCMC Database of Protected Areas. http://www.wcmc.org.uk/protected_areas/data/wh/cerr.html. viewed September 20, 2001.
WCMC. 1998. Blue Mountains National Park. In WCMC Database of Protected Areas. http://www.wcmc.org.uk/protected_areas/data/wh/blue_mountain.html. viewed September 20, 2001.
Young, P.A.R. and H.A. Dillewaard. 1999. Southeast Queensland. Pages 12/1-12/75 in P.S. Sattler and R.D. Williams, editors. The Conservation Status of Queensland's Bioregional Ecosystems. Environment Protection Agency, Brisbane.
Prepared by: Miranda Mockrin
Reviewed by: Peter Young |
Rickettsial infections are caused by a variety of bacteria from the genera Rickettsia, Orientia, Ehrlichia, Neorickettsia, Neoehrlichia, and Anaplasma. Rickettsia spp. are classically divided into the typhus group and the spotted fever group (SFG). Orientia spp. make up the scrub typhus group.
All pathogenic Salmonella species, when present in the gut, are engulfed by phagocytic cells, which then pass them through the mucosa and present them to the macrophages in the lamina propria. Nontyphoidal salmonellae are phagocytized throughout the distal ileum and colon.
In Australia, about 650 cases of typhus fever are reported each year, caused by the bacteria Rickettsia typhi or Rickettsia felis. Treatment of patients with possible rickettsioses should be started early and should never await confirmatory testing, which may take weeks when serology is used. Immediate empiric treatment with a tetracycline is recommended, most commonly doxycycline. Broad-spectrum antibiotics are not usually helpful. Chloramphenicol may be an alternative in some cases, but its use is associated with more deaths.
Water is essential to human health, but around the world our supplies of freshwater are increasingly threatened by pollution, overuse and climate change.
Problems with access are most severe in the developing world, where more than 5 million people perish every year from water-related diseases, and more than 1 billion people suffer without access to water for their basic needs. Even wealthy and relatively water-rich nations, like the United States, need to take action to ensure that their water supplies can meet looming threats.
Women and young girls gather around a newly built safe water point to collect water in a Kenyan village. The pumping station allows the women who operate it to avoid visiting water sources which may contain parasites. International efforts are trying to bring clean water to the nearly 1.1 billion people worldwide who lack it. Photo courtesy of WHO/TDR/Crump.
While we have the technology and a range of practical, effective approaches, substantial obstacles stand in the way of progress. International efforts to improve water access have been hobbled by a lack of funding and a focus on big, centralized infrastructure. Here in the United States, many key resources are already overstretched, and ongoing and future problems will make things worse.
With the importance of water both at home and abroad becoming clearer to U.S. policy-makers and political leaders, there is hope. But it cannot come fast enough to those who do not have clean water available to them.
How much is enough?
A seemingly simple water and human health question has actually engendered much debate: How much water is needed to sustain life?
The answer varies widely depending on climate and a person's activity level and metabolism, but for most people, the absolute minimum needed to stay healthy is around 3 liters per day, or just over three-quarters of a gallon. This is, however, a bare minimum; in a hot climate, for example, people exerting themselves could consume more than 20 liters per day. These estimates do not account for cooking, washing or sanitation, nor the many other things for which we use water, such as growing food or manufacturing products.
As Peter Gleick of the Pacific Institute details in his 1996 article Basic Water Requirements for Human Activities: Meeting Basic Needs, published in Water International, technology exists to meet sanitation needs with no water; but because many systems still use water, it is more realistic to add an additional 25 liters per person per day for direct sanitation needs. Other studies have added 25 more liters per day for bathing and cooking.
Adding up all these estimates, we find a reasonable minimum amount of water to be 50 liters per person per day, or a little over 13 gallons. And this is similar to the figure used by the World Health Organization (WHO) for their Basic Water Requirement.
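A quick tally of the figures quoted above (a minimal sketch in Python; the component values are the article's own estimates, and the total simply rounds to the roughly 50-liter benchmark):

```python
# Rough per-person daily minimum, in liters, using the article's estimates.
drinking = 3          # bare survival minimum in a temperate climate
sanitation = 25       # allowance for water-based sanitation systems
bathing_cooking = 25  # additional allowance for bathing and cooking

basic_requirement = drinking + sanitation + bathing_cooking
print(f"Estimated basic water requirement: {basic_requirement} liters/person/day")
# -> 53 liters/person/day, close to the ~50-liter figure used by the WHO
```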
But of course, 50 liters per day is still only the absolute minimum needed for human health and well-being, and does not include water needed to grow food, produce energy, water landscaping or create goods. Thus, average water use in industrialized nations is far higher than this basic minimum.
In the United States, for instance, the average person uses about 380 liters per day for indoor residential use, although there is wide variation across the country. European nations tend to use less: people in the Netherlands, for example, use an average of just over 100 liters per day. Much of the variation in water use between urban areas of the United States, or between different industrial nations, boils down to how much water goes to landscaping, gardens and lawns, and to the efficiency of homes and businesses.
In the United States, per capita water use has been declining over the past few decades, as we continue to improve efficiency with more thrifty toilets and showers, front-loading washing machines and low-water gardens. But even though per capita use has fallen to a level not seen since the 1970s, national water efficiency and conservation efforts are unfocused and often not seen as important, especially when compared to energy.
Still, much of the water efficiency and conservation techniques and technology we have been using in the United States can be beneficial to developing nations struggling to meet their basic water needs. A simple example would be low-flush toilets and efficient showerheads. These approaches will not help the poorest people who live without plumbing, but in many areas, improving the efficiency of existing infrastructure can free up water and resources.
For those in rural areas of the developing world, technology like solar-powered water pumps, low-cost water filtration, water harvesting and simple waste disposal systems can work wonders. Organizations, including Water Partners International, Water Aid and the Pacific Institute-affiliated Water Words project, are trying to go one step further by helping affected communities develop the local know-how and resources to build and design systems tailored to their own needs, instead of just deploying systems designed elsewhere.
A global health crisis
The United States is lucky to be a relatively water-wealthy nation. Although some areas, such as the western states and the Southeast, have experienced severe droughts in the last few years, in general we are blessed with copious water resources aided by a vast network of dams, aqueducts and other infrastructure that brings this natural bounty into our homes and offices, factories and fields. Outbreaks of water-related disease are rare and our tap water quality is generally excellent and inexpensive.
But other nations, especially in the developing world, are not nearly as lucky or as well-equipped in terms of infrastructure or water resources. Current estimates by WHO find that roughly 1.1 billion people do not have access to clean water to meet their basic daily needs, and that 2.4 billion people don't have adequate sanitation. These conditions lead to at least 5 million deaths every year from water-related diseases and many millions of cases of sickness and disease.
By far, the biggest killer is diarrheal diseases, which kill about 2 million people a year, according to WHO estimates. These diseases hit young children the hardest, and are the product of drinking tainted water or not having enough water for proper hygiene. Parasite-related diseases and water-related diseases like malaria (see story, page 18) and dengue fever are responsible for the balance of deaths.
Despite this grim toll, deaths tell only one piece of the story: For every person who dies, many more get sick; at least 250 million people suffer from water-related diseases each year. In turn, these cases of sickness and death take a great toll on the economy of developing nations, costing untold billions of dollars in lost economic output and diminished markets.
Regions affected by water-related disease are diverse, with deaths and illnesses in South America, Asia, India and Africa. Sub-Saharan Africa has the highest percentage of population without access to clean water, but populous nations in relatively more arid regions, like India and Bangladesh (see story, page 36), are also affected.
Attention to the critical role water plays in these regions and elsewhere arguably began in the 1970s at the United Nations-sponsored Mar Del Plata Conference and accelerated in the 1980s with the Water Supply and Sanitation decade. Today, this international effort continues with the Water for Life decade, which was launched on March 22. Due in part to these ongoing international efforts, which take the form of conferences, reports, policy directives and in-the-field projects, progress has been made in reducing the proportion of people without access to clean water.
For example, a global effort to eliminate dracunculiasis, a preventable parasitic disease more commonly known as Guinea worm disease, has also had some success. Because the parasite is transmitted via contaminated drinking water, educating people to follow simple control measures, including drinking from groundwater and filtering their water, can completely prevent illness and eliminate the disease. Such efforts have reduced the number of cases from over 3 million in the 1980s to 150,000 by 1996, and to around 75,000 in 2000, according to WHO. The disease, however, is still found among the poorest rural communities in areas without safe water supplies in sub-Saharan Africa.
In another example, a joint project between UNICEF, the government of India and local nongovernmental organizations, rural Indian villages are receiving more than 2 million hand pumps to access groundwater, instead of having to rely on often-contaminated surface water. And thousands of smaller projects have also been effective. Despite these advances, however, due to rapid population growth over the last few decades, the overall number of people without clean water continues to grow.
The United Nations has affirmed the critical role that water plays in human health in several different statements and treaties, but has taken up the more focused cause of reducing deaths from water-related diseases with the adoption of the Millennium Development Goals. These eight goals deal with pressing issues like poverty, hunger and environment. Goal seven, Ensure environmental sustainability, includes a target to halve by 2015 the proportion of people without sustainable access to safe drinking water and basic sanitation.
Even if the U.N. Millennium Goals are met, however, between 34 and 76 million people could perish from water-related diseases by 2020, according to an analysis by the Pacific Institute, making the global water crisis one of the most serious threats to human health we now face. The global water crisis is squarely on a level with other mass killers such as AIDS and heart disease.
Sadly, industrial nations spend a pittance on overseas water and sanitation projects: only five of 22 nations have met the U.N. goal of spending 0.7 percent of a nation's gross national income on overseas development assistance, and only a fraction of all international assistance is spent on water and sanitation projects. In the period 1999 to 2001, an average of only $3 billion annually was provided for water supply and sanitation projects, but consumers are thought to spend at least $100 billion per year on bottled water (see sidebar).
Back at home
Although the developing world faces the brunt of the water-related human health crisis, the United States and many industrialized nations also face threats, albeit of a different nature.
Over the past 100 years, we have built hundreds of dams, blocking the flow of almost every major river in the United States. This development has destroyed critical habitat, decimated salmon and other fish runs and harmed users downstream. It has also pushed water resources in many places to the brink; we may be using certain supplies faster than nature can recharge them and in some places, water contamination is getting worse (see Geotimes, May 2004). To top it all off, climate change, by altering temperatures and when and where precipitation falls, may further stress water systems around the country.
We can, however, meet much of our future need through improved efficiency and intelligent planning. The Pacific Institute's report, Waste Not, Want Not, found that California can cut its urban use by one-third using currently available water-saving technology.
New technologies and innovative planning, especially when it comes to water for agriculture, industry and energy, could yield huge further savings; we can meet our future needs, but only if we become more aggressive and organized in improving efficiency, tracking use and protecting overstressed resources. And only if we acknowledge that the global water crisis could one day come home. Bettering our situation here and overseas will require the ongoing efforts of hydrologists, geologists and other water experts, who have already given us a better picture of how the water cycle works and how to tackle both natural and human-caused change.
What we need is a new global push at the political and even societal level to help those without basic water access, so we can stem the terrible tide of death and disease that haunts the developing world. Some experts believe that $10 billion to $20 billion per year spent intelligently on community-scale efforts could make a huge improvement in the crisis over the next decade or two.
Computer science often suffers from the perception that it is only for tech-savvy kids. This can create mental hurdles for students who do not consider themselves well versed in tech or don't have much experience with computers. Educators can change this perception by introducing inclusive activities that spark students' interest and nurture their current capabilities. Here are three ways to organically motivate your computer science students.
1. Break the stigma against CS-related disciplines.
Many students may be apprehensive about diving into computer science because they don't consider themselves digitally literate. Teachers can help break the stigma by introducing computer science as a multi-curricular discipline that can benefit students tremendously throughout their careers. Computer science topics can range from coding an animated video to programming robots, creating one-of-a-kind digital structures, or making your own moving model car.
Teachers can create a computer science gallery on Creatubbles and invite students in other schools to share creations into it. Students can learn from each other’s activities and passions, see the vast range of projects that computer science covers, and connect and collaborate on programming projects.
2. Make computer science identifiable for students.
Like any educational subject, students will potentially give more effort if the topic is relatable for them. When introducing computer science, teachers can assign realistic lessons that students can not only identify with, but enjoy in their daily lives. For example, students interested in gaming might be keen on making a game of their own. They can create characters, storylines and code to realize their very own, original computer game.
3. Help make computer science inclusive for all students
Students come from many different backgrounds and levels of digital awareness. Teachers should make sure that lessons are accessible to students who haven't had much interaction with digital tools. For instance, teachers can use age-appropriate, entertaining apps to introduce coding to students, like Scratch or Tynker. Students can then create their own animations, quizzes or apps. Using these applications, students can practice at their own pace, create coding projects that interest them and learn on their individual terms.
Creatubbles offers teachers plenty of fun, engaging computer science-related activities. Explore Creatubbles for inspiration for your own classroom.
Creatubbles™ is the safe global community for creators of all ages. Save, share, discover and interact with multimedia creativity portfolios. Sign up to Creatubbles.
A new study has mapped the way the brain generates empathy, even for people who differ physically from ourselves.
USC researcher Lisa Aziz-Zadeh found that empathy for someone to whom one can directly relate, for example, because they are experiencing pain in a limb that one possesses, is mostly generated by the intuitive, sensory-motor parts of the brain.
However, empathy for someone to whom one cannot directly relate relies more on the rationalizing part of the brain.
Aziz-Zadeh, assistant professor at USC's Division of Occupational Science and Occupational Therapy, said though they are engaged to differing degrees depending on the circumstance, it appears that both the intuitive and rationalizing parts of the brain work in tandem to create the sensation of empathy.
"People do it automatically," she said.
In an experiment, Aziz-Zadeh and a team from USC showed videos of tasks being performed by hands, feet, and a mouth to a woman who had been born without arms or legs and also to a group of 13 typically developed women.
Videos showed activities such as a mouth eating and a hand grasping an object. Researchers also showed videos of pain, in the form of an injection, being inflicted on parts of the body.
While the participants watched the videos, their brains were scanned using functional magnetic resonance imaging (fMRI), and those scans were then compared, revealing the differing sources of empathy.
In an additional finding, Aziz-Zadeh discovered that when the congenital amputee viewed videos of tasks being performed that she could also perform but using body parts that she did not have, the sensory-motor parts of her brain were still strongly engaged.
For example, the participant can hold objects, but uses a stump in conjunction with her chin to do so rather than a hand.
If the goal of the action was impossible for her, then another set of brain regions involved in deductive reasoning were also activated.
The findings have been published online by Cerebral Cortex.
Organization of the Respiratory System
Each lung is composed of air sacs called alveoli - the sites of gas exchange with blood. Airways are tubes through which air flows between the external environment and the alveoli. A respiratory cycle consists of an inspiration (inhalation) - movement of air from the external environment into the alveoli, and an expiration (exhalation) - movement of air from the alveoli to the external environment.
Airways and Blood Vessels
During inspiration, air passes through nose/mouth, pharynx (throat) and larynx. These constitute the upper airways. Airways beyond the larynx are divided into 2 zones:
(1) The conducting zone, where there is no gas exchange. This consists of the trachea, which branches into two bronchi, one entering each lung and branching further. The walls of the trachea and bronchi contain cartilage for support. The first branches without cartilage are called terminal bronchioles.
(2) The respiratory zone where gas exchange occurs. Consists of respiratory bronchioles with alveoli attached to them.
Epithelial surfaces of airways up to respiratory bronchioles have cells that secrete mucus to trap particulate matter in air, which is then moved by cilia present on these cells and swallowed. Macrophages, which engulf pathogens, are also present.
Alveoli: The Site of Gas Exchange
Alveoli are hollow sacs having open ends continuous with lumens of airways. Inner walls lined by a single layer of flat epithelial cells called type I alveolar cells, interspersed by thicker, specialized cells called type II alveolar cells. Alveolar walls contain capillaries and a small interstitial space with interstitial fluid and connective tissue. Blood within an alveolar wall capillary is separated from air within alveolus by a very thin barrier. There are also pores in the walls that permit flow of air. The extensive surface area and the thin barrier permit rapid exchange of large quantities of oxygen and carbon dioxide by diffusion.
Lungs and the Thoracic Wall
Lungs are situated in thorax - the body compartment between neck and abdomen. Thorax is a closed compartment, bound at the neck by muscles and separated from the abdomen by a sheet of skeletal muscle, the diaphragm. Wall of thorax is composed of ribs, breastbone (sternum) and intercostal muscles between ribs.
A closed sac, the pleural sac, consisting of a thin sheet of cells called pleura, surrounds each lung. The pleural surface coating the lung (visceral pleura) is attached to the lung by connective tissue. The outer layer (parietal pleura) is attached to the thoracic wall and diaphragm. A thin layer of intrapleural fluid separates the two layers of pleura. Changes in the hydrostatic pressure of the intrapleural fluid - the intrapleural pressure (Pip), also called the intrathoracic pressure - cause the lungs and thoracic wall to move in and out together during breathing.
Ventilation and Lung Mechanics
Ventilation is exchange of air between atmosphere and alveoli. Air moves by bulk flow, from a high pressure to a low pressure region. Flow rate can be found with:
F = (Patm - Palv)/R
where Patm is the atmospheric pressure, Palv is the alveolar pressure, and R is the resistance to airflow offered by the airways.
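As a minimal illustration of how this relation determines the direction of airflow, here is a short Python sketch; the pressure and resistance values are arbitrary illustrative numbers, not values given in the text:

```python
# Bulk-flow relation F = (Patm - Palv) / R.
# Pressures in cmH2O relative to atmosphere, resistance in cmH2O per L/s (illustrative).

def air_flow(p_atm, p_alv, resistance):
    """Return airflow into the lungs in L/s (negative = air flowing out)."""
    return (p_atm - p_alv) / resistance

# During inspiration the alveolar pressure falls slightly below atmospheric:
print(air_flow(p_atm=0.0, p_alv=-1.0, resistance=2.0))  # +0.5 L/s, air flows in
# During expiration the alveolar pressure rises slightly above atmospheric:
print(air_flow(p_atm=0.0, p_alv=1.0, resistance=2.0))   # -0.5 L/s, air flows out
# When Patm equals Palv, flow ceases:
print(air_flow(p_atm=0.0, p_alv=0.0, resistance=2.0))   # 0.0
```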
During ventilation, air is moved in and out of lungs by changing alveolar pressure through changes in lung dimensions.
Volume of lungs depends on (1) difference in pressure between inside and outside of lungs, called transpulmonary pressure and (2) stretchability of the lungs, called lung compliance.
Muscles used in respiration are attached to chest wall. When they contract or relax, they change the chest dimensions, which in turn changes transpulmonary pressure, which in turn changes lung volume, which in turn changes alveolar pressure, causing air to flow in or out of lungs.
Stable Balance between Breaths
Transpulmonary pressure = Palv - Pip
Between breaths, Palv is zero, which means it is the same as atmospheric pressure. Pip is negative, or less than atmospheric pressure, because the elastic recoil of the lung inwards and the elastic recoil of the chest wall outwards increase the volume of the intrapleural space between them and decrease the pressure within it. Therefore, transpulmonary pressure is greater than zero, and this pressure exerts an expanding force equal to the force of elastic recoil of the lung and keeps it from collapsing. The volume of the lungs is kept stable and there is air inside the lungs. By a similar phenomenon, the pressure difference across the chest wall (Patm - Pip), directed inward, keeps the elastic chest wall from moving outward excessively.
Inspiration is initiated by neurally induced contractions. The diaphragm moves down and the intercostal muscles move the rib cage out. The size of the thorax increases and Pip drops even further. This increases transpulmonary pressure, thus expanding the lungs. This increases the size of the alveoli, decreasing the pressure within them. When Patm > Palv, there is a bulk flow of air from the external environment through the airways and into the lungs. When Patm = Palv, air flow ceases.
The diaphragm and intercostal muscles relax during expiration. The chest recoils, becoming smaller. Pip increases, thus decreasing transpulmonary pressure. Lungs recoil, compressing air in alveoli and increasing Palv. Air passively flows out from alveoli to the external environment. Under certain conditions, air can also be expired actively by contracting a set of intercostal and abdominal muscles that decrease thoracic dimensions.
Lung compliance is a measure of elasticity or the magnitude of change in lung volume (ΔVL) that can be produced by a given change in transpulmonary pressure.
CL = ΔVL / Δ (Palv - Pip)
When lung compliance is low, Pip must be made lower to achieve lung expansion. This requires more vigorous contractions of diaphragm and intercostal muscles.
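To see what the compliance relation implies in practice, the sketch below compares a typical healthy lung with a stiffer one; the compliance values (about 0.2 L/cmH2O for a normal lung, much lower for a fibrotic lung) are common textbook magnitudes assumed here for illustration, not figures from this text:

```python
# Rearrangement of CL = dVL / d(Palv - Pip): volume change = compliance x pressure change.

def volume_change(compliance, delta_transpulmonary):
    """Change in lung volume (L) for a given change in transpulmonary pressure (cmH2O)."""
    return compliance * delta_transpulmonary

delta_p = 2.5  # cmH2O change produced by a quiet inspiratory effort (illustrative)
print(volume_change(0.20, delta_p))  # ~0.5 L  - a normal tidal breath
print(volume_change(0.05, delta_p))  # ~0.125 L - the same effort moves far less air,
# so a low-compliance lung demands more vigorous muscle contraction (a lower Pip).
```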
Determinants of Lung Compliance
Since the surface of alveolar cells is moist, surface tension between water molecules resists stretching of lung tissue. Type II alveolar cells secrete a substance called pulmonary surfactant that decreases surface tension and increases lung compliance. Respiratory distress syndrome of newborns is a result of low lung compliance.
Resistance is determined mainly by airway radius. Transpulmonary pressure exerts a distending force that keeps airways from collapsing, making them larger during inspiration and smaller during expiration.
Asthma is a disease in which airway smooth muscle contracts and increases airway resistance.
Chronic Obstructive Pulmonary Disease (COPD) is chronic bronchitis or the production of excessive mucus in bronchi that obstructs the airways.
The Heimlich maneuver is the manual application of an upward pressure to the abdomen of a person who is choking on an object caught in the airways. This maneuver can force the diaphragm to move up, reducing thoracic size and increasing alveolar pressure. The forceful expiration that is produced can expel the lodged object.
Lung Volumes and Capacities
Tidal volume is the volume of air entering lungs during a single inspiration or leaving the lungs in a single expiration. Maximal amount of air that can be increased ABOVE this value during the deepest inspiration is called inspiratory reserve volume. After expiration of a resting tidal volume, volume of air still remaining in lungs is called functional residual capacity. Additional volume of air that can be expired (by active contraction of expiratory muscles) after expiration of resting tidal volume is called expiratory reserve volume. Air still remaining in lungs after a maximal expiration is called residual volume. Vital capacity is maximal volume of air that can be expired after a maximal inspiration.
Minute ventilation = Tidal volume x Respiratory rate
Units: (ml/min) = (ml/breath) x (breaths/minute)
Anatomic dead space is space within the airways that does not permit gas exchange with blood. Total volume of fresh air entering the alveoli per minute is called alveolar ventilation.
Alveolar ventilation = (Tidal volume - anatomic dead space) x respiratory rate
Units: (ml/min) = [(ml/breath) – (ml/breath)] x (breaths/minute)
Since a fixed volume of each tidal volume goes to dead space, increased depth of breathing is more effective in elevating alveolar ventilation than increased breathing rate.
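The point about depth versus rate can be made concrete with a small calculation; the 150 ml anatomic dead space is a typical adult value assumed here for illustration, not a figure from the text:

```python
# Two breathing patterns with identical minute ventilation but different depth.
DEAD_SPACE = 150  # ml, assumed anatomic dead space

def minute_ventilation(tidal_volume, rate):
    return tidal_volume * rate                 # ml/min

def alveolar_ventilation(tidal_volume, rate):
    return (tidal_volume - DEAD_SPACE) * rate  # ml/min

# Deep, slow breathing: 500 ml x 12 breaths/min
print(minute_ventilation(500, 12), alveolar_ventilation(500, 12))  # 6000, 4200
# Shallow, rapid breathing: 250 ml x 24 breaths/min
print(minute_ventilation(250, 24), alveolar_ventilation(250, 24))  # 6000, 2400
# Same minute ventilation, but the deeper breaths deliver far more fresh air to the alveoli.
```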
The volume of inspired air that is not used for gas exchange as a result of reaching alveoli with no blood supply is called alveolar dead space. The sum of anatomic and alveolar dead space is called physiologic dead space.
Gas Exchange in Alveoli and Tissues
In steady state, the volume of oxygen consumed by body cells per unit time is equal to the volume of oxygen added to blood in the lungs, and the volume of carbon dioxide produced by cells is identical to the rate at which it is expired.
The ratio of CO2 produced / O2 consumed is called respiratory quotient (RQ), which depends on type of nutrients being used for energy.
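A quick worked example, using typical resting values that are assumed here rather than taken from the text:

```python
# Respiratory quotient RQ = CO2 produced / O2 consumed.
co2_produced = 200  # ml/min, typical resting value (assumed)
o2_consumed = 250   # ml/min, typical resting value (assumed)
rq = co2_produced / o2_consumed
print(rq)  # 0.8, the usual value on a mixed diet; ~1.0 for pure carbohydrate, ~0.7 for fat
```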
Alveolar Gas Pressures
Alveolar PO2 is lower than atmospheric PO2 because oxygen in alveolar air keeps entering the pulmonary capillaries. Alveolar PCO2 is higher than atmospheric PCO2 because carbon dioxide enters the alveoli from the pulmonary capillaries. Alveolar PO2 is positively correlated with (1) the PO2 of atmospheric air and (2) the rate of alveolar ventilation, and inversely correlated with (3) the rate of oxygen consumption. Alveolar PCO2 is inversely correlated with the rate of alveolar ventilation and positively correlated with the rate of carbon dioxide production.
Hypoventilation is an increase in the ratio of carbon dioxide production to alveolar ventilation while hyperventilation is a decrease in this ratio.
Alveolar-Blood Gas Exchange
Blood entering the pulmonary capillaries is systemic venous blood, having a high PCO2 and a low PO2. Differences in the partial pressures of oxygen and carbon dioxide on the two sides of the alveolar-capillary membrane result in net diffusion of oxygen from alveoli to blood and of carbon dioxide from blood to alveoli. With this diffusion, capillary blood PO2 rises and its PCO2 falls, and net diffusion of these gases ceases when the capillary partial pressures become equal to those in the alveoli.
In diffuse interstitial fibrosis, alveolar walls thicken with connective tissue reducing gas exchange. Ventilation-perfusion inequality can result from:
(1) ventilated alveoli with no blood supply
(2) blood flow through alveoli with no ventilation, reducing gas exchange.
Gas Exchange in Tissues
Metabolic reactions within cells consume oxygen and produce carbon dioxide. Intracellular PO2 is lower and PCO2 is higher than in blood. As a result, there is a net diffusion of oxygen from blood into cells, and a net diffusion of carbon dioxide from cells into blood.
Transport of Oxygen in Blood
Oxygen is carried in 2 forms:
(1) dissolved in plasma
(2) reversibly combined with hemoglobin (Hb) molecules in erythrocytes. Each Hb molecule is a globin protein with four iron-containing heme groups attached to it. Each heme group binds one molecule of oxygen. Hb exists in two forms: deoxyhemoglobin (Hb) and oxyhemoglobin (HbO2). The fraction of all Hb in the form of HbO2 is called percent Hb saturation.
Percent saturation = (O2 bound to Hb / maximal capacity of Hb to bind O2) x 100, where the maximal capacity of Hb to bind O2 is the oxygen-carrying capacity.
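A worked example of the saturation calculation; the oxygen-carrying capacity figures (about 1.34 ml O2 per gram of Hb and roughly 15 g Hb per 100 ml of blood) are standard textbook values assumed here for illustration:

```python
# Percent Hb saturation = O2 bound / oxygen-carrying capacity x 100.
hb_concentration = 15                    # g Hb per 100 ml blood (assumed typical value)
o2_capacity = 1.34 * hb_concentration    # ~20.1 ml O2 per 100 ml blood (maximal)
o2_bound = 19.5                          # ml O2 actually bound per 100 ml blood (assumed)

percent_saturation = o2_bound / o2_capacity * 100
print(round(percent_saturation, 1))      # ~97%, a typical arterial value
```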
Effect of PO2 on Hemoglobin Saturation
Raising blood PO2 increases the combination of oxygen with Hb, and the binding of one oxygen molecule to Hb increases the affinity of the remaining sites on the same molecule. Therefore, the extent to which oxygen combines with Hb increases rapidly as PO2 increases, and the relationship between the two variables is called the oxygen-hemoglobin dissociation curve. The plateau of the curve at higher PO2 provides a safety factor for oxygen supply at low alveolar PO2.
The diffusion gradient favoring oxygen movement from alveoli to blood is maintained because oxygen binds to Hb, keeping the plasma PO2 low; only dissolved oxygen contributes to PO2. In the tissues the process is reversed.
Carbon Monoxide and Oxygen Carriage
CO competes for the oxygen binding sites on Hb and also decreases the unbinding of oxygen from Hb.
Effects of Blood PCO2, H+ concentration, Temperature and DPG on Hb Saturation
The more active a tissue is, the greater is its PCO2, H+ concentration and temperature. CO2, H+ ions and DPG (2, 3-diphosphoglycerate) combine with Hb and modify it allosterically, thereby shifting the dissociation curve to the right. This shift causes Hb to release more oxygen to the tissues.
Transport of Carbon Dioxide in Blood
Some fraction of carbon dioxide is dissolved and carried in blood. Some reacts reversibly with Hb to form carbamino Hb.
CO2 + Hb ↔ HbCO2
Some carbon dioxide is converted to bicarbonate.
CO2 + H2O ↔ H2CO3 ↔ HCO3- + H+
The enzyme carbonic anhydrase is present in erythrocytes, where the reaction takes place, after which the bicarbonate moves out into the plasma.
Transport of H+ ions between Tissues and Lungs
If a person is hypoventilating, arterial H+ concentration rises due to increased PCO2, and this is called respiratory acidosis. Hyperventilation lowers H+ concentration, and this is called respiratory alkalosis. Deoxyhemoglobin has a higher affinity for H+ ions than oxyhemoglobin and binds most of the H+ produced. In the lungs, when deoxyhemoglobin is converted to oxyhemoglobin, the H+ ions are released.
Control of Respiration
The diaphragm and intercostal muscles are skeletal muscles, and therefore breathing depends upon cyclical excitation of these muscles. Control of this neural activity resides in neurons called medullary inspiratory neurons in the medulla oblongata. These neurons receive inputs from the apneustic and pneumotaxic centers in the pons. Negative feedback from pulmonary stretch receptors is also involved in controlling respiration (the Hering-Breuer reflex).
Control of Ventilation by PO2, PCO2, and H+ Concentration
Control by PO2 and PCO2
Peripheral chemoreceptors, called the carotid bodies and aortic bodies, are in close contact with arterial blood and are stimulated by a steep decrease in arterial PO2 and by an increase in H+ concentration. They send inputs to the medulla.
Control by H+ not due to CO2
Lactic acid produced by exercising muscles causes metabolic acidosis (changes in H+ concentration not due to CO2 are termed metabolic acidosis or metabolic alkalosis), changing H+ concentration and stimulating the peripheral chemoreceptors.
Control of Ventilation during Exercise
Blood PCO2, PO2, and the H+ concentration due to CO2 do not change much during exercise because of compensatory hyperventilation. The change in H+ concentration due to lactic acid, input from mechanoreceptors in joints and muscles, the increase in body temperature, the increase in plasma epinephrine, etc. play important roles in stimulating ventilation.
Other Ventilatory Responses
Protective reflexes, such as the cough and sneeze reflexes, protect the respiratory system from irritants. Receptors for the sneeze reflex are located in the nose or pharynx, while those for the cough reflex are located in the larynx, trachea and bronchi. The reflexes are characterized by a deep inspiration followed by a violent expiration.
Voluntary control of breathing is accomplished by descending pathways from the cerebral cortex. It cannot be maintained when involuntary stimuli are very high.
The reflex from J receptors, which are located in the lungs, is triggered when the receptors are stimulated by an increase in lung interstitial pressure due to occlusion of a pulmonary vessel (pulmonary embolus), left ventricular failure, etc. The reflex effect is tachypnea (rapid breathing).
Hypoxia is a deficiency of oxygen at the tissue level. There are four types:
(1) Hypoxic hypoxia (hypoxemia) which is characterized by reduced arterial PO2.
(2) Anemic hypoxia occurs when total oxygen content of blood is reduced due to inadequate number of erythrocytes, deficient or abnormal Hb, or binding of CO to Hb. Arterial PO2 remains normal.
(3) Ischemic hypoxia (hypoperfusion hypoxia) occurs when blood flow to tissues is low.
(4) Histotoxic hypoxia occurs when tissue is unable to utilize the oxygen due to interference from a toxic agent. However, the quantity of oxygen reaching the tissue is normal.
Retention of carbon dioxide and increased arterial PCO2 is called hypercapnia.
Emphysema is a disease characterized by increased airway resistance, decreased surface area for gas exchange due to alveolar fusion, and ventilation-perfusion inequalities.
Nonrespiratory Functions of the Lungs
Lungs (1) concentrate a large number of biologically active substances in the bloodstream and also remove them, (2) produce and add new substances to blood and (3) trap and dissolve small blood clots. |
Scientists from Korea have developed a new biosensor that uses fluorescence resonance energy transfer (FRET) to detect DNA double-strand breaks in living specimens in real time.
This biosensor has the potential to aid the development of new treatments for DNA damage-related diseases by revealing how our bodies repair damaged DNA.
Double-strand breaks (DSBs) are a type of DNA damage where both strands of DNA break at the same location. They can adversely affect cell growth and functioning.
Currently, DSBs are detected by immunostaining techniques, which identify markers that accompany DNA damage, such as the protein γH2AX. However, these methods are tedious, and cannot be used to detect DSBs in real time in living specimens.
In a 2023 study, researchers describe a fluorescence resonance energy transfer (FRET) biosensor that can detect DSBs in real time and provide time- and location-based information on γH2AX.
The biosensor also has the potential to revolutionise cancer treatment by helping doctors understand how cells respond to treatment and facilitating the discovery of new DNA repair drugs.
“The biosensor we have designed could be useful in areas such as cancer treatment and drug discovery,” says Associate Professor Tae-Jin Kim, from Pusan National University, Korea, who led the study.
FRET sensors consist of two fluorescent proteins or dyes, a donor and an acceptor, and are used to investigate interactions between biological molecules. The energy transfer, and consequently the amount of emitted light (the FRET signal), depends on the distance and orientation between the two dyes.
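For a feel for how strongly the FRET signal depends on distance, the sketch below evaluates the standard Förster relationship; the equation and the 6 nm Förster radius are general FRET theory assumed for illustration, not values taken from the study described here:

```python
# FRET efficiency for a donor-acceptor pair: E = 1 / (1 + (r / R0)^6),
# where R0 is the Foerster radius (distance at which efficiency is 50%).

def fret_efficiency(distance_nm, r0_nm=6.0):
    """Energy-transfer efficiency for a donor and acceptor separated by distance_nm."""
    return 1.0 / (1.0 + (distance_nm / r0_nm) ** 6)

for d in (3, 6, 9, 12):
    print(f"{d} nm separation -> efficiency {fret_efficiency(d):.2f}")
# Efficiency falls off steeply with distance, which is why a structural change in a
# biosensor produces a measurable change in the FRET signal.
```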
“Moreover, as changes in the FRET signal give useful indications of the extent of the DNA damage, the sensor can also be used to examine DNA damage and repair mechanisms, optimise cancer treatments, discover and assess DNA repair drugs, and identify DNA damaging factors in the environment,” concludes Associate Prof Kim. |
Grab a rubber band and stretch your curiosity to discover exothermic and endothermic reactions
Simple household objects are involved in this practical, which shows off a simple principle in a clear and effective way.
This experiment should take 30 minutes.
- Eye protection
- Rubber band, 0.5 cm wide (one for each participant)
- Hair dryer
- Weight >1 kg
Health, safety and technical notes
- Read our standard health and safety guidance.
- Always wear eye protection.
- Ensure rubber bands are sterile and clean.
- Ask participant to stand back so that broken rubber bands do not drop weights onto feet.
- Hairdryers should not be brought from home; ensure all electrical equipment used has an up-to-date PAT (portable appliance testing) test.
- Take the rubber band. Quickly stretch it and press it against your lips. Note any temperature change compared with the unstretched band.
- Now carry out the reverse process. First stretch the rubber band and hold it in this position for a few seconds. Then quickly release the tension and press the rubber band against your lips.
- Compare this temperature change with the first situation.
- Set up the apparatus as shown in the diagram. Make sure that if the rubber band breaks, the weight cannot drop on feet.
- Predict what happens if this rubber band is heated with a hair dryer.
- Write down your prediction.
- Measure the length of the stretched rubber band.
- Now heat the rubber band using the hair dryer and observe the result.
- Measure the new length.
- The depth of treatment depends on the ability of the students.
- Students should recognise the difference between exothermic and endothermic reactions.
- A rubber band width of 1–1.5 cm and a 2 kg mass works well.
- A ruler standing beside the apparatus is effective as students can see the contraction as it occurs.
- Another alternative is to use a clampstand and adjust the height of the weight until it just touches the bench.
By placing the rubber band against their lips, students may detect the slight warming that occurs when the rubber band is stretched (exothermic process) and the slight cooling effect that occurs when the rubber band contracts (endothermic process).
The equation ΔG = ΔH - TΔS (where ΔG is the change in Gibbs free energy, ΔH is the enthalpy change, ΔS is the entropy change and T is the absolute temperature) can be rearranged to give TΔS = ΔH - ΔG. The stretching process is exothermic, so ΔH is negative, and since stretching is nonspontaneous (work must be done on the band), ΔG is positive; TΔS must therefore be negative.
Since T, the absolute temperature, is always positive, we conclude that ΔS due to stretching must be negative.
This tells us that rubber under its natural state is more disordered than when it is under tension.
When the tension is removed, the stretched rubber band spontaneously snaps back to its original shape; that is, ΔG for contraction is negative.
The cooling effect means that it is an endothermic process (ΔH > 0), so that TΔS is positive. Thus, the entropy of the rubber band increases when it goes from the stretched state to the natural state.
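The sign analysis in the two cases can be summarised compactly (this is only a restatement of the argument above in symbols):

```latex
\begin{align*}
\text{Stretching (non-spontaneous, exothermic):} \quad
  &\Delta H < 0,\ \Delta G > 0
  \;\Rightarrow\; T\Delta S = \Delta H - \Delta G < 0
  \;\Rightarrow\; \Delta S < 0 \\
\text{Contraction (spontaneous, endothermic):} \quad
  &\Delta H > 0,\ \Delta G < 0
  \;\Rightarrow\; T\Delta S = \Delta H - \Delta G > 0
  \;\Rightarrow\; \Delta S > 0
\end{align*}
```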
- Based on your initial testing (by placing the rubber band against your lips) decide which process is exothermic (heat given out): stretching or contracting of the rubber band?
- The chemist Le Chatelier made the statement, ‘… an increase in temperature tends to favour the endothermic process’. Explain in your own words how this statement and your answer to question 1 can account for your observations when heating the rubber band.
- Draw a number of lines to represent chains of rubber molecules, showing how they might be arranged in the unstretched and stretched forms.
- They should observe that the rubber band contracts when heated, which may well be the opposite of what they have predicted. The simplest explanation is that heating favours the endothermic process, and for the stretched rubber polymer the endothermic process is contraction.
Rubber band experiment - student sheet (PDF, 0.14 MB)
Rubber band experiment - teacher notes (PDF, 0.18 MB)
This practical is part of our Classic chemistry experiments collection.
The variety of habitats in our state gives Missouri an amazing diversity of plants and animals. While we may enjoy going outside and experiencing nature through fishing, birdwatching, hunting, hiking, and other activities, nature provides benefits far beyond recreation.
Individual species and their habitats interact and function together as one system (ecosystem). Healthy ecosystems provide us with important environmental services we all depend on. The services include:
- Clean air and water
- Flood protection
- Pollination and food production
- Carbon storage
- Building materials
- Stress relief
Nature does all this — and more — for free! When ecosystems (habitats and the species within them) are healthy, we benefit. Healthy ecosystems support a large variety of species, which is called biodiversity.
Every living thing in an ecosystem plays an important role.
- Tree roots absorb water, prevent erosion, and filter pollutants.
- The leaves of plants remove pollutants from the air, create oxygen, and convert sunlight into sugars (the basis of all food).
- Bees, butterflies, and other insects, as well as birds and bats, pollinate our food and other plants.
- Fungi, beetles, and bacteria break down dead leaves and dead trees into soil.
- Worms and moles aerate soil.
- Snakes and owls eat rats and mice.
- Vultures, opossums, and other scavengers keep our environment clean by eating dead animals, which keeps diseases from spreading and aids in limiting fly reproduction.
And the list goes on. Many Missouri species need large areas of healthy, connected habitat to survive and maintain a genetically strong population. Thus, development and habitat loss are among the greatest pressures faced by individual animals and their populations at large.
But we can stop or reduce species declines by taking action now. Acting now is not only sensible but also more cost-efficient than taking emergency action after a species is in trouble.
That’s why MDC is taking proactive measures to protect and improve habitats throughout our state. With the help of our partners and private landowners, we can contribute to the long-term welfare of both people and wildlife by ensuring that Missouri has healthy, functioning ecosystems for ages to come.
Full details about MDC’s work and focus on habitats and species around the state can be found in the State Wildlife Action Plan and the Terrestrial Natural Communities of Missouri page below. |
Industries have been an integral part of human civilization since ancient times. From the early days of agriculture and textile manufacturing to the modern-day technology and software industries, industries have undergone a significant transformation over time. In this article, we will explore the evolution of industries and the factors that have contributed to their growth and development.
The earliest industries were based on agriculture and animal husbandry. Humans discovered the benefits of farming around 12,000 years ago, which led to the establishment of settlements and communities. Agriculture was the foundation of early civilization, and it allowed for the development of other industries such as pottery, metallurgy, and textile manufacturing.
As human societies became more complex, the need for more advanced technologies and systems grew. The industrial revolution, which began in the 18th century, marked a significant turning point in the history of industries. It was a period of great change, and it transformed the way goods were produced and consumed.
The industrial revolution started in Britain and then spread to other parts of the world. It was characterized by the development of new technologies and machinery, which made the production process more efficient and cost-effective. The steam engine, for example, played a significant role in powering the factories and machines of the industrial revolution.
During this time, new industries emerged, such as coal mining, iron and steel production, and the textile industry. The industrial revolution also led to the growth of transportation and communication industries, which facilitated trade and commerce.
The 20th century was marked by rapid industrialization, which transformed the global economy. The growth of industries such as automobiles, electronics, and telecommunications revolutionized the way people lived and worked. The development of the assembly line and mass production techniques made goods more affordable and accessible to a wider range of people.
The Second World War had a significant impact on the growth and development of industries. The war led to the rapid expansion of industries such as aviation, chemicals, and electronics. It also stimulated research and development in new technologies such as nuclear power, computers, and semiconductors.
The post-war era saw the emergence of new industries, such as aerospace, biotechnology, and information technology. These industries have played a significant role in shaping the modern world and have revolutionized the way we live, work, and communicate.
The 21st century has been marked by the rise of new industries, such as renewable energy, e-commerce, and artificial intelligence. The development of new technologies and the expansion of the global economy have created new opportunities for businesses and entrepreneurs.
The evolution of industries has been driven by various factors, including technological advancements, changes in consumer behavior, and shifts in global economic trends. The growth and development of industries have also been influenced by government policies and regulations, as well as social and cultural factors.
In conclusion, the evolution of industries has been a continuous process, shaped by a wide range of factors. From the early days of agriculture and animal husbandry to the modern-day technology and software industries, industries have undergone significant changes over time. The growth and development of industries have transformed the way we live, work, and interact with each other. As we continue to move forward, it will be interesting to see how industries evolve and adapt to the changing needs of society. |
Exploration is the process of trying to find accumulations of oil and natural gas trapped under the Earth’s surface. Production is the process of recovering those hidden resources for processing, marketing and use.
To understand the challenges the oil and natural gas industry faces in exploration and production, it helps to understand how oil and gas accumulations – often called “reservoirs” – develop in the first place:
Oil and natural gas are formed when decaying plants and micro-organisms are trapped in layers of sediment and – over the course of millions of years – become buried deep within the earth, where underground heat and pressure turn them into useful hydrocarbons, such as oil and natural gas.
The layers of rock in which hydrocarbons are formed are called source rocks. High pressures underground tend to squeeze hydrocarbons out of source rocks into what are called reservoir rocks. These are rocks, such as sandstone, which feature pores large enough to permit fluids like oil, natural gas, and water to pass through them. Since oil and natural gas are less dense than water, they will float upward toward the surface. If nothing stops this migration, the oil and natural gas may reach daylight through what is called a surface seep.
More often, however, hydrocarbons’ path upward is blocked by a layer of impermeable rock, such as shale, or by some other geologic formation. These trap the oil and natural gas, either in an underground pocket or in a layer of reservoir rock, so that it may be recovered only by drilling a well.
“Socratic seminar is a formal discussion, based on a text, in which the leader asks open-ended questions. Within the context of the discussion, students listen closely to the comments of others, thinking critically for themselves, and articulate their own thoughts and their responses to the thoughts of others.”
Israel, Elfie. “Examining Multiple Perspectives in Literature.” In Inquiry and the Literary Text: Constructing Discussions in the English Classroom. James Holden and John S. Schmit, eds. Urbana, IL: NCTE, 2002.
Inside and Outside Circles
We will use a backchannel from time to time during our Socratic seminars. A backchannel is a digital or online conversation that occurs simultaneously with a face-to-face discussion or other learning experience. Backchannels are a way of encouraging interaction and reflection while not interrupting the flow of the primary discussion. This augmented discussion provides a structure for those of you who may need time to process your thinking but want to contribute to the learning, or it could be a way for those of you who enjoy sharing your ideas but are working on not dominating the conversation. You can contribute to backchannel discussions asynchronously, at any time and from anywhere, which gives you and me another means of communication. So lessons are no longer confined to just the day and time when you experience them. Our backchannel is not meant to replace our conversations in class; it improves and extends them!
Reflection Form– Fill out your responses on paper or a Google Doc, then cut and paste them into the Reflection Form to submit. Do NOT compose your responses in the form.
- What did you find interesting? Include specific ideas that stood out from the discussion to you and why. Be specific, citing whose idea it was, and explain your reasoning. How did this discussion deepen your understanding of the story?
- What questions do you have? Include specific questions, ideas, issues, concerns that you are still struggling with and why this is still an issue for you.
- How did the discussion go?
- Inner Circle: Evaluate your personal, overall participation: how prepared were you? What were your strengths during discussion? Areas of improvement? Be specific.
- Outer Circle: How do you think the inner circle did? What were the strengths of their discussion? What points were developed well; conversely, what points were dropped? Be specific. |
Among all the vitamins, those of group B often go almost unnoticed. There is a lot of talk about vitamin A for healthy eyesight, vitamin C to strengthen the immune system, or vitamin D for bones and for strengthening defenses against viruses. And what about the B vitamins? This group of water-soluble vitamins is actually indispensable for our body and our brain. These vitamins are often found together in the same foods and share similar roles. Let's take a closer look at the properties and possible sources of B vitamins, based on the most recent scientific research.
B vitamins, properties
The B vitamins are essential to our body as they allow us to use the foods we ingest to produce energy for the cells. Not only that, they participate in the synthesis of DNA and RNA, protect the liver, the nervous system and mood, counteracting depression (Wu et al, Nutr Rev, 2022). According to recent scientific research, vitamin B levels are also an indicator of bone health. Indeed, in older people, a vitamin B deficiency increases the risk of bone fractures by 70% (McLean et al, J Clin Endocrinol Metab, 2008).
B vitamins and brain
Do not forget the role of the B vitamins in the functioning of the brain. All of these vitamins cross the barrier that separates blood and brain and reach this organ, where they are always kept at high levels. For example, folate, or vitamin B9, is found in the brain at up to four times its concentration in plasma, while biotin, or vitamin B8, can reach levels in the brain up to 50 times those seen in the blood. But what can B vitamins do in the brain? They contribute to the structure and proper functioning of neurons (vitamin B1), counteract the action of free radicals (vitamins B2 and B5), counteract inflammation and disturbed sleep, especially in Parkinson's disease (vitamin B3), and help regulate glucose metabolism, to which the brain is very sensitive (vitamin B7) (Kennedy et al, Nutrients, 2016). It is noteworthy that there is an amino acid, called homocysteine, whose above-normal values indicate a greater risk of cognitive decline and Alzheimer's disease. Higher homocysteine values are linked to a deficiency of vitamins B12, B6 and B9. Studies have observed that taking a supplement containing these vitamins for one month significantly reduced homocysteine, protecting the brain and improving cognitive function (Olaso Gonzalez et al, IUBMB Life, 2022 - Cheng et al, Nutr Neurosci, 2016).
B vitamins, where to find them
B vitamins are found in whole grains, brown rice, green leafy vegetables, legumes, milk and dairy products, eggs, brewer's yeast, but also fish, oilseeds, bananas, citrus fruits, pollen and royal jelly. In general, adherence to the Mediterranean Diet is associated with an increase in the intake of vitamins, including those of group B (Kennedy et al, Nutrients, 2016).
B vitamins, when a deficiency is possible
A healthy and varied diet should ensure a regular supply of B vitamins and thus avoid deficiencies. However, the problem of a poorly varied diet is apparently more widespread than one might think, since deficiencies of B vitamins are frequently observed in the population. For example, nearly 30% of the population may not get enough vitamin B6, while 12% of the population has a folate (vitamin B9) deficiency (Kennedy et al, Nutrients, 2016). Then there is the problem of vitamin B12, which is found in abundance in meat and dairy products. Those who follow a very restricted diet, as in the case of a vegan diet, may therefore present a deficiency of this vitamin (Kennedy et al, Nutrients, 2016). It should also be emphasized that alcohol abuse, as well as some medicines such as antibiotics, can limit the absorption of the B vitamins. Finally, boiling causes part of these vitamins to be lost into the cooking water (Hrubsa et al, Nutrients, 2022).
B vitamins, supplements and warnings
In some cases and periods of life, an additional intake of B vitamins through supplements may be required. For example, during pregnancy, with advancing age, in the case of a vegan diet, or when taking certain drugs, it may be necessary to resort to supplements. Since the B vitamins are water-soluble, any excess is excreted through the urine, thus avoiding damage. In any case, it is always good to ask your doctor for advice; they will evaluate the correct dosage based on your personal situation. |
Activations of the visual system are normally generated by stimulation of the eye’s retina. The interdisciplinary research team, led by Professor Gregor Rainer of the University of Freiburg, has succeeded in technically producing such activations without any visual information reaching the eye, by stimulating the appropriate neurons directly. Using the optogenetic method, which involves introducing transmembrane proteins into brain neurons, the team stimulated these neurons with pulses of light.
When the brain can see without the eyes
In principle, this method makes it possible to inject information into the visual system, even in cases of functional loss of the eyes, which the brain then interprets as vision. The results obtained could therefore form the basis of a future generation of visual prostheses. Further targeted research is required to make this a reality.
The visual prostheses currently available work inside the eye. While they may be useful for certain eye diseases, they have not yet achieved any real breakthroughs. The present work focuses not on the eye, but on the visual thalamus, the relay zone in the center of the brain that collects and transmits information from the eyes. Further fundamental research is now needed to ensure that these artificially created perceptions in the visual system reproduce natural vision as faithfully as possible.
The animal model as a basis for gaining new insights into a global problem
The tupaia’s developed visual system, similar to that of humans, makes it ideally suited to this study. Switzerland’s strict animal experimentation standards guarantee animal-friendly conditions.
According to an estimate by the World Health Organization, at least 2.2 billion people worldwide are visually impaired at near or distance. Most of those suffering from visual impairment or blindness are over the age of 50. It is to be hoped that further research in this field will lead in the medium term to the development and testing of innovative visual prostheses. Such a breakthrough would improve the quality of life for many people, particularly in societies with an ageing population. |
What Does The Internet Look Like?
While the internet is an integral and pervasive part of society — cloud based services are making the internet a homogenized experience no matter where you go, whether at home, at work, or even on the other side of the globe — many don’t understand the structure upon which it’s built. The internet was built in a similar manner to a house, with architects and engineers who design, shape, and create its structure. However, while we expect the internet to be built securely and solidly, we keep encountering security issues that make us shake our heads, often forgetting that the backbone of the internet is constructed primarily on technology that's more than 30 years old. Forget zero-days — the internet has enough existing holes to last a lifetime.
Ideally, an outward network connection should look similar to the diagram on the left, but the reality is that after a while, most data centers (where the actual physical wired connections are made) often look like the image on the right.
The Way Data Travels
Because of the way routing works, data does not generally move in a straight line. Although protocols exist to allow for direct connectivity, it is unwieldy at scale. The reality is that the internet is a collection of networks that each connect to one another, and packets of data bounce around each of these until they find the network to deliver the data. The bulk of internet traffic is routed by the Border Gateway Protocol (BGP), which is designed to allow routers to create relationships with each other that tell them how to pass along data — effectively the largest of all peer-to-peer networks.
Basically, the internet addresses that computers or mobile devices use (IPs, both v4 and v6) are put together in aggregated address blocks called prefixes (sometimes also called CIDRs) and assigned to different organizations. Organizations then group their IP address space into bundles, each referred to as an Autonomous System, which is assigned a number (ASN), and use a method like BGP to send data between ASNs. In layman’s terms, think of the ASN as your city, the prefix as your postal/ZIP code, the IP address as your house address, and BGP as the in-car navigation system that tells you how to drive from Point A to Point B.
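To make the relationship between IPs, prefixes, and origin ASNs concrete, here is a minimal Python sketch of a longest-prefix lookup. It is purely illustrative: the prefixes and ASNs are documentation/private-use values rather than real assignments, and production routers implement this lookup in hardware, not in Python.

```python
from dataclasses import dataclass
from ipaddress import ip_network, ip_address
from typing import Optional

@dataclass(frozen=True)
class Announcement:
    """One BGP announcement: a prefix originated by an Autonomous System."""
    prefix: str       # e.g. "203.0.113.0/24" (documentation range)
    origin_asn: int   # e.g. 64500 (private-use ASN)

# A toy routing table: which ASN originates which prefix.
table = [
    Announcement("203.0.113.0/24", 64500),
    Announcement("198.51.100.0/24", 64501),
]

def origin_for(ip: str) -> Optional[int]:
    """Longest-prefix match: the most specific announced prefix covering the IP wins."""
    addr = ip_address(ip)
    matches = [a for a in table if addr in ip_network(a.prefix)]
    if not matches:
        return None
    best = max(matches, key=lambda a: ip_network(a.prefix).prefixlen)
    return best.origin_asn

print(origin_for("203.0.113.42"))  # -> 64500
```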
There exists a variety of protocols for transmitting data between IPs and between ASNs, including international guidelines and standards on how all of this interaction should happen. Not only might the data bounce between many ASN points on its path from the source to the destination, but most protocols also break the data into small chunks, called packets, each of which might take its own path between the source and destination. The transmission protocols then have to re-assemble them at the opposite end.
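A minimal sketch of that packetization idea is shown below. It is a toy model, not a real transport protocol: it simply tags chunks with sequence numbers, lets them arrive in any order, and reorders them on arrival.

```python
import random

def packetize(data: bytes, size: int = 4):
    """Split data into (sequence_number, chunk) packets."""
    count = (len(data) + size - 1) // size
    return [(i, data[i * size:(i + 1) * size]) for i in range(count)]

def reassemble(packets):
    """Sort packets by sequence number and rebuild the original payload."""
    return b"".join(chunk for _, chunk in sorted(packets, key=lambda p: p[0]))

message = b"data may take many paths"
packets = packetize(message)
random.shuffle(packets)              # packets can arrive out of order
assert reassemble(packets) == message
```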
The issue is that none of the standards on interoperating or routing data are legally binding. So as long as two AS network operators are willing to connect their networks, neither of them has to follow any of the international guidelines on how to do so.
The challenge then is in knowing where your data has been with all this bouncing around going on.
Visualizations by Country
When it comes to internet routing, geopolitical awareness does not exist. When the internet was first designed, there was nothing within the protocols to enforce geolocation identification. As data routes across various AS network clouds, it does so without consideration of the geographic location it is routing through. Neither does it care about the country of origin its AS network operator is incorporated in. In an era of growing concerns regarding data privacy, data access, and data localization, we decided to do some internet traffic routing cartography to visualize how much routing stays within a given country, and how many network connections route out of the country.
We achieved this by collecting data on BGP announcements from a variety of sources, then creating a peer relationship table from these via inbound and outbound route advertisements. Because routers can only “see” the announcements occurring within their own proximity, BGP route announcement collections are only as good as the number of routes being passed through them. Hence, data was collected from around the world and then compared across sources to create an understanding of the quality and the incremental value that each data source provided.
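The core of such a peer relationship table can be derived from AS_PATH attributes: any two ASNs that appear adjacent in an observed path must have a BGP session between them. The sketch below is a simplified illustration of that step, assuming the announcements have already been parsed into lists of ASNs (the ASNs shown are private-use numbers, not real networks).

```python
from collections import defaultdict

# Hypothetical parsed announcements: each AS_PATH is the ordered list of ASNs
# a route traversed, as seen from a collector.
as_paths = [
    [64496, 64500, 64511],
    [64497, 64500, 64511],
    [64496, 64499],
]

def build_peer_table(paths):
    """Record each pair of adjacent ASNs in an AS_PATH as a peer relationship."""
    peers = defaultdict(set)
    for path in paths:
        for left, right in zip(path, path[1:]):
            if left != right:          # ignore AS-path prepending (repeated ASNs)
                peers[left].add(right)
                peers[right].add(left)
    return peers

peer_table = build_peer_table(as_paths)
print(sorted(peer_table[64500]))       # -> [64496, 64497, 64511]
```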
As one might expect, this data set became rather large, so we created subsets using the country assignment based on the whois registration for each ASN in order to create lists of ASNs per country. Finally, we created visualizations for each country based on a snapshot in time, showing all ASNs identified for that particular country and each of their BGP peers, regardless of whether the BGP peer was identified as part of that country. All the ASNs local to the country are represented in red, while the ASNs that appear only through peering and are not local to that country are indicated in black. ASNs for the country that had no peers were shown as single dots around the central visualization.
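As a rough illustration of how such a per-country view can be rendered, the sketch below uses networkx and matplotlib to draw a country's ASNs in red and their foreign peers in black. The edge list, ASN-to-country mapping, and output file name are hypothetical placeholders; the article's actual tooling and data sources are not described at this level of detail.

```python
import networkx as nx
import matplotlib.pyplot as plt

# Hypothetical inputs: peer edges between ASNs, and a whois-derived country per ASN.
edges = [(64500, 64511), (64500, 64496), (64501, 64497)]
asn_country = {64500: "NL", 64511: "NL", 64496: "DE", 64501: "NL", 64497: "US", 64502: "NL"}
isolated = [64502]   # ASNs registered to the country but with no observed peers

def draw_country(country: str):
    g = nx.Graph()
    g.add_edges_from(edges)
    g.add_nodes_from(isolated)
    # Keep the country's own ASNs plus any direct peer, local or foreign.
    local = {n for n in g if asn_country.get(n) == country}
    sub = g.subgraph(local | {p for n in local for p in g.neighbors(n)})
    colors = ["red" if asn_country.get(n) == country else "black" for n in sub]
    nx.draw(sub, nx.spring_layout(sub, seed=1), node_color=colors, node_size=60)
    plt.savefig(f"{country}_asn_map.png")

draw_country("NL")
```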
These visualizations, where the red parts refer to the local country and the black parts are operated by a foreign ASN, show how impossible it is to keep data within one’s own country. As an example, we adjusted the US cartography to indicate how many ASNs were based in the US (green) and how many were in Europe (red). The results show a lot of overlap:
Country by Country Observations
Looking at the routing landscapes for each country, one can see some distinct differences in their approaches.
- In the US, the peering seems to cluster on three main ASNs, and many of the “blooms” in the visualization show large clusters of ASNs that have only a single BGP peership. These cases leave the country vulnerable to BGP attacks. If the single point to which any of the larger blooms connect is BGP-hijacked, all the single ASN nodes behind it will lose their routing to the rest of the globe. Ultimately, their upstream ASN will also fall victim to the BGP hijack. Canada also has these same kinds of clustering and single-peer blooms, but not to the same extent of tight clustering as the US.
- We observed the exact opposite in Russia. ASN peering is highly distributed, making the internet space in that region highly stable. However, it will also make data localization particularly challenging so long as the bulk of the ASNs in Russia have non-Russian ASN peering.
- Interestingly enough, despite the perception that China is segregated from the rest of the world’s infrastructure, China-based ASNs can be observed to have a number of foreign ASN peers. The assumption is that the country uses network-based traffic filtering rather than BGP-based filtering to enforce the data wall put in place. Additionally, China can be observed to have ASNs, not otherwise part of the core China-based internet, that have direct peering with foreign-based ASNs. In all of these cases, the foreign-based ASNs appear to be subsidiaries or partners of China-based companies and likely received special approval for such connections.
- Japan’s infrastructure is elegant in its simplicity. Despite the large number of users and IPs in the country, these all appear to use only a few ASNs that then connect to the rest of the world. The one weakness is that, like in countries other than Russia, there appears to be a susceptibility to BGP hijacking given how few active ASNs there are.
- The German infrastructure was particularly interesting to observe, especially in light of the GDPR’s data-in-transit regulations. It has heavy peering with non-German-based ASNs, and if these ASNs are not European-based, they may have GDPR regulatory requirements that they are unaware of.
- Globally, there are thousands of ASNs not in use. When studying the day over day data, we could observe that occasionally these ASNs that have long been abandoned would start to announce IP space and have peers for a short period of time -- and then stop again. In many cases, this is due to the use of the ASN as part of a criminal scheme to do xx announcements of IP space.
The visualizations immediately raised a number of data regulatory and localization questions.
How is the Internet Standardized and Regulated?
The majority of specifications and guidelines for the engineering of the internet were (and continue to be) developed by the Internet Engineering Task Force (IETF.org). The IETF is a non-profit organization formed in 1986 that allows anyone (without specific membership requirements) to join to get more participation in standards development and maintenance. Through a variety of Special Interest Working Groups, the IETF has published a series of voluntary standards over the years; these generally are one of three types: RFCs (Request for Comments), BCPs (Best Current Practices), and STDs (Standards). The Internet Architecture Board (IAB), whose role is to ensure consistency across the Working Groups and develop long term planning for voluntary standards development, oversees the IETF.
However, the IETF's standards are not legally binding. These are voluntary adoption standards that may or may not be followed; although successful integration with the rest of the internet is fairly dependent on compliance.
Relevant Laws and Regulations
If the IETF standards are not legally binding regulations for internet routing, then what is? The issue of laws, regulations, and jurisdictions becomes complicated when put into the context of cyberspace. Long gone are the days of “my data is stored on this disk” — can anyone even say for sure where their data has been, let alone attest to it for compliance audits? Do people know for sure that those who handle their data are compliant with the various regulations on privacy and data protection? Are they even subject to regulations by countries not their own?
Privacy, law enforcement access, and data localization — these three concepts all focus on the same thing, but from different perspectives. How do we protect, or gain access to, data on individuals’ activities online, when “online” could mean anywhere around the globe? This makes matters even more complex. Multiple factors make it a challenge to determine where your data resides and where it is during transit, whether due to the fact that organizations are moving to cloud based systems, or because of the way the internet's building blocks connect (via routing / BGP) in a completely geopolitically agnostic way.
After 9/11, global concerns about safety and security moved many countries to strengthen their surveillance laws in order to give law enforcement and other investigative bodies easier access to information that can assist them with their cases. This is especially true in the US, where the first of these types of laws — the Patriot Act — was enacted. The Patriot Act granted law enforcement greater authority in tracking and intercepting communications, with the caveat that it was limited for the most part to data and communications within the US. More recently, the US passed the Cloud Act, a law that has caused apprehension amongst privacy advocates. The Cloud Act was meant to address the inability to access data on US citizens stored outside of the US, or data held by companies with a US presence and stored outside of the US. The latter concept raises concerns because of the range of possible interpretations of the concept of “presence” in a digital world. In other words, this could be applied to just about any data stored on or traversing the internet anywhere around the world, if one can make the case for the need to access it.
In response to privacy concerns raised by these new access laws, many countries enacted stronger privacy regulations to protect their citizens from these types of unreasonable intrusions into their digital lives. Perhaps the biggest and most notable one is the European General Data Protection Regulation (GDPR). The GDPR was intended to ensure that personal data is handled with data protection by design — that is, the highest level of protection — by default. Data on and about the individual must be stored separately, and can only be retained with informed consent. The GDPR takes the approach that it is sufficient for the data to pertain to EU citizens for the regulation to apply, rather than requiring the data to be physically stored within a European country. It contains specific articles (Articles 44-50) on the transfer of data outside of the EU. Specifically:
- Article 44 outlines that any transfer of personal data must be done with the highest levels of protection, and can only be done in a manner that does not undermine any of the protections outlined in GDPR.
- Article 48 states that a third-party country cannot require access to data protected under the GDPR unless the request is based on an international agreement, such as a mutual legal assistance treaty (MLAT); otherwise, the transfer constitutes a violation of the GDPR.
Clearly, the regulations in the GDPR and similar privacy protection laws are in direct opposition to those in access laws, creating a confusing situation where complying with one set of regulations can mean being non-compliant with the other. And although the GDPR and the Cloud Act are used here as examples, many countries have enacted similar kinds of regulations. Mapping which regulations are applicable to your organization should be a top priority given the high cost of fines for non-compliance.
In response to the growing schism between data privacy and data access, some countries have attempted to implement what have become known as data localization regulations. Bret Cohen, Britanie Hall, and Charlie Wood’s paper, "Data Localization Laws and Their Impact on Privacy, Data Security and the Global Economy," published in the American Bar Association’s Antitrust Magazine in the fall of 2017, summarizes various data localization laws and regulations. In 2015, Albright Stonebridge Group’s paper entitled Data Localization: A Challenge to Global Commerce and the Free Flow of Information identified 23 countries that have had various forms of regulations attempting to stipulate that data relating to their citizens must stay within the country. The paper also illustrates the relative strength of each country's localization laws:
|Strength of measures|Countries|
|---|---|
|Explicit requirements that data must be stored on servers within the country.|Brunei, China, Indonesia, Nigeria, Russia, Vietnam|
|Laws that create such large barriers to the transfer of data across borders that they effectively act as data localization requirements.| |
|Wide range of measures, including regulations applying only to certain domain names and regulations requiring the consent of an individual before data about them is transferred internationally.|Belarus, India, Kazakhstan, Malaysia, South Korea|
|Restrictions on international data transfer under certain conditions.|Argentina, Brazil, Colombia, Peru, Uruguay|
|Measures tailored to specific sectors, including healthcare, telecom, finance, and national security.|Australia, Canada, New Zealand, Taiwan, Turkey, Venezuela|
|No known data localization laws.| |
Are Regulations Consistent With the Way the Internet Operates?
The problem with many data privacy and localization laws and regulations is that they do not take into account how internet routing actually works.
In the case of data privacy regulations, there appears to be a tendency to treat data routing with a client/server, application-layer mentality. Most treat the concept of “data goes from Point A, the server, to Point B, the client” as if there were three boxes: data stored at one location, data stored at the end location, and then “the middle bit.” There seems to be a misunderstanding of what happens between Point A and Point B, ranging from privacy standards that treat the middle as if it were a local data store, to standards that treat data as if it were transported in a straight and direct path between the two, with no intersections or stops in between. The technical reality is that “the middle bit” is a series of data transports and data stores (each time data hits the next-hop router in the path it is traveling, similar to a red light at a road intersection), followed by another set of transports and stores, until it makes all the necessary hops to the final destination. Most global privacy standards do not take into account the geopolitically agnostic nature of routing, and often do not appreciate how frequently data crosses international borders. Those few that do account for how data bounces around require textbook high-end encryption, without recognizing that implementing it compliantly at this scale would be so cost-prohibitive as to prevent adoption.
Turning to data localization laws — in other words, laws that have been implemented to avoid the multijurisdictional data traversal issues raised above — there have been several famous cases where countries attempted to cut themselves off from the internet in an attempt to control their citizens' data traffic:
- Syria: Internet connectivity between Syria and the outside world was shut down in late November 2011 and again in early May 2013 in order to control information regarding the ongoing civil war. However, news of events was still able to flow in and out of the country.
- Pakistan: In one of the best examples of issues arising from network route manipulation, an attempt by Pakistan to control its citizens' access to YouTube resulted in the country inadvertently disabling access to the popular web service for most of the globe.
- Yemen: Rebels managed to disrupt nearly 80% of all internet traffic to the country in July 2017 when they cut one of the largest fiber cables into the country. Single points of failure in the design of the country’s infrastructure led to what could have been a very serious national security threat.
Additionally, some countries have heavily segregated the internet infrastructures within the country from outside connections by design. These include countries such as:
- North Korea
- Saudi Arabia
These countries have implemented strict technical and legislative controls within their borders in order to limit their citizens' access to content from other countries. However, China’s internet map, for example, shows that digital segregation does not completely cut off access to outside connections (foreign ASNs are shown in black).
Ironically, our research suggests that the more robust a country's network is, the harder it is to enact data localization and geographic network segregation. Russia, by far, has one of the most robust routing maps of all the countries we have investigated thus far:
The visualization above shows that the number of peering points within its network minimizes the risk of national internet outages. Unfortunately, this works against Russia's attempt to implement Federal Law No. 242-FZ, which requires that all Russian citizens’ personal data be collected, stored, and processed fully within Russian borders. This regulation also applies to multinational companies that operate in Russia, even if they have no current physical presence there but are used by or contain information about Russian citizens. This has forced many large organizations such as Amazon, Google, and Facebook to set up data centers in Russia in order to continue their operations there. Additionally, Russia has been experimenting with models that disconnect it from the internet for several years now, and, based on the global routes, this was attempted again earlier this year. While not a complete success, Russia is now publicly stating that its attempts at localization have been successful enough to allow it to lock out the West should political tensions rise sufficiently.
In an era so fraught with political tensions, these regulations are only going to become more complex. It would behoove any organization to carefully consider which of these various global regulations it must adhere to when designing its network architecture and outbound connection points. This includes considering what types of data will be flowing across its network peering points, as well as understanding where its own data goes and the path it will take across the internet. Any mistake could be costly: the GDPR includes fines of up to 4% of global revenues, or up to €20 million, whichever is higher, for such omissions, and it has been shown that data breaches can irreparably damage the reputation of a brand or organization.
There is a mismatch between the nature of the internet and the ways we regulate it. We are making a call to action for both governments and organizations to recognize that this is a global issue that cannot be solved by any one government or organization, as each plays a vital role in the solution. It will take a global united effort to address the issues raised by our current routing infrastructure:
- International routing guidelines are not legally binding, but routing does not adhere to geopolitical boundaries.
- Despite various regulations, data is still prone to theft when misrouted to unintended destinations via BGP hijacking.
- Threat actors are recognizing flaws in the foundational components of internet infrastructure, as shown by a marked increase in BGP-based attacks (BGP hijacking, IP spoofing, etc.).
Governments should have a better understanding of the nuances of data routing across the globe, including the realization that internet cabling and routing does not allow for geopolitical distinction. They should also understand the complicated interrelationships between internet engineering, data privacy considerations, and cross-geopolitical impacts of regulations — especially since the nature of data routing means that legislation from one country will also apply to others outside of that country.
For organizations, there is a need to understand how regulations affect them, and to recognize that poor network engineering on their part contributes to global security and privacy issues.
The issue of digital sovereignty is still far from settled — the organizations and individuals that create laws and regulations need to collaborate with those that design and implement how the internet works in their respective countries.
For governments, there is a need for a more realistic approach to data protection, both for privacy and localization. In an ideal scenario, they would implement regulations that consider several factors:
- Realizing that it would cost billions to retrofit the internet, and that such a retrofit is therefore neither feasible nor enforceable.
- Realizing that true data localization is nearly impossible to implement, so truly private information must be encapsulated with encryption in some form.
- Enforcing thorough policy standards such as the IETF’s BCP38 (Network Ingress Filtering: Defeating Denial of Service Attacks which employ IP Source Address Spoofing) and BCP84 (Ingress Filtering for Multihomed Networks), which will aid in limiting the impact of address spoofing.
- Requiring ISPs and telcos to have stronger identification and request-authentication standards for IP space reassignments.
- Funding national CIRTs to perform BGP monitoring for the ASNs within their country, so that route hijacks can be detected, alerted on, and mitigated (a sketch of such a check follows this list).
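The sketch below is a minimal, illustrative version of the kind of origin check such monitoring might perform: it compares observed announcements against a list of expected origin ASNs for monitored prefixes. The prefixes, ASNs, and expected-origin table are hypothetical; a real deployment would draw on live route collector feeds and RPKI/ROA data rather than a hard-coded table.

```python
from ipaddress import ip_network

# Hypothetical monitored prefixes and their legitimate origin ASNs.
expected_origins = {
    ip_network("203.0.113.0/24"): {64500},
    ip_network("198.51.100.0/24"): {64501},
}

observed = [
    ("203.0.113.0/24", 64500),   # normal announcement
    ("203.0.113.0/24", 65001),   # unexpected origin -> possible hijack
    ("198.51.100.0/25", 65002),  # more-specific of a monitored prefix -> suspicious
]

def check(prefix_str: str, origin: int):
    """Return an alert string if the announcement conflicts with expectations."""
    prefix = ip_network(prefix_str)
    for monitored, valid_asns in expected_origins.items():
        if prefix.subnet_of(monitored) and origin not in valid_asns:
            return f"ALERT: {prefix} announced by AS{origin}, expected {sorted(valid_asns)}"
    return None

for prefix, origin in observed:
    alert = check(prefix, origin)
    if alert:
        print(alert)
```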
Since private sector enterprises operate the bulk of the ASNs within a given country, their role is even more important. In this case, the private sector can play an even bigger role than governments in protecting their country’s national interests, and should keep in mind the following:
- Ensuring that the organization peers directly with (or is no more than one hop away from) its outsourced cloud data service providers. This will help promote transparency in data transmission.
- Implementing systems so that there are at least two outbound peers, in order to minimize the potential damage should one peering point become unavailable.
- Assessing the transmission path for data to determine the geographic paths the data will traverse.
- Ensuring that the network has BCP38 and BCP84 enabled, to limit the potential of spoofed IP addresses traversing the network (a minimal sketch of the source-address check appears after this list).
- Building strong network route monitoring. This will help organizations quickly detect if data is being routed, accidentally or maliciously, to unintended destinations, which can lead to serious consequences (a quick look at the website BGPMon.net will show how prevalent this issue is).
- Ensuring that peering partners have BCP38 and BCP84 implemented, as part of the organization’s peering agreements.
- Leveraging influence throughout the supply chain and including these same requirements contractually for suppliers.
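For illustration, the sketch below shows the source-address check that BCP38-style ingress filtering performs, expressed in Python. In practice this filtering is implemented in router ACLs or unicast reverse-path forwarding rather than application code, and the prefixes shown are documentation ranges used here as stand-ins.

```python
from ipaddress import ip_address, ip_network

# Prefixes legitimately assigned to the edge network behind a given interface.
assigned_prefixes = [ip_network("203.0.113.0/24"), ip_network("198.51.100.0/25")]

def ingress_permitted(src_ip: str) -> bool:
    """BCP38-style check: only forward packets whose source address belongs
    to the prefixes assigned to the network they arrive from."""
    src = ip_address(src_ip)
    return any(src in prefix for prefix in assigned_prefixes)

print(ingress_permitted("203.0.113.7"))  # True  -> forward
print(ingress_permitted("192.0.2.1"))    # False -> drop as spoofed
```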
THE ENIGMATIC POWER OF 142857 IN THE ANCIENT EGYPTIAN PYRAMIDS
Ever since the enigmatic number 142857 was unearthed within the archaeological context of the ancient Egyptian pyramids, it has captivated the attention of mathematicians and scientists worldwide. Numerous studies have attempted to establish a direct or indirect connection between this number and the pyramids, yet no conclusive findings have emerged thus far. Nevertheless, continuing research on ancient civilizations, characters, and masterpieces has uncovered methods for calculating line segments and relationships between polygons and numbers through the exploration of polygons, right triangles, multiplication tables, and the mathematical constant π. These discoveries suggest that the ancient Egyptian pyramids were designed and constructed using a mathematical approach rooted in the application of isosceles right triangles, with "5" and "7" as representative values, rather than the number 142857 simply carrying its cyclic meaning. Furthermore, they illuminate the profound wisdom possessed by the ancient Egyptians, affirming that the pyramids stand as a testament to the advanced development of ancient mathematics. This discovery not only represents a cognitive advancement in the study of ancient Egyptian archaeology and history but also holds importance in the broader realm of ancient mathematics and its impact on the development of modern mathematics.
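As context for the "cyclic meaning" mentioned above: 142857 is the well-known cyclic number formed by the repeating period of 1/7, and multiplying it by 1 through 6 merely permutes its digits. The short Python sketch below only illustrates this arithmetic property; it is not part of the cited research.

```python
from decimal import Decimal, getcontext

CYCLIC = 142857

# Multiplying by 1..6 permutes the same six digits; multiplying by 7 gives 999999.
for k in range(1, 8):
    print(f"{CYCLIC} x {k} = {CYCLIC * k}")

# The decimal expansion of 1/7 repeats the digits 142857.
getcontext().prec = 30
print("1/7 =", Decimal(1) / Decimal(7))
```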
School readiness remains high on the political agenda amid reports of increasing numbers of children starting reception in nappies and lacking basic speech and language skills. But what does school readiness really mean and how can early years providers help to prepare children as they approach school age? Elizabeth Walker explains.
The term “school readiness” is widely used but there is no nationally agreed definition. The term is therefore open to interpretation and the subject prompts intense debate from policy makers, educationalists and childcare practitioners alike.
Unicef’s interpretation highlights the importance of collaborative working between childcare professionals, teachers and parents. Early years providers therefore play a key role in preparing children for the next stage in their education and must work closely with families and schools to ensure a smooth transition for children at this important time in their lives.
Indicators of school readiness
The Early Years Foundation Stage (EYFS) states that it promotes teaching and learning to ensure children’s “school readiness” and gives children the broad range of knowledge and skills that provide the right foundation for good future progress through school and life.
Although there is no common agreement on the definition of school readiness, it is widely accepted that it refers not just to a child’s cognitive and academic skills but involves physical, social and emotional development as well.
The Professional Association for Childcare and Early Years (PACEY) published a 2013 research report into what school readiness means for childcare professionals, parents and primary school teachers. What Does “School Ready” Really Mean? shows that the majority of each group agree that children who are school ready:
have strong social skills
can cope emotionally with being separated from their parents
are relatively independent in their own personal care
have a curiosity about the world and a desire to learn.
For a child to be considered school ready, respondents stated that cognitive and academic skills such as reading and writing are not as important as children being confident, independent and curious.
Starting school is a time of transition and it requires co-operation between childcare professionals, teachers and parents. Unicef describes school readiness as consisting of three pillars:
children’s readiness for school
schools’ readiness for children
families’ readiness for school.
By working together the three pillars maximise each child’s likelihood of success as they progress through their time in school. Therefore, school readiness refers not only to the attributes of a child but also to the roles and responsibilities of families, teachers and practitioners in ensuring that children are ready and able to access learning as they enter school and beyond.
Early years practitioners are vital at this time of transition and must seek to collaborate and communicate effectively with parents and schools to help to prepare children for school.
Early years providers
Research suggests that early years education makes a positive difference to children’s school readiness and to outcomes in later life. Early years practitioners should develop effective practice to ensure that they are providing the relevant support to children and their families when they are approaching school age. Good practice includes the following points.
Establish positive relationships and communicate with parents about the transition to school.
Share ideas with parents about how to support children’s development and learning at home.
Track individual children’s progress and share information with schools and all relevant partners.
Share good practice and reinforce positive relationships with schools to identify the common expectations they share in the EYFS.
Ensure staff understand the different stages of child development, how these relate to each other and how to plan appropriate resources and activities.
Demonstrate high expectations for each child by providing challenge, promoting resilience and raising aspirations.
Recognise, record and respond to the different ways that children learn and reflect this in provision and practice.
Respect and respond to children’s backgrounds and circumstances.
Encourage children to develop independence at meal and snack times.
Encourage children to develop independence in self-care including getting dressed and using the toilet.
Encourage children to play co-operatively together and learn to take turns.
Review practice and identify gaps in staff training.
When children start at primary school they arrive at different levels of learning and development and can have up to a year’s difference in age. In order for children to settle quickly and make good progress, teachers need to assess children’s starting points and provide resources and activities that are suited to their needs, interests and abilities.
Early years providers should share all relevant information regarding children’s learning and development prior to starting school so that teachers can plan accordingly for their needs.
It is important for practitioners to identify when children are experiencing learning and developmental delays as early as possible and whether any additional support is required. Children might require specific programmes of support or appropriate intervention as soon as they start in reception so it is vital that early years providers work closely with the next school and with other relevant agencies to ensure a smooth transition.
Working in partnership with parents is central to the EYFS and it is important that early years providers and teachers recognise that parents are children’s first and most enduring educators. Engaging families in their children’s learning and development has a significant impact on children’s progress and wellbeing as well as outcomes in later life.
It is therefore important that families are fully supported in the transition to school and offered clear and accessible information on how to help prepare their children for the next stage in their learning. This could include information on:
learning through play at home and outdoors
developing speech and language skills through singing songs, nursery rhymes and talking together
supporting children’s independence and self-care skills at home, including getting dressed, eating meals, and using the toilet
offering opportunities to play with other children and experience socialising, sharing and taking turns
recognising and talking through children’s feelings and emotions about starting school
reading with and to children every day if possible
offering opportunities for mark-making, painting and colouring
promoting an active and healthy lifestyle
establishing a good sleep routine
helping children to get to know their new school and teacher before starting
seeking professional support and advice if appropriate. |
The genus Tapirus consists of many extinct species of tapir as well as the species still living today. Five extant species of tapir are currently recognized, living in South America and Southeast Asia, but extinct species are also known from North America, Europe, and Asia more broadly. The tapir body plan evolved very early, and the first relatives already looked like tapirs during the Eocene epoch. Species of Tapirus lived in forested areas and evolved to feed mainly on aquatic vegetation, and they are therefore bound to watery habitats.
North & South America, Eurasia
Miocene to Holocene
The Auvergne Tapir was apparently a common species during the Miocene, Pliocene, and Pleistocene epochs of Europe and Russia. It was fairly small, reaching two meters in length, and looked very similar to modern tapirs, so its behaviour is believed to have been comparable as well. It probably fed on aquatic vegetation using its trunk and lived in forested areas with plenty of water around. The Auvergne Tapir disappeared when the Pliocene forests were replaced by the open savannas of the Pleistocene epoch.
Croizet & Jobert, 1828
Miocene to Pleistocene
van Kolfschoten, T. (2001). Pleistocene mammals from the Netherlands. Bollettino-Societa Paleontologica Italiana, 40(2), 209-216. |