PPDL Picture of the Week for March 2, 2015
Dan Egel, Vegetable Pathologist, SWPAC, Botany & Plant Pathology, Purdue University

The photograph above shows fungal spores that form a beneficial association with the roots of plants. This association is known as a mycorrhiza (plural mycorrhizae). The fungus helps the plant take up nutrients, particularly phosphorus, and in return receives carbohydrates (sugars) from the plant. Mycorrhizae are particularly helpful in barren soils such as one might encounter in mining waste. There is some evidence that these fungi may help with disease and drought resistance as well.

In general, there are two types of mycorrhizal associations. The spores of the fungi shown here enter the roots of plants and grow between cells; this type of association is known as an endomycorrhiza, and these fungi associate mostly with herbaceous plants. Other fungi grow on the outside of the root like a sheath and specialize in associations with trees such as oaks, beeches, and pines. In some cases, it is possible to see the mushrooms of these mycorrhizal fungi surrounding the host tree.

Gardeners may wonder how to get mycorrhizae into their gardens. A quick search of the Internet turns up many products sold as mycorrhizal inoculants (if you purchase one, follow the directions carefully). However, mycorrhizal fungi are common in most soils and will form an association with plants if given the chance. To encourage mycorrhizal associations in your garden, take good care of your soil: add organic matter, for example by using cover crops; avoid over-fertilization; and use a good rotation in vegetable gardens. The next time you see mushrooms of the same type popping up around a tree, remember that they may be forming mycorrhizae with the roots under the soil.
History of the Stars and Stripes

The Stars and Stripes originated as a result of a resolution adopted by the Marine Committee of the Second Continental Congress at Philadelphia on June 14, 1777. The resolution read: “Resolved, that the flag of the United States be thirteen stripes, alternate red and white; that the union be thirteen stars, white in a blue field representing a new constellation.”

The resolution gave no instruction as to how many points the stars should have, nor how the stars should be arranged on the blue union. Consequently, some flags had stars scattered on the blue field without any specific design, some arranged the stars in rows, and some in a circle. The first Navy Stars and Stripes had the stars arranged in staggered formation in alternate rows of threes and twos on a blue field. Other Stars and Stripes flags had stars arranged in alternate rows of four, five and four. Some stars had six points while others had eight.

Strong evidence indicates that Francis Hopkinson of New Jersey, a signer of the Declaration of Independence, was responsible for the stars in the U.S. flag. At the time the flag resolution was adopted, Hopkinson was the Chairman of the Continental Navy Board’s Middle Department. Hopkinson also helped design other devices for the government, including the Great Seal of the United States. For his services, Hopkinson submitted a letter to the Continental Admiralty Board asking “whether a Quarter Cask of the public Wine will not be a proper & reasonable Reward for these Labours of Fancy and a suitable Encouragement to future Exertions of a like Nature.” His request was turned down, since the Congress regarded him as a public servant.

During the Revolutionary War, several patriots made flags for our new nation. Among them were Cornelia Bridges, Elizabeth (Betsy) Ross, and Rebecca Young, all of Pennsylvania, and John Shaw of Annapolis, Maryland.
Although Betsy Ross, the best known of these persons, made flags for 50 years, there is no proof that she made the first Stars and Stripes. It is known that she made flags for the Pennsylvania State Navy in 1777. The flag popularly known as the “Betsy Ross flag,” which arranged the stars in a circle, did not appear until the early 1790s.

The claims of Betsy Ross were first brought to the attention of the public in 1870 by one of her grandsons, William J. Canby. In a paper he read before a meeting of the Historical Society of Pennsylvania, Canby stated: “It is not tradition, it is report from the lips of the principal participator in the transaction, directly told not to one or two, but a dozen or more living witnesses, of which I myself am one, though but a little boy when I heard it. . . . Colonel Ross with Robert Morris and General Washington, called on Mrs. Ross and told her they were a committee of Congress, and wanted her to make a flag from the drawing, a rough one, which, upon her suggestions, was redrawn by General Washington in pencil in her back parlor. This was prior to the Declaration of Independence. I fix the date to be during Washington’s visit to Congress from New York in June, 1776 when he came to confer upon the affairs of the Army, the flag being no doubt, one of these affairs.”

The first flag of the colonists to have any resemblance to the present Stars and Stripes was the Grand Union Flag, sometimes referred to as the Congress Colors, the First Navy Ensign, and the Cambridge Flag. Its design consisted of 13 stripes, alternately red and white, representing the 13 colonies, with a blue field in the upper left-hand corner bearing the red cross of St. George of England with the white cross of St. Andrew of Scotland. As the flag of the revolution, it was used on many occasions. It was first flown by the ships of the Colonial Fleet on the Delaware River.
On December 3, 1775, it was raised aboard Captain Esek Hopkins’s flagship Alfred by John Paul Jones, then a Navy lieutenant. Later the flag was raised on the liberty pole at Prospect Hill, near George Washington’s headquarters in Cambridge, Massachusetts. It was our unofficial national flag on July 4, 1776, Independence Day, and it remained the unofficial national flag and ensign of the Navy until June 14, 1777, when the Continental Congress authorized the Stars and Stripes. Interestingly, the Grand Union Flag was also the standard of the British East India Company.

It was only by degrees that the Union Flag of Great Britain was discarded. The final breach between the colonies and Great Britain brought about the removal of the British Union from the canton of our striped flag and the substitution of stars on a blue field. When two new states, Kentucky and Vermont, were admitted to the Union, a resolution was adopted in January of 1794 expanding the flag to 15 stars and 15 stripes. This flag was the official flag of our country from 1795 to 1818, and was prominent in many historic events. It inspired Francis Scott Key to write “The Star-Spangled Banner” during the bombardment of Fort McHenry; it was the first flag to be flown over a fortress of the Old World when American Marine and Naval forces raised it above the pirate stronghold in Tripoli on April 27, 1805; it was the ensign of American forces in the Battle of Lake Erie in September of 1813; and it was flown by General Jackson in New Orleans in January of 1815.

However, realizing that the flag would become unwieldy with a stripe for each new state, Capt. Samuel C. Reid, USN, suggested to Congress that the stripes remain 13 in number to represent the 13 colonies, and that a star be added to the blue field for each new state coming into the Union.
Accordingly, on April 4, 1818, President Monroe approved a bill requiring that the flag of the United States have a union of 20 stars, white on a blue field, and that upon the admission of each new state into the Union, one star be added to the union of the flag on the fourth of July following its date of admission. The 13 alternating red and white stripes would remain unchanged. This act succeeded in prescribing the basic design of the flag while assuring that the growth of the Nation would be properly symbolized.

Eventually, the growth of the country resulted in a flag with 48 stars upon the admission of Arizona and New Mexico in 1912. Alaska added a 49th star in 1959, and Hawaii a 50th in 1960. The 50-star flag required a new design and arrangement of the stars in the union, a requirement met by President Eisenhower in Executive Order No. 10834, issued August 21, 1959. To conform to this order, a national banner with 50 stars became the official flag of the United States. The flag was raised for the first time at 12:01 a.m. on July 4, 1960, at the Fort McHenry National Monument in Baltimore, Maryland.

Traditionally a symbol of liberty, the American flag has carried the message of freedom to many parts of the world. Sometimes the same flag that was flying at a crucial moment in our history has been flown again in another place to symbolize continuity in our struggles for the cause of liberty. One of the most memorable is the flag that flew over the Capitol in Washington on December 7, 1941, when Pearl Harbor was attacked. This same flag was raised again on December 8 when war was declared on Japan, and three days later at the time of the declaration of war against Germany and Italy. President Roosevelt called it the “flag of liberation” and carried it with him to the Casablanca Conference and on other historic occasions. It flew from the mast of the U.S.S. Missouri during the formal Japanese surrender on September 2, 1945.
Another historic flag is the one that flew over Pearl Harbor on December 7, 1941. It also was present at the United Nations Charter meeting in San Francisco, California, and was used at the Big Three Conference at Potsdam, Germany. This same flag flew over the White House on August 14, 1945, when the Japanese accepted surrender terms.

Following the War of 1812, a great wave of nationalistic spirit spread throughout the country; the infant Republic had successfully defied the might of an empire. As this spirit spread, the Stars and Stripes became a symbol of sovereignty. The homage paid that banner is best expressed by what the gifted men of later generations wrote concerning it.

The writer Henry Ward Beecher said: “A thoughtful mind when it sees a nation’s flag sees not the flag, but the nation itself. And whatever may be its symbols, its insignia, he reads chiefly in the flag, the government, the principles, the truths, the history that belongs to the nation that sets it forth. The American flag has been a symbol of Liberty and men rejoiced in it.

“The stars upon it were like the bright morning stars of God, and the stripes upon it were beams of morning light. As at early dawn the stars shine forth even while it grows light, and then as the sun advances that light breaks into banks and streaming lines of color, the glowing red and intense white striving together, and ribbing the horizon with bars effulgent, so, on the American flag, stars and beams of many-colored light shine out together . . . .”

In a 1917 Flag Day message, President Wilson said: “This flag, which we honor and under which we serve, is the emblem of our unity, our power, our thought and purpose as a nation. It has no other character than that which we give it from generation to generation. The choices are ours. It floats in majestic silence above the hosts that execute those choices, whether in peace or in war.
And yet, though silent, it speaks to us - speaks to us of the past, of the men and women who went before us, and of the records they wrote upon it.

“We celebrate the day of its birth; and from its birth until now it has witnessed a great history, has floated on high the symbol of great events, of a great plan of life worked out by a great people. . . .

“Woe be to the man or group of men that seeks to stand in our way in this day of high resolution when every principle we hold dearest is to be vindicated and made secure for the salvation of the nation. We are ready to plead at the bar of history, and our flag shall wear a new luster. Once more we shall make good with our lives and fortunes the great faith to which we were born, and a new glory shall shine in the face of our people.”
The objective of the Activate module is to empower students to create campaigns and plan actions that have the potential to raise awareness, influence behaviour and create change. The lessons contain examples of different kinds of activism and teachers are encouraged to supplement these examples with current and relevant examples. Students will be guided to create action plans, delegate tasks and enact these plans (where possible). If you have limited class time, we recommend that students investigate the examples of “what activism looks like” in Lesson 1, and use these as inspiration for their planning process. As with all our resources, choose the activities and lessons which suit your needs (and those of your students) the best. The lessons which we consider to be a “core” component of the Passport to Democracy program have been highlighted in red in the downloadable resource.
STATES CHRONICLE – Researchers made an incredible discovery regarding the planet closest to the Sun. Given that proximity, Mercury is thought to have a scorching surface. However, researchers managed to observe certain frozen regions covered in ice, indicating the planet hosts more water than we used to think.

During a thorough analysis of the planet’s surface, researchers noticed that the north pole contains three areas covered in ice. Sunlight doesn’t reach the north pole directly, so these deposits managed to persist even though Mercury is situated so close to our star.

The ice-covered areas are permanently away from the sun

After performing more research, scientists found that these areas are constantly in shadow. They also have a protective layer above them, which shields them from the scorching rays of the sun. All of this allowed water to remain in its frozen state, showing that tiny Mercury can still surprise us. What is even more surprising is that these formations might not even be unique. Scientists think there could be other shadowed regions on the surface where temperatures are not so high and sunlight doesn’t get through. These areas might be present on broken terrain, within small cracks and craters in the surface, where ice can resist melting. The research revealed these areas might have temperatures around -280 degrees Fahrenheit, or even lower. These cracks are quite small and occupy only a small surface, but that is exactly what helps them shelter the ice layer, adding to the water deposits already present in Mercury’s larger craters.

How did water get on Mercury?

But how could such a fiery planet harbor water in the first place? Researchers assume most of it was brought by asteroids that collided with the planet. Another hypothesis suggests that solar winds carried hydrogen to Mercury; if oxygen was also present there, the two combined to produce water.
Regardless of where all this water comes from, this is a massive discovery. Now, researchers are trying to find out how such volatile substances travel through the Solar System and how they reach planets where they usually cannot exist. All the details of this study have been published in the journal Geophysical Research Letters.

Image Source: NASA Jet Propulsion Laboratory
The main function of your liver is to keep you healthy. It is an important organ that digests food, turns nutrients into the chemicals your body requires, turns food into energy, and clears your body of toxins. When your liver fails to perform its designated functions, it can cause significant damage to your whole body. Any disturbance of liver function that causes illness is liver disease. Liver disease is a general term covering all the potential problems that cause your liver to stop working properly.

Since there is a wide variety of liver diseases, the symptoms are usually specific to each illness. Most types do not cause any symptoms in the early stages; symptoms tend to appear once your liver is already damaged and scarred. General symptoms of liver disease can include vomiting, abdominal pain, jaundice, nausea, ongoing fatigue, weakness, decreased appetite, easy bruising, dark urine, and weight loss.

Liver disease is caused by many conditions and factors; the following are some of the most common causes.

Inflammation caused by parasites and viruses can reduce liver function. The viruses may be spread through contaminated food or water, blood, semen, or close contact with an infected person. The most common culprits are the hepatitis viruses. All types of hepatitis are contagious, and the best way to prevent them is by getting vaccinated for types A and B, practicing safe sex, and not sharing needles with anyone. There are five types of hepatitis:

- Hepatitis A mostly occurs when you eat or drink something contaminated by fecal matter. You may not feel any symptoms, and the infection often clears up without treatment, going away by itself after a few months without any long-term issues.
- Hepatitis B can be short-term (acute) or long-term (chronic). Most people get it from someone else through bodily fluids, such as blood and semen. This type is treatable, but there is no cure for it.
To avoid complications, make sure you get early treatment and regular screenings. Hepatitis B makes you more likely to get liver cancer if it lasts for more than six months.

- Hepatitis C can also be acute or chronic. It comes from the blood of someone with hepatitis C getting into your blood. It usually spreads through sharing needles to take drugs; healthcare practitioners can also get it accidentally from an infected needle. It does not cause any symptoms in the early stages, but it may lead to permanent liver damage.
- Hepatitis D only develops in people with hepatitis B. It is a serious type of hepatitis that can’t be contracted on its own.
- Hepatitis E is mainly caused by drinking contaminated water. It may clear up on its own without any permanent complications.

The job of your immune system is to fight off invaders such as bacteria and viruses. However, it sometimes mistakenly attacks healthy cells in your body, including those of your liver. Several autoimmune conditions that attack your liver are:

- Autoimmune hepatitis, a condition where your immune system attacks the liver, resulting in inflammation. Left untreated, it may lead to liver failure.
- Primary biliary cirrhosis (PBC), the result of damage to tiny tubes in your liver called bile ducts; it can lead to cirrhosis and liver failure.
- Primary sclerosing cholangitis, an inflammatory condition that causes gradual damage to your bile ducts. The damage can eventually block the bile ducts, may lead to liver cancer, and can leave you needing a liver transplant.

Liver disease can also be inherited if it runs in your family.

- Hemochromatosis is a condition where your body absorbs and stores more iron from your food than it actually needs. The extra iron accumulates in your liver, heart, and other organs. Left untreated, it may lead to life-threatening conditions.
- Wilson’s disease causes copper to build up in your liver instead of being released into the bile ducts. In the long run, the liver becomes damaged, stores ever more copper, and allows it to travel through the bloodstream, where it can cause nerve and psychiatric problems.
- Hyperoxaluria happens when your urine contains too much oxalate. Oxalate is a chemical and a natural part of your system. Your liver makes a chemical that controls oxalate; if your liver makes too little of it, oxalate can build up and cause kidney stones or kidney failure.

Cancer and Tumors

Cancer can develop in your liver, or it can start elsewhere in your body and spread to the liver.

- Liver cancer is more likely if you already have hepatitis or drink too much alcohol. The most common type of liver cancer is hepatocellular carcinoma, which tends to develop as several spots of cancer in your liver.
- Bile duct cancer affects the tubes that run from the liver to the small intestine to carry bile. It is uncommon, and it mainly affects people over the age of 50.

Liver disease can also be caused by other factors, such as drug overdose, alcohol abuse, and nonalcoholic fatty liver disease (NAFLD). Dire complications of liver disease are acute liver failure and cirrhosis.

The treatment for liver disease varies according to its cause. Many liver diseases last for years or may never go away. Some people have managed to keep the symptoms away by changing their lifestyles: limiting alcohol, drinking more water, and adopting a diet with plenty of fiber and reduced fat, sugar, and salt. Depending on which condition you have, you may need medical treatment, including antiviral drugs, steroids, blood pressure medication, and antibiotics. Most liver diseases can be managed if treated early. As with every other disease, it is best to prevent them from happening: attend routine checkups and maintain a healthy lifestyle.
Homeostasis: the maintenance of a constant internal environment in the body.

The concept of energy in food is developed across four activities, each building on the information gained from the previous one.

The 'Testing Track' allows children to estimate how much energy is provided by equal amounts of different types of food. The amount chosen is 10g, and these amounts are shown as pictures to allow children to get some understanding of what 10g of each of the food types looks like. The markers displaying the different food types may be moved up or down using the mouse to click and hold. Where the markers overlap, clicking the rear marker brings it forward. The word 'estimate' is not used, although this is what the children are being encouraged to do. They are asked to try to work out, initially by trial and error, how far a runner can move for the types of food that they select. As their estimation skills develop, they will become more accurate with their attempts. They should be able to get some understanding of which foods contain a lot of energy and which contain little energy. Feedback is provided for all attempts, and a marker shows the distance travelled for each food selected. Having completed all the food items, the children will have a complete display indicating the relative energy content of all the food items. A simplistic approach has been adopted which, although introducing the concepts of homeostasis and metabolism, assumes that energy input is directly related to energy output in the form of movement. If you wish to print out the Testing Track, please set your printer page orientation to "landscape".

In the 'Distance Challenge' the screen displays two runners, each of whom is supplied with a different type of food. Children compare the energy content of the two foods displayed by choosing which runner will be able to run the furthest. Children can check their prediction by letting the runners run and seeing how far they travel.
For the 'Get to the Finish' activity children develop the concept of energy from food by choosing not only the food type but also the quantity. They are required to estimate (or work out) how much of the chosen food item will take the runner to the end of the track - but no further. The final stage is 'Energy Combinations'. Here the energy in food concept is developed with a mathematical content. The challenges here are given as questions with three possible answers, only one of which is correct. The questions increase in difficulty as they progress. The small red marker below the runner indicates the registration point at which the distance is calculated.
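The simplified model the activities rely on (distance run is directly proportional to the energy in the chosen food) can be sketched in a few lines of Python. This is purely an illustration: the food names, energy values per 10g, and the metres-per-kilojoule factor below are hypothetical placeholders, not the figures used in the actual software.

```python
# Toy model: distance travelled is proportional to food energy.
# All values below are illustrative placeholders.

ENERGY_PER_10G_KJ = {
    "cucumber": 6,      # low-energy food
    "bread": 100,       # medium-energy food
    "chocolate": 220,   # high-energy food
}

METRES_PER_KJ = 0.5  # hypothetical conversion factor


def distance_for_food(food: str, grams: float) -> float:
    """Distance (metres) the runner travels on `grams` of `food`."""
    energy_kj = ENERGY_PER_10G_KJ[food] * (grams / 10.0)
    return energy_kj * METRES_PER_KJ
```

Under this model, doubling the quantity doubles the distance, which is exactly the estimation children are asked to do in the 'Get to the Finish' activity.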
C14 dating, or radiocarbon dating, is the oldest physical method that allows us to determine the age of an object, provided it contains carbon. The method is named after its principle: it is based on the natural radioactive decay of the carbon isotope C14. It was developed in the 1940s by a team of scientists under Professor Willard F. Libby of the University of Chicago. Libby received the Nobel Prize in Chemistry “for his method to use carbon-14 for age determinations in archaeology, geology, geophysics, and other branches of science.”

First, a word on how the name of this method is written. C14 is an isotope of carbon, which otherwise occurs as C12 or C13. The C means carbon; the number gives the atomic weight, rounded.

Dating methods in archaeology: are they accurate?

For those researchers working in the field of human history, the chronology of events remains a major element of reflection. Archaeologists have access to various techniques for dating archaeological sites or the objects found on those sites. There are two main categories of dating methods in archaeology: indirect or relative dating, and absolute dating. Radiocarbon dating is one of the most widely used scientific dating methods in archaeology and environmental science; the dating process is designed to extract the carbon from a sample.

Dating techniques are procedures used by scientists to determine the age of an object or a series of events. The two main types of dating methods are relative and absolute. Relative dating methods are used to determine only whether one sample is older or younger than another. Absolute dating methods are used to determine an actual date in years for the age of an object. Before the advent of absolute dating methods in the twentieth century, nearly all dating was relative.
The main relative dating method is stratigraphy (pronounced stra-TI-gra-fee), the study of layers of rocks or the objects embedded within those layers. This method is based on the assumption (which nearly always holds true) that deeper layers of rock were deposited earlier in Earth’s history, and thus are older than more shallow layers. The successive layers of rock represent successive intervals of time. Since certain species of animals existed on Earth at specific times in history, the fossils or remains of such animals embedded within those successive layers of rock also help scientists determine the age of the layers. Similarly, pollen grains released by seed-bearing plants became fossilized in rock layers. If a certain kind of pollen is found in an archaeological site, scientists can check when the plant that produced that pollen lived to determine the relative age of the site. Absolute dating methods, by contrast, are carried out in a laboratory.

How has radiocarbon dating changed archaeology?

When museums and collectors purchase archaeological items for their collections, they enter an expensive and potentially deceptive commercial fine arts arena. Healthy profits are to be made from illicitly plundered ancient sites or from selling skillfully made forgeries. Archaeological dating techniques can assure buyers that their item is not a fake by providing scientific reassurance of the artefact’s likely age.

Dating refers to the archaeological tools used to date artefacts and sites, and to properly construct history. Relative techniques can determine the sequence of events but not the precise date of an event. Absolute methods, which include carbon dating and thermoluminescence, were first based on radioactive elements whose decay occurs at a constant rate, known as the half-life of the isotope.
Today, many different radioactive elements have been used, but the most famous absolute dating method is radiocarbon dating, which uses the isotope 14C. This isotope is found in organic materials and can be used only to date organic materials; it has been incorrectly used by many to make dating assumptions for non-organic material such as stone buildings. The half-life of 14C is approximately 5,730 years, which is too short for this method to be used to date material millions of years old. The isotope potassium-40, with a half-life of about 1.25 billion years, can be used for such older material. Another absolute dating method is thermoluminescence, which dates the last time an item was heated.

How do scientists date ancient things?

Prior to the availability of radiocarbon dates, and when there is no material suitable for a radiocarbon date, scientists used a system of relative dating. Relative dating establishes the sequence of physical or cultural events in time. Knowing which events came before or after others allows scientists to analyze the relationships between the events.

Chronological dating, or simply dating, is the process of attributing to an object or event a date in the past, allowing such an object or event to be located in a previously established chronology. This usually requires what is commonly known as a “dating method”. Several dating methods exist, depending on different criteria and techniques; some well-known examples of disciplines using such techniques are history, archaeology, geology, paleontology, astronomy and even forensic science, since in the latter it is sometimes necessary to establish the moment in the past at which the death of a cadaver occurred. Other markers can help place an artifact or event in a chronology, such as nearby writings and stratigraphic markers. Dating methods are most commonly classified following two criteria: relative dating and absolute dating.
Relative dating methods are unable to determine the absolute age of an object or event, but they can determine the impossibility of a particular event happening before or after another event whose absolute date is well known. In this relative dating method, the Latin terms ante quem and post quem are usually used to indicate the most recent and the oldest possible moments when an event occurred or an artifact was left in a stratum, respectively. But this method is also useful in many other disciplines. Historians, for example, know that Shakespeare’s play Henry V was not written before 1587, because Shakespeare’s primary source for writing his play was the second edition of Raphael Holinshed’s Chronicles, not published until that year.

Dating techniques in archaeology

Radiocarbon dating is one of the most widely used scientific dating methods in archaeology and environmental science. It can be applied to most organic materials and spans dates from a few hundred years ago right back to about 50,000 years ago - about when modern humans were first entering Europe. For radiocarbon dating to be possible, the material must once have been part of a living organism.

Love-hungry teenagers and archaeologists agree: dating is hard. But while the difficulties of single life may be intractable, the challenge of determining the age of prehistoric artifacts and fossils is greatly aided by measuring certain radioactive isotopes. Until this century, relative dating was the only technique for identifying the age of a truly ancient object. By examining the object’s relation to layers of deposits in the area, and by comparing the object to others found at the site, archaeologists can estimate when the object arrived at the site. Though still heavily used, relative dating is now augmented by several modern dating techniques.
Radiocarbon dating involves determining the age of an ancient fossil or specimen by measuring its carbon-14 content. Carbon-14, or radiocarbon, is a naturally occurring radioactive isotope that forms when cosmic rays in the upper atmosphere strike nitrogen atoms: nitrogen-14 (7 protons and 7 neutrons) gains a neutron and loses a proton, becoming carbon-14, which then oxidizes to become carbon dioxide. Green plants absorb the carbon dioxide, so the population of carbon-14 atoms is continually replenished until the plant dies. Carbon-14 is also passed on to the animals that eat those plants.

Relative techniques were developed earlier in the history of archaeology as a profession and are considered less trustworthy than absolute ones. There are several different methods. In stratigraphy, archaeologists assume that sites undergo stratification over time, leaving older layers beneath newer ones. Archaeologists use that assumption, called the law of superposition, to help determine a relative chronology for the site itself. Then they use contextual clues and absolute dating techniques to help point to the age of the artifacts found in each layer. Objects can also be grouped based on style or frequency to help determine a chronological sequence.

Relative dating has its limits. For a more precise date, archaeologists turn to a growing arsenal of absolute dating techniques. Perhaps the most famous absolute dating technique, radiocarbon dating was developed during the 1940s and relies on chemistry to determine the ages of objects. Its inventor, Willard Libby, eventually won a Nobel Prize for his discovery.
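The decay arithmetic behind the method can be sketched in a few lines of Python. This is a toy illustration of the exponential-decay relationship only, not a calibrated dating calculation; real radiocarbon dates are calibrated against independent records such as tree-ring sequences, and the function name here is hypothetical.

```python
import math

HALF_LIFE_C14 = 5730.0  # years, the modern accepted value


def radiocarbon_age(fraction_remaining: float) -> float:
    """Age in years, given the fraction of the original C14 still present.

    From exponential decay, N(t) = N0 * (1/2) ** (t / half_life),
    so solving for t gives t = half_life * log2(N0 / N).
    """
    return HALF_LIFE_C14 * math.log2(1.0 / fraction_remaining)
```

With half the original C14 left, the sample is one half-life old (5,730 years); with a quarter left, two half-lives (11,460 years), and so on, which is why the method runs out of measurable C14 at around 50,000 years.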
Geometry for Computer Graphics

Introduction to Geometry for Computer Graphics

In this article we're going to look at some of the basic geometric constructs we commonly use in computer graphics, with an emphasis on those for real-time graphics and games. Geometry is a large and exciting topic, but we're only going to touch on a few interesting aspects in this intro article. When we're making a CG scene, be it for a game or an offline render, we usually want it to include some objects which may or may not have real-world-like properties. The way we represent our shapes in our virtual world will have an impact on what we're able to achieve and how fast it can be drawn or manipulated. Let's consider two choices in how we represent geometry (there are more, but let's start with these two). We can describe an object using a set of points and polygons, which we refer to as a mesh. Or we can use a mathematical equation, such as expressing a sphere as a centre point and a radius, then using maths to find the surface or volume as needed. Each approach has advantages and disadvantages:

- Points and Polygons:
  - These can be hardware accelerated by the graphics card so they're really quick to draw
  - Transparent materials and materials with internal structures are difficult to represent. How can we draw smoke? Or a glass marble?
  - There are accuracy issues for curved surfaces. No matter how many triangles you use, the underlying geometry will always be faceted. There are rendering tricks we can use to hide some of this, but silhouettes are usually a problem.
- Mathematical Equations:
  - The graphics card is terrible at handling these (at the moment) so they're really slow to draw.
  - Some mathematical methods can work well with transparent materials and insanely complex geometry.
  - There are no mathematical limits on accuracy, just practical computational ones like the limits of a floating point number.

In reality, both approaches have their use in games and graphics.
For most rendering tasks, points and polygons currently win. This is not likely to change in the short term, but as graphics hardware gets more powerful we are gradually seeing more exciting things in this area, so it's just a matter of time. The maths approaches are great for describing simple shapes such as spheres. We can use these as primitive shapes for collisions, as the intersection calculations are significantly easier than if we were to describe two spheres with meshes and perform thousands of triangle-triangle intersections. How we apply these different approaches is an important and interesting aspect of computer graphics. Points are the fundamental building block for meshes and are very important in how we define geometry that is destined for the graphics card. Mathematically we can define a point in space using its coordinates, (x,y,z) in a 3D Cartesian system. We can also use a position vector for this, which is a subtle but important shift in thinking. The conceptual use of a vector means we can transform points to wherever we want. This basically means we can move our objects around in space, and we'll be using matrices to do this. It's important to realise though that a position by itself might not be enough information. For example, perhaps we want to associate each point with a colour too. Or maybe some other property. Our actual definition then needs to be expanded to include a collection of different attributes, though the position in space remains the most important. I hesitate to put this in the 'geometry' article, but point clouds are important and useful. They're basically a collection of points in 3D space, are often generated by 3D scanners and frequently have colour information too. Mostly there's not a lot you can do with a point cloud by itself; we normally have to convert it into something else.
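The sphere collision case mentioned above shows why the mathematical representation wins there: with centre-and-radius spheres, the entire intersection test is one distance comparison. A minimal sketch in plain Python (the function name is my own):

```python
import math

def spheres_intersect(c1, r1, c2, r2):
    """Spheres given as (centre, radius) overlap exactly when the
    distance between centres is no more than the sum of the radii."""
    dx, dy, dz = (a - b for a, b in zip(c1, c2))
    return math.sqrt(dx * dx + dy * dy + dz * dz) <= r1 + r2

print(spheres_intersect((0, 0, 0), 1.0, (1.5, 0, 0), 1.0))  # True (overlap)
print(spheres_intersect((0, 0, 0), 1.0, (3.0, 0, 0), 1.0))  # False (apart)
```

In practice you would compare squared distances to skip the square root entirely; either way it is a handful of operations versus thousands of triangle-triangle tests for mesh spheres.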
Unfortunately there are a whole range of problems with this type of data, which are mostly to do with how it was captured:

- It's common to have artefacts and noise which can distort the shape. Most visual-based (including laser) 3D scanners have difficulty coping with very reflective and very dark surfaces. They just can't tell where the surface is supposed to be, which can result in recorded positions that are well above or below their actual position (or sometimes they're simply missing).
- Scans of objects are usually done in multiple parts, which each result in a separate dataset. To reconstruct a complete model we have to 'stitch' these separate datasets together, which may include overlaps and missing parts.
- Once we have a 'complete' point cloud, we normally need to convert it into a full mesh. This is a potentially complex problem, but there is lots of research in this direction nowadays and we have some reasonable methods.

It's worth starting with some definitions here. In mathematics, a line extends infinitely. If we have two points and wish to define the piece of line between them, this is known as a line segment. A line segment has no designated start or end point; if we need one, we call it a directed line segment, where the start point is usually called the tail and the end point is usually called the head. A vector between two points is an example of a directed line segment. Algebraically, we define a line in 2D with the equation y = mx + c, where m is the gradient and c is the y-intercept. We can also define a line using vector notation, where a specific point on the line p is given by p = a + tn, where a is a position vector to a point on the line, n is a unit vector defining the direction of the line and t is a scalar that defines the distance from a to p in the direction of n. The vector notation of a straight line is quite important and can appear in a variety of different places.
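The vector form p = a + tn is easy to evaluate directly. A minimal sketch in plain Python (names mine); for a ray you would simply restrict t to non-negative values:

```python
def point_on_line(a, n, t):
    """Evaluate p = a + t*n, where a is a point on the line, n is a
    unit direction vector and t is the signed distance along it."""
    return tuple(ai + t * ni for ai, ni in zip(a, n))

# Two units from the origin along the x-axis:
print(point_on_line((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), 2.0))  # (2.0, 0.0, 0.0)
```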
For example, if you're building a ray tracer it's the form you'll most likely use to describe a ray (though here, a is usually the origin of the ray and it doesn't make sense to consider negative values of n). Curves are very important for computer graphics. Some 3D modelling operations rely on them. You'll see them in 2D vector or raster graphics libraries and programs (E.g. Inkscape, Illustrator, Photoshop). Curves are also very important for things like animation. They are used to define the motion paths for objects and cameras, to describe how their positions and properties change over time. To allow curves to be used for all of these different tasks, it's important that our artists can control them. This element of artistic control is a very important one in computer graphics and one that's often overlooked by technically-focussed people so it's worth keeping in mind. These are a common and useful way of expressing curves using end points and control points. Until I get diagrams and full maths notation up and running there's probably not much point talking about these in any depth, but it's useful to know they exist. I think they deserve their own article anyway. There are various different 2D shapes and properties that we could discuss, but these are mostly early maths concepts and I want to point out a few issues with polygons and triangles. For lines, or specifically line segments, we took the concept of a point and connected two of them together. The whole idea of 2D shapes is that we extend this further to connect many points so that they form a surface. Such a surface is called a polygon. While we can define these 2D shapes in 2D space, more interesting things happen when we work in 3D space. If we connect, say four points together, they will form a four-sided polygon. In 2D space this is quite a well behaved shape, but in 3D space an important question to ask is whether they are all on the same plane. I.e. 
if you describe a plane in space, do all the points sit on the plane, or are some higher than others? If the points are not on the same plane, we would say the polygon is non-planar, and it's potentially a bit of a problem. Given the information of just the points and their connections, there's no easy way of defining where the surface actually lies. With a four-sided convex polygon, the surface will typically take one of two forms, with a straight fold between one pair of opposing corners. Triangles are wonderful little creatures. Triangles don't suffer from this problem. If you define three points in space they can only ever be on the same plane. To get around the ambiguity of non-planar polygonal shapes, the graphics card will only work with triangles. In the past we could send polygons to the graphics card, but internally it would triangulate them. The difficulty with triangulation approaches is that there are many ways to achieve it. When it comes to triangulating an artistic model, especially one that needs to be animated, this can potentially cause a problem. If the triangulation goes in the wrong direction we can end up with quite nasty looking shapes. This is why I always advocate that it's best for the artist to decide how they want their geometry triangulated. They know how their model needs to bend and flex, so are far better placed to decide what would look best. Once we have defined a set of shapes, it's usual to want to move them around in some way. It's worth looking at some definitions and classifications before we look at the maths for these transforms:

Rigid (Isometric) Transformations
- Translation - e.g. moving something from side-to-side
- Rotation - e.g. spinning something around a point or axis
- Reflection - an operation that produces a mirror-image copy

Similarity Transformations
- The above rigid transformations
- Plus uniform scale - i.e. growing or shrinking something by the same amount in every dimension

Affine Transformations
- All the above similarity and rigid transforms
- Plus non-uniform scale

Projective Transformations
- All the above affine, similarity and rigid transformations
- Plus projection - this is like a slide projector that projects the geometry at an angle to the screen

It's then interesting to plot which transformations preserve which properties:

| Transformation | Distance | Angle | Ratios of Distance | Parallel Lines | Collinearity |
|---|---|---|---|---|---|
| Rigid | Yes | Yes | Yes | Yes | Yes |
| Similarity | No | Yes | Yes | Yes | Yes |
| Affine | No | No | Yes | Yes | Yes |
| Projective | No | No | No | No | Yes |

Ratios of distance - the midpoint of a line segment will remain the midpoint after the transformation

Collinearity - all points that start on a line will still be on a line after the transformation

These transformations cover pretty much all we need for making games and most graphics applications. Fundamentally, all of the above transformations can be expressed as matrices. I will cover this in a future article. We assemble 3D shapes by piecing together 2D polygons - preferably triangles. To make them appear solid we make sure the corners match and there are no gaps. Each flat 2D polygonal surface is called a face. A shared side where two faces meet is called an edge. The corners where multiple faces meet are called vertices. The important thing to realise about this setup is that an 'object' here is not a solid thing at all. It's just a collection of surfaces, and it's only when we make sure there are no gaps between them that we achieve the illusion of a solid 3D shape. For computer graphics and games this is fine, and actually you'll find it's common to remove faces that we know we'll never see - if you look at the breakdown of film effects you'd be surprised at how much geometry is missing. This way of describing an object, with just its outer shell, is known as a boundary representation (or b-rep). While this approach works well for graphics, where fewer faces means faster rendering times, it doesn't work well for other applications like 3D printing. For a 3D printer, we need to know about a real and solid shape.
It's too easy to make mistakes and take shortcuts with b-reps. They can have holes and intersections that might look ok when rendered but make it difficult to tell where the exterior of an object ends and the interior begins. This doesn't make life easy for the software that interprets the 3D model and controls the 3D printer. It needs to know where to deposit material. It needs to know the actual interior volume of the object. But I'll save that for another article. This article has been a gentle introduction to some exciting geometric concepts in computer graphics. We've barely scratched the surface of the topic but already there are some important points I suggest you take away: - We use a position vector to define a vertex (a point in space). - Triangles are great. You gotta love triangles. We use three vertices (i.e. position vectors) to define a triangle. - A model / object / mesh (lots of words for the same thing really) is made up of lots of triangles. This means a mesh is a collection of position vectors. - The different types of transformation. These cover all the common ways of manipulating an object (not modelling it, I mean when an object is completed and we want to move it around our game world). It's important to realise all these transforms can be expressed with a matrix. Since we can use a matrix to change a vector we can use a transformation matrix to move our models.
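That last point - transforms as matrices acting on position vectors - can be sketched in a few lines of plain Python. This is illustrative only, using 2D homogeneous coordinates; the helper names are my own:

```python
def mat_vec(m, v):
    """Multiply a 3x3 matrix (list of rows) by a 3-component vector."""
    return tuple(sum(m[i][j] * v[j] for j in range(3)) for i in range(3))

def translation(tx, ty):
    """Homogeneous 2D translation matrix."""
    return [[1, 0, tx],
            [0, 1, ty],
            [0, 0, 1]]

# Move the point (2, 3) by (5, -1); the trailing 1 marks it as a position.
x, y, w = mat_vec(translation(5, -1), (2, 3, 1))
print((x, y))  # (7, 2)
```

Rotations, scales and projections are just different matrices plugged into the same multiplication, which is what makes the matrix formulation so convenient for moving models around a game world.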
It is often helpful to introduce new learning activities by talking about why you're including them in the course. Make sure you explain how the activities you've planned are linked with the course learning objectives. Note: if there is a large whiteboard, each group can have its own space to report its work. Then divide the class into pairs and have them stand facing one another. Each person takes turns introducing his or her partner and summarizing his or her responses to the group. Then give the students a question or problem and have them state their ideas aloud as they write them down, each taking turns. Ideally students will not skip turns, but if one gets stuck, he or she may "pass." Divide students into small groups. Under theories of cognitive development, collaborative learning creates opportunities for peers to learn from more competent others. And recent studies in cognitive science suggest that collaborative structures may deepen learning by giving students the opportunity to rehearse, manipulate, and elaborate on knowledge. One student in each group has two minutes to explain the obstacle he/she has encountered. During this time no one is allowed to interrupt with comments or questions.
Active learning goes by many names and can assume many forms, such as pairs of students conducting peer reviews, small groups of students discussing the assigned reading, or highly structured cooperative learning projects that extend over the entire quarter. Regardless of the specific form, active learning more often than not involves students working together toward a common goal. Give students a list of questions or prompts--either ahead of time or during class--to respond to (e.g., "What is your topic?"). In three-minute rounds, students share their responses and their "date" gives suggestions and/or feedback. One member of each pair will stay in place while the other members circulate down the line until each pair has spoken with one another.
Middle School English The two-year fifth and sixth grade English curriculum seeks to nurture students’ love of reading and a critical eye towards texts and the world at large. Year One of the course begins with the study of short stories to help students develop a shared language for talking about literature. The class then delves into the study of mystery novels, emphasizing the connection between using clues to solve a mystery and employing evidence to support an assertion as students begin to write arguments of their own. Students then consider how fiction about both recent history (September 11th) and contemporary issues (immigration) works to teach, inform, and persuade readers about their role as critical thinkers and upstanders beyond the classroom. Year Two of the course begins by considering family stories and histories (both in literature and in their own lives) to examine the role of multiple perspectives and individual bias in storytelling. Students then apply this to various works of historical fiction as they continue to develop their understanding of key literary elements. Centered around the autobiography-in-verse Brown Girl Dreaming, students study novels-in-verse in order to explore key aspects of poetry and experiment with writing their own. In 5th and 6th grade English, the key curricular elements include developing increasingly complex strategies for reading comprehension, inference, and analysis across genres (fiction, non-fiction, poetry). As writers, students engage meaningfully in every step of the writing process from brainstorming to revision; emphasis is placed on mastering the paragraph form as students practice developing and presenting their own ideas in clear, logical prose. Throughout both Year One and Year Two, students write narrative, persuasive, and informative pieces, while thinking critically about the purpose and audience in each of these types of writing. 
Grammar and mechanics are taught in the context of students’ own writing as they revise and edit their work. The collaborative approach to learning and making meaning nurtures students’ skills as listeners, speakers, critical thinkers, and community members. The two-year seventh and eighth grade English curriculum seeks to inspire students to develop their own opinions and to use their voices (both on and off the page) to let their own lives speak. Year One of the course begins with the study of the Hero’s Journey and examines how and why this archetypal story applies to ancient, mythic characters as much as it does their own coming-of-age experiences. Students then reconsider the literary concept of the “hero” in the context of the Civil Rights Movement as they read historical fiction and John Lewis’ graphic novel series March. The class examines the complex role of cultural and familial heritage in the formation of one’s identity in both fiction and their own lives as well as the power of stories to help us understand ourselves and others. Year Two of the course begins with a study of persuasive arguments and focuses on helping students to evaluate validity and bias as they read others’ arguments and work to construct their own. Students then read various works of dystopian fiction to engage with essential moral and ethical questions about society, civic duty, and individual freedom. Students study Shakespeare (A Midsummer Night’s Dream or The Tempest) in order to explore key aspects of verse and drama and to consider how and why familiar archetypes make the Bard’s work seemingly “timeless” (or not). At the end of the year, students state their personal life philosophies in their own “This I Believe” essays (based on the National Public Radio program). In 7th and 8th grade English, the key curricular elements include analyzing how an author’s choices (point of view, structure, genre) help to determine the meaning of a work of literature. 
Students develop strategies for comparing and contrasting different texts as well as tracing important themes within a single text and across the reading curriculum. As writers, students continue to engage meaningfully in every step of the writing process from brainstorming to revision; emphasis is placed on developing assertions and linking them together to make a logical argument. Throughout both Year One and Year Two, students continue to write narrative, persuasive, and informative pieces of increasing depth and complexity. Grammar and mechanics are taught in the context of students’ own writing as they revise and edit their work. The collaborative approach to learning and making meaning nurtures students’ skills as listeners, speakers, critical thinkers, and community members.
Here is another example of how teachers are creating “playful” spaces for authentic writing experiences in their classrooms. Children’s Writing Shapes their Play Narratives Three young girls brought some writing materials to the tables (along the wall) in the corner of the classroom. They each set up a “work station” at the table and began to stock their stations with wooden blocks. As they did this, they also wrote some names/letters/lines/shapes on their papers. One girl announced that the blocks were “medicine” and soon these stations became “pharmacies” and the blocks were pretend medicine that they had to organize. They gathered the medicine (blocks), they took notes, and talked about doctors and patients. With this type of writing-play scenario … We noticed teachers: - provide writing materials for children in the classroom - remain observers of the children’s play We observed students: - bring writing materials to the play area - write independently as they talk and play with peers - integrate writing into their play - shape their play around the writing
Even if they manage to avoid the obstacle of the turbine blades, the turbulence leaves fish stunned and vulnerable to predators when they emerge at the end of the power plant. In hopes of giving fish an easier ride, engineers are working with GE Renewable Energy to build more fish-friendly hydro turbines in the Columbia River, which separates Oregon and Washington. The high-tech project — which marries computer modeling with the latest turbine technology — could become a blueprint for hydropower plants around the globe. “We use a digital model, based on computer-fluid dynamics, to simulate the water flow through the turbines,” says Kristopher Toussaint, a hydraulic engineer with GE Renewable Energy. “We also have something that simulates a physical particle representing the fish to see, if it impacts components, what pressure that particle is exposed to, and so on.” If you’re not a fish, hydro plants might look benign enough: They work by harnessing the force and pressure of water flowing from a high point to a lower point through chutes in a dam. Near the bottom of the chute sits a turbine. The water spins the turbine, which creates power that can be transmitted to businesses and homes. But this is also the main passage for the migrating fish. Using the digital model, the team discovered that fish are particularly vulnerable to injuries when they pass through a first grid of stationary vanes, then another grid of movable vanes and finally through the blade channels of a rotating runner. If they survive the strikes, they still have to endure the rapid pressure drop that occurs inside the runner. (See illustration below.) To lessen the risk of strikes, one strategy is to align the two sets of vanes and shrink the gaps between rotating parts and stationary parts. Using modeling technology, engineers were able to figure out the optimal vane geometry to minimize the risk to the fish while keeping a smooth hydraulic passage for the flow of water.
“Reducing the risk here for the fish is not detrimental to turbine efficiency,” says Laurent Bornard, a hydraulic consulting engineer at GE Renewable Energy. To reduce the pressure drop in the runner, several strategies are usually possible, including changing the number of blades or the runner diameter to reduce the flow velocity and then increase the pressure. But even if a salmon makes it past the vanes and the runner, its troubles aren’t over. To reach the river downstream of the power plant, it still has to go through the diffuser. “The flow deceleration inside the diffuser can create whirlpools and flow detachment that spin the fish around, disorienting them and making them easy prey for predators outside the power plant,” Bornard says. In order to prevent fish injury, the engineers typically optimize the flow characteristics delivered by the runner and seek, in some extreme cases, to modify the diffuser geometry to reduce the flow turbulence and make the fish passage as smooth as possible. As a result of the design changes, engineers expect the survival rate of salmon to improve. And the further potential for the fish-friendly technology is, well, dizzying. “There are close to 10 power plants on the Snake and Columbia rivers that are especially concerned by fish passages during the salmon migration,” Bornard says. GE is also working to make hydro turbines along the Mekong River in Southeast Asia safer for fish that pass through it.
The petty constable was a local official whose origins date back to Anglo-Saxon times. Petty constables were unpaid, and were elected from among local men. On the whole they were chosen from respectable tradesmen, craftsmen and shopkeepers, not ordinary labourers. They served for one year only. Their job was to arrest criminals and to carry out instructions passed down from the JPs or the County Assize Justices. This could be awkward, as the petty constable found himself having to report on, or even arrest, his neighbours. For this reason, they often used their discretion in applying the law and could get into trouble with higher authorities as a result. However, most petty constables did their duties as best they could, alongside their full-time employment. It also meant that local people were involved in enforcing the law. Watchmen had long been employed by local communities, more often in towns, to patrol the streets at night. Each one had a lantern and a stick, and traditionally called out the hours and the weather. Because they were regulated by an Act of Charles II they were known as "Charleys".
The African American Journey Slavery and the Americas Under the auspices of UNESCO, this project is one of the first international efforts to document, preserve, and digitize original archival materials and finding aids of the international trade in slaves during the 18th and 19th centuries. So far, the following countries have agreed to participate in the project: Angola, Benin, Brazil, Cameroon, Côte d'Ivoire, Democratic Republic of Congo, Gabon, The Gambia, Ghana, Guinea, Guinea-Bissau, Haïti, Mozambique, Nigeria, Senegal, and Togo. - The Atlantic Slave Trade and Slave Life in the Americas: A Visual Record - Born in Slavery: Slave Narratives from the Federal Writers' Project, 1936-1938: This collection contains more than 2,300 first-person accounts of slavery and 500 black-and-white photographs of former slaves. Slaves and the Courts, 1740-1860 contains just over a hundred pamphlets and books (published between 1772 and 1889) concerning the difficult and troubling experiences of African and African American slaves in the American colonies and the United States. This electronic exhibit focuses on the depictions of slaves in Confederate currency. It is important to remember that these images were created by those who institutionalized and worked to preserve slavery, and they do not necessarily portray the slaves as they viewed themselves and their condition.
We make dozens of decisions every day, some simple, some more complex. Some internet sources estimate that an adult makes about 35,000 conscious decisions each day. According to researchers at Cornell University, we make 226.7 decisions each day on food alone. Leaders in organizations are often faced with difficult decisions that can determine the welfare of others and their organizations. The study of decision making, consequently, has been the subject of a number of intellectual disciplines: mathematics, sociology, psychology, economics, and political science, to name a few. Philosophers ponder what our decisions say about ourselves and about our values; historians dissect the choices leaders make at critical junctures. Research into risk and organizational behavior springs from a more practical desire: to help managers achieve better outcomes. And while a good decision does not guarantee a good outcome, such pragmatism has paid off. A growing sophistication with managing risk, a nuanced understanding of human behavior, and advances in technology that support and mimic cognitive processes have improved decision making in many situations. While leadership research and practice in the past focused on the external processes of decision-making (the forces, data, and relationships in the environment that shape a leader's choices), recent work has focused on the processes that occur in the brain and mind. Neuroscience research has now provided leaders with information that can be helpful in becoming better decision-makers. Researchers have examined a wide variety of questions, including:

- Where in the brain do decision-making processes take place?
- What physical changes in the brain have an impact on how the decision is made and its subsequent outcome?
- What role does intuition and "gut feeling" play in the decision-making process?
- What role does memory play in making decisions?
- What decisions are automatic or unconscious and what decisions are conscious?

The decision-maker's environment can play a part in the decision-making process. For example, environmental complexity is a factor that influences cognitive function. A complex environment is an environment with a large number of different possible states which come and go over time. Studies done at the University of Colorado have shown that more complex environments correlate with higher cognitive function, which means that a decision can be influenced by the location.

What Neuroscience Can Tell Us

In 2014, researchers in Switzerland discovered that the prefrontal cortex not only shows increased activity during decisions requiring self-control, but during all decision-making processes. Sarah Rudorf and Todd Hare of the Department of Economics of the University of Zurich were able to identify specific regions of the prefrontal cortex that are most active in the process of making a decision. The study, "Interactions between Dorsolateral and Ventromedial Prefrontal Cortex Underlie Context-Dependent Stimulus Valuation in Goal-Directed Choice," was published in the Journal of Neuroscience. Previous studies have shown that a specific network in the brain is active when a person has to decide between various choices in different situations. This research emphasizes the importance of the interaction between neurons in two different brain areas within the prefrontal cortex. Decisions that require self-control are extremely important, as they directly affect a person's bodily, social, or financial welfare. The determination of the mechanisms in the brain that are not only involved in decisions requiring self-control but that are also used in general decisions could open new points of interaction for therapies. Damage to the brain's frontal lobe is known to impair one's ability to think and make choices.
And now scientists say they've pinpointed the different parts of this brain region that preside over reasoning, self-control and decision-making. Researchers say the data could help doctors determine what specific cognitive obstacles their patients might face after a brain injury. For the study, neuroscientists at the California Institute of Technology (Caltech) examined 30 years' worth of data from the University of Iowa's brain lesion patient registry and mapped brain activity in almost 350 people with lesions in their frontal lobes. They linked these maps with data on how each patient performed in certain cognitive tasks. With this information, the researchers could see exactly which parts of the frontal lobe were critical for different tasks like behavioral control (refraining from ordering a chocolate sundae) and reward-based decision making (trying to win money at a casino), a statement from Caltech explained. "The patterns of lesions that impair specific tasks showed a very clear separation between those regions of the frontal lobes necessary for controlling behavior, and those necessary for how we give value to choices and how we make decisions," said neuroscientist Daniel Tranel, of the University of Iowa. Caltech researcher Ralph Adolphs explained that several different parts of the brain might be activated during a particular type of decision-making. And the maps show which parts of the frontal lobe are the most critical areas that, if damaged, could result in lifelong impairment. "That knowledge will be tremendously useful for prognosis after brain injury," Adolphs said in the Caltech statement.
“Many people suffer injury to their frontal lobes — for instance, after a head injury during an automobile accident — but the precise pattern of the damage will determine their eventual impairment.”

Free Will and Will Power

According to research by the Max Planck Institute for Human Cognitive and Brain Sciences, “the brain activity of the decision can be encoded up to 10 seconds prior to your awareness” of making the decision. When you decide which person to hire for a new position, your brain has already made the decision, and your conscious thoughts simply justify it. Experiments suggest that we are not free to make our decisions, that they are made unconsciously by the neuro-circuitry of our brain. Contrary to what most of us would like to believe, decision-making may be a process handled to a large extent by unconscious mental activity. A team of scientists has unraveled how the brain unconsciously prepares our decisions: even several seconds before we consciously make a decision, its outcome can be predicted from unconscious activity in the brain. This is shown in a study by scientists from the Max Planck Institute for Human Cognitive and Brain Sciences in Leipzig, in collaboration with the Charité University Hospital and the Bernstein Center for Computational Neuroscience in Berlin. The researchers, from the group of Professor John-Dylan Haynes, used a brain scanner to investigate what happens in the human brain just before a decision is made. “Many processes in the brain occur automatically and without involvement of our consciousness. This prevents our mind from being overloaded by simple routine tasks. But when it comes to decisions, we tend to assume they are made by our conscious mind. This is questioned by our current findings.” Alex Pouget, associate professor of brain and cognitive sciences at the University of Rochester, has shown that people do indeed make optimal decisions—but only when their unconscious brain makes the choice.
“A lot of the early work in this field was on conscious decision making, but most of the decisions you make aren’t based on conscious reasoning,” says Pouget. “You don’t consciously decide to stop at a red light or steer around an obstacle in the road. Once we started looking at the decisions our brains make without our knowledge, we found that they almost always reach the right decision, given the information they had to work with.”

Roy F. Baumeister, a social psychologist at Florida State University and author of the book Willpower: Rediscovering the Greatest Human Strength, argues that willpower plays a part in all our decisions and that willpower fluctuates. Ask people to name their greatest strengths and they’ll often cite things such as honesty, kindness, humor, courage or other virtues. Surprisingly, self-control or willpower came dead last among the virtues studied in research involving over one million people. The most successful people, Baumeister contends, don’t have super-strong willpower when making decisions. Rather, they conserve their willpower by developing habits and routines that reduce the amount of stress in their lives. He says these people use their self-control or willpower not to get through crises, but to avoid them. They make important decisions early, before fatigue sets in. Steven Pinker, a world-renowned cognitive scientist at Harvard, contends in a New York Times article reviewing Baumeister’s work that, “together with intelligence, self-control turns out to be the best predictor of a successful and satisfying life.”

Too Many Decisions and Too Much Information Deplete Cognitive Resources

Science writer Sharon Begley, writing in Newsweek, says that experts advise “dealing with emails and texts in batches, rather than in real time; that should let your unconscious decision making system kick in. Avoid the trap of thinking that a decision requiring you to assess a lot of complex information is best made methodically and consciously.
You will do better, and regret less, if you let your unconscious turn it over by removing yourself from the info flux.” In other words, learn to switch off the information flow.

In the journal Science, researchers Ap Dijksterhuis, Maarten Bos, Loran Nordgren and Rick van Baaren argue that effective, conscious decision-making requires cognitive resources, and because increasingly complex decisions place increasing strain on those resources, the quality of our decisions declines as the complexity of decisions increases. In short, complex decisions overrun our cognitive powers. That’s because decision-making is an energy-consuming process.

According to Prince Ghuman, minimalist, co-founder of 15Center and professor of neuromarketing, “When it comes to making decisions, our brain functions in two modes. One mode is largely automatic; it makes reactive decisions based on intuition. The second mode is deliberate; it makes rational, analytical decisions.” He explains that the second mode is finite, which means “we can only make so many logical decisions before the tank is empty.” Now you’re likely wondering: how many decisions, on average, can we make every day before the “tank” hits empty?

Each of our dozens of daily decisions requires our conscious attention and mental energy. To make such a decision, we need to compare options, analyze the pros and cons, try to predict possible outcomes, and so on. Now imagine doing this 75 times a day. Take an umbrella or not, get to work by taxi or subway, order pizza or sushi, watch a movie or go for a walk, watch this movie or that one: the list of decisions we need to make daily is long and diverse. It’s no wonder we feel tired by the end of the day! Now that you know your decision tank gets empty after about 75 decisions, it is clear that excessive, unimportant mini-decisions waste this finite resource and take a toll on your day-to-day wellbeing.
Prince Ghuman explains that the mini-decisions we make every day, from “deciding to respond to a mobile notification to picking which shoes to wear,” consume the fuel we need for making really important decisions. So it’s logical to minimize the small and low-priority choices we make wherever possible and save that energy for decisions that matter. “If I reduce options, I minimize decisions. And if I minimize mini-decisions, I have more willpower left in my tank for the important stuff.”

Angelika Dimoka, Director of the Center for Neural Decision-Making at Temple University, conducted studies to see what happens when people’s decision-making abilities are overtaxed. She found that rational and logical prefrontal cortex functioning declined when it became overloaded with information, and as a result, subjects in her experiments began to make stupid mistakes and bad choices. “With too much information,” says Dimoka, “people’s decisions make less and less sense.” So much for the idea of making well-informed decisions. We are steeped in the belief of due diligence, and today’s flood of information on the Internet and social media sites can surely overload our cognitive functions.

Sheena Iyengar of Columbia University, author of The Art of Choosing, studied the impact of more information on people making investment decisions. She argues that although we say we prefer more information, in fact more can be “debilitating.” “There is a powerful recency effect in decision-making,” says behavioral economist George Loewenstein of Carnegie Mellon University. “We pay a lot of attention to the most recent information, discounting what came earlier. We are often fooled by immediacy and quantity and think it’s quality.”

In their research, Nadav Klein and Ed O’Brien tested whether people can correctly anticipate how much information they and others use when making varied judgments.
They consistently found that people were surprised by how quickly they make judgments and how little information they use in doing so. “One possibility is a belief that the human mind processes information incrementally. A naive perspective might imagine that new information stacks on top of old information until some mental threshold is reached for making a decision. In reality, however, preliminary research suggests that information aggregation is much closer to an exponential function; the first few pieces of information are weighted much more heavily than later information,” the researchers contend. Another possibility is that people fail to realize how rich and engrossing each separate piece of information is. In psychology, this is called an empathy gap. Consider the question of how many interactions are necessary for you to decide whether you like and trust someone.

It’s not clear that quick decisions are always bad, Klein and O’Brien argue. Sometimes snap judgments are remarkably accurate, and they can save time. It would be crippling to comb through all the available information on a topic every time a decision must be made. However, misunderstanding how much information we actually use to make our judgments has important implications beyond making good or bad decisions.

The authors go on to cite the problem of self-fulfilling prophecies. Imagine a situation in which a manager forms a tentative opinion of an employee that then cascades into a series of decisions affecting that employee’s entire career trajectory. A manager who sees an underling make a small misstep in an insignificant project may avoid assigning challenging projects in the future, which in turn would hamstring this employee’s career prospects. If managers are unaware how willing they are to make quick and data-poor initial judgments, they’ll be less likely to nip these self-fulfilling destructive cycles in the bud.
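The researchers' point that information aggregation looks more like an exponential function than an incremental stacking can be made concrete with a small numeric sketch. This is purely illustrative, not from their paper: the decay rate and the function below are assumptions, chosen only to show how geometrically decaying weights let the first few pieces of information dominate a judgment.

```python
# Illustrative toy model (an assumption, not the researchers' actual data):
# suppose each new piece of information carries a fixed fraction (`decay`)
# of the weight of the piece that came before it.

def early_information_share(n_pieces: int, decay: float = 0.5,
                            horizon: int = 100) -> float:
    """Fraction of total judgment weight carried by the first n_pieces,
    relative to a long stream of `horizon` pieces."""
    first = sum(decay**i for i in range(n_pieces))
    total = sum(decay**i for i in range(horizon))
    return first / total

# Under this toy model, the first three pieces of information already
# carry 87.5% of the weight of a hundred-piece stream.
print(round(early_information_share(3), 3))
```

With a decay of 0.5, later information adds almost nothing: the "mental threshold" is effectively reached within the first handful of observations, which is consistent with people being surprised at how little information they actually use.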
Another example might be the human tendency to rely on stereotypes when judging other people. Although you may believe that you’ll consider all the information available about another person, people in fact are likely to consider very little information and let stereotypes creep in. It may be a failure to understand how quickly judgments get made that makes it so hard to exclude the influence of stereotyping. Modern technology allows virtually any decision made today to be more informed than the same decision made a few decades ago, Klein and O’Brien say. But the human reliance on quick judgments may forestall this promise. In the quest for more informed decision-making, researchers will need to explore ways to encourage people to slow down the speed of judgment.

The Role of Heuristics in Decision-Making

We have to be able to do a lot of our decision-making essentially on autopilot to free up cognitive resources for more difficult decisions. So the human brain has evolved a set of what we understand to be heuristics, or rules of thumb. Heuristics are great in, say, 95 percent of situations. It’s the remaining five percent, or maybe even one percent, where they’re really not so great. That’s when we have to become aware of them, because in some situations they can become biases. For example, it doesn’t matter so much that we’re not aware of our rules of thumb when we’re driving to work or deciding what to make for dinner. But they can become absolutely critical in situations where a member of law enforcement is making an arrest, where you’re making a decision about a strategic investment, or even when you’re deciding whom to hire. Let’s take hiring for a moment. How many years is a hire going to impact your organization? You’re potentially looking at 5, 10, 15, 20 years. Having the right person in a role could change the future of your business entirely.
That’s one of those areas where you really need to be aware of your own heuristics and biases—and we all have them. There’s no getting rid of them. Heuristics and biases are very topical these days, thanks in part to Michael Lewis’s fantastic book The Undoing Project, the story of the groundbreaking work that Nobel Prize winner Danny Kahneman and Amos Tversky did on the psychology and biases of human decision-making. Their work gave rise to the whole new field of behavioral economics. In the last 10 to 15 years, neuroeconomics, the combination of behavioral economics with neuroscience, has really taken off. In behavioral economics, researchers use economic games and economic choices that have numbers associated with them and have real-world application.

The “control” network in the brain suppresses the default network and allows the brain to focus on the present moment rather than wandering all the time. It keeps us on task. Scientists have discovered that the control network works best when it faces limited distractors, such as email, phone calls, the Internet and all the other daily factors that draw us away from a task and increase anxiety. This network also prevents us from being effective multi-taskers. Studies show that people who try to multi-task are unable to allocate brain resources in a way that matches their priorities. The result of multi-tasking is one or more jobs done poorly, mental fatigue, shallow thinking and impaired self-regulation (Waytz and Mason, 2013). Leaders who are familiar with this research take steps to modify or eliminate open offices, interruptions and multiple electronic devices that are always on.

The Role of Emotions in Decision-Making

Research conducted by neuroscientists Daeyeol Lee of Yale University, Daniel Salzman of Columbia University and Xiao-Jing Wang of Yale University has reached the following conclusions regarding decision-making:

- Our emotions affect all our decisions.
- Most decisions involve some kind of reward we receive as a result.
- Poor decision-making can be a result of dysfunctional brain activity or the impact of negative emotional states such as extreme anxiety.

Science writer Jonah Lehrer, author of How We Decide, points out that people who experience damage to the emotional centers of their brain are unable to make decisions. Lehrer argues that there is a sweet spot between logic and emotion that makes for good decisions. In the book 30-Second Brain, Anil Seth shows how effective decision-making is not possible without the motivation and meaning provided by emotional input. Seth describes how Antonio Damasio’s patient “Elliott,” previously a successful businessman, underwent neurosurgery for a tumor and lost a part of his brain—the orbitofrontal cortex—that connects the frontal lobes with the emotions. He became a real-life Mr. Spock, devoid of emotion. But rather than making him perfectly rational, this left him paralyzed by every decision in life. Damasio later developed the somatic marker hypothesis to describe how visceral emotion supports our decisions. For instance, he showed in a card game that people’s fingers sweat prior to picking up from a losing pile, even before they recognize at a conscious level that they’ve made a bad choice. Daniel Kahneman demonstrated with Amos Tversky that the negative emotional impact of losses is twice as intense as the positive effect of gains, which affects our decision-making in predictable ways. For one thing, it explains our stubborn reluctance to write off bad investments.

Should You Trust Your Gut?

Do our leaders—or for that matter, do any of us—trust our brains and rational thinking when making important decisions? Or do we make better decisions based on gut instinct and emotions? Recent research on the process of decision-making has brought to light surprising findings that contradict conventional wisdom.
Research by Daniel Kahneman, a psychologist and Nobel Prize winner in economics, and Gary Klein, a senior scientist at MacroCognition, discussed the power of intuition to support decision-making in high-pressure situations. When asked, “When should you trust your gut?” Klein responded, “Never,” arguing that leaders need to consciously and deliberately evaluate their gut feelings. Kahneman argues that when leaders are under time pressure to make a decision, they need to follow their intuition, but adds that overconfidence in intuition can be a powerful source of illusions. Klein argues that intuition is more reliable in structured, stable conditions but may be unreliable in turbulent conditions, using the example of a broker choosing stocks. Kahneman cautions leaders to be wary of “experts’ intuition,” unless those experts have dealt with many similar situations in the past, citing the example of surgeons.

A study by Joseph Mikels, Sam Maglio, Andrew Reed and Lee Kaplowitz published in the journal Emotion supports the power of gut instincts for quick decisions. They gave subjects a series of complex decisions of various types, with the instruction either to go with gut instinct or to reason it out with information. Overall, they found that compared with trying to work out the details, using their gut led to much better outcomes. The researchers argue that unconscious decision-making (intuition, or gut instinct) requires no cognitive resources, so task complexity doesn’t degrade its effectiveness. The seemingly counter-intuitive conclusion is that although simple decisions are enhanced by conscious thought, the opposite holds true for complex decisions.

In my article “Should You Trust Your Gut to Make Decisions?” I described the following: “More recent research on the complexity of making decisions based on gut feelings is being done by Shabnam Mousavi, an assistant professor at the Johns Hopkins Carey Business School.
She is the lead author of “Risk, Uncertainty, and Heuristics,” a paper that explores the idea that intuition can be a more useful tool than deliberate calculation in certain situations. The research digs deeper into Nobel prizewinner Daniel Kahneman’s work, which showed how often humans elect to make a snap judgment based on intuition rather than deliberating with available information. Mousavi proposes that too much information can be just as misleading as a hunch in some cases. One example came from quizzing both German and U.S. students to see if they could guess which city was larger: Detroit or Milwaukee. Score: Germans 90% correct vs. Americans 60% correct. Why? Because the Germans simply picked the one they’d heard more about and guessed it was the larger of the two. Americans, armed with “knowledge” of these cities, didn’t reach for the obvious—and failed. Though this seems like a simplistic example, the researchers note that using two cities the students had never heard of would have changed the results dramatically. Likewise, some financial newbies have no trouble picking better stocks than a seasoned expert, but give them a portfolio of unrecognizable brands and watch the game change. This validates Damasio’s theory that the experience of trusting the gut and getting something right or wrong is key to making good decisions.

Mousavi posits that it’s important to take intuitive decision-making one step further by recognizing why people have developed such instincts and where best to use them. While business favors doing a cost/benefit analysis and taking a rational approach before deciding which way to go, in an article for Johns Hopkins’ The Hub, Mousavi recommends an alternative: create a decision tree that starts with the fundamental question, “If the worst-case scenario of a proposal were to occur, could you survive?” If no, don’t pursue it. If yes, the next question might be whether the company is well-positioned as a first mover in the area.
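The sequential decision tree Mousavi describes can be sketched as a short function. The two questions come from the article; the function name, its inputs, and the "defer" fallback are hypothetical, added only to illustrate how asking one question at a time limits the information considered at each step.

```python
# A minimal sketch of the sequential decision tree described above.
# The two questions are from Mousavi's example; everything else
# (names, the "defer" outcome) is an illustrative assumption.

def evaluate_proposal(survives_worst_case: bool,
                      first_mover_advantage: bool) -> str:
    # Question 1: if the worst-case scenario occurred, could we survive?
    if not survives_worst_case:
        return "reject"  # stop here; later factors never get weighed
    # Question 2, asked only if question 1 passed:
    if first_mover_advantage:
        return "pursue"
    return "defer"  # hypothetical fallback: ask further questions

print(evaluate_proposal(survives_worst_case=False, first_mover_advantage=True))  # reject
print(evaluate_proposal(survives_worst_case=True, first_mover_advantage=True))   # pursue
```

Because each branch is only reached after the previous question is answered, the tree never asks for information it cannot use, which is the property Mousavi highlights as a guard against information overload.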
By making each decision sequentially, the company can more effectively limit its information to relevant factors, avoiding information overload and not attempting to quantify the unquantifiable.”

A 2013 study found that 15 minutes of mindfulness meditation can help people make smarter choices. The findings, from the Wharton School of Business, were published in the journal Psychological Science. A series of studies led by Andrew Hafenbrack found that mindfulness helped counteract deep-rooted tendencies and led to better decision-making. The researchers found that a brief period of mindfulness allowed people to make more rational decisions by considering the information available in the present moment, which led to more positive outcomes in the future. Using mindfulness could give various regions of your striatum and prefrontal cortex time to relay the true “neuroeconomic” costs of a decision and help you make smarter choices. Mindful decision-making can derail compulsive or addictive patterns of behavior and take you down a path that’s in your best interest for long-term health, happiness, and overall well-being.

Applying Neuroscience Knowledge to Improve Leadership and HR Practices

The knowledge gained from neuroscience research can make a significant difference both in terms of leadership and HR practices. Here are some examples:

A new area of research and practice called neuroleadership refers to the application of findings from neuroscience to the field of leadership. The term was first coined by Dr. David Rock in 2006. His research showed how instituting change in organizations affected the brains of employees. Rock developed the SCARF model (Status, Certainty, Autonomy, Relatedness, Fairness). Status relates to a person’s relative importance to others. Certainty is about being able to predict the future. Autonomy provides a sense of control over events.
Relatedness is the sense of connection and safety with others (the brain perceives a friend versus a foe). Fairness is the perception of being treated justly. These domains activate either a “primary reward” or “primary threat” response in the brain. For example, a perceived threat to one’s status will trigger a primary threat response in the brain because, as discussed earlier, the brain’s primary goal is survival. A perceived increase in fairness (an open discussion of a company’s compensation practices to assure all employees that their compensation is fair and equitable, for example) will activate the same “primary reward” response as receiving a monetary reward. Fairness is core not only to humans but is hard-wired in primates as well. Rock’s SCARF model summarizes the top five social rewards and threats important to the brain, and suggests how HR and talent management professionals can use this model to improve employee and organizational performance.

The Brain’s Emotional Response Can Determine Behavior

A growing body of applicable neuroscience research shows that when managers offer feedback to subordinates (whether it is positive or negative), an emotional reaction is triggered in the subordinate’s brain, the organ whose overriding concern is survival. Researchers Van Hecke, Callahan, Kolar, and Paller found that social pain—such as being ignored, ostracized, or humiliated—is just as intense as physical pain. Also, change is feared because the brain, which is hard-wired to survive, perceives it as a threat. This, in turn, causes an explosion of negative emotions that sends the brain and body into a threat-response mode. Anxiety shoots through the roof, thinking becomes muddled, and the brain and body instinctively resist the perceived threat. These responses are so instantaneous that they may not even be recognized at first by the person experiencing them.
This deeper understanding of the fear of change—that the human brain will resist change that is perceived as a threat—has widespread implications for how leaders, HR and talent management professionals approach change management. If change is presented as a crisis (“If we don’t change immediately, we’ll all be out of a job”), or if a “just do it and don’t ask questions” approach is taken, the change effort will likely fail. For change management to work, a more thoughtful approach may be needed. HR professionals and leaders should try to reduce stress and anxiety by focusing on the positive aspects of the proposed change, asking questions, and listening actively to employees’ concerns.

Fairness in Dealing with Employees

Primatologist Frans de Waal found that capuchin monkeys can discern unfairness, particularly when it comes to pay inequity. As he explains in a YouTube clip, when one capuchin monkey is rewarded with a cucumber for a completed task, she accepts it willingly—the first time. When she sees another capuchin monkey rewarded with a more desired grape for the same completed task and she is rewarded again with the less-desired cucumber, she throws the cucumber at the researcher. Primates, like humans, have a highly honed sense of fairness. Employees react emotionally and almost instinctively when they feel they are being treated unfairly, either in terms of compensation or in terms of treatment by their boss and co-workers. A study conducted by Jamil Zaki of Stanford University and Jason Mitchell of Harvard University found that when people were allowed to divide up small amounts of money among themselves, the brain’s reward network responded much more when the participants made generous, equitable choices (Waytz and Mason, 2013).

The Importance of Status

Rock identifies status as one of the most significant drivers in the brain.
A person will avoid a decrease in status in much the same way he or she would avoid pain, because the perceived threat of diminished status triggers the same area in the brain as pain. This is an important consideration, as previously discussed, when presenting change, because to the brain change means a threat to social status.

The Brain Craves Certainty and Autonomy

When presented with any change, the brain will activate the limbic system and put it on alert, putting the recipient of the potential change into “fight or flight” mode to survive. The brain also prefers autonomy, the sense of having control over and input into the future, so when presenting issues that may trigger uncertainty, leaders should consider how to calm those triggers by allowing employees to discuss and participate in resolving the issues.

The Brain Seeks Connection with Others

The brain also seeks connection with others—this is relatedness. Employees will respond better to bosses and peers whom they find to be “resonant”: fair, compassionate and empathetic. Dissonant managers trigger a negative response in the brain, which then categorizes them as a foe, leading to distrust and disconnection. Toxic leader behavior can have a serious and viral impact on employees, resulting in a loss of motivation, productivity and well-being.

The Influence of Peers and the Group

I think we’re all aware that the kinds of choices we make are influenced by the people around us. In fact, this is true of other animals as well. One thing we have learned from studying the choices that animals and people make, and what happens in their brains when they make these choices, is that there are very specific, highly specialized mechanisms that detect the presence of other individuals, identify who they are, evaluate their importance to us and allow us to learn from their behavior.
More recently, researchers have identified a very specific set of brain cells that responds when another individual receives a reward. A really interesting and important potential practical application of this discovery is that we might find ways, behaviorally, pharmacologically or by other means, to activate these cells. If we did so, we might be able to promote pro-social behaviors such as charitable giving. This could be a very important and practical way of enhancing the welfare of society.

The Role of Gender

New research by psychologists at the University of Warwick suggests gender plays a role in decision-making. They argue that because men and women perceive the world differently, they make decisions differently. The researchers say that whereas men organize their world into distinct, “black or white” categories, women see things as more conditional and in shades of gray. Traditionally, cultures have rewarded males for being decisive and proactive, whereas females are socialized to be more thoughtful and receptive to others’ views.

The Resonant Leader

In the not-so-distant past, the conventional definition of an effective leader was one who got results, boosted the bottom line, and generally forced productivity out of his or her employees. As HR and talent management professionals know all too well, some of the management practices used to get these results came at the cost of employee motivation, retention, trust, and ultimately the bottom line. With a window into neuroscience, today we have more insight into how to improve leadership behaviors. For example, a study (cited in Richard Boyatzis’ Resonant Leadership) found a link between effective leaders and resonant relationships with others. The study, using fMRI technology, found that when middle managers were asked to recall specific experiences with “resonant” leaders, 14 regions of the brain were activated.
When asked to recall specific experiences with “dissonant” leaders, only six regions of the brain were activated and 11 regions were deactivated. There is also a physical basis in the brain for trust, an emotion increasingly cited as a critical leadership trait. A 2008 study identified a chemical in the brain called oxytocin that, when released, makes a person more likely to feel trust toward a stranger (Margie Meacham, 2013). The brain actually determines trustworthiness within milliseconds of meeting a person. That initial determination is continually updated as more information is received and processed, as the brain takes in a person’s appearance, gestures, voice tone, and the content of what is said. What this means for leaders is that it is possible to build trust among employees even if it has been lacking in the past. Meacham, an adult learning expert, offers the following steps leaders can take to build trust in an organization:

- Make people feel safe. The brain categorizes survival as its top priority, so leaders who can show they are not a threat will be seen as trustworthy.
- Demonstrate fairness. The brain seeks fairness and will react to perceived injustice with anger and frustration.
- Be genuine, and be sure to show trust in others.

Meacham writes that “when we watch someone else, our brains are activated in the same way that the brain of the person we are observing is activated—through the function of special ‘mirror neurons.’” In other words, if a leader distrusts the person with whom they are speaking, the other person will pick up on it and mirror that distrust back. Neuroscience shows us that resonant leaders open pathways in their employees’ brains that encourage engagement and positive working relationships. Good leaders pay attention to relationship building, and to trust levels in the organization, among managers and employees in particular.
Leaders and HR can emphasize trust development in leadership development activities, and highlight the neuroscience behind why trust is so important. Trust can be fostered through open communication, clearly communicated goals, and transparency (Broughton and Thomas, 2012).

Summary: We know so much more now about the neurological and psychological processes involved in decision-making, knowledge that leaders should use to be more effective.

Copyright: Neither this article nor any portion thereof may be reproduced in any print or media format without the express permission of the author.

Read my latest book: Eye of the Storm: How Mindful Leaders Can Transform Chaotic Workplaces, available in paperback and Kindle on Amazon and Barnes & Noble in the U.S., Canada, Europe, Australia and Asia.
A complement describes a verb's argument (subject or object) more closely:

1. Sir George is a knight. (subject complement)
2. The Queen made Sir George a knight. (object complement)

Notice that you can put "a knight" in the object slot in 2.:

3. The Queen made a knight of Sir George.

Notice that it is possible to read sentence 2. in such a way that "a knight" is, in fact, the direct object of the sentence. In that case, "Sir George" gets demoted from direct object to indirect object. The meaning of the sentence changes quite drastically. It now means:

4. The Queen made a knight for Sir George.

Because of what we know about the world, reading sentence 2. so that it means the same as sentence 4. is very unlikely, but grammar allows it. So, in summary, sentence 2. could mean:

2a. The Queen (subject) made Sir George (direct object) a knight (object complement).
2b. The Queen (subject) made Sir George (indirect object) a knight (direct object).

So, I'm quite curious how to analyse a sentence like:

5. We changed trains at Victoria station.

so that "trains" is a complement. What does it complement? Since this sentence has no object, "trains" would have to be a subject complement, but I don't see that. Might it be an adverbial complement? I'm not sure myself.
This lesson can serve as the culminating review lesson for the entire EDSITEment Marco Polo Curriculum Unit, or you may use it to complete your own series of lessons for 3rd through 5th graders that focus on Marco Polo's journey to China and back. After resting up and replenishing their supplies in the trading city of Kashgar, Marco Polo and his father and uncle continued eastward on their journey from Venice to China. After spending 17 years in China, Marco Polo and his father and uncle finally had an opportunity to return home to Venice. Students follow their homeward journey, starting with a sea voyage to India. Marco Polo's father and uncle returned to Venice when he was 15 years old. Two years later, when they set off again for China, they decided to take Marco with them. Students will take a “virtual” trip with Marco Polo from Venice to China and back. The first leg of the journey ends at Hormuz. The Polos were so concerned about the seaworthiness of the ships they found at Hormuz that they changed their plans and decided instead to follow a series of trade routes across Asia to China. Students will "accompany" them on this leg of the trip, from Hormuz to Kashgar. Over half of all English surnames used today are derived from the names of places where people lived. This type is known as a locative surname. For example, a man called John who lived near the marsh might be known as John Marsh. John who lived in the dell was called John Dell. Other examples are John Brook, John Lake, and John Rivers. Fables, such as those attributed to Aesop, are short narratives populated by animals who behave like humans, and which convey lessons to the listener. Jataka Tales are often short narratives which tell the stories of the lives of the Buddha before he reached Enlightenment. In this lesson students will be introduced both to Aesop's fables and to a few of the Jataka Tales, and through these stories will gain an understanding of one genre of storytelling: morality tales.
Eric A. Blair, better known by his pen name, George Orwell, is today best known for his last two novels, the anti-totalitarian works Animal Farm and 1984. He was also an accomplished and experienced essayist, writing on topics as diverse as anti-Semitism in England, Rudyard Kipling, Salvador Dali, and nationalism. Among his most powerful essays is the 1936 autobiographical essay "Shooting an Elephant," which Orwell based on his experience as a police officer in colonial Burma. Joan of Arc is arguably one of France's most famous historical figures, and she has been mythologized in popular lore, literature, and film. She is also an exceptionally well-documented historical figure. Through such firsthand accounts students can trace Joan's history from childhood, through her death, and on to her nullification trial. Through reading chapters of Edith Wharton's book, Fighting France, From Dunkerque to Belfort, students will see how an American correspondent recounted World War I for American readers.
NCERT Solutions For Class 10 Maths Chapter 15 In the introduction, the concept of experimental or empirical probability is explained first. Following this, a theoretical approach to probability is presented, and it is mentioned that theoretical probability is also known as classical probability. Further, the formula for theoretical probability is presented as the number of outcomes favourable to an event E divided by the number of all possible outcomes of the experiment. It is also observed that the sum of the probabilities of all the elementary events of an experiment is 1. Further to this, concepts like impossible events, sure (or certain) events, and complementary events are explained. Download Chapter 15 Probability Class 10 Maths Solutions PDF for FREE:
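The formula and the sum-to-one observation above can be checked with a few lines of Python. The die example, the "even number" event, and the function name are illustrative assumptions, not part of the NCERT text; `fractions.Fraction` keeps the probabilities exact.

```python
from fractions import Fraction

def theoretical_probability(favourable, total):
    """P(E) = (outcomes favourable to E) / (number of all possible outcomes)."""
    return Fraction(favourable, total)

# Assumed example: one roll of a fair six-sided die.
sample_space = [1, 2, 3, 4, 5, 6]

# P(rolling an even number) = 3/6 = 1/2.
even = [n for n in sample_space if n % 2 == 0]
p_even = theoretical_probability(len(even), len(sample_space))
print(p_even)  # 1/2

# The probabilities of all elementary events sum to 1.
total = sum(theoretical_probability(1, len(sample_space)) for _ in sample_space)
print(total)  # 1

# Complementary events: P(E) + P(not E) = 1.
p_odd = theoretical_probability(len(sample_space) - len(even), len(sample_space))
assert p_even + p_odd == 1
```

An impossible event (e.g. rolling a 7) would give probability 0, and a sure event (rolling at most 6) gives probability 1, matching the chapter's definitions.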
What is a Strain Gage? Strain gages are fundamental sensing devices that function as the building blocks of many other types of transducers—including pressure, load, and torque sensors—used extensively in structural test and monitoring applications. An example of such a transducer is a load cell that converts a mechanical force to an electrical output signal. In these designs, gages are connected as a Wheatstone bridge, resulting in an accurate and rugged transducer that can operate in extreme environments. To achieve accuracy, the Wheatstone bridge is adjusted for initial manufacturing tolerance and for ambient and self-heating temperature effects. High-precision "compensation" resistors are then added to correct for bridge unbalance and to adjust the output sensitivity. Other compensation resistors correct for the errors that result when the transducer is used over a widely changing temperature range. The True Pioneer of Strain Gage Technology While there are several ways of measuring strain, the most common is to use a bonded resistance strain gage, a device whose electrical resistance varies in proportion to the amount of strain in the device. Today, the most widely used strain gage is the Advanced Sensors Technology bonded resistance strain gage. Micro-Measurements uses a precisely manufactured (in-house) metallic foil to produce the resistive element, providing the best consistency and gage-to-gage matching available. The metallic strain gage consists of metallic foil arranged in a grid pattern. The grid pattern maximizes the amount of metallic foil subject to strain in parallel directions. The Advancement of the Strain Gage The discovery of the principle upon which the foil resistance strain gage is based was made in 1856 by Lord Kelvin, who loaded iron and copper wires in tension and noted their resistance changes while applying strain to the wires.
In his classic experiment, Lord Kelvin established three important fundamentals that helped develop the foil strain gage: the resistance of a wire changes as a function of the applied strain; each material has a different sensitivity; and a Wheatstone bridge is vital to measuring the resistance change accurately. Strain gages are resistive sensors whose resistance is a function of applied strain or force (unit deformation); stress is calculated from the strain information. Typically a strain gage is attached to a structure, and when the structure is deformed (tension, compression, shear), the resistive strands in the strain gage follow the structure's deformation, which causes an electrical resistance change. The resistance change is then expressed in units of strain or stress. Beyond their use in transducers, strain gages are also used for measuring strain in structures themselves (stress analysis), such as spacecraft, airplanes, cars, machines, and bridges.
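The resistance change and the bridge measurement described above can be sketched numerically. The relations ΔR = R·GF·ε and, for a quarter Wheatstone bridge at small strains, Vo ≈ Vex·GF·ε/4 are the standard textbook forms; the gage resistance, gage factor, strain, and excitation voltage below are assumed illustrative values, not figures from this page.

```python
def resistance_change(r_nominal, gauge_factor, strain):
    """Resistance change of a bonded gage under strain: dR = R * GF * eps."""
    return r_nominal * gauge_factor * strain

def quarter_bridge_output(v_excitation, gauge_factor, strain):
    """Small-strain quarter Wheatstone bridge output: Vo ~ Vex * GF * eps / 4."""
    return v_excitation * gauge_factor * strain / 4.0

# Assumed example values: 350-ohm foil gage, GF = 2.0, 1000 microstrain, 5 V excitation.
r = 350.0
gf = 2.0
eps = 1000e-6   # 1000 microstrain
v_ex = 5.0

dr = resistance_change(r, gf, eps)            # 0.7 ohm: far too small to read directly,
v_out = quarter_bridge_output(v_ex, gf, eps)  # hence the Wheatstone bridge: 2.5 mV out
print(f"dR = {dr:.3f} ohm, bridge output = {v_out * 1e3:.2f} mV")
```

The millivolt-level output is why the bridge circuit (and its compensation resistors) matter: the raw 0.7 Ω change on a 350 Ω gage is only 0.2%.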
Performance specifications to consider when searching for strain gages include the operating temperature, the state of the strain (including gradient, direction, magnitude, and time dependence), and the stability required by the application. The construction of a resistance strain gage involves bringing together the best electrical resistance material and backing in the optimal way from the manufacturing, application, and performance points of view. What do we expect from a bonded resistance strain gage?
- Small size and mass.
- Agile development: ease of manufacturing in different resistance values, overall sizes, and measurement configurations.
- Durability, with ease of handling and use.
- Excellent stability, repeatability, and linearity over a wide strain range.
- Practical sensitivity to strain.
- Ability to control the effects of environmental variables, such as temperature, in the measurement system.
- Suitability for dynamic and static measurements and for remote recording.
The foil resistance strain gage is the most frequently used sensor in stress analysis measurements throughout the world today. Many factors, such as test duration, strain range required, and operating temperature, must be considered in selecting the best strain gage/adhesive combination for a given test profile. In contrast to the situation for stress analysis applications, the strain gages installed on a transducer can readily be calibrated against known physical standards (dead weights, for example, or a previously calibrated transducer). The existence of precise standards and sensitive electronic instrumentation allows the constructor of a transducer to quantify its performance to a very high degree. It is possible, in fact, to observe transducer behavior to a resolution better than one part in 20,000.
This corresponds, in effect, to detecting a strain of 0.05 µε or less on the surface of the spring element. Clearly, resolving such small dimensional changes requires that the strain gage system be selected and installed with the utmost care. The strain gage selection procedure for stress analysis and transducer applications is similar. The preferred sequence is:
- Operating temperature range.
- Gage length.
- S-T-C number.
- Pattern geometry.
- Strain gage series.
- Grid resistance.
- Creep compensation code (transducers only).
- Optional features and custom strain gages.
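The arithmetic behind the "one part in 20,000" figure can be sanity-checked in a couple of lines. The 1000 µε full-scale design strain below is an assumption (a typical transducer spring-element value), not something stated in the text.

```python
# Sanity check of the resolution claim, assuming (not stated in the source)
# a typical transducer full-scale design strain of 1000 microstrain.
full_scale_ustrain = 1000.0   # assumed strain at rated load
resolution_parts = 20_000     # "one part in 20,000"

smallest_ustrain = full_scale_ustrain / resolution_parts
print(smallest_ustrain)  # 0.05, matching the 0.05 microstrain quoted in the text
```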
Abraham Lincoln’s famous speech, delivered during an internal crisis in American history, is one of the most important ever given because of the time at which he delivered it and the consequences that followed. Through the very short yet powerful speech called the Gettysburg Address, he worked to hold the North and the South together in the Union and to keep the nation titled “The United States of America” intact. Lincoln addressed his listeners as if he were talking to friends discussing everyday topics. In reality, he was giving the speech that tied the United States of America together through and after the war. Lincoln was giving the Americans in the crowd the “halftime” speech of the Civil War: how important it was that they stick together, and that if the South refused, it would face the consequences until it agreed to reunite with the North and end its rebellion. At the time, around 45,000 people were already dead, captured, or missing, so Lincoln left his audience with words that did not apologize for the deaths of their family members and friends. Instead, he pushed them to keep struggling and fighting for unification, and to show that it was right to create this memorial for a brutal war in which Americans killed other Americans over what amounted to a dispute about division. In conclusion, the Gettysburg Address given by Abraham Lincoln is one of the most powerful speeches ever delivered because it kept our nation from dividing during the most important yet brutal crisis in the history of the United States. Fisher High School
Charles dissolved Parliament when it tried to expand its powers to deal with an economic crisis. The Parliament of 1628 produced the Petition of Right. This document prohibited the king from 1. raising taxes without the consent of Parliament, 2. imprisoning subjects without due cause, 3. housing soldiers in private homes, and 4. imposing martial law in peacetime. Charles did sign the petition, but then dissolved Parliament in 1629, and for the next 11 years he ignored the promises made in the petition and ruled the nation without Parliament.
Presentation on theme: "Basic Statistics: The Chi Square Test of Independence."— Presentation transcript:

1 Basic Statistics: The Chi Square Test of Independence

2 Chi Square Test of Independence
A measure of association similar to the correlations we studied earlier. Pearson and Spearman are not applicable if the data are at the nominal level of measurement. Chi Square is used for nominal data placed in a contingency table. A contingency table is a two-way table showing the contingency between two variables, where the variables have been classified into mutually exclusive categories and the cell entries are frequencies.

3 An Example
Suppose that the state legislature is considering a bill to lower the legal drinking age to 18. A political scientist is interested in whether there is a relationship between party affiliation and attitude toward the bill. A random sample of 150 registered Republicans and 200 registered Democrats are asked their opinion about the proposed bill. The data are presented on the next slide.

4 Political Party and Legal Drinking Age Bill

               For   Undecided   Against   Total
  Republican    38       17         95      150
  Democrat      92       18         90      200
  Total        130       35        185      350

The bold numbers are the observed frequencies (fo).

5 Determining the Expected Frequencies (fe)
First, add the columns and the rows to get the totals as shown in the previous slide. To obtain the expected frequency within a particular cell, multiply the row total by the column total for the cell in question, and then divide that product by the total number of all respondents.

6 Calculating the Expected Value for a Particular Cell
For the Republican/For cell: (130 * 150) / 350 = 19500 / 350 = 55.7

7 Political Party and Attitude toward Bill

               For          Undecided    Against       Total
  Republican   38 (55.7)    17 (15.0)    95 (79.3)     150
  Democrat     92 (74.3)    18 (20.0)    90 (105.7)    200
  Total       130           35          185            350

Numbers in black are observed (fo); numbers in purple are expected (fe).

8 The Null Hypothesis and the Expected Values
The Null Hypothesis under investigation in the Chi Square Test of Independence is that the two variables are independent (not related). In this example, the Null Hypothesis is that there is NO relationship between political party and attitude toward lowering the legal drinking age.

9 Understanding the Expected Values
If the Null is true, then the percentage of those who favor lowering the drinking age would be equal for each political party. Notice that the expected values for each opinion are proportional to the number of persons surveyed in each party.

10 Political Party and Attitude toward Bill
The percentages of the total for each party: Republicans 42.8% (150/350), Democrats 57.2% (200/350).

11 The Expected Values
The expected value for each cell also equals each party's percentage of the column total. For example, Republicans were 42.8% of the total persons surveyed. If 130 people were in favor of the bill, then 42.8% of them should be Republican (55.7), if there is no relationship between the variables.

12 Calculating the Chi Square Statistic
The Chi Square statistic is (1) the sum over all cells of (2) the squared difference between the observed value and the expected value, (3) divided by the expected frequency:

  X² = Σ (fo - fe)² / fe

13 Calculating the Chi Square Statistic
Cell by cell: 5.62 + 0.27 + 3.11 + 4.22 + 0.20 + 2.33, giving X² = 15.75.

14 Interpreting the Results
The calculated value of the chi square statistic is compared to the critical value found in Table H, page 544. Note: the distribution of the Chi Square statistic is not normal, and the critical values are only on one side. If the observed values are close to the expected values, the chi square statistic approaches 0; as the observed values depart from the expected, the value of chi square increases. This is reflected in the values found in Table H. The degrees of freedom for the Chi Square Test of Independence is the number of rows minus 1 times the number of columns minus 1.

15 Interpreting Our Results
In our study, we had two rows (Republicans and Democrats) and three columns (For, Undecided, Against). Therefore, the degrees of freedom for our study is (2-1)(3-1) = 1(2) = 2. Using an α of .05, the critical value from Table H is 5.991. Since our calculated chi square is 15.75, we conclude that there IS a relationship between political party and opinion on lowering the drinking age, thereby rejecting the Null Hypothesis.
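The whole slide sequence can be reproduced in a few lines of plain Python (no statistics library needed); the observed frequencies are taken directly from the contingency table on slide 4.

```python
# Chi-square test of independence, reproducing the slides' computation.
observed = [
    [38, 17, 95],   # Republican: For, Undecided, Against
    [92, 18, 90],   # Democrat:   For, Undecided, Against
]

row_totals = [sum(row) for row in observed]          # [150, 200]
col_totals = [sum(col) for col in zip(*observed)]    # [130, 35, 185]
grand_total = sum(row_totals)                        # 350

# fe = (row total * column total) / grand total
expected = [[r * c / grand_total for c in col_totals] for r in row_totals]

chi_square = sum(
    (fo - fe) ** 2 / fe
    for obs_row, exp_row in zip(observed, expected)
    for fo, fe in zip(obs_row, exp_row)
)

df = (len(observed) - 1) * (len(observed[0]) - 1)    # (2-1)(3-1) = 2
critical_value = 5.991                               # alpha = .05, df = 2 (Table H)

# Unrounded, the statistic is about 15.77; the slides round each cell first
# and report 15.75. Either way it exceeds 5.991, so the Null is rejected.
print(f"chi-square = {chi_square:.2f}, df = {df}")
print("reject H0" if chi_square > critical_value else "fail to reject H0")
```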
Chemical equilibrium deals with the extent to which a chemical reaction proceeds. In most chemical reactions, the reactants are not completely converted to products. The reaction proceeds to a certain extent and reaches a state at which the concentrations of both reactants and products remain constant with time. This state is generally referred to as the equilibrium state. In these reactions, not only are reactants converted to products, but products are also converted back to reactants. Such reactions are known as reversible reactions. They reach an equilibrium state where the number of reactant species converted to products becomes equal to the number of product species converted to reactants at a given instant of time, i.e., the rate of the forward reaction becomes equal to the rate of the backward reaction. That is why, at equilibrium, there is no observable change in the concentrations of reactants and products. The reaction appears to have halted, and no further net conversion of reactants is possible under the given set of conditions. Chemical equilibrium deals with these reversible reactions, which reach an equilibrium state. Its scope includes the study of the characteristics of, and the factors affecting, chemical equilibria. Irreversible reaction: A reaction that occurs in only one direction is called an irreversible reaction, i.e., only the reactants are converted to products and the conversion of products back to reactants is not possible. A single-headed arrow (→) is used to indicate irreversible reactions. 1) The combustion of methane is an irreversible reaction, since it is not possible to convert the products (carbon dioxide and water) back to the reactants (methane and oxygen). 2) The decomposition of potassium chlorate is also an irreversible reaction; it is not possible to prepare potassium chlorate directly from KCl and O2.
Reversible reaction: A reaction that occurs in both the forward and backward directions is called a reversible reaction. In a reversible reaction, the reactants are converted into products and the products can also be converted back into the reactants. Half-headed arrows (⇌) are used to indicate reversible reactions. E.g., the following reactions are reversible reactions since they occur in both directions. Chemical equilibrium is possible in reversible reactions only. Chemical equilibrium: The state at which the rate of the forward reaction becomes equal to the rate of the backward reaction is called chemical equilibrium. Explanation: Initially, the rate of the forward reaction is greater than the rate of the backward reaction. During the course of the reaction, however, the concentration of reactants decreases and the concentration of products increases. Since the rate of a reaction is directly proportional to concentration, the rate of the forward reaction decreases with time, whereas the rate of the backward reaction increases. At a certain stage, the two rates become equal. From this point onwards, there is no change in the concentrations of reactants or products with time. This state is called the equilibrium state. The state of chemical equilibrium can be shown graphically. Characteristics of chemical equilibrium: 1) At the equilibrium state, the rates of the forward and backward reactions are equal. 2) Observable properties of the system, such as pressure, concentration, color, density, and viscosity, remain unchanged with time. 3) Chemical equilibrium is a dynamic equilibrium: both the forward and backward reactions continue to occur even though the system appears static externally. The concentrations of reactants and products do not change with time, but their interconversion continues. 4) Chemical equilibrium can be reached by starting the reaction either from the reactants' side or from the products' side.
5) Both pressure and concentration affect the state of equilibrium but do not affect the equilibrium constant. 6) Temperature, however, can affect both the state of equilibrium and the equilibrium constant. 7) A positive catalyst increases the rates of both the forward and backward reactions, thus helping the system attain equilibrium faster; it does not affect the state of equilibrium or the equilibrium constant. Chemical equilibria are classified into two types: 1) homogeneous equilibrium and 2) heterogeneous equilibrium. 1) Homogeneous equilibrium: A chemical equilibrium is said to be homogeneous if all the substances (reactants and products) at equilibrium are in the same phase. 2) Heterogeneous equilibrium: A chemical equilibrium is said to be heterogeneous if the substances at equilibrium are not all in the same phase. The law of mass action was proposed by Guldberg and Waage. It can be stated as: The rate of a reaction at an instant of time is proportional to the product of the active masses of the reactants at that instant of time under the given conditions. The active masses of different substances and systems can be expressed as follows: i) For dilute solutions, molar concentrations are taken as the active masses. ii) For gases at low pressures, partial pressures are taken as the active masses; however, molar concentrations can also be used. iii) The active masses of pure solids and pure liquids are taken as unity, since their active masses (or concentrations) are independent of the quantities taken. E.g., for the reaction N2(g) + 3H2(g) ⇌ 2NH3(g), the rate of the forward reaction at an instant of time can be expressed as r ∝ [N2][H2]^3 or r ∝ pN2·pH2^3, where [N2] and [H2] are the molar concentrations of the gases and pN2 and pH2 are their partial pressures.
Consider the following general reaction at equilibrium: aA + bB ⇌ cC + dD. By applying the law of mass action, the rate of the forward reaction can be written as Vf ∝ [A]^a[B]^b, i.e., Vf = kf[A]^a[B]^b, and the rate of the backward reaction as Vb ∝ [C]^c[D]^d, i.e., Vb = kb[C]^c[D]^d. Here [A], [B], [C] and [D] are the equilibrium concentrations of A, B, C and D respectively; a, b, c and d are their stoichiometric coefficients; and kf and kb are the rate constants of the forward and backward reactions. At equilibrium, the rate of the forward reaction equals the rate of the backward reaction, i.e., Vf = Vb, which gives: Kc = kf/kb = [C]^c[D]^d / ([A]^a[B]^b). Kc is the equilibrium constant expressed in terms of molar concentrations: the ratio of the product of the equilibrium concentrations of the products to the product of the equilibrium concentrations of the reactants, each raised to its stoichiometric coefficient. Units of Kc: Most of the time, Kc is expressed without units; however, the units of Kc can be given as (mol·L^-1)^Δn, where Δn = (c + d) - (a + b) = (total no. of moles of products) - (total no. of moles of reactants). Note: These numbers of moles are the stoichiometric coefficients in the balanced chemical equation for the equilibrium; the moles of solids and liquids are not counted. Factors affecting the equilibrium constant: Pressure, concentration and catalysts have no effect on the value of the equilibrium constant. The equilibrium constant does, however, depend on temperature. Usually, in exothermic reactions, an increase in temperature decreases the equilibrium constant Kc, whereas in endothermic reactions, an increase in temperature increases the Kc value. Kp is the equilibrium constant in terms of partial pressures. If A, B, C and D are gases in the above reaction, then Kp can be written as: Kp = pC^c·pD^d / (pA^a·pB^b), where pA, pB, pC and pD are the partial pressures of A, B, C and D at equilibrium. From the ideal gas equation, P = (n/V)RT = [M]RT, where [M] is the molar concentration.
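As a numeric sketch of the Kc expression, here is the ammonia equilibrium from the earlier example evaluated in Python. The equilibrium concentrations are assumed illustrative values, not data from the text.

```python
# Evaluating Kc for N2(g) + 3 H2(g) <=> 2 NH3(g) from assumed, illustrative
# equilibrium molar concentrations (not measured values from the source).
n2, h2, nh3 = 0.50, 1.50, 0.30   # mol/L at equilibrium (assumed)

# Kc = [NH3]^2 / ([N2] * [H2]^3); exponents are the stoichiometric coefficients.
kc = nh3**2 / (n2 * h2**3)
print(f"Kc = {kc:.4f}")   # units would be (mol/L)^dn with dn = 2 - (1 + 3) = -2
```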
Now we can write: pA = [A]RT, pB = [B]RT, pC = [C]RT and pD = [D]RT. Substituting these into the expression for Kp gives: Kp = Kc(RT)^Δng, where Δng = (no. of moles of gaseous products) - (no. of moles of gaseous reactants). Note: These numbers of moles are the stoichiometric coefficients of only the gaseous reactants and products in the balanced chemical equation for the equilibrium. The following conclusions can be drawn from the above equation: If Δng = 0, then Kp = Kc. If Δng > 0, then Kp > Kc. If Δng < 0, then Kp < Kc. The expressions for Kc and Kp, and the relation between them, are illustrated below for some reversible reactions. For H2(g) + I2(g) ⇌ 2HI(g): Δng = (2) - (1+1) = 0, so Kp = Kc. For PCl5(g) ⇌ PCl3(g) + Cl2(g): Δng = (1+1) - (1) = 1, so Kp > Kc. For N2(g) + 3H2(g) ⇌ 2NH3(g): Δng = (2) - (1+3) = -2, so Kp < Kc. For 2NH3(g) ⇌ N2(g) + 3H2(g): Δng = (1+3) - (2) = 2, so Kp > Kc. That is, the expressions for Kc and Kp, and the relation between them, depend on how the reversible reaction is written as a stoichiometric equation. For CaCO3(s) ⇌ CaO(s) + CO2(g): Kc = [CO2], and Δng = (1) - (0) = 1, so Kp > Kc. In these expressions, the active masses of solids are taken as one and hence do not appear. The reaction quotient, Q, is defined as the ratio of the product of the concentrations of the products to the product of the concentrations of the reactants at an instant of time. The equilibrium constant Kc is the special case of the reaction quotient Q at equilibrium. If Q equals Kc, the reaction is at equilibrium. If Q is not equal to Kc, the reaction must proceed either to the right (towards the products) or to the left (towards the reactants) to reach the equilibrium state. For example, if Q < Kc, the reactant concentrations must decrease (and the product concentrations increase) to make Q equal to Kc; hence the forward reaction is favored to restore equilibrium. If Q > Kc, the product concentrations must decrease to make Q equal to Kc again; hence the backward reaction is favored. In the same way, the reaction quotient in terms of partial pressures, Qp, can be defined.
It is the ratio of the product of the partial pressures of the products to the product of the partial pressures of the reactants at any instant of time. The same arguments made about Q and Kc are valid for Qp and Kp.
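The Kp–Kc relation and the Q-versus-Kc rule described above can be sketched in Python. The temperature and the Kc value are assumed illustrative numbers, and the helper name is hypothetical; the direction rule follows the text (Q < Kc favors the forward reaction, Q > Kc the backward reaction).

```python
# Kp = Kc * (RT)^dng, and the Q-versus-Kc rule for predicting reaction direction.
R = 0.0821   # L·atm / (K·mol)
T = 500.0    # K (assumed)

# Ammonia synthesis: N2 + 3H2 <=> 2NH3, dng = 2 - 4 = -2, so Kp < Kc.
delta_ng = -2
kc = 0.0533                       # assumed illustrative value
kp = kc * (R * T) ** delta_ng
assert kp < kc                    # consistent with: dng < 0  =>  Kp < Kc

def predict_direction(q, kc):
    """Compare the reaction quotient Q with Kc (hypothetical helper)."""
    if q < kc:
        return "forward reaction favored"   # reactants convert to products
    if q > kc:
        return "backward reaction favored"  # products convert to reactants
    return "at equilibrium"

print(f"Kp = {kp:.3e}")
print(predict_direction(0.010, kc))   # forward reaction favored
print(predict_direction(kc, kc))      # at equilibrium
```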
Recommended age group: 7–18 Time required: one to two 45–60 minute sessions Equipment: History of the Olympic Games information sheets, film (optional). Learn all about the fascinating history of the Olympic Games, including the differences between the ancient and modern Olympic Games; the symbols, mascots and ceremonies associated with the Games; the Torch Relay and the Olympic Truce. There are also lots of amazing historical facts about the Games to discover and explore. Inspire your students to find out more through History projects, assemblies, presentations and the fact files below. They are all designed to provide you with basic background information, which you may wish to use when planning a project. You could also print the sheets and incorporate them into wall displays or presentations. Try out some of these ideas, using the fact sheets in the classroom, to get you started.
- How many differences can the pupils find between the ancient Olympic Games and the modern Games?
- Watch the Olympic torch video – when did the tradition of the torch relay start? What meaning has the relay had through history and today?
- Create an assembly about the Olympic Truce which the students could present to their peers. You could watch the Olympic Truce video in the film bank.
- Make up a quiz about the history of the Olympic Games, and challenge another class to find the answers.
- London 2012 is now history. Write/draw/talk about the top 10 best things about it. Make your own 'London 2012 – amazing facts' sheet.
For younger students, select a fact file to focus on as a reading task. Focus students' research on a specific area of the Olympic Games – for instance, what new sports have been added since the first Games, or since London 2012? Older or more able readers will be able to use the sheets independently as part of their project.
Words by Kayla Burns Art by Austin Quintana and Kayla Burns It is an annual change for all deciduous trees. As the weather grows colder, and the season morphs from autumn to winter, we witness one final show in the botanical world. With colder weather comes less sunlight, and with less sunlight comes the inability for plants to photosynthesize. Naturally, the chlorophyll in the leaves fades away, revealing the warm hues of the fall season. The world unfolds itself in bursts of red and yellow and orange. And although this transition means that these trees will soon become bare, leafless, and dormant, there is still life in these colors. They are changing because they are preparing — soaking up as much sunlight as they can, so that they may store its energy and survive the winter. The trees may lose their vibrant green coloration, but they gain a certain vitality that will get them through the cold, and allow them to prosper yet again as they reconnect in the springtime with the warmth that birthed them.
Colorism in the black community is not a new concept, but it is not acknowledged as much as it should be. The black community is well versed in racism but often oblivious to colorism. Colorism is the prejudicial or preferential treatment of people of the same race based solely on the color of their skin. Typically, individuals of lighter skin are preferred in the black community. Not only is it a preference; it is almost a fetish. Colorism is so detrimental to the community because not only are we dealing with racism, prejudice, and oppression from other races, we must also deal with the prejudice and stigmas within our own race. An old children’s rhyme captures the definition of colorism and its inner workings in a nutshell. “If you’re black, stay back; if you’re brown, stick around; if you’re yellow, you’re mellow; if you’re white, you’re all right.”
- Why it matters?
- Skin Bleaching
- Relationship to bodylore
Colorism can be dated back to slavery. While lighter/fairer skinned slaves were in the house doing domestic work, the darker skinned slaves were outside in the fields. The lighter skinned slaves were often preferred in the house because they were children or grandchildren of the plantation owner, due to the sexual assault that slaves often experienced. Although these mixed-race babies were not freed or claimed by their white fathers, they were awarded privileges, like being in the house and doing less labor-intensive work. As a result, light skin grew to become a positive attribute in the black community. Unfortunately, the end of slavery in the United States did not mean the end of colorism. Light skinned people (LSP) were awarded better employment opportunities over dark skinned people (DSP). This can be one explanation as to why many upper-class blacks were of lighter skin. Brent Staples recalled reading newspaper ads from the 1940s where job seekers would list that they were ‘light skinned’ before even listing their experience and qualifications.
Even within the community, DSP were excluded on the basis of their skin color. Oftentimes, if people were darker than a brown paper bag, they were not invited into certain social circles.
- Why it matters?
One study revealed that people tend to associate positive attributes with lighter skin tones. Students were shown a word, either “educated” or “ignorant”. They were then shown a picture of a man, followed by six other pictures of the same man: three of the photos were darker, and three were lighter. When shown the word “educated”, people tended to recall the photos with lighter skin tones rather than the one they were originally shown. A similar study was conducted using images of former President Obama with photoshopped skin tones. People who agreed with his political views selected images of him with lighter skin, while people who disagreed with his views tended to select the images of him with darker skin. These two articles show an association between light skin and positive attributes. They also go to show that, even though it may happen subconsciously, there is an underlying stigma toward DSP. Many girls of darker skin tones are underrepresented on camera. Black women on TV shows, in magazines, and on the big screen are oftentimes light skinned. Actress Zendaya, who is of light skin, even admitted to having “a bit of a privilege compared to [her] darker sisters and brothers”. Even in movies where the original characters were meant to be dark skinned, Hollywood chose to opt for a ‘light skin version.’ For example, instead of casting a DSP for the role of Nina Simone, who was a darker skinned woman, they cast light skinned actress Zoe Saldana and used makeup to make her appear darker. Actions like this are what cause little boys and girls of darker skin to believe that they aren’t beautiful. They see and notice how LSP are chosen over them on TV shows, in the movies, and even in their own classrooms.
This can lead to low self-esteem and have them believing that they are ugly because of the amount of melanin in their skin.

- Skin Bleaching

This lack of preference and representation for DSP can affect someone’s self-esteem and induce serious self-hate. One form of serious self-hate that has been on the rise is skin bleaching. This is when someone takes measures to lighten their skin using bleach, typically a topical that they apply all over in an effort to make themselves, in their mind, more beautiful. Even celebrities like Michael Jackson, Sammy Sosa, Lil Kim, and Azealia Banks have bleached their skin. Not only is this psychological warfare on the individual, but warfare on other people of color, especially those of darker skin. It influences the black community as a whole. It sends a message that “____ didn’t want to have brown skin, so why should I? I want to be like ____”. Fill in the blank.

- Relationship to bodylore

From information obtained during Dr. Milligan’s class, I can state that bodylore is the way we communicate with our bodies and how our bodies affect our social meaning. Bodylore is how we tie our bodies to our identities. Skin color is something that identifies us; that is a fact. Someone can look at a person and tell if they are a LSP or a DSP. Though perception of light and dark may differ from person to person, skin tone is something that can be seen with the naked eye. People don’t have to ask each other what their skin tone is. They don’t have to get to know each other to know what their skin tone is; you can see it on the surface. And although one might not internally or externally express their privileges as a LSP, the privileges still exist. A DSP may get the short end of the stick in several sectors of life, such as employment, sentencing, and social gatherings.
If DSP feel like white America won’t accept them, and their own community ousts them and ranks them below their light-skinned counterparts, it is no wonder that some may resort to skin alterations and expensive procedures to change their skin tone. Yes, there are many DSP who are completely comfortable in their skin, but this article shows evidence of many who are not. This should open some sort of dialogue or discussion among people of all ages and genders to ensure that they feel welcomed and wanted in the black community.

Keyondra Wilson is studying Applied Sociology at Old Dominion University. She is particularly interested in inequalities in race and social class. She works as a Graduate Research Assistant at the Social Science Research Center at Old Dominion University. Her life goal is to someday own her own charity specializing in helping disadvantaged individuals and families. Her life motto is, “Don’t Survive. Thrive.”
Weed control is a type of pest control which attempts to stop or reduce the growth of weeds, especially noxious weeds, with the aim of reducing their competition with desired flora and fauna, including domesticated plants and livestock, and, in natural settings, preventing non-native species from competing with native species. Weed control is important in agriculture. Methods include hand cultivation with hoes, powered cultivation with cultivators, smothering with mulch, lethal wilting with high heat, burning, and chemical control with herbicides (weed killers). Weeds compete with productive crops or pasture; they can be poisonous or distasteful, produce burrs or thorns, or otherwise interfere with the use and management of desirable plants by contaminating harvests or interfering with livestock. Weeds compete with crops for space, nutrients, water and light. Smaller, slower-growing seedlings are more susceptible than those that are larger and more vigorous. Onions are one of the most vulnerable crops, because they are slow to germinate and produce slender, upright stems. By contrast, broad beans produce large seedlings and suffer far fewer effects, other than during periods of water shortage at the crucial time when the pods are filling out. Transplanted crops raised in sterile soil or potting compost gain a head start over germinating weeds. Weeds also vary in their competitive abilities according to conditions and season. Tall-growing vigorous weeds such as fat hen (Chenopodium album) can have the most pronounced effects on adjacent crops, although seedlings of fat hen that appear in late summer produce only small plants. Chickweed (Stellaria media), a low-growing plant, can happily co-exist with a tall crop during the summer, but plants that have overwintered will grow rapidly in early spring and may swamp crops such as onions or spring greens.
The presence of weeds does not necessarily mean that they are damaging a crop, especially during the early growth stages, when both weeds and crops can grow without interference. However, as growth proceeds they each begin to require greater amounts of water and nutrients. Estimates suggest that weed and crop can co-exist harmoniously for around three weeks before competition becomes significant. One study found that after competition had started, the final yield of onion bulbs was reduced by almost 4% per day. Perennial weeds with bulbils, such as lesser celandine and oxalis, or with persistent underground stems, such as couch grass (Agropyron repens) or creeping buttercup (Ranunculus repens), store reserves of food and are thus able to persist through drought or winter. Some perennials such as couch grass exude allelopathic chemicals that inhibit the growth of other nearby plants. Weeds can also host pests and diseases that can spread to cultivated crops. Charlock and shepherd's purse may carry clubroot; eelworm can be harboured by chickweed, fat hen and shepherd's purse; while the cucumber mosaic virus, which can devastate the cucurbit family, is carried by a range of different weeds including chickweed and groundsel. Pests such as cutworms may first attack weeds and then move on to cultivated crops. Some plants are considered weeds by some farmers and crops by others. Charlock, a common weed in the southeastern US, is a weed to row crop growers but is valued by beekeepers, who seek out places where it blooms all winter, thus providing pollen for honeybees and other pollinators. Its bloom resists all but a very hard freeze, and recovers once the freeze ends. Annual and biennial weeds such as chickweed, annual meadow grass, shepherd's purse, groundsel, fat hen, cleavers, speedwell and hairy bittercress propagate themselves by seeding. Many produce huge numbers of seeds several times a season, some all year round.
Groundsel can produce 1,000 seeds and can continue right through a mild winter, whilst scentless mayweed produces over 30,000 seeds per plant. Not all of these will germinate at once; they germinate over several seasons, lying dormant in the soil, sometimes for years, until exposed to light. Poppy seed can survive 80–100 years, dock 50 or more. There can be many thousands of seeds in a square foot or square metre of ground, so any soil disturbance will produce a flush of fresh weed seedlings. The most persistent perennials spread by underground creeping rhizomes that can regrow from a tiny fragment. These include couch grass, bindweed, ground elder, nettles, rosebay willowherb, Japanese knotweed, horsetail and bracken, as well as creeping thistle, whose tap roots can put out lateral roots. Other perennials put out runners that spread along the soil surface. As they creep they set down roots, enabling them to colonise bare ground with great rapidity. These include creeping buttercup and ground ivy. Yet another group of perennials propagate by stolons: stems that arch back into the ground to reroot. The most familiar of these is the bramble. Weed control plans typically consist of many methods, which are divided into biological, chemical, cultural, and physical/mechanical control. In domestic gardens, methods of weed control include covering an area of ground with a material that creates an unsuitable environment for weed growth, known as a weed mat. For example, several layers of wet newspaper prevent light from reaching plants beneath, which kills them. In the case of black plastic, the greenhouse effect kills the plants. Although the black plastic sheet is effective at suppressing the weeds it covers, it is difficult to achieve complete coverage. Eradicating persistent perennials may require the sheets to be left in place for at least two seasons. Some plants are said to produce root exudates that suppress herbaceous weeds.
Tagetes minuta is claimed to be effective against couch and ground elder, whilst a border of comfrey is also said to act as a barrier against the invasion of some weeds, including couch. A 5–10 centimetres (2.0–3.9 in) layer of wood chip mulch prevents some weeds from sprouting. Gravel can serve as an inorganic mulch. Irrigation is sometimes used as a weed control measure, as in the case of paddy fields, where flooding kills any plant other than the water-tolerant rice crop. Many gardeners still remove weeds by manually pulling them out of the ground, making sure to include the roots that would otherwise allow some to re-sprout. Hoeing off weed leaves and stems as soon as they appear can eventually weaken and kill perennials, although this will require persistence in the case of plants such as bindweed. Nettle infestations can be tackled by cutting back at least three times a year, repeated over a three-year period. Bramble can be dealt with in a similar way. A highly successful, mostly manual weed removal programme in natural bushland has been the control of sea spurge by Sea Spurge Remote Area Teams in Tasmania. Ploughing includes tilling of soil, intercultural ploughing and summer ploughing. Ploughing uproots weeds, causing them to die. Summer ploughing also helps in killing pests. Mechanical tilling with various types of cultivators can remove weeds around crop plants at various points in the growing process. An Aquamog can be used to remove weeds covering a body of water. Several thermal methods can control weeds. Flame weeding uses a flame several centimetres/inches away from the weeds to singe them, giving them a sudden and severe heating. The goal of flame weeding is not necessarily burning the plant, but rather causing a lethal wilting by denaturing proteins in the weed. Similarly, hot air weeders can heat up the seeds to the point of destroying them.
Flame weeders can be combined with techniques such as stale seedbeds (preparing and watering the seedbed early, killing the nascent crop of weeds that springs up from it, and then sowing the crop seeds) and pre-emergence flaming (doing a flame pass against weed seedlings after the sowing of the crop seeds but before those seedlings emerge from the soil—a span of time that can be days or weeks). Hot foam causes the cell walls to rupture, killing the plant. Weed burners heat up soil quickly and destroy superficial parts of the plants. Weed seeds are often heat-resistant and may even respond to dry heat with increased growth. Since the 19th century, soil steam sterilization has been used to clean weeds completely from soil. Several research results confirm the high effectiveness of humid heat against weeds and their seeds. Soil solarization in some circumstances is very effective at eliminating weeds while maintaining grass. Planted grass tends to have a higher heat/humidity tolerance than unwanted weeds. In 1998, the Australian Herbicide Resistance Initiative debuted. It gathered fifteen scientists and technical staff members to conduct field surveys, collect seeds, test for resistance and study the biochemical and genetic mechanisms of resistance. A collaboration with DuPont led to a mandatory herbicide labeling program, in which each mode of action is clearly identified by a letter of the alphabet. The key innovation of the Australian Herbicide Resistance Initiative has been to focus on weed seeds. Ryegrass seeds last only a few years in soil, so if farmers can prevent new seeds from arriving, the number of sprouts will shrink each year. Until the new approach, farmers were unintentionally helping the seeds: their combines loosen ryegrass seeds from their stalks and spread them over the fields. In the mid-1980s, a few farmers hitched covered trailers, called "chaff carts", behind their combines to catch the chaff and weed seeds. The collected material is then burned.
An alternative is to concentrate the seeds into a half-metre-wide strip called a windrow and burn the windrows after the harvest, destroying the seeds. Since 2003, windrow burning has been adopted by about 70% of farmers in Western Australia. Yet another approach is the Harrington Seed Destructor, an adaptation of a coal-pulverizing cage mill that uses steel bars whirling at up to 1500 rpm. It keeps all the organic material in the field and does not involve combustion, but kills 95% of seeds. Another manual technique is the 'stale seed bed', which involves cultivating the soil, then leaving it fallow for a week or so. When the initial weeds sprout, the grower lightly hoes them away before planting the desired crop. However, even a freshly cleared bed is susceptible to airborne seed from elsewhere, as well as seed carried by passing animals on their fur, or arriving in imported manure. Buried drip irrigation involves burying drip tape in the subsurface near the planting bed, thereby limiting weeds' access to water while still allowing crops to obtain moisture. It is most effective during dry periods. Rotating crops with ones that kill weeds by choking them out, such as hemp and Mucuna pruriens, can be a very effective method of weed control. It is a way to avoid the use of herbicides and to gain the benefits of crop rotation. A biological weed control regimen can consist of biological control agents, bioherbicides, use of grazing animals, and protection of natural predators. Post-dispersal weed seed predators, such as ground beetles and small vertebrates, can contribute substantially to weed regulation by removing weed seeds from the soil surface and thus reducing seed bank size. Several studies have provided evidence for the role of invertebrates in the biological control of weeds. Companies using goats to control and eradicate leafy spurge, knapweed, and other toxic weeds have sprouted across the American West.
The above-described methods of weed control use no or very limited chemical inputs. They are preferred by organic gardeners and organic farmers. However, weed control can also be achieved by the use of herbicides. Selective herbicides kill certain targets while leaving the desired crop relatively unharmed. Some of these act by interfering with the growth of the weed and are often based on plant hormones. Herbicides are generally classified as follows: In agriculture, large-scale and systematic procedures are usually required, often carried out by machines such as large liquid herbicide 'floater' sprayers, or by aerial application. Organic weed control involves anything other than applying manufactured chemicals. Typically a combination of methods is used to achieve satisfactory control. Sulfur in some circumstances is accepted within British Soil Association standards. The Bradley Method of Bush Regeneration uses ecological processes to do much of the work. Perennial weeds also propagate by seeding; the airborne seed of the dandelion and the rosebay willowherb parachute far and wide. Dandelion and dock also put down deep tap roots, which, although they do not spread underground, are able to regrow from any piece left in the ground. One method of maintaining the effectiveness of individual strategies is to combine them with others that work in completely different ways. Thus seed targeting has been combined with herbicides. In Australia, seed management has been effectively combined with trifluralin and clethodim. Resistance occurs when a target plant species does not respond to a chemical that previously controlled it. It has been argued that over-reliance on herbicides, along with the absence of any preventive or other cultural practices, resulted in the evolution and spread of herbicide-resistant weeds.
The increasing number of herbicide-resistant weeds around the world has led to warnings to reduce frequent use of herbicides with the same or similar modes of action and to combine chemicals with other weed control methods; this is called 'Integrated Weed Management'. Herbicide resistance became a critical problem after many Australian sheep farmers switched to exclusively growing wheat in their pastures in the 1970s. In wheat fields, introduced varieties of ryegrass, while good for grazing sheep, are intense competitors with wheat. Ryegrasses produce so many seeds that, if left unchecked, they can completely choke a field. Herbicides provided excellent control, while reducing soil disruption because of less need to plough. Within little more than a decade, ryegrass and other weeds began to develop resistance. Australian farmers adapted and began diversifying their techniques. In 1983, patches of ryegrass had become immune to Hoegrass, a family of herbicides that inhibit an enzyme called acetyl coenzyme A carboxylase. Ryegrass populations were large and had substantial genetic diversity, because farmers had planted many varieties. Ryegrass is cross-pollinated by wind, so genes shuffle frequently. Farmers sprayed inexpensive Hoegrass year after year, creating selection pressure, but were diluting the herbicide in order to save money, increasing plant survival. Hoegrass was mostly replaced by a group of herbicides that block acetolactate synthase, again helped by poor application practices. Ryegrass evolved a kind of "cross-resistance" that allowed it to rapidly break down a variety of herbicides. Australian farmers lost four classes of herbicides in only a few years. As of 2013, only two herbicide classes, the Photosystem II and long-chain fatty acid inhibitors, remained as the last hope. Internationally, weed societies help collaboration in weed science and management.
In North America, the Weed Science Society of America (WSSA) was founded in 1956 and publishes three journals: Weed Science, Weed Technology, and Invasive Plant Science and Management. In Britain, the European Weed Research Council was established in 1958 and later expanded its scope under the name European Weed Research Society. The main journal of this society is Weed Research. Moreover, the Council of Australasian Weed Societies (CAWS) serves as a centre for information on Australian weeds, while the New Zealand Plant Protection Society (NZPPS) facilitates information sharing in New Zealand. Strategic weed management is a process of managing weeds at a district, regional or national scale. In Australia, the first published weed management strategies were developed in Tasmania, New South Wales and South Australia in 1999, followed by the National Weeds Strategy in 1999.
Mathematics and Geometry
Age 3 to 12

Often, students learn math by memorizing facts and solutions, with little true understanding or ability to use mathematics in everyday life. Math is a series of abstract concepts for most children, and learning tends to come much more easily when they have hands-on experience with concrete educational materials that show what is taking place in a given mathematical process. Montessori’s famous hands-on math materials make abstract concepts clear and concrete. Students can literally see and explore what is going on in math. Our approach offers a clear and logical strategy for helping students understand and develop a sound foundation in math and geometry. As an example, consider the very basis of mathematics: the decimal system – units, tens, hundreds, and thousands. Since quantities larger than twenty rarely have any meaning to a young child, Dr. Montessori reasoned that we should present this abstract concept graphically. Children cannot normally conceive of the size of a hundred, thousand, or million, much less the idea that a thousand is equal to ten hundreds or one hundred tens. Dr. Montessori overcame this obstacle by developing a concrete representation of the decimal system. Units are represented by single one-centimeter beads; a unit of ten is made up of a bar of ten beads strung together; hundreds are squares made up of ten ten-bars; and thousands are cubes made up of ten hundred-squares. Together, they form a visually and intellectually impressive tool for learning. Great numbers can be formed by very young children: “Please bring me three thousands, five hundreds, six tens and one unit.” From this foundation, all of the operations in mathematics, such as the addition of quantities into the thousands, become clear and concrete, allowing the child to internalize a clear image of how the process works.
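The bead-material representation maps directly onto place value, which a short sketch can make explicit. This is an illustrative toy only; the function name and quantities below are invented to mirror the request quoted above ("three thousands, five hundreds, six tens and one unit"):

```python
# Hypothetical sketch: composing a number from Montessori bead quantities.
# Each bead category contributes its place value, just as each material
# (unit bead, ten-bar, hundred-square, thousand-cube) does physically.
def compose(thousands=0, hundreds=0, tens=0, units=0):
    """Combine bead quantities into the number they represent."""
    return thousands * 1000 + hundreds * 100 + tens * 10 + units

print(compose(thousands=3, hundreds=5, tens=6, units=1))  # 3561
```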
We follow the same principle in introducing plane and solid geometry to very young students, using geometric insets and three-dimensional models which they learn to identify and define. Five-year-olds can commonly name geometric forms that most adults wouldn’t recognize. The study of volume, area and precise measurement in everyday applications around the school is introduced in the early years and continually reinforced and expanded. Montessori mathematics climbs in sophistication through the higher levels. It includes a careful study of the practical application of mathematics in everyday life, such as measurement, handling finances, making economic comparisons, or gathering data and making a statistical analysis. Elementary students continue to apply math in a wide range of projects and challenges. They prepare scale drawings, calculate area and volume, and build scale models of historical devices and structures. Precise measurement and comparison are crucial applications of mathematics, and our students engage in all sorts of calculations: determining the amount of gas used by the family car, the electricity burned when our lights are left on overnight, and the perimeter of the buildings. Our students are typically introduced to numbers at age 3, learning the numbers and number symbols one to ten: the red and blue rods, sandpaper numerals, association of number rods and numerals, spindle boxes, cards and counters, counting, sight recognition, concept of odd and even. Introduction to the decimal system typically begins at age 3 or 4. Units, tens, hundreds and thousands are represented by specially prepared concrete learning materials that show the decimal hierarchy in three-dimensional form: units = single beads, tens = a bar of 10 units, hundreds = 10 ten-bars fastened together into a square, thousands = a cube ten units long, ten units wide and ten units high.
The children first learn to recognize the quantities, then to form numbers with the bead or cube materials through 9,999 and to read them back, to read and write numerals up to 9,999, and to exchange equivalent quantities of units for tens, tens for hundreds, etc. Linear Counting: learning the number facts to ten (what numbers make ten, basic addition up to ten); learning the teens (11 = one ten + one unit); counting by tens (34 = three tens + four units) to one hundred. Development of the concept of the four basic mathematical operations: addition, subtraction, division, and multiplication through work with the Montessori Golden Bead Material. The child builds numbers with the bead material and performs mathematical operations concretely. (This process normally begins by age 4 and extends over the next two or three years.) Work with this material over a long period is critical to the full understanding of abstract mathematics for all but a few exceptional children. This process tends to develop in the child a much deeper understanding of mathematics. Development of the concept of “dynamic” addition and subtraction through the manipulation of the concrete math materials. (Addition and subtraction where exchanging and regrouping of numbers is necessary.) Memorization of the basic math facts: adding and subtracting numbers under 10 without the aid of the concrete materials. (Typically begins at age 5 and is normally completed by age 7.) Development of further abstract understanding of addition, subtraction, division, and multiplication with large numbers through the Stamp Game (a manipulative system that represents the decimal system as color-keyed “stamps”) and the Small and Large Bead Frame (a color-coded abacus). Skip counting with the chains of the squares of the numbers from zero to ten: i.e., counting to 25 by 5’s, to 36 by 6’s, etc. (Age 5-6) Developing a first understanding of the concept of the “square” of a number.
Skip counting with the chains of the cubes of the numbers zero to ten: i.e., counting to 1,000 by ones or tens. Developing a first understanding of the concept of the “cube” of a number. Beginning the “passage to abstraction”: the child begins to solve problems with paper and pencil while working with the concrete materials. Eventually, the materials are no longer needed. Development of the concept of long multiplication and division through concrete work with the bead and cube materials. (The child is typically 6 or younger and cannot yet do such problems on paper without the concrete materials. The objective is to develop the concept first.) Development of a more abstract understanding of “short” division through more advanced manipulative materials (the Division Board); movement to paper-and-pencil problems, and memorization of basic division facts (Normally by age 7-8). Development of a still more abstract understanding of “long” multiplication through highly advanced manipulative materials (the Multiplication Checkerboard) (Usually age 7-8). Development of a still more abstract understanding of “long division” through highly advanced manipulative materials (the Test Tube Division apparatus) (Typically by age 7-8). Solving problems involving parentheses, such as (3 × 4) – (2 + 9) = ? Missing-sign problems: in a given situation, should you add, divide, multiply or subtract? Introduction to problems involving tens of thousands, hundreds of thousands, and millions (Normally by age 7). Study of fractions: normally begins when children using the short division materials find that they have a “remainder” of one and ask whether or not the single unit can be divided further. The study of fractions begins with very concrete materials (the fraction circles), and involves learning names, symbols, equivalencies, common denominators, and simple addition, subtraction, division, and multiplication of fractions up to “tenths” (Normally by age 7-8).
Study of decimal fractions: All four mathematical operations. (Normally begins by age 8-9, and continues for about two years until the child totally grasps the ideas and processes.) Practical application problems, which are used to some extent from the beginning, become far more important around age 7-8 and afterward. Solving word problems, and determining arithmetic procedures in real situations becomes a major focus. Money: units, history, equivalent sums, foreign currencies (units and exchange). (Begins as part of social studies and applied math by age 6.) Interest: Concrete to abstract; real life problems involving credit cards and loans; principal, rate, time. Computing the squares and cubes of numbers: Cubes and squares of binomials and trinomials (Normally by age 10). Calculating square and cube roots: From concrete to abstract (Normally by age 10 or 11). The history of mathematics and its application in science, engineering, technology & economics. Reinforcing application of all mathematical skills to practical problems around the school and in everyday life. Basic data gathering, graph reading and preparation, and statistical analysis. Sensorial exploration of plane and solid figures at the Primary level (Ages 3 to 6): the children learn to recognize the names and basic shapes of plane and solid geometry through manipulation of special wooden geometric insets. They then learn to order them by size or degree. Stage I: Basic geometric shapes. (Age 3-4) Stage II: More advanced plane geometric shapes-triangles, polygons, various rectangles and irregular forms. (Age 3-5) Stage III: Introduction to solid geometric forms and their relationship to plane geometric shapes. (Age 2-5) Study of the basic properties and definitions of the geometric shapes. This is essentially as much a reading exercise as mathematics since the definitions are part of the early language materials. 
More advanced study of the nomenclature, characteristics, measurement and drawing of geometric shapes and concepts such as point, line, angle, surface, solid, properties of triangles, circles, etc. (Continues through age 12 in repeated cycles.) Congruence, similarity, equality, and equivalence. The history and applications of geometry. The theorem of Pythagoras. The calculation of area and volume.
Computer Graphics Basics

Computer graphics is the art of drawing pictures on computer screens with the help of programming. It involves the computation, creation, and manipulation of data. In other words, we can say that computer graphics is a rendering tool for the generation and manipulation of images.

Cathode Ray Tube

The primary output device in a graphical system is the video monitor. The main element of a video monitor is the Cathode Ray Tube (CRT), shown in the following illustration. The operation of a CRT is very simple − The electron gun emits a beam of electrons (cathode rays). The electron beam passes through focusing and deflection systems that direct it towards specified positions on the phosphor-coated screen. When the beam hits the screen, the phosphor emits a small spot of light at each position contacted by the electron beam. The picture is redrawn by directing the electron beam back over the same screen points quickly. There are two ways (random scan and raster scan) by which we can display an object on the screen.

Raster Scan

In a raster scan system, the electron beam is swept across the screen, one row at a time from top to bottom. As the electron beam moves across each row, the beam intensity is turned on and off to create a pattern of illuminated spots. The picture definition is stored in a memory area called the Refresh Buffer or Frame Buffer. This memory area holds the set of intensity values for all the screen points. Stored intensity values are then retrieved from the refresh buffer and “painted” on the screen one row (scan line) at a time, as shown in the following illustration. Each screen point is referred to as a pixel (picture element) or pel. At the end of each scan line, the electron beam returns to the left side of the screen to begin displaying the next scan line.
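The frame buffer idea described above can be sketched in a few lines of code. This is an illustrative toy, not how real display hardware is programmed; the buffer size, the 0-255 intensity range, and the rendering characters are all invented for the example:

```python
# Minimal sketch of a raster-scan frame buffer: the buffer holds one
# intensity value per screen point (pixel), and the "display" reads it
# back one scan line (row) at a time, top to bottom.
WIDTH, HEIGHT = 8, 4

# Refresh (frame) buffer: one intensity value per pixel, initially off.
frame_buffer = [[0 for _ in range(WIDTH)] for _ in range(HEIGHT)]

def set_pixel(x, y, intensity):
    """Store an intensity value for the pixel at (x, y)."""
    frame_buffer[y][x] = intensity

def refresh():
    """Paint the screen one scan line at a time, as a raster scan does."""
    lines = []
    for row in frame_buffer:  # top to bottom
        # left to right along each scan line
        lines.append("".join("#" if v else "." for v in row))
    return "\n".join(lines)

set_pixel(1, 1, 255)
set_pixel(2, 2, 255)
print(refresh())
```

The stored intensities persist between refreshes, which is why a raster display can redraw the same picture each cycle without recomputing it.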
Random Scan (Vector Scan) In this technique, the electron beam is directed only to the part of the screen where the picture is to be drawn rather than scanning from left to right and top to bottom as in raster scan. It is also called vector display, stroke-writing display, or calligraphic display. Picture definition is stored as a set of line-drawing commands in an area of memory referred to as the refresh display file. To display a specified picture, the system cycles through the set of commands in the display file, drawing each component line in turn. After all the line-drawing commands are processed, the system cycles back to the first line command in the list. Random-scan displays are designed to draw all the component lines of a picture 30 to 60 times each second. Application of Computer Graphics Computer Graphics has numerous applications, some of which are listed below − Computer graphics user interfaces (GUIs) − A graphic, mouse-oriented paradigm which allows the user to interact with a computer. Business presentation graphics − "A picture is worth a thousand words". Cartography − Drawing maps. Weather Maps − Real-time mapping, symbolic representations. Satellite Imaging − Geodesic images. Photo Enhancement − Sharpening blurred photos. Medical imaging − MRIs, CAT scans, etc. - Non-invasive internal examination. Engineering drawings − mechanical, electrical, civil, etc. - Replacing the blueprints of the past. Typography − The use of character images in publishing - replacing the hard type of the past. Architecture − Construction plans, exterior sketches - replacing the blueprints and hand drawings of the past. Art − Computers provide a new medium for artists. Training − Flight simulators, computer aided instruction, etc. Entertainment − Movies and games. Simulation and modeling − Replacing physical modeling and enactments
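Returning to the random-scan section above: the refresh display file, which stores the picture as line-drawing commands rather than per-pixel intensities, can also be sketched in code. The command names ("MOVE"/"LINE") and the cycling function here are hypothetical, chosen only to illustrate the idea:

```python
# Hypothetical refresh display file: the picture definition is a list of
# line-drawing commands instead of a grid of intensity values.
display_file = [
    ("MOVE", 0, 0),    # position the beam without drawing
    ("LINE", 10, 0),   # draw from the current position to (10, 0)
    ("LINE", 10, 10),
    ("LINE", 0, 0),    # close the triangle
]

def refresh_cycle(commands):
    """One pass through the display file: draw each component line in turn."""
    pen = (0, 0)
    segments = []
    for op, x, y in commands:
        if op == "LINE":
            segments.append((pen, (x, y)))  # beam traces from pen to (x, y)
        pen = (x, y)                        # beam ends at (x, y) either way
    return segments

# A random-scan display would repeat this cycle 30 to 60 times per second.
print(refresh_cycle(display_file))
```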
(Gum Disease, Periodontitis, Trench Mouth)

Gingivitis is an inflammation of the gums (known to doctors as the gingiva) caused by bacteria. The bacteria that cause gingivitis lurk in the gum line, at the point where the teeth emerge. Many species of bacteria are involved, but they go by the universal name of plaque. Plaque is made of bacteria, mucus, and small particles of food. New bacteria arrive constantly, and if they're not brushed off within about 2 days, they form a rock-hard layer called tartar. Toothbrushes and dental floss can't remove tartar; only a dentist can.

Some people are more prone to gingivitis than others. Gingivitis is particularly likely to occur in people with diabetes or AIDS, or in association with:
- poor-fitting fillings and crowns (also known as caps)
- mouth breathing
- allergic reactions (e.g., to cinnamon gum)
- vitamin C deficiency (scurvy)
- niacin (vitamin B3) deficiency (pellagra)
- medications (e.g., use of the female contraceptive pill)
- poorly aligned teeth or poorly fitted mouth appliances (such as retainers or crowns)

Pregnant women frequently have gum problems. Hormonal changes and tartar can combine to provoke an excess growth of gum tissue. Sometimes a lump forms that may bleed easily. It's called a pregnancy tumour, though it has nothing to do with cancer. This lump can shield areas of the gum line from brushing, letting bacteria prosper in safety.

Postmenopausal women can develop a painful condition called desquamative gingivitis. For unknown reasons, the outer layers of the gums come away from the teeth and lose their solidity. This disease can be very painful, as nerve endings are often exposed.
Some medications are also associated with gingivitis, including:
- cyclosporine* (used to treat rheumatoid arthritis and other autoimmune diseases)
- phenytoin (used to control epilepsy and other seizures)
- calcium channel blockers such as nifedipine (used to treat high blood pressure and other heart conditions)

Some viruses can also infect the mouth. The one most likely to attack the gums is the herpes virus. It causes tiny ulcers and holes to appear in the gums and other parts of the mouth. This disease is called acute herpetic gingivostomatitis ("stoma" is the medical term for a mouth-like opening). It only strikes people who have just caught herpes for the first time.

Symptoms and Complications

Typical bacterial gingivitis is usually a painless condition, even when the gums are bleeding. The gums become bright red and swell up. They are less firm than usual and may even be movable. They are likely to bleed during brushing and perhaps eating; sometimes they bleed at night.

Herpetic gingivostomatitis also turns the gums bright red, but it can be easily distinguished because it's usually quite painful. Dozens of tiny white or yellow sores are visible in the gums and inner cheeks.

The tartar that can be seen at the gum line may be the tip of the iceberg. It generally spreads between the teeth and gums, forcing the two apart and living in the newly created pocket. There, the bacteria release chemicals that can eat away at the bones that hold the roots of the teeth. These same chemicals cause bad breath.

Trench mouth, also known as Vincent's infection, is a particularly severe form of gingivitis caused by a combination of two bacteria. Your dentist may refer to it by its other name, acute necrotizing ulcerative gingivitis (or ANUG). This disease causes a rapid onset of swelling, bleeding, severe pain, and terrible bad breath. The gums take on a grey appearance.

Gum disease has been linked to various health concerns such as premature birth, lung disease, heart disease, stroke, and heart attack.
Making the Diagnosis

Gingivitis is easily diagnosed by the appearance of the gums. The appearance of the inflammation will help your doctor or dentist distinguish a bacterial infection from the herpes virus. Scrapings could yield information on the species of bacteria involved, but this is rarely relevant to treatment, so it's not generally done. Occasionally, gingivitis is the first sign of some other disease, such as diabetes.

Treatment and Prevention

Thorough flossing and brushing can prevent gingivitis. Tartar-control toothpaste, though not scientifically evaluated, may also help with prevention. Some types of antibacterial mouthwash may also be helpful. The most effective ones contain the ingredient chlorhexidine (e.g., Perichlor®, Denti-Care®). Most traditional mouthwashes contain high amounts of alcohol, which may cause alcohol burn. These mouthwashes can be very irritating to already inflamed gums, and they do not get rid of the sulphur-containing compounds (bacterial toxins) that cause bad breath. Mouthwashes containing chlorhexidine or chlorine dioxide will control bacterial growth. Electric toothbrushes are also more effective than manual toothbrushes in removing the plaque that causes gingivitis.

Studies have shown that brushing can prevent gingivitis in adults and children. Flossing appears not to help in children, though it's a good habit for them to form. However, people with diseases that make gingivitis more likely (such as diabetes) shouldn't rely on good oral hygiene alone to prevent it; treating the underlying disease itself is very important in preventing gingivitis.

Once plaque has turned to tartar, only a dentist can remove it. Dentists recommend having your teeth professionally cleaned every year or every 6 months. Plaque and tartar removal can also be the treatment for early gingivitis. Once the plaque and tartar are gone, the inflammation tends to subside quickly.
If the disease develops into periodontitis, periodontal deep cleaning or periodontal surgery may be needed. This involves opening up the gums to get at the infected area. Infected tissue is removed, and the root of the threatened tooth is scaled (the tartar is scraped off). Sometimes this can be done without actually cutting the gum (periodontal deep cleaning).

Acute herpetic gingivostomatitis can't be cured, but it goes away on its own after about 2 weeks. Pregnancy tumours can be removed by a dentist. Trench mouth, or acute necrotizing ulcerative gingivitis (ANUG), can be treated with appropriate antibiotics and thorough tooth and gum cleaning by a dental professional. Early treatment by your dentist is recommended. Postmenopausal women who have desquamative gingivitis may benefit from hormone replacement therapy.

*All medications have both common (generic) and brand names. The brand name is what a specific manufacturer calls the product (e.g., Tylenol®). The common name is the medical name for the medication (e.g., acetaminophen). A medication may have many brand names, but only one common name. This article lists medications by their common names. For information on a given medication, check our Drug Information database. For more information on brand names, speak with your doctor or pharmacist.
The Goal of Forestry

The chief goal of forestry is to devise methods of felling trees that provide for the growth of a new forest crop: to ensure that adequate seed of desirable species is shed onto the ground and that conditions are optimal for seed germination and the survival of saplings. The basic rule of timber management is sustained yield; that is, to cut each year a volume of timber no greater than the volume of wood that grew during that year on standing trees. Desirable timber species are usually those of the native climax vegetation (see ecology) that can perpetuate themselves by natural succession, although at times (intentionally or unintentionally) a forest may not represent the climax vegetation—such as the pine of the southeastern United States, which grows faster than, and has replaced, the hardwoods destroyed by fire and logging. The Douglas fir of western forests is encouraged because it is more valuable than the climax vegetation of mixed conifers that tends to establish itself in the absence of human intervention. Planting trees of different sizes (whether because of species or of age) prevents crowding and ensures maximal growth for the given area. Extermination of diseases and insect pests is standard forestry practice.

The Columbia Electronic Encyclopedia, 6th ed. Copyright © 2012, Columbia University Press. All rights reserved.
When we think of student engagement in learning activities, it is often convenient to understand engagement with an activity as being represented by good behavior (i.e. behavioral engagement), positive feelings (i.e. emotional engagement), and, above all, student thinking (i.e. cognitive engagement) (Fredricks, 2014). This is because students may be behaviorally and/or emotionally invested in a given activity without actually exerting the necessary mental effort to understand and master the knowledge, craft, or skill that the activity promotes. In light of this, research suggests that considering the following interrelated elements when designing and implementing learning activities may help increase student engagement behaviorally, emotionally, and cognitively, thereby positively affecting student learning and achievement. 1. Make It Meaningful In aiming for full engagement, it is essential that students perceive activities as being meaningful. Research has shown that if students do not consider a learning activity worthy of their time and effort, they might not engage in a satisfactory way, or may even disengage entirely in response (Fredricks, Blumenfeld, & Paris, 2004). To ensure that activities are personally meaningful, we can, for example, connect them with students' previous knowledge and experiences, highlighting the value of an assigned activity in personally relevant ways. Also, adult or expert modeling can help to demonstrate why an individual activity is worth pursuing, and when and how it is used in real life. 2. Foster a Sense of Competence The notion of competence may be understood as a student's ongoing personal evaluation of whether he or she can succeed in a learning activity or challenge. (Can I do this?) Researchers have found that effectively performing an activity can positively impact subsequent engagement (Schunk & Mullen, 2012). 
To strengthen students' sense of competence in learning activities, the assigned activities could: - Be only slightly beyond students' current levels of proficiency - Make students demonstrate understanding throughout the activity - Show peer coping models (i.e. students who struggle but eventually succeed at the activity) and peer mastery models (i.e. students who try and succeed at the activity) - Include feedback that helps students to make progress 3. Provide Autonomy Support We may understand autonomy support as nurturing the students' sense of control over their behaviors and goals. When teachers relinquish control (without losing power) to the students, rather than promoting compliance with directives and commands, student engagement levels are likely to increase as a result (Reeve, Jang, Carrell, Jeon, & Barch, 2004). Autonomy support can be implemented by: - Welcoming students' opinions and ideas into the flow of the activity - Using informational, non-controlling language with students - Giving students the time they need to understand and absorb an activity by themselves 4. Embrace Collaborative Learning Collaborative learning is another powerful facilitator of engagement in learning activities. When students work effectively with others, their engagement may be amplified as a result (Wentzel, 2009), mostly due to experiencing a sense of connection to others during the activities (Deci & Ryan, 2000). To make group work more productive, strategies can be implemented to ensure that students know how to communicate and behave in that setting. Teacher modeling is one effective method (i.e. the teacher shows how collaboration is done), while avoiding homogeneous groups and grouping by ability, fostering individual accountability by assigning different roles, and evaluating both the student and the group performance also support collaborative learning. 5. 
Establish Positive Teacher-Student Relationships

High-quality teacher-student relationships are another critical factor in determining student engagement, especially in the case of difficult students and those from lower socioeconomic backgrounds (Fredricks, 2014). When students form close and caring relationships with their teachers, they are fulfilling their developmental need for a connection with others and a sense of belonging in society (Scales, 1991). Teacher-student relationships can be facilitated by:
- Caring about students' social and emotional needs
- Displaying positive attitudes and enthusiasm
- Increasing one-on-one time with students
- Treating students fairly
- Avoiding deception or promise-breaking

6. Promote Mastery Orientations

Finally, students' perception of learning activities also determines their level of engagement. When students pursue an activity because they want to learn and understand (i.e. mastery orientations), rather than merely to obtain a good grade, look smart, please their parents, or outperform peers (i.e. performance orientations), their engagement is more likely to be full and thorough (Anderman & Patrick, 2012). To encourage this mastery-orientation mindset, consider various approaches, such as framing success in terms of learning (e.g. criterion-referenced) rather than performing (e.g. obtaining a good grade). You can also place the emphasis on individual progress by reducing social comparison (e.g. making grades private) and recognizing student improvement and effort.

Do you generally consider any of the above facilitators of engagement when designing and implementing learning activities? If so, which ones? If not, which are new to you?

References
- Ames, C. (1992). Achievement goals and the classroom motivational climate. In D. Schunk & J. Meece (Eds.), Student perceptions in the classroom (pp. 327-348). Hillsdale, NJ: L. Erlbaum.
- Anderman, E. M., & Patrick, H. (2012).
Achievement goal theory, conceptualization of ability/intelligence, and classroom climate. In S. Christenson, A. Reschly, & C. Wylie (Eds.), Handbook of Research on Student Engagement (pp. 173-191). New York, NY: Springer. - Assor, A., Kaplan, H., & Roth, G. (2002). Choice is good, but relevance is excellent: Autonomy-enhancing and suppressing teacher behaviours predicting students' engagement in schoolwork. British Journal of Educational Psychology, 72(2), 261-278. - Baker, J. A., Grant, S., & Morlock, L. (2008). The teacher-student relationship as a developmental context for children with internalizing or externalizing behavior problems. School Psychology Quarterly, 23(1), 3-15. - Bandura, A., & Schunk, D. H. (1981). Cultivating competence, self-efficacy, and intrinsic interest through proximal self-motivation. Journal of Personality and Social Psychology, 41(3), 586-598. - Belland, B. R., Kim, C., & Hannafin, M. J. (2013). A framework for designing scaffolds that improve motivation and cognition. Educational Psychologist, 48(4), 243-270. - Black, P., Harrison, C., Lee, C., & Marshall, B. (2003). Assessment for learning: Putting it into practice. Maidenhead: Open University Press. - Deci, E. L., & Ryan, R. M. (2000). The "what" and "why" of goal pursuits: Human needs and the self-determination of behavior. Psychological Inquiry, 11(4), 227–268. - Driscoll, K. C., & Pianta, R. C. (2010). Banking time in head start: Early efficacy of an intervention designed to promote supportive teacher-child relationships. Early Education and Development, 21(1), 38-64. - Fredricks, J. A. (2014). Eight Myths of Student Disengagement: Creating Classrooms of Deep Learning. Los Angeles: Corwin. - Fredricks, J. A., Blumenfeld, P. C., & Paris, A. H. (2004). School engagement: Potential of the concept, state of the evidence. Review of Educational Research, 74(1), 59-109. - Gillies, R. M., & Ashman, A. F. (1998). 
Behavior and interactions of children in cooperative groups in lower and middle elementary grades. Journal of Educational Psychology, 90(4), 746-757. - Gregory, A., & Weinstein, R. S. (2004). Connection and regulation at home and in school: Predicting growth in achievement for adolescents. Journal of Adolescent Research, 19(4), 405-427. - Johnson, D. W., Johnson, R. T., & Holubec, E. (1994). The new circles of learning: Cooperation in the classroom and school. Alexandria, VA: Association for Supervision and Curriculum Development. - Kidd, C., Palmeri, H., & Aslin, R. N. (2013). Rational snacking: Young children's decision making on the marshmallow task is moderated by beliefs about environmental reliability. Cognition, 126(1), 109-114. - Linnenbrink, E. A., & Pintrich, P. R. (2003). The role of self-efficacy beliefs in student engagement and learning in the classroom. Reading & Writing Quarterly, 19(2), 119-137. - Middleton, M. J., & Midgley, C. (2002). Beyond motivation: Middle school students' perceptions of press for understanding in math. Contemporary Educational Psychology, 27(3), 373-391. - Newmann, F., Wehlage, G., & Lamborn, D. (1992). The significance and sources of student engagement. In Student Engagement and Achievement in American Secondary Schools (pp. 11-39). ERIC. - Noels, K. A., Clement, R., & Pelletier, L. G. (1999). Perceptions of teachers' communicative style and students' intrinsic and extrinsic motivation. The Modern Language Journal, 83(1), 23-34. - Peter, F., & Dalbert, C. (2010). Do my teachers treat me justly? Implications of students' justice experience for class climate experience. Contemporary Educational Psychology, 35(4), 297-305. - Reeve, J. (1998). Autonomy support as an interpersonal motivating style: Is it teachable? Contemporary Educational Psychology, 23(3), 312-330. - Reeve, J., & Jang, H. (2006). What teachers say and do to support students' autonomy during a learning activity. 
Journal of Educational Psychology, 98(1), 209-218. - Reeve, J., Jang, H., Carrell, D., Jeon, S., & Barch, J. (2004). Enhancing students' engagement by increasing teachers' autonomy support. Motivation and Emotion, 28(2), 147-169. - Scales, P. C. (1991). Creating a developmental framework: The positive possibilities of young adolescents. In A portrait of young adolescents in the 1990s: Implications for promoting healthy growth and development. ERIC. - Schunk, D., & Swartz, C. (1993). Goals and progress feedback: Effects on self-efficacy and writing achievement. Contemporary Educational Psychology, 18, 337-354. - Schunk, D. H., & Mullen, C. A. (2012). Self-Efficacy as an engaged learner. In S. Christenson, A. Reschly, & C. Wylie (Eds.), Handbook of research on student engagement (pp. 219-235). Boston, MA: Springer US. - Schunk, D. H. (2003). Self-efficacy for reading and writing: influence of modeling, goal setting, and self-evaluation. Reading & Writing Quarterly, 19(2), 159–172. - Shernoff, D. J., Csikszentmihalyi, M., Shneider, B., & Shernoff, E. S. (2003). Student engagement in high school classrooms from the perspective of flow theory. School Psychology Quarterly, 18(2), 158-176. - Slavin, R. E. (1996). Cooperative learning in middle and secondary schools. The Clearing House, 69(4), 200-204. - Turner, J. C., Midgley, C., Meyer, D. K., Gheen, M., Anderman, E. M., Kang, Y., & Patrick, H. (2002). The classroom environment and students' reports of avoidance strategies in mathematics: A multimethod study. Journal of Educational Psychology, 94(1), 88-106. - Tyler, J. M., Feldman, R. S., & Reichert, A. (2006). The price of deceptive behavior: Disliking and lying to people who lie to us. Journal of Experimental Social Psychology, 42(1), 69-77. - Webb, N. M., Nemer, K. M., & Ing, M. (2009). Small-Group reflections: Parallels between teacher discourse and student behavior in peer-directed groups. The Journal of the Learning Sciences, 15(1), 63–119. - Wentzel, K. R. 
(2009). Peers and academic functioning at school. In K. Rubin, W. Bukowski, & B. Laursen (Eds.), Handbook of peer interactions, relationships, and groups. Social, emotional, and personality development in context (pp. 531-547). New York, NY: Guilford Press. - Willingham, D. T. (2009). Why don't students like school?: A cognitive scientist answers questions about how the mind works and what it means for the classroom. San Francisco, CA: Jossey-Bass.
In order to understand what freedom of expression is (articulated in Article 19 of the United Nations "Universal Declaration of Human Rights"), students first need to be able to define expression and recognize its various forms. This lesson focuses primarily on the freedom of speech, but it also examines the right to have an opinion and express that opinion without interference from any person or government.

Article 19: Everyone has the right to freedom of opinion and expression; this right includes freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media and regardless of frontiers.
"The Universal Declaration of Human Rights", 1948

Grade Level: 9-12
Time Allotment: Two to four 45-minute class periods
Subject Matter: Freedom of expression

Learning Objectives:
- Use primary sources such as news reports and video to gather information about current events and recent world history.
- Analyze the information gathered from these primary sources to draw conclusions about freedom of expression and its various forms.
- Encourage students to form and create their own individual ideas and concepts about freedom of expression.
- Gain a broader view and understanding of freedom of expression and of Article 19 of the "Universal Declaration of Human Rights" (UDHR) and the effect it has on the United States, students themselves, and the global community as a whole.

National Standards for History

Standards 5A, 5B, 5C, 5D, 5E, 5F
- A. Identify issues and problems in the past and analyze the interests, values, perspectives, and points of view of those involved in the situation.
- B. Marshal evidence of antecedent circumstances and current factors contributing to contemporary problems and alternative courses of action.
- C. Identify relevant historical antecedents and differentiate them from those that are inappropriate and irrelevant to contemporary issues.
- D. Evaluate alternative courses of action, keeping in mind the information available at the time, in terms of ethical considerations, the interests of those affected by the decision, and the long- and short-term consequences of each.
- E. Formulate a position or course of action on an issue by identifying the nature of the problem, analyzing the underlying factors contributing to the problem, and choosing a plausible solution from a choice of carefully evaluated options.
- F. Evaluate the implementation of a decision by analyzing the interests it served; estimating the position, power, and priority of each player involved; assessing the ethical dimensions of the decision; and evaluating its costs and benefits from a variety of perspectives.

National Standards for Social Studies
- IX. Global Connections: Social studies programs should include experiences that provide for the study of global connections and interdependence.
- X. Civic Ideals and Practices: Social studies programs should include experiences that provide for the study of the ideals, principles, and practices of citizenship in a democratic republic.
Earth is the only planet in the Solar System that shows a form of planetary evolution known as plate tectonics. In this mode of planetary cooling, a convecting mantle conducts heat through a relatively rigid outer shell called the lithosphere. This shell is produced at spreading centres (boundaries between two separating plates), and is recycled back into the mantle at subduction zones (regions in which one plate is being forced underneath another). Why planetary cooling on Earth operates in this mode, and when the current period of plate tectonics began, remain subject to debate [1-3]. Writing in Nature, Sobolev and Brown [4] propose answers to these questions that could have fundamental implications for understanding the connections between internal dynamics and surface processes — including climatic and atmospheric processes — on Earth and other planets.

Subduction is one of the main drivers of plate motion, and therefore of heat loss, on Earth. Fast-moving plates lead to enhanced cooling. By contrast, if plates slow down or are stalled, heat becomes trapped in the mantle and cooling is reduced. The rate at which subduction can proceed depends on a few factors [5,6]. These include the material strength of the descending plate, and the strength of the interface between the descending and overriding plates (Fig. 1). The interface strength is a parameter that can be particularly sensitive to the composition of the material that is being subducted [7,8]. For example, magnesium- and iron-rich igneous rocks (those formed by the solidification of lava or magma) that characterize the oceanic crust are dry and strong, and therefore lead to low subduction velocities [8]. By contrast, blankets of sediment that are mostly derived from eroding continents and laid down on top of the oceanic crust are wet and weak, and result in accelerated subduction — a process known as sediment lubrication.
This process might have influenced the dynamics of several modern subduction zones, including those associated with the Andes [7,9] and the Himalayas [8].

Sobolev and Brown explore the potential role of sediment lubrication in the dynamics of early Earth (about 4.5 billion to 2 billion years ago). They consider global glaciation events — periods in which Earth’s surface was mostly covered in ice — that are well established in the geological record. They point out that these events led to enhanced weathering and erosion of emerging continents. Moreover, they hypothesize that the corresponding supply of sediments at continental margins helped to lubricate the interfaces between descending and overriding plates, and therefore facilitated Earth’s modern episode of plate tectonics.

To support this hypothesis, the authors searched for correlations between the vigour of subduction-dominated plate tectonics and the supply of continent-derived sediments to the oceans through time. As proxies for subduction rates, Sobolev and Brown compared temporal variations in several existing data sets, including those describing the cumulative length of mountain belts — interpreted by the authors to reflect the frequency of continental collisions — and the occurrences of paired metamorphic belts. Such belts comprise parallel strips of metamorphic rock (that formed from pre-existing rock under extreme heat or pressure) that have a similar age but contrasting mineral assemblages. Paired metamorphic belts have long been considered to be hallmarks of asymmetric subduction [10]. As proxies for sediment delivery to the oceans, the authors focused on the relative influences of crustal material (representing sediments) and mantle material on the geochemistry of both sea water and volcanic rocks.

The strength of Sobolev and Brown’s work is that it brings together globally compiled data sets to form a unified hypothesis. Each of these data sets is inherently complex and has disputed implications.
But when combined, the data sets seem to coalesce into three broad peaks over geological time. One of these peaks coincides with the emergence of continents above sea level, and all three peaks seem to coincide largely with global glaciation events. The peaks also seem to precede the assembly of continental landmasses known as supercontinents, which was presumably driven by increased plate motion.

Although the concept put forward by Sobolev and Brown is intriguing, there is more work to be done to test it. A key avenue for further exploration is quantifying the feedback between sediment lubrication and mountain building. For example, the development of elevated topography in the overriding plate increases the frictional resistance of the plate interface, and therefore reduces plate velocities [11,12]. Yet simultaneously, the growth of mountain ranges induces surface erosion and increases sediment supply. Moreover, volcanic activity and the burial of carbon at subduction zones affect the global climate, and therefore erosion. Which of these processes dominates over specific timescales, and how the processes are coupled, are poorly understood.

It would also be valuable to assess sediment fluxes and budgets (the differences between inputs and outputs), and the geochemical tracers of these sediments from continental mountain belts to subduction trenches. Such an assessment would need to take into account how Earth’s lithosphere and climate at these early times differed from those of today. From the viewpoint of geology, better constraints from natural rocks and experiments on material strength for both the shallow (frictional) and deep (viscous) plate interface are needed to quantify the importance of changes in the physical characteristics of subducted rocks on interface properties.

Nature 570, 38-39 (2019)
This table presents the different categories of ecosystem services that ecosystems provide.

Provisioning Services are ecosystem services that describe the material or energy outputs from ecosystems. They include food, water and other resources.

Food: Ecosystems provide the conditions for growing food. Food comes principally from managed agro-ecosystems, but marine and freshwater systems and forests also provide food for human consumption. Wild foods from forests are often underestimated.

Raw materials: Ecosystems provide a great diversity of materials for construction and fuel, including wood, biofuels and plant oils that are directly derived from wild and cultivated plant species.

Fresh water: Ecosystems play a vital role in the global hydrological cycle, as they regulate the flow and purification of water. Vegetation and forests influence the quantity of water available locally.

Medicinal resources: Ecosystems and biodiversity provide many plants used as traditional medicines, as well as the raw materials for the pharmaceutical industry. All ecosystems are a potential source of medicinal resources.

Regulating Services are the services that ecosystems provide by acting as regulators, e.g. regulating the quality of air and soil or providing flood and disease control.

Local climate and air quality: Trees provide shade, whilst forests influence rainfall and water availability both locally and regionally. Trees and other plants also play an important role in regulating air quality by removing pollutants from the atmosphere.

Carbon sequestration and storage: Ecosystems regulate the global climate by storing and sequestering greenhouse gases. As trees and plants grow, they remove carbon dioxide from the atmosphere and effectively lock it away in their tissues. In this way forest ecosystems are carbon stores. Biodiversity also plays an important role by improving the capacity of ecosystems to adapt to the effects of climate change.
Moderation of extreme events: Extreme weather events or natural hazards include floods, storms, tsunamis, avalanches and landslides. Ecosystems and living organisms create buffers against natural disasters, thereby preventing possible damage. For example, wetlands can soak up flood water, whilst trees can stabilize slopes. Coral reefs and mangroves help protect coastlines from storm damage.

Waste-water treatment: Ecosystems such as wetlands filter both human and animal waste and act as a natural buffer to the surrounding environment. Through the biological activity of microorganisms in the soil, most waste is broken down; pathogens (disease-causing microbes) are eliminated, and the level of nutrients and pollution is reduced.

Erosion prevention and maintenance of soil fertility: Soil erosion is a key factor in the process of land degradation and desertification. Vegetation cover provides a vital regulating service by preventing soil erosion. Soil fertility is essential for plant growth and agriculture, and well-functioning ecosystems supply the soil with the nutrients required to support plant growth.

Pollination: Insects and wind pollinate plants and trees, which is essential for the development of fruits, vegetables and seeds. Animal pollination is an ecosystem service mainly provided by insects but also by some birds and bats. Some 87 out of the 115 leading global food crops depend upon animal pollination, including important cash crops such as cocoa and coffee (Klein et al. 2007).

Biological control: Ecosystems are important for regulating pests and vector-borne diseases that attack plants, animals and people. Ecosystems regulate pests and diseases through the activities of predators and parasites. Birds, bats, flies, wasps, frogs and fungi all act as natural controls.

Habitats for species: Habitats provide everything that an individual plant or animal needs to survive: food, water and shelter.
Each ecosystem provides different habitats that can be essential for a species' lifecycle. Migratory species including birds, fish, mammals and insects all depend upon different ecosystems during their movements.
- Maintenance of genetic diversity: Genetic diversity is the variety of genes between and within species populations. Genetic diversity distinguishes different breeds or races from each other, thus providing the basis for locally well-adapted cultivars and a gene pool for further developing commercial crops and livestock. Some habitats have an exceptionally high number of species, which makes them more genetically diverse than others; these are known as 'biodiversity hotspots'.
- Recreation and mental and physical health: Walking and playing sports in green space is not only a good form of physical exercise but also lets people relax. The role that green space plays in maintaining mental and physical health is increasingly being recognized, despite difficulties of measurement.
- Tourism: Ecosystems and biodiversity play an important role for many kinds of tourism, which in turn provides considerable economic benefits and is a vital source of income for many countries. In 2008 global earnings from tourism totaled US$ 944 billion. Cultural tourism and eco-tourism can also educate people about the importance of biological diversity.
- Aesthetic appreciation and inspiration for culture, art and design: Language, knowledge and the natural environment have been intimately related throughout human history. Biodiversity, ecosystems and natural landscapes have been the source of inspiration for much of our art, culture and, increasingly, science.
- Spiritual experience and sense of place: In many parts of the world natural features such as specific forests, caves or mountains are considered sacred or have a religious meaning. Nature is a common element of all major religions, and traditional knowledge and associated customs are important for creating a sense of belonging.
TEEBAgriFood Interim Report: The Interim Report introduces the key questions, issues and arguments to be addressed by TEEBAgriFood. TEEB is bringing together economists, business leaders, agriculturalists and experts in biodiversity and ecosystem services to systematically review the economic interdependencies between agriculture and natural ecosystems, and provide a comprehensive economic valuation of eco-agri-food systems. (Alexander Müller, TEEB for Agriculture & Food)

TEEB Challenges and Responses: TEEB's progress, challenges and responses towards mainstreaming the economics of nature. [ENG] [ESP]

TEEB for Agriculture & Food Concept Note: February 2014 - The Concept Note presents the case for, and proposed outline content of, a TEEB for Agriculture & Food study.

Natural Capital Accounting and Water Quality: Commitments, Benefits, Needs and Progress: December 2013 - Inspired by the growing global focus on natural capital accounting, the briefing note outlines existing guidance and examples on water quality accounting and identifies the ongoing challenges related to the development of natural capital accounting and water quality accounting, in order to encourage debate and commitment towards effective water and biodiversity policy.
Smokeless Tobacco Contains at Least 28 Cancer-Causing Chemicals

Smokeless tobacco is a tobacco product that is not burned. It includes chewing tobacco, dip, and snuff. At least 28 chemicals in these products have been found to cause cancers, including:
- Esophageal cancer
- Mouth cancer
- Pancreatic cancer

Smokeless tobacco causes tooth decay in exposed tooth roots and may cause your gums to pull away from your teeth. If this happens, your gums will not grow back. Leathery white patches and red sores are common in dippers and chewers, and these injuries can eventually turn into cancer. But that's not all. Recent research shows that smokeless tobacco might also cause problems beyond the mouth. Scientists are looking at the possibility that using smokeless tobacco might play a role in causing heart disease and stroke.
Ants are closely related to wasps, bees and sawflies. They undergo a complete metamorphosis, starting as an egg, then larva, pupa, and adult. Bed bugs feed on the blood of mammals and birds. Adult bed bugs can survive for about 6 to 7 months without a blood meal. They are attracted to carbon dioxide and heat produced by their host. Activity starts around 7 p.m. and continues until midnight or later, but they can adapt to daysleepers. The common name "chinch bug" comes from the Spanish word chinche, which means bug or pest. Chinch bugs are especially attracted to St. Augustine grass, and cause millions of dollars worth of damage each year. Domestic (indoor) cockroaches live their entire lives inside structures. Species include the German cockroach and Brown-banded cockroach. These indoor roaches are unable to survive away from humans or human activity. Fleas are wingless parasites that feed off the blood of mammals or birds. House Flies are a pest that can carry serious diseases. They are capable of carrying more than 100 pathogens, including those causing typhoid, cholera, salmonellosis, bacillary dysentery and tuberculosis. Rodents are mammals with sharp, continuously growing incisors they use to gnaw. Most rodents eat seeds or plants, though some have more varied diets. Some species have historically been pests, eating seeds stored by people and spreading disease. Spiders are air-breathing arthropods with eight legs and fangs that inject venom. They rank seventh in total species diversity among all other groups of organisms. Ticks are parasites that satisfy all of their nutritional requirements on a diet of blood. They are carriers of a number of diseases including Lyme disease and Rocky Mountain spotted fever. While termites are sometimes called "white ants," especially in some regions like Australia, they are only distantly related to the ants. In our service area, there are two main types of termite that damage structures: Drywood and subterranean.
Section 2: Theorems on limits

We will now prove that a certain limit exists, namely the limit of f(x) = x as x approaches any value c. (That f(x) also approaches c should be obvious.)

THEOREM. If f(x) = x, then for any value c that we might name:

lim (x→c) x = c.

Theorems on limits

To help us calculate limits, it is possible to prove the following. Let f and g be functions of a variable x. Then, if the following limits exist:

lim (x→c) f(x) = L and lim (x→c) g(x) = M,

then:

1) lim (x→c) [f(x) + g(x)] = L + M
2) lim (x→c) [f(x)·g(x)] = L·M
3) lim (x→c) [f(x)/g(x)] = L/M, provided M is not 0.

In other words:

1) The limit of a sum is equal to the sum of the limits.
2) The limit of a product is equal to the product of the limits.
3) The limit of a quotient is equal to the quotient of the limits, provided the limit of the denominator is not 0.

Also, if c does not depend on x -- if c is a constant -- then

4) lim (x→a) c = c.

The value of 5 -- or any constant -- does not change as x approaches a value a. It is constant.

When c is a constant factor, but f depends on x, then

5) lim (x→a) c·f(x) = c·lim (x→a) f(x).

A constant factor may pass through the limit sign. (This follows from Theorems 2 and 4.) For example,

lim (x→4) x² = lim (x→4) x · lim (x→4) x = 4·4 = 16.

It should be clear from this example that to evaluate the limit of any power of x as x approaches any value, simply evaluate the power at that value. Repeated application of Theorem 2 affirms that.

Problem 3. Evaluate the following limits, and justify your answers by quoting Theorems 1 through 5.

lim (x→4) (x³ + 4) = 4³ + 4 = 64 + 4 = 68. This follows from Theorem 1 and Theorem 2.

lim (x→4) (x² + 1) = 4² + 1 = 16 + 1 = 17. This follows from Theorem 1, Theorem 2, and Theorem 4.

The limits of the numerator and denominator follow from Theorems 1, 2, and 4. The limit of the fraction follows from Theorem 3.

Limits of polynomials

The student might think that to evaluate a limit as x approaches a value, all we do is evaluate the function at that value. And for the most part that is true. One of the most important classes of functions for which that is true is the polynomials. (Topic 6 of Precalculus.) A polynomial in x has this general form:

P(x) = a_n·xⁿ + a_(n−1)·xⁿ⁻¹ + ... + a_1·x + a_0,

where n is a whole number, and a_n ≠ 0. If P(x) is a polynomial, then

lim (x→c) P(x) = P(c).

Compare Example 1 and Problem 2.
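The substitution rule for polynomials can also be checked numerically: evaluate the function at inputs ever closer to c and watch the outputs settle on the predicted limit. The following Python sketch is not part of the original lesson, and the helper `approach` is our own illustration; a table of values suggests the limit but is of course not a proof.

```python
# Numerical illustration of evaluating a polynomial limit by substitution.

def approach(f, c, steps=6):
    """Evaluate f at points drawing closer to c from the right."""
    return [f(c + 10**-n) for n in range(1, steps + 1)]

f = lambda x: x**3 + 4          # the polynomial from Problem 3

# Theorems 1 and 2 predict the limit as x -> 4 is simply 4**3 + 4.
predicted = 4**3 + 4            # 68
values = approach(f, 4)

# The sampled values close in on the predicted limit.
print(predicted)                               # 68
print(abs(values[-1] - predicted) < 1e-3)      # True
```

The same check works for any polynomial, which is exactly what the theorem lim (x→c) P(x) = P(c) promises.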
(In the following Topic we will see that this is equivalent to saying that polynomials are continuous functions.)

In taking that limit, the variable x is never equal to c, and therefore P(x) is never equal to P(c); both c and P(c) are approached as limits. The point is, we can name the limit simply by evaluating the function at c.

Example. Evaluate lim (x→−1) (5x⁴ − 4x³ + 3x² − 2x + 1). In that polynomial, let x = −1:

5(1) − 4(−1) + 3(1) − 2(−1) + 1 = 5 + 4 + 3 + 2 + 1 = 15.

On replacing x with c, c + c = 2c.

[Hint: This is a polynomial in t.] On replacing t with −1:

3(−1)² − 5(−1) + 1 = 3 + 5 + 1 = 9.

[Hint: This is a polynomial in h.] On replacing h with 0, the limit is 4x³.

Some of the most important limits, however, will not be polynomials. Dealing with that will be the challenge.

Example 2. Consider the function g(x) = x + 2, whose graph is a simple straight line. And just to be perverse (and to illustrate a logical point to which we shall return in Lesson 3), let the following function f(x) not be defined for x = 2. That is, let

f(x) = x + 2 for x ≠ 2.

In other words, the point (2, 4) does not belong to the function; it is not on the graph. Yet the limit as x approaches 2 -- whether from the left or from the right -- is 4. For, every sequence of values of x that approaches 2 can come as close to 2 as we please. (The limit of a variable is never a member of the sequence, in any case; Definition 2.1.) Hence the corresponding values of f(x) will come closer and closer to 4. Definition 2.2 will be satisfied.

Copyright © 2017 Lawrence Spector
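Example 2 can also be played out in code. The function below is our own sketch of the lesson's f(x): defined as x + 2 everywhere except x = 2, where calling it raises an error. The one-sided values still close in on 4, because the limit depends only on values of f near 2, never on f(2) itself.

```python
# Example 2 in code: f(x) = x + 2 for x != 2, and f is undefined at 2.

def f(x):
    if x == 2:
        raise ValueError("f is not defined at x = 2")
    return x + 2

# Approach 2 from each side; f(x) comes as close to 4 as we please.
from_left  = [f(2 - 10**-n) for n in range(1, 7)]
from_right = [f(2 + 10**-n) for n in range(1, 7)]

print(from_left[-1], from_right[-1])   # both within 1e-5 of 4
```

Even though f(2) does not exist, every sequence approaching 2 produces function values approaching 4, which is exactly the situation Definitions 2.1 and 2.2 describe.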
The ability of light to exert forces has been known for quite some time. In fact, Johannes Kepler (1619) recognised that the tails of a comet — an example of which is shown in Fig. 1.1 — are due to the force exerted on the particles surrounding the comet’s body by the Sun’s rays. However, optical forces are extremely small. So small, in fact, that only in recent years, and only thanks to the advent of the laser, has it become possible to concentrate enough optical power in a small area to significantly affect the motion of microscopic particles, thereby leading to the invention of optical tweezers. Optical tweezers are generated by a tightly focused laser beam that can hold and manipulate a particle in the high-intensity region that is the focal spot. Optical tweezers and other optical manipulation techniques have heralded a revolution in the study of microscopic systems, spearheading new and more powerful techniques, e.g., to study biomolecules, to measure forces that act on a nanometre scale and to explore the limits of quantum mechanics. This Book provides a comprehensive guide to the theory [Chapters 2-7], practice [Chapters 8-12] and applications [Chapters 13-25] of optical trapping and optical manipulation.

1.1 A brief history of optical manipulation
1.2 Crash course on optical tweezers
1.3 Optical trapping regimes
1.4 Other micromanipulation techniques
1.5 Scope of this Book
1.6 How to read this Book
1.7 OTS – The Optical Tweezers Software
(The information in this section was taken from “The Consumer’s Guide to Effective Environmental Choices” by Michael Brower and Warren Leon from the Union of Concerned Scientists)

When most people consider reducing their environmental impact, eating less meat is not one of the first lifestyle changes that springs to mind. Food production, though, and especially livestock cultivation, puts this category of energy expenditure at a level comparable to that of the transportation industry. Livestock cultivation contributes to both global warming and water pollution.

Worldwide, agriculture is responsible for 70% of all global methane emissions. After carbon dioxide, methane is ranked as the second-largest cause of global warming. Animal livestock, and especially cattle, produce large quantities of methane through belching, flatulence, and dried animal waste. This is a significant issue because methane has a warming effect in the atmosphere about 21 times more potent than that of carbon dioxide, according to the Intergovernmental Panel on Climate Change.

Animal waste production is also a significant cause of water pollution. Cattle alone produce more than 2 billion wet tons of manure each year. Runoff of this waste causes 20% of the total common water pollution in the country. It contaminates both drinking and irrigation water. This problem has widespread effects because 40% of American land is dedicated to grazing livestock.

Eating less meat will greatly reduce your negative impact on the environment. If you do want to eat meat, you can make several choices that will lessen your environmental impact:

Choose poultry and avoid red meat! The environmental impact of poultry is much smaller! Poultry is also healthier than red meat.

Buy organic meat products! These are produced using sustainable agriculture practices, and the animals are treated much more humanely during the production process.
Pepi I, the first king of the 6th Dynasty, had his pyramid complex built at South Saqqara. His two immediate predecessors, Unas and Teti, had chosen the vicinity of the Step Pyramid complex of Zoser in Saqqara-North as their last resting place.

Who built it? This complex was built by Pepi I, the first king of the 6th Dynasty.

Why was it built? The pyramid complex of Pepi I was built because the ancient Egyptians believed in resurrection. Pepi I and his family were buried in this pyramid complex according to the ancient Egyptian concept of life after death.

The name of this complex, mn-nfr, "the beautiful monument", would later be used for the city that lay to the east, and would be rendered in Greek as Memphis.

The Pyramid Complex of Pepi I comprises all the elements that by the 6th Dynasty had already become standard: a pyramid with a mortuary temple and a satellite pyramid to the east of it, and further to the east a causeway that leads towards a valley temple.

In the late 1980s, an enormous mound of debris and rubble located to the south of the main pyramid was examined by a French team of archaeologists. They found four or possibly five smaller pyramids with adjoining mortuary temples that once belonged to Pepi I's queens. The queen for whom the easternmost of these pyramids was built was called Nebwenet. She bore the title 'beloved wife of the king'. The queen of the second pyramid bore the name Inenek/Inti, and the third queen, whose name is not yet known, bore the title 'eldest daughter of the king'. A stela inscribed with the name of Meritites, 'daughter of the king and wife of the king', has led to the discovery of a fourth pyramid, and even a fifth queen's pyramid has been found.
When you hear the word "vaccine," you may get it confused with two other words, "vaccination" and "immunization." This article will explain simply and exactly what a vaccine is and is not. First of all, a vaccine is administered in one of three ways: by needle injection, orally by mouth, or by nasal spray, depending upon the vaccine administered. There are different kinds of vaccines on the medical marketplace today. All vaccines contain a weakened or a killed organism. The vaccine's purpose is to produce in your body a certain level of immunity against the organism of which its contents are made. Each vaccine's purpose is to offer you protection against certain diseases or at least minimize the effects of the illness on your body.

Are Vaccines Safe?

Ever since vaccines were introduced, there has been controversy within the medical community and among individuals alike as to the necessity and safety of certain vaccines. Many people doubt the worth of these vaccines, while many more know that vaccines lessen all kinds of disease processes and save lives. For you to understand the risks and benefits of any vaccine that is offered to you, you must take the responsibility of delving deeper into any information available and weigh the pros and cons to make an informed decision as to whether you or your family should take any vaccine. Discover how the medical community monitors vaccines and keeps them safe, providing many more benefits to you.

- Prevent serious diseases, for example, polio, chicken pox, and German measles
- Increase your immunity
- Protect you and all those around you from spreading diseases
- Eliminate certain diseases
- Prevent disease outbreaks, which in turn protects those who cannot be vaccinated for one reason or another
- Decrease deaths from different diseases

How Long Do Vaccines Last?
- Some vaccines are given yearly, such as the flu vaccine.
- Some vaccines remain effective for a limited period, such as five to ten years for the tetanus vaccine.
- Some vaccines given as a child require a follow-up booster injection after several years.
- Some vaccines that are given to you as a child remain in your system, protecting you for life.

As with everything else in life, you are unique. The way in which you may react to anything is unlike your neighbor. Individuals respond differently to vaccination. For example, when receiving the flu vaccination, you may have no side effects. Another person may complain of generalized ill feelings. Someone else may complain of some flu symptoms such as nausea, vomiting, and diarrhea. None of these side effects is long-lasting; they are short-lived, causing only a bit of inconvenience.

Making an informed decision regarding vaccinations is vital to you and your children's health and well-being. Research shows that some of the diseases that were called "eliminated" due to this prevention process are now starting to rear their ugly heads. This surge in some rarely seen diseases is because many people feel that vaccinations are not necessary or important in this day and age. People are refusing the protection that vaccinations offer them. Many parents are refusing to have their children vaccinated with routine childhood vaccinations due to possible side effects.

Each state has certain requirements as to what protection it requires children and adults to have as residents of that state. Check with your state regarding the rules and regulations for vaccinations. When traveling abroad, certain vaccinations are required before you can enter your chosen country. You must present documentation that you are protected against certain diseases prevalent in that country.
Talk your baby into reading
Jennifer M. Prinz, MA, CCC-SLP
MedCentral Pediatric Therapy

Childhood experts lately have been discussing the value of talking and reading to one's baby as much as possible - even before birth. Why? Because research shows that speaking (oral language) and the ability to read and write (literacy) are interrelated skills.

A baby is ready to begin learning language at birth. Babies naturally orient to their parents' voices, and they show a preference for the language their parents speak. Babies begin babbling and learn the specific sounds of their parents' language between the ages of 6 and 8 months. At 12 months, they refine their repertoire of sounds to contain only those of their parents' language and begin to sequence those sounds into meaningful words. This progression demonstrates that a baby is listening to the subtleties of their parents' language and learning those particular rules long before she utters her first word. Therefore, talking to one's baby is not a futile exercise; it provides opportunities for baby to listen, select important meaningful sounds, store them in memory, sequence them into words and learn the underlying rules and vocabulary that construct his language.

As babies grow into toddlers, their vocabulary flourishes. They are able to create longer sentences to express their needs and make observations about their world. During this stage, young children begin to learn about the world through language. Preschool age children learn that symbols represent a concrete object, that printed words represent spoken language and that spoken words are comprised of individual sounds (phonological awareness). As preschool children learn to play with sounds (rhyming words and isolating sounds, for example), they gradually learn to separate words into sounds and map sounds onto printed letters. At this stage, children begin to read. This evolution in learning creates the prerequisite skills for learning to read and write.
At each stage of development, parents can stimulate language and literacy skills.

Birth - One Year:
- Talk to baby about her environment (toys, people, places, activities).
- Read to baby (books, magazines, newspapers).
- Make books part of baby's daily environment. Encourage her to explore books, look at the pictures and play with them.
- Participate in story time at local libraries or book stores.

12 months - 24 months:
- Talk during shared activities (grocery shopping, playing at the playground). This allows for vocabulary growth and helps your child practice increasingly complex ways to express himself.
- Read stories together. This fosters a positive association between the child's experience and reading.
- Attend story time at local libraries or bookstores.
- Make books available to the child to stimulate knowledge and vocabulary.
- Read favorite stories as many times as the child requests.

2 - 5 Years:
- Talk during shared experiences to promote vocabulary development.
- Read stories to each other and encourage the child to "read" the book in his own words.
- Play rhyming games and sound games to strengthen the child's phonemic awareness skills (for example, "What rhymes with 'bee'?" "What begins with the 'b' sound?").
- Draw pictures and write a story about the picture together.
- Make up stories together and tell them to family or friends.
- Point out words in a book as it is being read.
- Play games that involve the child hunting for words in their environment (such as during grocery trips or park outings).
- Play games that involve matching a sound to its symbol (alphabet bingo, for example).
- Make alphabet soup and search for letters or sounds.
- Break words into individual sounds (for example, how many sounds are in the word "bat"?).
- Participate in joint literacy experiences at the library or area bookstores.
- Encourage book reading in a child's daily routine. Make age-appropriate books available.

The ways to nurture language and literacy are endless.
Children are first born with the capacity to learn the sounds and structure of their parents' language. Babies want to hear their caretakers talking to them. From there, babies acquire oral language skills necessary for reading and writing. Talking and reading to an infant and preschooler are not pointless acts. Research continues to show the relationship between listening comprehension and reading comprehension. Children need to develop the skills of listening and talking in order to read and write. By taking the time to talk and read to baby, a parent is assisting in establishing the foundational skills necessary for that child's future academic success.
July 18, 2012

NASA Mission Set To Study Plasma

Lee Rannals for redOrbit.com - Your Universe Online

Ninety-nine percent of the universe is made of plasma, an electrified gas, but when it comes to the Earth that gas is more absent than present. There are two giant donuts of plasma surrounding Earth, trapped within a region known as the Van Allen Radiation Belts. The belts are close to Earth, with satellites in geostationary orbit above them; satellites in low Earth orbit (LEO) are generally below the belts. RBSP, the Radiation Belt Storm Probes mission, is set up to help improve our understanding of what makes plasma move in and out of these electrified belts wrapped around our planet, NASA said.

"We discovered the radiation belts in observations from the very first spacecraft, Explorer 1, in 1958," David Sibeck, a space scientist at NASA's Goddard Space Flight Center in Greenbelt, Md., and the mission scientist for RBSP, said. "Characterizing these belts filled with dangerous particles was a great success of the early space age, but those observations led to as many questions as answers. These are fascinating science questions, but also practical questions, since we need to protect satellites from the radiation in the belts."

The inner radiation belt is essentially stable, but the number of particles in the outer belt can jump 100 times or more, which could encompass a horde of communication satellites and research instruments orbiting Earth. In order to understand more about what is driving these changes in the belts, scientists must have a better understanding of what drives the plasma. Plasmas generally flow along a skeletal structure made of invisible magnetic field lines, while simultaneously creating more magnetic fields as they move. Understanding the rules that govern this environment will help scientists gain a deeper grasp of the range of events that make up space weather.
RBSP scientists have designed a suite of instruments to answer three broad questions: Where do the extra energy and particles come from? Where do they disappear to, and what sends them on their way? And how do these changes affect the rest of Earth's magnetic environment, the magnetosphere? The mission will also use two spacecraft in order to better map out the full spatial dimensions of a particular event and how it changes over time. Scientists want to understand not only the origins of electrified particles, but also what mechanisms give the particles their extreme speed and energy.

"We know examples where a storm of incoming particles from the sun can cause the two belts to swell so much that they merge and appear to form a single belt," Shri Kanekal, RBSP's deputy project scientist at Goddard, said. "Then there are other examples where a large storm from the sun didn't affect the belts at all, and even cases where the belts shrank. Since the effects can be so different, there is a joke within the community that 'If you've seen one storm . . . you've seen one storm.' We need to figure out what causes the differences."

RBSP will be able to measure a wide range of energies, from the coldest particles in the ionosphere to the most energetic, most dangerous particles. Having information about how the radiation belt swells and shrinks will help improve models of Earth's magnetosphere as a whole.

"Particles from the radiation belts can penetrate into spacecraft and disrupt electronics, cause short circuits or upset memory on computers," Sibeck said. "The particles are also dangerous to astronauts traveling through the region. We need models to help predict hazardous events in the belts and right now we aren't very good at that. RBSP will help solve that problem."

Another reason scientists are interested in studying the belts is because it is the closest place to study plasma.
NASA said that understanding this environment so foreign to our own is crucial in understanding the make-up of the universe.
The research study that we were part of was studying how the brain develops by imaging myelination in typical development and in individuals with autism. Myelin is an insulating layer, or sheath, that forms around nerves, including those in the brain and spinal cord. It is made up of protein and fatty substances. The purpose of the myelin sheath is to allow electrical impulses to transmit quickly and efficiently along the nerve cells. If myelin is damaged, the impulses slow down. Myelin is a dielectric (electrically insulating) material that forms a layer, the myelin sheath, usually around only the axon of a neuron. It is essential for the proper functioning of the nervous system. It is an outgrowth of a type of glial cell. The production of the myelin sheath is called myelination. In humans, myelination begins in the 14th week of fetal development, although little myelin exists in the brain at the time of birth. During infancy, myelination occurs quickly and continues through the adolescent stages of life. Myelin is produced by different cell types, and varies in chemical composition and configuration, but performs the same insulating function. Myelinated axons are white in appearance, hence the "white matter" of the brain. The fat helps to insulate the axons from electrically charged atoms and molecules. These charged particles (ions) are found in the fluid surrounding the entire nervous system. Under a microscope, myelin looks like strings of sausages. Myelin is also a part of the maturation process leading to a child's fast development, including crawling and walking in the first year.

Is Myelin Content Altered In Young Adults with Autism?

There is increasing evidence that autism is associated with abnormal white matter development and impaired ‘connectivity’ of neural systems. Brain connectivity is mediated by myelinated axons, which may be altered or abnormal in autism.
However, to date, no study has directly investigated the brain myelin content of autistic individuals in vivo. The primary objective of this study is to elucidate differences in myelin content between typical and autistic brains. The ultimate aim is to improve our understanding of the underlying neurobiology of autism using non-invasive magnetic resonance imaging (MRI) techniques. Using a new myelin-specific magnetic resonance imaging technique, termed mcDESPOT, brain myelin content was compared between 14 young adults with autism and 14 matched controls. Relationships were also examined between myelin content and clinical symptom severity within the autistic group (measured by the Autism Diagnostic Interview-Revised, ADI-R), and between myelin content and the severity of autistic traits in both cases and controls, measured using the Autism Quotient (AQ).

Individuals with autism demonstrated a highly significant (p < 0.0017) reduction in myelin content in numerous brain regions and white matter tracts. Affected regions included the frontal, temporal, parietal and occipital lobes. White matter tracts most affected included the corpus callosum; the uncinate and posterior segments bilaterally; the left inferior occipitofrontal tract, cerebellar peduncle, arcuate fasciculus and inferior and superior longitudinal fasciculi; and the right anterior segment. Further, within autistic individuals, a worse interaction score on the ADI-R was significantly related to reduced myelin content in the frontal lobe; the genu of the corpus callosum; and the right internal capsule, optic radiation, uncinate, inferior frontal occipital fasciculus and cingulum. Additionally, increased autistic traits in both cases and controls were significantly related to reduced myelin content of the left cerebellar white matter; the genu of the corpus callosum; and left temporal lobe white matter.

Individuals with autism have significantly reduced myelin content in numerous brain regions and white matter tracts.
We also provide preliminary evidence that reduced brain myelin content is associated with worsened social development in autistic individuals, and increased autistic traits in both cases and controls.
Scroll down for lesson plans, posters and other resources for teaching

What does Gender Equality mean?
- End all forms of discrimination against all women and girls everywhere.
- End all forms of violence against women and girls, including sex trafficking and other forms of exploitation.
- End all practices and traditions that may impair the physical, mental and sexual health of women and girls.
- Recognize and value women's work at home.
- Encourage women and girls to have equal opportunities to be heard and to have real opportunities to participate in all political, economic and public spheres.
- Protect women's rights to sexual and reproductive health.
- Promote policies and laws to ensure gender equality, including reforms to give women equal access to ownership and control over land and other forms of property, financial services, inheritance, and natural resources.
1) Why did Maria jump through the window?
a) Boy was chasing her. b) She saw Boy and was terrified. c) Boy pushed her through the window.
2) How did Boy know something was wrong in Grade 2?
a) The students were shouting. b) The students touched him. c) A student jumped through the window.
3) The entire school was alerted because:
a) Mrs Redmond fainted b) the Grade 2 students screamed c) the principal was angry and shouted
4) Why did Josh tell the principal that Boy was not a cow?
a) Josh did not want to get punished. b) Josh was a male. c) Boy was a male.
5) What literary device is used in the following: Mrs Redmond was as pale as a chalky blackboard?
a) simile b) metaphor c) personification
6) Where did Maria go after jumping through the window?
a) Under the school b) Home c) Into her classroom
7) When Josh passed his teacher on the way out of the office, why couldn't he face her?
a) He was the reason she was called to the office. b) He didn't want her to see the tears in his eyes. c) She had sent him to the office.
8) What literary device is used in the following: she was in the frying pan?
a) onomatopoeia b) metaphor c) idiom
9) Why did Mrs Bernard come to the school?
a) There was a PTA meeting. b) The principal sent for her. c) Her daughter was injured at school.
10) Whom did the principal blame for what had happened in the school?
a) Josh Mahon b) Mrs Anthony c) Maria Bernard

A Cow Called Boy - Chapter 3 (There's a Cow in the School), Forms 1-3
Let us make an in-depth study of the components of water potential and the osmotic relations of cells in terms of water potential. The term water potential was coined by Slatyer and Taylor (1960). It is a modern term used in place of DPD (diffusion pressure deficit). The movement of water in plants cannot be accurately explained in terms of differences in concentration or other linear expressions. The best way to express the spontaneous movement of water from one region to another is in terms of the difference in the free energy of water between the two regions (water moves from a higher free energy level to a lower one). According to the principles of thermodynamics, every component of a system has a definite amount of free energy, which is a measure of the potential work the system can do. Water potential is the difference between the free energy, or chemical potential, per unit molar volume of water in a system and that of pure water at the same temperature and pressure. It is represented by the Greek letter Ψ (psi), and its value is measured in bars, pascals or atmospheres. Water always moves from an area of high water potential to an area of low water potential. The water potential of pure water at normal temperature and pressure is zero. This value is considered to be the highest. The presence of solute particles reduces the free energy of water and decreases the water potential. Therefore, the water potential of a solution is always less than zero; it has a negative value.

Components of Water Potential: A typical plant cell consists of a cell wall, a vacuole filled with an aqueous solution, and a layer of cytoplasm between the vacuole and the cell wall. When such a cell is subjected to the movement of water, many factors begin to operate which ultimately determine the water potential of the cell sap.
For solutions such as the contents of cells, water potential is determined by three major sets of internal factors: (a) Matrix potential (Ψm), (b) Solute potential or osmotic potential (Ψs), and (c) Pressure potential (Ψp). Water potential in a plant cell or tissue can be written as the sum of the matrix potential (due to binding of water to the cell wall and cytoplasm), the solute potential (due to the concentration of dissolved solutes, which by its effect on the entropy component reduces the water potential) and the pressure potential (due to hydrostatic pressure, which by its effect on the energy component increases the water potential).

Ψw = Ψs + Ψp + Ψm

In the case of a plant cell, Ψm is usually disregarded, as it is not significant in osmosis. Hence, the above equation is written as follows:

Ψw = Ψs + Ψp

Solute Potential (Ψs): It is defined as the amount by which the water potential is reduced as a result of the presence of solutes. Ψs always has a negative value and is expressed in bars with a negative sign.

Pressure Potential (Ψp): The plant cell wall is elastic and exerts a pressure on the cellular contents. As a result of this inward wall pressure, a hydrostatic pressure develops in the vacuole; it is termed turgor pressure. The pressure potential is usually positive and operates in plant cells as wall pressure and turgor pressure. Its magnitude varies between +5 bars (during the day) and +15 bars (during the night).

Important Aspects of Water Potential (Ψw): (1) Pure water has the maximum water potential, which by definition is zero. (2) Water always moves from a region of higher Ψw to one of lower Ψw. (3) All solutions have a lower Ψw than pure water. (4) In terms of water potential, osmosis is the movement of water molecules from a region of higher water potential to a region of lower water potential through a semi-permeable membrane.

Osmotic Relations of Cells According to Water Potential: In the case of a fully turgid cell: The net movement of water into the cell is stopped.
The cell is in equilibrium with the water outside. Consequently, the water potential in this case becomes zero. Water potential is equal to osmotic potential + pressure potential. In the case of a flaccid cell: The turgor becomes zero. A cell at zero turgor has an osmotic potential equal to its water potential. In the case of a plasmolysed cell: When vacuolated parenchymatous cells are placed in solutions of sufficient strength, the protoplasts decrease in volume to such an extent that they shrink away from the cell wall, and the cells are plasmolysed. Such cells have a negative value of pressure potential (negative turgor pressure).

1. Suppose there are two cells A and B; cell A has an osmotic potential of -16 bars and a pressure potential of 6 bars, and cell B has an osmotic potential of -10 bars and a pressure potential of 2 bars. What is the direction of movement of water?
Water potential of cell A = Ψs + Ψp = -16 + 6 = -10 bars
Ψ of cell B = -10 + 2 = -8 bars
As water moves from higher water potential (lower DPD) to lower water potential (higher DPD), the movement of water is from cell B to cell A.

2. If the osmotic potential of a cell is -14 bars and its pressure potential is 7 bars, what would be its water potential?
We know Ψw = Ψs + Ψp
Given: osmotic potential (Ψs) is -14 bars; pressure potential (Ψp) is 7 bars
Water potential = (-14) + 7 = -7 bars.
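The worked examples above can be checked with a few lines of Python. This is a minimal sketch; the function name is just illustrative (note that with Ψs = -14 bars and Ψp = 7 bars, the second example works out to -7 bars):

```python
def water_potential(psi_s, psi_p):
    """Water potential (bars) as the sum of solute and pressure potentials."""
    return psi_s + psi_p

# Example 1: cell A (psi_s = -16, psi_p = 6) vs cell B (psi_s = -10, psi_p = 2)
cell_a = water_potential(-16, 6)  # -10 bars
cell_b = water_potential(-10, 2)  # -8 bars
# Water moves from higher to lower water potential
direction = "B to A" if cell_b > cell_a else "A to B"
print(cell_a, cell_b, direction)  # -10 -8 B to A

# Example 2: psi_s = -14 bars, psi_p = 7 bars
print(water_potential(-14, 7))  # -7 bars
```

Because both potentials are simple additive terms, the comparison of cells reduces to comparing their sums.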
How to support a foster child with FASD
FASD (Foetal Alcohol Spectrum Disorder) can have a serious, long-term impact on a baby's growth and development. Around 7,000 UK babies (about 1 per cent) have FASD. It's important that foster carers are aware of the condition and how it may affect a child's development, learning and behaviour.

Definition of FASD
FASD is a spectrum disorder, or umbrella term, for a range of defects which a baby can suffer if their mum drank alcohol while pregnant. It is one of the leading causes of learning disability in the UK and one of the leading causes of birth defects in newborn babies. The individual conditions under this umbrella term are:
- Foetal Alcohol Syndrome (FAS)
- Alcohol-Related Neurodevelopmental Disorder (ARND)
- Alcohol-Related Birth Defects (ARBD)
- Foetal Alcohol Effects (FAE)
- Partial Foetal Alcohol Syndrome (PFAS)
Sometimes FASD is detectable at birth, especially if severe and life-threatening.

Early diagnosis of FASD is crucial
FASD is often compared to autism because of the scale of different defects within the spectrum. Some babies with FASD may have severe, life-shortening symptoms which can be detected at birth, while others will show the effects much later in life. If parents and foster carers know the signs to look out for, a child can be properly diagnosed. Early diagnosis limits the impact the condition will have on their long-term learning and development.

What are the effects of FASD on children?
FASD can have a wide-ranging impact on a newborn's health and development. When alcohol passes into the foetus, it circulates through the bloodstream, killing brain cells and damaging the nervous system. It can also cause abnormal growth defects and facial disfigurement, all through exposure to alcohol. Some of the most common defects and symptoms caused by FASD include:
- Learning disabilities – affecting a child's academic performance, attention span, organisation, and ability to read and write.
- Balance and hearing problems - Abnormal growth and development – height and weight issues are common in FASD sufferers - Liver damage - Weak immune system - Kidney and heart problems - Mouth, teeth and facial defects Not all children will have the same symptoms and, even if they do, it may still affect their quality of life differently. Among the most difficult FASD defects to spot are neurological problems, which could hinder a child’s learning, development, behaviour and relationships in later life. Symptoms: how to identify if your foster baby or child has FASD If FASD isn’t detected at birth, the condition gets more difficult to spot and diagnose correctly as a child gets older. An accurate and timely diagnosis can vastly improve a child’s chances of living a normal life, so it’s important that foster carers know some of the signs to look for. - A slow rate of growth – FASD causes growth defects which can affect a child all the way to early adulthood. If you’re concerned your child isn’t growing at the normal rate (assuming they’ve been in your care for a few years), it could be worth getting them checked over by your GP. - Hyperactivity, poor social skills or lack of focus – This is where diagnosing FASD can get difficult. The condition shares similarities with other learning disabilities, so your child may need to sit several tests before an accurate diagnosis can be made. If they aren’t behaving well in school or struggle in the classroom, it’s worth talking to their doctor. - Sight and hearing problems – Alcohol-related birth defects can affect a child’s sight and hearing at any age, and problems may be undetectable until they’re older. Does your child often struggle to hear you or sometimes seems lost in thought? It could be a sign of FASD. - Poor hand-eye coordination – Does your child fall over a lot, or perhaps they’re always dropping things? Excessive clumsiness and poor coordination are among the most common symptoms of FASD in young people. 
- A distinct facial shape – FASD can result in a child being born with a mild facial deformity, which may not appear that obvious until they're in their teens. This is characterised by small, wide-set eyes, a thin upper lip, a smooth ridge between the upper lip and nose, and other unusual facial features.

If you have concerns about your child, it's always worth speaking to their doctor, as well as their social worker. They are best placed to look into the child's family and medical history.

Groups and resources for foster parents of children with FASD
Given the number of children now born with FASD, there is a growing number of support groups and resources. Providing help and advice for children and their families, these support networks are invaluable in helping a child cope with their condition.

- NOFAS UK – The National Organisation for Foetal Alcohol Syndrome is the UK's leading organisation dedicated to supporting people affected by FASD. It advocates for better public awareness about the dangers of drinking while pregnant, and provides a support network for people living with alcohol-related birth defects. NOFAS also offers free and comprehensive online training for parents and carers on how to provide help for a child suffering from FASD.
- FASD Network UK – A social enterprise which works with local authorities to offer training and support to foster carers, adoptive parents and birth families who are raising children with FASD. It can provide local training and information to foster carers across the north of England.
- FASAwareUK – An FASD learning resource, offering essential information on the various alcohol-related birth defects caused by the condition, as well as links to additional resources and accredited training courses.
- FASD Trust – A support network for those affected by FASD, as well as their parents and carers. The FASD Trust runs a confidential helpline for those living with FASD or looking for information on the condition. Just call 01608 811599.

Fostering a child with FASD
FASD can have a significant impact on a child's life, both physically and psychologically. National Fostering Group provides excellent training and support for carers of children with special needs, empowering them to provide the help and support their child needs both now and in the future. The NFA is also here for you every step of the way, providing 24/7 support to all our foster families.
I’ve heard multiple sources say the sun is white, that it just looks yellow because the Earth’s atmosphere is scattering the blue light. I’ve heard in other places that the sun is yellow because of its position in the Hertzsprung-Russell (H-R) Diagram. Which is it? It can’t be both yellow and white at the same time, or can it?

The Sun emits a lot of energy in the visible range. On the wavelength scale, this runs from 390 nm to 700 nm, and when you translate it to colors, you get all colors from violet to red, just as we see them in the rainbow. When you mix all those colors together you get white, and that is why white is the true color of the Sun. Check out photos of the Sun taken by astronauts (with no filters). The Sun appears white in them! But seen from the Earth, the Sun can have many colors: from whitish-yellowish when it is high above the horizon, to red when it sets or rises. But you are right – most people see it as yellow, because the shortest wavelengths (which we see as different shades of blue) are scattered by the Earth’s atmosphere, coloring the sky blue. And when our eyes combine all those rainbow colors except the blue ones, the Sun’s color our eyes see is yellowish. The lower toward the horizon the Sun is, the more blue is scattered, and the “average” color of the Sun shifts to red. The position of a star on the H-R Diagram depends on the star’s temperature and brightness. One version of the H-R diagram is often called a “color-magnitude diagram”, but here “color” (or “color index”) is a number representing the difference in a star’s brightness in two chosen spectral ranges. In many H-R Diagrams stars are colored according to their temperatures (blue for hot stars, red for cool ones) to make them more informative and appealing. The Sun and stars with similar temperatures are yellow when observed from the Earth, and that is why they are often represented with this color and called “yellow dwarfs”.
However, you can also find diagrams in which real stellar colors are kept, and in those diagrams the Sun will be a white point. In some H-R Diagrams, colors are coded by the wavelength at which a star emits most of its energy. When we use this criterion, we should use green for the Sun. But why don’t we see green stars (from Earth or space)? It is because stars emit energy over a really wide range. Even if the peak falls in the green, a lot of energy is emitted in all colors, from blue to red. And with our eyes, we always observe the mixture of those colors. If you add a bit of blue to green, you will get something our eyes interpret as a tint of blue, and when you add something from the red side, you get yellow. So, when you see a colorful H-R diagram, remember that the choice of colors is up to its author, and the palette used does not necessarily represent the real colors of stars. Please remember to be careful when checking the Sun’s color. Looking directly at the Sun, even with sunglasses, may hurt your eyes! Dr. Monika Adamow
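The green peak mentioned above can be estimated from Wien's displacement law, which says the wavelength of peak blackbody emission is λ_max = b / T. A minimal Python sketch, assuming the commonly quoted value of about 5778 K for the Sun's effective surface temperature:

```python
WIEN_B = 2.897771955e-3  # Wien's displacement constant, in metre-kelvins

def peak_wavelength_nm(temperature_k):
    """Wavelength (nm) at which a blackbody at the given temperature emits most strongly."""
    return WIEN_B / temperature_k * 1e9

# The Sun's effective surface temperature is roughly 5778 K
print(peak_wavelength_nm(5778))  # about 501.5 nm -- in the green part of the spectrum
```

Plugging in cooler or hotter temperatures shows why cool stars peak toward the red and hot stars toward the blue, even though each emits across the whole visible band.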
The Art of Mosaic Design

Introduction: Mosaic design is a fascinating art based on paradoxes that must be embraced. Among these are the pieces, multiple objects of simplicity fused into a singular, complex wholeness; the irregularity that springs from deliberation; and, more often than not, the creation that is wrought from destruction. Such paradoxes are put into context by the mosaicist when the principles of other visual arts are applied. By shaping tesserae like a sculptor, choosing colors like a painter, and weaving patterns like a fiber artist, the mosaicist presents his or her vision. For the first time, in The Art of Mosaic Design, beautiful examples of the contemporary movement in mosaics are revealed; there are works presented here by mosaicists from around the world. Some of these artists have achieved great distinction, others quietly execute the works they are driven to create, but all will surely leave their imprint on the history of mosaic design as an art form. That history is long. Some of civilization’s earliest artistic expressions were rendered in mosaic as part of construction. Simple mosaics made of pebbles decorated pavements found in the People’s Republic of China dating back more than two thousand years. By the fifth century B.C. the art of mosaic design was well established in Europe and the Mediterranean region, while figurative portrayals became increasingly sophisticated. Soon after that, the ability to cut rocks into small, regular units known as tesserae liberated the medium from its strictly functional role. This was a critical turning point for mosaics. Using these uniform pieces, stone murals could be applied to walls, ceilings, or other objects. In addition, widespread production of glass allowed its regular incorporation into mosaics; its radiant, reflective qualities were accentuated against mortar, bringing new life to the art form.
Purpose: To explain the orientation angles used for Inertial Labs devices and how they relate to one another. Last Updated: August 2019

What's the difference and why does it matter? Euler angles are generally what most people consider when they picture rotations in 3D space. Each value represents the rotation in degrees (it could technically be in any units) around one of the 3 axes in 3D space. Most of the time you will want to create angles using Euler angles because they are conceptually the easiest to understand. The flaw is that Euler angles suffer from a problem known as gimbal lock, which prevents certain rotations when two axes align. The solution: quaternions.
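Gimbal lock can be demonstrated numerically: when the middle (pitch) axis sits at 90 degrees, a change in yaw and an opposite change in roll produce exactly the same overall rotation, so one degree of freedom is lost. A minimal sketch using rotation matrices (the Z-Y-X composition order here is an illustrative assumption, not necessarily the convention Inertial Labs devices use):

```python
import numpy as np

def rot_x(a):
    """Rotation matrix about the X axis (roll), angle in radians."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(a):
    """Rotation matrix about the Y axis (pitch), angle in radians."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(a):
    """Rotation matrix about the Z axis (yaw), angle in radians."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

# Pitch locked at +90 degrees: yaw and roll now rotate about the same
# world axis, so two different (yaw, roll) pairs give the same rotation.
pitch = np.pi / 2
m1 = rot_z(0.3) @ rot_y(pitch) @ rot_x(0.0)   # yaw = 0.3, roll = 0
m2 = rot_z(0.0) @ rot_y(pitch) @ rot_x(-0.3)  # yaw = 0, roll = -0.3
print(np.allclose(m1, m2))  # True -- gimbal lock
```

Away from the locked configuration (for example, pitch = 0.5 rad), the same two (yaw, roll) pairs produce different rotations, which is why quaternions, having no such singular configuration, are preferred for composing orientations.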
Maritime law is a body of laws, conventions, and treaties that govern shipping and related activities. It covers a wide range of topics, including marine commerce, navigation, shipping contracts, bills of lading, carriage of goods by sea, marine insurance, salvage, collisions, and environmental pollution. Maritime law also deals with the admiralty jurisdiction of courts and the law of the sea. Maritime law has its roots in the maritime codes of ancient civilizations, such as the Rhodian Sea Law of around 700 BC and the Justinian Codex of 535 AD. Modern maritime law developed during the Age of Sail, when nations began to codify their laws and regulations to better reflect the growing international nature of maritime trade. The first comprehensive attempt to codify maritime law was the British Merchant Shipping Act of 1854. Today, maritime law is an important area of international law, with numerous conventions and treaties governing various aspects of shipping and related activities. In addition, many nations have their own national laws and regulations that apply to shipping within their territorial waters. Maritime law is also a vital component of the economic development of coastal communities and the global economy, as it governs the transport of goods and people by sea. If you are involved in any activity related to shipping, it is important to have a basic understanding of maritime law. This can help you avoid legal problems and protect your rights and interests. Maritime lawyers can provide guidance on the applicable laws and regulations, and help resolve disputes that may arise.

What Controls Maritime Law
International Maritime Law: The United Nations Convention on the Law of the Sea (UNCLOS) is the primary source of international maritime law. The International Maritime Organization (IMO) is a United Nations agency. UNCLOS sets forth the rules and regulations governing all aspects of ocean use, including navigation, environmental protection, and resource exploitation.
The convention also establishes an international tribunal to adjudicate disputes arising under UNCLOS. National Law: In addition to international law, each nation has its own laws governing maritime activities within its territorial waters. These laws may be based on UNCLOS or may be entirely independent of the convention. For example, the United States has its own set of maritime laws, including the Jones Act, which governs shipping and commerce within US waters. Customary Law: Customary law is a set of unwritten rules and regulations that have developed over time through the practice of nations. Many of the rules of customary international law are reflected in UNCLOS, but some are not. For example, the rule of "innocent passage" – which allows ships to pass through another nation's territorial waters without permission – developed as customary international law long before it was codified.

Difference Between Maritime Law and Law of the Sea
Maritime law and the law of the sea are two terms that are often used interchangeably, but there is a difference between the two. Maritime law is a body of laws that governs shipping and navigation, while the law of the sea is a branch of international law that deals with the rights and duties of nations in relation to the sea. The law of the sea is a relatively new field, only coming into existence in the mid-20th century. Maritime law, on the other hand, has a long history, with its roots going back to ancient Rome. Because of this, maritime law is much more developed and comprehensive than the law of the sea. The registration of a ship gives it nationality and ensures its protection by the laws of that nation. A ship is registered in the country of its owner and must be registered if it is to sail in international waters. Registration provides proof of ownership and is used to identify the ship for purposes of navigation, safety, taxation, and international maritime law.
In France during the early 1600s there lived a man named Cardinal Richelieu. This man was a clergyman, a nobleman and a statesman. In 1614, he entered politics in France, a common practice for members of the clergy. He was smart and ambitious, and quickly rose through the ranks of both the Church and the government. During this day and age, the divide between Church and state was virtually non-existent, and many major statesmen either belonged to the clergy or had close ties to them. In 1616, Cardinal Richelieu was appointed as Secretary of State and was given the responsibility of dealing with French foreign affairs. At the time, there were no more than 100 French permanent residents in the New World, and the Cardinal saw an opportunity to expand the boundaries of both his religion and the empire. In 1627, as part of his program to develop external trade for France, he created the Company of 100 Associates, and the company was officially established by an edict of King Louis XIII in May of 1628. This new association was formed with a large sum of starting capital, and was divided into 100 shares. Members of this company included the Cardinal himself, officials, merchants, as well as the now-famous Samuel de Champlain. Management of this company was entrusted to 12 directors. Their mandate was to colonize the New World, and their first step was an agreement to transport 4,000 colonists to Canada before 1643. They also agreed to support the colonists during their first three years of colonizing. Due to the highly religious nature of both the Company and its members, only native French Catholics were to be sent, and priests and nuns were to be maintained at all French settlements. In return, the Company was given a monopoly of the whole country of New France, all the way from Florida in the south to the Arctic Circle in the north.
It was granted the authority to distribute lands, and was given a complete monopoly of the fur trade and a 15-year monopoly of all other trade, with the exception of cod and whale fishing, which was to remain open to all French subjects. Unlike many of their contemporaries, the French had maintained good relations with the Natives in the New World. They saw these people as a means to quickly increase their ranks, thereby solidifying their control of the country. The Natives were expected to convert to Christianity, and Jesuits were an integral part of the mission. Any Natives who converted were considered Frenchmen and were encouraged to integrate with the French colonists. Their method of conversion, not requiring the Native people to give up their entire culture or belief system, was an easy sell for the Natives, and many converted, benefitting from the new rights afforded to them as Frenchmen and from the assistance of the ever-present Jesuit priests and nuns. One of the most well-known employees of the Company of 100 Associates was Samuel de Champlain. His journeys through Canada also brought the first mention of another major player in our story, Olivier LeTardif. LeTardif was in Quebec from about 1621 onward. He was an assistant clerk for the Company of 100, and Samuel de Champlain's most trusted personal interpreter. He was a collaborator in the missionary effort as well, supporting the Jesuits and acting as a godfather to the Indians. He even administered baptisms, and as we will discuss later, adopted Indian children. Olivier LeTardif spent much time travelling deep into the Canadian wilderness working to make contact with the many outlying Indian settlements in what was known as the "bad country". At his side during these journeys was his friend and faithful companion Roch Manitoubeouich. Roch was a Huron man who was converted to Christianity by French missionaries. His Christian name, Roch, was in honor of St.
Roch, the patron saint of cattle and dogs, and those who love them. The two men travelled for many years as representatives of the company, Roch as interpreter and Olivier as company man. One can only imagine the types of dangers and adventures they had in their travels, and the people they met and interacted with. Their travels were highly successful, and eventually, Olivier LeTardif was given a promotion to Head Clerk of the Company of 100 Associates, forcing him to settle down into a more sedentary life in a more administrative position. Roch also settled down, going to join his people at the Aboriginal settlement at Sillery near Quebec. It is in Sillery that Roch meets Outchibahanoukoueou (often referred to as Oueou). Oueou was a young Abenaki woman. Many of the Abenaki people who had lived along the Becancour River had made their way into Quebec as their territory and resources were destroyed by encroaching settlements, and they were further pushed out by the warring Iroquois. Oueou is an ancestor claimed by many Abenaki people, and was born along the Becancour River around 1602-1606. It is believed by many that the couple married at Sillery, but there is no official record of their marriage. Roch and Oueou carried on their lives until the arrival of their first child, Marie Sylvestre Olivier. Unbeknownst to them at the time, this very special little girl would come to hold a unique place in Canadian history, and be an important fountainhead to many Métis families from this region. Her life was quite well documented, and in our next blog post, we will discuss Marie, her life, and why she is such an important part of not only Métis history, but the history of Canada.
1. What is the difference between organic and inorganic chemistry?
2. What is this molecule? (CH4)
3. What is this molecule? (CH3-CH2-OH)

1. The difference between organic and inorganic chemistry is carbon. Carbon (C) is the basic building block of life, thus "organic chemistry". Everything that lives or has ever lived contains it. Carbon dioxide (CO2), considered a "greenhouse gas" in today's parlance, is part of the natural life cycle, exhaled by human beings and animals, used by plants for growth. The earth's atmosphere is composed of 78 percent nitrogen and 21 percent oxygen. The remaining one percent consists chiefly of argon, with extremely small amounts of other gases. Carbon dioxide, then, constitutes significantly less than one percent of the earth's atmosphere. Green plants take in carbon dioxide and give off oxygen in "photosynthesis," a process involving chemical reactions, using the sun as an energy source. Life is an organizing force which defies "entropy." "Entropy" has several definitions, but it is generally perceived as the ultimate degradation of matter and energy in the universe toward patternless conformity, degradation, disorder, and death. However, the organizing force of life concentrates energy in the living or dead organism. Wood, coal, oil, and natural gas are examples of stored energy sources derived from living or formerly living organisms.

2. If you answered that CH4 is methane, you would be right. Methane is another so-called "greenhouse gas." It is produced by all living and decaying organisms. It is the simplest molecule in organic chemistry, consisting of one carbon and four hydrogen atoms. Everything from marshlands to landfills, from animal waste to human farts, adds methane to the atmosphere. If you answered that CH4 is natural gas, you would also be right. This is why natural gas is considered the cleanest-burning fuel of all: its complete combustion produces no toxic by-products.
The chemical reaction for natural gas when used for energy production is:

CH4 + 2 O2 + flame → CO2 + 2 H2O

Translated, this means that one methane molecule plus two oxygen molecules, plus the heat of combustion, generates one carbon dioxide molecule and two water molecules. Thus, burning natural gas generates twice as many water molecules as carbon dioxide molecules. If you are considering "greenhouse gases," you must recognize that water (steam) is a potent one. The cloud cover of the earth has the effect of trapping heat inside the atmosphere. You will note that "climate change scientists" want to reduce CH4 levels, but oil and gas companies want to capture and sell CH4 in the "global economy." They are using "fracking" and other techniques to extract CH4 from trapped deposits in the earth.

3. If you answer that CH3-CH2-OH is whiskey, you would be right. Whiskey is a distilled alcohol, usually from grain, such as rye and maize (corn). It is also distilled from barley. Corn liquor was an early American product, used in bartering by cash-strapped farmers to pay bills. George Washington was a large-scale whiskey distiller. In his later years, he made most of his money from the distilling business. Distilleries are examples of "economic narrows" that operate as toll gates between producer and retail purchaser. Washington and Alexander Hamilton conspired to enact the "Whiskey Tax" in 1791 to undermine the bartering system and replace it with a cash-based system that could be more easily taxed (Alexander Hamilton, Ron Chernow, 2007). This led to the infamous Whiskey Rebellion, in which Washington betrayed the farmers who had fought in the Revolution (thereby neglecting their farms) and were going bankrupt because of debt, taxes, and the devaluation of the Continental dollar after the new United States currency was introduced. If you answer that CH3-CH2-OH is ethanol (or ethyl alcohol), you would also be right.
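The combustion arithmetic above can be checked with a quick stoichiometry sketch (molar masses are approximate). Note one nuance: there are twice as many water molecules as carbon dioxide molecules, but because a water molecule is lighter, the mass of water produced is actually somewhat less than the mass of CO2:

```python
# CH4 + 2 O2 -> CO2 + 2 H2O  (approximate molar masses in g/mol)
MOLAR_MASS = {"CH4": 16.04, "O2": 32.00, "CO2": 44.01, "H2O": 18.02}

moles_ch4 = 1.0
moles_co2 = 1.0 * moles_ch4   # 1:1 with methane
moles_h2o = 2.0 * moles_ch4   # 2:1 with methane -- twice as many molecules

mass_co2 = moles_co2 * MOLAR_MASS["CO2"]  # ~44 g per mole of methane burned
mass_h2o = moles_h2o * MOLAR_MASS["H2O"]  # ~36 g per mole of methane burned
print(moles_h2o / moles_co2)  # 2.0 -- twice as many water molecules
print(mass_h2o / mass_co2)    # ~0.82 -- but less water than CO2 by mass
```

The same mole-ratio bookkeeping scales to any quantity of fuel, since the coefficients in the balanced equation fix the proportions.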
The 2007 Congressional mandate to blend gasoline with at least 10% ethanol proved a boon for Archer Daniels Midland and other corporate giants, which benefited mightily from the mandate through tax breaks, other ethanol subsidies, and price supports. It must be remembered that “farmers” and the “farming industry” are not the same. In fact, “farmers,” as we perceive them, are being displaced in large numbers by corporate mega-farms. The corporate “farming industry” has significant political clout through donations to both major parties. They also have armies of lobbyists, lawyers, and friends in federal and state regulatory agencies like the USDA and EPA. They are the major beneficiaries of federal and state mandates, subsidies, and price supports. They have their fingers in every point of the farm-to-table (or vehicle) distribution chain, including storage, distilleries, commodities futures markets, transportation (ADM Trucking is a subsidiary of Archer Daniels Midland), and global sales.

In this election year, while the media and public are focusing on the presidential candidates, let us not forget that the entire House of Representatives and one third of the Senate are up for grabs. Whatever anyone thinks of Donald Trump, we must admit he is a game-changer. His grass-roots appeal is showing the power of the people to make a significant difference in how the game is played. We may be moving closer to a true democracy, by default, as the “ruling elite” of the two-party system desperately tries to recapture its “market share” of public trust and acceptance. Yes, the individual can make a difference, whether at the national or local level. If that individual is informed well enough to ask the right questions of all candidates, from local to national levels, and to demand informed answers, we might wrest a revolution in consciousness from this circus of political psychodrama.
So far, Ted Cruz is the only presidential candidate who has come out against the ethanol mandate, but he has begun to waffle under political pressure from the “farm lobby” and others. Hillary Clinton does not seem to know that natural gas and methane are the same thing. She is not alone. It is frightening to think that so many people with zero knowledge of science are in positions to write and pass legislation mandating, regulating, and subsidizing industries that affect us all, and to such a great extent. It probably doesn’t matter much who becomes president. The real power is in Congress, which has the power to repeal stupid legislation, like the ethanol mandate. Especially now that there is a worldwide oil glut (scarcity was one of the premier reasons for passing the mandate in the first place), this is a good time to revisit that law and its consequences.
Manners and etiquette are both critical to functioning in society. Both are about how we behave, but they are slightly different and involve different skills. They are also culturally dictated, so what we consider good manners or appropriate etiquette may not be considered so in other cultures.

Etiquette is about protocol, and requires knowledge of the rules of behaviour in certain defined situations. It is about knowing how to behave at, for example, a shooting party, a funeral, a business meeting, a formal dinner party, or dinner with the Queen. There is appropriate etiquette for a whole host of activities like cricket, croquet or rock-climbing. There is also an elaborate system of etiquette related to social class. People can learn the rules of etiquette through formal training, or by reading books about it. Knowledge of etiquette is never wasted. Good etiquette training provides the skills (and therefore the confidence) to cope with any occasion with ease. However, because etiquette involves rules for behaviour which have to be learnt, it very rarely allows for personal variations and individual concerns. One of the important things to note about mistakes in etiquette is that they immediately and clearly identify a person as an ‘outsider’ to the group in question. For example, not knowing the right piece of cutlery to use for a particular course at a dinner party might immediately make it obvious that you are not from the same social class as the rest of the group. Knowledge of etiquette helps us to fit in!

Having good manners is more fundamental and relates to the way we treat people generally. To have good manners is to treat other people in the way we ourselves would like to be treated. Although a kind, caring disposition will be helpful in producing well-mannered behaviour, good manners are a learnt behaviour and will sometimes involve a degree of acting. We may feel in a lousy mood, but we can still smile and exchange pleasantries with a work colleague.
Good manners could be thought of as a kind of social lubricant, making our social interactions much smoother and pleasanter. They are also enormously effective in getting others to do things we want them to, simply because they will inevitably feel warm and positive about us. If we have bad manners or no manners, they will feel uncomfortable with us, dislike us, or possibly even feel contempt for us.

Where do good manners come from?

We learn manners from our parents and from the formal education we receive. We can also, if we take the time, learn them by observation of others. However, there is hardly anyone whose manners cannot be improved, often by a large amount, and if this is the case, organised study seems to offer the best solution.

Why is knowledge of etiquette useful?

Knowledge of etiquette enables us to fit in comfortably with an often formal activity that involves other people. Rather than feeling stressed and worrying about whether we are using the correct fork, we can relax and focus, fully participating in the occasion. For example, business etiquette is now big business and important for those moving up the corporate ladder. Spouses are often invited along to interviews which can include receptions and dinners, so both the interviewee and their spouse will need a knowledge of the appropriate etiquette.

The difference between manners and etiquette

Etiquette and manners do overlap considerably, and often both words can be used about the same thing. What is considered good manners is often also considered to be appropriate etiquette. The primary difference between etiquette and manners is that etiquette involves the knowledge of specific rules of conduct, while manners are more generalised. Good manners go beyond socially acceptable behaviour and are much more about treating people with respect and kindness, and making other people feel comfortable whatever the social situation.
Good manners are under our control because they are about showing concern for others. It is entirely possible to have a vast knowledge of etiquette and no manners at all (perhaps you have met people like that?). On the other hand, there are a large number of people who have wonderful manners but are a bit wobbly about etiquette at certain social occasions (most of us have probably been at a social occasion where we have worried about which piece of cutlery to use, or where to stand). Ideally, we need a knowledge of both, but good manners will get us further. After all, we can always watch carefully what others do before we pick up a piece of cutlery, or even ask the person next to us (because, if they are well-mannered, they will care about how we feel, and kindly tell us which one to use!).

How a knowledge of etiquette and good manners helps us

Confidence is a vital attribute in life, as it enables us to step into new situations without fear. It therefore gives us choices and a sense of control. Research has demonstrated that a sense of control is vital for good emotional well-being. A knowledge of etiquette and the ability to interact with others with good manners are important skills to possess. They will ease our path through life – knowing what to do and when (etiquette) means you can relax and enjoy the occasion, and being considerate to others and showing concern about their well-being (manners) means that people will enjoy being around you and think highly of you!
What is Brain Health?

Brain health refers to how well someone’s brain functions across several areas and includes:
- Cognitive health – how well we think, learn, and remember
- Motor function – how well we make and control our movements, including balance
- Emotional function – how well we interpret and respond to emotions (both pleasant and unpleasant)
- Tactile function – how well we feel and respond to sensations of touch, including pressure, pain, and temperature

Brain health can be affected by injuries or illnesses like stroke, head injuries, depression, substance use, or dementia. And as we get older, our brain health, just like our physical health, can worsen. While some of these things can’t be changed, there are lifestyle changes you can make that can help you to maintain a healthy brain as you get older.

How to Support Brain Health As You Get Older

We now know more than ever about brain health and how to support it as we get older. Research has shown that maintaining and even improving brain health and mental functioning in old age is linked to:

1. Staying Physically Active
Research has found that physical activity, especially aerobic activity, may diminish cognitive impairment and reduce the risk of dementia.

2. Staying Connected Socially
Staying connected to others has been shown to improve brain health. If you live alone and don’t have a lot of chances to socialize, consider getting a pet. Pet ownership has been shown to improve memory, suggesting that owning a pet could help to reduce the risk of developing dementia.

3. Managing Stress
Taking steps to manage stress can help you to maintain and improve brain health.

4. Taking Care of Your Mental Health
Mental health conditions like depression can impact our brain health negatively. Managing your mental health also helps you to manage your brain health.

5. Reducing Risks to Cognitive Health
Having high blood pressure, which can lead to stroke, or other lifestyle factors can affect your cognitive health negatively.

6. Preventing Head Injuries
Head injuries can cause cognitive impairment, so taking steps to prevent them is important. Wear a helmet when riding a bike or motorcycle, always wear your seatbelt when driving or riding in a vehicle, ensure safety in the home, and take steps to avoid falls and other accidents.

7. Keeping Your Brain Active
One of the most important things you can do to maintain brain health as you age is to keep your brain active. Challenging your brain has both short-term and long-term positive effects on brain health.

The Importance of Keeping Our Brains Active as We Get Older

Research has shown that one of the most important things you can do when it comes to protecting and promoting your brain health is keeping your brain active, especially as you get older. In one study, mentally intact people in their 70s and 80s were asked how often they did six activities that required active mental engagement – reading, writing, doing crossword puzzles, playing board or card games, engaging in group discussions, and playing music. The group was studied over a period of five years, and those who placed in the highest third in terms of how often they engaged in mentally stimulating activities were half as likely to develop mild cognitive impairment.

5 Ways for Seniors to Keep Their Brains Active

1. Read More
Research has consistently shown that reading improves cognitive function, especially as we get older. One study from the Rush University Medical Center in Chicago found that reading books and magazines can help to keep memory and thinking skills intact.

2. Try Some Puzzles
If reading isn’t your thing, don’t worry. You can still reap the benefits of keeping your brain active by participating in other mentally stimulating activities.
Puzzles – whether it’s jigsaw puzzles, crosswords, Sudoku, or something else – can also help to keep our mental skills strong.

3. Play Games
Games are fun, but they’re more than that. Games can actually help to keep our brains healthy because, like reading and puzzles, they keep our brains active. Whether you like board games, card games, chess, or even video games, you can have fun while also improving your cognitive skills, especially if it’s a game that requires you to think strategically.

4. Create Something
You can draw or paint, write (a short story, a poem, or just in a journal), sculpt or mold, or even build something – as long as you’re challenging your brain.

5. Enjoy a Favorite Hobby
Whether it’s something you’ve been doing for years or a new interest, participating in a hobby can keep your cognitive faculties going strong. Knitting, sewing, gardening, woodworking, sports, or music can all help to keep your mind active.

Looking to Keep Your Brain Active as You Get Older?

As we age, it gets harder to keep our brains healthy. It’s especially hard for seniors who live alone or who have small social circles. Staying healthy, socializing with others, managing stress and depression, and keeping our brains active all contribute to continued cognitive health as we get older, but seniors often don’t have access to the opportunities or resources needed to do those things. That’s why many people choose to make the move to a senior living community. At Eagle Flats Village, we provide our residents with access to services and amenities that empower them to fuel their bodies, brighten their minds, and enhance their spirits. Our community supports and encourages social interaction, healthy habits, and plenty of mentally and physically engaging activities to ensure residents enjoy a full, productive, and carefree life. Ready to find out more? Schedule a visit today.
Humans can withstand lower maximum temperatures and humidity than previously believed.

As global temperatures rise as a result of climate change, researchers are becoming more interested in the maximum environmental conditions, such as heat and humidity, to which people can adapt. According to new Penn State research, in humid climates that maximum may be lower than previously thought. It was previously believed that a wet-bulb temperature of 35 degrees Celsius (equivalent to 95°F at 100% humidity or 115°F at 50% humidity) was the maximum a human could tolerate before losing the ability to regulate their body temperature, potentially resulting in heatstroke or death with prolonged exposure. Wet-bulb temperature is measured with a thermometer whose bulb is covered by a wet wick, and is affected by humidity and air movement. It represents the temperature at which the air is saturated, holding as much moisture as it can as water vapor; at that point, a person’s perspiration will no longer evaporate from the skin. However, in their latest study the researchers found that the real maximum wet-bulb temperature is lower even for young, healthy people – around 31 degrees Celsius wet-bulb, or 87 degrees Fahrenheit at 100% humidity. The limit is likely to be significantly lower for older people, who are more susceptible to heat. The findings could help individuals better plan for extreme heat events, which are becoming more common as the world warms, according to W. Larry Kenney, professor of physiology and kinesiology and Marie Underhill Noll Chair in Human Performance. “We can better prepare people—especially those who are more vulnerable—ahead of a heat wave if we know what those upper temperature and humidity limitations are,” Kenney said.
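Wet-bulb temperature can be estimated from ordinary weather-report numbers. The sketch below uses Stull’s (2011) empirical fit for wet-bulb temperature from dry-bulb temperature and relative humidity; this formula is not from the Penn State study itself, and the function name is mine. It lets you check how an air temperature and humidity pair compares against the ~31 °C wet-bulb limit described above.

```python
import math

def wet_bulb_stull(temp_c: float, rh_percent: float) -> float:
    """Approximate wet-bulb temperature (deg C) from dry-bulb temperature
    and relative humidity, using Stull's (2011) empirical fit.
    Valid roughly for RH 5-99% and temperatures from -20 to 50 deg C."""
    T, rh = temp_c, rh_percent
    return (T * math.atan(0.151977 * math.sqrt(rh + 8.313659))
            + math.atan(T + rh)
            - math.atan(rh - 1.676331)
            + 0.00391838 * rh ** 1.5 * math.atan(0.023101 * rh)
            - 4.686035)

# At 100% humidity the wet-bulb temperature equals the air temperature,
# so 31 deg C at 100% RH sits right at the study's ~31 deg C wet-bulb limit.
print(round(wet_bulb_stull(31, 100), 1))
# A hot, moderately humid day (38 deg C at 60% RH) also lands near the limit.
print(round(wet_bulb_stull(38, 60), 1))
```

At lower humidity the wet-bulb value falls well below the air temperature, which is why, as Kenney notes, this metric is only a useful risk gauge in humid climates.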
“This could include prioritizing the sickest people who require care, setting up notifications to go out to a community when a heat wave is approaching, or creating a chart that provides information for various temperature and humidity ranges.” It is also worth noting that using this temperature to estimate risk only makes sense in humid areas, according to Kenney. In drier regions, sweat evaporates from the skin, which helps to reduce body temperature. In dangerous dry-heat situations, the temperature and the ability to sweat matter more than the humidity. The research was published recently in the Journal of Applied Physiology. Previous studies had assumed that a wet-bulb temperature of 35 degrees Celsius was the upper limit of human adaptation, but that temperature was based on theory and modeling rather than real-world evidence from humans, according to the researchers. Kenney and his colleagues set out to test this theoretical limit as part of the PSU H.E.A.T. (Human Environmental Age Thresholds) project, which is examining how hot and humid an environment has to be before older persons have trouble managing heat stress. “When you look at heat wave statistics, you’ll notice that the majority of people who die during heat waves are older people,” Kenney said. “Heat waves will become more common—and more severe—as the climate changes. As the population ages, there will be an increase in the number of older persons. As a result, studying the intersection of those two trends is critical.” The researchers recruited 24 volunteers between the ages of 18 and 34 for their study. While they intend to run these tests on older adults as well, they chose to start with younger subjects.
“Young, fit, healthy people tend to endure heat better,” Kenney explained, “so they will have a temperature limit that can serve as a ‘best case’ baseline. Older people, people on medications, and other susceptible populations will almost certainly have a lower tolerance limit.” Prior to the experiment, each subject swallowed a tiny radio telemetry device wrapped in a capsule, which would then record their core temperature throughout. The subject was then placed in a specialized environmental chamber with temperature and humidity controls. While the subject engaged in light physical activity, such as easy cycling or walking slowly on a treadmill, the temperature and humidity in the chamber were gradually increased until the person’s body could no longer maintain its core temperature. The researchers found that critical wet-bulb temperatures ranged from 25 to 28 degrees Celsius in hot-dry conditions and from 30 to 31 degrees Celsius in warm-humid environments, all well below the theorized 35-degree Celsius wet-bulb limit. “Our findings imply that when it gets above 31 degrees wet-bulb temperature in humid places of the world, we should start to be concerned—even about young, healthy people,” Kenney said. “As we continue our research, we’ll look at what that number is in older folks, because it’s likely to be considerably lower.” Furthermore, because individuals adapt to heat differently depending on humidity levels, the researchers concluded that there is unlikely to be a single cutoff that can be designated as the “maximum” humans can tolerate across all settings found on Earth.
Green Energy’s Hidden Impact on the Environment

Conventional energy sources such as coal, oil, natural gas, and nuclear material have a substantial effect on the environment via the release of harmful greenhouse gases (GHG). Carbon dioxide (CO2) and methane (CH4) account for over 90% of the greenhouse gases emitted into the atmosphere. Conventional energy production also causes water pollution and wildlife and habitat loss, and requires significant land and water use. Renewable (non-conventional) energy is considered to be less harmful and more energy efficient because it does not (directly) deplete the earth’s natural resources. However, numerous studies have shown that renewable sources such as solar, wind, biomass, geothermal and hydropower also have environmental impacts, although at lower levels than fossil fuels. In other words, there are few entirely ‘free’ alternatives from an environmental perspective. The environmental impact of renewables varies based on their respective transformation processes (first law of thermodynamics) for generating energy. By understanding the supply chain of each renewable source, we can determine the potential and quality of the associated environmental risks. Solar energy is considered the most important source of renewable energy because it is abundant, inexhaustible and clean (i.e. it does not emit GHG during operation). However, there are some environmental risks during the energy transformation process. Photovoltaic (PV) technology requires 3.5-10 acres of land, and concentrating solar ‘thermal’ plant (CSP) facilities require 4-16.5 acres, per megawatt of energy generated, creating a potential for habitat loss. In addition, CSPs require 600-650 gallons of water inside the solar array per megawatt-hour of electricity generated. PV cells are manufactured using chemicals to purify the semiconductor materials. These chemicals – gallium arsenide (GaAs), copper indium gallium diselenide (CIGS), and cadmium telluride (CdTe) – are known to pose environmental risks.
For example, GaAs is known to cause toxicity in the lungs, kidneys and reproductive organs, CIGS may cause pulmonary toxicity, and CdTe can cause lung, kidney or liver failure. 90% of a solar panel is glass, yet it cannot be recycled because the panels also contain plastic and lead, so panels end up in landfills. The International Renewable Energy Agency (IRENA) estimates that there will be 78 million tonnes of PV raw materials globally by 2050 if companies do not recycle. Wind (a secondary effect of solar energy) is a clean and sustainable way to generate electricity with a small land footprint. Modern wind turbines are relatively low maintenance and can run for months at a time, compared to their relatively unreliable predecessors. Like solar, wind energy is also free, making it a cheap input resource for utility companies. Wind energy’s carbon footprint is considered the lowest per unit of electric energy generated, as it produces no carbon emissions during operation. However, wind turbines require land and are usually built in rural areas, leading to an “industrialisation of the countryside” that affects wildlife and fish habitats. Spinning turbine blades pose a threat to flying wildlife such as birds and bats. Most wind turbines contain sulfur hexafluoride (SF6), which is 23,500 times more potent than carbon dioxide as an atmospheric heating agent. However, only a small amount is released from the turbines during operation, and it is therefore not considered to have a significant impact on the environment, though some authorities are raising serious challenges about the reported accuracy, and so the true extent, of SF6 release to the atmosphere from electricity generation. Biomass is organic material derived from plant, animal, and human waste. Energy is generated by burning or anaerobically digesting/fermenting biomass products, which releases heat in similar ways to burning fossil fuels and also raises concerns about air emissions via flue gas waste.
The energy transformation process emits harmful greenhouse gases into the environment. Biomass burning is the second largest source (after fossil fuels) of non-methane volatile organic compounds (NMVOCs), which include ethanol and formaldehyde. Biomass plants for liquid transportation fuels in particular require significant land to grow feedstock. For example, producing 3,914,000 tonnes of liquid transportation biofuel requires 2,800,000 hectares of land for energy crop production. In addition, due to the significant use of wood in the production of non-transportation bioenergy, there is also a risk of deforestation. Geothermal energy is heat derived within the sub-surface of the earth, where water and/or steam carry the geothermal energy to the Earth’s surface. Geothermal energy can be used for heating and cooling purposes, or to generate clean electricity. The most common type of geothermal power plant (the hydrothermal plant) is located near geologic “hot spots” (hot molten rock close to the earth’s crust that produces hot water). Other types include enhanced geothermal systems (hot dry rock geothermal), in which the earth’s surface is drilled, cold water is pumped in, and the water returns to the surface heated. During the drilling phase, harmful gases such as CO2, CH4, ammonia (NH3) and hydrogen sulphide (H2S) can be released, affecting air and water quality. In addition, terrain changes may occur, expressed as seismic shocks, subsidence or even volcanic eruptions. Hydroelectric power is sourced from large dams and small run-of-the-river plants and delivers approximately 16% of the world’s electricity today. The largest producers are China, Brazil, Canada, and the United States. The dams capture free-flowing water, replenished by rain and snow, into hydro plants.
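The biofuel land-use figures above imply a yield of roughly 1.4 tonnes of fuel per hectare. A quick sketch of that arithmetic (variable names are mine, not from the source):

```python
# Biofuel land-intensity arithmetic from the figures in the text.
biofuel_tonnes = 3_914_000   # tonnes of liquid transportation biofuel
land_hectares = 2_800_000    # hectares of energy crop production

tonnes_per_hectare = biofuel_tonnes / land_hectares
hectares_per_tonne = 1 / tonnes_per_hectare

print(f"{tonnes_per_hectare:.2f} t of biofuel per hectare")
print(f"{hectares_per_tonne:.2f} ha of cropland per tonne of fuel")
```

That low per-hectare yield is the core of the land-use concern: scaling biofuel output means committing very large areas to energy crops.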
Hydroelectric plants have the ability to generate large amounts of electricity, and output can easily be adjusted to meet consumer demand by controlling the flow of water to the electricity-generating turbines. Dams and hydroelectric plants block the natural flow of rivers and sediments, which can impact downstream habitats. This disrupts the migration routes of fish and decreases the flow of nutrients required by underwater habitats. In addition, dams can prevent fish from reaching their natural environment and spawning grounds, reducing fish populations as fish become trapped inside the dams. Hydrokinetic energy is sourced from river currents or waves. River hydrokinetic energy (RHK) is a practically untapped renewable energy source. Electricity-generating turbines are placed directly in fast river flows, avoiding the need for a dam to impede the water. The kinetic energy of the flow is then converted into electricity. However, energy is being removed from the ecosystem, and depending on the relative scale of the turbines this could be significant. Wave energy technology can require a significant amount of ocean space, competing with the shipping and fishing industries while potentially damaging local marine life. Wave technology is reported to produce 0.05 pounds of CO2 equivalent per kilowatt-hour (kWh), compared to 1.4 to 3.6 pounds of CO2 per kWh for coal-generated electricity: 28 to 72 times less CO2 emitted per kWh. Renewable energy technologies have been observed to have environmental impacts, albeit at lower levels than conventional energy sources. With sustainable investment at the forefront, investors are more environmentally conscious and incorporate ESG (Environmental, Social, and Governance) filters into investment decisions to better identify risks and opportunities.
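The “28-72 times less” comparison for wave versus coal power follows directly from the per-kWh figures quoted above; a two-line check (variable names are mine):

```python
# Emissions-intensity ratio check, in lb CO2-equivalent per kWh.
wave_lb_per_kwh = 0.05
coal_lb_per_kwh = (1.4, 3.6)

ratios = [round(c / wave_lb_per_kwh) for c in coal_lb_per_kwh]
print(ratios)  # [28, 72]: coal emits 28 to 72 times more CO2 per kWh
```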
The Very Real Reindeer and How They Became Associated With Christmas

Unlike Santa, elves or even clean coal, reindeer are real. They may not fly, but there’s a good deal of truth behind the many myths of Christmas’s favorite animal. Yes, they do live in extremely cold conditions. Yes, they are known to pull sleds. And, yes, their noses really do turn a shade of red given the right conditions. First off, caribou and reindeer are essentially the same animal and are classified as the same species (Rangifer tarandus). They are both part of the deer family, Cervidae, which also includes deer, elk and moose. However, there are subtle differences. “Reindeer” is often used to describe the domesticated animals, the ones that are herded and employed by humans to pull sleds. They are also often smaller and have shorter legs than their wild brethren. In addition, the name reindeer is more often used to refer to the Eurasian variety, the ones that live in Siberia, Greenland and northern Asia. The word “caribou” tends to mean the North American (that is, living in Canada and Alaska) and/or the wild variety. Because caribou are wild and reindeer are domesticated, scientists agree that most of the differences between the two arose through domestication rather than being inherent. Caribou are larger, more active, faster and migrate further than reindeer. In fact, the caribou undertake the largest land migration of any animal in North America every year in search of better conditions and food for their young. Antlers are the defining characteristic of many large deer, and Rangifer tarandus certainly have large antlers (in fact, the largest and heaviest antlers of any living deer species relative to body size). However, there are differences between their antlers and those of other deer. Unlike other deer species, both male and female Rangifer tarandus can have antlers, but they possess them at different times of the year depending on sex. Males start growing them in February and shed them in November.
Females start growing them in May and keep them until their calves are born sometime in the spring. This has led many to note that Santa’s reindeer (including Rudolph) would technically have to be all female, because males usually shed their antlers by November; only females still have them through the Christmas season. For both caribou and reindeer, cold climates are where they thrive. Covered from head to toe in hollow hairs that trap air and insulate from the cold, they are built for the tundra and high mountain ranges. Their hooves and footpads are also adapted for frigid temperatures, shrinking and contracting in the cold, which exposes the rim of the hoof. This allows them to gain better traction by cutting into the ice and snow. Another cold-weather adaptation is that the animal’s nose does, in fact, turn red. In much the same manner as humans, caribou and reindeer have a dense network of blood capillaries in their nasal cavities – actually 25% more than humans. When the weather turns particularly cold, blood flow in the nose increases. This helps keep the nose surface warm when they root around in the snow looking for food; plus, it’s essential for regulating the animal’s internal body temperature. The result is a reddened nose, matching Santa’s own cold-weather red nose. It’s believed reindeer were domesticated by native peoples (particularly the Nenets) at least two thousand years ago in northern Eurasia. Reindeer bones have been found in ancient caves in Germany and France, meaning they once roamed much of Europe. Old Chinese annals dating back nearly eighteen hundred years also mention domesticated reindeer. Over a thousand years later, Marco Polo also wrote about tamed reindeer in his journals. People used reindeer in much the same way we use horses today, to transport people and supplies. There is even a good deal of evidence that humans used to milk reindeer.
To this day, there are still certain peoples (including the Sámi of Scandinavia, Northern Europe’s oldest surviving indigenous people) who have come to rely on reindeer domestication. Native peoples in Siberia and Canada (where, again, they are called caribou) use reindeer for clothing, work, food and even to pull sleds. In fact, they are thought to be more powerful than an average horse and can run up to forty miles an hour, even with an attached sled. Beyond horse-like chores, reindeer meat is also an important food source and has come to be considered something of a delicacy. (There’s even reindeer jerky.) While reindeer seem a pretty obvious animal to help Santa on his Christmas travels, they didn’t become part of the Jolly St. Nick story until the 19th century. In 1821, a New York writer named William Gilley published a children’s booklet where Santa and reindeer were first mentioned together: ”Old Santeclaus with much delight, his reindeer drives this frosty night.” Later, Gilley would write that he knew about reindeer living in Arctic lands from his mother, who was from the area. A year later, Clement Clarke Moore would anonymously publish his poem “A Visit from St. Nicholas,” otherwise known as “The Night Before Christmas,” co-opting the idea and popularizing it as part of Christmas lore. It should be noted, though, that in his version he describes St. Nick riding a “miniature sleigh” with “eight tiny reindeer” that had little hooves. This, of course, explains how St. Nick was able to fit down a chimney: he was a tiny little elf. In the 20th century, it was department stores that pushed the reindeer and Christmas narrative even further. Working with businessman Carl Lomen – who had become known as the “reindeer king of Alaska” for selling the animal’s meat across the state – Macy’s put on what may be the first Christmas display featuring Santa, a sleigh and real reindeer in 1926.
Thirteen years later, the now-defunct department store Montgomery Ward distributed a coloring book featuring a cute little reindeer with a nose “red as a beet… twice as bright.” The author was an ad man named Robert L. May who, after writing the initial draft of the story, perfected it with the help of his four-year-old daughter. May’s boss did not like Rudolph the Red-Nosed Reindeer at first, as he felt a red nose implied the reindeer had been drinking. However, once it was partially illustrated by Denver Gillen, who worked in Montgomery Ward’s art department and was a friend of May’s, his boss decided to approve the story. In the first year after its creation, around 2.4 million copies of Rudolph the Red-Nosed Reindeer were given away. By 1946, over six million copies of the story had been distributed by Montgomery Ward, which was particularly impressive considering it wasn’t printed through most of WWII. After the war, demand for the story skyrocketed, receiving its biggest boost when May’s brother-in-law, radio producer Johnny Marks, created a modified musical version of the story. The first version of this song was sung by Harry Brannon in 1948, but it was Gene Autry’s 1949 version that made it nationally popular, selling 2.5 million copies in 1949 alone and over 25 million copies to date. Interestingly, despite the fact that May created the story of Rudolph and it was wildly popular, he did not initially receive any royalties for it because he had created it as an assignment for Montgomery Ward; thus, they held the copyright, not him. In a rare move for a business, in 1947, Montgomery Ward decided to give the copyright to May with no strings attached. At the time, May was deeply in debt due to medical bills from his wife’s terminal illness.
Once the copyright was his, May was quickly able to pay off his debts, and within a few years he was able to quit working at Montgomery Ward. Just under a decade later, despite being quite wealthy from Rudolph, he went back to work for them again until retiring in 1971. Today, reindeer (and caribou) are still found in cold, tundra climates across the northern world. Unfortunately, at least according to one study, reindeer populations globally are plunging. If things don’t improve for them soon, they may become as fictional as Santa himself. - The primary differences between the original Rudolph the Red-Nosed Reindeer story and the one we know today from the song and TV special are as follows. In the original story: - Rudolph did not live at the North Pole, nor was he descended from one of Santa’s reindeer. He was simply a regular reindeer living elsewhere in the world. - Santa knew nothing of Rudolph until the end of the story when, one foggy Christmas Eve, he was delivering presents to Rudolph’s house and saw the glow from Rudolph’s window. Due to the thickening fog that night, he decided to ask Rudolph to fly in the lead. - Despite being Jewish, Johnny Marks wrote many other Christmas songs, a few of which, like Rudolph the Red-Nosed Reindeer, have popularly survived today. These include Rockin’ Around the Christmas Tree, A Holly Jolly Christmas, and Run Rudolph Run, among others. - The voice actors who played Rudolph and Hermey in the stop-motion CBS classic version of Rudolph the Red-Nosed Reindeer now both live in the same retirement community in Ontario. - In that original TV version, Rudolph, Hermey, and Yukon Cornelius promise to help the toys on the Island of Misfit Toys.
However, in that original version, once Rudolph and company leave the island, they never actually bother to help the toys. This resulted in numerous complaints that Rudolph broke his promise, so a new scene was added to the end where Rudolph leads Santa to the island to collect the toys.
This drawing shows the train track joining the Train Yard to all the stations labelled from A to S. Find a way for a train to call at all the stations and return to the Train Yard. Without taking your pencil off the paper or going over a line or passing through one of the points twice, can you follow each of the networks? Can you cross each of the seven bridges that join the north and south of the river to the two islands, once and once only, without retracing your steps? I start my journey in Rio de Janeiro and visit all the cities as Hamilton described, passing through Canberra before Madrid, and then returning to Rio. What route could I have taken? Given the nets of 4 cubes with the faces coloured in 4 colours, build a tower so that on each vertical wall no colour is repeated, that is all 4 colours appear. Think about the mathematics of round robin scheduling. In how many distinct ways can six islands be joined by bridges so that each island can be reached from every other island... The graph represents a salesman’s area of activity with the shops that the salesman must visit each day. What route around the shops has the minimum total distance? If you can copy a network without lifting your pen off the paper and without drawing any line twice, then it is traversable. Decide which of these diagrams are traversable. Euler discussed whether or not it was possible to stroll around Koenigsberg crossing each of its seven bridges exactly once. Experiment with different numbers of islands and bridges. A Hamiltonian circuit is a continuous path in a graph that passes through each of the vertices exactly once and returns to the start. How many Hamiltonian circuits can you find in these graphs? A little mouse called Delia lives in a hole in the bottom of a tree.....How many days will it be before Delia has to take the same route again? The Four Colour Conjecture was first stated just over 150 years ago, and finally proved conclusively in 1976. 
It is an outstanding example of how old ideas can be combined with new discoveries. This article for teachers discusses examples of problems in which there is no obvious method but in which children can be encouraged to think deeply about the context and extend their ability to. . . . Read about the problem that tickled Euler's curiosity and led to a new branch of mathematics! The reader is invited to investigate changes (or permutations) in the ringing of church bells, illustrated by braid diagrams showing the order in which the bells are rung. You can trace over all of the diagonals of a pentagon without lifting your pencil and without going over any more than once. Can the same thing be done with a hexagon or with a heptagon? A personal investigation of Conway's Rational Tangles. What were the interesting questions that needed to be asked, and where did they lead? This article looks at the importance in mathematics of representing places and spaces mathematically. Many famous mathematicians have spent time working on problems that involve moving and mapping. . . . Investigate the number of paths you can take from one vertex to another in these 3D shapes. Is it possible to take an odd number and an even number of paths to the same vertex? Toni Beardon has chosen this article introducing a rich area for practical exploration and discovery in 3D geometry. This is the second of two articles and discusses problems relating to the curvature of space, shortest distances on surfaces, triangulations of surfaces and representation by graphs. This article invites you to get familiar with a strategic game called "sprouts". The game is simple enough for younger children to understand, and has also provided experienced mathematicians with. . . .
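Several of these puzzles (the bridges of Koenigsberg, the traversable networks, the polygon-diagonal tracing) hinge on the same degree-parity rule: a connected network can be drawn without lifting the pencil exactly when it has zero or two odd-degree vertices. A minimal Python sketch of that rule; the function name and graph encodings are my own illustrations, not part of the original problems:

```python
from itertools import combinations

def euler_status(edges):
    """Classify a connected multigraph by Euler traversability.

    Returns 'circuit' (closed trail using every edge once),
    'path' (open trail), or 'none', using the classic rule:
    0 odd-degree vertices -> circuit, exactly 2 -> path,
    otherwise not traversable. Assumes the graph is connected.
    """
    degree = {}
    for u, v in edges:
        degree[u] = degree.get(u, 0) + 1
        degree[v] = degree.get(v, 0) + 1
    odd = sum(1 for d in degree.values() if d % 2)
    return {0: "circuit", 2: "path"}.get(odd, "none")

# The seven bridges of Koenigsberg: two banks (N, S), two islands (A, B).
koenigsberg = [("N", "A"), ("N", "A"), ("N", "B"),
               ("S", "A"), ("S", "A"), ("S", "B"), ("A", "B")]

# Diagonals of a convex n-gon: every vertex pair except adjacent ones.
def diagonals(n):
    return [(i, j) for i, j in combinations(range(n), 2)
            if (j - i) % n not in (1, n - 1)]

print(euler_status(koenigsberg))   # all four land masses have odd degree
print(euler_status(diagonals(5)))  # pentagram: every vertex has degree 2
print(euler_status(diagonals(6)))  # six odd-degree vertices
```

Running this confirms Euler's negative answer for Koenigsberg, shows the pentagon's diagonals are traceable in one closed stroke, and shows the hexagon's are not (each vertex meets three diagonals, giving six odd vertices); the heptagon, with degree four everywhere, is traceable again.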
Working, creating and playing with reusable materials helps children acquire skills such as creativity, confidence, problem solving, focus and persistence. Children communicate their understanding of concepts and the world with materials. Our workshops are intended for teachers who would like to incorporate stimulating arts media and open-ended materials into the classroom. A certificate with contact hours will be provided upon the conclusion of the workshops. Workshops offered in the past have included: Introduction to UpCycling with Young Learners - Using Open Ended Materials With Children Workshop participants learn how to collect, prepare, store and display a wide array of reusable materials that young children can use to explore and communicate their understanding of the world. The presentation highlights the benefits of including open-ended materials with children and provides innovative, yet practical, ideas for using them with young children. We start by describing core skills that young children develop by experimenting with reuse materials, and then show how these skills lay the foundation for facilitated group and self-directed work in later years. Participants have an opportunity to try out the activities described in the presentation. Additional Workshop Topics See below and check our blog for UpCycle's workshop topics. We can customize a workshop to suit your needs! Materials Workshop - Printmaking and Paper Cloth Workshop participants explore the possibilities of working with young children to print and make paper cloth. From found-object printing, to making print plates, to developing a printing process to use with children, many different printing techniques will be shared. We will also make paper cloth, a collage material that is part paper and part fabric. This unique and beautiful product can be used for sewing, book binding and more.
The ‘holy grail’ solution to water shortage has always been the filtration of salt from the massive quantities of seawater on our planet. Of course, a few of these desalination techniques already exist in our contemporary technological scope. But this time around, researchers from the University of Illinois may have come up with a more effective method to filter salt from seawater – in the form of a new material with ‘nanopore’ arrangements. The material in question is a nanometre-thick sheet of molybdenum disulphide (MoS2) punctured with a set of nano-sized pores. So when high volumes of seawater pass through the material layer, salt and other contaminants are more effectively blocked by the minuscule nanopores. In comparative terms, the scientists estimate that their thin-film membrane technology can account for the filtration of 70 percent more water than even graphene. The question naturally arises – how is this tech different from present-day desalination processes? Well, in conventional terms, most desalination processes rely on reverse osmosis, where the water is forced through a thin plastic membrane. Now, while these plastic membranes may seem to comprise ultra-thin layers, from the microscopic angle their structure is closer to tubular arrangements than 2D membranes. In essence, more pressure is required for the water to push through the material, which in turn equates to more energy expended in the process. Moreover, the tube-like nanometer funnels are susceptible to clogging (from both salt and dirt), which can further increase operational costs. Suffice it to say, the aforementioned molybdenum disulphide membrane eschews such problems by allowing water to pass with far less resistance (due to the prevalence of single-layer sheets). This is coupled with the fact that MoS2 is also a robust material that can take enormous pressure from the passing seawater volumes.
However, beyond just its streamlined characteristics, it is its intrinsic chemistry that proves to be the ultimate advantage of this technology. As Mohammad Heiranian, first author of the study, makes clear: “MoS2 has inherent advantages in that the molybdenum in the center attracts water, then the sulphur on the other side pushes it away, so we have much higher rate of water going through the pore. It’s inherent in the chemistry of MoS2 and the geometry of the pore, so we don’t have to functionalise the pore, which is a very complex process with graphene.” With all said and done, the technology is still in its nascent stage, with the researchers looking forward to collaborative efforts with other organizations to field test the water desalination credentials of MoS2. And in spite of the novelty of this single-layer material, the scientists are quite confident of the effectiveness of MoS2 in actual commercial applications. As Amir Barati Farimani, one of the members of the team, said: “Nanotechnology could play a great role in reducing the cost of desalination plants and making them energy efficient. I’m in California now, and there’s a lot of talk about the drought and how to tackle it. I’m very hopeful that this work can help the designers of desalination plants. This type of thin membrane can increase return on investment because they are much more energy efficient.” The study was originally published in Nature Communications. Source: University of Illinois
Normal Biological Foam As bacteria grow and divide, they produce extracellular materials including polysaccharides, proteins (enzymes), and even DNA/RNA. Also important for biofilm or floc adhesion, these materials trap air bubbles in a light-colored foam that is easily broken by water sprays if it even builds up more than 1 - 2" on the water surface. When the system is in log-phase growth, you may notice both a lighter color and more foam than when the system is mature, in decline-phase growth. Nocardia/Microthrix Parvicella (filaments) Nocardia (often actually a Gordonia sp.) is promoted by long-chain fatty acids (FOG), and Microthrix is often triggered by low F/M, long-chain fatty acids and lower temperatures. Both genera can exist in wastewater and not cause foaming, but under the right conditions they start to produce hydrophobic extracellular materials. These hydrophobic biological polymers trap air and create the thick, stable, greasy foam. The best way to prevent foaming is to reduce grease and keep appropriate MCRT or F/M ratios in the system. Surfactants are used in many products, not just cleaning agents. Firefighting foams, food-product emulsion stabilizers, and even many personal care products contain surfactant chemistries. At normal concentrations, bacteria readily remove surfactants, or they are diluted beyond their ability to reduce surface tension (which creates foam). Periodically, you may see higher loadings hit the aeration basin. With surfactants the foam is usually stable and has a light color. While their presence is temporary and foam levels should drop, you can use commercial antifoam products to prevent excess foam during peak events. Plant Soaps, Starches, & Proteins The above chemicals are natural polymers found in wastewater. Foaming occurs when normal biological foam becomes stabilized by the natural polymers. In this case, antifoams and water sprays are often effective.
We have also effectively used pretreatment with enzymes to pre-digest the polymers prior to the aeration basin - this can be very valuable in food processing wastewater treatment.
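As a rough illustration of the F/M ratio mentioned above, here is a minimal sketch using the standard textbook definition (food-to-microorganism ratio = BOD load divided by biomass under aeration); the function name and example numbers are illustrative assumptions, not plant-specific guidance:

```python
def f_to_m(flow, bod_mg_l, basin_volume, mlvss_mg_l):
    """Food-to-microorganism ratio (per day) for an aeration basin.

    F/M = (flow * influent BOD) / (basin volume * MLVSS).
    Flow and basin volume must use consistent units (e.g. million
    gallons per day and million gallons) so the mg/L terms cancel.
    """
    return (flow * bod_mg_l) / (basin_volume * mlvss_mg_l)

# Illustrative example: 1.5 MGD at 200 mg/L BOD entering a
# 0.75 MG basin holding 2500 mg/L MLVSS.
ratio = f_to_m(1.5, 200.0, 0.75, 2500.0)
print(round(ratio, 2))  # 0.16 per day
```

Conventional activated sludge plants typically target something on the order of 0.2-0.5 per day; a ratio well below that range is the kind of low-F/M condition the article associates with Microthrix foaming.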
Shrimp is more than just a popular delicacy served in seafood restaurants across the country. It is also an important part of the ocean food chain. There are many different types of shrimp, and this organism is a lot more complicated than the food we are used to seeing on our dinner plates. In fact, shrimp come in many different shapes and sizes, anywhere from the size of a fingernail to as big as 8 inches in length. What Are Shrimp? Shrimp are scientifically classified as crustaceans and are most structurally similar to crabs and lobsters. Their bodies are designed with a hard outer shell, known as the exoskeleton, which forms the head, thorax and abdomen. They also have two pairs of antennae that allow them to taste and touch, as well as eight pairs of legs that help shrimp move around and feed. Despite having very small brains, shrimp actually show rather complex behaviors. According to MSN Encarta, cleaner shrimp feed on dead scales and parasites from the skin of living fish. However, the interesting part is not what they eat, but how they do it. Cleaner shrimp are known to perform a stylized dance that attracts other fish to come close enough for the shrimp to feed on, or "clean," them. Sometimes these fish are over twice the size of the shrimp feeding on them. There are several different types of shrimp, all of which live in different parts of the ocean, as well as in freshwater lakes and streams. Open-water shrimp are known to be constantly moving around to avoid the threat of predators, so there is no specific place in the ocean where they can be found. However, they typically feed close to the ocean's surface during the night and spend their days hiding in the depths of the ocean. The types of shrimp most often eaten by humans are called bottom-dwellers. Just as the name implies, these shrimp live on the bottom of the ocean floor, in areas known as seabeds.
More interesting, though, is a type of shrimp found in Southeast Asia called the burrowing shrimp. This species creates habitats by digging holes into soft sediment. However, the muddy waters created by their digging behavior decrease the oxygen levels in the water and, in turn, can have a negative effect on rice fields growing nearby.
Rett syndrome is a problem with the development of the nervous system. It is most common in girls. Boys with Rett syndrome are usually stillborn or die shortly after birth. Many people with Rett syndrome live into adulthood. Most have severe disabilities, including an inability to talk or walk. Rett syndrome is most often caused by nonhereditary mutations of a specific gene on one X chromosome. Females have 2 X chromosomes. Males have 1 X and 1 Y chromosome. Males usually die from Rett syndrome because they lack a second, normal X chromosome. The second, normal X chromosome in girls may provide some protection. In Rett syndrome, the mutated gene affects methyl-CpG-binding protein 2 (MECP2). When it is mutated, there is a deficiency of this important protein. Not everyone with the MECP2 mutation will have Rett syndrome. Some females may be normal or have only mild symptoms. It is not clear what causes the Rett gene to mutate. Rett syndrome is usually nonhereditary. This means it does not run in families. Children with Rett syndrome start developing normally. They will smile, move, and pick items up with their fingers. But by 18 months of age, the developmental process seems to stop or reverse itself. The age of onset and the severity of symptoms differ from person to person. There are 4 stages.
Symptoms for each stage include:

Stage I: Early Onset Stage
- Occurs at age 6-18 months
- Can last for months
Symptoms may include:
- Less eye contact with parents
- Less interest in toys and play
- Slow head growth
- Calm, quiet baby

Stage II: Rapid Destructive Stage
- Occurs at age 1-4 years
- Can last weeks to months
Symptoms may include:
- Small head
- Developmental/intellectual disability
- Inability to purposely use hands
- Loss of previous ability to talk
- Repeatedly moving hands to mouth
- Other hand movements, such as clapping, tapping, or random touching
- Hand movements stop during sleep
- Holding breath, gaps in breathing, taking rapid breaths
- Irregular breathing that stops during sleep
- Teeth grinding
- Laughing or screaming spells
- Decreased social interactions
- Trouble sleeping
- Cold feet
- Trouble crawling or walking

Stage III: Plateau Stage
- Occurs at preschool through school years
- Can last for years
Symptoms may include:
- Difficulty controlling movement
- Less irritability and crying
- Communication that may improve

Stage IV: Late Motor Deterioration Stage
- Occurs when stage III ceases, anywhere from age 5-25
- Can last up to decades
Symptoms may include:
- Decreased ability to walk
- Muscle weakness or wasting
- Stiffness of muscles
- Spastic movements
- Curvature of the spine
- Breathing trouble and seizures that often decrease with age

You will be asked about your child’s symptoms and medical history. A physical and neurological exam will be done. Genetic testing can often confirm the diagnosis. Your doctor may also do tests to rule out other conditions, like autism. Some symptoms of Rett syndrome are similar to those of autism. Children with autism, who are more often boys, do not maintain person-to-person contact. Most girls with Rett syndrome, though, prefer human contact to focusing on inanimate objects. These differences may give the first clue in diagnosing Rett syndrome.
Physical and developmental symptoms can often lead your doctor to a Rett syndrome diagnosis. Your child's bodily fluids may be tested. This can be done with blood tests. Your child's brain may be tested. This can be done with: - Electroencephalogram (EEG) There is no cure for Rett syndrome. People with this condition need to be monitored for problems of the bones and heart. Treatment aims to control symptoms and includes: Medications that may help with symptoms include: - Anticonvulsants to control seizure activity - Stool softeners or laxatives if constipated - Drugs to help with breathing - Drugs to ease agitation - Drugs to relieve muscle spasms Your doctor may also recommend nutritional support. These therapies will help manage physical and general care challenges: - Occupational therapy—to help the child learn to perform daily activities, such as dressing and eating - Physical therapy—to help improve coordination and movement - Speech therapy—to build communication skills - Social workers—to help a family cope with caring for a child with Rett syndrome Techniques for Limiting Problem Behaviors Keeping a diary of your child's behaviors and activities helps determine the cause of agitation. The following may help to prevent or control behavior problems: - Warm baths - Soothing music - Quiet environment There is no way to prevent Rett syndrome. If you have questions about the risk of Rett syndrome in your family, talk to a genetic counselor. International Rett Syndrome Foundation http://www.rettsyndrome.org National Institute of Neurological Disorders and Stroke http://www.ninds.nih.gov Health Canada http://www.hc-sc.gc.ca Ontario Rett Syndrome Association http://www.rett.ca Kazantsev AG, Thompson LM. Therapeutic implication of histone deacetylase inhibitors for central nervous system disorders. Nature Reviews Drug Discovery. 2008;7:854-868. Rett syndrome. EBSCO DynaMed Plus website. Available at: https://www.dynamed.com/topics/dmp~AN~T115304/Rett-syndrome.
Updated August 11, 2016. Accessed September 23, 2016. Rett syndrome fact sheet. National Institute of Neurological Disorders and Stroke website. Available at: http://www.ninds.nih.gov/disorders/rett/detail%5Frett.htm. Updated July 27, 2015. Accessed March 10, 2016. - Reviewer: EBSCO Medical Review Board Kari Kassir, MD - Review Date: 03/2018 - Update Date: 06/03/2014
Smell of CO2 boosts mosquitoes' ability to visually track targets In order to better trap or evade disease-carrying Aedes aegypti mosquitoes, it helps if we know more about the manner in which they track their victims. New research now indicates that it's a matter not just of smell, but also of enhanced visual processing that's triggered by smell. It has long been known that – among other things – mosquitoes are attracted to the odor of the carbon dioxide we exhale. A team of Virginia Tech scientists, however, wondered if there was more to it than that. Led by Asst. Prof. Clément Vinauger, they built a sort of "flight simulator" for mosquitoes in order to find out. A series of tethered female Aedes aegypti mosquitoes were placed in the device, which uses an immersive cylindrical array of flashing LEDs to simulate moving objects – in a real-life scenario, those objects could be people. Each insect, which had been fitted with a tiny 3D-printed helmet used to monitor its brain activity, was then subjected to a puff of CO2 (similar to what a person might exhale). It was found that when this happened, not only did the brain's olfactory center register the odor, but that region also responded by activating neurons in the brain's visual processing center. This in turn allowed the insect to visually track the simulated moving objects much more accurately – the researchers were able to determine this by analyzing the manner in which the mosquitoes' wingbeat frequency, acceleration, and turning behavior changed in accordance with the moving LED light patterns. "Analyzing how mosquitoes process information is crucial to figuring out how to create better baits and traps for mosquito control," says Vinauger.
"My research aims at closing the key knowledge gaps in our understanding of the mechanisms that allow mosquitoes to be such efficient disease vectors and, more specifically, to identify and characterize factors that modulate their host-seeking behavior." A paper on the research was recently published in the journal Current Biology. Source: Virginia Tech
We demystify numbers and make math concepts relevant and fun for children. With playful manipulatives, music, and rhymes, our numbers and math program teaches counting, comparisons, spatial awareness, patterning, sequencing, matching, sorting, problem solving, and even Pre-K geometry skills. The program helps students build number sense right from the start. They also get time to play with real objects and test their ideas so that math becomes real and meaningful. Children also develop oral language that helps them learn about and express math concepts. Visit the Learning Lounge for Scope & Sequence and Sample Lessons.
Native code compiler for Java (NCCJ) is a compiler application that converts Java code to native code that can be executed without the need for interpreters. A native code compiler for Java translates Java code into a binary representation that can be linked to precompiled library files and resources to create an executable program. Native code compilers eliminate the need for the JVM and interpreters to convert Java bytecode, which is a portable intermediate code. By converting Java code directly into machine code, native code compilers help reduce redundancy, hinder reverse engineering and optimize program execution. Java code is usually converted into an intermediate bytecode, which is then compiled into a machine-dependent code with the help of a JVM running on each machine where the program is to be executed. This particular feature makes Java programs flexible and portable across a wide range of devices. But it introduces overhead and may cause Java programs to take more time than natively compiled code. As the primary design concern for Java was to make it a platform-independent and secure development model, the execution performance lag due to the bytecode feature was sidelined. But when developers want to improve execution performance, they may choose to natively compile the Java classes or certain parts of the code. Native code compilers for Java help achieve this, and thus help achieve better processing speed than bytecode interpretation. The increase in speed may occur due to several factors. The two major types of native code compilers are just-in-time (JIT) compilers and ahead-of-time (AOT) compilers. JIT compilers allow the JVM to translate Java bytecode to machine code as and when needed at runtime.
AOT compilers compile the Java code within a JAR file into native shared libraries before execution time. Native code compilation is also known as static compilation and provides consistent performance.
Pneumococcal diseases are becoming more common among adults these days. Each year pneumococcal diseases kill thousands of adults around the world, and many thousands more end up in the hospital. Pneumococcal infection is caused by the bacterium Streptococcus pneumoniae, which causes infections of the lungs (pneumonia), the bloodstream (bacteremia), and the lining of the brain and spinal cord (meningitis). Vaccines are the best way to prevent pneumococcal diseases. There are mainly two types of vaccines that provide protection against this serious disease. PCV13 – Pneumococcal Conjugate Vaccine PCV13 provides protection against 13 strains of pneumococcal bacteria that are found commonly in children and adults. PCV13 is best recommended for babies, adults over the age of 65, and people in the age group 2 to 64 with health conditions such as HIV, diabetes mellitus, heart disease, etc. It is given as several doses to children, and as a single dose to adults. To infants, PCV13 is given as a series of four shots. The first shot is given at the age of 2 months, and then at the ages of 4 months, 6 months, and 12-15 months. PPSV23 – Pneumococcal Polysaccharide Vaccine PPSV23 protects people from 23 different strains of pneumococcal bacteria. The vaccine is given as a single shot to adults, and is best recommended for adults over the age of 65, people in the age group 2 to 64 with health conditions such as HIV, diabetes mellitus, heart disease, etc., and adults who use tobacco products. Pneumonia Shots – How Long Do They Last? The number of shots, and the duration through which they remain effective, is mainly decided by the age group that you belong to. - Children under the age of 2 years: Children belonging to this category require four shots, which are given at 2 months, 4 months, 6 months, and 12-15 months of age.
- Older than 65 years: Adults of this age group receive two shots, which will protect them for the rest of their life. - People of the age group 2 to 64 years: People in this category will require anywhere from one to three shots. The number of shots is determined mainly by the person's immune system disorders and smoking habits. Who Needs The Vaccine & Who Doesn't? The pneumococcal vaccine is recommended for people of all age groups (infants to adults over the age of 65). It is particularly important for individuals suffering from a weak immune system and other health conditions. Adults who use tobacco products are also advised to get vaccinated to prevent this serious disease. Furthermore, there are certain groups of individuals who shouldn't get a pneumococcal vaccine. In the case of the PCV13 vaccine, individuals who are currently feeling under the weather, or suffering from life-threatening allergies, should refrain from getting vaccinated. The same holds for PPSV23; in addition, pregnant women are advised not to get the PPSV23 vaccine. Side Effects Of Pneumococcal Vaccine Although not common, sometimes both adults and children can suffer from serious allergic reactions to pneumococcal vaccines. Though we are speaking of cases that occur in roughly 1 person in a million, we still ought to be careful. Symptoms of an allergic reaction occur shortly after receiving the vaccines. Some of these symptoms are listed below. - Lightheadedness - Difficulty breathing - Rapid heartbeat - Clammy skin Pneumococcal diseases have the potential to become life-threatening in both children and adults. The two vaccines available in the market these days provide strong protection against the bacteria. Side effects are usually mild and resolve in a few days; it is only in very rare cases that an allergic reaction occurs.
To be on the safe side, it is always best to talk to a doctor about which pneumococcal vaccine you should get and when.
There is some degree of risk, no matter how small, that substantial amounts of radiation may leak from one or more of the Japanese reactors, make its way into the upper atmosphere, and ultimately drop down on the United States and the rest of the world. The amount of exposure to citizens outside of Japan would ultimately be small; but unfortunately, in the case of some types of radiation, small amounts of exposure can have significant health consequences — particularly for the vulnerable. Learn what you can do to protect yourself from nuclear fallout. Certainly, everyone now knows about the major earthquake and tsunami that struck Japan last week. And certainly, everyone who has been following this great human tragedy is also aware that three nuclear power stations are at risk. After that, however, accurate information is spottier, and speculation is far, far higher. Words such as meltdown and partial meltdown and containment are being bandied about with little understanding of what they actually mean. And far too many people outside of Japan are panicking with little justification for panic…yet. Let me quickly explain over the next few minutes: - What we know is happening. - What might happen in the near future. - What the potential dangers are. - What precautions you might want to take — for yourself and your children. China syndrome — not really Several decades ago, it was hypothesized that in an extreme nuclear reactor accident, the reactor’s core could get so hot that it might possibly melt down, burn through the containment barriers beneath it, and then continue to flow downwards through the floor of the containment building — ultimately melting all the way through the crust of the earth and popping out on the other side in China. Thus, the name: “China syndrome.” In truth, this scenario is likely as fictional as the movie based on the name. 
The ground beneath the reactor would absorb most of the heat during a meltdown, transferring it ever outward to the surrounding ground. For that reason, it is likely that the uranium core of a nuclear reactor would not melt down into the earth more than about 90-100 feet (about 30 meters), which is a bit short of the 8,000 miles needed to realize the China syndrome. So, the bottom line is that a meltdown by itself would be unlikely to pose a danger to the world at large — although it would be severely damaging to the area immediately surrounding the reactor for many, many years to come. Into the atmosphere For the world at large, the danger comes when containment is broken, not downwards in a China syndrome type event, but rather upward, when an explosion or some subsequent event releases substantial amounts of radioactivity high into the atmosphere. For example, in the case of the Chernobyl disaster in 1986, the Number Four RBMK reactor went out of control during a test, and the resulting explosion demolished the entire reactor building. It was a subsequent fire that then spewed large amounts of radiation high (a critical point) into the atmosphere. Once in the upper atmosphere, high winds and jet streams can carry the radioactivity all around the world, ultimately dropping radioactivity on everyone. But the reactors at Chernobyl were very, very different from the reactors in Japan. Unlike most reactors used in the developed world (including Japan), the Soviet Union's RBMK reactors were built without a containment structure, the concrete and steel dome over the reactor designed to keep radiation inside the plant in the event of such an accident. The bottom line is that even if there is a meltdown in one of the Japanese reactors, it is unlikely to breach containment in an upward direction. In fact, there has already been an explosion in one of the reactors with no breach of containment. 
(Understand, containment structures in nuclear reactors are really, really strong. In the United States, for example, they must be strong enough to withstand the impact of a fully loaded passenger airliner without rupture — for obvious reasons.) And if there is any breach of containment, it is likely to be small in scope and unlikely to reach the upper atmosphere, in which case, damage would be localized, not global. That said, it is important to recognize that “unlikely” does not mean “impossible.” In other words, there is some degree of risk, no matter how small, that substantial amounts of radiation may leak from one or more of the Japanese reactors, make its way into the upper atmosphere, and ultimately drop down on the United States and the rest of the world. The amount of exposure to citizens outside of Japan would ultimately be small; but unfortunately, in the case of some types of radiation, small amounts of exposure can have significant health consequences — particularly for the vulnerable. The particularly nasty forms of radiation that we’re talking about include plutonium, iodine-131 and 134, strontium-90, and cesium-137. Given exposure to radioactive fallout, you will want to focus on three things: - Protecting your thyroid, the most vulnerable organ in your body - Removing as much of the radiation as possible from your body, as quickly as possible - Protecting your DNA from genetic mutation Let’s now talk about how we do this. As mentioned above, radioactive iodine-131 is one of the elements likely to be released into the upper atmosphere after a nuclear event. Carried great distances on high speed winds, it can then drop down into the lower atmosphere, where it may be breathed into the lungs. It can also contaminate crops on the ground and get into the body through food and drink. (Fruits and wines are particularly susceptible.) The problem is that your thyroid gland has a tremendous affinity for iodine, radioactive or otherwise. 
In other words, the thyroid gland quickly absorbs radioactive iodine, where it can injure or even kill the gland. In fact, radioactive iodine is often administered by doctors specifically to kill the thyroid as a treatment in some thyroid diseases such as Graves' disease. If, on the other hand, you want to protect your thyroid from exposure to radioactive iodine as might be experienced through fallout, taking non-radioactive iodine just before (or immediately after) exposure will block radioactive iodine from being taken into the thyroid gland. It will thus protect this gland from injury. However, it is important to note that it will not prevent radioactive iodine, or any other form of radiation for that matter, from entering your body. It will not repair damage to the thyroid; nor will it remove the radioactive iodine once it has entered your body. Taking non-radioactive iodine before exposure will merely "pre-fill" your thyroid with iodine so that there is no room for the radioactive iodine to be taken up by your thyroid; thus the need to take the non-radioactive iodine before or immediately after exposure. Likewise, if radioactive iodine is not present or imminent, taking prophylactic non-radioactive iodine offers no protection, not to mention some risk from reactions to the high levels of supplemental iodine. Ideally, the best time to take supplemental iodine is an hour or so before exposure, or immediately upon exposure, for maximum protection. Take it too soon in advance, and it will begin to clear the thyroid before the radioactive iodine enters the body, thus diminishing its effectiveness. (Iodine pretty much clears the thyroid in about 24 hours.) Take it too late, and the radioactive iodine will have already been taken up by the thyroid, in which case there will be little benefit. 
One thing to keep in mind is that a good liquid form of iodine, such as is available at most health stores will be taken up by your body almost immediately after ingestion, thus allowing you to wait until the last possible second. Note: you don’t have to jump the gun. Public health officials will advise you when you need to take supplemental iodine as protection. (Yes, I understand, they may prevaricate about the events leading up to a nuclear event. But once the event has happened and the radiation has escaped into the atmosphere, it will be impossible to hide. You will be told.) The trick is to make sure you have a supply of iodine on hand when you need it. Public health officials are prepared to provide everyone supplies of potassium iodide after a localized incident in areas surrounding a single nuclear plant, for example. But they certainly do not have enough iodine on hand to cover broad areas of a country to protect from exposure settling down from the upper atmosphere. Unfortunately, if you wait until the last minute, stores are likely to be sold out in a spree of panic buying — as we are seeing now. Just keep an emergency supply on hand for you and your family, and you’ll be fine. The standard form of iodine used in nuclear power plants to protect workers against radiation exposure in case of a leak is potassium iodide (also called KI). It is a salt of iodine that has the virtue of being stable. It will also be the kind you hear recommended most often on television since newscasters get their marching orders from the medical community and governments. But potassium iodide is not the only form of stable iodine. In fact, all food grade sources (and extracts from those sources) such as kelp are equally stable and may be used instead. You just have to make sure you use enough. How much iodine should I take? 
According to the FDA, the following doses are appropriate to take after internal contamination with (or likely internal contamination with) radioactive iodine: - Adults up through age 40 should take 130 mg. (Note: this is nearly 900 times the normal daily recommended dose of 150 mcg. Also note that most iodine supplements sold in health food stores are sold in microgram doses, not the milligrams you need for thyroid blockage.) People over the age of 40 should only take supplemental iodine if they are exposed to a large dose of radiation. Older adults are the least likely to develop thyroid cancer and the most likely to have allergic reactions to the iodine. Obviously, the older you are, the less you should think about taking prophylactic doses of iodine. - Women who are pregnant or breastfeeding should take 130 mg, and should take only one dose. And, I hate to say this, but nursing mothers should probably stop breastfeeding if they are exposed and use formula if available. If formula is not available, continue breastfeeding. - Children between the ages of 3 and 18 should take 65 mg. Children who weigh 150 lbs or more should take 130 mg, regardless of their age. - Infants and toddlers between the ages of 1 month and 3 years (either nursing or non-nursing) should take 32 mg. - Newborns from birth to 1 month (both nursing and non-nursing) should be given 16 mg. Note: newborns less than 1 month old who receive more than one dose of KI are at particular risk for developing hypothyroidism. If not treated, hypothyroidism can cause brain damage. Infants who receive supplemental iodine should have their thyroid hormone levels checked and monitored by a doctor. Avoid repeat dosing. Note: The thyroid glands of a fetus and of an infant are most at risk of injury from radioactive iodine. Young children and people with low stores of iodine in their thyroid are also at risk of thyroid injury. A single dose of KI protects the thyroid gland for 24 hours. 
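The age-based dosing figures above reduce to a simple lookup. The sketch below is purely illustrative (the function name is mine, and the rule giving children of 150 lbs or more the adult dose follows the FDA figures quoted above); it is not medical advice, and the over-40 caveat, repeat-dose warnings, and pregnancy guidance are exactly the cases where a doctor's instructions override any table:

```python
def ki_dose_mg(age_years, weight_lbs=None):
    """Single prophylactic dose of potassium iodide (KI), in mg, per the
    FDA figures quoted above. Illustrative only -- not medical advice."""
    if age_years < 1 / 12:      # newborns, birth to 1 month
        return 16
    if age_years < 3:           # infants and toddlers, 1 month to 3 years
        return 32
    if age_years < 18:          # children 3-18; adult dose if >= 150 lbs
        return 130 if (weight_lbs or 0) >= 150 else 65
    return 130                  # adults (over 40: only after a large exposure)
```

Note that the hypothyroidism risk for newborns receiving repeat doses, and the advice that pregnant or breastfeeding women take only one dose, cannot be captured by a one-shot lookup like this.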
A one-time dose at the levels recommended above is usually all that is needed to protect the thyroid gland. In some cases, radioactive iodine might be in the environment for more than 24 hours. If that happens, local emergency management or public health officials may tell you to take one dose of KI every 24 hours for a few days. You should do this only on the advice of emergency management officials, public health officials, or your doctor. Avoid repeat dosing with KI for pregnant and breastfeeding women and newborn infants. For those individuals, evacuation may be the best alternative until levels of radioactive iodine fall. Taking a higher dose of iodine, or taking iodine more often than recommended, does not offer more protection and can cause severe illness or death. Also do not take iodine: - If you are already taking medication with high levels of iodine. - If you are allergic to iodine. - If you have an iodine-sensitive thyroid disease such as Graves' disease (in that case, do not take supplemental iodine without your doctor's permission and guidance). And finally, if panic buying has cleaned your local store's shelves of iodine tablets, there is an alternative. Most people probably went to the "iodine" section of their health food store. There's a good chance they didn't check out the herbal extract section. You may find an iodine extract there that might have been ignored because the dosage "seems" low at first glance. I particularly like the Tincture of Iodine with Kelp from Vitality Works. The dosage seems low since it's listed by the drop, but each bottle contains about 195 mg of iodine, making it easy to divide as necessary to get the appropriate dose. Uptake by the body is really quick. In most cases, two-thirds of a bottle will provide 130 mg. That means 2-3 bottles will cover most families. Is there anything else you should do? Iodine only protects the thyroid, and only protects against radioactive iodine (iodine-131 and iodine-134). 
It doesn’t offer any protection against plutonium, cesium-137, and strontium-90, which are also likely to be present. It doesn’t clear radioactive matter from your body. It doesn’t protect against damage to your genetic material. If worst comes to worst, then I recommend a three-pronged approach. - Use supplemental prophylactic iodine as described above. - Use a good colon detox formula that contains substantial amounts of apple pectin and montmorillonite clay. As I’ve said for years, apple pectin actually draws radioactive waste from your body and passes it out through your colon. It’s one of the reasons I include it in my Colon Detox formula — to remove everyday contamination. This is not wishful alternative health thinking. Apple pectin was used in the aftermath of Chernobyl to reduce the load of radioactive cesium in children. Montmorillonite clay also has a strong affinity for radioactive matter. - Use a supplement such as a good antioxidant formula or blood cleansing formula that contains chaparral extract. The primary biochemical in chaparral, NDGA (nordihydroguaiaretic acid), has been shown to protect the body against genetic damage caused by exposure to radioactivity. - (Addendum — added 3/17) Keep in mind that plutonium, cesium, and strontium are all metals and so, to some degree, can be chelated from the body. Look for a heavy metal detox formula that contains both chlorella and cilantro. - We do not have an emergency situation yet. - You don’t want to take prophylactic iodine prematurely since it clears out of the thyroid in 24 hours. - Overdosing on iodine is a distinct possibility if you get carried away. Don’t get carried away. The bottom line is that there is no need for panic. Outside of Japan, nothing has happened yet. Chill out. The odds of anything serious happening outside of Japan are very, very low. Your best bet is to make sure you have some iodine locked away for some future emergency. 
For further information, check out the related topic: Radiation Therapy, What Comes After? For more information about radiation and Japan’s nuclear disaster, continue on and read Jon’s newsletter titled Radioactive Fallout Update!
In optics, the f-number (sometimes called focal ratio, f-ratio, or relative aperture [Smith, Warren, "Modern Lens Design", McGraw-Hill, 2005]) of an optical system expresses the diameter of the entrance pupil in terms of the focal length of the lens; in simpler terms, the f-number is the focal length divided by the "effective" aperture diameter. It is a dimensionless number that is a quantitative measure of lens speed, an important concept in photography. The f-number, often notated as f/# or N, is given by N = f/D, where f is the focal length and D is the diameter of the entrance pupil. By convention, "f/#" is treated as a single symbol, and specific values of f/# are written by replacing the number sign with the value. For example, if the focal length is 16 times the pupil diameter, the f-number is f/16, or N = 16. The greater the f-number, the less light per unit area reaches the image plane of the system; the amount of light transmitted to the film (or sensor) decreases with the f-number squared. Doubling the f-number increases the necessary exposure time by a factor of four. The literal interpretation of the f/ notation for f-number is as an arithmetic expression for the effective aperture diameter (entrance pupil diameter), which is equal to the focal length divided by the f-number: D = f/N. The notation is commonly read aloud as "eff" followed by the number: f/8, for example, is usually pronounced "eff eight". The pupil diameter is proportional to the diameter of the aperture stop of the system. In a camera, this is typically the diaphragm aperture, which can be adjusted to vary the size of the pupil, and hence the amount of light that reaches the film or image sensor. The common assumption in photography that the pupil diameter is "equal" to the aperture diameter is not correct for many types of camera lens, because of the magnifying effect of lens elements in front of the aperture. A 100 mm lens with an aperture setting of f/4 will have a pupil diameter of 25 mm. 
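The relations N = f/D and the inverse-square dependence of exposure on N reduce to a few lines of arithmetic; a minimal sketch (the function names are mine):

```python
def f_number(focal_length_mm, pupil_diameter_mm):
    """N = f / D: focal length divided by entrance-pupil diameter."""
    return focal_length_mm / pupil_diameter_mm

def pupil_diameter_mm(focal_length_mm, n):
    """D = f / N: the literal reading of the f/ notation."""
    return focal_length_mm / n

# The examples from the text: a 100 mm lens at f/4 has a 25 mm pupil,
# and a 135 mm lens at f/4 has a 33.75 mm (about 33.8 mm) pupil.
assert pupil_diameter_mm(100, 4) == 25.0
assert pupil_diameter_mm(135, 4) == 33.75

# Image-plane illuminance falls with the square of the f-number, so the
# required exposure time grows as (N_new / N_old) ** 2.
def exposure_time_factor(n_old, n_new):
    return (n_new / n_old) ** 2

assert exposure_time_factor(4, 8) == 4.0   # doubling N -> 4x the time
```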
A 135 mm lens with a setting of f/4 will have a pupil diameter of about 33.8 mm. The 135 mm lens' f/4 opening is larger than that of the 100 mm lens, but both will transmit the same amount of light to the film or sensor. Other types of optical system, such as telescopes and binoculars, may have a fixed aperture, but the same principle holds: the greater the focal ratio, the fainter the images created (measuring brightness per unit area of the image). Stops, f-stop conventions, and exposure The term "stop" is sometimes confusing due to its multiple meanings. A stop can be a physical object: an opaque part of an optical system that blocks certain rays. The "aperture stop" is the aperture that limits the brightness of the image by restricting the input pupil size, while a "field stop" is a stop intended to cut out light that would be outside the desired field of view and might cause flare or other problems if not stopped. In photography, stops are also a "unit" used to quantify ratios of light or exposure, with one stop meaning a factor of two, or one-half. The one-stop unit is also known as the EV (exposure value) unit. On a camera, the f-number is usually adjusted in discrete steps, known as "f-stops". Each "stop" is marked with its corresponding f-number, and represents a halving of the light intensity from the previous stop. This corresponds to a decrease of the pupil and aperture diameters by a factor of √2, or about 1.414, and hence a halving of the area of the pupil. Modern lenses use a standard f-stop scale, which is an approximately geometric sequence of numbers that corresponds to the sequence of the powers of √2 (about 1.414): f/1, f/1.4, f/2, f/2.8, f/4, f/5.6, f/8, f/11, f/16, f/22, f/32, f/45, f/64, f/90, f/128, etc. The values of the ratios are rounded off to these particular conventional numbers, to make them easy to remember and write down. 
Shutter speeds are arranged in a similar scale, so that one step in the shutter speed scale corresponds to one stop in the aperture scale. Opening up a lens by one stop allows twice as much light to fall on the film in a given period of time; therefore, to have the same exposure at this larger aperture as at the previous aperture, the shutter speed is set twice as fast (i.e., the shutter is open half as long); the film will usually respond equally to these equal amounts of light, since it has the property known as "reciprocity". Alternatively, one could use a film that is half as sensitive to light, with the original shutter speed. Photographers sometimes express other exposure ratios in terms of 'stops'. Ignoring the f-number markings, the f-stops make a logarithmic scale of exposure intensity. Given this interpretation, one can then think of taking a half-step along this scale, to make an exposure difference of "half a stop". Most old cameras had an aperture scale graduated in full stops, but the aperture is continuously variable, allowing the photographer to select any intermediate aperture. Click-stopped aperture became a common feature in the 1960s; the aperture scale was usually marked in full stops, but many lenses had a click between two marks, allowing a gradation of one half of a stop. On modern cameras, especially when aperture is set on the camera body, f-number is often divided more finely than steps of one stop. Steps of one-third stop (1/3 EV) are the most common, since this matches the ISO system of film speeds. Half-stop steps are also seen on some cameras. As an example, the aperture that is one-third stop smaller than f/2.8 is f/3.2, two-thirds smaller is f/3.5, and one whole stop smaller is f/4. The next few f-stops in this sequence are: f/4.5, f/5, f/5.6, f/6.3, f/7.1, f/8, etc. To calculate the steps in a full stop (1 EV) one could use: 2^(0×0.5), 2^(1×0.5), 2^(2×0.5), 2^(3×0.5), 2^(4×0.5), etc. 
The steps in a half-stop (1/2 EV) series would be: 2^((0/2)×0.5), 2^((1/2)×0.5), 2^((2/2)×0.5), 2^((3/2)×0.5), 2^((4/2)×0.5), etc. The steps in a third-stop (1/3 EV) series would be: 2^((0/3)×0.5), 2^((1/3)×0.5), 2^((2/3)×0.5), 2^((3/3)×0.5), 2^((4/3)×0.5), etc. As in the earlier DIN and ASA film-speed standards, the ISO speed is defined only in one-third stop increments, and shutter speeds of digital cameras are commonly on the same scale in reciprocal seconds. A portion of the ISO range is the sequence: ... 16/13°, 20/14°, 25/15°, 32/16°, 40/17°, 50/18°, 64/19°, 80/20°, 100/21°, 125/22° ... while shutter speeds in reciprocal seconds have a few conventional differences in their numbers (1/15, 1/30, and 1/60 second instead of 1/16, 1/32, and 1/64). In practice the maximum aperture of a lens is often not an integral power of √2 (i.e., √2 raised to a whole-number power), in which case it is usually a half or third stop above or below an integral power of √2. Modern electronically controlled interchangeable lenses, such as those from Canon and Sigma for SLR cameras, have f-stops specified internally in 1/8-stop increments, so the cameras' 1/3-stop settings are approximated by the nearest 1/8-stop setting in the lens. Standard full-stop f-number scale, including aperture value AV: f/1 (AV 0), f/1.4 (AV 1), f/2 (AV 2), f/2.8 (AV 3), f/4 (AV 4), f/5.6 (AV 5), f/8 (AV 6), f/11 (AV 7), f/16 (AV 8), f/22 (AV 9), f/32 (AV 10). Notice that sometimes a number is ambiguous; for example, f/1.2 may be used in either a half-stop [http://books.google.com/books?vid=ISBN0240804953&id=YjAzP4i1oFcC&pg=PA136&lpg=PA136&dq=1.4-1.7-2-2.4-2.8&sig=krpuY3M6-EW10kDAdTPsrFLs2hk] or a one-third-stop system [http://books.google.com/books?vid=ISBN186108322X&id=DvYMl-s1_9YC&pg=PA19&lpg=PA19&dq=1.4-1.6-1.8-2-2.2-2.5&sig=6fS4fpjJjg8ga0w9bdDi-9nVF74] ; sometimes f/1.3 and f/3.2 and other differences are used for the one-third-stop scale [http://books.google.com/books?vid=ISBN0240514807&id=IWkpoJKM_ucC&pg=PA145&lpg=PA145&dq=1.4-1.6-1.8-2-2.2-2.5&sig=n7onatWjW2v6N15vL35lQluf3iU] . 
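The full-, half-, and third-stop series described above all come from one formula, N_i = 2^(i / (2k)) for k steps per stop; a small sketch (the function name is mine):

```python
def f_stop_scale(steps_per_stop=1, count=8):
    """Exact (unrounded) f-numbers N_i = 2 ** (i / (2 * steps_per_stop)).
    Each full stop multiplies N by sqrt(2) and halves the light."""
    return [2 ** (i / (2 * steps_per_stop)) for i in range(count)]

full = [round(n, 1) for n in f_stop_scale(1, 8)]
# -> [1.0, 1.4, 2.0, 2.8, 4.0, 5.7, 8.0, 11.3]
# The marked values f/5.6 and f/11 are conventional roundings of 5.66 and 11.31.

thirds = [round(n, 2) for n in f_stop_scale(3, 7)]
# -> [1.0, 1.12, 1.26, 1.41, 1.59, 1.78, 2.0]
```

This makes the "conventional rounding" point concrete: the engraved numbers are mnemonic labels, while the underlying geometric sequence is exact.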
Since all lenses absorb some portion of the light passing through them (particularly zoom lenses containing many elements), T-stops are sometimes used instead of f-stops for exposure purposes, especially for motion picture camera lenses. The practice became popular in cinematographic usage before the advent of zoom lenses, where fixed focal length lenses were calibrated to T-stops: this allowed the turret-mounted lenses to be changed without affecting the overall scene brightness. Lenses were bench-tested individually for actual light transmission and assigned T-stops accordingly (the "T" in T-stop stands for "transmission") [Eastman Kodak, "H2: Kodak Motion Picture Camera Films" (http://www.kodak.com/US/en/motion/support/h2/intro01P.shtml), November 2000 revision. Retrieved 2007-09-02.], but modern cinematographic lenses now usually tend to be factory-calibrated in T-stops. T-stops measure the amount of light transmitted through the lens in practice, and are equivalent in light transmission to the f-stop of an ideal lens with 100% transmission. Since all lenses absorb some quantity of light, the T-number of any given aperture on a lens will always be greater than the f-number. In recent years, advances in lens technology and film exposure latitude have reduced the importance of T-stop values. Remember: f-stops are for "focal ratio", T-stops are for "transmission". Sunny 16 rule An example of the use of f-numbers in photography is the "sunny 16 rule": an approximately correct exposure will be obtained on a sunny day by using an aperture of f/16 and a shutter speed close to the reciprocal of the ISO speed of the film; for example, using ISO 200 film, an aperture of f/16 and a shutter speed of 1/200 second. The f-number may then be adjusted downwards for situations with lower light. Effects on image quality Depth of field increases with f-number, as illustrated in the photos below. 
This means that photos taken with a low f-number will tend to have one subject in focus, with the rest of the image out of focus. This is frequently useful for nature photography, portraiture, and certain special effects. The depth of field of an image produced at a given f-number is dependent on other parameters as well, including the focal length, the subject distance, and the format of the film or sensor used to capture the image. Smaller formats will have a deeper field than larger formats at the same f-number for the same distance of focus and same angle of view. Therefore, reduced–depth-of-field effects, like those shown below, will require smaller f-numbers (and thus larger apertures and so potentially more complex optics) when using small-format cameras than when using larger-format cameras. Picture sharpness also varies with f-number. The optimal f-stop varies with the lens characteristics. For modern standard lenses having 6 or 7 elements, the sharpest image is often obtained around f/5.6–f/8, while for older standard lenses having only 4 elements (Tessar formula) stopping to f/11 will give the sharpest image. The reason the sharpness is best at medium f-numbers is that the sharpness at high f-numbers is constrained by diffraction [Michael John Langford, "Basic Photography", Focal Press, 2000, ISBN 0240515927], whereas at low f-numbers limitations of the lens design known as aberrations will dominate. The larger number of elements in modern lenses allows the designer to compensate for aberrations, allowing the lens to give better pictures at lower f-stops. Light falloff is also sensitive to f-stop. Many wide-angle lenses will show a significant light falloff (vignetting) at the edges for large apertures. To measure the actual resolution of the lens at the different f-numbers it is necessary to use a standardized measurement chart like the 1951 USAF Resolution Test Chart. 
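For a rough sense of why sharpness falls at high f-numbers, the diffraction blur can be estimated with the standard Airy-disk approximation d ≈ 2.44 · λ · N. This formula and the 550 nm wavelength choice are not from the text; they are the usual textbook estimate for the diameter of the first dark ring:

```python
def airy_disk_diameter_um(n, wavelength_nm=550):
    """Diameter of the Airy disk (to the first dark ring) at the image
    plane, d ~= 2.44 * lambda * N, in micrometres. 550 nm ~ green light."""
    return 2.44 * (wavelength_nm / 1000.0) * n
```

At f/8 this gives about 10.7 micrometres, while at f/22 it grows to about 29.5 micrometres, which is why stopping down far past the f/5.6–f/8 sweet spot softens the image even as aberrations shrink.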
Photojournalists have a saying, "f/8 and be there," meaning that being on the scene is more important than worrying about technical details. The aperture of f/8 gives adequate depth of field, assuming a 35 mm or DSLR camera, minimum shutter-speed, and ISO film rating within reasonable limits subject to lighting. Varying the f-number varies the amount of light that is let through the lens. If the f-number is too low (for the combination of shutter speed, ISO film speed, and illumination), the image may be over-exposed, resulting in blown-out highlight areas. Conversely, if the f-number is too high the image may be under-exposed, resulting in image noise and loss of shadow detail. The f-number of the human eye varies from about f/8.3 in a very brightly lit place to about f/2.1 in the dark. [Eugene Hecht, "Optics", 2nd ed., Addison Wesley, 1987, ISBN 0-201-11609-X, Sect. 5.7.1] Toxic substances and poisons (like atropine) can significantly reduce this range. Pharmaceutical products such as eye drops may also cause similar side-effects. Focal ratio in telescopes In astronomy, the f-number is commonly referred to as the "focal ratio" (or "f-ratio"). It is still defined as the focal length of an objective divided by its diameter or by the diameter of an aperture stop in the system. Even though the principles of focal ratio are always the same, the application to which the principle is put can differ. In photography the focal ratio varies the focal-plane illuminance (or optical power per unit area in the image) and is used to control variables such as depth of field. When using an optical telescope in astronomy, there is no depth of field issue, and the brightness of stellar point sources in terms of total optical power (not divided by area) is a function of absolute aperture area only, independent of focal length. 
The focal length controls the field of view of the instrument and the scale of the image that is presented at the focal plane to an eyepiece, film plate, or CCD. The f-number accurately describes the light-gathering ability of a lens only for objects an infinite distance away. [John E. Greivenkamp, "Field Guide to Geometrical Optics", SPIE Field Guides vol. FG01, SPIE, 2004, ISBN 0-8194-5294-7, p. 29.] This limitation is typically ignored in photography, where objects are usually not extremely close to the camera, relative to the distance between the lens and the film. In optical design, an alternative is often needed for systems where the object is not far from the lens. In these cases the working f-number is used. The working f-number Nw is given by Nw = 1/(2 NA) ≈ (1 − m) N, where N is the uncorrected f-number, NA is the numerical aperture of the lens, and m is the lens's magnification for an object a particular distance away. (Note that the magnification m here is negative for the common case where the image is inverted.) In photography, the working f-number is described as the f-number corrected for lens extensions by a "bellows factor". This is of particular importance in macro photography. The system of f-numbers for specifying relative apertures evolved in the late nineteenth century, in competition with several other systems of aperture notation. Origins of relative aperture In 1867, Sutton and Dawson defined "apertal ratio" as essentially the reciprocal of the modern f-number: [Thomas Sutton and George Dawson, "A Dictionary of Photography", London: Sampson Low, Son & Marston, 1867, p. 122.] In every lens there is, corresponding to a given apertal ratio (that is, the ratio of the diameter of the stop to the focal length), a certain distance of a near object from it, between which and infinity all objects are in equally good focus. For instance, in a single view lens of 6 inch focus, with a 1/4 in. 
stop (apertal ratio one-twenty-fourth), all objects situated at distances lying between 20 feet from the lens and an infinite distance from it (a fixed star, for instance) are in equally good focus. Twenty feet is therefore called the 'focal range' of the lens when this stop is used. The focal range is consequently the distance of the nearest object, which will be in good focus when the ground glass is adjusted for an extremely distant object. In the same lens, the focal range will depend upon the size of the diaphragm used, while in different lenses having the same apertal ratio the focal ranges will be greater as the focal length of the lens is increased. The terms 'apertal ratio' and 'focal range' have not come into general use, but it is very desirable that they should, in order to prevent ambiguity and circumlocution when treating of the properties of photographic lenses. John Henry Dallmeyer called the ratio the "intensity ratio" of a lens: [John Henry Dallmeyer, "Photographic Lenses: On Their Choice and Use—Special Edition Edited for American Photographers", pamphlet, 1874.] The "rapidity" of a lens depends upon the relation or ratio of the aperture to the equivalent focus. To ascertain this, divide the "equivalent focus" by the diameter of the actual "working aperture" of the lens in question; and note down the quotient as the denominator with 1, or unity, for the numerator. Thus to find the ratio of a lens of 2 inches diameter and 6 inches focus, divide the focus by the aperture, or 6 divided by 2 equals 3; i.e., 1/3 is the intensity ratio. Although he did not yet have access to Ernst Abbe's theory of stops and pupils [http://books.google.com/books?vid=OCLC01942476&id=-r6LPy-nWPwC&pg=RA3-PA537&dq=theory-of-stops], which was made widely available by Siegfried Czapski in 1893 [Siegfried Czapski, "Theorie der optischen Instrumente, nach Abbe," Breslau: Trewendt, 1893.], 
Dallmeyer knew that his "working aperture" was not the same as the physical diameter of the aperture stop: [John Henry Dallmeyer, Photographic Lenses: On Their Choice and Use—Special Edition Edited for American Photographers, pamphlet, 1874.]

It must be observed, however, that in order to find the real "intensity ratio", the diameter of the actual working aperture must be ascertained. This is easily accomplished in the case of single lenses, or for double combination lenses used with the full opening, these merely requiring the application of a pair of compasses or rule; but when double or triple-combination lenses are used, with stops inserted "between" the combinations, it is somewhat more troublesome; for it is obvious that in this case the diameter of the stop employed is not the measure of the actual pencil of light transmitted by the front combination. To ascertain this, focus for a distant object, remove the focusing screen and replace it by the collodion slide, having previously inserted a piece of cardboard in place of the prepared plate. Make a small round hole in the centre of the cardboard with a piercer, and now remove to a darkened room; apply a candle close to the hole, and observe the illuminated patch visible upon the front combination; the diameter of this circle, carefully measured, is the actual working aperture of the lens in question for the particular stop employed.

This point was further emphasized by Czapski in 1893. According to an English review of his book in 1894, "The necessity of clearly distinguishing between effective aperture and diameter of physical stop is strongly insisted upon." [Henry Crew, "Theory of Optical Instruments by Dr. Czapski," in Astronomy and Astro-physics XIII, pp. 241–243, 1894.]

J. H. Dallmeyer's son, Thomas Rudolphus Dallmeyer, inventor of the telephoto lens, followed the "intensity ratio" terminology in 1899. [Thomas R.
Dallmeyer, Telephotography: An Elementary Treatise on the Construction and Application of the Telephotographic Lens, London: Heinemann, 1899.]

Aperture numbering systems

At the same time, there were a number of aperture numbering systems designed with the goal of making exposure times vary in direct or inverse proportion with the aperture, rather than with the square of the f-number or the inverse square of the apertal ratio or intensity ratio. But these systems all involved some arbitrary constant, as opposed to the simple ratio of focal length to diameter.

For example, the "Uniform System" (U.S.) of apertures was adopted as a standard by the Photographic Society of Great Britain in the 1880s. Bothamley in 1891 said "The stops of all the best makers are now arranged according to this system." [C. H. Bothamley, Ilford Manual of Photography, London: Britannia Works Co. Ltd., 1891.] U.S. 16 is the same aperture as f/16, but apertures that are larger or smaller by a full stop use doubling or halving of the U.S. number; for example, f/11 is U.S. 8 and f/8 is U.S. 4. The exposure time required is directly proportional to the U.S. number. Eastman Kodak used U.S. stops on many of their cameras at least into the 1920s.

By 1895, Hodges contradicts Bothamley, saying that the f-number system has taken over: "This is called the f/x system, and the diaphragms of all modern lenses of good construction are so marked." [John A. Hodges, Photographic Lenses: How to Choose, and How to Use, Bradford: Percy Lund & Co., 1895.]

Piper in 1901 [C. Welborne Piper, A First Book of the Lens: An Elementary Treatise on the Action and Use of the Photographic Lens, London: Hazell, Watson, and Viney, Ltd., 1901.]
discusses five different systems of aperture marking: the old and new Zeiss systems based on actual intensity (proportional to the reciprocal square of the f-number); and the U.S., C.I., and Dallmeyer systems based on exposure (proportional to the square of the f-number). He calls the f-number the "ratio number," "aperture ratio number," and "ratio aperture." He calls expressions like f/8 the "fractional diameter" of the aperture, even though it is literally equal to the "absolute diameter," which he distinguishes as a different term. He also sometimes uses expressions like "an aperture of f 8" without the division indicated by the slash.

Beck and Andrews in 1902 talk about the Royal Photographic Society standard of f/4, f/5.6, f/8, f/11.3, etc. [Conrad Beck and Herbert Andrews, Photographic Lenses: A Simple Treatise, second edition, London: R. & J. Beck Ltd., c. 1902.] The R.P.S. had changed their name and moved off the U.S. system some time between 1895 and 1902. Modern conventions have rounded the numbers, from f/5.66 to f/5.6, f/11.3 to f/11, and f/44.72 to f/45. This is only for ease of writing – the actual ratio of aperture size to focal length is still based on the doubling or halving of the amount of light getting through the lens.

By 1920, the term "f-number" appeared in books both as "F number" and "f/number". In modern publications, the forms "f-number" and "f number" are more common, though the earlier forms, as well as "F-number", are still found in a few books; not uncommonly, the initial lower-case "f" in "f-number" or "f/number" is set as the hooked italic f, as in f/#. [http://books.google.com/books?as_q=lens+aperture&num=50&as_epq=f-number Google search]

Notations for f-numbers were also quite variable in the early part of the twentieth century.
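The arithmetic relationships behind these competing systems — the apertal/intensity ratio, the Uniform System numbers, the √2 full-stop progression with its conventional rounding, and the working f-number defined earlier — can be sketched in a few lines of Python. This is a minimal illustration; the function names are mine, not from any of the sources cited:

```python
import math

def f_number(focal_length, diameter):
    """Sutton & Dawson's 'apertal ratio' denominator and Dallmeyer's
    'intensity ratio' denominator: N = focal length / aperture diameter."""
    return focal_length / diameter

def us_number(n):
    """Uniform System value: U.S. 16 == f/16, and each full stop doubles
    or halves it, so U.S. = N**2 / 16. Exposure time is proportional to it."""
    return n ** 2 / 16

def working_f_number(n, m):
    """Working f-number Nw = (1 - m) * N, with magnification m negative
    for the usual inverted image (so Nw = (1 + |m|) * N)."""
    return (1 - m) * n

# Full stops: each halves the light, so N grows by a factor of sqrt(2).
exact = [math.sqrt(2) ** k for k in range(14)]
# Conventional rounded markings for the same stops:
marked = [1, 1.4, 2, 2.8, 4, 5.6, 8, 11, 16, 22, 32, 45, 64, 90]

print(f_number(6, 0.25))        # Sutton's example: 6 in focus, 1/4 in stop -> 24.0
print(us_number(8))             # f/8 is U.S. 4
print(working_f_number(4, -1))  # f/4 at 1:1 magnification acts like f/8
```

Comparing `exact` with `marked` shows why f/11.3 and f/5.66 were rounded: the marked values are labels of convenience, while exposure arithmetic still follows the exact √2 progression.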
They were sometimes written with a capital F, [Herbert Eugene Ives, Airplane Photography, Philadelphia: J. B. Lippincott, 1920, p. 61.] sometimes with a dot (period) instead of a slash, [Charles Edward Kenneth Mees, The Fundamentals of Photography, Eastman Kodak, 1920, p. 28.] and sometimes set as a vertical fraction. [Louis Derr, Photography for Students of Physics and Chemistry, London: Macmillan, 1906, p. 83.]

The 1961 ASA standard PH2.12-1961, "American Standard General-Purpose Photographic Exposure Meters (Photoelectric Type)", specifies that "The symbol for relative apertures shall be f/ or f : followed by the effective f-number." Note that the standard shows the hooked italic f not only in the symbol, but also in the term "f-number", which today is more commonly set in an ordinary non-italic face.

See also: Circle of confusion; Depth of field

External links:
* [http://tangentsoft.net/fcalc/help/FNumber.htm f Number Arithmetic]
* [http://www.largeformatphotography.info/fstop.html Large format photography—how to select the f-stop]
Martin Luther King – Civil Disobedience Essay

From ancient times to the Enlightenment period, the rule of government and God hardly came into question; both were accepted as ultimate powers that alone could dictate the lives of the masses. However, with greater scientific discovery and evolving political philosophy, thinkers began to question the nature of laws, fairness, and justice. Social contract theories began to appear, stating that humans had an unwritten contract with fellow members of society to follow laws and values to create a more stable society. Men like John Locke and Thomas Hobbes philosophized about the rights of humans in relation to governments and natural laws, and the duty that each party has to the other, claiming that citizens must obey the laws of their government. By the time Martin Luther King Jr. fought to ensure equality for African Americans, centuries of laws, governments, rebellions, and philosophies gave him not only the precedents he needed, but also the inspiration. Though his peaceful resistance may seem to oppose Locke and Hobbes, his greater message echoes their sentiments in that citizens and governments share an equal responsibility to each other.

Thomas Hobbes contended that people in a state of nature ceded their individual rights to a strong sovereign in return for his protection, and argued that a “dissolute condition of masterlesse men, without subjection to Lawes, and a coercive Power to tye their hands from rapine, and revenge would make impossible all of the basic security upon which comfortable, sociable, civilized life depends” (Hobbes). Differing slightly but along the same line of thought is Locke, who declared that the state of Nature is the source of all rights and unity, and that the purpose of the state is to protect, and not hold back, the state of Nature.
As the Stanford Encyclopedia of Philosophy summarizes Locke's view, “From this natural state of freedom and independence, Locke stresses individual consent as the mechanism by which political societies are created and individuals join those societies” (Tuckness). In both Hobbes and Locke, while the duty of citizens is to follow the rule of law in exchange for the comforts of civilization, it is the consent of the citizens that makes this contract possible. In Martin Luther King’s belief, it is when the government becomes unjust that citizens must withdraw their consent and resist.

Dr. King expressed his conception of the legitimacy of the law clearly in his “Letter from Birmingham Jail,” written as a response to local clergy who denounced his tactics of civil disobedience. As he served the penalty for his equal rights demonstrations, which he believed to be protected by the Constitution and God, Dr. King drove home the point that he and his followers demanded only those freedoms that had been promised to them as citizens of the United States. “We have waited for more than 340 years for our constitutional and God-given rights,” he wrote, drawing a parallel between the deeply religious beliefs of his audience and the ongoing fight to have justice represented in civil society, bringing the law into concordance with moral right.

The religious leaders who opposed the demonstrations in Birmingham – representing various denominations united in disapproval – stated their belief that protestors should not break local laws while demonstrating for their cause. Dr. King replied to this charge with a powerful question about justice: “One may well ask, ‘How can you advocate breaking some laws and obeying others?’ The answer lies in the fact that there are two types of laws: just and unjust. One has not only a legal but a moral responsibility to obey just laws. Conversely, one has a moral responsibility to disobey unjust laws” (King).
He supports this in later paragraphs by suggesting that the Constitution represents a just law that has been unevenly applied, allowing the unjust laws of segregation to remain in force and leaving a blot on the absolute fairness of our founding principles. Dr. King’s conception of morality was founded in his religious belief and reflected in his view of law: “A just law is a man-made code that squares with the moral law or the law of God. An unjust law is a code that is out of harmony with the moral law. To put it in the terms of St. Thomas Aquinas: An unjust law is a human law that is not rooted in eternal law and natural law” (King). To Dr. King, a just law is rooted in religious morality, particularly Christian morality. The highest authority is God, and to understand the justice of a particular law, one must understand the laws of God, eternal and natural laws. To decipher these laws, Dr. King states: “Any law that uplifts human personality is just. Any law that degrades human personality is unjust” (King). By his logic, the degrading nature of segregation makes it an unjust law, and Dr. King goes on to again reference the theologian St. Thomas Aquinas, who stated that “an unjust law is no law at all.” This is the basis for Dr. King’s civil disobedience and why some laws must be purposefully ignored. Dr. King uses his own example to illustrate how sometimes laws are twisted to violate natural law. “Sometimes a law is just on its face and unjust in its application. For instance, I have been arrested on a charge of parading without a permit. Now, there is nothing wrong in having an ordinance which requires a permit for a parade” (King). Then he offers his logical conclusion: “But such an ordinance becomes unjust when it is used to maintain segregation and to deny citizens the First-Amendment privilege of peaceful assembly and protest” (King).
Believing that the Constitution represents God’s laws, he shows how the letter of the law can be manipulated for unjust reasons. He offers the solution of disobeying these laws as the only way to combat them. Dr. King’s bedrock principle was nonviolent social change. He and his followers were beaten, blasted with fire-hoses, and jailed without ever striking a retaliatory blow. Their willingness to suffer the consequences of their actions showed an admirable respect for the rule of law in America. The letter states, “One who breaks an unjust law must do so openly, lovingly, and with a willingness to accept the penalty. I submit that an individual who breaks a law that conscience tells him is unjust and who willingly accepts the penalty of imprisonment in order to arouse the conscience of the community over its injustice, is in reality expressing the highest respect for law” (King). By paying the price for civil disobedience, the Birmingham protestors were able to take the moral high ground from those who hid behind the strict interpretation of the law. Dr. King exhibited how by disobeying the unjust law, they were following a higher law, one based on morality and God-given values of fairness and equality. By using faith and the belief that an unjust government is no longer entitled to the citizens’ consent, King is merely making the point that governments sometimes violate the social contract first and citizens are forced to react. While Dr. King is sure to warn against anarchical views of his statement to disobey laws, his argument against following unjust laws is sound and easy to understand and has its roots centuries earlier in the words of Hobbes and Locke.

Works Cited

King, Martin Luther. “Letter from a Birmingham Jail [King Jr.].” African Studies Center – University of Pennsylvania. 16 Apr 1963. 5 Jul 2008. <http://www.africa.upenn.edu/Articles_Gen/Letter_Birmingham.html>.
Lloyd, Sharon A. “Hobbes’s Moral and Political Philosophy.” Stanford Encyclopedia of Philosophy.
12 Feb 2002. 5 Jul 2008. <http://plato.stanford.edu/entries/hobbes-moral/>.
Tuckness, Alex. “Locke’s Political Philosophy.” Stanford Encyclopedia of Philosophy. 2005. 5 Jul 2008. <http://plato.stanford.edu/entries/locke-political/>.
|Infobox on Phosphine|
|Example of Phosphine|
|Stowage factor (in m3/t)||-|
|Humidity / moisture||-|
|Risk factors||See text|

Description / Application

Phosphine is the compound with the chemical formula PH3. It is a colourless, flammable, toxic gas. Pure phosphine is odourless, but technical grade samples have a highly unpleasant odour like garlic or rotting fish, due to the presence of substituted phosphines and diphosphane (P2H4). With traces of P2H4 present, PH3 is spontaneously flammable in air, burning with a luminous flame. Phosphines are also a group of organophosphorus compounds with the formula R3P (R = organic derivative). Organophosphines are important in catalysis, where they complex to various metal ions; complexes derived from a chiral phosphine can catalyze reactions to give chiral, enantioenriched products. Phosphine is soluble in alcohol, ether, and cuprous chloride solution; slightly soluble in cold water; insoluble in hot water. Phosphine is used as a pesticide.

Phosphine gas has long been recognized as highly toxic. However, it is not widely known that it is also a potentially flammable gas, with a lower flammability limit of 1.8% by volume in air. In the event that an air/phosphine mixture – in which the phosphine concentration exceeds its flammable limit – is ignited in a confined space, it is highly probable that an explosion will occur. Phosphine gas is generated from aluminium phosphide tablets by reaction of the aluminium phosphide with moisture in the air. This process, in addition to liberating phosphine, also produces aluminium oxide as a by-product. Additionally, small quantities of another gas known as diphosphine are also sometimes produced during this reaction. Unlike phosphine, diphosphine is spontaneously flammable, reacting instantly with oxygen in the air. Production of diphosphine occurs in a similar way to that generating phosphine, i.e.
by reaction between aluminium phosphide and moisture, but in this case the aluminium phosphide tablets contain an imbalance between the aluminium and phosphorus, with an excess of phosphorus compared to aluminium. Such a situation may arise during production of the tablets if an excess of phosphorus is inadvertently used during preparation. Although not definitely proven, it is likely that potentially explosive mixtures of air and phosphine are frequently encountered during the first 12 to 24 hours of phosphine fumigations, when the phosphine concentration in the upper reaches of the hold is at its peak. The resulting high concentrations of phosphine then disperse by diffusion, with the gas diffusing into the less accessible portions of the cargo. Aluminium phosphide tablets are routinely used in fumigation and a very large number of shipments are fumigated annually without problems. Incidences of explosions are therefore very rare and, as far as is known, fumigant explosions have only been encountered when companies have used cheaper brands of aluminium phosphide tablets produced in developing countries. Such tablets could be envisaged as producing localised high concentrations of diphosphine, leading to a very rapid reaction with oxygen and to ignition.

Shipment / Storage

The most widely used fumigant for in-transit fumigation is phosphine (PH3). The gas is normally generated from aluminium phosphide or sometimes magnesium phosphide, but can also be applied direct from cylinders. Phosphine is only fully effective if a lethal concentration is maintained for a period of time that can be as little as 3 days or as much as 3 weeks. The actual time needed will vary according to cargo temperatures, the insect species that may be present, and the system of fumigation. This is the reason why fumigation with phosphine is almost always carried out during the voyage (in transit), so that the voyage time can be used to ensure a fully effective treatment.
Phosphine gas is denser than air and hence may collect in low-lying areas. It can form explosive mixtures with air and can also self-ignite. When phosphine burns it produces a dense white cloud of phosphorus pentoxide – a severe respiratory irritant. Phosphine can be absorbed into the body by inhalation. Direct contact with phosphine liquid – although unlikely to occur – may cause frostbite, like other cryogenic liquids. The main target organ of phosphine gas is the respiratory tract. According to the 2009 U.S. National Institute for Occupational Safety and Health (NIOSH) pocket guide and U.S. Occupational Safety and Health Administration (OSHA) regulation, the 8-hour average respiratory exposure should not exceed 0.3 ppm. NIOSH recommends that short-term respiratory exposure to phosphine gas should not exceed 1 ppm. The Immediately Dangerous to Life or Health (IDLH) level is 50 ppm. Overexposure to phosphine gas causes nausea, vomiting, abdominal pain, diarrhoea, thirst, chest tightness, dyspnea (breathing difficulty), muscle pain, chills, stupor or syncope, and pulmonary oedema. Phosphine has been reported to have the odour of decaying fish or garlic at concentrations below 0.3 ppm. The smell is normally restricted to laboratory areas or phosphine processing, since the smell comes from the way the phosphine is extracted from the environment. However, it may occur elsewhere, such as in industrial waste landfills. Exposure to higher concentrations may cause olfactory fatigue.

Note: For overseas carriage aspects of chemicals, readers are recommended to acquire or have access to a good chemical dictionary and a copy of the International Maritime Dangerous Goods (IMDG) Code, issued by the International Maritime Organisation. Also consult the applicable MSDS sheet.

Risk factors

Flammable, toxic, colourless gas with a garlic odour. Ignites spontaneously in air. Heavier than air. Irritating to skin, eyes and mucous membranes.
Literacy is taught at Chiddingstone School both explicitly, through direct lessons, and underpinned by a broad range of foundation subjects taught through the Cornerstones Creative Curriculum. This provides a context which gives learning meaning and enables the children to engage fully with the subject matter. Literacy reinforces all aspects of learning, and we believe it is a vital medium enabling our children to access other areas of knowledge; we offer rich experiences in reading, writing, speaking and listening, and drama. The children are actively encouraged to enjoy reading and writing, choosing exciting and challenging texts for study, into which we also interweave the teaching of spelling, grammar and handwriting. As with mathematics, our mornings are very structured, with well organised timetables and high quality teaching and learning. Speaking and listening opportunities are a high priority, with children talking about their learning and explaining their thinking, and with frequent opportunities to write for a range of audiences. We run a number of intervention support and extension programmes to support all pupils in achieving age-related expectations and to offer challenges to our more able children. Phonics lessons, which are matched to the understanding of each child, are delivered throughout the school; in addition, children are involved in individual and group reading activities. In guided work, your child will read in a group working at the same level, with the teacher focussing on a specific aim. This is carefully planned, differentiated to be at the correct level for your child, and builds skills over time. From Reception through to Year 2, reading is taught using the synthetic phonics programme 'Letters and Sounds' (http://www.letters-and-sounds.com/), supported with resources from the Read Write Inc. scheme. Beyond this, we use the spellings from the National Curriculum Programme of Study, using a 'Five A Day' challenge.
Initially, children are encouraged to write individual letters and attempt spellings phonetically. From Year 1, more formal spelling begins by the end of the year, and from then onwards the children are given spellings to learn which follow the statutory requirements of the curriculum. The use of dictionaries is taught and encouraged. In addition, we use the 'Oxford Reading Tree' scheme (http://www.oup.com/oxed/primary/oxfordreadingtree/introduction/), which supports the 'Letters and Sounds' phonics programme. We have a well-equipped library and use the Junior Librarian system, which means children become used to borrowing and returning books, accessing a broad spectrum in line with their varied interests. Every child has their own library card, and books must be scanned in order to be checked out. Junior Librarian builds a record over time of a child's reading habits and can include book reviews and comments for readers to share. Click on the links to find out more about how you can support your child's literacy development. At Key Stage 1 and Key Stage 2 the school follows the National Curriculum English Programmes of Study. These can be viewed here:
Learning About Money Grades 1-3
- Out Of Stock -
Product Code: -
ON THE MARK PRESS
Currency & Literacy Combined in One Book! Help your students learn the concept of money. Includes 21 mathematics activities and 10 money illustrations. Worksheets list the skill practised at the bottom of each page. Additional activities in reading comprehension, phonics, word study, creative writing and matching complete these resources. 96 pages. Supports NCTM Standards.
“Bloody Sunday” refers to several violent incidents and confrontations in history. In Russia, it refers to the shooting of unarmed civilians by tsarist soldiers in St Petersburg in January 1905. This caused the deaths of many people and triggered the outbreak of the 1905 Revolution. The January 1905 incident began as a relatively peaceful protest by disgruntled steelworkers in St Petersburg. Angered by poor working conditions, an economic slump and the ongoing war with Japan, thousands marched on the Winter Palace to plead with Tsar Nicholas II for reform. The tsar was not present, however, and the workers were instead gunned down on the streets by panicky soldiers. At another time in Russian history, the mass killing of dissident civilians might have frightened the rest of the population into silent obedience – but the authority of the tsarist regime had been diminishing for months. Popular respect and affection for the tsar, already in decline beforehand, took a sudden turn for the worse. The ‘Bloody Sunday’ shootings triggered a wave of general strikes, peasant unrest, organised terrorism and political mobilisation that became known as the 1905 Revolution. Treatment of industrial workers The tsarist government’s economic stimulus of the late 1800s triggered a surge of industrial growth – but there were few legislative or regulatory protections for workers. By the start of the 20th century, Russia’s three million industrial workers were one of the lowest paid workforces in Europe. Low wage costs in Russia were one of the lures that attracted significant investment from countries like Britain and France. Russia’s industrial workers also laboured under appalling conditions. The average working day was 10.5 hours, six days a week, but 15-hour days were not unknown. There were no annual holidays, sick leave or superannuation. Workplace hygiene and safety were poor. 
Illness, accidents and injuries were commonplace and, with no leave or compensation available, sick or injured workers were summarily dismissed. In addition, factory owners often imposed fines for lateness, failing to meet production quotas and even trivial 'offences' like toilet breaks and talking or singing while working. These fines were imposed arbitrarily, with little or no opportunity for review. When not in factories or mines, most Russian industrial workers endured poor living conditions. Thousands of workers lived in crowded tenements or ramshackle barracks sheds owned by their employers. This accommodation was poorly constructed, overcrowded and lacked adequate heating, water or sewage facilities.

Rising unrest in the cities

This raft of grievances, along with the concentration of tens of thousands of workers in the cities, made them susceptible to revolutionary ideas. Marxist groups, who identified the industrial proletariat as the logical source of revolution, worked to organise and radicalise them. The dissatisfaction of factory workers grew steadily but became particularly acute in the final months of 1904. Not only had Russia initiated a disastrous war with Japan, its national economy had slipped into a severe recession. Production, foreign trade and government revenue all declined, compelling companies to dismiss thousands of workers and increase pressure on those they retained. This recession led to significant increases in homelessness, poverty and family breakdown. The tsarist government's only response was to ask zemstvo leaders to organise charitable relief. Food prices in the cities increased by as much as 50 per cent, but wages failed to increase correspondingly. These deteriorating conditions generated unrest and dissent. Some of this came from liberals, who renewed demands for an elected constituent assembly. Industrial workers also formed so-called 'workers' sections', which served as militant discussion groups and, later, strike committees.
Several of these sections were led by Georgy Gapon, a Ukrainian-born priest who had previously received support from the Okhrana (tsarist secret police). Gapon was an articulate and convincing public speaker and a skilled activist – but he was no obedient tool of the government. Working closely with impoverished and suffering workers, his loyalties eventually shifted to them. In late 1904, Gapon became an instrumental figure in unrest at the Putilov steel plant in St Petersburg. When factory managers sacked four workers there, the workers’ sections responded angrily and began organising strikes and demands for improvements to their rights and conditions. Somov, a Menshevik organiser, later commented on the tone of these meetings: “I found myself at several meetings [of Gapon’s workers’ sections] whose characteristic feature was that they imbued all demands with a ‘search for justice’, a general aspiration to put an end to the present impossible conditions… And although I thought that in all of these demands, workers were motivated not so much by considerations of a material character, as by purely moral aspirations to settling everything ‘according to justice’ and to force employers to atone for their past sins.” At the beginning of January 1905, Gapon drafted a petition to the tsar, seeking an improvement to working conditions – but it also called for several political reforms. More than 150,000 workers signed the petition. Death at the Winter Palace On Sunday January 9th, thousands of workers marched towards the Winter Palace in six columns, intending to present their petition to the tsar. Unbeknownst to the workers, Nicholas II was at his palace in Tsarskoye Selo, some 25 miles south of the capital. As several thousand workers approached the Winter Palace, officers called out the palace’s security garrison to guard its entry points. As the workers approached, the soldiers opened fire on the crowd. 
It is not known whether an order was given, whether soldiers fired spontaneously or whether they were reacting to aggression. The number of victims is also unclear. Government sources declared that 96 were killed, eyewitnesses suggested in excess of 200, while reports and propaganda from revolutionary groups claimed even higher figures.

Affection for tsarism shattered

The events of 'Bloody Sunday' reverberated around the world. The newspapers of London, Paris and New York were already sharply critical of Nicholas II. After 'Bloody Sunday', they condemned the Russian tsar as a murderous tyrant. Within Russia, the response was also strong. Once the empire's 'Holy Father', the tsar was given the epithet 'Bloody Nicholas'. Marxist leader Peter Struve dubbed him the 'People's Executioner'. An infuriated Gapon, who escaped the violence of January 9th, declared that "There is no God any longer. There is no tsar!" The day after the Bloody Sunday killings, around 150,000 workers in the capital showed their disgust by refusing to work. Over the coming days, the strikes expanded around St Petersburg and to other cities in the empire, including Moscow, Odessa, Warsaw and the Baltic states. Later, these actions became more coordinated and were accompanied by demands for political reform. Over the course of 1905, tsarism would face the most dire challenge in its 300-year history.

A historian's view: "The revulsion following the slaughter soon engulfed the whole nation and there were widespread manifestations of popular grief, indignation and anger against the guilty tsar. Not just the industrial workers but the middle classes, intellectuals, professional organisations and the whole of Russian society were roused to fury. The tsar, typically, did nothing until the February assassination of his uncle finally impelled him to issue a decree authorising the election of a consultative assembly.
The announcement was sadly inadequate to respond to the popular mood and only served to spur both liberals and revolutionaries…"

1. 'Bloody Sunday' began as a protest by Russian industrial workers, who endured low wages, poor conditions and appalling treatment from employers.
2. Their conditions worsened in 1904 due to the Russo-Japanese War and an economic recession. This led to the formation of workers' sections.
3. In January 1905, workers at the Putilov plant in St Petersburg, led by the priest Georgy Gapon, drafted a petition intended for the tsar.
4. When the workers marched on the Winter Palace to deliver this petition, scores were gunned down in the forecourt by tsarist soldiers.
5. 'Bloody Sunday', as it became known, eroded respect for tsarism and contributed to a wave of general strikes, political demands and violence that became the 1905 Revolution.
Multiple-choice quiz on radioactivity:

1. When unstable nuclei undergo radioactive decay, they emit three types of radiation. Which is not one of them?
2. Radioactivity is spontaneous and random.
3. Which type of radioactive decay doesn't change the atomic number?
4. Whether or not a nuclear fission reaction becomes self-sustaining depends on the release of:
5. Particles that are helium nuclei are called:
6. What is it called when two atomic nuclei are combined?
7. Quick electron emissions are called:
8. Radioactivity that takes the form of high-energy electromagnetic waves would be:
9. Isotopes of an element have different numbers of:
10. What form of radioactive decay reduces the atomic number, or number of protons, by 2?
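Questions 3 and 10 hinge on how each decay mode changes the nucleus. As a minimal sketch (the function name is my own, not part of the quiz), tracking atomic number Z and mass number A:

```python
def decay(z, a, mode):
    """Return (Z, A) after a single decay event of the given mode."""
    if mode == "alpha":   # helium nucleus emitted: Z drops by 2, A drops by 4
        return z - 2, a - 4
    if mode == "beta-":   # a neutron becomes a proton: Z rises by 1, A unchanged
        return z + 1, a
    if mode == "gamma":   # high-energy photon only: Z and A unchanged
        return z, a
    raise ValueError(f"unknown decay mode: {mode}")

# Uranium-238 (Z=92) alpha-decays to thorium-234 (Z=90):
print(decay(92, 238, "alpha"))  # (90, 234)
```

Gamma decay leaves the atomic number unchanged (question 3), and alpha decay reduces the proton count by 2 (question 10).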
A group of fishermen off Catalina Island were treated to a rare surprise on Sunday, September 14, 2014 when a large shadowy figure surfaced near their fishing boat, the Triton. It turned out to be a whale shark, estimated to be about 25 feet in length. Whale sharks are the world's largest fish and have been measured at more than 40 feet long. They are plankton eaters, and usually seen by divers in the warmer waters of Southern Mexico and in the Sea of Cortez. In addition to the whale shark sighting, evidence of which was backed up by a video taken by those on board the fishing boat, anglers and divers have reported seeing other marine animals that are rarely encountered in SoCal waters. For example, blue marlin and mahi mahi have been reported as far north as Oxnard, hammerhead sharks have reportedly been seen off San Diego, and sperm whales have reportedly been spotted near Dana Point. So why have these creatures found their way north? Researchers believe that increased southern California sea surface temperatures–an El Niño–might be one reason why several marine species appear to have increased their northerly range. According to the website of the Scripps Institution of Oceanography at San Diego "El Niño is a phenomenon characterized by warmer sea surface water in the equatorial Eastern Pacific Ocean. An El Niño is defined by a seasonal sea surface temperature anomaly in the eastern/central equatorial Pacific greater than 0.5 °C (0.9 °F) warmer than historical average temperature. The opposite phenomenon known as La Niña is defined as a seasonal sea surface temperature anomaly 0.5 °C (0.9 °F) colder than the historical average." The Scripps website explains that the term "El Niño" has existed "since the 1890s, having been so nicknamed by South American fishermen, who acknowledged the birth of Christ by associating an onset of warming ocean water with its Christmastime appearance. 
Modern El Niño research, however, began mainly after a strong episode in 1982-83 that had gone largely undetected by scientists until it was well under way. That event caused more than $13 billion in economic loss worldwide and prompted more than a dozen countries to make large investments in El Niño research. Advances in data collection and computer modeling enabled researchers at Scripps and elsewhere to forecast a major El Niño in 1997-98 with success, though the models were relatively crude compared to current generations of models.” To learn more about the effects of El Niño and the ongoing efforts by Scripps researchers to collect and analyze oceanographic data, visit www.scripps.ucsd.edu.
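The Scripps definition quoted above reduces to a simple threshold rule. A minimal sketch (the function name is my own, not Scripps code):

```python
def classify_enso(anomaly_c):
    """Classify a seasonal equatorial Pacific SST anomaly, given in degrees C."""
    if anomaly_c > 0.5:    # more than 0.5 C warmer than the historical average
        return "El Nino"
    if anomaly_c < -0.5:   # more than 0.5 C colder than the historical average
        return "La Nina"
    return "neutral"

print(classify_enso(0.3))   # neutral
print(classify_enso(-0.9))  # La Nina
```

Operational classifications also fix a specific index region and average over several months, so this is only the thresholding step.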
Climate Change Adaptation Measures report

Healthy soils have come to the forefront in the last few years as a means to address climate mitigation and adaptation. When healthy, soils can store and sequester atmospheric carbon; when damaged, soil becomes a source of atmospheric carbon. Increased carbon in healthy soils also has many co-benefits with respect to climate adaptation, including increased biomass, increased water holding capacity and reduced runoff, and improved water quality in surface and ground water. Despite these many benefits, measures for healthy soils are absent in many climate adaptation plans. To address this gap, this report proposes nearly 50 measures addressing how soils, compost, and mulch can be incorporated into climate adaptation plans.
Children Nutrition Education

With increasing rates of childhood obesity, promoting children's health is more important than ever. Child nutrition is a key factor contributing to obesity and other health deficiencies in children. The key domain under which the topic falls is nutrition. Through the study of dieting and eating behaviours, the subject offers insight into the effects of eating behaviour on a child. Most importantly, the domain helps build focus on nutrition-based health conditions and how they can be addressed or prevented through proper child nutrition programs. The learning opportunity seeks to reach two major target groups: parents and schools. This choice is based on who is closest to the child, who provides the child's care, and where the child spends the majority of their time. Without doubt, the two settings that occupy most of a child's time are the school program and the home (Nestlé Nutrition Workshop, 2008).
- Schools: Schools commonly offer meals to children, who spend the majority of their day there. The school therefore strongly influences the nature of the meals administered to the child. In the school context, members of the administration serve as the target population, as they influence the decisions and meal routines in the school program (Bexter, 2009).
- Parents or Guardians: As the primary caregivers, parents are key determinants of the type of food the child consumes and the frequency of meals.
The general objective of the activity is to offer education on adequate nutrition for children. The target goal of this objective is to improve the quality of children's health. 
The specific objectives are as follows:
- Ensure proper dieting for children is achieved both at home and at school
- Ensure an adequate amount of food is administered to all children in the society
- Reduce nutrition-related health conditions among children through the integration of proper child nutrition
A further objective is to influence policy formulation on adequate child nutrition programs, especially in school programs, and to improve children's quality of health through the provision of adequate diets.

Introduction of Lesson

The lesson will begin with a PowerPoint slide presentation showing the severity of nutrition-based diseases among children, along with supporting statistics. This strategy is valuable for capturing the attention of the target audience and establishing the relevance of proper nutrition (Contento, 2011). Furthermore, the lesson will engage the target audience with simple questions to help gauge their understanding of adequate child nutrition. The lesson will then proceed to offer a clear definition of child nutrition and strategies to improve a child's nutrition and diet.

Description of Implementation Procedure

The child nutrition learning opportunity will be facilitated through education on the following attributes. First, building a comprehensive understanding of what child nutrition is. Second, showing the significance of proper child nutrition and the consequences of neglecting it. Third, analysing the strategies through which child nutrition can be promoted (Martoz, 2013). Additionally, the lesson will emphasise the need to promote good eating habits and awareness of nutrition-related diseases. This approach will help improve eating behaviours at home and at school. 
List of Materials Needed

The key materials that will facilitate the program include:
- A computer: to facilitate preparation and presentation of the project
- A projector: to facilitate presentation of the project

Examples of Materials to be Used

On the computer, applications like Microsoft PowerPoint will help in preparing the presentation. Additionally, worksheets are effective for accumulating data for the statistics presentations.
Sounds the Same
In this word search instructional activity, students search for and find vocabulary words that sound the same. Words may be located forward, backward, up, down, and diagonally.

Young literary analysts compare two poems by the same author. Readers look for slant rhyme, observe the beat and rhythm of each, and search for repeated vowel sounds. After re-reading, they observe the lack of punctuation and the stanza... 4th - 6th English Language Arts CCSS: Designed

"Shakespeare and Star Wars": Lesson Plan Days 13 and 14
How important are sound effects in films? In stage plays? In radio programs? To gain an understanding of the impact of these special effects, class members watch a short video spoof of the sound in a scene from Star Wars: A New... 6th - 12th English Language Arts CCSS: Designed

All About Homophones
Put the fun back in reading fundamentals with an interactive set of lessons about homophones. Learners of all ages explore the relationships between words that sound the same but have different meanings, and complete a variety of fun and... 1st - 8th English Language Arts CCSS: Adaptable

Welcome to the Color Vowel Chart
Focus English language learners' attention on word stress and phrase stress with a pronunciation chart that breaks the sounds into moving and non-moving vowel sounds. The chart tool uses colors and key words to indicate where to put the... 4th - 12th English Language Arts CCSS: Adaptable

Discriminating Phonemes 2
Some sounds sound very similar! Help your class learn how to distinguish between various sounds by following the steps outlined in this plan. The plan includes a warm-up, a teacher-led portion, and details for guided and independent... K - 8th English Language Arts CCSS: Adaptable

Understanding Shakespeare - "Blow, Blow, Thou Winter Wind"
Expose your class to Shakespearean language with a manageable excerpt from As You Like It. 
A wonderfully comprehensive plan, this resource requires pupils to use higher-level thinking skills to interact with a complex text and connect... 6th - 8th English Language Arts CCSS: Designed

Settings that Reinforce Characters
A vivid setting can bring a story to life. Challenge your writers to dive into this element as they complete worksheets in preparation for their first draft. This packet starts by giving an example of a description that simply tells who... 5th - 8th English Language Arts CCSS: Adaptable

Culture of Sound: Traditional Korean Music
Explore Korean music by listening to the sounds of Korean instruments. Students will listen to two Korean songs and identify the instruments they hear, as well as the type of instruments they are (woodwind, string, or brass). They then... 4th - 8th Visual & Performing Arts
Diet, Diabetes, and Obesity

Your dietary habits affect your risk of heart disease. Modifying your diet to control weight, blood sugar, and cholesterol levels is a critical component of a heart-healthy lifestyle. Obesity places an added workload on the heart which is directly proportional to body weight. The risk of developing heart disease increases as body weight increases. The heart requires more oxygen because it must pump harder to supply blood to a larger area. Obesity is closely linked with a poor diet (a high fat and cholesterol intake) and a sedentary lifestyle. Elevated cholesterol levels are also linked to heart disease. Cholesterol deposits on the walls of blood vessels may lead to clogged arteries. Cholesterol can be controlled by diet, weight loss, and medication. Diabetes is characterized by an elevated blood sugar level due to an inadequate secretion or absence of insulin. It is a major risk factor for atherosclerosis and is compounded in the presence of other risk factors. People with diabetes tend to have high cholesterol, triglycerides, and blood pressure. Therefore, it is important to maintain good control of this disease with proper body weight through diet, exercise, regular medical checkups, and medication if ordered by a physician.

Food Guide Pyramid

The USDA's Food Guide Pyramid makes it easy to choose a balanced diet from the five major food groups. The base of the pyramid contains the largest portion of food in the form of grains: bread, cereal, rice, and pasta. Add the recommended number of servings from the fruit, vegetable, milk, and meat groups for a balanced diet. It is important to eat a variety of foods from each group. The chart below shows examples of serving sizes. Please note: This is a general guide for people without dietary restrictions and may be modified by your physician or dietitian. 
Dietary Fats and Heart Disease

Fat: An essential nutrient used by the body for many functions including energy, thermal insulation, vital organ protection, and cell structure and function. It is recommended that less than 30% of food calories come from dietary fats, which are present in foods of both animal and vegetable origin.

Cholesterol: A waxy, fat-related compound found in the body tissues and organs of man and animal, cholesterol plays a vital role in metabolism. However, cholesterol is a key part in the creation of fatty deposits in the arterial walls, and increased blood cholesterol is a risk factor in coronary artery disease. Cholesterol is found only in foods of animal origin. It is recommended that the daily intake of dietary cholesterol be no more than 200–300 mg per day.

Low Density Lipoprotein (LDL): A type of cholesterol carrier which deposits cholesterol on the walls of blood vessels.

High Density Lipoprotein (HDL): A type of cholesterol carrier which helps remove cholesterol from the bloodstream.

Saturated Fat: Fat that is usually solid or semisolid at room temperature and can be found in animal as well as vegetable sources. A diet high in saturated fat frequently increases blood cholesterol and LDL.

Polyunsaturated Fat: Fats primarily from vegetable sources which are generally liquid at room temperature. When used in moderation, they tend not to affect blood cholesterol levels.

Monounsaturated Fat: Fats which help to lower blood cholesterol when used in place of saturated fat in the diet.

Omega-3 Fatty Acids: Fats found in fish sources which help to lower LDL cholesterol.

Reducing Dietary Cholesterol

Protein is essential for good health. But many protein-rich foods are animal products which are also high in saturated fats and cholesterol. Fatty cuts of "red" meat and organ meats are the worst offenders. 
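The 30 per cent guideline above can be checked directly from a nutrition label, since fat supplies roughly 9 calories per gram. A minimal sketch (the function name and example values are illustrative, not from this handout):

```python
FAT_KCAL_PER_GRAM = 9  # approximate energy density of dietary fat

def percent_calories_from_fat(fat_grams, total_kcal):
    """Share of a food's calories that come from fat, as a percentage."""
    return 100.0 * fat_grams * FAT_KCAL_PER_GRAM / total_kcal

# A snack with 5 g of fat and 150 total calories:
print(percent_calories_from_fat(5, 150))  # 30.0 -- right at the guideline
```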
In order to obtain the best protein with the least amount of fat and cholesterol, eat more freshwater fish, legumes (dried peas, beans, and grains), and skinless poultry. When you do eat meat, trim all visible fat before cooking and limit the portion size to three ounces per day (the size of a pack of cards). Skim milk, yogurt, and skim milk cheeses are the best dairy choices. When buying cheese (which is traditionally high in saturated fat), look for low fat varieties such as farmer's cheese, pot cheese, uncreamed cottage cheese, or part-skim ricotta. Whole grain breads, cereals, and pastas are your best choices. When buying baked products, such as muffins, read labels carefully. Many obtain half their calories from saturated fats such as palm and coconut oil. With few exceptions, fresh fruits and vegetables are naturally low in saturated fat. Palm oil, palm kernel oil, coconut oil, and hydrogenated vegetable oils are highly saturated. Many fat calories come from the fats we add to foods in the form of butter, sauces, spreads, etc. To reduce added fats, try:
- spreading sandwiches with mustard instead of mayonnaise
- switching to "light" mayonnaise (half the fat)
- buying "old-fashioned" peanut butter with no added fat and pouring off the oil instead of mixing it into the peanut spread
- using tub or pourable margarine instead of stick margarine or butter
- sautéing foods in broth or bouillon, or using oil sprays
- substituting two egg whites for the whole egg in recipes
- buying skim milk or 1% fat dairy products (which contain 28% of their calories from fat) and the leanest of meats
- using more skinless poultry, fish, low fat dairy products, tofu, and legumes for protein sources rather than red meat
- preparing small portions of meats by baking, broiling, stir frying, poaching, steaming, or microwaving; do not prepare meats with additional fats, and drain cooked burgers on paper towels
- using nonstick pans and spray
- skimming the fat off of all gravies, soups, and sauces. 
(Best to chill them first.)
- using spices and herbs instead of added fat
- using low-fat cottage cheese and yogurt, instead of sour cream and cream cheese, in dips and on potatoes
- using fruits, ice milk, or nonfat yogurt for dessert
- making complex carbohydrates (whole grain starches, fruits, and vegetables) a larger part of your meals

Food Sources of Fat in the Diet

When you must use fats, use poly- or monounsaturated vegetable oils.

Be Aware of Food Sources High in Sodium

Many cardiac patients are restricted to 2,000 mg (2 g) of sodium per day to minimize fluid retention and reduce the workload on the heart. All the sodium we need can be found naturally in balanced meals, without the use of processed foods or added salt during cooking or at the table. The following are some foods to avoid:
- 1 teaspoon table salt contains approximately 2,000 mg sodium.
- 1 tablespoon soy sauce contains 1,029 mg sodium.
- 1 teaspoon regular meat tenderizer contains approximately 1,750 mg sodium.
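Against the 2,000 mg daily restriction mentioned above, a day's intake can be tallied from the listed values. A small sketch (the helper name and menu keys are hypothetical; the sodium figures are those cited above):

```python
DAILY_LIMIT_MG = 2000  # common restriction for cardiac patients

# Approximate sodium contents cited above, in milligrams
SODIUM_MG = {
    "table salt (1 tsp)": 2000,
    "soy sauce (1 tbsp)": 1029,
    "meat tenderizer (1 tsp)": 1750,
}

def sodium_budget(items):
    """Return (total mg consumed, mg remaining under the daily limit)."""
    total = sum(SODIUM_MG[item] for item in items)
    return total, DAILY_LIMIT_MG - total

print(sodium_budget(["soy sauce (1 tbsp)"]))  # (1029, 971)
```

One tablespoon of soy sauce alone uses more than half the daily allowance, which is why the handout flags these foods.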
[Image: A fog bank in a valley of the Atacama Desert, along the coast of northern Chile. These fog banks are called "las camanchacas." Image courtesy of Darryl Scott.]

Extreme Weather in the Southeast Pacific

The weather in the Southeast Pacific region can be considered extreme, in the sense that it receives very little rainfall and is extremely dry. For example, some places in the Atacama Desert in Chile receive an average of less than one millimeter (0.04 inches) of rain a year. Sometimes this region doesn't receive any rain at all for many years in a row. This region is dry due to a number of factors. The Chilean Coastal Range and the Andes mountains block this area from receiving moisture. In addition, a large wind current called the Pacific Anticyclone blows dry air into the region. Finally, an ocean current called the Humboldt Current brings cool water up the coast of Chile, which cools the air above it and forms clouds that tend not to produce precipitation. Over the Southeast Pacific Ocean the clouds do produce drizzle, but this doesn't usually happen over the land. Instead, fog sometimes forms along the coast. People who live in this region call this fog "camanchacas," and it can support life. Even though it doesn't actually rain, algae, lichen, and some cacti are able to capture enough moisture from the fog to survive. In the village of Chungungo in northern Chile, people use nets to capture water from the fog. Garúa fog, which occurs near the coast of Chile and Peru, is a transparent mist that forces people to use their windshield wipers when driving.
Time has been studied by philosophers and scientists for 2,500 years, and thanks to this attention it is much better understood today. Nevertheless, many issues remain to be resolved. Here is a short list of the most important ones: What time actually is; whether time exists when nothing is changing; what kinds of time travel are possible; why time has an arrow; whether the future and past are real; how to analyze the metaphor of time's flow; whether future time will be infinite; whether there was time before the Big Bang; whether tensed or tenseless concepts are semantically basic; what is the proper formalism or logic for capturing the special role that time plays in reasoning; what are the neural mechanisms that account for our experience of time; whether there is a timeless nature "beyond" spacetime; and whether time should be understood only in terms of its role in the laws governing matter and force. Some of these issues will be resolved by scientific advances alone, but others require philosophical analysis. Consider this one issue upon which philosophers of time are deeply divided: What sort of ontological differences are there among the present, past and future? There are three competing theories. Presentists argue that necessarily only present objects and present experiences are real, and we conscious beings recognize this in the special "vividness" of our present experience. The dinosaurs have slipped out of reality. However, according to the growing-universe or growing-block theory, the past and present are both real, but the future is not real because the future is indeterminate or merely potential. Dinosaurs are real, but our death is not. The third and more popular theory is that there are no significant ontological differences among present, past, and future because the differences are merely subjective. This view is called "the block universe theory" or "eternalism."
Emotion and Emotional Development

How do the following two lists differ and how are they the same? Each of the examples originates in the character. How they differ is that one is a list of character traits; the second list is of emotions. Character emotional development and emotional change are each an essential scene element. They both:
- sound alike
- are related
- are often confused

Character Emotional Development

Showing how the character's traits change and/or transform over the course of the entire story defines the character emotional development plot, and the development of such is an essential element in every scene. The protagonist's emotional development takes place over time and culminates at the end of the story in a lasting transformation. The character's emotional development can be plotted from the beginning to the end of the story. In every scene, the protagonist displays a range of emotional reactions to the dramatic action. How she reacts is often reflective of the burden she carries from her backstory. These emotions, which fluctuate within each scene, are usually transitory and fleeting. Showing your character's emotions shift and change is an essential element in every scene. Plot is how the events in a story directly impact the main character. Always, in the best-written stories, characters are emotionally affected by the events of the story. In great stories, the dramatic action transforms characters. This transformation makes a story meaningful. The dramatic action demands a goal. The character emotional development demands growth.

Steps toward Transformation

Each obstacle and antagonist in the dramatic action plot provides the protagonist with opportunities to learn about herself and thus advance her character emotional development plot. Before she can transform, she first must become conscious of her strengths and weaknesses. Stories show a character changing, at the least, and transforming, at the most profound. 
Often you can accomplish this by creating a flawed character. Eventually, she will have to face that flaw and overcome it in order to achieve her ultimate goal.

Examples of Character Flaws
- Always the victim and unable to take responsibility for actions
- Control freak
- Argumentative and short-tempered
- Liar and a cheat
- Always has to be right
- Sits in judgment

The main character's flaw establishes the protagonist's level of emotional maturity and points to the potential for growth or transformation. Her flaw interferes with achieving her goal and riles up her emotions.

How to Use a Character Flaw

A character flaw is a coping mechanism that arises from the loss of an original state of perfection that occurred in the character's backstory. The character stores the emotion created by what happened in the backstory. Her flaw is designed to compensate for a perceived vulnerability, a sense of insecurity, and feeling threatened. No matter how confident, every major character demonstrates lessons learned from the wound inflicted in her backstory that is now lodged in her core belief system. In reaction, she often surrenders some or all of the authority over her own life to someone or something else.

Emotion in the Beginning

The beginning of your story establishes who the character is, flaws and all. Your readers can look back to this portrait and compare it to who she becomes as she undergoes a transformation after the crisis. The portrait also foreshadows who she will be at the climax. At the beginning of a story, the character's emotional reactions help identify and introduce her. By the end of the beginning scene, at the one-quarter mark of a story, all of the protagonist's most defining traits, positive and negative, have been introduced. This turning point scene reveals the most defining character trait of all. What emotion does your protagonist communicate about entering the middle of the story? 
This choice of hers shows her defining character trait starting out the story.

Emotions Intensify in the Middle

The emotions the protagonist managed to keep in check in the beginning of the story begin to unravel in the chaos and uncertainty of the unfamiliar world in the middle of a story. Overwhelmed and fearful, challenged and hurt, the protagonist becomes vulnerable. Most importantly, the middle deepens the audience's appreciation for the protagonist's emotional maturity, or lack thereof, through her emotional reactions as the obstacles become more difficult to surmount. All the outer events, ordeals, successes, and failures of the character constitute the dramatic action of a story and provide the catalyst for change. The farther the protagonist penetrates into the new world of the middle and the more obstacles she confronts, the more her emotional defenses break down and the bleaker and darker her emotions turn. Unable to function at a superficial level any longer, she begins to experience heightened emotions, ones that touch the core of her being. When she is prevented from reaching her goal, her emotional reaction changes subtly over time, flicking back and forth in the scene like a trapped fly. One of the defining elements of the final quarter of a story is the number of complications the protagonist is slapped with the nearer she moves toward achieving her goal. With each complication, the protagonist suffers some sort of reversal. Yet, unlike the reversals in the middle of the story, the protagonist no longer loses power even if she is physically, mentally or emotionally restrained or injured. As the character's emotional development changes, her emotional expression changes, too. What begins with the display of emotional upheaval transforms into emotional maturity. 
At the end of a story, as a result of the action on the page, the character's transformation is revealed through the change in her choices and in her emotional responses from how she acted in the beginning and in the middle. The plot of a story is about a character faced with a series of conflicts and obstacles while in pursuit of a goal, which, over time, inspire her to change her choices. In the end, she is transformed, and her ultimate transformation creates her anew with a different understanding of herself and her existence.
Ch. 8 Evolution

- allele — a variant form of a single gene
- founder effect — model for the origin of a new species from a small population that becomes isolated from its parent population
- analogous structure — body part, such as the wings of insects and birds, that serves the same function but differs in structure and development
- artificial selection — the practice of selectively breeding plants and animals with desirable traits
- chromosome — complex, double-stranded, helical molecule of DNA; specific segments of chromosomes are genes
- cladistics — type of analysis of organisms in which they are grouped together on the basis of derived, as opposed to primitive, characteristics
- cladogram — diagram showing the relationships among members of a clade, including their most recent common ancestor
- convergent evolution — origin of similar features in distantly related organisms as they adapt in comparable ways
- divergence — diversification of a species into two or more descendant species
- homologous structure — body part in different organisms with similar structure, similar relationships to other organs, and similar development, but not necessarily serving the same function
- inheritance of acquired characteristics — Jean Baptiste de Lamarck's mechanism for evolution, which holds that characteristics acquired during an individual's lifetime can be inherited by descendants
- living fossil — an existing organism that has descended from ancient ancestors with little apparent change
- macroevolution — evolutionary changes that account for the origin of new species, genera, orders, and so on
- mass extinction — greatly accelerated extinction rates resulting in a marked decrease in biodiversity
- microevolution — evolutionary changes within a species
- modern synthesis — combination of the ideas of various scientists yielding a view of evolution that includes the chromosome theory of inheritance, mutation as a source of variation, and gradualism
- mosaic evolution — concept holding that not all parts of an organism evolve at the same rate, thus yielding organisms with features retained from the ancestral condition as well as more recently evolved features
- mutation — any change in the genes of organisms; yields some of the variation on which natural selection acts
- natural selection — mechanism accounting for differential survival and reproduction among members of a species
- parallel evolution — evolution of similar features in two separate but closely related lines of descent as a result of comparable adaptations
- phyletic gradualism — concept that a species evolves gradually and continuously as it gives rise to new species
- punctuated equilibrium — concept holding that a new species evolves rapidly, in perhaps a few thousand years, then remains much the same during its several million years of existence
- theory of evolution — holds that all living things are related and that they descended with modification from organisms that lived in the past
- vestigial structure — any structure that no longer serves any function, serves only a limited function, or serves a different function
A pidgin, or pidgin language, is a simplified version of a language that develops as a means of communication between two or more groups that do not have a language in common. It is most commonly employed in situations such as trade, or where both groups speak languages different from the language of the country in which they reside (but where there is no common language between the groups). Fundamentally, a pidgin is a simplified means of linguistic communication, as it is constructed impromptu, or by convention, between individuals or groups of people. A pidgin is not the native language of any speech community, but is instead learned as a second language. A pidgin may be built from words, sounds, or body language from multiple other languages and cultures. Pidgins allow people who have no common language to communicate with each other. Pidgins usually have low prestige with respect to other languages. Not all simplified or "broken" forms of a language are pidgins. Each pidgin has its own norms of usage which must be learned for proficiency in the pidgin. The word pidgin, formerly also spelled pigion, originally referred to Chinese Pidgin English, but was later generalized to refer to any pidgin. Pidgin may also be used as the specific name for local pidgins or creoles in places where they are spoken. For example, the name of the creole language Tok Pisin derives from the English words talk pidgin. Its speakers usually refer to it simply as "pidgin" when speaking English. Likewise, Hawaiian Creole English is commonly referred to by its speakers as "Pidgin". The term jargon has also been used to refer to pidgins, and is found in the names of some pidgins, such as Chinook Jargon. In this context, linguists today use jargon to denote a particularly rudimentary type of pidgin; however, this usage is rather rare, and the term jargon most often refers to the words particular to a given profession.
Pidgins may start out as or become trade languages, such as Tok Pisin. Trade languages are often fully developed languages in their own right, such as Swahili. Trade languages tend to be "vehicular languages", while pidgins can evolve into the vernacular.

Common traits among pidgin languages
- Uncomplicated clausal structure (e.g., no embedded clauses, etc.)
- Reduction or elimination of syllable codas
- Reduction of consonant clusters or breaking them with epenthesis
- Basic vowels, such as [a, e, i, o, u]
- No tones, such as those found in West African and Asian languages
- Use of separate words to indicate tense, usually preceding the verb
- Use of reduplication to represent plurals, superlatives, and other parts of speech that represent the concept being increased
- A lack of morphophonemic variation

The initial development of a pidgin usually requires:
- prolonged, regular contact between the different language communities
- a need to communicate between them
- an absence of (or absence of widespread proficiency in) a widespread, accessible interlanguage

Keith Whinnom (in Hymes (1971)) suggests that pidgins need three languages to form, with one (the superstrate) being clearly dominant over the others. Linguists sometimes posit that pidgins can become creole languages when a generation of children learn a pidgin as their first language, a process that regularizes speaker-dependent variation in grammar. Creoles can then replace the existing mix of languages to become the native language of a community (such as the Chavacano language in the Philippines, Krio in Sierra Leone, and Tok Pisin in Papua New Guinea). However, not all pidgins become creole languages; a pidgin may die out before this phase would occur (e.g. the Mediterranean Lingua Franca).
Other scholars, such as Salikoko Mufwene, argue that pidgins and creoles arise independently under different circumstances, and that a pidgin need not always precede a creole, nor a creole evolve from a pidgin. Pidgins, according to Mufwene, emerged in trade colonies among "users who preserved their native vernaculars for their day-to-day interactions". Creoles, meanwhile, developed in settlement colonies in which speakers of a European language, often indentured servants whose language would be far from the standard in the first place, interacted extensively with non-European slaves, absorbing certain words and features from the slaves' non-European native languages, resulting in a heavily basilectalized version of the original language. These servants and slaves would come to use the creole as an everyday vernacular, rather than merely in situations in which contact with a speaker of the superstrate was necessary.

List of pidgins
The following pidgins have Wikipedia articles or sections in articles. They are only a fraction of the pidgins of the world.
- List of English-based pidgins
- Algonquian–Basque pidgin
- Arafundi-Enga Pidgin
- Barikanchi Pidgin
- Basque–Icelandic pidgin
- Bimbashi Arabic
- Broken Slavey and Loucheux Jargon
- Pidgin Delaware
- Duvle-Wano Pidgin
- Eskimo Trade Jargon
- Ewondo Populaire
- Fanagalo (Pidgin Zulu)
- Français Tirailleur
- Haflong Hindi
- Pidgin Hawaiian
- Pidgin Iha
- International Sign
- Inuktitut-English Pidgin
- KiKAR (Swahili pidgin)
- Kwoma-Manambu Pidgin
- Kyakhta Russian–Chinese Pidgin
- Kyowa-go and Xieheyu
- Labrador Inuit Pidgin French
- Maridi Arabic
- Mediterranean Lingua Franca (Sabir)
- Mekeo pidgins
- Mobilian Jargon
- Namibian Black German
- Ndyuka-Tiriyó Pidgin
- Pidgin Ngarluma
- Nootka Jargon
- Broken Oghibbeway
- Pidgin Onin
- Pequeno Português
- Plains Indian Sign Language
- Plateau Sign Language
- Settler Swahili
- Taimyr Pidgin Russian
- Tây Bồi Pidgin French
- Town Bemba
- West Greenlandic Pidgin
- Pidgin Wolof
- Yokohama Pidgin Japanese

Notes
- See Todd (1990:3)
- See Thomason & Kaufman (1988:169)
- Bakker (1994:27)
- Bakker (1994:26)
- Online Etymology Dictionary
- Crystal, David (1997), "Pidgin", The Cambridge Encyclopedia of Language (2nd ed.), Cambridge University Press
- Bakker (1994:25)
- Smith, Geoff P. Growing Up with Tok Pisin: Contact, creolization, and change in Papua New Guinea's national language. London: Battlebridge. 2002. p. 4.
- Thus the published court reports of Papua New Guinea refer to Tok Pisin as "Pidgin": see for example Schubert v The State PNGLR 66.
- Bakker (1994:25–26)
- For example: Campbell, John Howland; Schopf, J. William, eds. (1994). Creative Evolution. Life Science Series. Contributor: University of California, Los Angeles. IGPP Center for the Study of Evolution and the Origin of Life. Jones & Bartlett Learning. p. 81. ISBN 9780867209617. Retrieved 2014-04-20. [...]
the children of pidgin-speaking parents face a big problem, because pidgins are so rudimentary and inexpressive, poorly capable of expressing the nuances of a full range of human emotions and life situations. The first generation of such children spontaneously develops a pidgin into a more complex language termed a creole. [...] [T]he evolution of a pidgin into a creole is unconscious and spontaneous.
- "Salikoko Mufwene: "Pidgin and Creole Languages"". Humanities.uchicago.edu. Retrieved 2010-04-24.

References
- Bakker, Peter (1994), "Pidgins", in Jacques Arends; Pieter Muysken; Norval Smith, Pidgins and Creoles: An Introduction, John Benjamins, pp. 26–39
- Hymes, Dell (1971), Pidginization and Creolization of Languages, Cambridge University Press, ISBN 0-521-07833-4
- McWhorter, John (2002), The Power of Babel: The Natural History of Language, Random House Group, ISBN 0-06-052085-X
- Sebba, Mark (1997), Contact Languages: Pidgins and Creoles, MacMillan, ISBN 0-333-63024-6
- Thomason, Sarah G.; Kaufman, Terrence (1988), Language contact, creolization, and genetic linguistics, Berkeley: University of California Press, ISBN 0-520-07893-4
- Todd, Loreto (1990), Pidgins and Creoles, Routledge, ISBN 0-415-05311-0
- Holm, John (2000), An Introduction to Pidgins and Creoles, Cambridge University Press
Objective: Setting. Where a play, book or story takes place often affects the characters' personalities and the possibilities for plot. Setting is usually a carefully considered item in an author's set-up for fiction. The objective of this lesson is to look at setting.

1. Homework. Students will rewrite the basic plot of Flipped and set it in another century, explaining how the different setting changes the work. For example, what would be different if it were set in the 1700s?

2. Class discussion. Could Flipped have been set anywhere? How does the setting make this a unique story? How do the people in Bryce's small town differ from the students' hometown? How did the setting affect the characters? The plot? The themes? Why is the setting important?

3. Group work. In groups students will research a setting that might be similar and discuss the ways in which Bryce's life would...
Martin Luther King Jr. vs. the Declaration of Independence vs. the Gettysburg Address

In this assignment I have chosen to analyse three texts: "I Have a Dream" by Martin Luther King Jr., the Bliss version of the Gettysburg Address by Abraham Lincoln, and the American Declaration of Independence. I chose to compare the two speeches "I Have a Dream" and the Gettysburg Address with the American Declaration of Independence because they are an important part of American history. Both speeches refer to the Declaration of Independence and contain similarities that connect them. In the Gettysburg Address, Abraham Lincoln explains that many years ago the forefathers of the American nation came together to write the Declaration of Independence, and he reminds the people that the nation must remember their forefathers and their deeds, and that the nation must rebuild their country after the Civil War, lest their forefathers have died in vain. In Martin Luther King Jr.'s speech, "I Have a Dream", he also reminds America of something, yet this time he reminds them that the forefathers of America stated in the Declaration of Independence that all men are created equal. He then proceeds to explain that this equality was not true in the America of his time: black Americans were not free at all. Finally, he explains his dream of freedom for everyone to those gathered to hear the speech, a dream where everyone is, in fact, treated as equal. Finally, in the Declaration of Independence, the foundation for the first two speeches is laid, along with the foundations of America: that, no matter who you are, you have some basic rights that include "Life, liberty and the pursuit of happiness". Also, it is stated very early in the document that "all men are created equal." These statements do not come alone. Afterwards follows a long list of crimes of King George the Third of Great Britain. Finally, the declaration states the...
(1869–1930) Austrian chemist Pregl, who was born at Laibach (now Ljubljana in Slovenia), was the son of a bank official. He graduated in medicine from Graz (1893), where he became an assistant in physiological chemistry in 1899. In 1910 he became head of the chemistry department at Innsbruck, remaining there until 1913, when he returned to Graz to become director of the Medico-Chemical Institute. Pregl began research on bile acids in about 1904 but soon found that he could only obtain tiny amounts. This led him to pioneer techniques of microanalysis. Justus von Liebig had needed about 1 gram of a substance before he could make an accurate analysis; through his new techniques Pregl was capable of working with 2.5 milligrams. This was achieved by the careful scaling down of his analytic equipment and the design of a new balance, produced in collaboration with the instrument maker W. Kuhlmann of Hamburg. With this balance he was capable of weighing up to 20 grams to an accuracy of 0.001 milligram. The techniques developed by Pregl are of immense importance in organic chemistry, and he was awarded the Nobel Prize for chemistry in 1923 for this work.
New Moore Island had been sinking for 30 years. The island itself, known as New Moore, is no more; in fact, it is now completely submerged under water. Scientists used satellite imagery to prove their point, and sea patrols have confirmed that New Moore Island has sunk. Global warming experts say it is because of climate change; whatever the cause, the island sank dramatically during the past decade. Global warming experts claim that the sea level is rising in accordance with rising temperatures, though it is also possible that the island itself was sinking into mud. The island was about two square miles, and it could be the first of many islands to disappear: reports say that around 10 other islands are at risk of being submerged by rising waters.

Bangladesh and India Fought Over the Land for Many Years

The land is named South Talpatti Island in Bangladesh; India called the uninhabited island "New Moore Island". The land emerged in the Bay of Bengal in the aftermath of the Bhola cyclone in 1970. Its sovereignty was disputed between Bangladesh and India for years until the island became submerged. There was never any permanent settlement on the land. The island was first discovered by an American satellite in 1974; the satellite image showed it to have an area of 27,000 square feet. Later, various remote-sensing surveys showed that the island had expanded gradually to an area of about 110,000 square feet at low tide. The highest elevation never exceeded two meters above sea level. The island was claimed by both Bangladesh and India, but neither country established any permanent settlement because of the island's geological instability. India reportedly hoisted the Indian flag on the territory in 1981 and established a temporary base of Border Security Forces.
According to the Radcliffe Award, the "mid-channel flow" principle, or Thalweg Doctrine, is generally recognized as defining the international boundary on river borders between the two countries. The middle line of the mid-channel flow of the Hariabhanga River established the original boundary between the states. The boundary was never conclusively determined, however, because there was no conclusive evidence as to which side of the territory the river's main channel flowed; it may have changed over time given the shifting silt of the Sunderbans delta. A detailed 1981 survey favored India, but the Bangladeshi government claimed that the data clearly showed that the land belonged to Bangladesh. The location of the channel in 1947 may be more relevant than its later location, since river channels often shift their locations over time.
First, briefly explain what primarily distinguishes Renaissance art in general from that of the prior Medieval period. Then trace the evolution and development of art through the periods of the Early Renaissance to the High Renaissance. Compare and contrast the work of an Italian Renaissance artist with a work done by a northern European Renaissance artist. Include a discussion of the different concerns and heritages of the Italian and Northern Renaissance artists and how these resulted in different characteristics in the artwork of each region. Please use specific examples of art in your response.

This solution provides an overview of Renaissance art: what distinguishes it from the Medieval period, as well as its evolution. It includes examples from the Italian Renaissance and the Northern European Renaissance.
THE CHEMISTRY OF FIREWORKS Kaboom! Oooh! Aahh! The golden sparkles explode and float down the darkened sky, thrilling everyone watching below. Every Fourth of July, millions of Americans go to local parks to watch exciting fireworks presentations. Fireworks have been a familiar part of celebrations for centuries. For most of that time, the designing of fireworks was a craft. Only recently have people begun to try and understand the science involved in creating the spectacular fireworks displays we all enjoy. What are the component parts of fireworks? What chemical compounds cause fireworks to explode? What chemical compounds are responsible for the colors of fireworks? In this WebQuest you will explore the chemistry of fireworks and answer some of these questions. Your job in this WebQuest is to discover the component parts of fireworks, and to identify the chemical compounds that are responsible for the brilliant colors that light up the sky as fireworks explode. You will explore the history of fireworks and find out when the first fireworks were invented. You will learn about firework design and how fireworks are built. You will also find out what chemical compounds are responsible for the colors seen in fireworks. Finally, you will answer a set of questions about fireworks to demonstrate what you have learned about the chemistry of fireworks. Look at the web sites given here to find the information that will enable you to answer questions about the chemistry of fireworks. · A History of Fireworks. (http://library.thinkquest.org/15384/history/index.htm) At this site, you can learn about the history of fireworks. Where did fireworks begin? · Professional Colors. (http://www.allsands.com/Science/howtomakefire_ajx_gn.htm) Visit this site to learn how professionals create the colors that appear during the vibrant displays of fireworks. · Lights and Colours. 
(http://chemistry.about.com/library/weekly/aa062701a.htm) Go to this site to see what chemicals create the colors of fireworks. Before the 19th century, only the colors white, yellow, and orange were possible in fireworks. When did the colors red, green, blue, and purple become possible in fireworks?

· How Fireworks are Made. (http://www.howstuffworks.com/fireworks1.htm) At this site you can find out what chemical compounds create the colors of modern fireworks.

· NOVA Online: Kaboom! (http://www.pbs.org/wgbh/nova/kaboom/anatomy.html) Go to this site for a diagram of the parts of a modern firework. Each part of the diagram has an active label. Click on each label to learn more about that part of the firework.

· The Chemistry of Fireworks. (http://library.thinkquest.org/15384/chem/index.htm) Visit this site to learn more about the chemical reactions in fireworks. Find out what two types of binders are used in fireworks today.

Read through the following set of questions before you begin your Internet research. As you explore each site, look for answers to the questions.

Questions about the Chemistry of Fireworks
1. What exactly is a firework?
2. Where and when were the first fireworks invented?
3. Who were the first Europeans to master fireworks?
4. What type of simple chemical reaction occurs in fireworks?
5. What are the components of black powder? What are the ratios of these components?
6. What three processes cause fireworks to emit light?
7. What types of elements are responsible for the colors of fireworks?
8. What is responsible for the whistling sound that often accompanies fireworks?
9. What are the component parts of modern fireworks? What does each part do?
10. Create a table that lists the chemical compounds that create the following colors of fireworks: blue, turquoise, yellow, pink, red, brilliant red, green, bright green, purple, white. You may use chemical formulas rather than common names of compounds in your table.
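As a starting point for question 10, a few of the most commonly cited metal-salt emitters can be tabulated in a small lookup. This is an illustrative sketch, not the answer key for the WebQuest: it covers only some of the requested colors, and the example compounds should be verified against the sites listed above.

```python
# Illustrative only: commonly cited metal-salt emitters and the firework
# colors they are associated with in standard pyrotechnics references.
FLAME_COLORS = {
    "red": "strontium salts (e.g., strontium carbonate, SrCO3)",
    "orange": "calcium salts (e.g., calcium chloride, CaCl2)",
    "yellow": "sodium compounds (e.g., sodium nitrate, NaNO3)",
    "green": "barium compounds (e.g., barium chloride, BaCl2)",
    "blue": "copper compounds (e.g., copper chloride, CuCl)",
    "purple": "a mix of strontium (red) and copper (blue) compounds",
    "white": "burning metals such as magnesium, aluminum, or titanium",
}

def emitter_for(color: str) -> str:
    """Look up the compound family commonly associated with a color."""
    return FLAME_COLORS.get(color.lower(), "unknown -- check the sources above")

if __name__ == "__main__":
    for color, compound in FLAME_COLORS.items():
        print(f"{color:>7}: {compound}")
```

Filling in the remaining colors from question 10 (turquoise, pink, brilliant red, bright green) is left as part of the research exercise.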
Modification of www.glencoe.com/sec/science/webquest/content/fireworks.shtml (3/21/2004)
Swallowing & Belching

Swallowing is referred to in medical terms as deglutition. Swallowing is actually a reflex which is initiated when a food or beverage is pushed backwards into the back of the mouth (pharynx) by the tongue. The food or beverage is then automatically (involuntarily) pushed down the tube to the stomach (esophagus). During swallowing the breathing passage (glottis) in the back of the mouth is closed as a part of the reflex. Thus, breathing is temporarily stopped while swallowing.

A belch is the expulsion through the mouth of gas from the stomach or esophagus. This is accomplished by relaxing the esophageal sphincters (upper and/or lower) and increasing abdominal pressure. The glottis (opening to the trachea and lungs) is closed.

Last Editorial Review: 12/31/1997
Snakes are a vital component of our ecosystems, and the northern water snake is no different. Water snakes primarily keep rodent populations from exploding. They also feed on small and diseased fish that, if left unchecked, could create overpopulation problems in the ecosystem. Snakes occupy a mid-level part of the ecosystem, meaning they are also hunted: raccoons, foxes, owls, and eagles are some of the animals that feed on the northern water snake. The northern water snake can defend itself by escaping to the water, where it can stay submerged for long periods of time. The northern water snake is a difficult species to identify. Its color patterns vary, and oftentimes the snakes are covered with earthy debris. The northern water snake is commonly mistaken for the water moccasin, or cottonmouth. The water moccasin is venomous, whereas the northern water snake is harmless. Northern water snakes grow to be about 24 to 55 inches long. They are medium-sized to large snakes with heavy bodies. The northern water snake, like a pit viper, gives live birth, meaning the eggs are incubated inside the mother's body and hatch as live young. A female can bear 9 to 45 young in a year.
Ultraviolet radiation (biology)
Giese, Arthur C. Department of Biological Sciences, Stanford University, Stanford, California. Last reviewed: July 2018

- Photobiological effects
- Action spectra
- Effects on the skin
- Clinical use
- Links to Primary Literature
- Additional Readings

Electromagnetic radiation in the wavelength range of 10 to 400 nanometers that affects biological organisms. The ultraviolet (UV) portion of the electromagnetic spectrum includes all radiations from 10 to 400 nanometers (nm); in addition, radiations as low as 4 nm are sometimes included in this range. Radiations shorter than 200 nm are absorbed by most substances, even by air; therefore, they are technically difficult to use in biological experimentation. Radiations between 200 and 320 nm are selectively absorbed by organic matter, and they produce the best-known effects of ultraviolet radiations in organisms. Radiations between 320 and 400 nm are relatively little absorbed and are less active on organisms. In general, though, ultraviolet radiation in sunlight at the surface of the Earth is restricted to the span from about 290 to 400 nm as a result of the protective effects of the Earth's ozone layer. Notably, and in contrast to x-rays, ultraviolet radiations do not penetrate far into larger organisms; thus, the effects that they produce are surface effects, such as sunburn and development of D vitamins from precursors present in skin or fur. Moreover, excessive exposure to the ultraviolet rays in sunlight can cause skin cancer (Fig. 1). See also: Cancer; Electromagnetic radiation; Radiation biology; Radiation injury (biology); Stratospheric ozone; Ultraviolet radiation; Vitamin D

The content above is only an excerpt.
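The wavelength bands described above can be summarized in a small sketch. The cutoffs are taken directly from the excerpt; the function name and band descriptions are illustrative, not part of the original article.

```python
# A minimal sketch of the UV wavelength bands described in the excerpt.
def uv_band(wavelength_nm: float) -> str:
    """Classify a UV wavelength (nm) by the biological behavior described."""
    if wavelength_nm < 10 or wavelength_nm > 400:
        return "outside the UV range (10-400 nm)"
    if wavelength_nm < 200:
        return "absorbed by most substances, even air; hard to use experimentally"
    if wavelength_nm <= 320:
        return "selectively absorbed by organic matter; strongest biological effects"
    return "relatively little absorbed; less active on organisms"
```

For example, `uv_band(260)` falls in the 200-320 nm band that produces the best-known biological effects, while `uv_band(350)` falls in the weakly absorbed 320-400 nm band.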
Bubbles large and small signal changes in the Arctic and in Earth's atmosphere.

by Jane Beitler November 12, 2012

It was nearly winter in Greenland, the tundra patchworked with rumples of earth holding lakes sheathed in smooth ice and snow. Researcher Katey Walter Anthony trudged through the light snow around yet another lake on her survey list, looking for bubbles trapped in the lake ice. “We stumbled across something really weird in a lake right in front of the ice sheet,” she said. “We saw a huge open area in the lake that looked like it was boiling.” Walter Anthony and her team were visiting lakes to measure methane bubbling up. But the roiling seep looked like none other she had seen. “It looked like something deeper and larger, large plumes of bubbles rushing upward,” Walter Anthony said. “So I got curious: where is this gas coming from and what is the mechanism for its release and how widespread is it?” It was a new twist in the problem of lake ice and methane emissions across the changing Arctic. Thawing out the freezer Walter Anthony had been studying methane seeping from Arctic lakes, beginning in northeast Siberia in 2000. Under the lakes, a thick layer of carbon from plants that died hundreds or thousands of years ago stays mostly locked up in permanently frozen ground, like broccoli in the freezer. Today, soils in Siberia and northern Alaska are particularly rich with that organic matter. Now Arctic tundra hovers at a colder temperature that sprouts no trees and only low shrubs and plants, but millions of ponds and lakes. In areas where that permafrost is warming, that organic matter is thawing, rotting, and producing gases that must escape through the lakes. Guido Grosse studies how these lakes, called thermokarst lakes, form and change. “Permafrost keeps the lakes from draining,” Grosse said.
“That’s why there are so many lakes.” In recent years, the Arctic has warmed even more strongly than lower latitudes. Now in many areas, the ground is thawing deeper than it used to. “As permafrost degrades, lakes can drain,” said Grosse, at the Permafrost Laboratory at University of Alaska Fairbanks. In other areas, permafrost thaw results in a sinking land surface where new ponds and lakes form, exposing underlying permafrost to even more warming, thawing, and decay. Grosse said, “The lakes are a big emitter of methane in a warmer climate scenario, a warmer Arctic.” Some organic material from vegetation and frozen lake banks normally falls in the lake, thaws, and decays around its edges. This decay stops during the cold season in shallow lakes that freeze to the bottom in harsh Arctic winters. But most lakes deeper than 1.5 meters (5 feet) no longer freeze all the way to the bottom. In these lakes, the organic carbon is beginning to thaw and rot year-round, and the permafrost underneath the lake is beginning to thaw out deeply. Microbes decompose organic carbon in the lake sediments, and in the thawed-out zone under the lake, into methane gas that bubbles to the surface. As the lake surface refreezes in fall, researchers can see the bubbles, trapped in the ice. But they lacked wide-scale measurements of the escaping methane. Bubble, bubble, toil, and trouble In search of methane bubbles, Walter Anthony’s team traveled to lakes by snow machine, helicopter, hiking in, canoe, and bush airplane. “We’ve gone out now on hundreds of lakes and mapped out these methane seeps, in Alaska, Russia, Canada, Finland, Sweden, and Greenland,” she said. It is painstaking work conducted on often dangerously thin first ice in early winter. Melanie Engram, who works with Walter Anthony on the methane studies, explained what it takes to measure emissions at a single lake. 
Engram said, "There often is snow on top of the ice, so first you shovel a 1-meter wide by 50-meter long [3-foot by 164-foot] transect. Then we drill a hole in the ice on one side, and get a bucket of water and pour it over the transect to remove the last specks of snow so we can see through the ice. Then you can easily see, count, categorize, and measure methane bubbles."

As lakes freeze over in fall, bubbles released from lake sediments get trapped under the freezing surface. The researchers can see stacks of bubbles, separated by thin films of ice, like a time-lapse photograph showing where the bubbles are coming from under the lake. The bubbles and the rate of gas release vary across a lake, and from lake to lake. "If the bubbles are coming up slowly enough, the ice has a chance to grow around them," Engram said. "Katey has been working to categorize the bubbles. Type A is slow and indicates a small gas flux hardly keeping up with lake ice growth; with type B, some of the bubbles have grouped together by the time the ice forms. Type C has quite large pillows of gas before the ice forms around it. Each of these categories corresponds to a certain rate of gas seepage."

The "boiling" lakes became a fourth type, called "hotspot," where methane is nearly continuously seeping out at very high rates. The researchers were able to measure seepage rates for each category by installing automated bubble traps, which look like underwater umbrellas, to measure the gas escaping year-round.

As the permafrost thaws

The ultimate goal of the team's project is Arctic-wide estimates of lake methane emissions. Such estimates are needed for computer climate models, which help test and deepen scientists' understanding of how Arctic climate responds to change. But with millions of lakes and millions of square miles of Arctic, Engram said, "We can't go measure every lake.
There's no way of traveling everywhere."

The team thought they could inspect the lakes and compare field observations with satellite images on a larger scale. Then they could apply the bubble cluster classifications and the measurements from their ground studies to estimate how much methane each lake is emitting. This would give them a way to estimate methane emissions from lakes across the entire Arctic.

Engram said, "Katey had the idea of looking at Synthetic Aperture Radar (SAR) data." Other researchers had published studies noting that SAR can detect brighter areas corresponding to tubular bubbles in floating ice. Engram said, "We thought, well, if we see brighter ice where there are tubular bubbles, maybe we can find a SAR wavelength that would be sensitive to the various methane bubble types."

Engram was then working for the Alaska Satellite Facility (ASF) Distributed Active Archive Center (DAAC), which distributes RADARSAT-1 SAR data. A major challenge was to align the data very precisely with the locations of individual lakes, and she thought she knew how to solve it with a new tool from ASF DAAC. "We took SAR data and pushed it through the Convert tool," Engram said. The tool converted the SAR data into geolocated files that could be used in ArcGIS, a data mapping software. Engram compared the images with their ground observations. The brighter the ice in the SAR imagery, the more bubbles. Early winter SAR images showed the highest correlation with field measurements of methane bubbles.

Engram said, "It's important to know how much methane comes out of northern lakes, because methane is a very potent greenhouse gas that is 25 to 28 times more powerful than carbon dioxide at retaining heat in the atmosphere on a 100-year time scale.
If we can do this with SAR remote sensing in a way that's inexpensive, using NASA's already available data and tools, we could contribute useful estimates to the Arctic methane budget."

Uncapping the cryosphere

But what about the wildly boiling gas plumes? Walter Anthony still wanted to understand what was happening under the ground to cause such a high flow rate. "My husband and I got in little airplanes and started flying around looking for places in the winter where lakes were open because of methane seepage," she said. "We flew around and looked at about 6,700 lakes in Alaska, but then we needed to ground truth it. So we went to fifty of the seventy-seven sites where we had seen open areas. We found that yes, every one of them does indeed have very large plumes of methane coming up. But the weird thing is, it was only in certain places."

Walter Anthony and her colleagues studied the geology of the areas where they located the big seeps. In the Arctic, frozen ground can keep gas trapped for thousands of years. "Permafrost is a thick cap that seals off deeper geologic layers by blocking pathways through pore spaces with ice," she said. "There is natural gas underneath some permafrost regions, and that gas cannot escape into the atmosphere because the permafrost is impermeable."

The team did a geospatial analysis and found that the gas plumes were near places where glaciers and ice sheets are retreating, and where the thickest, most extensive layers of permafrost are now disintegrating from warming and thawing. These methane emissions are strong, but transient. "If you've got a pot of water boiling on the stove with a lid on top, you have a bunch of steam that's building up inside of there, you take the lid off, that steam goes up, poof! But then the air clears," Walter Anthony said.
"And in the same way you pull back this cryosphere cap, it lets the methane out in a poof, over probably a century to thousands of years."

On a human scale, that poof of methane means large amounts of carbon added to an already warming atmosphere. "The lakes are much bigger emitters than we thought before, now that we have come to understand how much methane is actually bubbling out of the lakes," Walter Anthony said. "In the future we don't know what will happen. It is a bit of a wild card."

Sorting out all of these contributions helps scientists factor methane emissions into the overall study of Earth's climate. "Our work is another piece of the puzzle, closely linked to other processes associated with a changing world, and important if you want to know how much methane and carbon dioxide will be emitted in the future," Grosse said.

Walter Anthony, K. M., P. Anthony, G. Grosse, and J. Chanton. 2012. Geologic methane seeps along boundaries of Arctic permafrost thaw and melting glaciers. Nature Geoscience, doi:10.1038/ngeo1480.

Grosse, G., J. Harden, M. Turetsky, A. D. McGuire, P. Camill, C. Tarnocai, S. Frolking, E. A. G. Schuur, T. Jorgenson, S. S. Marchenko, et al. 2011. Vulnerability of high-latitude soil organic carbon in North America to disturbance. Journal of Geophysical Research - Biogeosciences 116: G00K06, doi:10.1029/2010JG001507.

Walter, K. M., M. Engram, C. R. Duguay, M. O. Jeffries, and F. S. Chapin III. 2008. The potential use of synthetic aperture radar for estimating methane ebullition from Arctic lakes. Journal of the American Water Resources Association 44(2): 305–315, doi:10.1111/j.1752-1688.2007.00163.x.

Walter, K. M., S. A. Zimov, J. P. Chanton, D. Verbyla, and F. S. Chapin III. 2006. Methane bubbling from Siberian thaw lakes as a positive feedback to climate warming. Nature 443, doi:10.1038/nature05040.
For more information

The photograph in the title graphic shows researcher Melanie Engram prodding the snow on the lake surface to check for thin ice, before approaching the snow-free circles that suggest methane seeping from underneath the lake. (Courtesy K. W. Anthony)
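The survey method the article describes, classifying each seep by bubble type, attaching a measured seepage rate to each type, and scaling up to a per-lake estimate, can be sketched in a few lines of Python. Every number below (the per-type rates and the seep counts) is invented for illustration; only the four bubble-type names and methane's 25 to 28 times greater 100-year warming potency come from the article.

```python
# Hypothetical sketch of the classification-and-extrapolation idea: bubble
# clusters seen in early-winter lake ice are binned into types (A, B, C,
# hotspot), each type gets a seepage rate from bubble-trap measurements,
# and per-lake seep counts are scaled into an emission estimate.

# Assumed seepage rates per seep, in grams of methane per day (invented).
SEEP_RATES_G_PER_DAY = {
    "A": 0.5,         # slow seepage; ice grows around single bubbles
    "B": 2.0,         # bubbles merge before the ice closes over them
    "C": 8.0,         # large pillows of gas under the ice
    "hotspot": 50.0,  # near-continuous seepage keeping the surface open
}

# 100-year warming potency of methane relative to CO2 (25 to 28 times,
# per the article); the low end gives a conservative estimate.
GWP_100YR = 25

def lake_emissions_g_per_day(seep_counts):
    """Sum daily methane output for one lake from per-type seep counts."""
    return sum(SEEP_RATES_G_PER_DAY[t] * n for t, n in seep_counts.items())

def co2_equivalent(methane_grams):
    """Express a methane mass as CO2-equivalent on a 100-year horizon."""
    return methane_grams * GWP_100YR

# Example survey result for a single lake (invented counts).
counts = {"A": 120, "B": 30, "C": 5, "hotspot": 1}
methane = lake_emissions_g_per_day(counts)
print(methane)                  # 0.5*120 + 2.0*30 + 8.0*5 + 50.0 = 210.0
print(co2_equivalent(methane))  # 210.0 g/day of methane as CO2-equivalent
```

In the real study the per-lake counts come from SAR brightness rather than exhaustive field visits, which is what makes an Arctic-wide estimate tractable.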
Advantages and Disadvantages of Biogas

Advantages and Benefits of Biogas
- Provides a non-polluting and renewable source of energy.
- Efficient way of energy conversion (saves fuelwood).
- Saves women and children from the drudgery of collecting and carrying firewood, exposure to smoke in the kitchen, and the time consumed for cooking and cleaning utensils.
- Produces enriched organic manure, which can supplement or even replace chemical fertilizers.
- Leads to improvements in the environment, sanitation, and hygiene.
- Provides a source for decentralized power generation.
- Generates employment in rural areas.
- Household wastes and bio-wastes can be disposed of usefully and in a healthy manner.
- The technology is cheaper and much simpler than those for other bio-fuels, and it is ideal for small-scale application.
- Dilute waste materials (2-10% solids) can be used as feed materials.
- Any biodegradable matter can be used as substrate.
- Anaerobic digestion inactivates pathogens and parasites, and is quite effective in reducing the incidence of waterborne diseases.
- Environmental benefits on a global scale: biogas plants significantly lower the greenhouse effect on the earth's atmosphere by entrapping methane, a potent greenhouse gas, and using it as fuel.

Disadvantages of Biogas
- The process is not very attractive economically (as compared to other biofuels) on a large industrial scale.
- It is very difficult to enhance the efficiency of biogas systems.
- Biogas contains some impurity gases, which are corrosive to the metal parts of internal combustion engines.
- It is not feasible to build plants at all locations.
Table of Contents
- Why Nutrition is Important
- Fixing Bad Eating Habits
- General Concepts of Children's Nutrition
- Things Children Should NOT Eat
- How to Feed Your Baby/Toddler Healthy Food
- How to Feed Your School-Age Child
- How to Feed Your Teenager Healthy Food
- Most Common Mistakes

Why Nutrition is Important

The number of overweight and obese children is growing at an alarming rate. Almost one in three children ages five to eleven is considered to be overweight or obese. Extra pounds put children at risk for developing serious health problems, such as heart disease, diabetes, and asthma. Overweight children are also emotionally vulnerable. They are frequently teased and excluded from team activities, which may lead to low self-esteem, negative body image, and ultimately depression.

Overweight and obese children have an increased risk of developing serious health problems, such as the following:
- Type 2 diabetes
- High blood pressure
- High cholesterol
- Bone and joint problems
- Liver and gall bladder disease
- Restless or disordered sleep patterns
- Depression and low self-esteem

In spite of these health issues, you can help your child reach and maintain a healthy weight with the right support, positive role modeling, and encouragement. Healthy eating and physical activity habits are important for your child's well-being. I, as a mother, have always tried to take an active role in helping my child learn healthy habits that last a lifetime.

This article presents a detailed explanation of how to feed your child at any age, from creating proper eating habits for toddlers, school-age children, and teenagers to covering some of the most common mistakes that parents make. A healthy diet is essential for a healthy life. Starting and maintaining a healthy lifestyle during childhood can have long-term benefits.
Having healthy eating and physical activity habits during the formative years of childhood greatly increases the chances that your child will continue these habits throughout life. Childhood obesity has been called "one of the most serious public health challenges of the 21st century," and with good reason (1). Obesity can harm nearly every system in a child's body, including the heart, lungs, bones, muscles, kidneys, the hormones controlling blood sugar and puberty, and the digestive tract, and it can also take a social and emotional toll (2). Children who are overweight or obese are also more likely to remain so into adulthood, increasing their risk of disease and disability later in life (3).

On a global scale, it is estimated that forty-three million preschool children under the age of five were overweight or obese in 2010, a 60 percent increase since 1990 (4). The problem affects both rich and poor countries, and the numbers show that it places the greatest burden on the poorest: of the world's forty-three million overweight or obese preschoolers, thirty-five million live in developing countries. By 2020, if the current epidemic continues unabated, 9 percent of all preschoolers, nearly sixty million children, will be overweight or obese (4).

Obesity rates are still higher for adults than for children. In relative terms, however, the U.S., China, Brazil, and other countries have seen obesity rates escalate more rapidly in children than in adults (5). On the other hand, some regions, such as Southeast Asia and sub-Saharan Africa, still struggle with child hunger (6). Thanks to globalization, many regions of the world have become wealthier, and wealth and weight go hand in hand. As poor countries gradually prosper and move up the income scale, they also tend to switch from traditional diets to Western eating habits, which triggers higher obesity rates (7).
One result of this "nutritional transition" is that low-income and middle-income countries often face two problems at once: the diseases that accompany malnutrition, particularly in childhood, and the debilitating chronic diseases associated with obesity and Western lifestyles. It is often challenging to estimate childhood obesity rates across the world, because many countries do not conduct nationally representative surveys that measure the height and weight of school-age children.

Fixing Bad Eating Habits

The first step toward healthy eating is consuming a variety of whole and nutritious foods. By eating foods from different food groups in every meal, children get a healthy and well-rounded diet. To put it simply, a dinner plate should be divided into four parts, with fruits and vegetables covering half of the plate and grains and proteins the other half. Dairy products should also be incorporated into meals, but in smaller amounts, whereas foods containing excess fats and sugars should become occasional foods, eaten only once in a while.

To support continued growth and physical development, children need to satisfy their daily dietary needs with foods that provide the nutrients necessary for growth and development, including vitamins, minerals, proteins, and complex carbohydrates. It is recommended that children eat at least five servings of fruits and vegetables every day.

Children who eat a healthy diet are also more likely to have faster brain development. For instance, if children do not receive an adequate intake of iron and iodine, they may experience delays in both motor development and cognitive skills. A child whose diet is deficient in the essential fatty acid DHA may experience difficulties and delays in learning and development. Eating healthy foods also reduces the risk of health problems faced by children.
Those who eat healthy foods on a regular basis may have fewer cavities, and the root cause of many chronic diseases, such as obesity, high blood pressure, heart disease, cancer, and diabetes, can often be traced back to an unhealthy childhood diet. If children start life with healthy and whole foods, they may be able to avoid many health issues that could affect them both during childhood and into adulthood.

General Concepts of Children's Nutrition

While most parents agree on the benefits of vegetables and healthy proteins, almost every other issue concerning nutrition seems to be debatable. Should children drink fruit juice? If yes, how much? Is sugar acceptable in moderation? What about high-fructose corn syrup? While there is plenty of room for parental judgment in a child's diet, a few core food groups should always be included.

Most people are busy and surrounded by the hubbub of daily life. They often eat on the go, and proper nutrition is not a top priority; it is easy to forget the importance of their children's nutrition, too. The simplest way to test whether a child should eat a given substance is to determine whether or not it is actually a food. Any "food" that can sit on a shelf for a year and not decompose is probably not fit for consumption.

On the "nonfood" list, you may want to include anything containing hydrogenated oils (e.g., peanut, cottonseed, vegetable, soy, and canola oils), high-fructose corn syrup, processed grains, MSG, or artificial sweeteners. By this standard, you should also exclude all fast food, microwaveable food, "food" bars, and most drinks besides water from your family's diet. Watch as well for chemicals in less easily recognized sources, such as the BPA in canned goods and soft plastic water bottles, or the hormones, pesticides, and antibiotics found in conventional meat.
This list may seem confusing and leave you wondering what your child is allowed to eat. Have no worries, though. Below is a list of great foods that are both tasty and nutritionally rich. They are excellent sources of the nutrients that every child should consume on a regular basis.

Good Sources of Protein

Proteins (chains of amino acids) are involved in nearly every function in the body and are absolutely vital for every person, especially those still forming bones and muscles. Healthy meat offers the complete proteins that children need for proper growth. I have often heard parents complain that their children do not like red meat; children who rarely eat red meat often test positive for B-12 deficiency.

As for what constitutes healthy meat, your child should be fed real, untreated, chemical-free sources of protein. Do not buy meat products such as chicken nuggets, because they are entirely unsuitable. Instead, buy organic, pure chicken, beef, turkey, and eggs, and try to have your children eat these foods every day. Doing so will ensure that your child's protein needs are being met. Most children are reluctant to eat healthy meats, but once the meats are well prepared, children will eat them willingly.

Proteins to eat:
- Grass-fed beef
- Organic organ meats
- Free-range chicken and other poultry
- Wild-caught fish
- Wild game and other whole proteins
- Free-range eggs
- Luncheon meats and bacon, if free of nitrates and nitrites

Proteins to avoid:
- Processed meats, such as chicken nuggets
- Processed food, such as pizza and hamburgers
- Non-meats, such as soy nuggets
- Commercially raised poultry, beef, or fish

Vegetables and Fruits

Americans eat much more fruit than vegetables, and I hope this trend will reverse. Vegetables are equally or even more important than fruit. They are a sugar-free food, whereas fruit contains natural sugar and fructose in large amounts, which can be harmful to your child.
Moreover, in most cases, children will choose fruit over vegetables if given the option, and many parents are willing to make this concession as long as their child is eating any healthy food. Americans consume French fries with ketchup more than any other type of vegetable, so even when we think we are increasing vegetable consumption by buying these happy meals, that is not the case. Foods such as potatoes (a tuber that is high in carbohydrates and low in nutrition compared with other vegetables), corn (a grain), and peas (a legume) are not really vegetables. Most children receive a great deal of their "vegetable" intake from tomato-based products, such as pasta sauce or ketchup, and the tomatoes used in these products are genetically modified.

Despite all the benefits of vegetables, including reducing the risk of almost every disease, we still do not eat enough of them or feed our children enough of them. Nonetheless, you have considerable influence as a parent. Use it.

Vegetables and fruits to eat a lot of:
- Green and leafy (e.g., spinach, kale, chard, turnips, lettuce, mixed greens, and mustard greens)
- Colorful (e.g., tomatoes, peppers, eggplant, onions, carrots, broccoli, squashes, celery, cauliflower, cabbage, berries, bananas, avocado, cucumbers, and grapes)
- Unusual (e.g., fennel, leeks, kohlrabi, asparagus, radishes, parsnips, olives, artichokes, okra, bok choy, Brussels sprouts, sea veggies, and beets)

Vegetables and fruits for treats

Instead of serving sweets and soda for treats at the end of a meal, turn to fruit. I like to serve berries while they are in season, and my children go wild over them. Most of the year, though, vegetables come first, and fruits are served as the "dessert." After a period of adjustment, children come to love the natural sweetness of fruit, even more than processed sugar.
Some higher-sugar fruits that make great treats are:
- Apples, oranges, papaya, melons, mango, pomegranates, peaches, and pears
- Dried fruits, such as prunes, dates, and dried cranberries

Vegetables and fruits to avoid:
- Fried items, such as French fries, onion rings, potato chips, and other nonfoods
- "Veggie" chips, "fruit" roll-ups, and "fruit" snacks
- Fruit juices, including the no-sugar-added juices (these are also simple carbohydrates)
- Any "fruit" or "vegetable" product that has ingredients besides fruit on the label

Fats

Unfortunately, the trend toward low-fat food in America has been passed on to children as well. People often go to extremes regarding diets, thinking that eating fast food or junk food leaves room for them to eat "healthy" low-fat alternatives at other times. Despite the best of intentions, some parents make a huge mistake by restricting fat in their child's diet to prevent weight gain. A fat-restricted diet in children is likely to cause health problems, vitamin deficiencies, and ADHD. Dietary fats carry the essential fat-soluble vitamins A, D, E, and K. For example, breast milk, which is considered the best food for babies and toddlers, is more than 50% total fat and 40%–50% saturated fat. It makes little sense to have children suddenly go from a dietary need for this much fat to a severely fat-restricted diet.

A lack of necessary dietary fats, including saturated fats, can reduce the myelin sheath that coats children's brain cells. In such cases, this may cause the rapid or uncontrolled firing of impulses in the brain known as ADD or ADHD. It may be a big step, but our generation really needs to stop demonizing fats. Children under fourteen particularly need adequate amounts of fat, including saturated fat. Fat should comprise about 30% of their total diet, but be mindful in choosing healthy fats.
You do not want to consume trans fats or engineered fats, such as vegetable oils, hydrogenated oils, and shortening.

Sources of dietary fat to eat:
- Coconut (raw or shredded, as flour, milk, butter, or oil)
- Olives/olive oil
- Animal sources, as long as they are organic or grass-fed
- Wild game
- Organ meats
- Also, consider supplementing vitamin D and omega-3s

Sources of dietary fat to avoid:
- Polyunsaturated oils, such as liquid peanut, canola, soy, and vegetable oils
- Hydrogenated oils
- Trans fats
- Any other engineered form of oil or fat

Things Children Should NOT Eat

Many parents may wonder why there are no "healthy whole grains" or dairy products on the list. You do not need them, and neither do your children, especially in pasteurized or processed forms. These two food groups are associated with many childhood allergies and are not as superior in providing nutrition as they are made out to be. My personal experience is that children who cannot eat either of these foods due to allergies get just as many nutrients as children who eat them regularly.

Water-soluble proteins found in grains, such as lectin and gluten, can be harmful to the digestive system over time. It is also possible for these particles to pass through the small intestine and end up in the bloodstream, where the body treats them as pathogens. To protect itself, the body mounts an immune response, and an allergy is born. The body is capable of healing itself, however, if provided with real food, especially in children.

Even people who claim that whole grains should be eaten for fiber and nutrients acknowledge that meat, vegetables, fruits, and healthy fats have a much higher nutrient profile. It is therefore best to limit whole grains to a small part of a child's diet. Dairy products should not be eaten in large amounts, and when they are eaten, it should be in raw, unpasteurized forms.
Many doctors agree that dairy is a staple in many children's diets, yet statistics indicate that children who do not eat dairy products, by choice or because of allergy, still receive enough calcium and other nutrients (7). But dairy products are the main source of dietary fat for many children, meaning that even though they are not necessary, they are still good to eat until that fat is replaced with healthier sources.

Given all this information, one may wonder what is the best food to feed a child. At first, I also felt perplexed, not knowing what to give my own. As a new mom, it was hard to make my child eat things he did not like. On the other hand, I could not imagine letting him go hungry, if only for one meal. I started to realize how addicted to unhealthy foods he was and how resistant to healthy foods he had become. I knew something had to change, and change it did.

I decided that parents should exercise their authority over children regarding healthy eating habits, just as they exercise authority in many other aspects of their children's lives. Eating healthy is as important as washing your hands or going to bed early. Even though eating poorly is more detrimental to health than staying up late or wearing dirty clothes, many parents are still lax on this issue. This attitude must stop! I decided to make the change: if I can eat healthy, so will my child. Much to my surprise and relief, the transition was much easier than I expected. Although they can be picky, children are extremely resilient and adaptable, and they respond to dietary improvements even more readily than adults. Creating a healthy habit will affect your children for the rest of their lives.

How to Feed Your Baby/Toddler Healthy Food

Feeding a child can be an endless puzzle. You know it is important that your baby eats, but he just won't eat. You may have many thoughts about this problem, but he knows how much he should be eating.
Yet, you will need some guidance to make it easier both for you and for him. The first thing you will probably notice is a sharp drop in your toddler's appetite after his first birthday. Suddenly, your baby is very picky about the food he eats, refuses to come to the table, or turns his head away after just a few bites. Many parents think that now that he is bigger and more active he should be eating more, but rest assured there are good reasons for the change. His growth rate has slowed, and his body does not require as much food as before.

Why Toddlers are Different Regarding Nutrition

A toddler generally needs about 1,000 calories a day, which is enough for growth, good nutrition, and energy. If you have ever been on a 1,000-calorie diet, you know it is not much food. A child will do just fine, however, with 1,000 calories divided into three small meals and two snacks a day. Still, do not expect him to always eat that way, because the eating habits of a toddler are unpredictable and erratic from one day to the next. For example, he may eat all the food in sight at breakfast but hardly anything else for the rest of the day. Or your child may eat only his favorite food for a few days in a row and then reject it completely. He may eat 1,000 calories one day but considerably more or less on a subsequent day. Your child's needs will vary with his activity level, metabolism, and growth rate.

Most Common Challenges and How to Overcome Them

You should not turn mealtime into a battlefield to make him eat a balanced diet. It is not you your child rejects; it is the food you prepared that he turns down, so do not take it personally. Besides, the harder you push him to eat, the less likely he is to comply. Instead, try offering him a selection of nutritious foods and letting him choose what he wants to eat. Make sure to vary the consistency and taste as much as possible.
If it turns out that he just won't eat anything, save the plate for later when he is hungry, but do not allow him to fill up on sweets or cookies after rejecting his meal. Doing so will only fuel his interest in empty-calorie foods (foods that are high in calories but low in important nutrients, such as minerals and vitamins) and diminish his appetite for nutritious foods. It may be hard to believe, but your child's diet will balance out over a few days, as long as you make a range of wholesome foods available and do not pressure him to eat a particular food at any given time.

By the age of two, your child should be eating three meals a day, plus one or two snacks, and he is ready to eat the same food as the rest of the family. With his enhanced social and language skills, he will be able to participate actively in mealtime conversations if given the chance to eat with everyone else. You should work on building healthy eating habits and making healthy food choices as a family, and sitting down together at mealtime is the beginning of a good habit.

Fortunately, your child's feeding skills have improved: he can use a spoon, serve himself a wide variety of finger foods, and drink from a cup with just one hand. Nonetheless, he is still learning to chew and swallow efficiently and is prone to gulp his food when he wants to get back to playing. This habit increases the risk of choking. To decrease that risk, avoid the following foods, which could block the windpipe if swallowed whole:
- hot dogs (unless sliced lengthwise and then across)
- nuts (especially peanuts)
- spoonfuls of peanut butter
- whole raw carrots
- whole grapes
- raw celery
- raw cherries with pits
- round, hard candies or gum

Still, you should not be alarmed if your child does not always follow this ideal plan. Many children refuse to eat particular foods or insist on eating only their favorite foods over long periods of time.
The more you oppose your child's eating preferences, the more determined he will be to defy you. Offering him a variety of foods and letting him choose what to eat will eventually bring him to a balanced diet on his own. He will probably find healthy foods more interesting if he can eat them on his own, so offer him finger foods, such as raw vegetables and fresh fruits, instead of cooked dishes that require a spoon or fork.

Before starting school, your preschool child should have a healthy attitude toward eating. Ideally, by the age of three, he no longer uses eating, or refusing to eat, to display defiance. He should not confuse food with affection or love, and he probably views eating as a natural response to meals and hunger. Despite his enthusiasm for eating, his preferences for certain foods remain very specific and may vary from day to day. There may be times when your child gobbles down a particular food one day but pushes away the plate with the same food the next. He may insist on a particular food for a few days in a row and then refuse to eat it on the pretext that he does not like it anymore.

Although irritating, this is normal behavior for a preschooler, and the best thing you can do is not make an issue of it. Let your child eat the other foods on his plate or pick something else to eat. As long as the foods he eats are not extremely fatty, salty, or sugary, make no objections. Still, encourage him to try new foods by offering a very small amount to taste, not by insisting that he eat an entire portion of an unfamiliar food. If healthy options are available, let your child decide what and how much to eat. If he is a picky eater and resists vegetables, for example, do not get frustrated or discouraged. Keep offering these foods even if he repeatedly turns his nose up at the sight of them.
He may eventually change his mind and develop a taste for foods he used to ignore. This is the right time to reinforce or establish healthy eating habits. One of the major obstacles to good nutrition for your child is television advertising. According to some studies, children who watch more than twenty-two hours of TV per week are more likely to become obese (8). Children are extremely receptive to advertisements for candies and other sugary sweets, and obesity has become an alarming problem among children in America. These findings suggest that you need to be aware of your child’s eating habits at home and away and monitor them consistently to ensure that he is eating as healthily as possible (9).
Recommended Healthy Foods
Just as you do, your toddler needs to eat foods from the basic nutrition groups:
- Whole milk
- Other dairy products (full-fat yogurt, soft pasteurized cheese and cottage cheese)
- Fruits (papaya, melon, apricot, grapefruit)
- Vegetables (cauliflower and broccoli, cooked until soft)
- Juice (100 percent juice, citrus and noncitrus)
- Iron-fortified cereals (wheat, oats, barley, mixed cereals)
- Protein (eggs, small pieces of meat, poultry, thinly spread peanut butter, tofu, boneless fish, beans)
While planning your child’s menu, keep in mind that cholesterol and other fats are essential for his normal growth and development, so you should not restrict these nutrients during this period. Babies and young toddlers should get about half of their calories from fat. You can gradually decrease fat consumption once your child reaches the age of two, lowering it to about one-third of daily calories by the time he is four or five.
I am not saying you should ignore the fact that the number of overweight and obese children has become alarming, but youngsters at the age of two really do need dietary fat. If your child’s caloric intake is around 1,000 calories a day, you should not worry about overfeeding him or putting him at risk of gaining too much weight. Vitamin supplements are often unnecessary for toddlers who eat a varied and balanced diet, but your child may need supplemental iron if he eats very little meat, iron-rich vegetables, or iron-fortified cereal. If your child drinks a lot of milk (more than 32 ounces, or 960 ml, per day), it may inhibit the proper absorption of iron, which increases the risk of iron deficiency. Sixteen ounces (480 ml) of low-fat or nonfat milk per day is just enough: this amount satisfies the need for the calcium required for bone growth without reducing his appetite for other foods, especially those providing iron. A vitamin D supplement of 400 IU a day is of great importance for children who are not regularly exposed to sunlight, consume less than 32 ounces a day of vitamin D-fortified milk, or do not take a daily multivitamin containing at least 400 IU of vitamin D. This amount of vitamin D decreases the risk of rickets. By their first birthday, children will most likely be able to handle most of the foods you serve the rest of the family, but with several precautions. First, make sure the food is cool enough that it won’t burn the child’s mouth. Test the temperature yourself, because your child will probably dig in without any consideration for the heat. Moreover, do not serve your child foods that are heavily salted, sweetened, spiced, or buttered. Such foods prevent your child from experiencing the real, natural taste of food, which may be damaging to their long-term health.
Younger children are more sensitive to these flavorings than adults and may reject heavily spiced foods. Also, bear in mind that your child can still choke on chunks of food large enough to plug the airway. Most children do not fully master chewing until they are about four or five years old. At age two, be sure to serve mashed foods or foods cut into small, easily chewable pieces. Exclude whole grapes; cherry tomatoes; peanuts; sunflower or pumpkin seeds; meat sticks; large chunks of hot dogs; hard candies, including gummy bears or jelly beans; and chunks of peanut butter, unless it is thinly spread on a piece of bread. Carrots and hot dogs should be quartered lengthwise and then sliced into small pieces. Also, be sure to have your child eat while seated and supervised by an adult. Children are rather impatient and want to do everything at once, but “eating on the run” is out of the question because it increases the risk of choking, and you should teach your child to finish a mouthful before speaking. When children turn one, or soon thereafter, they should start drinking liquids from a cup. They will also need less milk now, because solid foods will become their primary source of calories.
How to Feed Your School-Age Child
During the early childhood and school-age years, a child begins to establish eating and exercise habits that may stay with them for life. If a child establishes healthy habits, the risk of developing many chronic diseases is greatly decreased. On the other hand, poor eating habits and physical inactivity during childhood increase the risk of health problems in adulthood.
Why School-Age Children Are Different Regarding Nutrition
School-age children (ages six to twelve) need healthy foods and nutritious snacks. They grow at a consistent but slow rate and usually eat four to five times a day, including snacks. Most food habits, likes, and dislikes are established during this period.
Family, friends, and the media (particularly TV) have a great influence on their eating habits and food choices. School-age children are often more willing to eat a wider variety of foods than their younger siblings. Eating a healthy after-school snack is important, as snacks may contribute up to one-fourth of the total calorie intake for the day. School-age children have also developed more advanced feeding skills and are capable of helping with meal preparation. In contrast to the rapid physical growth and development of infancy and adolescence, the childhood years (ages two to eleven) are characterized by slower but more stable physical growth. On average, children gain four to seven pounds and one to four inches per year. At approximately age ten or eleven, the growth rate begins to increase again, a sign that the child is approaching puberty. During this period of slower growth, the body’s needs also decrease relative to infancy, particularly for calories and certain nutrients, such as protein. As a result, it is not uncommon to see a child with an inconsistent or decreased appetite. On the other hand, as your child enters school and begins to participate in school activities and organized sports, physical activity increases, and appetite and food intake increase with it. These new activities, including starting school and participating in other structured activities, also place new mental, emotional, and social demands on children; accordingly, the school-age years are characterized by intense development of cognitive and social skills. Without appropriate nutrition, your child will not be able to perform well cognitively or physically.
Most Common Challenges and How to Overcome Them
Parents should monitor what their child eats, while the child is in the best position to decide how much to eat.
Healthy, active children’s bodies know the right amount of food, even though their minds can lead them astray when choosing which foods to eat. It is easy to overestimate the amount of food your child needs, especially during middle childhood. Children of this age do not need adult-sized servings, but many parents are unaware of this and simply place as much food on their child’s plate as on their own. The child is then left to choose between being criticized for not cleaning the plate and overeating, which increases the risk of obesity. To reduce this risk, it is advisable to weigh your children occasionally, but you do not have to go so far as to count calories for them, as most youngsters regulate their calorie intake quite well. As children grow, their energy needs and food intake will also increase, particularly as they approach puberty. Between ages seven and ten, both boys and girls consume about 1,600 to 2,400 calories per day, although caloric needs vary considerably even under normal circumstances. Girls experience a significant growth spurt between ages ten and twelve and will consume about two hundred extra calories each day, whereas boys reach their intake increase about two years later, at nearly an extra five hundred calories a day. During this period of rapid growth, children require more total calories and nutrients (e.g., calcium for bone growth and protein for building body tissue) than at any other period in their lives. At most ages, boys need more calories than girls, primarily because of their larger body size. Nonetheless, appetites can vary, even from day to day, depending on factors such as activity level: a child who spends the afternoon playing outdoors may need more calories than one who spends the afternoon doing homework.
Recommended Healthy Foods
As children develop, they require the same healthy foods that adults eat, along with more vitamins and minerals to support growth: whole grains, a wide variety of fresh fruits and vegetables, calcium for growing bones, and healthy proteins. Here are dietary guidelines for school-age children:
- Vegetables (3–5 servings per day. A serving might consist of 1 cup of raw leafy vegetables, 1/2 cup of other vegetables, raw or cooked, or 3/4 cup of vegetable juice)
- Fruits (2–4 servings per day. A serving might consist of 1/2 cup of sliced fruit; a medium-size whole fruit, such as a banana, pear, or apple; or 3/4 cup of fruit juice)
- Whole grains (6–11 servings per day. A serving might include 1 slice of whole-grain bread, 1 ounce of unsweetened cereal, or 1/2 cup of brown rice)
- Protein (2–3 servings of 2–3 ounces of cooked meat, fish, or poultry per day. A serving may also include 1/2 cup of cooked dry beans, 2 tablespoons of peanut butter for each ounce of meat, or 1 egg)
- Dairy products (2–3 servings, or cups, per day of milk, yogurt, or natural cheese)
- Zinc (Some studies indicate that zinc improves memory and school performance, particularly in boys. Good sources of zinc are beef, liver, pork, oysters, dried beans and peas, whole grains, nuts, fortified cereals, cocoa, milk, and poultry)
Here are some tips to improve your child’s nutrition and establish healthy eating habits:
- Try to control where and when your child eats by providing regular daily mealtimes and by modeling healthy eating behaviors during shared, social meals.
- Involve your child in selecting and preparing foods. Teach your child to make healthy choices by providing opportunities to select foods based on their nutritional value.
- According to one study, children in general do not consume enough of the following nutrients: calcium, fiber, iron, protein, and vitamins (10).
For this reason, select foods that are rich in these nutrients when possible.
- Studies indicate that the number of overweight and obese children is growing (11). Consequently, it is recommended that you control portion sizes and limit processed foods; doing so will limit calorie intake and increase the nutrients in your child’s diet.
- Parents are encouraged to limit the time children spend watching television, playing video games, and using the computer to less than two hours per day and to replace these sedentary pastimes with more active physical activities.
- Parents are encouraged to serve children the recommended serving sizes.
- Sixty minutes of moderate to vigorous physical activity on most days is recommended for children and adolescents to maintain good health and a healthy weight during growth.
- To prevent dehydration, encourage your child to drink fluids regularly during physical activity and to drink a few glasses of water or other fluid after the activity is completed.
How to Feed Your Teenager Healthy Food
Adolescence is a period of rapid physical, intellectual, emotional, and social maturation. To support this growth, your teenage child needs extra calories, iron, calcium, and sufficient protein. Parents and family now have less influence on teenage eating habits, as peers, body-image issues, and media messages shape teenagers’ mind-set about nutrition.
Why Teenage Children Are Different Regarding Nutrition
After a period of slow growth during late childhood, the teenage growth rate becomes as rapid as that of early childhood. By the end of adolescence, teenagers have attained most of their adult height and weight. Although adolescence spans five to seven years, teenagers do most of their growing during an 18- to 24-month window called the “growth spurt.” To support this rapid growth, teenagers have to consume a lot of calories and other essential nutrients.
During adolescence, teenagers go through puberty, a process that involves total body maturation and the development of adult sexual function. Adolescence is also the period when the body composition of both males and females changes. Before puberty, females have approximately 19% body fat, which increases to about 22% afterward. Males, on the other hand, maintain a body fat percentage of approximately 15%, but they gain about twice as much muscle mass as females during adolescence.
Most Common Challenges and How to Overcome Them
By adolescence, your teenager’s eating habits are pretty much set, but if those habits are built on sugary and junk foods, now is the perfect time to help them change course. One of the best ways to get teenagers to make dietary changes is to present ideas in terms they understand, for example, by pointing out the short-term benefits of eating well. Explain that improving dietary habits and staying active lead to a better physical appearance, better sleep, improved athletic abilities, and an overall improvement in their enjoyment of life. Say things like “Iron will help you do better on your math test and stay up late without being as tired” and “Calcium will help you grow taller during your growth spurt.” Encourage your child to engage in 60 minutes of physical activity per day, and make it fun. The activity does not have to be a sport; the goal is simply to get your children moving in whatever way makes them active and happy. Encourage your teenager to jog, play basketball at the park, walk the dog, swim, go to the gym, or rollerblade. Parents should also be mindful of how they coax their teenager into making these changes. Most importantly, when parents themselves get active and eat healthier, the changes become a lifestyle habit that the children will follow.
Recommended Healthy Foods
Because of the growth spurt during adolescence, your teenager will need extra iron and calcium.
- Calcium is found in dairy foods, so eating milk, yogurt, and cheese three times a day will provide enough calcium for your child’s body. Getting enough calcium will also help your child build stronger bones for life and reach peak bone density.
- Iron is essential for your teenager. Growing muscle mass and expanding blood volume mean that your child needs more iron in adolescence, and girls have extra iron needs because of their periods. Red meat is one of the best sources of iron. If your child is a vegetarian, they can still get enough iron, but they need to eat iron-rich alternatives to meat. Some of the best sources of iron for vegetarians are green leafy vegetables; legumes, such as lentils and beans; fortified cereals; and whole grains.
- Encourage them to drink water, which helps with losing or maintaining a healthy weight and also keeps them refreshed and energized.
- Getting enough sleep is important because sleep deprivation has been linked with overeating. Sleep also gives your child the energy to stay active and alert.
- Recommend foods that are rich in fiber, because fiber aids digestion and helps your child feel satisfied.
- Prepare a big breakfast for them. Make sure it is rich in fiber, which will help your teenager feel full through the day and keep them from snacking on empty calories.
- Make sure they are consuming enough protein, which helps build muscle mass and strength. Protein also has a greater effect on the basal metabolic rate than other nutrients, so a protein-rich diet may modestly increase the calories your children burn even while sleeping or just sitting.
- Emphasize the importance of exercise. Your teenager does not have to train hard every day; even taking a stroll downtown is healthy for them.
- Teach them to cook.
Show them how easily they can make a simple recipe at home that is far healthier than packaged or takeout alternatives.
- Teach them not to trust ads. They should know that advertising is marketing and is often deceptive; most of the foods advertised on TV are high in calories and low in nutrients.
Most Common Mistakes
Whether your child is overweight, obese, underweight, or just fine, they are still susceptible to some common mistakes that parents make. Here are seven of these mistakes and how to avoid them:
Encouraging Children to Join the “Clean Plate Club”
Healthy young children usually eat when they are hungry and stop when they feel full. They are led by their internal, natural cues, and you should not interfere by encouraging them to eat past the point of fullness. Teach your child to eat according to how hungry they feel. Doing so will help them develop a comfortable relationship with food and avoid overeating as they grow older. According to recent studies, children of all ages are likely to eat more when served larger portions (12). Put simply, the more food you put on your child’s plate, the more they will eat, no matter how full they may be. For this reason:
- Do not bribe or pressure your children to clean their plate
- Serve small to moderate portions at meals (except vegetables, which can be served in unlimited portions)
- Encourage your children to eat until they feel comfortably full, and allow them additional servings if they request them
Offering Sweet Rewards
Some parents feel frustrated because it is hard to get their children to eat vegetables, so they resort to bribery. “Eat your veggies and you may have donuts for dessert” is a technique that teaches children that vegetables are less appealing and that dessert is the prize, something to be valued over other foods.
Studies have shown that children’s long-term preference for certain foods actually decreases when they are rewarded for eating them (13).
Depriving Children of All Sweets
Although sweets are one of the main contributors to childhood obesity, depriving your child of all sweets is an extreme measure. To help your child have a healthy relationship with food, you have to strike a balance. There is nothing wrong with limiting sweets, but outlawing them altogether may be counterproductive. One study found that children who are completely restricted from eating snacks and cookies develop an increased desire for them and are more likely to overeat them whenever they get the chance (14). It is perfectly fine to allow your child some type of dessert after dinner. Here are some tips for choosing a dessert or snack for your child:
- Limit desserts/snacks to 150 calories (e.g., an ice-cream pop or two cookies)
- Read labels first, and choose products with healthy ingredients
- Try to sneak in a little nutrition along with the sugar. For instance, ice cream and low-fat puddings provide calcium.
Letting Little Children Eat Like Big Children
Children with older siblings are more likely to eat junk foods (e.g., cookies, candy, cake, soda, and potato chips) than children without them, because older siblings are often allowed these treats, which exposes their little brothers and sisters to unhealthy foods. Although it is hard to maintain the same age-based food standards for all your children, you should not let your youngest eat junk food simply because their older siblings do. Allow your older children to have snacks that are not appropriate for toddlers and preschoolers, but make sure this happens when your youngest ones are not around. One solution is to put the treats in lunch boxes to take to school.
Also, consider giving sweets to your oldest when your youngest are in another room, after they have gone to bed, or while they are taking their evening bath.
Offering Too Many Snacks
Constant snacking throughout the day can leave your children too full to be interested in a proper lunch, and children who are not hungry at mealtimes are less willing to try new foods, such as vegetables. Therefore, you may want to:
- Allow at least two hours between meals and snacks.
- Stick to a consistent snack and meal schedule.
- Limit snacks to no more than two or three a day and about 150 calories per snack.
Getting Young Children Started on Liquid Calories
A study found that youngsters today take in 10% to 15% of their total daily calories from sugar-sweetened beverages, such as soda, fruit drinks, and sports drinks (15). What’s more, children’s average daily calorie intake from these beverages has increased from 242 calories to 270 calories over the past ten years and continues to rise. These drinks are packed with empty calories: rich in sugar but low in nutrients. Although high in calories, beverages do not trigger the same satiety mechanisms as solid foods, so your children will probably not feel satisfied after drinking soda or juice, which can lead to weight gain in the long run. To prevent this, limit sugary beverages in your home and replace them with water, diluted 100% fruit juice, and nonfat or 1% milk.
Serving the Same Meals You Did Before Having Children
Your ideal healthy meal might include fish, plain grilled chicken, salad, and plenty of vegetables, but chances are your young children will find these foods unappealing, bland, or downright disgusting. To persuade a picky child to try healthy foods, you will have to be a bit more creative in the kitchen.
Experiment with meals: include condiments and flavorful marinades to make bland food tastier and more appealing, or simply play with textures, colors, and shapes to liven up the dinner plate. Here are some ideas:
- Top vegetables such as cauliflower, asparagus, and broccoli, or poultry, with part-skim mozzarella, Parmesan cheese, or jarred marinara sauce.
- Cut fruits or vegetables into fun shapes with small cookie cutters. This works perfectly well with yellow and red bell peppers, cucumbers, raw beets, apples, melons, and pears.
- Mix grated or chopped vegetables into meatloaf, chili, soups, casseroles, marinara sauce, or other mixed dishes.
Food battles with your children can be really frustrating, which is exactly why it is important to keep issues such as vegetable avoidance and picky eating in perspective. Celebrate the small victories, and continue to shape healthy eating behaviors for your children. As they get older, the healthy habits you insisted on will stick with them, and may even improve. Ultimately, you will reap the rewards of your persistent focus on good nutrition. Healthy eating habits can improve children’s overall health, sharpen their minds, stabilize their energy, and even out their moods. TV commercials and peer pressure for junk food can make it seem almost impossible for children to eat well, but parents can take steps to instill healthy eating habits without turning mealtime into a war zone. If you encourage your children to form healthy eating habits now, you will very likely have a huge impact on their lifelong relationship with food, giving them the best opportunity to grow into healthy, confident adults. Let me know in the comments if you’ve tried other strategies with your children.