In August 2003 Mars came within about 56 million kilometers of Earth, which scientists estimate is its closest approach in 60,000 years. Astronomically speaking, that closeness puts the red planet in our backyard. By 2004 several spacecraft had been launched and had begun exploring Mars, some from orbit and others from the Martian surface.
The Mars Global Surveyor spacecraft, which arrived at Mars in 1997, discovered that the planet once had a strong magnetic field. The orbiter also mapped the topography of Mars, revealing that the distance from the lowest spot to the highest exceeds 29 kilometers, compared with about 19 kilometers on Earth, measured from the bottom of the Mariana Trench in the Pacific Ocean to the summit of Earth's highest mountain, Mount Everest.
The lowest spot on the Martian surface lies in the wide Hellas basin, which was formed by a gigantic asteroid impact, while the highest point is the summit of the magnificent volcano Olympus Mons, 21 kilometers high. Cameras mounted on Surveyor recorded large boulders that appeared to be over 18 meters across, as well as large fields of sand dunes and newly created gullies. Further observations revealed that most of the surface rocks on Mars are of volcanic origin.
When scientists lost contact with Mars Global Surveyor in November 2006, other orbiters, namely the Mars Reconnaissance Orbiter, Mars Odyssey, and Mars Express, continued exploring the red planet. Their more sensitive cameras further revealed the makeup of the Martian atmosphere and an abundance of ice at the Martian north pole. That ice became the main focus of yet another explorer, the Phoenix Mars Lander, which landed on the planet on 25 May 2008. Its mission was to explore the icy permafrost of the north polar region and find out whether it had ever supported microbial life.
The Phoenix Mars Lander provided very useful information about the Martian surface, including its icy regions. The spacecraft's robotic arm scooped into the icy ground and fed both ice and soil samples to the two laboratories on board for analysis. However, the mission was designed to be a short one: just months after the Lander's work ended, the Martian winter would set in and wrap Phoenix in a thick blanket of frosty carbon dioxide ice.
‘Spirit And Opportunity’ At Work
Spirit and Opportunity were two Mars exploration rovers that arrived at the planet in January 2004; their landing sites were chosen using data generated by previous Mars missions. Their task was to assess the history of environmental conditions at areas that might once have been able to support life. During descent through the Martian atmosphere they slowed with the aid of heat shields, rockets, and parachutes, then bounced across the surface and emerged from a cocoon of airbags, as the earlier Mars Pathfinder had done in 1997.
Robotic exploration is practical on Mars partly because the planet has roughly the same surface area of dry land as Earth. Spirit landed in the giant Gusev crater, which is believed to have once contained an ancient lake, while Opportunity landed on Meridiani Planum, a plateau of ancient layered rocks that contain hematite, an iron-rich mineral ore.
Geological Discoveries of ‘Spirit’ and ‘Opportunity’
Spirit landed on a barren landscape marked by shallow, circular depressions that could have been created by meteorite impacts, and it found that the surrounding rock and soil formations were embedded with volcanic rocks. The rover then drove about 2.6 kilometers to investigate a group of small hills and plateaus, where it discovered quite unusual ledges and rock formations of volcanic origin.
Opportunity landed on 25 January 2004, just 25 kilometers from its target area, after travelling over 456 million kilometers. The rover explored layered rocks studded with small hematite-rich spherules nicknamed "blueberries". These blueberries are not truly blue; their gray color stands out sharply against the red background of Martian soil and rock. Opportunity also found rocks bearing ripples like those formed by sand deposits in flowing water, and with the discovery of chlorine and bromine in the rocks, scientists believe that salt water could have been present at a certain point in time.
|
Empathy, the ability to perceive another person's point of view, emerges naturally as a child grows and develops. However, valuing, respecting, and understanding another person's point of view are social and emotional skills that parents must help their children build in order to raise empathetic and compassionate individuals.
Teach them gratitude.
The old adage "count your blessings" holds true. Focusing on the things your children already have helps them realize, and keep in mind, that they are rich in so many ways. Teaching them gratitude also makes them sensitive to other people's feelings, helping them to develop empathy, compassion and kindness.
Some of the ways you can teach gratitude to your children are:
- Model gratitude – children model their parents in every way, so make sure you always say please and thank you.
- Work gratitude into daily conversations – weave appreciation into your everyday talks, e.g. "Aren't we lucky to have Sam for our cat?" or "Aren't the stars lovely tonight?" You can also set aside a thankful part of the day, such as talking over dinner about the things that happened that day that you're thankful for.
- Ask your child to help – giving your child a task like clearing the table or washing the dishes helps him appreciate the work his parents put in when they do these chores for everybody.
- Encourage generosity – encourage your children to share and give away their things to those who have less.
- Thank-you notes – telling someone even one little thing that you appreciate about him helps foster connectedness, and vice versa.
Teach them about happiness
Happiness is a choice. It's easy for your child to be happy when things are going their way, but when things get tough or don't go their way, how do we teach children to cope and stay happy?
As with most things, children learn by modelling.
Your language, behavior, mood, reactions, goals and habits all influence how your child sees and understands happiness.
The following five steps will help you make happiness a part of your daily conversation until it is practiced and observed and becomes a habit.
- Decide to be happy – teach your children that they can choose to be happy at any moment or whenever they want to feel good. Remind them that their happiness is so precious that they shouldn’t let other people or situations take it away from them.
- Mood boosters – teach them how to boost their mood by practicing gratitude (e.g. "I am grateful for Sam the cat because he's fun."), kindness (e.g. donating old clothes and toys to the needy, watching over a little sister), and positive reflection (e.g. expressing gratitude during bedtime prayers).
- Teach tools for resiliency – teach your children to respond with acceptance, forgiveness, compassion and gratitude when faced with adversity. Explain why these are the best ways to respond, and help them to see the best in any situation.
- Acceptance – Encourage your children to be true to themselves by accepting and having confidence in who they really are inside. Encourage them to use their strengths and support activities that use and showcase those strengths.
- Trust and faith – Teach your children to respond with faith and trust when faced with the unknown. Encourage positivity and realistic optimism.
|
Arsenic is present in the environment as a naturally occurring substance or as a result of contamination from human activity. It is found in water, air, food, and soil in organic and inorganic forms.
The FDA has been measuring total arsenic concentrations in foods, including rice and juices, through its Total Diet Study program since 1991. The agency also monitors toxic elements, including arsenic, in a variety of domestic and imported foods under the Toxic Elements Program, with emphasis placed on foods that children are likely to eat or drink, such as juices.
- Questions & Answers on Arsenic
- Information on Arsenic in Specific Products
- Method for Measuring Arsenic, Cadmium, Chromium, Mercury and Lead in Foods
What is Arsenic?
Arsenic is a chemical element present in the environment from both natural and human sources, including erosion of arsenic-containing rocks, volcanic eruptions, contamination from mining and smelting ores, and previous or current use of arsenic-containing pesticides.
Are there different types of arsenic?
There are two types of arsenic compounds in water, food, air, and soil: organic and inorganic (these together are referred to as “total arsenic”). The inorganic forms of arsenic are the forms that have been associated with long term health effects. Because both forms of arsenic have been found in soil and ground water for many years, some arsenic may be found in certain food and beverage products, including rice, fruit juices and juice concentrates.
How does arsenic get into foods? Do all foods have arsenic?
Arsenic may be present in many foods including grains, fruits, and vegetables where it is present due to absorption through the soil and water. While most crops don’t readily take up much arsenic from the ground, rice is different because it takes up arsenic from soil and water more readily than other grains. In addition, some seafood has high levels of less toxic organic arsenic.
Do organic foods have less arsenic than non-organic foods?
Because arsenic is naturally found in the soil and water, it is absorbed by plants regardless of whether they are grown under conventional or organic farming practices.
What are the health risks associated with arsenic exposure?
Long-term exposure to high levels of arsenic is associated with higher rates of skin, bladder, and lung cancers, as well as heart disease. The FDA is currently examining these and other long-term effects.
Does the FDA test for arsenic in foods?
Yes, the FDA has been testing for total arsenic in a variety of foods, including rice and juices, through its Total Diet Study program since 1991. The agency also monitors toxic elements, including arsenic, in selected domestic and imported foods under the Toxic Elements Program, including those that children are likely to eat or drink, such as juices.
- Arsenic in Rice
- Arsenic in Apple Juice
- Arsenic in Pear Juice Analytical Results, 2005-2011 Updated February 14, 2012
- Hazard Assessment and Level of Concern - Pear Juice April 8, 2008
- Analysis of Foods for As, Cd, Cr, Hg and Pb by Inductively Coupled Plasma-Mass Spectrometry (ICP-MS); Current Method (PDF, 116KB)
|
Discover the cosmos! Each day a different image or photograph of our fascinating universe is featured, along with a brief explanation written by a professional astronomer.
2007 September 27
Explanation: The dark expanse below the equator of the Sun is a coronal hole -- a low density region extending above the surface where the solar magnetic field opens freely into interplanetary space. Shown in false color, the picture was recorded on September 19th in extreme ultraviolet light by the EIT instrument onboard the space-based SOHO observatory. Studied extensively from space since the 1960s in ultraviolet and x-ray light, coronal holes are known to be the source of the high-speed solar wind, atoms and electrons that flow outward along the open magnetic field lines. The solar wind streaming from this coronal hole triggered colorful auroral displays on planet Earth beginning late last week, enjoyed by spaceweather watchers at high latitudes.
Authors & editors:
Jerry Bonnell (UMCP)
NASA Official: Phillip Newman Specific rights apply.
A service of: ASD at NASA / GSFC
& Michigan Tech. U.
|
The processing of raw materials into usable forms is termed fabrication or conversion. An example from the plastics industry would be the conversion of plastic pellets into films or the conversion of films into food containers. In this section the mixing, forming, finishing, and fibre reinforcing of plastics are described in turn.
The first step in most plastic fabrication procedures is compounding, the mixing together of various raw materials in proportions according to a specific recipe. Most often the plastic resins are supplied to the fabricator as cylindrical pellets (several millimetres in diameter and length) or as flakes and powders. Other forms include viscous liquids, solutions, and suspensions.
Mixing liquids with other ingredients may be done in conventional stirred tanks, but certain operations demand special machinery. Dry blending refers to the mixing of dry ingredients prior to further use, as in mixtures of pigments, stabilizers, or reinforcements. However, polyvinyl chloride (PVC) as a porous powder can be combined with a liquid plasticizer in an agitated trough called a ribbon blender or in a tumbling container. This process also is called dry blending, because the liquid penetrates the pores of the resin, and the final mixture, containing as much as 50 percent plasticizer, is still a free-flowing powder that appears to be dry.
The workhorse mixer of the plastics and rubber industries is the internal mixer, in which heat and pressure are applied simultaneously. The Banbury mixer resembles a robust dough mixer in that two interrupted spiral rotors move in opposite directions at 30 to 40 rotations per minute. The shearing action is intense, and the power input can be as high as 1,200 kilowatts for a 250-kg (550-pound) batch of molten resin with finely divided pigment.
In some cases, mixing may be integrated with the extrusion or molding step, as in twin-screw extruders.
The process of forming plastics into various shapes typically involves the steps of melting, shaping, and solidifying. As an example, polyethylene pellets can be heated above Tm, placed in a mold under pressure, and cooled to below Tm in order to make the final product dimensionally stable. Thermoplastics in general are solidified by cooling below Tg or Tm. Thermosets are solidified by heating in order to carry out the chemical reactions necessary for network formation.
In extrusion, a melted polymer is forced through an orifice with a particular cross section (the die), and a continuous shape is formed with a constant cross section similar to that of the orifice. Although thermosets can be extruded and cross-linked by heating the extrudate, thermoplastics that are extruded and solidified by cooling are much more common. Among the products that can be produced by extrusion are film, sheet, tubing, pipes, insulation, and home siding. In each case the profile is determined by the die geometry, and solidification is by cooling.
Most plastic grocery bags and similar items are made by the continuous extrusion of tubing. In blow extrusion, the tube is expanded before being cooled by being made to flow around a massive air bubble. Air is prevented from escaping from the bubble by collapsing the film on the other side of the bubble. For some applications, laminated structures may be made by extruding more than one material at the same time through the same die or through multiple dies. Multilayer films are useful since the outer layers may contribute strength and moisture resistance while an inner layer may control oxygen permeability—an important factor in food packaging. The layered films may be formed through blow extrusion, or extrudates from three machines may be pressed together in a die block to form a three-layer flat sheet that is subsequently cooled by contact with a chilled roll.
The flow through a die in extrusion always results in some orientation of the polymer molecules. Orientation may be increased by drawing—that is, pulling on the extrudate in the direction of polymer flow or in some other direction either before or after partial solidification. In the blow extrusion process, polymer molecules are oriented around the circumference of the bag as well as along its length, resulting in a biaxially oriented structure that often has superior mechanical properties over the unoriented material.
In the simplest form of compression molding, a molding powder (or pellets, which are also sometimes called molding powder) is heated and at the same time compressed into a specific shape. In the case of a thermoset, the melting must be rapid, since a network starts to form immediately, and it is essential for the melt to fill the mold completely before solidification progresses to the point where flow stops. The highly cross-linked molded article can be removed without cooling the mold. Adding the next charge to the mold is facilitated by compressing the exact required amount of cold molding powder into a preformed “biscuit.” Also, the biscuit can be preheated by microwave energy to near the reaction temperature before it is placed in the mold cavity. A typical heater, superficially resembling a microwave oven, may apply as much as 10 kilovolts at a frequency of one megahertz. Commercial molding machines use high pressures and temperatures to shorten the cycle time for each molding. The molded article is pushed out of the cavity by the action of ejector pins, which operate automatically when the mold is opened.
In some cases, pushing the resin into the mold before it has liquefied may cause undue stresses on other parts. For example, metal inserts to be molded into a plastic electrical connector may be bent out of position. This problem is solved by transfer molding, in which the resin is liquefied in one chamber and then transferred to the mold cavity.
In one form of compression molding, a layer of reinforcing material may be laid down before the resin is introduced. The heat and pressure not only form the mass into the desired shape but also combine the reinforcement and resin into an intimately bound form. When flat plates are used as the mold, sheets of various materials can be molded together to form a laminated sheet. Ordinary plywood is an example of a thermoset-bound laminate. In plywood, layers of wood are both adhered to one another and impregnated by a thermoset such as urea-formaldehyde, which forms a network on heating.
It is usually slow and inefficient to mold thermoplastics using the compression molding techniques described above. In particular, it is necessary to cool a thermoplastic part before removing it from the mold, and this requires that the mass of metal making up the mold also be cooled and then reheated for each part. Injection molding is a method of overcoming this inefficiency. Injection molding resembles transfer molding in that the liquefying of the resin and the regulating of its flow is carried out in a part of the apparatus that remains hot, while the shaping and cooling is carried out in a part that remains cool. In a reciprocating screw injection molding machine, material flows under gravity from the hopper onto a turning screw. The mechanical energy supplied by the screw, together with auxiliary heaters, converts the resin into a molten state. At the same time the screw retracts toward the hopper end. When a sufficient amount of resin is melted, the screw moves forward, acting as a ram and forcing the polymer melt through a gate into the cooled mold. Once the plastic has solidified in the mold, the mold is unclamped and opened, and the part is pushed from the mold by automatic ejector pins. The mold is then closed and clamped, and the screw turns and retracts again to repeat the cycle of liquefying a new increment of resin. For small parts, cycles can be as rapid as several injections per minute.
Reaction injection molding
One type of network-forming thermoset, polyurethane, is molded into parts such as automobile bumpers and inside panels through a process known as reaction injection molding, or RIM. The two liquid precursors of a polyurethane are a multifunctional isocyanate and a prepolymer, a low-molecular-weight polyether or polyester bearing a multiplicity of reactive end-groups such as hydroxyl, amine, or amide. In the presence of a catalyst such as a tin soap, the two reactants rapidly form a network joined mainly by urethane groups. The reaction takes place so rapidly that the two precursors have to be combined in a special mixing head and immediately introduced into the mold. However, once in the mold, the product requires very little pressure to fill and conform to the mold—especially since a small amount of gas is evolved in the injection process, expanding the polymer volume and reducing resistance to flow. The low molding pressures allow relatively lightweight and inexpensive molds to be used, even when large items such as bumper assemblies or refrigerator doors are formed.
The popularity of thermoplastic containers for products previously marketed in glass is due in no small part to the development of blow molding. In this technique, a thermoplastic hollow tube, the parison, is formed by injection molding or extrusion. In heated form, the tube is sealed at one end and then blown up like a balloon. The expansion is carried out in a split mold with a cold surface; as the thermoplastic encounters the surface, it cools and becomes dimensionally stable. The parison itself can be programmed as it is formed with varying wall thickness along its length, so that, when it is expanded in the mold, the final wall thickness will be controlled at corners and other critical locations. In the process of expansion both in diameter and length (stretch blow molding), the polymer is biaxially oriented, resulting in enhanced strength and, in the case of polyethylene terephthalate (PET) particularly, enhanced crystallinity.
Blow molding has been employed to produce bottles of polyethylene, polypropylene, polystyrene, polycarbonate, PVC, and PET for domestic consumer products. It also has been used to produce fuel tanks for automobiles. In the case of a high-density-polyethylene tank, the blown article may be further treated with sulfur trioxide in order to improve the resistance to swelling or permeation by gasoline.
Casting and dipping
Not every forming process requires high pressures. If the material to be molded is already a stable liquid, simply pouring (casting) the liquid into a mold may suffice. Since the mold need not be massive, even the cyclical heating and cooling for a thermoplastic is efficiently done.
One example of a cast thermoplastic is a suspension of finely divided, low-porosity PVC particles in a plasticizer such as dioctyl phthalate (DOP). This suspension forms a free-flowing liquid (a plastisol) that is stable for months. However, if the suspension (for instance, 60 parts PVC and 40 parts plasticizer) is heated to 180 °C (356 °F) for five minutes, the PVC and plasticizer will form a homogeneous gel that will not separate into its components when cooled back to room temperature. A very realistic insect or fishing worm can be cast from a plastisol using inexpensive molds and a cycle requiring only minutes. In addition, when a mold in the shape of a hand is dipped into a plastisol and then removed, subsequent heating will produce a glove that can be stripped from the mold after cooling.
Thermoset materials can also be cast. For example, a mixture of polymer and multifunctional monomers with initiators can be poured into a heated mold. When polymerization is complete, the article can be removed from the mold. A transparent lens can be formed in this way using a diallyl diglycol carbonate monomer and a free-radical initiator.
In order to make a hollow article, a split mold can be partially filled with a plastisol or a finely divided polymer powder. Rotation of the mold while heating converts the liquid or fuses the powder into a continuous film on the interior surface of the mold. When the mold is cooled and opened, the hollow part can be removed. Among the articles produced in this manner are many toys such as balls and dolls.
Thermoforming and cold molding
When a sheet of thermoplastic is heated above its Tg or Tm, it may be capable of forming a free, flexible membrane as long as the molecular weight is high enough to support the stretching. In this heated state, the sheet can be pulled by vacuum into contact with the cold surface of a mold, where it cools to below Tg or Tm and becomes dimensionally stable in the shape of the mold. Cups for cold drinks are formed in this way from polystyrene or PET.
Vacuum forming is only one variation of sheet thermoforming. The blow molding of bottles described above differs from thermoforming only in that a tube rather than a sheet is the starting form.
Even without heating, some thermoplastics can be formed into new shapes by the application of sufficient pressure. This technique, called cold molding, has been used to make margarine cups and other refrigerated food containers from sheets of acrylonitrile-butadiene-styrene copolymer.
Foams, also called expanded plastics, possess inherent features that make them suitable for certain applications. For instance, the thermal conductivity of a foam is lower than that of the solid polymer. Also, a foamed polymer is more rigid than the solid polymer for any given weight of the material. Finally, compressive stresses usually cause foams to collapse while absorbing much energy, an obvious advantage in protective packaging. Properties such as these can be tailored to fit various applications by the choice of polymer and by the manner of foam formation or fabrication. The largest markets for foamed plastics are in home insulation (polystyrene, polyurethane, phenol formaldehyde) and in packaging, including various disposable food and drink containers.
Polystyrene pellets can be impregnated with isopentane at room temperature and modest pressure. When the pellets are heated, they can be made to fuse together at the same time that the isopentane evaporates, foaming the polystyrene and cooling the assembly at the same time. Usually the pellets are prefoamed to some extent before being put into a mold to form a cup or some form of rigid packaging. The isopentane-impregnated pellets may also be heated under pressure and extruded, in which case a continuous sheet of foamed polystyrene is obtained that can be shaped into packaging, dishes, or egg cartons while it is still warm.
Structural foams can also be produced by injecting nitrogen or some other gas into a molten thermoplastic such as polystyrene or polypropylene under pressure in an extruder. Foams produced in this manner are more dense than the ones described above, but they have excellent strength and rigidity, making them suitable for furniture and other architectural uses.
One way of making foams of a variety of thermoplastics is to incorporate a material that will decompose to generate a gas when heated. To be an effective blowing agent, the material should decompose at about the molding temperature of the plastic, decompose over a narrow temperature range, evolve a large volume of gas, and, of course, be safe to use. One commercial agent is azodicarbonamide, usually compounded with some other ingredients in order to modify the decomposition temperature and to aid in dispersion of the agent in the resin. One mole (116 grams) of azodicarbonamide generates about 39,000 cubic cm of nitrogen and other gases at 200 °C. Thus, 1 gram added to 100 grams of polyethylene can result in foam with a volume of more than 800 cubic cm. Polymers that can be foamed with blowing agents include polyethylene, polypropylene, polystyrene, polyamides, and plasticized PVC.
The rapid reaction of isocyanates with hydroxyl-bearing prepolymers to make polyurethanes is mentioned above in Reaction injection molding. These materials also can be foamed by incorporating a volatile liquid, which evaporates under the heat of reaction and foams the reactive mixture to a high degree. The rigidity of the network depends on the components chosen, especially the prepolymer.
Hydroxyl-terminated polyethers are often used to prepare flexible foams, which are used in furniture cushioning. Hydroxyl-terminated polyesters, on the other hand, are popular for making rigid foams such as those used in custom packaging of appliances. The good adhesion of polyurethanes to metallic surfaces has brought about some novel uses, such as filling and making rigid certain aircraft components (rudders and elevators, for example).
Another rigid thermoset that can be foamed in place is based on phenol-formaldehyde resins. The final stage of network formation is brought about by addition of an acid catalyst in the presence of a volatile liquid.
The term polymer-matrix composite is applied to a number of plastic-based materials in which several phases are present. It is often used to describe systems in which a continuous phase (the matrix) is polymeric and another phase (the reinforcement) has at least one long dimension. The major classes of composites include those made up of discrete layers (sandwich laminates) and those reinforced by fibrous mats, woven cloth, or long, continuous filaments of glass or other materials.
Plywood is a form of sandwich construction of natural wood fibres with plastics. The layers are easily distinguished and are both held together and impregnated with a thermosetting resin, usually urea formaldehyde. A decorative laminate can consist of a half-dozen layers of fibrous kraft paper (similar to paper used for grocery bags) together with a surface layer of paper with a printed design—the entire assembly being impregnated with a melamine-formaldehyde resin. For both plywood and the paper laminate, the cross-linking reaction is carried out with sheets of the material pressed and heated in large laminating presses.
Fibrous reinforcement in popular usage is almost synonymous with fibreglass, although other fibrous materials (carbon, boron, metals, aramid polymers) are also used. Glass fibre is supplied as mats of randomly oriented microfibrils, as woven cloth, and as continuous or discontinuous filaments.
Hand lay-up is a versatile method employed in the construction of large structures such as tanks, pools, and boat hulls. In hand lay-up mats of glass fibres are arranged over a mold and sprayed with a matrix-forming resin, such as a solution of unsaturated polyester (60 parts) in styrene monomer (40 parts) together with free-radical polymerization initiators. The mat can be supplied already impregnated with resin. Polymerization and network formation may require heating, although free-radical “redox” systems can initiate polymerization at room temperature. The molding may be compacted by covering the mold with a blanket and applying a vacuum between the blanket and the surface or, when the volume of production justifies it, by use of a matching metal mold.
Continuous multifilament yarns consist of strands with several hundred filaments, each of which is 5 to 20 micrometres in diameter. These are incorporated into a plastic matrix through a process known as filament winding, in which resin-impregnated strands are wound around a form called a mandrel and then coated with the matrix resin. When the matrix resin is converted into a network, the strength in the hoop direction is very great (being essentially that of the glass fibres). Epoxies are most often used as matrix resins, because of their good adhesion to glass fibres, although water resistance may not be as good as with the unsaturated polyesters.
A method for producing profiles (cross-sectional shapes) with continuous fibre reinforcement is pultrusion. As the name suggests, pultrusion resembles extrusion, except that the impregnated fibres are pulled through a die that defines the profile while being heated to form a dimensionally stable network.
|
Cooking is defined as energy being transferred from a heat source to a food item or items. What are the methods of heat transfer in cooking?
The methods of heat transfer in cooking are conduction, convection and radiation, or a combination of these. The energy being transferred is heat, and from these heat transfer methods we derive the cooking methods by which a food item can be prepared, such as sauteing, baking and broiling.
When heat is applied to a food item, the heat, as energy, makes the molecules of the item vibrate and expand. Through this movement they bounce off one another, distributing the energy, in this case heat. Here are brief explanations of how heat transfer applies to cooking food.
Conduction transfers heat from one item to another by direct contact. A grilled steak is an example of food prepared by conduction, where the heating source, the flame, touches the food directly. Because conduction requires contact between the heat source and the food item, it is a slower method of cooking. The molecular collisions described above are conduction in action: as the molecules of the food nearest the heat source heat up, they move more rapidly and pass heat energy from the outer area toward the interior. This is how a steak can be cooked thoroughly on the outside but still be rare in the interior.
Metals conduct heat very well, which is why cookware is made of metal, with aluminum and copper being the best conductors of all. Gases and liquids are poor conductors, so steaming an item will take longer than grilling or pan sauteing.
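To put rough numbers behind that claim, here is a small sketch comparing approximate thermal conductivities of common cookware metals with those of water and air or steam; the values are handbook-style approximations chosen for illustration, not figures from this article.

```python
# Approximate thermal conductivities in W/(m*K); rough handbook-style values
# listed only to illustrate why metals cook by contact so much faster than gases.
conductivities = {
    "copper": 401.0,
    "aluminum": 237.0,
    "stainless steel": 16.0,
    "water": 0.6,
    "air / steam": 0.025,
}

baseline = conductivities["air / steam"]
for material, k in sorted(conductivities.items(), key=lambda kv: -kv[1]):
    print(f"{material:16s} k = {k:7.3f}  (~{k / baseline:,.0f}x air/steam)")
```

A ratio on the order of ten thousand to one between copper and air is why direct contact with hot metal delivers heat into food far faster than surrounding it with hot gas.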
Convection is heat transfer through the use of a fluid as the conducting material. The fluid can be either liquid or gas and is circulated either by natural or mechanical methods.
The natural method uses the principle that heated gas or liquid rises while cooler gas or liquid falls, and the two movements combine to create circulation.
Mechanical methods are used in modern ovens and work through the use of fans. Ovens using convection cook items evenly and in less time than a conventional oven.
Radiation transfers heat through waves, which can be heat, light or radio waves. Microwave ovens use radio waves to heat certain molecules while leaving others unheated, which allows the food item to cook while the container remains cool. Infrared is another technology used to cook items through either heat or light.
These, then, are the three methods of heat transfer in cooking. They are used to deliver heat to a food item, while the distribution of heat within the food itself will always be by conduction. By understanding the types of heat transfer behind our cooking methods, we can better understand the subtle ways we can use this heat to produce different and unique meals.
|
PPT in Spanish to quickly introduce José Martí and other Latin American national heroes. I usually follow up with a reading of "versos sencillos" by José Martí, a Web Quest for José Martí and listening of the song "Guantanamera".
I like introducing students to the national heroes for Hispanic Heritage month. Not only are they identifying famous Hispanics but are also making a connection to Social Studies and Geography.
Afterwards, students research the internet for more national heroes and select one to create an informative slide show in Spanish that they will present to the class. This project earns them two grades: one for the project itself and another for an oral assessment in the target language.
The project rules are simple. They must answer the five W's + How? (Who? When? Where? Why? What?).
|
An international study published this week in the journal Global Change Biology suggests that increased ocean acidity is affecting the size and weight of shellfish and their skeletons, afflicting a wide variety of marine species.
According to Reuters, it has become increasingly difficult for clams, sea urchins and other shellfish to grow their shells, and the trend is likely to be felt most in polar regions.
Ocean acidification is driven in part by human emissions of greenhouse gases: carbon dioxide from burning fossil fuels dissolves in seawater and forms carbonic acid, making it harder for creatures to extract the calcium carbonate that forms their skeletons and shells.
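The mechanism described corresponds to standard carbonate-system chemistry, sketched below; these are textbook relations rather than equations taken from the study itself.

\[ \mathrm{CO_2 + H_2O \rightleftharpoons H_2CO_3 \rightleftharpoons H^+ + HCO_3^-} \]
\[ \mathrm{H^+ + CO_3^{2-} \rightleftharpoons HCO_3^-} \]

The extra hydrogen ions consume carbonate ions, lowering the calcium carbonate saturation of seawater, so shell-building organisms must work harder to precipitate and maintain their skeletons.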
According to the Mother Nature Network, the research team tested four types of shell-building marine animals — clams, sea snails, lampshells and sea urchins — living in 12 different environments from the tropics to the poles. In all of the shellfish tested, they found that as the availability of calcium carbonate decreases, skeletons become lighter and account for a smaller percentage of an animal's overall weight.
Professor Lloyd Peck of the BAS told Reuters TV, "Where it gets colder and the calcium carbonate is harder to get out of the seawater the animals have thinner skeletons."
The researchers said these results show that changing ocean acidity in the Arctic and Antarctic may foreshadow similar issues elsewhere. The experiences in the Arctic and Antarctic may also show if some species are able to adapt to the changes in time to survive.
In a statement Peck said, "Evolution has allowed shellfish to exist in these areas and, given enough time and a slow enough rate of change, evolution may again help these animals survive in our acidifying oceans."
More on the plight of the shellfish can be seen in this video:
|
Insect-plant interrelationships are amazingly complex and curious, likely the result of eons of co-evolution. Galls are abnormal growths caused by mites, fungi, and insects, the product of intimate and often species-specific associations.
Domatia are different. They are not 'abnormal'; they are tiny 'chambers' produced by plants that house arthropods such as mites and insects.
Common grape vine (Vitis rotundifolia) has domatia on the undersides of its leaves. This native grape vine grows from Delaware southward throughout the southeastern US and is well-adapted to warm and humid conditions. Unlike grapes of exotic origin, common grape vine is tolerant of native insect pests and diseases.
Domatia likely are an adaptation that plays a role in this tolerance. A paper published in the Florida Entomologist found that the number of arthropods living in domatia on common grape vine increased dramatically with the onset of the spring rainy season. Of these tiny occupants, mostly mites, 47% were fungivorous (fungus-eating) and nearly 8% were predatory, while less than 1% were herbivorous (plant-eating).
What a magnificent, mutualistic plant – insect relationship!
Common grape vine is common throughout the Oslo Riverfront Conservation Area. When this nature preserve was purchased, vigorous ‘Tarzan’ grape vines with woody trunks many inches thick were present in the hammock area, but these magnificent old vines were cut out by Indian River County Parks personnel. Common grape vine colonizes sunny, open areas as shown below by Sam (Bob Montanaro’s dog) posing in the scrubby pine flatwoods at south Oslo Riverfront Conservation Area.
|
The term language learning strategies in the literature refers to the methods adopted by people learning a second or a foreign language in order to acquire, integrate and consequently make better use of the target language. O'Malley & Chamot (1990) define learning strategies as thoughts or behaviours that individuals use to understand, learn or retain new information, while Oxford (1990) defines them as specific behaviours, actions, steps or techniques students use to improve their progress, as they develop specific skills during language learning.
Already in the 1970s, research in the field of language teaching had begun to investigate the profile of the person who learns a second/foreign language 'effectively', i.e. the methods and tactics to which he or she resorts, conventionally called 'the good learner', in order to create educational tools and methods that would help so-called 'weak' learners become more effective (Rubin 1975, Stern 1975).
During the 80s and 90s, research was largely influenced by the prevalence of both the communicative approach, which sought new teaching models, and the empirical findings of cognitive psychology. It focused even more on the study of learning strategy use in second/foreign language learning (language learning strategies, stratégies d'apprentissage; Cohen 1998, O'Malley & Chamot 1990, Oxford 1990) and sought the strategic profile of learners enrolled in different levels of education. In particular, there was an attempt to highlight empirically the role of the individual who learns a second/foreign language – a role which had been neglected for a long time – as well as the influence of the cognitive and emotional factors that affect the learning process.
Within this attempt, there were detailed studies on the effect of gender (Ehrman & Oxford 1989, Green & Oxford 1995, Lan & Oxford 2003, Lee 2003, Mochizuki 1999, Nyikos 1990, Oxford & Nyikos 1989, Peacock & Ho 2003, Politzer 1983, Sheorey 1999), of the target language (Chamot et al. 1987, Politzer 1983), of the language level (Chamot & El-Dinary 1999, Hong-Nam & Leavell 2006, Green & Oxford 1995, Griffiths 2003, Kantaridou 2004, Kazamia 2003, Lan & Oxford 2003, O'Malley & Chamot 1990, Purdie & Oliver 1999), of motivation (Gardner, Tremblay, and Masgoret 1997, Kantaridou 2004, Oxford & Nyikos 1989, Pintrich 1989, Pintrich & De Groot 1990, Psaltou-Joycey 2003, Wharton 2000), of the cultural background (O'Malley & Chamot 1990, Oxford 1996, Reid 1995, Psaltou-Joycey 2008, Rossi-Le 1995), of the teaching methods employed (Ehrman & Oxford 1989, Oxford & Nyikos 1989, Politzer 1983), and of the direction of studies (Mochizuki 1999, Oxford & Nyikos 1989, Peacock 2001, Peacock & Ho 2003, Politzer & McGroarty 1985) on the selection of specific strategies.
At the same time a series of methodological questions emerged, the most basic concerning the most suitable and valid instrument for data collection. In order to record learners' strategy use, researchers made use of interviews, questionnaires, diaries or classroom observations. Oxford (1990) designed the Strategy Inventory for Language Learning (SILL), which has since been widely used in many countries for the study of strategies (for an informative review of the literature on methodological issues in strategy research cf. Chamot 2005).
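As a rough illustration of how SILL-style data are usually handled, each respondent's 5-point Likert answers are averaged per strategy category and overall; the item groupings and the low/medium/high bands in the sketch below follow commonly used scoring conventions and are assumptions for illustration, not details taken from this text.

```python
# Minimal sketch of scoring a SILL-style questionnaire: average the 5-point
# Likert responses within each strategy category and overall.
# The category-to-item mapping and the low/medium/high bands are illustrative
# assumptions, not taken from the project description.
from statistics import mean

# Hypothetical answers (1 = never true of me ... 5 = always true of me)
responses = {
    "memory":        [3, 4, 2, 5, 3],
    "cognitive":     [4, 4, 5, 3, 4, 4],
    "compensation":  [2, 3, 3],
    "metacognitive": [5, 4, 4, 5],
    "affective":     [2, 2, 3],
    "social":        [4, 5, 3],
}

def usage_band(score: float) -> str:
    """Commonly used interpretation bands for SILL averages (assumed here)."""
    if score >= 3.5:
        return "high"
    if score >= 2.5:
        return "medium"
    return "low"

category_means = {cat: mean(items) for cat, items in responses.items()}
overall = mean(score for items in responses.values() for score in items)

for cat, score in category_means.items():
    print(f"{cat:13s} {score:.2f} ({usage_band(score)})")
print(f"{'overall':13s} {overall:.2f} ({usage_band(overall)})")
```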
The new trends in the study of learning strategies emerged during the first decade of the 2000s, primarily from the perspective of educational research and educational psychology. More specifically, Rubin (2001, 2005) replaced the term 'strategies' with the term 'self-management', meaning the learner's ability to use metacognitive strategies (self-regulation, planning, self-evaluation, etc.) and related knowledge (i.e. knowledge associated with strategy use, the particularities of specific language tasks, and awareness of personal abilities) in order to achieve effective learning. Boekaerts, Pintrich and Zeidner (2000), on the other hand, proposed the term 'self-regulation' by focusing on the process of learning (self-regulation) rather than on its product (strategy use). The aim in both cases is learner autonomy, in other words, for the student to become master of the learning process and manage the acquisition of knowledge in the best possible way.
Integration of strategy instruction into the curricula of language teaching programmes can play a special role, as such programmes aim at sensitizing learners towards strategy use, informing them about the value and purpose of each strategy, and then helping them practice the use of strategies in authentic communicative situations. Such practice ensures, on the one hand, the development of the learners' metacognitive ability and, on the other, transfer of strategies to other similar language tasks (Chamot & O'Malley 1987, Chamot et al. 1999, Nunan 1997, Oxford 1990, Oxford & Leaver 1996, Wenden 1986). However, research has shown that learner awareness cannot be achieved without prior language-teacher awareness and familiarization with strategy instruction (Wenden 1986).
Research on learning strategies in the Greek context is found mainly in the work of Papaefthymiou-Lytra (1987), Psaltou-Joycey and Joycey (2001), Psaltou-Joycey (2003), Kazamia (2003), Gavriilidou (2004), Gavriilidou (2006), Psaltou-Joycey (2008), Papanis (2008), Gavriilidou and Papanis (2009), Gavriilidou and Papanis (2010), Psaltou-Joycey and Kantaridou (2009a), Psaltou-Joycey and Kantaridou (2009b), Psaltou-Joycey and Sougari (2010), and Vrettou (2011). Two more publications have contributed to the rising interest in the field: (a) the collective volume of selected papers edited by Psaltou-Joycey & Gavriilidou (2009), which resulted from the proceedings of a workshop on learning strategies organized during the 19th International Symposium of Theoretical and Applied Linguistics by the Department of Theoretical and Applied Linguistics, School of English, Aristotle University of Thessaloniki. During that workshop, the need for a standardized instrument for data collection became apparent, as well as the need to design educational programmes for language instruction with the use of strategies; and (b) Psaltou-Joycey's book entitled "Language Learning Strategies in the Foreign Language Classroom" (2010), which covers a wide range of topics related to language learning strategy research and classroom instruction.
The present research project aspires to promote theoretical research into learning strategies which could have applications in education. More specifically, it aims to adapt and standardize in Greek and Turkish the widely used instrument STRATEGY INVENTORY FOR LANGUAGE LEARNING (SILL), which evaluates the use of learning strategies by learners of a second or foreign language. Its adaptation and standardization are considered essential so that, henceforth, data collection on learner use of strategies will be conducted in a valid and reliable manner and the resulting outcomes will be comparable in all cases.
The adjusted SILL will be used in the second phase of this proposal in order to investigate:
(a) the learning profile of foreign language learners in primary and secondary education, i.e. all the cognitive, metacognitive, memory, compensation, affective and social strategies that these students use when learning a foreign language;
(b) the learning profile of Muslim students in Thrace, who learn Greek as a second language, in order to enhance this specific learner group's learning of Greek.
The expected results may be used as feedback for the designing of language learning curricula as well as for teacher training programmes in order to sensitize teachers towards classroom strategy instruction.
An additional aim of the present research is the comparison of strategies promoted in class by teachers in primary, secondary and minority education with those used by learners to determine the possible effect of strategy use by teachers on the type and number of strategies that these learners resort to during the learning process. The ultimate goal of this aspect of the research will be a more effective model to help teachers teach their learners how to learn.
To achieve these goals, four research teams cooperate in the present research project, each with a (to a certain degree) distinct field of study: the 1st research team investigates the strategic profile of Muslim students in primary education learning Greek as a second language and the strategic profile of teachers in minority education; the 2nd research team, the strategic profile of students in primary education learning English as a foreign language and the strategic profile of teachers in primary education; the 3rd research team, the strategic profile of students in secondary education learning English as a foreign language and the strategic profile of teachers in secondary education; and the 4th research team handles the adaptation of the SILL in Greek and Turkish and the construction and validation of a questionnaire for investigating teachers' strategic profiles. Twenty-one external partners/experts are each connected to one of the research teams.
The overall coordination of the project is undertaken by the coordinator, who defines the frame of activities of each research team leader, ensures communication and uninterrupted cooperation among the four teams, monitors the research deadlines and, in the final stage, coordinates the four teams in order to produce the final output through the synthesis of the results of each research team. Team leaders coordinate the members of their team and their external partners/experts, define the responsibilities of each member and external partner, and safeguard the research process in relation to the time schedule and their deliverables. Team leaders meet regularly with each other and with the external partners/experts.
The expected results include
1) The elaboration of the Greek version of the SILL;
2) The elaboration of the Turkish version of the SILL;
3) The elaboration of a standardized questionnaire to trace teacher-used strategies;
4) The strategic profile of Muslim students in primary education learning Greek as a second language and the strategic profile of teachers in minority education;
5) The strategic profile of students in primary education learning English as a foreign language and the strategic profile of teachers in primary education;
6) The strategic profile of students in secondary education learning English as a foreign language and the strategic profile of teachers in secondary education;
7) The presentation of the results at conferences and in journal publications;
8) A handbook based on the outcomes of the present research, to be published for the use of educational institutions such as the Pedagogical Institute and the wider educational community, with suggestions for improving second/foreign language teaching.
The progress expected to be achieved through the proposed research consists in the following main points:
- At a methodological level, the adaptation of a valid and reliable data collection tool in the Greek language will enable all researchers involved in language learning strategies research in Greece to collect data in a uniform way and, moreover, to reach comparable results. Such a practice is not possible today, as data collection procedures are accomplished via different research protocols. At the same time massive data collection in a uniform way is bound to lead to the evaluation of the strategic profile of the pupils attending primary, secondary and/or minority education.
- At a language teaching level, investigating strategy use by teachers in the classroom environment is expected to reveal the teaching practices exerted during the process of language teaching. On the other hand, the proposed evaluation of the strategic profile of primary and secondary education pupils is expected to serve as the background for the design of language teaching programs that will make pupils autonomous and self-dependent throughout the learning process, which contributes to more effective and rapid learning. This becomes even more important within the framework of the EC, whose aim is mutual understanding between different cultures and intercultural contact, to be achieved through multilingualism. That is why the EC has set the ambitious, nevertheless realistic, 'mother tongue plus two' target.
- At a theoretical level, the study of the pupils' strategic profile is expected to contribute towards a better comprehension of the language learning process.
From this research great benefits are expected to emerge regarding:
a) The language proficiency of learners in general education. The implementation of the program for students of elementary and secondary education will contribute to effective foreign language learning. This will promote multilingualism in our country, which will enhance the communication of Greek speakers with other European partners and will provide Greek citizens with the opportunity to seek better jobs at home or abroad.
b) The functional and effective learning of the Greek language by minority students. In the region of Thrace, the unemployment rate is very high. Muslims hold, almost exclusively, low-paid jobs which require neither expertise nor any special training. The implementation of the proposal aims at helping Muslim students achieve a higher level of knowledge of Greek, which will result in higher rates of those students entering the Universities, more choices of better paid jobs, and better communication with the majority, all of which will ensure better social integration of this group.
c) The overall language proficiency of minority students.
d) Training of teachers of foreign languages at schools and of those who teach Greek as a second language in minority schools.
Consequently, the immediately benefited are:
a) primary and secondary education pupils learning a foreign language at school,
b) Muslim pupils attending minority education learning Greek as a Second Language,
c) foreign language teachers,
d) teachers involved in teaching Greek as a Second Language,
e) educationalists specializing in curriculum design focusing on foreign/second language teaching. The evaluation of the strategic profile of primary, secondary and minority pupils will provide curriculum design specialists with empirical data which will enable the creation of effective programs for the strategic teaching of language. Thus, the multilingualism of Greek learners will increase, fulfilling in this way the 'mother tongue plus two' target set by the EC; on the other hand, Muslim pupils will learn Greek in a strategic way which will increase their communicative competence.
The indirectly benefited are Greek researchers investigating learning strategies, who will gain a valid and reliable data collection instrument.
The use of self-report instruments for the investigation and diagnosis of various aspects of learner individual characteristics and the differences that emerge among different groups is a common research practice in the field of second/foreign language acquisition. The adaptation of original, prototypical instruments is also another common practice, whenever these instruments are to be used in a different linguistic and/or sociocultural environment.
The current research proposal suggests the adaptation into Greek and Turkish of a valid and reliable instrument, such as the Strategy Inventory for Language Learning (SILL) (Oxford 1990), for the collection of data, in order to determine the language learning strategies used by primary and secondary school students learning a foreign language as well as those used by Muslim students learning Greek as a second language. (For an extensive literature review concerning research on strategies, see Chamot 2005).
Parallel to that, another aspect that will be studied is the degree to which strategies are promoted by teachers in their classroom practice, so that it can be deduced whether the use of strategies by teachers during the learning procedure in the classroom contributes to an increased use of strategies by the learners. This research will serve as the theoretical background for strategy training to be introduced smoothly into the language teaching curriculum for primary and secondary education at a later stage.
To be more specific, during the first stage of the research proposal (duration 12 months: 1-7-2011 to 30-6-2012), it is expected that the SILL (Oxford 1990), a widely used data collection instrument in learning strategies research, will be adapted into both Greek and Turkish. Research teams 1, 2 and 4 participate in this phase. The participation in this phase of the invited investigator from the University of Colorado (U.S.A.), Achilles Bardos (member of research team 4), who specializes in research methods and applied statistics, educational assessment and measurement, and program evaluation, and of the Professor of the University of Swansea, James Milton (member of research team 2), a world-renowned specialist in language teaching and the founder of the Center of Applied Language Studies, guarantees the validity and reliability of the experimental procedure.
The adaptation will include two phases:
a) the translation and cultural adaptation and
b) the testing of the psychometric properties of the SILL.
The translation phase
The translation phase will include five stages: forward translation into Greek and Turkish by two translators for each language; resolution of the discrepancies between the pair of translations for each language; back translation into the original language; revision of the translation by an expert committee; and pretesting. More precisely, the SILL will first be translated into Greek and Turkish by two translators for each language, each translator being a native speaker of the target language (Greek or Turkish). The pair of translations for each language will be compared in order to resolve discrepancies that may reflect ambiguous wording in the original test or problems in the translation process. The translated texts will then be translated back into the original language, i.e. English, by bilingual English native speakers, in order to verify that each item of the translated SILL conveys a meaning equivalent to the English original. This is a validity check that ensures the translated version of the SILL reflects the same item content as the original version. An expert committee will consolidate all the previous translation versions and develop the prefinal version of the SILL translations, making decisions in four areas: semantic equivalence, idiomatic equivalence, cultural equivalence and conceptual equivalence. Finally, the prefinal version will be administered to a small number of pupils for each language. The pupils will fill in the questionnaire and will then be interviewed to probe what they thought each questionnaire item meant and why they gave their response. This pilot study will investigate the linguistic efficiency and accuracy of the translation of the original version into Greek and Turkish.
The phase of testing the psychometric properties of the SILL
Once the most suitable sampling procedure for the research method is selected, the Greek version of the SILL will be administered to a random and representative sample of the Greek student population. The administration will be conducted at national level throughout Greece, in accordance with the aforementioned sampling procedure.
Following the coding and entry of the data, the main work of adaptation will take place, concerning, on the one hand, the piloting and, on the other, the final level of data analysis, which is the main target.
The adaptation of such instruments requires a factor analysis of their structure, which will take place at two levels, exploratory and confirmatory, so that the best possible and most reliable factorial solution for the Greek context can be decided upon.
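As an illustration only (not part of the proposed methodology), the exploratory step could be sketched as follows in Python; the file name, the six-factor target (following Oxford's 1990 grouping of strategies) and the choice of estimator and rotation are assumptions made for the sake of the example.

```python
# Illustrative sketch of an exploratory factor analysis on adapted SILL data.
# Assumptions: responses are stored in "sill_greek.csv" (hypothetical name),
# one row per pupil and one Likert-scaled column per item; six factors are
# extracted to mirror Oxford's (1990) strategy categories.
import pandas as pd
from sklearn.decomposition import FactorAnalysis

items = pd.read_csv("sill_greek.csv")
fa = FactorAnalysis(n_components=6, rotation="varimax", random_state=0)
fa.fit(items.to_numpy(dtype=float))

# Inspect which items load on which factor before moving to the confirmatory step.
loadings = pd.DataFrame(
    fa.components_.T,
    index=items.columns,
    columns=[f"Factor {k + 1}" for k in range(6)],
)
print(loadings.round(2))
```

The confirmatory step would then test whether the retained factor structure holds in an independent subsample, typically with dedicated structural equation modelling software.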
Similar procedures will be conducted for Turkish, but with a more limited sample and a more restricted geographical area, owing to the concentration of the specific target population in a small area.
The new instruments in Greek and Turkish should retain both the item-level characteristics, such as item-to-scale correlations and internal consistency, and the score-level characteristics of reliability, construct validity and responsiveness.
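Purely as a sketch of the item-level checks mentioned above (the file name and data layout are assumptions, not specified in the proposal), internal consistency and corrected item-to-scale correlations could be computed as follows:

```python
# Sketch of item-level reliability statistics for an adapted SILL data set.
# Assumption: "sill_greek.csv" (hypothetical) holds one row per pupil and one
# numeric column per questionnaire item.
import numpy as np
import pandas as pd

items = pd.read_csv("sill_greek.csv")
X = items.to_numpy(dtype=float)
k = X.shape[1]

# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of total score)
alpha = k / (k - 1) * (1 - X.var(axis=0, ddof=1).sum() / X.sum(axis=1).var(ddof=1))
print(f"Cronbach's alpha: {alpha:.3f}")

# Corrected item-to-scale correlation: each item against the sum of the remaining items
for j, name in enumerate(items.columns):
    rest = np.delete(X, j, axis=1).sum(axis=1)
    r = np.corrcoef(X[:, j], rest)[0, 1]
    print(f"{name}: corrected item-total r = {r:.2f}")
```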
The deliverable of the specific work package will be the final technical report regarding the adaptation of the instrument in Greek and in Turkish, which will include all the relevant sampling and statistical procedures that have been applied.
The adaptation of the SILL in Greek will give Greek researchers who are interested in learning strategy research the opportunity to use a valid and reliable instrument, which will allow them to collect comparable data, something that has not been possible till now.
Upon completion of the SILL adaptation procedure, research teams 1, 2 and 3 will work in parallel and will embark on the collection of data for the determination of the profiles of the language learning strategies used by primary and secondary students who learn a foreign language and by the Muslim students of Thrace who learn Greek as a second language (duration 15 months: 1-7-2012 to 30-9-2013).
The first research team will focus on the study and definition of the language learning strategies used by Muslim students in Muslim minority primary schools who learn Greek as an L2. The members of that team specialize in the study of language learning strategies and also have contacts with the local Muslim community, since they teach at the Democritus University of Thrace and have participated in education programs designed for the Muslim minority. The second research team will focus on the study and definition of the language learning strategies used by primary school students in Greek schools who learn English as a foreign language; its members have wide and long experience in issues of early and primary school language learning. Finally, the third research team will focus on the study and definition of the language learning strategies used by secondary school students in Greek schools who learn English as a foreign language, since its members have wider experience with children of that age.
More specifically the following will be studied:
- The overall strategies that learners report they use in class
- The type of strategies (cognitive, metacognitive, social, affective, etc.) they use
- The impact of gender, age, school location, language proficiency, and motivation on the strategies that learners use.
The determination of the learners' strategy profile is considered worthwhile because it reveals the attitudes and preferences of the sample during the learning process. This is a prerequisite for the planning of programmes relevant to the strategic use of the language.
In surveying the learners' profile of language learning strategies, teachers themselves constitute a crucial factor, since it is they who support and guide such efforts by their learners. A central issue for such a role is the teaching strategies they use in their classroom practice in order to be more effective as teachers. Thus, after the collection of data for the designation of the learners' strategy profile, the research will focus on how teachers cultivate various types of strategies in their teaching, as well as on how the use of strategies implemented by teachers during the lesson affects the use of strategies by their students. All four scientific teams participate in this phase (duration 18 months: 1-7-2013 to 31-12-2014).
To be more specific, the research will focus on the following:
- The set of strategies that teachers report they employ.
- The type of strategies (cognitive, metacognitive, memory, social, affective, etc.) used.
- The impact of factors such as age, gender and the location of the school on the use of strategies.
- Any correlation between strategies used by teachers and those used by learners.
As far as the strategies being used and cultivated by the teachers in the classroom are concerned, a specific questionnaire will be designed, based on related previous instruments, which will be administered to a sample of foreign language teachers.
The questionnaire will include scaled questions and will be administered in two phases:
(a) during the first phase, the questionnaire will be put together and then the pilot questionnaire will be administered to a limited number of teachers, and
(b) during the second phase, the questionnaire will be administered to a larger number of foreign language teachers (i.e. this will use random sampling).
The validity of the research instrument will be assessed by a panel of experts who will evaluate the appropriacy of the topics/suggestions used. The reliability of the research instrument will be assessed in two widely used ways: factor analysis and cross-checks of internal consistency. In the final phase, the relationship between the two research instruments, the students' and the teachers' questionnaires, will be assessed.
Based on the results derived from the designation of the learning strategy profiles of primary, secondary and Muslim students, as well as of the teachers, a book serving as a manual will be compiled for the improvement of the teaching techniques used in teaching a foreign/second language.
Gavriilidou, Z. (2004). Use of strategies for learning Greek as a second/foreign language: a pilot study. Proceedings of the 6th International Conference of Greek Linguistics, Rethymno. Available at http://www.philology.uoc.gr/conferences/6thICGL/gr.htm
Gavriilidou, Z. (2006). Learning strategies for Greek as an L2 used by adult Muslim learners. Nea Paideia 123, pp. 103-117.
Chamot, U. (2005). Language learning strategy instruction: current issues and research. Annual Review of Applied Linguistics 25: 112-130.
Chamot, U. and M. O'Malley (1987). The cognitive academic language learning approach: A bridge to the mainstream. TESOL Quarterly 21 (3): 227-249.
Chamot, A. U., O'Malley, J. M., Kupper, L. & Impink-Hernandez, M. V. (1987). A study of learning strategies in foreign language instruction: First year report. Rosslyn: Inter America Research Associates.
Chamot, U. and P.B. El-Dinary (1999). Children's learning strategies in immersion classrooms. Modern Language Journal 83 (3): 319-341.
Chamot U., S. Barnhardt, P.B. El Dinary and J.Robbins (1999). The learning strategies handbook. New York: Longman.
Ehrman M. & Oxford R. (1988). Effects of sex differences, career choice, and psychological type on adult language learning strategies. The Modern Language Journal, 72, iii, 253-265.
Gardner, R.C., P.F. Tremblay, and A-M. Masgoret (1997). Towards a full model of second language learning: An empirical investigation. The Modern Language Journal, 81 (93): 344-362.
Gavriilidou, Z., Papanis, A. (2009). The effect of strategy instruction on strategy use by Muslim pupils learning English as a foreign language. Journal of Applied Linguistics 25: 47-63.
Gavriilidou, Z., Papanis, A. (2010). A preliminary study of learning strategies in foreign language instruction: students' beliefs about strategy use. In Psaltou-Joycey, A. & M. Matthaioudaki (eds), Advances in research on language learning and teaching: Selected Papers. Thessaloniki: Greek Association of Applied Linguistics, Volume No 10.
Gavriilidou, Z., Psaltou-Joycey A. (2009), Language learning strategies: an overview, Journal of Applied Linguistics 25: 11-25.
Green J.M. & Oxford R., (1995). A closer look at learning strategies, L2 proficiency and gender. TESOL Quarterly. 29, 261-297.
Griffiths, C. (2003). Patterns of language learning strategy use. System, 31(3): 367-383.
Hong-Nam, K. and A.G. Leavell (2006). Language learning strategy use of ESL students in an intensive English learning context. System, 34 (3): 399-415.
Kantaridou, Z. (2004). Motivation and involvement in learning English for academic purposes. Unpublished PhD thesis. Department of Theoretical and Applied Linguistics, School of English, Aristotle University of Thessaloniki.
Kazamia, V. (2003). Language Learning Strategies of Greek Adult Learners of English. Unpublished PhD thesis, University of Leeds, UK.
Lan, R., and R.L. Oxford (2003). Language learning strategy profiles of elementary school students in Taiwan. IRAL 41: 339-379.
Lee, K.O. (2003). The relationship of school year, sex, and proficiency on the use of learning strategies in learning English of Korean junior high school students. Asian EFL Journal 5 (3): 1-36.
Mochizuki, A. (1999) Language learning strategies used by Japanese university students. RELC Journal, 30
Nyikos, M. (1990). Sex-related differences in adult language learning: Socialisation and memory factors. The Modern Language Journal 74 (3): 273-287.
Nunan, D. (1997). Does learner strategy training make a difference? Lenguas Modernas 24: 123-142.
O'Malley, J. & Chamot, A. (1990). Learning Strategies in Second Language Acquisition. Cambridge University Press.
Oxford R. (1990). Language Learning Strategies: What every teacher should know. New York: Newbury House Publishers.
Oxford, R.L. (1996). Language learning strategies around the world: Cross-cultural perspectives. University of Hawaii at Mānoa.
Oxford, R.L. and M. Nyikos (1989). Variables affecting choice of language learning strategies by university students. The Modern Language Journal, 73 (3): 291-300.
Papaefthymiou-Lytra S. (1987). Communicating and learning strategies in English foreign language with particular reference to Greek learners of English. PhD Dissertation. University of Athens.
Peacock, M. (2001). Match or mismatch? Learning styles and teaching styles in EFL. International Journal of Applied Linguistics 11 (1): 1-19.
Peacock, M., and B. Ho (2003). Student language learning strategies across eight disciplines. International Journal of Applied Linguistics, 13(2): 179-198.
Pintrich, P. (1989). The dynamic interplay of student motivation and cognition in the college classroom. In
M. Maehr and C. Ames (eds), Advances in motivation and achievement: Motivation enhancing environments, Vol. 6. Greenwich, CT: JAI Press, 117-160.
Pintrich, P.R. and E.V. De Groot (1990). Motivational and self-regulating learning components of classroom academic performance. Journal of Educational Psychology 82: 33-40.
Politzer, R. (1983). An explanatory study of self-reported language learning behaviors and their relation to achievement. Studies in Second Language Acquisition 6 (1): 54-65.
Politzer, R. and M. McGroarty (1985). An explanatory study of learning behaviors and their relationship to gains in linguistic and communicative competence. TESOL Quarterly 19 (1): 103-124.
Psaltou-Joycey, A. (2010) Language Learning Strategies in the Language Learning Classroom. Thessaloniki: University Studio Press.
Psaltou-Joycey, A. & Joycey, E. (2001). The Effects of Strategy Instruction on Developing Speaking Skills. Proceedings of the 12th International Conference of the Greek Applied Linguistics Association. Thessaloniki, 425-437.
Psaltou-Joycey, A. (2003). Strategy use by Greek university students of English. In E. Mela-Athanasopoulou (ed.), Selected Papers on Theoretical and Applied Linguistics of the 13th International Symposium of Theoretical and Applied Linguistics, School of English, Aristotle University of Thessaloniki, 591-601.
Psaltou-Joycey, A. (2008). Cross-cultural differences in the use of learning strategies by students of Greek as a second language. Journal of Multilingual and Multicultural Development 29 (3): 310-324.
Psaltou-Joycey, A. & Gavriilidou, Z. (eds) (2009). Journal of Applied Linguistics 25, Special Issue. Thessaloniki: Greek Applied Linguistics Association.
Psaltou-Joycey, A. & Kantaridou, Z. (2009a). Plurilingualism, language learning strategy use and learning style preferences. International Journal of Multilingualism 6 (4): 460-474
Psaltou-Joycey, A. & Kantaridou, Z. (2009b). Foreign language learning strategy profiles of university students in Greece. Journal of Applied Linguistics 25: 107-127.
Psaltou-Joycey, A. & Sougari, A-M. (2010). Greek young learners' perceptions about foreign language learning and teaching. In A. Psaltou-Joycey and M. Matthaioudakis (eds), Advances in research on language acquisition and teaching: Selected Papers (pp. 387-401). Thessaloniki: Greek Applied Linguistics Association.
Purdie, N. and R. Oliver (1999). Language learning strategies used by bilingual school-aged children. System 27: 375-388.
Reid, J.M. (1995). Learning styles in the ESL/EFL classroom. Boston: Heinle & Heinle, 118-125
Rossi-Le, L. (1995). Learning styles and strategies in adult immigrant ESL students. In J.M. Reid (ed.), Learning styles in the ESL/EFL classroom Boston, Massachusetts: Heinle & Heinle, 118-125.
Rubin J. (1975). What the “good language learner” can teach us. TESOL Quarterly. 9, 41-51.
Rubin J. (2001). Language learner self-management. Journal of Asian Pacific Communication, 11 (1), 25-37.
Rubin, J. (2005). The expert language learner: A review of good language learner studies and learner strategies. In K. Johnson (ed.), Expertise in Second Language Learning and Teaching. Basingstoke: Palgrave Macmillan.
Sheorey, R. 1999. An examination of language learning strategy use in the setting of an indigenized variety of English. System, 27 (1): 173-190.
Stern H. H. (1983). Fundamental concepts of language teaching. Oxford: Oxford University Press.
Vrettou, A. (2011). Patterns of language learning strategy use by Greek-speaking young learners of English. Unpublished PhD thesis, Department of Theoretical and Applied Linguistics, School of English, Aristotle University of Thessaloniki.
Wenden, A. (1986). Incorporating learner training in the classroom. System 14 (3): 315-325.
Wharton, G. 2000. Language learning strategy use of bilingual foreign language learners in Singapore. Language Learning, 50 (2): 203-243.
|
Have you ever been driving down the highway and seen smoke billowing over the top of a hill and wondered what in the world was going on? You think to yourself, “Maybe I should call 911 to report the fire.” But as you approach the smoke, your thoughts about the situation change because it’s a grass fire, there are people watching it burn and it looks under control. Today’s blog post will help you understand a little more why we use this practice called controlled burn.
Controlled or “prescribed burns” have been used for many, many years. They have a long history in land management. Pre-agricultural societies used fire to regulate both plant and animal life. Fire history studies have documented periodic land fires ignited by indigenous peoples in North America and Australia. Since 1995, the US Forest Service has slowly incorporated burning practices into its forest management policies.
In our area, we use controlled burn on our rangeland for several reasons. Controlled burns are used most frequently to maintain and restore native grasslands. The burning can recycle nutrients tied up in old plant growth. We also use it to control many woody plants, trees and herbaceous weeds. Burning improves poor quality forage, increases plant growth and improves certain wildlife habitat. To achieve the above benefits, fire must be used under very specific conditions, using very specific techniques.
Controlled burning is usually overseen by fire control authorities for regulations and permits. When we decide to burn a pasture, we are responsible for obtaining a burn permit, and we must state the intended time and place. Controlled burning is typically conducted during the cooler months to reduce fuel buildup and decrease the likelihood of serious, hotter fires. In our area of Nebraska, controlled burning usually takes place from February to late April.
We begin our controlled burn by back burning. Back burning is a way of reducing the amount of flammable material during a range fire by starting small fires along a man-made or natural firebreak in front of a main fire front. It is called back burning because the small fires are designed to 'burn back towards the main fire front'. The basic reason for back burning is that there is little material left to burn when the main fire reaches the burnt area. This is a way for the fire to burn out. The firebreak might be a river, a road, a bulldozed clearing or a tilled strip.
We don’t burn every pasture every year, but use a rotation. We are done burning for the year. The grass grew really fast this spring, so all the green makes it difficult to burn. The new grass in the burned pasture begins growing back days after we burn. If we get some rain, we can turn cows out in it 4 to 5 weeks after we burn. The cows love the lush grass!
What are you going to do the next time you are driving down the highway with a friend and see a controlled burn? You can inform him or her of why farmers and ranchers use this practice to improve their rangeland!
|
Bonobos voluntarily share food and will even forego their own meals for a stranger, but only if the recipient offers them social interaction, according to research published January 2 by Jingzhi Tan and Brian Hare of Duke University.
In a series of experiments, the researchers found that bonobos would voluntarily forego their food and offer it to a stranger in exchange for social interaction. The authors found that the bonobos' behavior was at least partially driven by unselfish motivations, since the animals helped strangers acquire food that was out of reach even when no social interaction was possible as a result of helping them. However, their generosity had its limits: Animals would not share food in their possession if no social interaction was possible.
Though the study subjects were all bonobos that had been orphaned by the bushmeat trade in Congo, they showed no significant psychological differences from bonobos that had been raised by their mothers. According to the authors, their results reveal the evolution of generosity in these apes, our closest living relatives. They suggest that the behavior may have evolved to allow for the expansion of individual social networks.
Lead author Tan adds, "Our results show that generosity toward strangers is not unique to humans. Like chimpanzees, our species would kill strangers; like bonobos, we could also be very nice to strangers. Our results highlight the importance of studying bonobos to fully understand the origins of such human behaviors."
|
What are the 3 functions of the Respiratory System?
- 1. Inhaling Oxygen
- 2. Exhaling Carbon Dioxide and water
- 3. Breathing movement of gases in/out of lungs
II. Path of the air
A. Air goes through .
3. Blood Vessels
- 1. Mucus- moistens air and traps particles
- 2. Cilia- tiny hairs that sweep mucus to the throat
- 3. Blood vessels- warms the air
c. Enters the Trachea
2. The walls of the , are made up of
- 1. Epiglottis- seals off your Trachea when you swallow.
- 2. Trachea
- a. made of rings of cartilage
- b. lined with cilia and mucus
Air enters the right and left which directs air into the lungs.
Air enters the bronchi and goes to the .
- 1. Bronchi
- 2. Small, Alveoli
- Alveoli- tiny sacs of lung tissue where gas exchange with the capillaries occurs.
- Oxygen- goes into the blood through the capillary wall and CO2 goes the opposite way.
Diaphragm- dome shaped muscle at the base of the lungs.
- 1. Diaphragm contracts and moves down. Ribs move outward.
- 2. Diaphragm relaxes and moves upward and ribs move inward, pushing air out.
1. Vocal Cords-
2. High Voice-
3. Low Voice-
Larynx- located on top of the trachea
- 1. 2 folds of connective tissue stretched across the larynx.
- 2. vocal cords contract and shorten.
- 3. vocal cords are longer and relaxed.
I. Jobs of the Nerv. System
Receive info from .
Tells your body to .
Helps maintain .
- 1. signal that makes you react.
- 2. your reaction
- 3. Homeostasis
3 types of Neurons?
- 1. Nerve cells
- 2. message that neurons carry
- 3. Sensory, Inter, and Motor
- 1. picks up the stimulus from the environment and changes it into a nerve impulse.
- 2. carries impulse from neuron to neuron.
- 3. sends impulse to the muscle.
Structure of Neuron:
4. Axon Tips-
- 1. Cell body with a nucleus
- 2. extensions on the cell
- 3. carries impulse away from cell body
- 4. releases chemicals that travel across the synapse
- 5. The chemicals then go to the dendrites.
1.Central Nervous System-
3.What are the 3 main regions of the Brain?
4. Spinal Cord-
- 1. The control center of your body
- 2. Controls most functions in the body.
- 3. Cerebrum, Cerebellum, Brainstem
- 4. Thick column of nerve tissue that links the brain to the peripheral nervous system.
II. Jobs of the Brain
A. Jobs of the Cerebrum:( part of the brain)
- 1. Interpret info from senses.
- 2. Controls the movement of skeletal muscles.
- 3. Carries out complex mental processes: judgement calls, remembering, learning.
Left half of the brain controls the side of the body.
Right half of the brain controls the side of the body.
- 1. Right
- a. Math Skills, logic, writing, speech
- 2. Left
- b. Creative side, artistic ability
Job of the Cerebellum: (the second largest part of the brain, located below the cerebrum)
- 1. Coordinates your muscles
- 2. Helps you keep your balance
Job of the Brainstem: (located between and the .)
- Cerebellum and Spinal Cord.
- 1. Involuntary activities; digesting, heart beat, breathing, bladder.
Peripheral Nervous System:
A. Made of:
What are the 2 groups of peripheral nervous systems?
- A. nerves that connect CNS to the body.
- 1. Somatic Nervous System, and Autonomic Nervous system.
- 1. controls voluntary actions
- a. tying shoes, smiling, frowning
- 2. controls involuntary actions
- a. changes diameter of blood vessels, breathing, digesting
- 1. automatic response that's rapid and without conscious control.
- 2. bruise-like injury to the brain, occurring when the cerebrum hits the skull.
- 3. loss of movement in part of the body due to spinal cord injury.
|
Human hearing is pretty dismal compared to animals like bats and dolphins that use sound to navigate, but blind people have been reported to be able to learn to use echolocation to sense their surroundings. Now, research out of the Ludwig Maximilian University of Munich (LMU) suggests that almost anyone could pick up the skill, and use echolocation to accurately estimate how big a room is. Using MRI scanners, the team also studied which parts of the brain activate during the process.
First, the team recorded the acoustic properties of a small, echoey chapel, before digitally manipulating this audio fingerprint to effectively make a virtual room sound larger or smaller. The aim of the project was to determine if people could be taught to use echolocation, clicking their tongues to figure out how big a given virtual space was.
"In effect, we took an acoustic photograph of the chapel, and we were then able to computationally alter the scale of this sound image, which allowed us to compress it or expand the size of the virtual space at will," says Lutz Wiegrebe, lead researcher on the study.
The study used a fairly small sample of 12 people – 11 sighted and one blind from birth – who were fitted with headphones and a microphone and placed in an MRI scanner.
To determine the size of a virtual room, each participant had to make tongue clicking sounds into the microphone, and the headphones would play back echoes generated from the "acoustic photograph" of the real building at different virtual sizes.
"All participants learned to perceive even small differences in the size of the space," reports Wiegrebe. The researchers found that accurate room size assessment improved when subjects made click sounds in real time, rather than when recorded tongue clicks were played back to them. The team says that one subject eventually managed to estimate a virtual room space to within four percent of the correct answer.
While the tests were taking place, the MRI scanner allowed the researchers to peer inside the brains of the participants. As the sound waves from the tongue clicks bounced off the virtual walls and returned to the person's ears, the auditory cortex was activated. This was followed shortly after by activation of the motor cortex, stimulating the tongue and vocal cords to produce more clicking sounds.
Surprisingly, the experiments undertaken with the blind subject revealed that the returning sounds were mostly processed in the visual cortex. "That the primary visual cortex can execute auditory tasks is a remarkable testimony to the plasticity of the human brain," says Wiegrebe. Activation of the visual cortex in sighted participants during echolocation tasks was also recorded, but found to be relatively weak.
While we'll never be as good at echolocation as our bat buddies, the results do seem to suggest that humans could be taught to navigate using sound with some modicum of success. The next step for the researchers is to use their findings to develop an echolocation training program for the visually impaired.
The research was published in the Journal of Neuroscience.
Source: LMU Munich
|
Question 1. Choose the correct alternative from the clues given at the end of each statement:
(a) The size of the atom in Thomson’s model is ___ the atomic size in
Rutherford’s model. (much greater than/no different from/much less than.)
(b) In the ground state of ___ electrons are in stable equilibrium, while in ___ electrons always experience a net force. (Thomson’s model/Rutherford’s model.)
(c) A classical atom based on ___ is doomed to collapse. (Thomson’s model/Rutherford’s model.)
(d) An atom has a nearly continuous mass distribution in a ___ but has a highly non-uniform mass distribution in ___. (Thomson’s model/Rutherford’s model.)
(e) The positively charged part of the atom possesses most of the mass in ___. (Rutherford’s model/both the models.)
Sol. (a) No different from
(b) Thomson’s model; Rutherford’s model
(c) Rutherford’s model
(d) Thomson’s model; Rutherford’s model
(e) Both the models.
Note: In the Rutherford model, the atom is an electrically neutral sphere consisting of a very small, massive and positively charged nucleus at the centre, surrounded by revolving electrons.
Question 2. Suppose you are given a chance to repeat the alpha-particle scattering experiment using a thin sheet of solid hydrogen in place of the gold foil. (Hydrogen is a solid at temperatures below 14 K.) What results do you expect?
Sol. The alpha-particle scattering for large impact parameters would be much the same. For smaller impact parameters the scattering would be reduced considerably, because the hydrogen nucleus is much lighter than the incident α-particle and so cannot deflect it appreciably. Even for a head-on collision there would be hardly any scattering, for the same reason.
Question 3. What is the shortest wavelength present in the Paschen series of spectral lines?
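The worked solution is not reproduced in this copy; as a standard Bohr-model calculation (not the original NCERT working), the shortest Paschen wavelength corresponds to a transition from n → ∞ down to n = 3:

```latex
\frac{1}{\lambda_{\min}} = R\left(\frac{1}{3^{2}} - \frac{1}{\infty^{2}}\right) = \frac{R}{9}
\quad\Rightarrow\quad
\lambda_{\min} = \frac{9}{R} \approx \frac{9}{1.097\times 10^{7}\,\mathrm{m^{-1}}}
\approx 8.2\times 10^{-7}\,\mathrm{m} \approx 820\ \mathrm{nm}.
```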
Question 4. A difference of 2.3 eV separates two energy levels in an atom. What is the frequency of radiation emitted when the atom makes a transition from the upper level to the lower level?
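Again as a standard calculation rather than the missing printed solution, the emitted frequency follows from E = hν:

```latex
\nu = \frac{E}{h} = \frac{2.3 \times 1.6\times 10^{-19}\,\mathrm{J}}{6.63\times 10^{-34}\,\mathrm{J\,s}}
\approx 5.6\times 10^{14}\,\mathrm{Hz}.
```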
Question 5. The ground state energy of hydrogen atom is -13.6 eV. What are the kinetic and potential energies of the electron in this state?
Question 6. A hydrogen atom initially in the ground level absorbs a photon, which excites it to the n = 4 level. Determine the wavelength and frequency of the photon.
Question 7. (a) Using the Bohr’s model calculate the speed of the electron in a hydrogen atom in the n = 1, 2, and 3 levels.
(b) Calculate the orbital period in each of these levels.
Question 8. The radius of the innermost electron orbit of a hydrogen atom is 5.3 × 10^-11 m. What are the radii of the n = 2 and n = 3 orbits?
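Since the Bohr radius scales as n², a quick check (not the original solution) gives:

```latex
r_n = n^{2} r_1:\qquad
r_2 = 4 \times 5.3\times 10^{-11}\,\mathrm{m} \approx 2.12\times 10^{-10}\,\mathrm{m},\qquad
r_3 = 9 \times 5.3\times 10^{-11}\,\mathrm{m} \approx 4.77\times 10^{-10}\,\mathrm{m}.
```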
Question 9. A 12.5 eV electron beam is used to bombard gaseous hydrogen at room temperature. What series of wavelengths will be emitted?
Question 10. In accordance with Bohr’s model, find the quantum number that characterises the earth’s revolution around the sun in an orbit of radius 1.5 × 10^11 m with orbital speed 3 × 10^4 m/s. (Mass of earth = 6.0 × 10^24 kg.)
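As a hedged sketch of the intended calculation (the printed solution is missing here), Bohr's quantisation condition mvr = nh/2π gives:

```latex
n = \frac{2\pi m v r}{h}
= \frac{2\pi \times 6.0\times 10^{24} \times 3\times 10^{4} \times 1.5\times 10^{11}}{6.63\times 10^{-34}}
\approx 2.6\times 10^{74}.
```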
ADDITIONAL NCERT EXERCISES
Question 11. Answer the following questions, which will help you understand the difference between Thomson’s model and Rutherford’s model better.
(a) Is the average angle of deflection of α-particles by a thin gold foil predicted by Thomson’s model much less, about the same, or much greater than that predicted by Rutherford’s model?
(b) Is the probability of backward scattering (i.e. scattering of α-particles at angles greater than 90°) predicted by Thomson’s model much less, about the same, or much greater than that predicted by Rutherford’s model?
(c) Keeping other factors fixed, it is found experimentally that for small thickness t, the number of α-particles scattered at moderate angles is proportional to t. What clue does this linear dependence on t provide?
(d) In which model is it completely wrong to ignore multiple scattering for the calculation of the average angle of scattering of α-particles by a thin foil?
- (a) About the same, since we are talking about the average angle of deflection.
- (b) Much less, as in Thomson’s model there is no massive central core (nucleus) to scatter α-particles backward.
- (c) This implies that the scattering of α-particles is due to a single collision only. If the thickness increases, the chance of a single collision increases because the number of target atoms increases; thus the number of α-particles scattered at moderate angles would also increase.
- (d) In Thomson’s model, the positive charge is uniformly spread out, so there would be hardly any noticeable deflection from a single collision; hence multiple scattering has to be considered to obtain the average scattering angle. In Rutherford’s model most of the scattering is due to a single collision, so multiple scattering can be ignored.
Note: In Rutherford’s model, most of an atom is empty space, so most alpha particles pass through it undeviated or with only a small deviation.
Question 12. The gravitational attraction between the electron and proton in a hydrogen atom is weaker than the Coulomb attraction by a factor of about 10^-40. An alternative way of looking at this fact is to estimate the radius of the first Bohr orbit of a hydrogen atom if the electron and proton were bound by gravitational attraction. You will find the answer interesting.
Question 13. Obtain an expression for the frequency of radiation emitted when a hydrogen atom de-excites from level n to level (n -1). For large n, show that this frequency equals the classical frequency of revolution of the electron in the orbit.
Note: The period of revolution of an electron in a Bohr orbit is proportional to the cube of the quantum number.
Question 14. Classically, an electron can be in any orbit around the nucleus of an atom. Then what determines the typical atomic size? Why is an atom not, say, a thousand times bigger than its typical size? The question had greatly puzzled Bohr before he arrived at his famous model of the atom that you have learnt in the text. To simulate what he might well have done before his discovery, let us play as follows with the basic constants of nature and see if we can get a quantity with the dimensions of length that is roughly equal to the known size of an atom (~10^-10 m).
(a) Construct a quantity with the dimensions of length from the fundamental constants e, me, and c. Determine its numerical value.
(b) You will find that the length obtained in (a) is many orders of magnitude smaller than the atomic dimensions. Further, it involves c. But energies of atoms are mostly in non-relativistic domain where c is not expected to play any role. This is what may have suggested Bohr to discard c and look for ‘something else’ to get the right atomic size. Now, the Planck’s constant h had already made its appearance elsewhere. Bohr’s great insight lay in recognising that h, me and e will yield the right atomic size. Construct a quantity with the dimension of length from h, me and e and confirm that its numerical value has indeed the correct order of magnitude.
Question 15. The total energy of an electron in the first excited state of the hydrogen atom is about -3.4 eV.
(a) What is the kinetic energy of the electron in this state?
(b) What is the potential energy of the electron in this state?
(c) Which of the answers above would change if the choice of the zero of potential energy is changed?
Question 16. If Bohr’s quantisation postulate (angular momentum = nh/2π) is a basic law of nature, it should be equally valid for the case of planetary motion also. Why then do we never speak of the quantisation of the orbits of planets around the sun?
Sol. In Bohr’s quantisation, angular momentum is quantised in units of h/2π (where h is Planck’s constant). The angular momenta associated with planetary motion are of the order of 10^70 h, which would mean n ≈ 10^70. For such large values of n, the differences between successive energies and angular momenta are so small that the allowed values are effectively continuous rather than discrete.
Question 17. Obtain the first Bohr’s radius and the ground state energy of a muonic hydrogen atom [i.e., an atom in which a negatively charged muon (-µ ) of mass about 207 me orbits around a proton].
|
In recent years, the use of drones has become increasingly popular across various industries. One sector that has embraced this technology is agriculture. Drones are revolutionizing farming practices by providing farmers with valuable insights and enhancing productivity. In India, where agriculture plays a vital role in the economy, the adoption of drones for agricultural purposes has the potential to address several challenges faced by farmers. In this article, we will explore how drones can unlock new possibilities for sustainable farming in India.
Precision Agriculture: Improving Efficiency and Accuracy
One of the key advantages of using drones in agriculture is their ability to facilitate precision farming practices. Traditional methods often lack accuracy and efficiency when it comes to monitoring crop health and identifying problem areas. Drones equipped with high-resolution cameras and sensors can provide farmers with real-time data on crop conditions, including nutrient deficiencies, pest infestations, and irrigation needs.
By capturing detailed aerial imagery, drones enable farmers to identify specific areas that require attention, allowing for targeted interventions instead of blanket treatments. This not only reduces input costs but also minimizes environmental impact by optimizing resource usage. With drones, Indian farmers can make informed decisions about fertilizers, pesticides, and water management techniques tailored to their specific crops’ needs.
Crop Monitoring: Enhancing Yield Prediction and Management
Accurate yield prediction is crucial for effective farm management and planning in India’s agricultural landscape. Drones equipped with multispectral or hyperspectral cameras can capture data beyond what the human eye can perceive. These cameras measure reflectance from different wavelengths of light emitted by crops, enabling precise analysis of plant health parameters such as chlorophyll content and vegetation indices.
By regularly monitoring crops throughout their growth cycle using drone technology, Indian farmers can obtain detailed information on crop vigor and identify potential yield-limiting factors early on. This allows them to take proactive measures such as adjusting irrigation schedules, applying targeted fertilizers, or implementing pest control strategies. With drones providing real-time crop monitoring capabilities, farmers can optimize their yield potential and minimize losses.
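As a purely illustrative sketch (not tied to any particular drone platform or camera), a vegetation index such as NDVI can be computed from the red and near-infrared reflectance bands of a multispectral image:

```python
# Illustrative NDVI computation from two reflectance bands of a multispectral
# drone image. Band loading is omitted because it depends on the camera and
# processing pipeline; the small arrays below are synthetic stand-ins.
import numpy as np

def ndvi(red: np.ndarray, nir: np.ndarray) -> np.ndarray:
    """Normalized Difference Vegetation Index: (NIR - red) / (NIR + red)."""
    red = red.astype(float)
    nir = nir.astype(float)
    denom = nir + red
    denom[denom == 0] = np.nan        # guard against division by zero
    return (nir - red) / denom

# Healthy vegetation reflects strongly in NIR and weakly in red, so values
# approach +1; bare soil or stressed crops sit much lower.
red = np.array([[0.05, 0.30], [0.08, 0.25]])
nir = np.array([[0.60, 0.35], [0.55, 0.28]])
print(ndvi(red, nir))
```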
Irrigation Management: Optimizing Water Usage
Water scarcity is a significant challenge faced by Indian farmers, particularly in arid regions. Efficient irrigation management is crucial to ensure optimal water usage and mitigate the risk of water wastage. Drones equipped with thermal sensors can detect variations in crop temperature, helping farmers identify areas with inadequate or excessive irrigation.
By precisely mapping these variations, drones enable farmers to implement site-specific irrigation strategies that match the actual water requirements of each area. This data-driven approach not only conserves water but also improves crop health and reduces the risk of diseases caused by over-irrigation or under-irrigation.
Crop Spraying: Enhancing Efficiency and Safety
Traditionally, crop spraying has been a labor-intensive task that involves potential health risks for farmers due to exposure to chemicals. Drones equipped with spraying systems offer a safer alternative by reducing human contact with harmful substances while improving efficiency.
Drones can accurately apply pesticides or fertilizers to crops, ensuring even coverage and minimizing waste compared to traditional manual methods. They can also access difficult-to-reach areas or terrains where human operators might face challenges. By adopting drone-based crop spraying techniques, Indian farmers can save time and resources while reducing environmental impact.
The use of drones for agriculture in India has the potential to transform farming practices and address various challenges faced by farmers across the country. By enabling precision agriculture practices such as crop monitoring, irrigation management, and efficient spraying techniques, drones offer valuable insights that enhance productivity while minimizing resource usage and environmental impact. As technology continues to advance, it is essential for Indian farmers to embrace these innovations and unlock the full potential of drones for sustainable farming in India.
|
The Seder is a Jewish ritual meal in which multiple generations of a family take part in recounting the liberation of the Israelites from slavery in ancient Egypt. The Seder itself is based on the command in the book of Exodus: "You shall tell your child on that day, saying, 'It is because of what the LORD did for me when I came out of Egypt.'" (Exodus 13:8) Traditionally, families and friends gather to read the text of the Haggadah, which contains the narrative of the Israelite exodus from Egypt, blessings, rituals, and songs. The Passover Seder Plate consists of symbolic foods which the leader of the Seder uses to tell the Exodus story. This year Passover begins on Friday. As the Passover is a critical part of the Easter story, learning about the ritual deepens understanding of the crucifixion and resurrection of Jesus.
|
Religion and the Middle Ages
In the later Middle Ages, the Church exerted a potent influence upon law. A very extensive jurisdiction was exercised by the ecclesiastical courts, which not only secured a more general exemption of the clergy from secular jurisdiction, but extended their own jurisdiction over laymen. Widows, orphans and helpless folk in general were protected by the Church, which also dealt with such a wide range of semi-secular offenses as falsification of measures, weights and coins; forgeries of documents; libel and scandal; perjury, including false witness and failure to perform an oath or vow; and usury, which then meant taking any interest for the use of money. In many of the above cases, the Church had a jurisdiction that was not exclusive, but exercised concurrently with that of the secular courts.
From the viewpoint of jurisdiction over sin, the Church naturally penalized a number of distinctly religious offenses: e.g., breaches of ecclesiastical regulations and discipline, and such major offenses as apostasy, heresy, schism, sorcery, witchcraft, sacrilege and sexual sins. In some of these cases, too, ecclesiastical jurisdiction was concurrent with that of the secular courts.
As was natural from the fact that marriage was considered a sacrament, the Church also exercised control over matrimonial cases and such related matters as the legitimacy of children, the recording of marriages and of baptisms, wills bequeathing personal property, and distribution of the property of intestates. For the exercise of its wide jurisdiction, the Church had developed courts superior to the secular courts in point of procedure, differentiation of penalties according to motives and other attendant circumstances, and use of the principles of jurisprudence. In some of these respects, the Church was transmitting much, which it had learned from the laws of the later Roman Empire; in others, Christian principles were humanizing the law.
Nor was ecclesiastical influence upon law confined to the Middle Ages, for a considerable part of the canon law, which developed then, continued to have vital effects upon public and private law long after the Protestant Revolt. In international relations, it was principally the Church, which kept alive the conception that all Christian peoples constitute a society of Christian nations, later broadened into the idea of a society of civilized nations; and there were valuable ecclesiastical contributions to the modern conviction that foreigners have legal rights, even in the absence of a treaty to that effect. Ecclesiastics taught that Christian principles should be followed in settling controversies between Christian princes and peoples; arbitration by the Pope was resorted to in a number of cases; and the Church made efforts to lessen the brutality of war. The chancellories and administrations of absolute monarchies of feudal states in Europe imitated the organization and procedure of the ecclesiastical hierarchy. In criminal law, the Church led in recognizing a class of healing penalties, imposed to reform the criminal, thus anticipating many recent reforms; and it spread the Christian conception that all should be equal before the law. In civil law, ecclesiastical influence is discernible in the institution of testamentary executors, and in the appointment of administrators of estates for which there were no wills; as well as in modifying certain details of civil procedure.
But many of the above-mentioned respects in which the Church influenced secular law in the later Middle Ages owed their foundations to periods earlier than the twelfth and the thirteenth centuries. In numerous instances, the origins go back to the early Middle Ages, that period which lies between the late fourth century and the end of the eleventh century; while some other precedents arose still earlier in the institutions of the Germanic and the Celtic peoples, or in the Greco-Roman civilization. Consequently, the relations of religion and law in the early Middle Ages deserve much more attention than they have received in popular accounts. In tracing these connections and their effects upon medieval civilization, we shall observe the part played by churchmen in making and writing down the secular laws, ecclesiastical influence upon the purposes of those laws, and ways in which the Church aided in reinforcing and in supplementing penal law.
Behind the writing down of the laws, there lay long centuries, in which there were very close connections between religion and law. For example, in those distant days, the office of king combined executive, judicial and religious functions, and pagan priests participated in the making of law and in its execution. The early assemblies and rudimentary courts of the barbarians met under the auspices of their pagan religions, which also provided some religious sanctions for the enforcement of law.
But although their pagan priests made some slight progress in devising means of writing, the customary laws of the Germanic and the Celtic peoples were not written down before the coming of Christianity; and, even then, considerable portions of the law remained unwritten for some time. In the formation of written codes of law among the barbarians, clerics played an important part, assisted in the beginning by lay experts in the Roman law, in Mediterranean countries where that law survived in its strongest form.
Legends ascribe to certain saints participation in the compiling of some codes of customary law, with revisions designed to eliminate pagan matter. One of the most interesting of these legends claims that the codes of the "Law of Nature" and of the Christian law were reconciled and further codified by a commission, in which St. Patrick exerted a powerful influence.1 According to Professor MacNeill, however, a more credible tradition places the first writing down of the early Irish laws in the seventh century. As compared with the peoples of England and of the Continent, the medieval Irish gave more power to trained lay jurists, called Brehons, in the making, interpretation and application of the laws. But even the Brehons received training in canon, as well as in secular law, in professional law schools, which anticipated by several centuries those of the medieval universities on the Continent.
In the revisions and additions to the early barbarian codes, also, the clergy exercised a potent influence. Prelates sat in the mixed assemblies, which made new laws; advised kings and emperors in drawing up their edicts, which, on the Continent and in England, were usually written by ecclesiastics; and, in some countries, had their representatives present in secular courts in the trial of certain cases. Spain under the Visigoths, from about A.D. 429 to the early eighth century, gave to prelates the most extensive legislative and judicial powers. For example, seventh-century Spanish synods went far beyond the field of Church legislation, covering many questions pertaining to the secular constitution; and, in the mixed council of Toledo of A.D. 653, ecclesiastical magnates far outnumbered the secular ones. In general, during the Visigothic monarchy, the form of the secular code became distinctly ecclesiastical, while the predominance of the hierarchy in Spain may be observed in numerous other respects.
Nor were the above the only instances of religious influence upon the beginnings of law among the barbarians, for they, like others in the early history of society, deemed law to be of divine origin and believed that the purposes of law were religious, as well as secular. This conception is expressed very well in one of the later Visigothic laws, known as the Forum Judicum or Fuero Juzgo, perhaps coming from as early as the year 932 but transmitted, in considerable part, to later Spanish codes. In this Visigothic code, "framed largely by the Spanish clergy in the councils of Toledo, law is defined as 'the emulator of divinity, the messenger of justice, the mistress of life.'"2
In contrast to our present practice, the secular laws of the Middle Ages constantly reiterated that crimes were sins, and that secular penal law had a religious, as well as a punitive purpose. In addition to being wrongs against individuals or the State, crimes were regarded as defiling the souls of the committor. This conception is best exemplified in the Irish laws, in which the Old Irish verb used for the commission of a crime also means "to defile," and it is definitely stated that "body and soul are defiled by committing crimes." Punishment was for the purpose of purging away this defilement. As further indication of the religious character of penal law in the early Middle Ages, many secular codes contain hosts of passages of a moralizing nature, sometimes with lengthy quotations from the Scriptures.3
In other respects, also, the contents of early medieval law repeatedly point to a union of religious and secular purposes. This characteristic becomes strongest in the more developed laws of the early Middle Ages, particularly in the monarchies of the Franks, the Visigoths, the Anglo-Saxons, the Welsh and the Irish. In the secular laws of the above peoples, a multitude of passages deal with ecclesiastical or semi-ecclesiastical matters. Thus they provide valuable supplements to other sets of purely ecclesiastical laws that were passed by Church councils and by the ecclesiastical and the mixed synods of national Churches.
Such a wide variety of ecclesiastical matters are dealt with in the secular laws of the early Middle Ages, as to preclude detailed treatment in this brief article. In general, however, important groups of such ecclesiastical provisions often dealt with such questions as the following: The protection of clerics and of Church property; penalties for injuries or wrongs to the same; the suppression of paganism, apostasy and heresy; provision for sanctuary and for the sanctity of ecclesiastical courts; the general enforcement of ecclesiastical discipline; the observance of holy days, etc.
Of special significance for the cooperation of Church and State in combating crime and paganism are the detailed passages in secular codes which enforced the performance of confession and of penance. Such requirements applied not only to criminals, but also to all Christians above the age of seven, a requirement coming from the canon law. In some of the secular codes, there are also many passages enforcing excommunication and putting excommunicated persons, particularly criminals, under a ban, which meant ostracism by all the faithful. Indeed, there is abundant evidence in the secular laws that the State gave material aid to a movement by which the Church was reviving the practice of penitential discipline from the sixth century forward, and was endeavoring to establish frequent confession and penance as a habit of the devout life.4
In many of the ecclesiastical provisions of the secular laws, one may also discern the influence of the developing science of moral theology, with its definitions of sins, distinctions between their degrees, prescriptions for their cure, etc.; while the related science of canon law is frequently revealed as influencing secular institutions. But in the actual administration of the sacrament of penance and other matters solely within ecclesiastical jurisdiction, it must be remembered that secular laws supplemented the canon law, and did not infringe upon powers invested in the clergy. This means, for example, that although the State backed the requirement of confession and penance, and enforced excommunication, the control of ecclesiastical discipline remained in the hands of the duly constituted clerics.
The Church, on its side, in various ways rendered valuable aid to the State in the endeavors of the latter to maintain law and order. That was an age when such assistance was sorely needed, for there were powerful obstacles to the suppression of crime and disorders, which long retarded the development of peaceful justice, particularly when the executive machinery was weak. Until the developing power of kings and of more effective executive machinery and law brought improvements, it was often difficult to make delinquents do justice; private vengeance was still allowed in a number of cases, at times developing into dangerous private wars or feuds; there was grave danger of perjury, because of defects in court procedure; and the secular laws sometimes left unpunished certain heinous offenses.
In such circumstances, the pressure exerted against malefactors by the secular government needed reinforcement and supplementing. This aid was rendered by the Church through religious sanctions and safeguards to strengthen legal procedure, other provisions which backed secular enforcement, and supplementary penalties for delinquents. Ecclesiastical discipline was the natural means used for this co-operation, and the usual instrument for such discipline was penance, sometimes backed, in cases of recalcitrance, by the ban of excommunication.
Penitential discipline had close connections with many more aspects of medieval life than has usually been represented by writers dealing with the history of penance. The influence of that sacrament had many ramifications, which penetrated deeply into numerous areas of medieval society: social, economic, political and cultural, as well as ecclesiastical. Social and economic life was profoundly affected by numerous penitential prescriptions, regulating such varied matters as food and drink, marriage, sexual relations, charity, the treatment of women and children, the emancipation of bondsmen, and the sacredness of oaths.
Such penitential prescriptions are present in various sources of the canon law of that time, but are especially prominent in the little manuals of penance, that were customarily used by priests in assigning penances for all sins, from about the beginning of the seventh century to the end of the eleventh.6 These manuals were called penitentials, to be marked off distinctly from other manuals of penance by their detailed tariffs, or schedules, of specific penances for long lists of sins. From this characteristic, the system of penitential discipline under the penitentials has been aptly called "tariffed penance."
Among various ways in which the enforcement of secular law was aided by the Church, great significance attaches to the many and potent religious sanctions, developed much farther than had been the case in pagan times. Oaths, ordeals, and other parts of legal procedure were taken under the protection of the Church and surrounded with solemn rituals, employed to overawe criminals and witnesses, prevent perjury, appeal to the judgment of God to aid the right, etc. Solemn oaths, sworn on holy objects or persons, were required as religious safeguards in a multitude of penal, political, social, business and religious affairs. Perjury committed in such oaths, regarded as a most enormous sin, was visited by extremely severe penances in addition to secular penalties. Additional and valuable aid to criminal law was rendered by penitential and other canons, which insisted upon restitution to wronged persons, penalized refusal to do justice, and sternly punished the pursuing of private vengeance and the blood feud.
Penitential discipline also penalized a considerable number of serious offenses which the early Germanic and Celtic laws either left unpunished or penalized too lightly. These delinquencies included a number of sexual offenses, infanticide, brawling, infringement of the marital code of the Church, and the mistreatment of slaves and of serfs. In punishing such wrongs against fellow human beings, penance made valuable contributions toward a higher evaluation of life, honor and humanity. Through such endeavors, the Church carried on a long and difficult struggle against many a savage or sensual custom which militated against the welfare of women, children and other dependents. In particular, the upward progress of humanity received potent aid in the long struggle for the Christian ideal of monogamy, assisted by the heavy penances upon concubinage and other forms of immorality; while the growth of freedom gained momentum through the encouragement given by the Church to the emancipation of slaves or of serfs as a good work.
In addition to the above instances, there were various other respects in which early medieval law was influenced by the Church. It was largely through the work of clerics that the field of criminal law was first extended to cover offenses primarily against individuals but tending to undermine the social order. A religious marriage replaced the pagan one of sale or of contract, and the Church repeatedly intervened to protect the wife. Ecclesiastical protection was extended to slaves manumitted in a church or by testament. Roman conceptions of and instruments for wills and deeds were spread throughout Western Europe by churchmen; and Roman conceptions of differentiating penalties according to motives or other circumstances were gradually introduced by churchmen into the secular laws, sometimes by way of the penitentials.
We have now observed the main outlines in the relations between medieval religion and law, especially in the early Middle Ages. In these relations, Church and State constantly co-operated in making notable contributions to the maintenance of law and order, and hence, to the advancement of civilization. With the further revival of Roman law and the more extended development of canon law in the late Middle Ages, these processes made additional gains. But behind the co-operation of secular and ecclesiastical discipline in combating sin, there lay the constant and more positive work of the Church in inculcating religious doctrines and high moral ideals through religious instruction. Without such instruction to serve as inspiration and guide to right conduct, many of the relations between religion and law, which we have observed, would have been inconceivable or ineffective.
1 See A. S. Green, History of the Irish State to 1014, 1925, p. 233; cf. Eoin MacNeill, Early Irish Laws and Institutions, chaps. iii-iv.
2 See M. F. X. Millar, S.J., "History . . . of Democratic Theory," in Ryan and Millar, The State and the Church, 1922, p. 102.
3 For further details, see an article "Mediaeval Penance and the Secular Law," by the present writer, and its references, in Speculum, Vol. III, pp. 516 and passim.
4 See the article "Mediaeval Penance and the Secular Law," already cited; and O. D. Watkins, History of Penance, 1920, passim.
5 See "Some Neglected Aspects in the History of Penance," by the present writer, in the Catholic Historical Review, October, 1938.
6 In the British Isles, penitentials were also used in the sixth century.
© 1939 Missionary Society of St. Paul the Apostle (The Paulist Fathers), New York.
|
Transportation is a crucial aspect of modern life, enabling us to travel to work, school, and other activities. However, transportation is also one of the largest sources of greenhouse gas emissions, contributing to climate change and air pollution. In recent years, there has been a growing recognition of the importance of promoting sustainable commuting practices and reducing the carbon footprint of transportation. In this blog post, we will explore the benefits of sustainable commuting, the challenges to promoting it, and examples of sustainable commuting solutions. We will also provide tips on promoting sustainable commuting in your life and community.
The Benefits of Sustainable Commuting
Sustainable commuting, also known as green commuting, refers to the use of transportation modes that produce fewer emissions than traditional modes of transportation, such as single-occupancy vehicles. We can create a more livable, healthy, and equitable future by adopting sustainable commuting practices. Here are some of the benefits of sustainable commuting:
Reducing greenhouse gas emissions and mitigating the impacts of climate change
Transportation is one of the largest sources of greenhouse gas emissions, which contribute to climate change. By promoting sustainable transportation solutions, such as public transportation, biking, and walking, we can reduce our carbon footprint and mitigate the impacts of climate change, such as extreme weather events, rising sea levels, and more frequent natural disasters.
Improving air quality and public health outcomes
Air pollution from transportation is a major contributor to respiratory illnesses, heart disease, and other health problems. By promoting sustainable transportation solutions, we can improve public health outcomes and create more livable, healthy cities. For example, walking or cycling to work can improve physical fitness and mental health, while public transportation can provide affordable and reliable access to jobs, education, and other opportunities.
Conserving energy and resources
Sustainable transportation solutions, such as electric vehicles and public transportation, use less energy and resources than traditional transportation modes, such as gasoline-powered vehicles. By promoting these solutions, we can conserve energy and resources and create a more sustainable future.
Boosting the economy and creating jobs
Sustainable transportation solutions, such as public transportation and bike-sharing programs, can create jobs and boost local economies. Additionally, sustainable transportation solutions can save individuals money on transportation costs, such as gas, parking, and maintenance.
Improving the quality of life for individuals and communities
Finally, sustainable transportation solutions improve the overall quality of life for individuals and communities: fewer cars on the road mean less congestion and noise, and the health, cost, and access benefits described above all add up to more livable neighborhoods.
Challenges to Promoting Sustainable Commuting
While promoting sustainable commuting is important, there are several challenges to achieving it. Here are some of the challenges:
Lack of infrastructure for sustainable modes of transportation
One of the major challenges is the lack of infrastructure for sustainable modes of transportation, such as bike lanes, pedestrian walkways, and public transportation systems. Without proper infrastructure, it can be difficult for individuals to adopt sustainable commuting practices.
Accessibility issues in rural or low-income areas
Another challenge is the lack of accessibility to sustainable transportation solutions, particularly in rural or low-income areas. Public transportation systems may not be available or accessible in some areas, making it difficult for individuals to use sustainable transportation modes.
Behavioral change and shifting away from single-occupancy vehicles
Changing behavior is often challenging, particularly when it comes to transportation. Many people are used to driving alone, and persuading them to adopt sustainable commuting practices can be difficult.
Cost of sustainable transportation solutions
The cost of sustainable transportation solutions can also be a challenge for some individuals. Electric or hybrid vehicles, for example, can be more expensive than traditional gasoline-powered vehicles, making them less accessible to some people.
Policy and regulatory challenges
Policies and regulations can also be a challenge to promoting sustainable commuting. In some areas, regulations may favor automobiles over sustainable modes of transportation, or there may be a lack of political will to invest in sustainable transportation solutions.
Sustainable Commuting Solutions
Despite the challenges, there are many examples of sustainable commuting solutions that are being implemented around the world. Here are some examples:
- Public transportation systems: Public transportation systems, such as buses, trains, and subways, are a sustainable commuting solution that can provide affordable and reliable transportation options for individuals and communities. Many cities around the world have invested in public transportation systems to promote sustainable commuting.
- Bike lanes and pedestrian walkways: Bike lanes and pedestrian walkways provide safe and accessible transportation options for cyclists and pedestrians. Many cities around the world have implemented bike lanes and pedestrian walkways to promote sustainable commuting.
- Carpooling and ride-sharing services: Carpooling and ride-sharing services, such as Uber and Lyft, provide a more sustainable alternative to driving alone. By sharing rides with others, individuals can reduce the number of cars on the road and promote sustainable commuting.
- Electric and hybrid vehicles: Electric and hybrid vehicles produce fewer emissions than traditional gasoline-powered vehicles, making them a more sustainable commuting option. Many automobile manufacturers have introduced electric and hybrid vehicles to their lineups in recent years.
- Car-sharing and bike-sharing programs: Car-sharing and bike-sharing programs provide access to vehicles on a short-term basis. By giving people an alternative to owning a car and reducing the number of cars on the road, these programs help promote sustainable commuting.
How to Promote Sustainable Commuting
Promoting sustainable commuting requires individual and collective action. Here are some tips on how to promote sustainable commuting in your own life and community:
Educate yourself on sustainable transportation solutions and their benefits
Learn more about sustainable transportation solutions and their benefits. Many resources are available online, including government websites, non-profit organizations, and academic research.
Choose sustainable modes of transportation
Choose sustainable modes of transportation, such as walking, cycling, public transportation, or carpooling. Use an electric or hybrid vehicle if you must drive. Plan your trips ahead of time to avoid unnecessary driving and optimize your route.
Advocate for change
Contact local government officials to advocate for sustainable transportation solutions. Attend public meetings, sign petitions, and write letters to elected officials. Encourage your workplace to promote sustainable commuting options.
Support sustainable transportation initiatives
Support organizations and initiatives that promote sustainable transportation solutions, such as bike-sharing programs, public transportation systems, and carpooling services.
Set an example
Lead by example and encourage others to adopt sustainable commuting practices. Share your experiences and the benefits of sustainable transportation with friends, family, and colleagues.
Promoting sustainable commuting and reducing transportation's carbon footprint requires overcoming various challenges. However, the benefits of sustainable commuting are clear, including mitigating the impacts of climate change, improving public health outcomes, conserving energy and resources, boosting the economy, and improving the quality of life for individuals and communities.
By addressing the challenges through infrastructure improvements, accessibility improvements, behavioral change campaigns, cost-reducing initiatives, and policy changes, we can create a more sustainable future for ourselves and future generations. Supporting sustainable transportation initiatives is essential to creating a more livable, healthy, and equitable future.
|
In mathematics, group theory studies the algebraic structures known as groups. A group is a set together with an operation that combines any two of its elements to form a third element of the set, in such a way that:
- The operation is associative.
- The operation has an identity element.
- Every element has an inverse element.
(Read on for more precise definitions.)
Groups are building blocks of more elaborate algebraic structures such as rings, fields, and vector spaces, and recur throughout mathematics. Group theory has many applications in physics and chemistry, and is potentially applicable in any situation characterized by symmetry.
The order of a group is the cardinality (number of elements) of its underlying set; groups can be of finite or infinite order. The classification of finite simple groups is a major mathematical achievement of the 20th century.
Group theory concepts
A group consists of a collection of abstract objects or symbols, and a rule for combining them. The combination rule indicates how these objects are to be manipulated. Hence groups are a way of doing mathematics with symbols instead of concrete numbers.
More precisely, one may speak of a group whenever a set, together with an operation that always combines two elements of this set (for example, a × b), fulfills the following requirements:
- The combination of two elements of the set yields an element of the same set (closure);
- The bracketing is unimportant (associativity): a × (b × c) = (a × b) × c;
- There is an element that does not cause anything to happen (identity element): a × 1 = 1 × a = a;
- Each element a has a "mirror image" (inverse element) 1/a that has the property of yielding the identity element when combined with a: a × 1/a = 1/a × a = 1.
Special case: If the order of the operands does not affect the result, that is if a × b = b × a holds (commutativity), then we speak of an abelian group.
Some simple numeric examples of abelian groups are:
- Integers with the addition operation "+" as binary operation and zero as identity element
- Rational numbers without zero with multiplication "×" as binary operation and the number one as identity element. Zero has to be excluded because it does not have an inverse element. ("1/0" is undefined.)
This definition of groups is deliberately very general. It allows one to treat as groups not only sets of numbers with corresponding operations, but also other abstract objects and symbols that fulfill the required properties, such as polygons with their rotations and reflections in dihedral groups.
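As a concrete illustration of this general definition, the short sketch below checks the requirements by brute force for one particular choice of group, the integers 0 to 5 under addition modulo 6. The choice of Java and of this specific example are assumptions made purely for illustration; they are not part of the article.

```java
public class ModularGroupCheck {
    static final int N = 6;                               // the set {0, 1, ..., 5}
    static int op(int a, int b) { return (a + b) % N; }   // addition modulo N (closure is automatic)

    public static void main(String[] args) {
        boolean ok = true;
        for (int a = 0; a < N; a++) {
            boolean hasInverse = false;
            for (int b = 0; b < N; b++) {
                if (op(a, b) == 0 && op(b, a) == 0) hasInverse = true;   // inverse: combining gives the identity 0
                for (int c = 0; c < N; c++) {
                    if (op(op(a, b), c) != op(a, op(b, c))) ok = false;  // associativity
                }
            }
            if (op(a, 0) != a || op(0, a) != a) ok = false;              // 0 acts as identity element
            if (!hasInverse) ok = false;
        }
        System.out.println(ok ? "All group axioms hold" : "Axioms violated");
    }
}
```

Swapping the operation for, say, subtraction makes the program report a violation, since subtraction is neither associative nor does 0 act as a two-sided identity.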
James Newman summarized group theory as follows:
"The theory of groups is a branch of mathematics in which one does something to something and then compares the results with the result of doing the same thing to something else, or something else to the same thing."
Definition of a group
A group (G, *) is a set G closed under a binary operation * satisfying the following 3 axioms:
- Associativity: For all a, b and c in G, (a * b) * c = a * (b * c).
- Identity element: There exists an e∈G such that for all a in G, e * a = a * e = a.
- Inverse element: For each a in G, there is an element b in G such that a * b = b * a = e, where e is the identity element.
In the terminology of universal algebra, a group is a variety, and an algebra of type ⟨2,1,0⟩ (one binary operation, one unary operation, and one nullary operation).
A set H is a subgroup of a group G if it is a subset of G and is a group using the operation defined on G. In other words, H is a subgroup of (G, *) if the restriction of * to H is a group operation on H.
A subgroup H is a normal subgroup of G if for all h in H and g in G, ghg⁻¹ is also in H. An alternative (but equivalent) definition is that a subgroup is normal if its left and right cosets coincide. Normal subgroups play a distinguished role by virtue of the fact that the collection of cosets of a normal subgroup N in a group G naturally inherits a group structure, enabling the formation of the quotient group, usually denoted G/N (also sometimes called a factor group).
Operations involving groups
A homomorphism is a map between two groups that preserves the structure imposed by the operator. If the map is bijective, then it is an isomorphism. An isomorphism from a group to itself is an automorphism. The set of all automorphisms of a group is a group called the automorphism group. The kernel of a homomorphism is a normal subgroup of the group.
A group action is a map involving a group and a set, where each element in the group defines a bijective map on a set. Group actions are used to prove the Sylow theorems and to prove that the centre of a p-group is nontrivial.
Special types of groups
A group is:
- Abelian (or commutative) if its product commutes (that is, for all a, b in G, a * b = b * a). A non-abelian group is a group that is not abelian. The term "abelian" honours the mathematician Niels Abel.
- Cyclic if it is generated by a single element.
- Simple if it has no nontrivial normal subgroups.
- Solvable (or soluble) if it has a normal series whose quotient groups are all abelian. The fact that S₅, the symmetric group on 5 elements, is not solvable is used to prove that some quintic polynomials cannot be solved by radicals.
- Free if there exists a subset of G, H, such that all elements of G can be written uniquely as products (or strings) of elements of H. Every group is the homomorphic image of some free group.
Some useful theorems
Some basic results in elementary group theory:
- Lagrange's theorem: if G is a finite group and H is a subgroup of G, then the order (that is, the number of elements) of H divides the order of G; a small worked example follows this list.
- Cayley's Theorem: every group G is isomorphic to a subgroup of the symmetric group on G.
- Sylow theorems: if pⁿ (with p prime) is the greatest power of p dividing the order of a finite group G, then there exists a subgroup of order pⁿ. This is perhaps the most useful basic result on finite groups.
- The Butterfly lemma is a technical result on the lattice of subgroups of a group.
- The Fundamental theorem on homomorphisms relates the structure of two objects between which a homomorphism is given, and of the kernel and image of the homomorphism.
- Jordan-Hölder theorem: any two composition series of a given group are equivalent.
- Krull-Schmidt theorem: a group G satisfying certain finiteness conditions for chains of its subgroups, can be uniquely written as a finite direct product of indecomposable subgroups.
- Burnside's lemma: the number of orbits of a group action on a set equals the average number of points fixed by each element of the group.
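To make Lagrange's theorem concrete, here is a minimal worked instance, taking a group of 12 elements purely as an illustrative assumption:

```latex
|G| = 12 \quad\Longrightarrow\quad |H| \text{ divides } 12 \quad\Longrightarrow\quad |H| \in \{1,\,2,\,3,\,4,\,6,\,12\}
```

So a group with 12 elements can never contain a subgroup with, for example, exactly 5 elements.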
Connection of groups and symmetry
Given a structured object of any sort, a symmetry is a mapping of the object onto itself which preserves the structure. For example, rotations of a sphere are symmetries of the sphere. If the object is a set with no additional structure, a symmetry is a bijective map from the set to itself. If the object is a set of points in the plane with its metric structure, a symmetry is a bijection of the set to itself which preserves the distance between each pair of points (an isometry).
The axioms of a group formalize the essential aspects of symmetry.
- Closure of the group law - This says if you take a symmetry of an object, and then apply another symmetry, the result will still be a symmetry.
- The existence of an identity - This says that keeping the object fixed is always a symmetry of an object.
- The existence of inverses - This says every symmetry can be undone.
- Associativity - Since symmetries are functions on a space, and composition of functions is associative, this axiom is needed to make a formal group behave like functions.
Frucht's theorem says that every group is the symmetry group of some graph. So every abstract group is actually the symmetries of some explicit object.
Applications of group theory
Some important applications of group theory include:
- Groups are often used to capture the internal symmetry of other structures. An internal symmetry of a structure is usually associated with an invariant property; the set of transformations that preserve this invariant property, together with the operation of composition of transformations, form a group called a symmetry group. Also see automorphism group.
- Galois theory, which is the historical origin of the group concept, uses groups to describe the symmetries of the roots of a polynomial (or more precisely the automorphisms of the algebras generated by these roots). The solvable groups are so-named because of their prominent role in this theory. Galois theory was originally used to prove that polynomials of the fifth degree and higher cannot, in general, be solved in closed form by radicals, the way polynomials of lower degree can.
- Abelian groups, which add the commutative property a * b = b * a, underlie several other structures in abstract algebra, such as rings, fields, and modules.
- In algebraic topology, groups are used to describe invariants of topological spaces. They are called "invariants" because they are defined in such a way that they do not change if the space is subjected to some deformation. Examples include the fundamental group, homology groups and cohomology groups. The name of the torsion subgroup of an infinite group shows the legacy of topology in group theory.
- The concept of the Lie group (named after mathematician Sophus Lie) is important in the study of differential equations and manifolds; they describe the symmetries of continuous geometric and analytical structures. Analysis on these and other groups is called harmonic analysis.
- In combinatorics, the notion of permutation group and the concept of group action are often used to simplify the counting of a set of objects; see in particular Burnside's lemma.
- An understanding of group theory is also important in physics, chemistry, and materials science. In physics, groups are important because they describe the symmetries which the laws of physics seem to obey. Physicists are very interested in group representations, especially of Lie groups, since these representations often point the way to the "possible" physical theories. Examples of the use of groups in physics include the Standard Model, gauge theory, the Lorentz group, and the Poincaré group.
- In chemistry, groups are used to classify crystal structures, regular polyhedra, and the symmetries of molecules. The assigned point groups can then be used to determine physical properties (such as polarity and chirality), spectroscopic properties (particularly useful for Raman spectroscopy and Infrared spectroscopy), and to construct molecular orbitals.
- Group theory is used extensively in public-key cryptography. In Elliptic-Curve Cryptography, very large groups of prime order are constructed by defining elliptic curves over finite fields.
History
There are three historical roots of group theory: the theory of algebraic equations, number theory, and geometry. Euler, Gauss, Lagrange, Abel, and the French mathematician Galois were early researchers in the field of group theory. Galois is honored as the first mathematician to link group theory and field theory, with the theory that is now called Galois theory.
An early source occurs in the problem of forming an mth-degree equation having as its roots m of the roots of a given nth-degree equation (with m < n). For simple cases the problem goes back to Hudde (1659). Saunderson (1740) noted that the determination of the quadratic factors of a biquadratic expression necessarily leads to a sextic equation, and Le Sœur (1748) and Waring (1762 to 1782) still further elaborated the idea.
A common foundation for the theory of equations on the basis of the group of permutations was found by mathematician Lagrange (1770, 1771), and on this was built the theory of substitutions. He discovered that the roots of all resolvents (résolvantes, réduites) which he examined are rational functions of the roots of the respective equations. To study the properties of these functions he invented a Calcul des Combinaisons. The contemporary work of Vandermonde (1770) also foreshadowed the coming theory.
Ruffini (1799) attempted a proof of the impossibility of solving the quintic and higher equations. Ruffini distinguished what are now called intransitive and transitive, and imprimitive and primitive groups, and (1801) uses the group of an equation under the name l'assieme delle permutazioni. He also published a letter from Abbati to himself, in which the group idea is prominent.
Galois found that if r₁, r₂, …, rₙ are the roots of an equation, there is always a group of permutations of the r's such that (1) every function of the roots invariable by the substitutions of the group is rationally known, and (2), conversely, every rationally determinable function of the roots is invariant under the substitutions of the group. Galois also contributed to the theory of modular equations and to that of elliptic functions. His first publication on group theory was made at the age of eighteen (1829), but his contributions attracted little attention until the publication of his collected papers in 1846 (Liouville, Vol. XI).
Arthur Cayley and Augustin Louis Cauchy were among the first to appreciate the importance of the theory, and to the latter especially are due a number of important theorems. The subject was popularised by Serret, who devoted section IV of his algebra to the theory; by Camille Jordan, whose Traité des Substitutions is a classic; and by Eugen Netto (1882), whose Theory of Substitutions and its Applications to Algebra was translated into English by Cole (1892). Other group theorists of the nineteenth century were Bertrand, Charles Hermite, Frobenius, Leopold Kronecker, and Emile Mathieu.
Walther von Dyck was the first (in 1882) to define a group in the full abstract sense of this entry.
The study of what are now called Lie groups, and their discrete subgroups, as transformation groups, started systematically in 1884 with Sophus Lie; followed by work of Killing, Study, Schur, Maurer, and Cartan. The discontinuous ( discrete group) theory was built up by Felix Klein, Lie, Poincaré, and Charles Émile Picard, in connection in particular with modular forms and monodromy.
The classification of finite simple groups is a vast body of work from the mid 20th century, classifying all the finite simple groups.
Other important contributors to group theory include Emil Artin, Emmy Noether, Sylow, and many others.
Alfred Tarski proved elementary group theory undecidable.
An application of group theory is musical set theory.
In philosophy, Ernst Cassirer related group theory to the theory of perception of Gestalt psychology. He took the perceptual constancy of that psychology as analogous to the invariants of group theory.
|
Understanding electricity and its basic concepts can be a fascinating experience for children. However, it can also be challenging for them to grasp the abstract and technical concepts of voltage and current. As a parent, teacher, or caregiver, it is important to use simple and relatable examples to explain these concepts to children. By doing so, you can spark their curiosity and help them understand the science behind everyday electronics.
When explaining voltage and current to a child, it is important to use age-appropriate language and tone. Avoid using complicated words or jargon that might confuse them. Instead, use analogies and metaphors that relate to their daily experiences and surroundings.
- Use relatable examples and analogies when explaining voltage and current to children.
- Avoid using technical jargon and complicated words.
- Remember to use age-appropriate language and tone.
What is Voltage?
When you talk about electricity, you often hear the word “voltage.” So, what does it mean? Voltage is simply the force or pressure that pushes electric charges through a circuit. It’s like the energy that propels a car forward, except in this case it’s pushing electrons along a wire.
A good way to think about voltage is to imagine it like water pressure in a pipe. Higher voltage means higher pressure, which means more electricity can flow through the circuit. Voltage is measured in volts, and different devices require different voltages to work.
- Battery-powered toys and flashlights: typically run on batteries of about 1.5 to 9 volts.
- Household electrical outlets: about 120 volts (in the US).
- Large appliances like air conditioners and washing machines: often around 240 volts.
So, in summary, voltage is like the pushing force behind electricity. It’s measured in volts and different devices need different amounts of voltage to function. Hopefully, this explanation helps simplify the concept of voltage for children.
What is Current?
In simple terms, current is the flow of electric charges through a circuit. Just like water flowing through a pipe, electric current flows through wires and conductors, powering appliances and devices.
Current is measured in amps, which tell us how much electric charge is flowing through a circuit. For example, if you have a light bulb with a current of 1 amp, that means 1 coulomb of electric charge is flowing through the circuit every second.
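Written as a small worked equation (using the standard definition of current as charge per unit time, which the paragraph above describes in words):

```latex
I = \frac{Q}{t} \quad\Longrightarrow\quad Q = I \times t = 1\,\text{A} \times 1\,\text{s} = 1\,\text{coulomb}
```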
Think of it like water flowing through a pipe. The amps measure how much water is flowing every second. A larger current means more electric charge is flowing, just like a larger flow of water means more water is flowing.
Understanding current is important because it helps us determine how much power a device needs to function properly. Larger devices, like air conditioners or refrigerators, require more current to function than smaller devices, like a phone charger.
Just like voltage, current is necessary for electric devices to work. But it’s important to remember that dealing with electricity can be dangerous, especially for children. Always handle electrical devices with caution and follow safety guidelines to prevent electrical shocks.
How Voltage and Current Work Together
Now that you understand the basics of voltage and current, it’s important to know how they work together in a circuit. Think of voltage as the force or pressure that pushes electric charges through a circuit, and current as the flow of those electric charges.
To better understand this relationship, imagine a water slide. Just like how gravity provides the energy for water to flow down the slide, voltage provides the energy for current to flow through a circuit. The higher the voltage, the stronger and faster the current will be.
It’s important to note that not all devices require the same voltage to work. For example, a phone charger may require a low voltage of 5 volts, while a vacuum cleaner may require a higher voltage of 120 volts. Using the wrong voltage could damage the device or even cause a safety hazard.
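The article never names it, but the standard rule that ties voltage and current together for a simple resistive device is Ohm's law, I = V / R, with the power drawn given by P = V × I. Here is a minimal sketch, using the 120-volt outlet from the example above and a made-up resistance value:

```java
public class OhmsLawExample {
    public static void main(String[] args) {
        double voltage = 120.0;    // volts, the household outlet value used above
        double resistance = 240.0; // ohms, an assumed value for a simple resistive appliance
        double current = voltage / resistance; // Ohm's law: I = V / R
        double power = voltage * current;      // power in watts: P = V * I
        System.out.printf("Current: %.2f A, Power: %.1f W%n", current, power);
    }
}
```

With these assumed numbers the device draws 0.5 amps and about 60 watts; plugging the same device into a 5-volt source would push through far too little current for it to work, which is why matching the voltage matters.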
That’s why it’s essential to have a basic understanding of voltage and current when dealing with electrical devices. By grasping these concepts, you can ensure safe and efficient use of electronics in your home.
Common Examples of Voltage and Current
Now that you understand the basics of voltage and current, let’s take a look at some everyday examples you can use to explain these concepts to children.
- Batteries: Show your child an AA battery and explain that its voltage is what makes it work. You could turn on a small flashlight or toy that uses a battery as an example.
- Appliances: Point out that all the electronics in your home need current to function. Turn on a lamp or TV to demonstrate this for your child.
- Outlets: Show your child an electrical outlet and explain that it provides power to whatever is plugged in. You could plug in a phone charger or hair dryer to show this in action.
- Circuit boards: For older children, introduce them to circuit boards and explain that they use current to power electronic devices. You could take apart an old appliance or toy and show them how it works.
By showing your child how voltage and current work in everyday items, they’ll begin to understand how these concepts apply to their own lives.
Safety Tips for Voltage and Current
When it comes to electricity, safety should always be a top priority. It is important to teach children about the potential dangers of voltage and current, and how to handle electrical devices in a safe manner. Here are some basic safety tips to keep in mind:
- Never touch exposed wires: Exposed wires can be very dangerous, even if they are not currently carrying electricity. Always use caution around electrical devices and avoid touching any wires or circuits that you are not familiar with.
- Keep water away from electrical devices: Water can conduct electricity and increase the risk of electrical shock. Never use electrical devices near water, and avoid handling them with wet hands or in damp environments.
- Use devices as intended: Read and follow the instructions that come with electrical devices, and only use them for their intended purpose. Avoid modifying or tampering with devices, as this can create safety hazards.
- Unplug devices when not in use: When devices are not in use, unplug them from the electrical outlet. This can reduce the risk of electrical fires or damage to the device.
By following these safety tips and being mindful around electrical devices, you can help ensure a safe and enjoyable learning experience for children.
Fun Experiments to Demonstrate Voltage and Current
Learning about voltage and current can be fun and interactive for children! Here are some simple experiments that can help them understand these concepts:
- Lighting up a bulb with a battery (materials: battery, bulb, wires)
- Making a lemon battery (materials: lemon, copper wire, zinc nail)
- Creating a simple circuit (materials: battery, wires, buzzer)
For the first experiment, connect the wires between the battery and bulb, then touch the bulb’s wires to the battery’s terminals. The bulb should light up, demonstrating how the battery’s voltage provides the energy for the bulb to turn on.
The lemon battery experiment involves inserting a zinc nail and copper wire into a lemon, then using wires to connect the lemon to a small LED light or clock. The lemon’s acid helps create a small voltage, which can power the LED light or clock.
The final experiment involves creating a simple circuit with a battery, wires, and buzzer. Attach the buzzer to the wires and connect the circuit to the battery. The buzzer should sound, demonstrating how the battery’s voltage creates a current that powers the buzzer.
These experiments are not only educational but also promote hands-on learning and critical thinking. Encourage children to explore and ask questions as they conduct these experiments.
Additional Resources for Learning Voltage and Current
If you want to dive deeper into the world of electricity with your child, there are many books, websites, and educational resources available that can help. Here are some recommendations:
- Electricity and Magnetism for Kids by Baby Professor: This book uses simple language and colorful illustrations to introduce the basics of electricity and magnetism to young children.
- BrainPOP: This educational website offers animated videos, quizzes, and games on a variety of science topics, including electricity and circuits.
- The Magic School Bus and the Electric Field Trip by Joanna Cole: In this classic children’s book, Ms. Frizzle takes her class on a field trip through the world of electricity.
- Exploratorium: This science museum’s website offers interactive exhibits and activities that allow children to explore electricity and circuits in a hands-on way.
Remember, the key to teaching your child about voltage and current is to keep it simple and fun. Don’t be afraid to experiment and explore together!
Explaining voltage and current to children may sound like a complex task, but it can be done in a simple and fun way. Remember to use age-appropriate language and relatable examples to help children understand these concepts. By breaking down voltage and current into easy-to-understand terms, you can spark curiosity and inspire a love for science in children.
Always emphasize electrical safety and provide simple guidelines for children to follow when using electrical devices. Encourage hands-on learning through fun experiments and provide additional resources for children who want to learn more about electricity. Above all, nurture an open dialogue with children about science and electricity, as this can inspire them to pursue careers in STEM fields in the future.
Can You Share Some Simple Tips for Explaining Newton’s Third Law to Kids?
Explaining Newton’s Third Law to kids may seem challenging, but it can be done effectively. Start by using relatable examples like bouncing balls or pushing a toy car. Teach them that every action has an equal and opposite reaction, meaning when they push or pull something, an equal force pushes or pulls back. Soon, they will grasp the concept, and explaining Newton’s Third Law to kids becomes easy.
Q: How do I explain voltage and current to a child?
A: When explaining voltage and current to a child, it’s important to use simple and relatable examples. Start by describing voltage as the force that pushes electric charges through a circuit. You can use the analogy of water pressure in a pipe to help them understand how voltage works. Current, on the other hand, represents the flow of electric charges. It’s similar to the flow of water through a pipe. By using age-appropriate language and relatable scenarios, you can make these concepts easier for children to understand.
Q: What is voltage?
A: Voltage is the force or pressure that pushes electric charges through a circuit. Think of it as the water pressure that pushes water through a pipe. Voltage is measured in volts, and different devices require different voltages to work. For example, batteries usually have a specific voltage, and outlets provide a standard voltage for household appliances.
Q: What is current?
A: Current represents the flow of electric charges through a circuit. It’s like the flow of water through a pipe. Current is measured in amps, and it determines how much electric charge is flowing per second. Just like water can flow faster or slower through a pipe, the current can be stronger or weaker depending on the voltage and resistance in a circuit.
Q: How do voltage and current work together?
A: Voltage and current are closely related. Think of voltage as the energy that provides the push for the current to flow. Using the analogy of a water slide, the higher the voltage, the stronger and faster the current. Voltage is what sets the pace for the flow of electric charges in a circuit.
Q: Can you give me some examples of voltage and current?
A: Sure! For voltage, think of batteries and outlets as sources of power. When you insert a battery into a device, it provides the necessary voltage to make it work. Similarly, when you plug an electronic device into an outlet, it receives the voltage it needs to operate. For current, consider the flow of electricity in appliances. When you turn on a light switch, the current flows through the circuit, allowing the light to turn on. Charging a phone is another example of current flowing from a power source to the device.
Q: How can I ensure safety when dealing with voltage and current?
A: Safety is crucial when it comes to voltage and current. Avoid touching exposed wires and always handle electrical devices with caution. Make sure to explain to children the dangers of electrical shocks and why it’s important to follow safety guidelines. Additionally, remind them not to use appliances near water, as water conducts electricity and can pose a risk.
Q: Are there any fun experiments to demonstrate voltage and current?
A: Yes, there are! You can try simple experiments to help children grasp the concepts of voltage and current. For example, show them how a battery powers a light bulb by creating a circuit with wires and a bulb. Another fun experiment is making a buzzer work by creating a simple circuit with a battery, wires, and a buzzer. Encouraging children to explore and learn through hands-on activities can make understanding voltage and current more enjoyable.
Q: Can you recommend additional resources for learning more about voltage and current?
A: Absolutely! There are many books, websites, and educational resources available to enhance a child’s understanding of voltage and current. Some popular options include children’s science books, interactive websites, and educational videos. Continuous learning and curiosity are key when it comes to science and electricity.
|
Feathers are seen only in birds, and all bird species have feathers. Interestingly, birds are dinosaurs – in fact, they are the sole surviving lineage in the entire dinosaur family tree. Hence, feathers can be traced back to the dinosaurs. However, it is important to note that not all dinosaur species sported feathers.
Functionally, feathers are thought to provide flight as well as insulation. However, since remains of feathers have been discovered in non-flying dinosaurs, the functions of feathers have been extended to waterproofing and even thermoregulation.
Traditionally, feathers were thought to have evolved from reptilian scales – the scales in the ancestors of birds frayed and split, eventually turning into feathers. It was also speculated that the ancestors of birds were small reptiles that lived in the canopies of trees, leaping from tree to tree. If their scales were longer, they would have provided more lift, enabling the animal to glide further and further. Later down the line, the arms would have evolved into wings – transitioning from gliding to true powered flight. However, newer research has raised objections to this idea.
Regardless, feathers in modern birds are often brightly colored in response to sexual selection (attracting the opposite sex), so feathers in dinosaurs could have evolved for the same reason. This speculation gained traction in 2009, when scientists took a closer look at the structure of fossil feathers, revealing microscopic structures called melanosomes. These fossilized, sac-like melanosomes were compared to the melanosomes of living birds.
What they found was very interesting – the melanosomes correspond precisely in shape to structures that are associated with very specific colours in the feathers of extant birds. This meant that the scientists were able to reconstruct the feathers' colours. For instance, the tail of Sinosauropteryx appears to have had red and white stripes. Scientists believe that this may have been used for courtship, where males attracted females. Some scientists also believe that both sexes might have used these stripes the way a zebra uses its stripes – to confuse potential predators.
Explore more fascinating topics at BYJU’S Biology. We also update our content to ensure accuracy as per the latest research, hence, consider registering with us – for free.
|
The first industrial robot was invented in 1954 and was installed seven years later in a General Motors factory for spot welding and die-casting. Since then, robotic technology has been used in industries from manufacturing to agricultural farming as a means to increase efficiencies, lower costs, and increase revenues. These robots are usually designed to work independently, executing pre-scripted tasks in spaces protected from human interference. They have increased factory productivity but have limited capabilities.
Cobots, or collaborative robots, are a new step in industrial robot technology. Unlike most robots, which act as replacements for human workers (and often operate in cages to prevent injury to workers), cobots are designed to work side by side with their human counterparts, even collaborating on the same task. How do these robots gain new abilities that can increase their operational value while remaining safe and secure as they operate in a factory near humans?
One way to increase robotic abilities in a safe and efficient manner is to use an innovative new technology: computer vision. This technology enables a robot, or computer system, to use a camera or scanner to transform multidimensional inputs into data it can process, “perceiving” its surroundings and mimicking sight. Computer vision coupled with machine learning gives the computer increased technical abilities and the opportunity to perform more complex tasks. Robots accessing computer vision gain abilities beyond scripted tasks and can augment the abilities of their human coworkers by participating in their labors or by using technologies such as infrared imaging to see and report on things invisible to the human eye.
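The article does not name a particular vision library, so as a hedged sketch, the snippet below assumes OpenCV's Java bindings are installed and that a camera frame has already been saved to disk as part.png (a hypothetical file name). It shows the basic idea of turning a multidimensional input, an image, into a number a program can act on:

```java
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;

public class EdgeCount {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);           // load the native OpenCV library
        Mat frame = Imgcodecs.imread("part.png");               // read the saved camera frame
        if (frame.empty()) {
            System.err.println("Could not read part.png");
            return;
        }
        Mat gray = new Mat();
        Imgproc.cvtColor(frame, gray, Imgproc.COLOR_BGR2GRAY);  // collapse colour channels to grayscale
        Mat edges = new Mat();
        Imgproc.Canny(gray, edges, 50, 150);                    // detect edges in the image
        int edgePixels = Core.countNonZero(edges);              // reduce the image to one measurement
        System.out.println("Edge pixels detected: " + edgePixels);
    }
}
```

A real cobot pipeline would feed measurements like this into a trained model rather than a simple threshold, but the flow is the same: pixels in, structured data out.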
This technology dramatically increases the potential for robotics in industry, creating avenues that would not otherwise be viable. For example, using an AI-enabled cloud, connected robots could recognize objects faster and send collective messages, notifying or warning humans of situations that they could not see. They could also aid in quality control, as they could be able to recognize the condition of products when compared against the expected visual representation.
Similar advantages could pertain to agricultural production. Independent robots using computer vision could differentiate between product quality levels; for example, the robot could use imaging types in both visible and ultraviolet light to detect below-surface discrepancies and extract a higher profit from varying products by identifying food grades. It could even warn for diseases, such as peach leaf curl on trees, that would significantly reduce productivity if not treated.
Currently, industrial robots harbor many potential safety dangers, as they have no awareness of their surroundings other than what is provided by sensors; this could cause serious harm to people working nearby or alongside them. However, with the addition of new sensing technologies, robots could be used in closer proximity to humans and in more confined spaces, so that factory workers and robots would be able to safely work in tandem. Both the production capacity and the safety of the factory could increase. Robots could perform more complex tasks, and they could operate in a disordered space by recognizing the objects they should interact with.
Wind River offers solutions that incorporate the latest ROS 2 framework, so developers can focus on application development, leading to more innovative robotics. Compute and partitioning capabilities can protect safety applications while providing the high performance that is important to enhance further collaboration between humans and robots.
To learn more, contact our sales inquiry desk.
|
Science in the Scientific Revolution
Science in the Scientific Revolution is the third book in a hands-on, multilevel elementary science series that introduces scientific concepts using history as its guide. It covers the scientific works of natural philosophers from 1543 to the end of the 1600s. Because the course covers science as it was developed, it discusses a wide range of topics including astronomy, human anatomy, medicine, botany, zoology, heliocentrism, geocentrism, gases, pressure, electricity, fossils, microbiology, binary numbers, gravity, conservation laws, and the laws of motion. Throughout the course, the student learns that most of the great natural philosophers who lived during this time were devout Christians who were studying the world around them to learn more about the nature of God.
Because of its unique design, the course can be used by all elementary-age students. Each lesson contains an interesting hands-on activity that helps illustrate the scientific concept that is being discussed, and it concludes with three different levels of review exercises. Students do whatever review exercise matches their specific level of understanding.
|
Year Five Half Term Homework
Hello Year 5
Next week is our Half Term holiday so we have a project ready for you to have a go at if you would like to.
Our Geography topic for next Term is Rivers. We would like you to carry out some research and present your findings in any way you wish to- a powerpoint, a poster, a drawing, a painting, a Fact File, a model or any other imaginative way.
You can choose one or more of the following areas to research:
- The Water Cycle - How are rivers created?
- What is a river? - include features of a river
- How do people use rivers around the world?
- Flooding- problems and solutions
- Longest Rivers
- What lives in a river?
There is lots of useful information on the Canal and Rivers Trust website:
Here are some other websites to get you started.
I have attached a glossary that will give you some information and technical language.
Have fun learning and we are looking forward to seeing your work.
Year 5 Team
|
Q: WHAT IS TARTAR AND WHY IS IT BAD FOR YOU?
A: Tartar (also known as dental calculus) is hardened bacterial plaque; it forms when dental plaque left on your teeth hardens through contact with minerals in your saliva. There are two types, and one of them is more harmful than the other:
1. Tartar along the gum line
This type is called supragingival (above the gum) tartar and is often formed on the lingual surface of the mandibular anterior teeth or on the outside of the large teeth in the upper jaw.
2. Tartar between the teeth and the gingiva
The other, and more harmful, type of tartar is called subgingival (below the gum) tartar. It thrives within the sulcus (gum pockets) between the teeth and the gingiva.
|
The world around us is changing as our professional and personal work depends heavily upon the use of the internet. Technology is evolving at a rate of knots as new devices are incorporated into our daily lives on a regular basis. Mobile phones and computers are perhaps the most essential devices we rely upon to execute any task. Java and the C language, with its arrays, are two of the most significant programming tools that our laptops and computers depend upon. Without these two coding languages, it would be very difficult for us to operate our computer systems.
No matter how big or small an application you run on your computer or mobile device, it is ultimately built with a programming language. There are a considerable number of programming languages for developing applications and software.
Java is an object-oriented programming language whose main objective is to let developers write code once and run it anywhere. Java code can run on any kind of platform without recompiling, provided the platform supports Java. Its syntax follows rules similar to those used by the C and C++ programming languages. Reportedly, Java was one of the most popular programming languages in 2018, mainly used for the development of web applications and software-based applications.
If you are new to the world of Java, there are sample Java tutorials that show you how to find the sum of two numbers.
These Java programs let you enter two numbers and print their sum on the screen. Running a sample like the one sketched below will help you get a better idea of the entire concept.
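A minimal version of the kind of sample program the tutorials describe might look like this (the class and variable names are illustrative, not taken from any particular tutorial):

```java
import java.util.Scanner;

public class SumOfTwoNumbers {
    public static void main(String[] args) {
        Scanner input = new Scanner(System.in);     // read the numbers typed by the user
        System.out.print("Enter the first number: ");
        int first = input.nextInt();
        System.out.print("Enter the second number: ");
        int second = input.nextInt();
        int sum = first + second;                   // add the two numbers
        System.out.println("Sum = " + sum);         // show the result on the screen
        input.close();
    }
}
```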
C is a procedural programming language. It is mainly used to develop the operating systems of our computers and mobile devices. C uses arrays to store values of the same type in a group that can be accessed later when needed. For example, if you want the program to print all the numbers from 200 to 209, a short array-based loop will give you the desired output; a sketch follows below.
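The exercise described here is a C array exercise; the sketch below expresses the same idea in Java, whose array syntax closely mirrors C's (in C the declaration would be int numbers[10]; and the loops would look almost identical):

```java
public class PrintRange {
    public static void main(String[] args) {
        int[] numbers = new int[10];          // in C: int numbers[10];
        for (int i = 0; i < numbers.length; i++) {
            numbers[i] = 200 + i;             // store 200..209 in the array
        }
        for (int value : numbers) {
            System.out.println(value);        // print each stored value
        }
    }
}
```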
Learning about C arrays can be a little complicated at first, but once you understand the concept and the basics of the language, it becomes easier with practice and patience.
|
Vortex rings are vortices that form in a fluid or gas, often when the speed of a fluid/gas is rapidly changed or when a fluid or gas of a different makeup or speed is injected into a second fluid or gas. Fluid dynamics, buoyancy forces, friction, drag and various other factors play into the formation and sustenance of vortex rings both underwater and in the air.
If you have ever sat down for a relaxing afternoon at a hookah bar, you’ve likely seen someone showing off their skills in blowing smoke rings. They are captivating because, despite being made of a gas, which we know will (generally) distribute evenly within its container, these tight rings seem to whirl and dance in place, refusing to break apart or fade. Some people can even do tricks, blowing dozens of rings or passing tiny spinning rings of gas through larger ones.
Humans aren’t the only creatures who have a fascination with such rings. Various marine creatures can create such rings underwater, both with air and fast-moving water. In fact, such rings—called vortex rings—can be made in a number of unexpected places, and still hold some mysteries for researchers to uncover.
How Are Vortex Rings Formed?
More formally known as a toroidal vortex, a vortex ring is a vortex of a fluid or gas that forms around an imaginary axis line in the form of a closed loop. Basically, it looks like a ring of water or air, spinning tightly around itself and temporarily maintaining its shape and form. Some vortex rings can retain their shape for impressively long distances through water and air, provided they aren’t disturbed.
These types of vortices form very often in turbulent water, where the speed and direction of water varies in different pockets and regions, but in such unstable conditions, they are difficult to observe. However, you can see them rather clearly in other forms, such as rising from a lit tobacco pipe, around an erupting volcano, in front of a just-fired artillery weapon, in the explosion of an atomic bomb, or even emerging from your own mouth at a hookah bar!
The science behind toroidal rings is quite simple at its most fundamental level. As a fluid or gas moving at a certain speed enters a region where another fluid or gas is moving at a different speed, a ring can form. At the interaction border of these two substances, the fluid particles move in a circular pattern around the core of the ring. The inner edges of the ring move faster than the outer edges, and therefore fold back in on themselves. These inner fluid particles spin around to the outer edge of the ring, begin to slow down and gather, and are then pulled back around the rotating core as they are caught in the wake of the inner section, which is moving faster!
These visually fascinating types of rings can be caused by numerous things, such as dropping a mass into a fluid. In the wake of the descending mass, a ring of water will often form; imagine a bullet being fired through water in the movies! Some of the long watery wake from the rapidly moving bullet will form into vortex rings behind it, with the inner edges of the ring moving faster than the outer edges, perpetuating itself through the water.
Smoke rings are the easiest form of vortex rings to observe, and while we don’t encourage tobacco use in our readers, it is a rather beautiful thing to witness when done well. When forcing a large mass of fluid (smoke) out of a narrow opening (the mouth), the smoke will interact with the edges of the opening and begin to flow back upon itself when it encounters the non-moving air outside the opening. The inner edges of the smoke ring will move rapidly, coiling back around the core and holding the ring shape as it moves forward in the air. These smoke rings will slowly widen and lose velocity as more of the fluid particles are disturbed and break off from the toroidal motion.
Vortex Rings Underwater
The creation of a vortex ring, as we have described it, sounds like something inherently manmade, but knowledge of these rings extends into the animal kingdom, including cetaceans, specifically dolphins, beluga whales and humpback whales. There have been countless observations and intentional studies on these creatures, because they often create, manipulate, play with and utilize vortex rings underwater.
When a dolphin flicks the end of its tail, or moves its head quickly, the shift in speed of the surrounding water is often enough for a vortex ring of water to form. While this ring would be invisible—it is simply water rapidly rotating in other water—these creatures then blow bubbles, which get caught up in the swirl of the ring. There is perhaps no clearer evidence that dolphins know how to play than watching them create vortex rings, infuse them with air and then dart around, seeming to play catch with their own creations.
As mentioned earlier, these rings can often perpetuate for some time, allowing these creatures to play with them, moving them by flapping their tails, blowing other rings, cutting them in half, or even chomping through them. This final act of biting, when a playful sea creature is apparently “finished” with the game, will cause the ring to dissipate into normal air bubbles floating up to the surface.
Interestingly enough, vortex rings are particularly stable in water because fast-moving fluid has a lower pressure, so the still or slower water surrounding it will have a higher pressure, thus stabilizing the ring and keeping it in one piece, so to speak. This provides even more time for animals to play with and maneuver these bubbles without them dissipating, as often happens with gas or smoke rings.
Animals aren’t the only things that create vortex rings underwater; propellers and scuba divers are often the culprits. Propellers rapidly change the speed of water, and scuba divers occasionally release rapid expulsions of gas from their masks. Both of these scenarios are ideal settings for the formation of these entrancing rings!
A Final Word
If you’re feeling daring, next time you’re underwater, try blowing out a rapid burst of air from your mouth. If you do it right, you may be able to create a vortex ring of your own. For those who partake in the occasional evening at a hookah bar, work on your smoke ring-blowing skills and then explain the science behind them to your friends. Remember, the rapid movement of one fluid or gas through another fluid or gas of a different density, pressure or speed has the potential to form a vortex ring due to a combination of dynamic forces.
|
National Physical Activity Recommendations
Children love to play and be active!
Being physically active every day is important for the healthy growth and development of infants, toddlers and pre-schoolers. Physical activity for children includes both structured activities and unstructured free play, and can be done indoors or out. The following recommendations made by the Commonwealth of Australia, Department of Health and Ageing (2010) are for children who haven’t started school yet.
Active Play Ideas
- Children will love running and playing with streamers made from colourful ribbons or scarves, hoops and balloons.
- Catching and hitting games using a variety of objects and balls – you may like to try bubbles, bean bags and a range of balls of differing sizes.
- Create an obstacle course using items from around the house – try boxes, sheets, chairs and tables – kids will love exploring under, over, through and around the course that you create.
- Encourage jumping games – make an imaginary river using a rope, or an imaginary log using a pillow for children to jump over.
- Digging and building in the sand, either at the beach or in a sand pit.
- Children can help in the garden, maybe even create a small garden (in pots is fine if you have limited space) for children to tend and care for – digging holes for plants and carrying water cans are great ways to be active.
- Playgrounds offer a wide variety of experiences for children to be active – climbing, swings and slides are great opportunities for active play.
- Playing with pets is a fun way to get kids moving.
These tips may help to develop positive TV viewing habits with your child:
- Set viewing time and content limitations for children – encourage your child to have an active role in selecting what TV programs they wish to view within these limitations.
- Avoid TV during times of the day when kids could be outside engaging in active play and exploration – if necessary record programs so they can be viewed at a more suitable time.
- Try to have TV-free meal times – allowing time for family conversation and interaction.
- Make your children’s bedrooms screen-free zones.
- Try to supervise your child during their TV watching and other electronic media use – parental involvement has been shown to have a positive impact on the educational value of these activities.
- Turn the TV off when the scheduled program is finished – having the TV on in the background can distract children while they are playing or interacting with others.
- Be prepared with active play alternatives when the kids want to turn on the TV.
|
Many thermal processes have been developed to eliminate municipal solid wastes or to produce energy from them. These processes span a wide range of applications, from the simplest burning systems to plasma gasification. Plasma gasification is based on the re-forming of molecules after all molecules are converted to smaller molecules or atoms at high temperatures. In this work, the aim is to produce fuel gas by plasma gasification of municipal solid wastes at high temperatures. For this purpose, a plasma reactor with a capacity of 10 kg h⁻¹ was designed that can gasify municipal solid wastes. Plasma gasification with and without steam and oxygen was performed at temperatures of 600, 800, 1000, 1200, 1400 and 1600 degrees C in the reactor. A gas mixture containing methane, ethane, hydrogen, carbon dioxide and carbon monoxide, whose composition varies with temperature, was obtained. It was found that plasma gasification (or plasma pyrolysis, PG), plasma gasification with oxygen (PGO) and plasma gasification with steam (PGS) were more prone to CO formation. A gas product consisting of 95% CO was produced between 1200 and 1400 degrees C. It was observed that a gas with high energy capacity may be produced by feeding oxygen and steam into the entrance of the high-temperature region of the reactor.
|
For most of the 40-plus years the term "dyslexia" has been in existence -- and although the diagnosis has long been considered a "learning disability" -- it has been based on comparisons with average readers. Simply put, a child could be diagnosed with dyslexia if he or she shows an IQ in the "normal" range but falls at or below the 10th percentile on standardized reading tests. This cut-off has been arbitrary, often varying from district to district and based on Response to Intervention (RTI) criteria. As a result, a child who falls at the 12th percentile might be considered a poor reader while a child at the 10th percentile would be diagnosed with dyslexia.
For parents who have a child diagnosed with dyslexia, it is obvious early in the educational process that their bright child is not just behind in reading, but dumbfounded by the written word. A child with dyslexia seems to struggle at every turn.
Special educators, neurologists, and psychologists have understood that, too, and since the 1970s have assumed dyslexia has a neurological basis. "Dyslexia" derives from the Greek alexia, which means "loss of the word"; alexia was the diagnostic term used when adults lost the ability to read after a brain injury. "Dyslexia" was adopted to denote a lesser, though still neurologically based, form of reading impairment in children. However, determining the neurological basis has been elusive until recently.
The Search for a Neurological Basis
In early attempts at researching the underlying causes of dyslexia in the 1970s, there were no technological medical procedures to study brain processes that might be involved in reading normally or abnormally. Because of the inability to determine the neurological cause(s) of dyslexia, in some educational circles it became synonymous with "developmental reading disorder," and the cause was deemed unimportant. Rather, the goal was to develop and test interventions and measure their outcomes, without an effort to relate the interventions to underlying causation.
A major limitation to that approach is that it is symptom-based, yet determining the cause is essential to identifying an effective solution. When we clump children together into a single diagnostic category based on test scores, we not only fail to address what might be causing the dyslexia, but we also ignore variability in performance that limits our ability to identify individual differences.
Fortunately, advances in neuroscience, buttressed since the late 1990s by neuroimaging and brain electrophysiological technology, have led to an emerging consensus about the causes of dyslexia -- underlying capacities essential for learning to read, which emerge through brain development, are less developed in children diagnosed with dyslexia.
And the best news is that those processes are amenable to carefully designed training approaches.
The Dyslexic Brain
In the early to mid-2000s, research on the underlying basis of dyslexia pointed to a primary problem with the phonological processing of speech sounds. Early research, summarized in Stanislas Dehaene's Reading in the Brain (2009), identified problems with phonological awareness, or the ability to segment words into their component speech sounds. More recent research has delineated why that problem exists.
These findings have led to an emerging consensus, well summarized by Jane Hornickel and Nina Kraus in the Journal of Neuroscience in 2012: dyslexia is primarily an auditory disorder that arises from an inability to respond to speech sounds in a consistent manner. And Finn and colleagues at Yale published research in August 2014 (PDF, 4.7MB) indicating that this underlying problem with perception of speech sounds, in turn, affects the development of brain networks that enable a student to link a speech sound to the written letter.
Based on this research, reading interventions for dyslexia should be most effective if they combine auditory perceptual training and memory for speech sounds (phonological memory) with exercises that require relating speech sounds to the written letter (phonemic awareness and targeted decoding). And, in fact, neuroscience research bears that out. Temple et al (2003) used fMRI to show that when a program with that type of intervention was used intensively (five days a week for six weeks) with 35 students (as well as three adults) diagnosed with dyslexia, not only did decoding and reading comprehension improve significantly, but brain regions active in typical readers during phonological awareness tasks were activated.
Added to the neuroscience research on causation is additional scientific research conducted by education specialists on variability in patterns of dyslexia and the importance of individualizing interventions. Some children diagnosed with dyslexia read words as a whole and guess at internal detail, showing major problems with phonological awareness. But other children may over-decode to the point that they have trouble reading irregular sight words and read too slowly to comprehend what they have read.
Ryan S. Baker and his colleagues at Columbia University, Polytechnique Montréal, Carnegie Mellon, and other universities are researching the factors necessary for effective tutoring of students with learning issues (PDF, 682KB). Their research indicates that an effective tutor is one who considers variability and has the ability to diagnose what a student knows and does not know, and then adapt interventions to the diagnosis. For example, if a student has trouble with decoding, interventions that emphasize phonological awareness and provide additional practice with decoding are often helpful. But for children who over-decode, programs that build fluency through repetitive guided oral reading practice may be more useful. Baker and his colleagues have taken this research an extra step to determine the most effective intelligent tutoring systems -- technological interventions that can free up the teacher by providing adaptive tutoring programs individualized to each student.
The Potential to Retrain the Brain
Our understanding of dyslexia has come very far in the past 40 years, with neurophysiological models developed in just the past five years explaining the underlying capacities required for reading and the best methods for individualized adaptive interventions. Fortunately, treatment options have kept pace with the research, and children with dyslexia today have the potential to train their brains to overcome the learning difficulties that earlier generations were destined to carry with them for a lifetime.
|
Paris wears its history on its sleeve. Its countless celebrated landmarks – from the white domes of Sacré-Cœur to the imposing black edifice of the Tour de Montparnasse – provide vivid reminders of the different eras and rulers who have left their marks. Here are a few of the key events that have shaped the city’s history, making Paris what it is today.
The Parisii were a sub-tribe of the Celtic Senones, and they established themselves on the Île de la Cité, one of the remaining natural islands along the Seine, in the years between 250 and 225 BC. In 52 BC, the Roman army defeated the Parisii and established a Gallo-Roman city that they initially called Lutetia. By the time the Western Roman Empire fell in AD 476, however, the city was more commonly referred to as Parisius, a name that became Paris when translated from Latin to French.
Clovis I (c. AD 466-511) was the first king of France. He united several of the Frankish tribes living in territories that now form part of modern-day France, doing away with the system of local chieftains and bringing them together under a single ruler. He established a dynasty that became known as the Merovingians, with the kingship passed down his line of succession for over 200 years. Widely recognised as a clever military tactician, he took control of a small rump state in what had been the Western Roman province of Gaul, in northern France, in 486. From there, his territories quickly expanded, and he made Paris his capital city in 508.
The magnificent Our Lady of Paris, better known by its French name, Notre-Dame, is one of the most famous cathedrals in the world and one of the finest examples of French Medieval Gothic architecture. Construction began in 1163 during the reign of Louis VII and was completed around 1260.
The cathedral perfectly exemplifies this period’s style, which broke with Romanesque convention to introduce Gothic elements, including pointed arches, ribbed vaults and flying buttresses. Originating in France, the style spread across the continent in the 12th and 13th centuries and dominated European architectural tastes for 400 years.
On 15 April 2019, while undergoing restoration work, Notre-Dame caught fire. Much of the cathedral’s roof was destroyed, and the spire, or flèche, collapsed completely in dramatic images beamed around the world. In the immediate aftermath, French President Emmanuel Macron pledged to restore the cathedral to its former glory. Donations flooded in from both France and around the world to help with the reconstruction work.
Paris is often referred to as ‘The City of Light’, and not just because it was one of the first European cities to install widespread gas-powered lighting. It was also the centre of the European Enlightenment in the mid-18th century – a period that was marked by an explosion of new, revolutionary ideas concerning people’s relationships to the world around them, to God and to the state.
It was an era in which philosophers, writers and thinkers emphasised the primacy of logic and science over religion. They questioned the idea of the absolute monarchy and the divine right of kings and promoted notions of individual liberty and ideals such as freedom of speech.
Led by French thinkers such as Denis Diderot and Voltaire, as well as the Swiss-born Jean-Jacques Rousseau, this movement covered everything from arts and sciences to economics but mostly concerned itself with politics. Ideas that emerged during this ‘enlightened’ period provided much of the intellectual underpinning for the French Revolution as well as the foundation of the United States.
In the late 18th century, a number of factors contributed to growing discontent among France’s lower classes. Resentment stemmed in part from the privileges enjoyed by the aristocracy and the Catholic clergy, the government’s debt, unpopular tax schemes and a series of bad harvests. Influenced by Enlightenment ideals, demands for change grew, and the Third Estate, or commoners, soon rose against the monarchy in an attempt to achieve political and social rights.
On 14 July 1789, French civilians stormed the Bastille prison in Paris, marking the start of the Revolution. Following this event, violent outbreaks spread from the capital across the country, as people went out into the streets to protest against the system. Ten years of bloodshed and instability followed, costing thousands of lives and ultimately ending the absolute monarchy of King Louis XVI.
The tremendous repercussions of the French Revolution across all aspects of society and politics and its impact in Europe make it one of the pivotal historical moments of the era, with Paris at its centre stage.
The leader of the French Revolution during its final years, Napoleon Bonaparte assumed the role of Emperor of France in 1804 and led the country until 1814. The military leader’s contributions to the country are numerous and continue to hold great significance in France today.
Under Napoleon’s gaze, the Napoleonic code, which laid the foundation for French law, was created, and the emperor helped popularise education, encouraged religious freedom and reformed France out of economic decay. Among other notable achievements, Napoleon introduced the massive construction of industries and infrastructure in France, fairer taxes and a new commercial code. But it wasn’t just in his own country that Napoleon’s liberal views were spread – multiple conquests spearheaded by the leader’s military intelligence and ambition allowed for the dissemination of his ideas across Europe.
The elegant architecture of Paris’s most recognisable streets is largely thanks to one man: Georges-Eugène Haussmann. The first president of France, Napoleon III, tasked Haussmann with redeveloping the city in an effort to solve the many issues related to its Medieval neighbourhoods, which were drowning in filth and plagued by disease and frequently became sites of popular discontent. Haussmann’s regeneration project annexed suburbs to the city and introduced wide avenues, new parks, squares, fountains, aqueducts and sewers. Some of Paris’s most iconic structures, including the Palais Garnier (Opéra), Gare du Nord, Parc Monceau and Place du Châtelet, can be attributed to Haussmann’s project.
Characterised by a sense of optimism, peace, prosperity and technological and scientific innovation, La Belle Époque refers to the pre-war era in France, which started in 1870 and continued until 1914. These positive factors led to a thriving artistic wave through the country, with Paris as the main hub. Some of the major contributions of this period include the construction of the Eiffel Tower, the inauguration of the Paris Métro, the opening of the Moulin Rouge and the post-Impressionist movements of the visual arts. Named retroactively, the era’s optimism contrasted significantly with the horrors of the world war that followed.
At the height of La Belle Époque, Paris hosted L’Exposition Universelle (The World’s Fair) to mark the 100th anniversary of the storming of the Bastille. The main attraction was the Eiffel Tower – built to symbolise the people’s resistance and constructed of iron to represent the innovations of the new industrial era. The fair itself featured a re-creation of the Bastille prison before the Revolution, with the interior courtyard – now topped by a blue, fleur-de-lis-decorated ceiling – used for balls and other gatherings during the event.
While the battle on the ground never reached Paris itself, rationing and sustained bombing attacks meant that the city was still majorly affected by World War I. When German soldiers entered northeast France in September 1914, many Parisians were forced to flee the city, and the government moved to Bordeaux, afraid that Paris would be taken and destroyed by German troops.
Luckily, the destruction of Paris was avoided, but the country lost many lives through the drawn-out conflict itself as well as a flu epidemic. France ended the war on the winning side, but Paris found itself caught between feelings of relief and despair and loss.
Paris began to recover after the trauma of World War I, and by the end of the 1920s, artistic and cultural development once more characterised the city. The cafés of Montparnasse and St Germain became the meeting places of national and international artists, thinkers and writers, such as Pablo Picasso, Salvador Dalí, Ernest Hemingway and F Scott Fitzgerald. The intellectual and cultural evolution that the post-war period engendered, combined with economic improvement and peace efforts, gave the 1920s the title of Les Années Folles (The Crazy Years). But France still struggled to recover from the ill effects of World War I, and in 1931, the situation was exacerbated as the Great Depression hit the city.
By the end of the 1930s, France was at war with Germany again, having declared war alongside Britain in September 1939 in response to Hitler’s invasion of Poland. Eight months later, German troops attacked France, defeating its army, and the French government fled from Paris to Vichy in June 1940. (The Vichy government collaborated with the Nazis beginning in 1942.) Paris was soon occupied by German soldiers, who governed the city alongside Nazi-approved French officials. A strict curfew was imposed on those living in Paris, and rationing was introduced, especially for essential items such as food and coal.
In August 1944, the French Resistance fought against the Nazis to liberate Paris, and on 25 August, French Resistance fighters and American troops succeeded in freeing the city. But it was at this point that Paris faced one of its greatest threats of World War II: Hitler gave the order to destroy and burn the French capital, which had already been covered in dynamite and explosives. Luckily, the Nazi general Dietrich von Choltitz had developed such an affection for the beautiful city during his rule that he refused the command, eventually surrendering to the French Resistance and protecting Paris’s structures from the deadliest war in modern history.
|
Given two positive integers N and K, the task is to find the sum of the quotients of division of N by powers of K which are less than or equal to N.
Input: N = 10, K = 2
Output: 18
Dividing 10 by 1 (= 2^0). Quotient = 10. Therefore, sum = 10.
Dividing 10 by 2 (= 2^1). Quotient = 5. Therefore, sum = 15.
Dividing 10 by 4 (= 2^2). Quotient = 2. Therefore, sum = 17.
Dividing 10 by 8 (= 2^3). Quotient = 1. Therefore, sum = 18.
Input: N = 5, K = 2
Output: 8
Dividing 5 by 1 (= 2^0). Quotient = 5. Therefore, sum = 5.
Dividing 5 by 2 (= 2^1). Quotient = 2. Therefore, sum = 7.
Dividing 5 by 4 (= 2^2). Quotient = 1. Therefore, sum = 8.
Approach: The idea is to iterate a loop while the current power of K is less than or equal to N and keep adding the quotient to the sum in each iteration.
Follow the steps below to solve the problem:
- Initialize a variable, say sum, to store the required sum.
- Initialize a variable, say i = 1 (= K^0), to store the powers of K.
- Iterate until the value of i ≤ N, and perform the following operations:
- Store the quotient obtained on dividing N by i in a variable, say X.
- Add the value of X to sum and multiply i by K to obtain the next power of K.
- Print the value of the sum as the result.
Below is the implementation of the above approach:
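A minimal Python sketch of the approach described above could look like the following; the function name is an illustrative choice, and the sketch assumes K is at least 2 so that the powers of K keep growing and the loop terminates.

def sum_of_quotients(n, k):
    # Add n // (k^0) + n // (k^1) + ... while the current power of k is <= n
    total = 0
    i = 1  # current power of k, starting at k^0 = 1
    while i <= n:
        total += n // i  # quotient of n divided by the current power of k
        i *= k           # move to the next power of k
    return total

print(sum_of_quotients(10, 2))  # 18
print(sum_of_quotients(5, 2))   # 8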
Time Complexity: O(log_K(N))
Auxiliary Space: O(1)
|
Secretary of the Interior Ryan Zinke is blaming this summer's large-scale wildfires on environmentalists, who he contends oppose "active management" in forests.
But the idea that wildfires should be suppressed by logging the forest is far too simplistic. Most scientists agree that large, hot wildfires produce many benefits for North American forests. Notably, they create essential habitat for many native species.
Fifteen years of research on spotted owls—a species that has played an oversized role in shaping United States forest management policies and practices for the past several decades—directly contradicts the argument that logging is needed to protect wildlife from fires. Wildlife biologists, including me, have shown in a string of peer-reviewed studies that wildfires have little to no effect on spotted owls' occupancy, reproduction, or foraging, and even provide benefits to the owls.
Nonetheless, despite this steadily accumulating evidence, the United States Forest Service advocates logging in old-growth forest reserves and spotted owl critical habitat in the name of protecting spotted owls from forest fires. Zinke's recent statements are just the latest and broadest iteration of the false viewpoint that logging benefits wildlife and their forest habitats.
Protecting Spotted Owl Habitat
Spotted owls are birds of prey that range from the Pacific Northwest to central Mexico. Because they nest in large old-growth trees and are sensitive to logging, in the 1980s they became symbols of the exceptional biodiversity found in old-growth forests.
The northern spotted owl in the Pacific Northwest was listed as threatened under the Endangered Species Act in 1990. At that point, about 90 percent of U.S. old-growth forest had already been lost to logging. Every year in the 1980s the Forest Service sold about seven to 12 billion board feet of public lands timber.
Listing the owl drew attention to the dramatic decline of old-growth forest ecosystems due to 50 years of unsustainable logging practices. In response the Forest Service adopted new regulations that included fewer clearcuts, less cutting of trees over 30 inches in diameter, and fewer cuts that opened up too much of the forest canopy. These policies, along with vast depletion of old-growth forests, reduced logging on Forest Service lands to about two billion board feet per year.
During the 1990s, national forest management policy for the northern spotted owl included creating old-growth reserves and designating critical habitat where logging was restricted—mostly within half a mile of a spotted owl nest. In spite of these protections, populations of northern spotted owls, as well as California and Mexican spotted owls, continued to decline on forest lands outside national parks. This was most likely due to ongoing logging outside of their protected nesting areas in the owls' much larger year-round home ranges.
Fire and Owls
Over the years the Forest Service shifted away from treating spotted owls as symbols of old-growth forest biodiversity, and instead started to cite them as an excuse for more logging. The idea that forest fires were a threat to spotted owls was first proposed in 1992 by agency biologists and contract researchers. In a status assessment of the California spotted owl, these scientists speculated that fires might be as damaging as clearcuts to the owls.
This perspective gained popularity within the Forest Service over the next 10 years and led to increased logging on public lands that degraded old-growth habitat for spotted owls.
Academic scientists, including some with Forest Service funding, published peer-reviewed studies of spotted owls and fire in 2002, 2009, 2011, and 2012. All four studies showed either no effects from fire or positive benefits from fire for spotted owls. Subsequent research on spotted owls in fire-affected forests has showed repeatedly that the owls can persist and thrive in burned landscapes.
Many Wild Species Thrive in Burned Landscapes
I recently conducted a systematic review and meta-analysis that summarized all available scientific research on the effects of wildfires on spotted owl ecology. It found that spotted owls are usually not significantly affected by mixed-severity forest fire. Mixed-severity forest fire, which includes large patches with 100 percent tree mortality, is how wildfires in Western forests naturally burn. The preponderance of evidence indicated that mixed-severity wildfire has more benefits than costs for spotted owls.
In 2017 I submitted an early version of this analysis with the same conclusions to the U.S. Fish and Wildlife Service during the agency's peer-review process for its conservation objectives report for the California spotted owl. My conclusions were not included in the final report.
Decades of science have shown that forest fires—including large hot fires—are an essential part of Western U.S. forest ecosystems and create highly biodiverse wildlife habitat. Many native animals thrive in the years and decades after large intense fires, including deer, bats, woodpeckers, and songbirds as well as spotted owls. Additionally, many native species are only found in the snag forest habitat of dead and dying trees created by high-severity wildfire.
Wildfires Threaten Homes, but Wildlife and Water Supplies Benefit
Studies have shown that wildfires are strongly influenced by a warming climate, and that logging to reduce fuels doesn't stop the biggest, hottest fires. In my view, federal and state agencies that manage wildfires should devote significant resources toward making structures ignition-resistant and creating defensible space around homes to protect communities, rather than promoting ecologically damaging logging.
It is also time to reform Forest Service management goals to emphasize carbon capture, biodiversity, outdoor recreation, and water supply as the most important ecosystem services provided by national forest lands. These services are enhanced by wildfires, not by logging.
|
Researchers studying six very bright gamma-ray bursts discovered that the pulses composing these GRBs exhibited complex, time-reversible wavelike structures. In other words, each GRB pulse shows an event in which time appeared to repeat itself backwards.
Above – A model of temporally-reversed pulse structure. An impactor (red) produces variable radiation as it travels through axisymmetric, stretched clouds (blue).
They noticed this “mirroring” effect after realizing the “smoke” of limited instrumental sensitivity smeared out GRB light, giving moderately bright pulse light curves a three-peaked appearance and faint pulse light curves the shape of a simple bump. Only the brightest GRB pulse light curves exhibit the time-reversed wavelike structures.
Hakkila says that the time-reversible light curves do not necessarily violate natural laws of cause and effect. The research team believes that the most natural explanation is that a blast wave or a rapidly-ejected clump of particles radiates while being reflected within an expanding GRB jet or while moving through a symmetric distribution of clouds.
This discovery is intriguing, says Hakkila, in that it does not appear to have been predicted by theoretical models. Despite this, the discovery should provide astrophysicists with new tools in understanding the final death throes of massive stars and the physical processes that accompany black hole formation.
GRBs are the intrinsically brightest explosions known in the universe. They last from seconds to minutes, and originate during the formation of a black hole accompanying a beamed supernova or colliding neutron stars. The narrow beam of intense GRB radiation can only be seen when the jet points toward Earth, but such an event can be seen across the breadth of the universe.
Researchers demonstrate that the `smoke’ of limited instrumental sensitivity smears out structure in gamma-ray burst (GRB) pulse light curves, giving each a triple-peaked appearance at moderate signal-to-noise and a simple monotonic appearance at low signal-to-noise. They minimize this effect by studying six very bright GRB pulses (signal-to-noise generally over 100), discovering surprisingly that each exhibits complex time-reversible wavelike residual structures. These `mirrored’ wavelike structures can have large amplitudes, occur on short timescales, begin/end long before/after the onset of the monotonic pulse component, and have pulse spectra that generally evolve hard to soft, re-hardening at the time of each structural peak. Among other insights, these observations help explain the existence of negative pulse spectral lags, and allow us to conclude that GRB pulses are less common, more complex, and have longer durations than previously thought. Because structured emission mechanisms that can operate forwards and backwards in time seem unlikely, they look to kinematic behaviors to explain the time-reversed light curve structures. They conclude that each GRB pulse involves a single impactor interacting with an independent medium. Either the material is distributed in a bilaterally symmetric fashion, the impactor is structured in a bilaterally symmetric fashion, or the impactor’s motion is reversed such that it returns along its original path of motion. The wavelike structure of the time-reversible component suggests that radiation is being both produced and absorbed/deflected dramatically, repeatedly, and abruptly from the monotonic component.
|
The smallest plants and creatures in the ocean power entire food webs, including the fish that much of the world’s population depends on for food, work and cultural identity.
In a paper published in Science Advances, NOAA Fisheries researcher Jason Link and colleague Reg Watson from the University of Tasmania’s Institute for Marine and Antarctic Studies suggest that scientists and resource managers need to focus on whole ecosystems rather than solely on individual populations. Population-by-population fishery management is more common around the world, but a new approach could help avoid damaging overfishing and the insecurity that brings to fishing economies.
“In simple terms, to successfully manage fisheries in an ecosystem, the rate of removal for all fishes combined must be equal to or less than the rate of renewal for all those fish,” said Link, the senior scientist for ecosystem management at NOAA Fisheries and a former fisheries scientist at the Northeast Fisheries Science Center in Woods Hole, Massachusetts.
The authors suggest using large-scale ecosystem indices as a way to determine when ecosystem overfishing is occurring. They propose three indices, each based on widely available catch and satellite data, to link fisheries landings to primary production and energy transfer up the marine food chain. Specific thresholds developed for each index make it possible, they say, to determine if ecosystem overfishing is occurring. By their definition, ecosystem overfishing occurs when the total catch of all fish is declining, the total catch rate or fishing effort required to get that catch is also declining, and the total landings relative to the production in that ecosystem exceed suitable limits.
“Detecting overfishing at an ecosystem level will help to avoid many of the impacts we have seen when managing fished species on a population-by-population basis, and holds promise for detecting major shifts in ecosystem and fisheries productivity much more quickly,” said Link.
In the North Sea, for example, declines in these indices suggested that total declines in fish catch indicative of ecosystem overfishing was occurring about 5-10 years earlier than what was pieced together by looking at sequential collapses in individual populations of cods, herrings and other species. Undue loss of value and shifting the catches in that ecosystem to one dominated by smaller fishes and invertebrates could have been avoided, the authors say.
Looking at the Whole Ecosystem
The first index used in the study is the total catch in an area, or how much fish a given patch of ocean can produce. The second is the ratio of total catches to total primary productivity, or how much fish can come from the plants at the base of the food chain. The third index is the ratio of total catch to chlorophyll, another measure for marine plant life, in an ecosystem.
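As a purely illustrative sketch (not the published formulas, units or scalings from the paper), the three indices can be thought of as a total and two simple ratios computed from widely available data; all variable and function names here are assumptions.

def ecosystem_indices(total_catch, primary_production, chlorophyll):
    # Toy illustration of the three index ideas described above.
    index_total_catch = total_catch                        # total catch in the area
    index_catch_to_ppr = total_catch / primary_production  # catch relative to primary production
    index_catch_to_chl = total_catch / chlorophyll         # catch relative to chlorophyll
    return index_total_catch, index_catch_to_ppr, index_catch_to_chl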
Proposed thresholds for each index are based on the known limits of the productivity of any given part of the ocean. Using these limits, the authors say local or regional context should be considered when deciding what management actions to take to address ecosystem overfishing. Having international standards would make those decisions much easier and emphasize sustainable fisheries.
The authors named the indices in honor of the late marine biologist John Ryther and NOAA Fisheries scientists Michael Fogarty and Kevin Friedland, both at the Northeast Fisheries Science Center. All have worked extensively on integrating ecosystem and fishery dynamics for better resource management.
Shifting Fish, Not Fleets
“We know that climate change is shifting many fish populations toward the poles, yet the fishing fleets and associated industries are not shifting with them,” Link said. “That already has had serious economic and cultural impacts.” The authors note that they are able to follow these shifts over time and see how they can exacerbate or even contribute to ecosystem overfishing.
Fisheries are an important part of the global economy. In addition to trade and jobs, fish provide the primary source of protein to more than 35 percent of the world’s population, and 50 percent of the people in the least developed countries, according to the authors. Regions where the greatest amount of ecosystem overfishing occurs are also where impacts can be the greatest.
Tropics, Temperate Areas Face Most Overfishing
The researchers looked at 64 large marine ecosystems around the world and found those in the tropics, especially in Southeast Asia, have the highest proportion of ecosystem overfishing. Temperate regions also have a high level of ecosystem overfishing, with limited capability to absorb shifting fishing pressure from the tropics as species move toward the poles.
“Even if tropically-oriented fleets were able to shift latitudes and cross claims for marine exclusive economic zones, it remains unclear if temperate regions could absorb shifts from the tropics, given that many temperate regions are also experiencing ecosystem overfishing and catches there have been flat for more than 30 years,” Link said.
Potential International Standard
The three indices proposed represent a potential international standard for tracking the status of global fisheries ecosystems.
“They are easy to estimate and interpret, are based on widely repeated and available data, and are a practical way to identify when an ecosystem would be experiencing overfishing based on well understood and well-accepted primary production and food web limitations,” Link said. “It would eliminate a lot of the debate about whether or not ecosystem overfishing is happening and instead focus attention on solutions. But until we can define and identify what ecosystem overfishing is, we cannot begin to address it.”
|
It’s common knowledge that rainforests are falling at rapid pace, threatening the incredible biodiversity living within them. But these forests also provide critical services for people.
Widespread forest clearing in Indonesia could be putting people’s health at risk. New research from The Nature Conservancy shows that villagers living in recently fragmented landscapes are more likely to report an increase in local temperatures, signaling a loss of the forest’s cooling services. This, in turn, could lead to increased heat stress among local residents.
The Cooling Power of Forests
On a hot day, a shady tree is the perfect respite from the summer sun. But trees do far more to cool the air around them besides block the sun’s rays. Every tree is like a slow-motion fountain, drawing water from the ground through its roots, into its trunk, and finally into the leaves, where it evaporates into the air.
“That process has considerable cooling power,” explains Nick Wolff, a climate change scientist at The Nature Conservancy. “In fact, a single tree in the tropics is the equivalent of 2 medium-size air conditioners.”
That cooling power is critical for people living in the rural tropics, particularly in less-industrialized countries like Indonesia. In Indonesian Borneo, rural communities often live without access to air conditioning and make a living through subsistence farming or other work requiring extended time outside, including selective logging or working on fiber and oil palm plantations. All of this makes rural Indonesians — and others living in similar latitudes — vulnerable to heat stress and heat illness.
Heat stress occurs when a person spends too much time in a hot environment, causing them to lose large amounts of water and salts through sweating. Some of the symptoms include nausea, vomiting, cramps, or dizziness. In extreme cases heat stress will progress to heat stroke, where the body loses its ability to cool itself, causing organ systems to shut down and often resulting in death if left untreated. Long-term exposure to heat stress can compromise the immune system, exacerbating underlying conditions and increasing susceptibility to disease.
“Most of the research on heat stress and heat illness has been done on urban settings in industrialized countries,” says Wolff, “but there are very few studies from rural settings, especially in the tropics.”
That paucity of data even extends to accurate temperature records. While high-resolution temperature data are ubiquitous in places like the United States, they are less common in Indonesia and other less-industrialized nations, particularly in rural areas. So scientists wanting to understand how climate change or land-use change might impact public health have little to go on.
From Orangutans to Ecosystem Services
Wolff’s research originated, somewhat unexpectedly, from orangutan conservation efforts. In 2008 and 2009, the Conservancy surveyed more than 4,600 people across nearly 500 villages in Kalimantan to gain a better understanding of how rural people interacted with orangutans. But the villagers were also asked an open-ended question about the importance of forests for health, and 35 percent responded that forests provide cooling services.
“We thought, wow, something is going on here and we need to dig deeper,” says Wolff. “We wanted to understand what was driving that response — whether there was something happening with the local temperatures, or deforestation, or something else?”
To answer that question, Wolff and his colleagues combined satellite temperature data with maps of canopy cover and land-use change in a 10-kilometer circle around each of the nearly 500 survey villages. They compared those maps with the village survey data, looking for a correlation between any of those factors and local perceptions of forest cooling services.
“What we found is that those villagers living in the most fractured and fragmented landscapes were more likely to notice the absence of cooling services,” says Wolff.
Their results, published in Global Environmental Change, also show that villages that had experienced recent deforestation were more likely to notice the loss of cooling. “As the landscape becomes more deforested, they don’t notice or comment on it as much,” says Wolff. “They get used to it.”
The more time that passes after deforestation or fragmentation, the less likely people are to remember how much they benefited from the forests’ ecosystem services. Wolff says that this phenomenon — known as shifting baseline syndrome — is particularly concerning because as people forget the values that the forests provide, they’ll be less likely to protect them.
Wolff adds that their results likely underestimate just how much cooling Indonesia’s forests provide. The satellite temperature data that the team used only measures temperature at the top of the forest canopy, which doesn’t capture the added cooling effect from shade beneath the trees. So even though they were able to detect a clear increase in temperature in open landscapes, following deforestation, that difference is likely much greater than the data currently show.
From Perceptions to Reality
This research only documents a link between deforestation and people’s perceptions about temperature. “But humans are good at perceiving change in their environment,” says Wolff, “so perception data can be a real harbinger of more serious issues.”
The concern is that these perceptions accurately reflect ongoing human health impacts of deforestation and land conversion in Borneo. As broad tracts of oil palm and pulp paper plantations rapidly replace forests, Indonesia has recently surpassed Brazil as the country with the highest rate of deforestation in the tropics. Between 2000 and 2010, the country lost 840,000 hectares of forest per year, accounting for 56 percent of all forest cover loss in Southeast Asia.
Aside from losing biodiversity and other ecosystems services, deforestation and fragmentation can impact both public health and economic security. In increasingly hot environments, people naturally adjust when and how they work by working fewer hours, slowing down as they work, or even changing jobs to avoid the heat. “One possibility is that their overall productivity decreases and their incomes decrease, which can have long-term livelihood impacts,” says Yuta Masuda, a sustainable development and behavioral scientist at the Conservancy. “In the worst-case scenario, this could lead to adverse economic impacts for the region and country.”
Climate change adds an additional threat to the equation, particularly for the tropics. “The tropics have little variation in seasonal temperature, so even a small increase in temperatures drives things outside of the normal range faster than other places,” says Wolff. “There’s a physiological limit to what people can stand, and in the tropics they’re already living close to it.” The same problem likely applies to other tropical forest regions across Southeast Asia, South America, and Africa.
Forthcoming collaborative research from the team will investigate the physiological effects of working in forested versus fragmented landscapes, and how those effects impact people’s behavior and livelihoods. “We need more data on the ground from these data-poor regions to understand how climate and deforestation are actually impacting productivity and health,” says Masuda.
The team also hope their results will raise awareness of the importance of forest conservation to new stakeholders in Indonesia and other tropical geographies. “There are strong arguments for protecting forests for their biodiversity and carbon values, but deforestation will also have broader impacts on rural people,” says Wolff. “We’re hopeful that this research will resonate with people that don’t usually think about how deforestation can directly impact human health or livelihoods.”
The trick, he says, is to find a combination of working and forested lands that allows Indonesia to meet their economic needs while still retaining forests and their critical ecosystem services. Without such a balance, deforestation could be a public health crisis in the making.
|
Law and order exist for the purpose of establishing justice and when they fail in this purpose they become the dangerously structured dams that block the flow of social progress. The philosophy of law is commonly known as jurisprudence. Normative jurisprudence asks “what should law be?”, while analytic jurisprudence asks “what is law?” John Austin’s utilitarian answer was that law is “commands, backed by threat of sanctions, from a sovereign, to whom people have a habit of obedience”.
Natural lawyers on the other side, such as Jean-Jacques Rousseau, argue that law reflects essentially moral and unchangeable laws of nature. The concept of “natural law” emerged in ancient Greek philosophy concurrently and in connection with the notion of justice, and re-entered the mainstream of Western culture through the writings of Thomas Aquinas, notably his Treatise on Law.
Hugo Grotius, the founder of a purely rationalistic system of natural law, argued that law arises from both a social impulse—as Aristotle had indicated—and reason. Immanuel Kant believed a moral imperative requires laws “be chosen as though they should hold as universal laws of nature”. Jeremy Bentham and his student Austin, following David Hume, believed that this conflated what “is” with what “ought to be”. Bentham and Austin argued for law’s positivism; that real law is entirely separate from “morality”. Kant was also criticised by Friedrich Nietzsche, who rejected the principle of equality, and believed that law emanates from the will to power, and cannot be labelled as “moral” or “immoral”.
|
This is the Barringer Meteor Crater in Arizona.
D. Roddy and LPI
Impact Craters on Earth
Compared with other planets, impact craters are rare surface features on Earth. There are two main reasons for the low number of craters. One is that our atmosphere burns up most meteoroids before they reach the surface. The other reason is that Earth's surface is continually active and erases the marks of craters over time. The picture shows the Barringer Meteorite Crater in Arizona. It was probably formed about 50,000 years ago when an iron meteorite struck the Earth's surface. Many other large craters are found in Australia, Canada and Africa.
|
Program to calculate the area of a rectangle. The area of a rectangle is the amount of space occupied by the rectangle. A rectangle can be defined as a plane figure whose opposite sides are equal in length and whose four angles are all right angles.
Write a C program to input the length and width of a rectangle and find the area of the given rectangle; that is, how to calculate the area of a rectangle in C programming when its length and width are given.
Write a pseudocode to calculate the area of a circle.
Write pseudocode to solve this problem and write a Python program that asks the user to input the required information for a shape and then calculates the area of the shape.
Geometry Calculator
1. Calculate the Area of a Circle
2. Calculate the Area of a Rectangle
3. Calculate the Area of a Triangle
4. Quit
Enter your choice (1 - 4).
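A minimal Python sketch of such a menu-driven geometry calculator might look like the following; the prompts and the standard area formulas are straightforward assumptions rather than code taken from the source.

import math

def main():
    print("Geometry Calculator")
    print("1. Calculate the Area of a Circle")
    print("2. Calculate the Area of a Rectangle")
    print("3. Calculate the Area of a Triangle")
    print("4. Quit")
    choice = int(input("Enter your choice (1 - 4): "))
    if choice == 1:
        radius = float(input("Enter the radius of the circle: "))
        print("Area:", math.pi * radius ** 2)
    elif choice == 2:
        length = float(input("Enter the length of the rectangle: "))
        width = float(input("Enter the width of the rectangle: "))
        print("Area:", length * width)
    elif choice == 3:
        base = float(input("Enter the base of the triangle: "))
        height = float(input("Enter the height of the triangle: "))
        print("Area:", 0.5 * base * height)
    elif choice == 4:
        print("Exiting the program.")
    else:
        print("Invalid choice")

main()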
Declare functions to find the diameter, circumference and area of a circle. First assign a meaningful name to each of the three functions; say the functions to calculate the diameter, circumference and area are getDiameter(), getCircumference() and getArea(), respectively. All three functions use one input, i.e. the radius of the circle, to calculate their output.
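Using the function names given above, the three helpers might be sketched in Python as follows; the surrounding material appears to target C, so this translation is an assumption.

import math

def getDiameter(radius):
    # Diameter is twice the radius
    return 2 * radius

def getCircumference(radius):
    # Circumference = 2 * pi * radius
    return 2 * math.pi * radius

def getArea(radius):
    # Area = pi * radius^2
    return math.pi * radius ** 2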
Example 2: Computing the Circumference and Area of a Circle Design. Behavior: The program should prompt the user to enter the radius of a circle. The user should enter a real value from the keyboard. The program should compute and display the circumference and area of that circle on the screen with four decimal places of accuracy. Data objects.
C Program to find the area of a rectangle. To calculate the area of a rectangle, we need the length and width of the rectangle. The program first takes the length and width as input from the user using the scanf function and stores them in the 'length' and 'width' variables.
The area of a circle is the product of the square of its radius and the value of PI. Therefore, to calculate the area of a circle: get the radius of the circle, calculate the square of the radius, calculate the product of the value of PI and the square of the radius, and print the result.
To calculate the area and circumference, we must know the radius of the circle. The program will prompt the user to enter the radius and, based on the input, calculate the values. To keep it simple, we have taken the standard PI value as 3.14 (a constant) in the program.
This function computes the area of a circle based on the parameter r. Write a script that asks the user to type r1 for circle 1 and r2 for circle 2.
I need to create a function to calculate the area of a circle. The function should take in two arguments (a number and a string). The number should be the radius or diameter of the circle. The string should indicate whether the number is a radius or a diameter.
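One way to sketch this in Python, assuming the string is either 'radius' or 'diameter' (that convention is an assumption, since the original request is truncated):

import math

def circle_area(value, kind):
    # value is a number; kind is a string saying whether value is a radius or a diameter
    if kind == "diameter":
        radius = value / 2
    elif kind == "radius":
        radius = value
    else:
        raise ValueError("kind must be 'radius' or 'diameter'")
    return math.pi * radius ** 2

print(circle_area(4, "radius"))    # area of a circle with radius 4
print(circle_area(4, "diameter"))  # area of a circle with diameter 4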
C Program to Calculate the Area and Circumference of a Circle. In this program we have to calculate the area and circumference of the circle.
Write a program that asks the user to enter two values: an integer choice and a real number x. If choice is 1, compute and display the area of a circle of radius x. If choice is 2, compute and display the area of a square with sides of length x. If choice is neither 1 nor 2, display the text "Invalid choice".
Write an algorithm in pseudocode that will request the radius of a circle; afterwards, it will compute and display the circumference and area of the circle. Premise: unlike the other problems, this one also involves a constant value, and the algorithm must treat that value as a constant while reading the other value as input.
Calculate Area of Rectangle in Python: to calculate the area of a rectangle in Python, you have to ask the user to enter the length and breadth of the rectangle, then calculate and print the area of that rectangle on the output screen, as shown in the program given below.
Calculate Area of Circle in Python: to calculate the area of a circle in Python, you have to ask the user to enter the radius of the circle, then calculate and print the area of that circle on the output screen, as shown in the program given below.
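A minimal Python sketch of both tasks (using math.pi for the value of PI, and prompt wording that is an assumption) might look like this:

import math

# Area of a rectangle from user input
length = float(input("Enter the length of the rectangle: "))
breadth = float(input("Enter the breadth of the rectangle: "))
print("Area of the rectangle:", length * breadth)

# Area of a circle from user input
radius = float(input("Enter the radius of the circle: "))
print("Area of the circle:", math.pi * radius ** 2)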
|
An adult's brain can significantly improve its performance by learning new things the way children do, according to new research. Moreover, experts hope that this discovery can help treat brain disorders. Scientists have unexpectedly found that in less than two hours of "quick" learning, which usually occurs in childhood, the adult brain begins to work very actively and literally grow.
Experts have long known the fact that the brain of young children grows very quickly. However, a large amount of research in recent years has shown that the adult brain is still capable of growth even after learning something new that only lasted a few weeks.
“Our study showed that the adult brain is actually much more flexible than we previously thought,” says researcher Li-Hai Tan, a cognitive neuroscientist at the University of Hong Kong. “We are very pleased with the results, as they give hope for recovery for adults with intellectual disabilities: with the help of appropriate integration strategies, the brain can recover quickly.”
Scientists analyzed the influence of language on the perception of color, a question that has long occupied many experts. More than half a century ago, the linguist Benjamin Lee Whorf suggested that language can influence how a person perceives the world. Tan and his colleagues decided to scan the brain to check whether, if that claim is indeed reliable, it has any visible consequences at the cellular level.
The researchers asked 19 adult volunteers to come up with words to describe two shades of green and two shades of blue. This was done to induce the kind of rapid word-object associations that arise automatically during childhood development. The participants' task was as follows: they had to listen to the names of the colors while the colors were shown on the display, then name those colors when they suddenly appeared on the screen, and also note whether the colors matched the previously named and displayed word associations.
After five training sessions over three days (total time spent on the experiments: 1 hour 48 minutes), brain scans showed that with such training there was a significant increase in gray matter in the left side of the volunteers' visual cortex (the part of the brain associated with color vision and perception). These changes could be caused both by an increase in the number of neurons and by the expansion of dendritic branches emanating from them.
“We were so surprised that initially we couldn't even believe that the structure of an adult's brain could change so quickly,” Tan said.
The findings also confirmed the results of previous experiments, which showed that the names of different colors affect people's ability to distinguish between them. Earlier research on experience-dependent changes in the brain had used methods that examined how such activity depends on the number of connections between neurons.
|
October 29, 1929
The stock market crash of October 1929 led directly to the Great Depression in Europe. When stocks plummeted on the New York Stock Exchange, the world noticed immediately. Although financial leaders in England, as in the United States, vastly underestimated the extent of the crisis that would ensue, it soon became clear that the world's economies were more interconnected than ever. The effects of the disruption to the global system of financing, trade, and production and the subsequent meltdown of the American economy were soon felt throughout Europe.
Causes of the Depression
In his memoirs, President Herbert Hoover tried to explain the Depression's impact on the United States by blaming the aftermath of the European war a decade earlier and the financial crisis that beset European banks in 1931. While historians still debate the precise causes of the Depression, most now agree that the economic crisis began in the United States and then moved to Europe and the rest of the world. According to Dietmar Rothermund's study of the global impact of the economic crisis, "all major factors contributing to the depression can be traced back to the United States of America." Both domestically and internationally, however, the crash of '29 built upon, exacerbated, and was compounded by the underlying economic weaknesses of the preceding decade. This section will provide necessary background information by exploring the ways in which national economies around the world were intimately connected, how the stock market crash in the United States triggered the European crisis, and how such connections shaped lives, societies, and political systems in Europe and elsewhere.
Class Relations Before the Depression
To appreciate the significance of the Depression, one must understand how it impacted social and economic conditions within distinct societies. While European economies during the 1920s experienced unemployment and the subsequent deprivation, hunger, and despair, much remained invisible to the general public. Left-leaning political parties had tried for decades to expose the effects of economic exploitation, yet the political shifts of the 1920s combined to make such conditions less apparent than they had been before the war. Socialist parties, attempting to gain a new respectability, were reluctant to draw attention to the class divide, while Communists remained more interested in staging confrontations than in uncovering the daily lives of the working class. Moreover, to the middle and upper classes, the lives of the poor were either invisible or frightening. The Depression would transform many societies by making visible the unemployment, distress, and despair already there.
Germany's Postwar Debt
With the onset of the Depression, both the hopes of peaceful class reconciliation and the willful ignorance of working-class desperation came to an end. Deprivation was evident everywhere, and conflict, rather than compromise, between classes appeared inevitable. In Germany, the Depression struck an already weakened economy barely beginning to recover from the combined effects of wartime destruction and postwar reparations. The Weimar government was deeply in debt, yet it tried to maintain high levels of unemployment benefits to forestall growing dissatisfaction among the lower classes. As unemployment grew, and even before the onset of the Depression, the government resisted pressure to cut payments. Under the terms of the Dawes plan, American banks loaned money to the German government, which used the loans to pay reparations to the French and British governments, which in turn used the money to pay war debts to American banks. The high interest rates sustained by the Dawes plan made Germany an attractive debtor for American banks, and, for several years, considerable money flowed from the American financial sector into Germany. In the words of historian Dietmar Rothermund, the plan was a "precarious solution," since everything depended on the continuous flow of American capital. The German government's debt to the victorious powers shifted towards American bankers, who, under the auspices of the Dawes plan, assumed the debt along with the dangers of default. Already by 1928, American banks had ceased to make loans under the Dawes plan. Germany, however, still had to service its American loans in addition to making reparations payments.
The Collapse of German Banks
The German government's initial response to the crash of '29 and the subsequent withdrawal of American capital was retrenchment: it cut public services in order to preserve solvency. The traumatic experience of extreme inflation in the early 1920s caused the government to respond to the crisis by decreasing, rather than increasing, public expenditure, which in turn worsened the economic conditions. Declining productivity, mass unemployment, and business failures ensued. When the Reichstag obstructed Chancellor Bruning's effort to maintain such a policy, Bruning resorted to the use of emergency powers granted by the President to implement measures so unpopular they earned him the moniker "Hunger Chancellor." The collapse of German banks in 1931 marked the start of a downward spiral into depression. In 1932, Germany defaulted on its reparations; two years later, Britain and France defaulted on their own war debts, which were owed primarily to the United States.
Postwar Recovery in Britain
In Britain, significant economic problems persisted throughout the 1920s. The First World War cost Britain many of its positions of relative economic advantage: shipping never recovered from the losses of submarine warfare and the advances of competing nations; foreign investment declined as global capital increasingly moved to the United States; American banks displaced English banks as the main lenders to other European nations; coal production declined in the face of European competition, especially from French-occupied coalfields lost by Germany; and manufacturing suffered from the loss of European and colonial markets. Unemployment in Britain remained high throughout the 1920s, reaching 2 million in 1921 and then remaining at more than a million for the rest of the decade. The government, meanwhile, made financial security its priority. Domestic spending remained low relative to other European countries, as the government allowed private businesses to set their own policies on wages, hours, and conditions. The government remained committed to keeping the British pound on the gold standard, which meant that British exports were sold at inflated prices that made them less competitive with goods from other producers. Major industries, such as coal, steel, and textiles, were protected from foreign competition, which also meant that they had little incentive to update equipment, rationalize production, or diversify products. A growing wave of labor unrest had peaked in the 1926 General Strike, but limited backing for the radical aims of the trade union leadership, combined with the resistance of the government, big business, and a strong base of middle-class supporters, dampened efforts to effect political change through extra-parliamentary measures. The memory of the General Strike would become an important factor in the early years of the Depression, as spreading unemployment and increasing despair led to fears of deepening class conflict and political instability. So-called depressed areas remained particular sources of chronic unemployment, hunger, and disease. In the words of historian Gordon Craig, the British economy "continued to stagnate until it was overwhelmed by the world depression."
Unlike Great Britain, France's economic situation improved markedly during the 1920s. Because the fighting of World War I caused so much damage to France's productive capacity, the government was forced to invest heavily in postwar reconstruction. As a result, French steel, coal, and textile production acquired more advanced machinery and adopted more effective techniques, which gave France a competitive advantage over countries that had not been forced to modernize, such as Britain. Postwar political settlements had awarded to France some of Germany's most productive territories, which also stabilized the French economy. At the same time, the French government remained deeply in debt, while continuing to demand excessive reparations payments from Germany. Although the government did gradually implement tax reforms to spread the burden of payments more evenly across society, the value of the French currency remained high as the government adhered to the gold standard, and the growth of international tourism poured additional funds into the French economy. According to Craig, France experienced "years of solid prosperity" in the period from 1926 to 1932.
France After the Crash of '29
The effects of the Wall Street crash spread across France more gradually. During the first years of the global economic crisis, France was predominantly affected by a decline in international tourism, by decreased demand for French luxury goods, and by the wave of protectionism that cut into all international trade. The contrasting directions pursued by Germany and France led to strikingly different assessments: "Why Germany Totters," on the one hand, and "Why France Keeps Prosperous," on the other. Yet France could not remain invulnerable to the more general European and even global crisis. When conditions did worsen, French society quickly succumbed to the same sense of desperation. The contraction in world trade, at the same time as the government maintained the high value of the French currency, ensured that exports became less competitive in a shrinking world market. The combination in turn caused production decreases and the spread of unemployment. In addition, the French response to the economic crisis was made more difficult by political conflicts between the major parties, which led to a series of short-lived, ineffective governments and, ultimately, the attempted overthrow of the government in February 1934.
Demonstrations, Protests, and Strikes in Britain
As indicated above, the governments of France, Britain, and Germany grappled with how to respond to the social and economic crisis brought on by the Great Depression. In each case, the governments faced considerable pressure from demonstrations, protests, and strikes taking place in the streets. In Britain, increasing economic distress led to waves of protests in 1930 and 1931 organized by a group of militant activists. During the 1920s, the combination of economic collapse and political radicalism had culminated in the General Strike of 1926, but divisions among labor leaders and sympathizers and the determination of the conservative government had caused the strike to fail. Yet public memory of the failed attempt persisted into the Depression. Labour Party leaders began to seek influence by working through, rather than against, the established political system. Labor protests still occurred frequently during the Depression, but in more localized ways. During 1930 and 1931, in particular, unemployed workers went on strike, demonstrated in public, and otherwise took direct action to call public attention to their plight. Protests often focused on the so-called Means Test, which the government had instituted in 1931 as a way to limit the amount of unemployment payments made to individuals and families. For working people, the Means Test seemed an intrusive and insensitive way to deal with the chronic and relentless deprivation caused by the economic crisis. The strikes were met forcefully, with police breaking up protests, arresting demonstrators, and charging them with crimes related to the violation of public order. The protests never approached revolution, however, since the actions of both protestors and police defined a realm of legitimate public engagement even in the midst of economic crisis.
Civil Unrest in Germany
In Germany, protests during the early 1930s arose out of a more long-term crisis of legitimacy of the Weimar system. In particular, the political extremes, the Communists on the left and the National Socialist German Workers' Party (the Nazis) on the right, were committed to the overthrow of the democratic system by any means, including direct action on the streets. With the spread of unemployment, dissatisfaction with the policies of the Weimar government also intensified. The determination of Bruning's government to control expenses by cutting back welfare and social services alienated the poor and working classes, while his dependence on emergency powers convinced many that democratic politics could not handle the growing crisis. Nationalists played up Germany's vulnerability to the world economic crisis by denouncing, yet again, the terms of the postwar settlement. Germany's continued debt due to reparations provided yet additional grounds for linking the weakness of Germany's international position to the growing economic crisis. It was in this context that a series of strikes and protests occurred across Germany during 1930 and 1931. In contrast to Britain, however, protests became common among radicals on both the extreme left, which included Communists, and on the extreme right, led by the Nazis. The government, meanwhile, appeared both ineffective at controlling the waves of violence and repressive, as it resorted increasingly to the so-called emergency powers. The street protests and the government response combined to undermine even further the legitimacy and viability of Weimar democracy.
The crisis of French democracy in February 1934 centered on allegations that the elected government remained ineffective at dealing with the immediate economic crisis. In addition, the French government appeared overall less vigorous and incisive when compared to the neighboring Nazi-led German government. The French government faced sharp criticism and demands for immediate action from both the extreme right and the Communists on the extreme left. On February 6, 1934, thousands of people — most responding to summons by right-wing groups, but also Communist sympathizers willing to use any means to overthrow the government — assembled on the Place de la Concorde and appeared to organize an assault on the Chamber of Deputies. When police used arms against the crowd, twenty-one people were killed and more than a thousand injured. In the words of Gordon Craig, it seemed "as if action on the street was on the point of supplanting rule by law and parliamentary procedure." The Paris protests resulted in the resignation of Prime Minister Daladier, which led to the formation of a broad-based government drawing representatives from a spectrum of political parties. In the case of France, then, street protests served to redefine the basis of democratic legitimacy in the midst of crisis.
Election Campaigns and Political Consolidation
In addition to direct action on the streets by, in most cases, more extreme political movements, elections became an important measure of the impact of the Depression on Europe. Parties on the extreme left, such as the Communist Party, claimed that the interests of the working class could be served only by revolutionary, and inevitably violent, overthrow of the existing social, political, and economic order. Socialist parties, such as the Labour Party in Britain and the Social Democrats in Germany, argued that working-class interests were better served by working through the political system to promote egalitarian, democratic, and peaceful policies. To the right of the Socialists stood a variety of parties, such as the Conservatives in Britain and the Catholic Center Party in Germany, which argued that middle- and upper-class interests were best served by traditional policies that protected property, maintained order, and promoted changes through the existing economic system. In addition, a new force of political radicalism emerged on the extreme right arguing for stronger governments that took direct action to promote national interests for all classes at the expense of foreign and minority interests. The Nazi Party in Germany was the strongest example of such politics, although similar movements emerged in Britain and France as well.
The British Response to the Depression
In Britain, Prime Minister Ramsay MacDonald's Labour government responded to the economic crisis caused by the Wall Street crash and capital flight to America by imposing further restrictions on government spending, including threats to cut already meager unemployment payments. When the proposals were rejected by most of the members of MacDonald's own Labour party in the summer of 1931, he responded by forming a new so-called National Government, which included representatives from the three major parties: Conservative, Labour, and Liberal. In the 1931 elections, the new government won a solid victory, with 558 supporters drawn from the three parties against 60 members of an opposition composed predominantly of Labourites fighting against further cuts in welfare benefits. The election thus appeared as a sign of reassurance in a time of increasing demonstrations and protests in the streets. The National Government seemed to represent a middle ground that strengthened moderate forces of both the Labour Party on the left and the Conservatives on the right. Such a position of strength allowed the National Government to implement several unpopular economic policies, including the devaluation of the British pound by abandoning the gold standard. Freeing the currency allowed the government to offer financial assistance to the most distressed areas and provide protection for key industries. But the government never undertook a major recovery effort, like the New Deal in the United States, and unemployment remained high through the end of the decade.
The Rise of the Nazi Party in Germany
In Germany, Bruning's decision to call elections to obtain a mandate for his actions proved a grave miscalculation; the fall 1930 elections returned only a handful of new seats for the parties supporting the Chancellor, while the extremist parties gained the most seats: twenty-three additional representatives for the Communists on the left and ninety-five new seats for the Nazis on the right, making the latter the second-largest party in the German Reichstag, or Parliament. In the election, more than six million Germans voted for the Nazi party. In subsequent elections, Nazi support continued to grow at the expense of moderate parties such as the Social Democrats and the Catholic Center Party. By 1932, the Nazi Party had won more than one-third of the seats in the Reichstag and had become the largest single party within the representative body, with 196 seats compared to 121 seats held by Social Democrats. While Hitler's actual accession to power occurred through a process of manipulation among the leaders and not through direct elections, the growing strength of the Nazi party from 1930 to 1932 illustrates how the effects of the Depression shaped the increasing radicalization of German politics in ways that undermined democratic legitimacy and stability.
The Popular Front in France
In France, the Popular Front emerged as a powerful symbol of the collective determination to overcome both economic crisis and social division. The Popular Front was formed in early 1936 by representatives of the Socialist, Radical, and Communist Parties. The latter's involvement was especially significant, as it marked a dramatic reversal of the Communists' prior determination to promote revolutionary change in any possible way. By 1936, however, the establishment of a Fascist government in Germany and its relentless destruction of the Communist and Socialist Parties there had convinced French Communists and, more importantly, the Soviet leadership that exerted strong influence over European Communists, that they needed to support democratic and capitalist governments to fight the rise of right-wing fascism. The program of the Popular Front thus illustrated combined efforts to mediate political divisions while promoting government intervention in the economy, the defense of civil liberties, and the protection of social welfare.
|
Multiple issue and pipelining can clearly be considered to be parallel hardware, since functional units are replicated. However, since this form of parallelism isn’t usually visible to the programmer, we’re treating both of them as extensions to the basic von Neumann model, and for our purposes, parallel hardware will be limited to hardware that’s visible to the programmer. In other words, if she can readily modify her source code to exploit it, or if she must modify her source code to exploit it, then we’ll consider the hardware to be parallel.
1. SIMD systems
In parallel computing, Flynn’s taxonomy is frequently used to classify computer architectures. It classifies a system according to the number of instruction streams and the number of data streams it can simultaneously manage. A classical von Neumann system is therefore a single instruction stream, single data stream, or SISD system, since it executes a single instruction at a time and it can fetch or store one item of data at a time.
Single instruction, multiple data, or SIMD, systems are parallel systems. As the name suggests, SIMD systems operate on multiple data streams by applying the same instruction to multiple data items, so an abstract SIMD system can be thought of as having a single control unit and multiple ALUs. An instruction is broadcast from the control unit to the ALUs, and each ALU either applies the instruction to the current data item, or it is idle. As an example, suppose we want to carry out a “vector addition.” That is, suppose we have two arrays x and y, each with n elements, and we want to add the elements of y to the elements of x:
for (i = 0; i < n; i++)
x[i] += y[i];
Suppose further that our SIMD system has n ALUs. Then we could load x[i] and y[i] into the ith ALU, have the ith ALU add y[i] to x[i], and store the result in x[i]. If the system has m ALUs and m < n, we can simply execute the additions in blocks of m elements at a time. For example, if m = 4 and n = 15, we can first add elements 0 to 3, then elements 4 to 7, then elements 8 to 11, and finally elements 12 to 14. Note that in the last group of elements in our example (elements 12 to 14) we're only operating on three elements of x and y, so one of the four ALUs will be idle.
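The following scalar C sketch (not from the text) mimics that blocking scheme; on an actual SIMD machine the inner loop would execute in lockstep, one element per ALU, with any ALU whose index runs past n sitting idle.

/* Process the n additions in blocks of m "lanes" at a time. */
void blocked_vector_add(double x[], double y[], int n, int m) {
    int first, i;
    for (first = 0; first < n; first += m)
        for (i = first; i < first + m && i < n; i++)
            x[i] += y[i];
}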
The requirement that all the ALUs execute the same instruction or are idle can seriously degrade the overall performance of a SIMD system. For example, suppose we only want to carry out the addition if y[i] is positive:
for (i = 0; i < n; i++)
if (y[i] > 0.0) x[i] += y[i];
In this setting, we must load each element of y into an ALU and determine whether it’s positive. If y[i] is positive, we can proceed to carry out the addition. Otherwise, the ALU storing y[i] will be idle while the other ALUs carry out the addition.
Note also that in a "classical" SIMD system, the ALUs must operate synchronously, that is, each ALU must wait for the next instruction to be broadcast before proceeding. Further, the ALUs have no instruction storage, so an ALU can't delay execution of an instruction by storing it for later execution.
Finally, as our first example shows, SIMD systems are ideal for parallelizing simple loops that operate on large arrays of data. Parallelism that's obtained by dividing data among the processors and having the processors all apply (more or less) the same instructions to their subsets of the data is called data-parallelism. SIMD parallelism can be very efficient on large data parallel problems, but SIMD systems often don't do very well on other types of parallel problems.
SIMD systems have had a somewhat checkered history. In the early 1990s a maker of SIMD systems (Thinking Machines) was the largest manufacturer of parallel supercomputers. However, by the late 1990s the only widely produced SIMD systems were vector processors. More recently, graphics processing units, or GPUs, and desktop CPUs are making use of aspects of SIMD computing.
Although what constitutes a vector processor has changed over the years, their key characteristic is that they can operate on arrays or vectors of data, while conventional CPUs operate on individual data elements or scalars. Typical recent systems have the following characteristics:
Vector registers. These are registers capable of storing a vector of operands and operating simultaneously on their contents. The vector length is fixed by the system, and can range from 4 to 128 64-bit elements.
Vectorized and pipelined functional units. Note that the same operation is applied to each element in the vector, or, in the case of operations like addition, the same operation is applied to each pair of corresponding elements in the two vectors. Thus, vector operations are SIMD.
Vector instructions. These are instructions that operate on vectors rather than scalars. If the vector length is vector_length, these instructions have the great virtue that a simple loop such as
for (i = 0; i < n; i++)
x[i] += y[i];
requires only a single load, add, and store for each block of vector_length elements, while a conventional system requires a load, add, and store for each element (a brief sketch using vector intrinsics appears after this list).
Interleaved memory. The memory system consists of multiple "banks" of memory, which can be accessed more or less independently. After accessing one bank, there will be a delay before it can be reaccessed, but a different bank can be accessed much sooner. So if the elements of a vector are distributed across multiple banks, there can be little to no delay in loading/storing successive elements.
Strided memory access and hardware scatter/gather. In strided memory access, the program accesses elements of a vector located at fixed intervals. For example, accessing the first element, the fifth element, the ninth element, and so on, would be strided access with a stride of four. Scatter/gather (in this context) is writing (scatter) or reading (gather) elements of a vector located at irregular intervals— for example, accessing the first element, the second element, the fourth element, the eighth element, and so on. Typical vector systems provide special hardware to accelerate strided access and scatter/gather.
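As a concrete, hedged illustration of vector instructions operating on blocks of vector_length elements, here is a sketch of the vector addition loop written with x86 AVX intrinsics, where the vector length is four doubles. It assumes an AVX-capable CPU and compiler and that n is a multiple of 4; it is not code from this text.

#include <immintrin.h>

/* Add y to x four doubles at a time: one vector load of x, one of y,
   one vector add, and one vector store per block of four elements. */
void vector_add_avx(double *x, const double *y, int n) {
    int i;
    for (i = 0; i < n; i += 4) {
        __m256d vx = _mm256_loadu_pd(&x[i]);   /* load 4 elements of x */
        __m256d vy = _mm256_loadu_pd(&y[i]);   /* load 4 elements of y */
        vx = _mm256_add_pd(vx, vy);            /* SIMD add */
        _mm256_storeu_pd(&x[i], vx);           /* store the result back into x */
    }
}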
Vector processors have the virtue that for many applications, they are very fast and very easy to use. Vectorizing compilers are quite good at identifying code that can be vectorized. Further, they identify loops that cannot be vectorized, and they often provide information about why a loop couldn't be vectorized. The user can thereby make informed decisions about whether it's possible to rewrite the loop so that it will vectorize. Vector systems have very high memory bandwidth, and every data item that's loaded is actually used, unlike cache-based systems that may not make use of every item in a cache line. On the other hand, they don't handle irregular data structures as well as other parallel architectures, and there seems to be a very finite limit to their scalability, that is, their ability to handle ever larger problems. It's difficult to see how systems could be created that would operate on ever longer vectors. Current generation systems scale by increasing the number of vector processors, not the vector length. Current commodity systems provide limited support for operations on very short vectors, while processors that operate on long vectors are custom manufactured, and, consequently, very expensive.
Graphics processing units
Real-time graphics application programming interfaces, or APIs, use points, lines, and triangles to internally represent the surface of an object. They use a graphics processing pipeline to convert the internal representation into an array of pixels that can be sent to a computer screen. Several of the stages of this pipeline are programmable. The behavior of the programmable stages is specified by functions called shader functions. The shader functions are typically quite short, often just a few lines of C code. They're also implicitly parallel, since they can be applied to multiple elements (e.g., vertices) in the graphics stream. Since the application of a shader function to nearby elements often results in the same flow of control, GPUs can optimize performance by using SIMD parallelism, and in the current generation all GPUs use SIMD parallelism. This is obtained by including a large number of ALUs (e.g., 80) on each GPU processing core.
Processing a single image can require very large amounts of data—hundreds of megabytes of data for a single image is not unusual. GPUs therefore need to maintain very high rates of data movement, and in order to avoid stalls on memory accesses, they rely heavily on hardware multithreading; some systems are capable of storing the state of more than a hundred suspended threads for each executing thread. The actual number of threads depends on the amount of resources (e.g., registers) needed by the shader function. A drawback here is that many threads processing a lot of data are needed to keep the ALUs busy, and GPUs may have relatively poor performance on small problems.
It should be stressed that GPUs are not pure SIMD systems. Although the ALUs on a given core do use SIMD parallelism, current generation GPUs can have dozens of cores, which are capable of executing independent instruction streams.
GPUs are becoming increasingly popular for general, high-performance computing, and several languages have been developed that allow users to exploit their power. For further details see .
2. MIMD systems
Multiple instruction, multiple data, or MIMD, systems support multiple simultaneous instruction streams operating on multiple data streams. Thus, MIMD systems typically consist of a collection of fully independent processing units or cores, each of which has its own control unit and its own ALU. Furthermore, unlike SIMD systems, MIMD systems are usually asynchronous, that is, the processors can operate at their own pace. In many MIMD systems there is no global clock, and there may be no relation between the system times on two different processors. In fact, unless the programmer imposes some synchronization, even if the processors are executing exactly the same sequence of instructions, at any given instant they may be executing different statements.
As we noted in Chapter 1, there are two principal types of MIMD systems: shared-memory systems and distributed-memory systems. In a shared-memory system a collection of autonomous processors is connected to a memory system via an interconnection network, and each processor can access each memory location. In a shared-memory system, the processors usually communicate implicitly by accessing shared data structures. In a distributed-memory system, each processor is paired with its own private memory, and the processor-memory pairs communicate over an interconnection network. So in distributed-memory systems the processors usually communicate explicitly by sending messages or by using special functions that provide access to the memory of another processor. See Figures 2.3 and 2.4.
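To make the contrast concrete, here is a hedged sketch (not from this chapter) of the two programming styles applied to a global sum: implicit communication through shared data with an OpenMP reduction, and explicit communication with MPI messages. It assumes OpenMP and MPI support are available and that MPI has already been initialized elsewhere.

#include <mpi.h>

/* Shared-memory style: threads communicate implicitly through the shared
   array a; the partial sums are combined by an OpenMP reduction. */
double shared_memory_sum(const double *a, int n) {
    double sum = 0.0;
    int i;
    #pragma omp parallel for reduction(+:sum)
    for (i = 0; i < n; i++)
        sum += a[i];
    return sum;
}

/* Distributed-memory style: each process owns its own local_sum and
   communicates explicitly by sending it to process 0. */
double distributed_memory_sum(double local_sum, int rank, int comm_sz) {
    double total = local_sum, received;
    int src;
    if (rank == 0) {
        for (src = 1; src < comm_sz; src++) {
            MPI_Recv(&received, 1, MPI_DOUBLE, src, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            total += received;
        }
    } else {
        MPI_Send(&local_sum, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD);
    }
    return total;   /* meaningful only on process 0 */
}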
The most widely available shared-memory systems use one or more multicore processors. As we discussed in Chapter 1, a multicore processor has multiple CPUs or cores on a single chip. Typically, the cores have private level 1 caches, while other caches may or may not be shared between the cores.
In shared-memory systems with multiple multicore processors, the interconnect can either connect all the processors directly to main memory or each processor can have a direct connection to a block of main memory, and the processors can access each others' blocks of main memory through special hardware built into the processors. See Figures 2.5 and 2.6. In the first type of system, the time to access all the memory locations will be the same for all the cores, while in the second type a memory location to which a core is directly connected can be accessed more quickly than a memory location that must be accessed through another chip. Thus, the first type of system is called a uniform memory access, or UMA, system, while the second type is called a nonuniform memory access, or NUMA, system. UMA systems are usually easier to program, since the programmer doesn't need to worry about different access times for different memory locations. This advantage can be offset by the faster access to the directly connected memory in NUMA systems. Furthermore, NUMA systems have the potential to use larger amounts of memory than UMA systems.
The most widely available distributed-memory systems are called clusters. They are composed of a collection of commodity systems (for example, PCs) connected by a commodity interconnection network (for example, Ethernet). In fact, the nodes of these systems, the individual computational units joined together by the communication network, are usually shared-memory systems with one or more multicore processors. To distinguish such systems from pure distributed-memory systems, they are sometimes called hybrid systems. Nowadays, it's usually understood that a cluster will have shared-memory nodes.
The grid provides the infrastructure necessary to turn large networks of geographically distributed computers into a unified distributed-memory system. In general, such a system will be heterogeneous, that is, the individual nodes may be built from different types of hardware.
3. Interconnection networks
The interconnect plays a decisive role in the performance of both distributed- and shared-memory systems: even if the processors and memory have virtually unlimited performance, a slow interconnect will seriously degrade the overall performance of all but the simplest parallel program. See, for example, Exercise 2.10.
Although some of the interconnects have a great deal in common, there are enough differences to make it worthwhile to treat interconnects for shared-memory and distributed-memory separately.
Currently the two most widely used interconnects on shared-memory systems are buses and crossbars. Recall that a bus is a collection of parallel communication wires together with some hardware that controls access to the bus. The key characteristic of a bus is that the communication wires are shared by the devices that are connected to it. Buses have the virtue of low cost and flexibility; multiple devices can be connected to a bus with little additional cost. However, since the communication wires are shared, as the number of devices connected to the bus increases, the likelihood that there will be contention for use of the bus increases, and the expected performance of the bus decreases. Therefore, if we connect a large number of processors to a bus, we would expect that the processors would frequently have to wait for access to main memory. Thus, as the size of shared-memory systems increases, buses are rapidly being replaced by switched interconnects.
As the name suggests, switched interconnects use switches to control the routing of data among the connected devices. A crossbar is illustrated in Figure 2.7(a). The lines are bidirectional communication links, the squares are cores or memory modules, and the circles are switches.
The individual switches can assume one of the two configurations shown in Figure 2.7(b). With these switches and at least as many memory modules as processors, there will only be a conflict between two cores attempting to access memory if the two cores attempt to simultaneously access the same memory module. For example, Figure 2.7(c) shows the configuration of the switches if P1 writes to M4, P2 reads from M3, P3 reads from M1, and P4 writes to M2.
Crossbars allow simultaneous communication among different devices, so they are much faster than buses. However, the cost of the switches and links is relatively high. A small bus-based system will be much less expensive than a crossbar-based system of the same size.
Distributed-memory interconnects are often divided into two groups: direct interconnects and indirect interconnects. In a direct interconnect each switch is directly connected to a processor-memory pair, and the switches are connected to each other. Figure 2.8 shows a ring and a two-dimensional toroidal mesh. As before, the circles are switches, the squares are processors, and the lines are bidirectional links. A ring is superior to a simple bus since it allows multiple simultaneous communications. However, it's easy to devise communication schemes in which some of the processors must wait for other processors to complete their communications. The toroidal mesh will be more expensive than the ring, because the switches are more complex (they must support five links instead of three), and if there are p processors, the number of links is 3p in a toroidal mesh, while it's only 2p in a ring. However, it's not difficult to convince yourself that the number of possible simultaneous communications patterns is greater with a mesh than with a ring.
One measure of "number of simultaneous communications" or "connectivity" is bisection width. To understand this measure, imagine that the parallel system is divided into two halves, and each half contains half of the processors or nodes. How many simultaneous communications can take place "across the divide" between the halves? In Figure 2.9(a) we've divided a ring with eight nodes into two groups of four nodes, and we can see that only two communications can take place between the halves. (To make the diagrams easier to read, we've grouped each node with its switch in this and subsequent diagrams of direct interconnects.) However, in Figure 2.9(b) we've divided the nodes into two parts so that four simultaneous communications can take place, so what's the bisection width? The bisection width is supposed to give a "worst-case" estimate, so the bisection width is two, not four.
An alternative way of computing the bisection width is to remove the minimum number of links needed to split the set of nodes into two equal halves. The number of links removed is the bisection width. If we have a square two-dimensional toroidal mesh with p = q² nodes (where q is even), then we can split the nodes into two halves by removing the "middle" horizontal links and the "wraparound" horizontal links. See Figure 2.10. This suggests that the bisection width is at most 2q = 2√p. In fact, this is the smallest possible number of links, and the bisection width of a square two-dimensional toroidal mesh is 2√p.
The bandwidth of a link is the rate at which it can transmit data. It’s usually given in megabits or megabytes per second. Bisection bandwidth is often used as a measure of network quality. It’s similar to bisection width. However, instead of counting the number of links joining the halves, it sums the bandwidth of the links. For example, if the links in a ring have a bandwidth of one billion bits per second, then the bisection bandwidth of the ring will be two billion bits per second or 2000 megabits per second.
The ideal direct interconnect is a fully connected network in which each switch is directly connected to every other switch. See Figure 2.11. Its bisection width is p²/4. However, it's impractical to construct such an interconnect for systems with more than a few nodes, since it requires a total of p²/2 + p/2 links, and each switch must be capable of connecting to p links. It is therefore more a "theoretical best possible" interconnect than a practical one, and it is used as a basis for evaluating other interconnects.
The hypercube is a highly connected direct interconnect that has been used in actual systems. Hypercubes are built inductively: A one-dimensional hypercube is a fully-connected system with two processors. A two-dimensional hypercube is built from two one-dimensional hypercubes by joining "corresponding" switches. Similarly, a three-dimensional hypercube is built from two two-dimensional hypercubes. See Figure 2.12. Thus, a hypercube of dimension d has p = 2^d nodes, and a switch in a d-dimensional hypercube is directly connected to a processor and d switches. The bisection width of a hypercube is p/2, so it has more connectivity than a ring or toroidal mesh, but the switches must be more powerful, since they must support 1 + d = 1 + log₂(p) wires, while the mesh switches only require five wires. So a hypercube with p nodes is more expensive to construct than a toroidal mesh.
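A small C helper (purely illustrative, not from the text) that evaluates the bisection-width formulas quoted above for p nodes: 2 for a ring, 2√p for a square toroidal mesh, p²/4 for a fully connected network, and p/2 for a hypercube.

#include <math.h>
#include <stdio.h>

/* Print the bisection widths given in the text for p nodes.  For the
   toroidal mesh, p is assumed to be a perfect square q*q with q even. */
void print_bisection_widths(int p) {
    printf("ring:            %d\n", 2);
    printf("toroidal mesh:   %d\n", (int)(2.0 * sqrt((double)p) + 0.5));
    printf("fully connected: %d\n", (p * p) / 4);
    printf("hypercube:       %d\n", p / 2);
}

int main(void) {
    print_bisection_widths(16);   /* e.g., a 4 x 4 torus or a 4-dimensional hypercube */
    return 0;
}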
Indirect interconnects provide an alternative to direct interconnects. In an indirect interconnect, the switches may not be directly connected to a processor. They're often shown with unidirectional links and a collection of processors, each of which has an outgoing and an incoming link, and a switching network. See Figure 2.13.
The crossbar and the omega network are relatively simple examples of indirect networks. We saw a shared-memory crossbar with bidirectional links earlier (Figure 2.7). The diagram of a distributed-memory crossbar in Figure 2.14 has unidirectional links. Notice that as long as two processors don't attempt to communicate with the same processor, all the processors can simultaneously communicate with another processor.
An omega network is shown in Figure 2.15. The switches are two-by-two crossbars (see Figure 2.16). Observe that unlike the crossbar, there are communications that cannot occur simultaneously. For example, in Figure 2.15 if processor 0 sends a message to processor 6, then processor 1 cannot simultaneously send a message to processor 7. On the other hand, the omega network is less expensive than the crossbar. The omega network uses (1/2) p log₂(p) of the 2 × 2 crossbar switches, so it uses a total of 2p log₂(p) switches, while the crossbar uses p².
It's a little bit more complicated to define bisection width for indirect networks. See Exercise 2.14. However, the principle is the same: we want to divide the nodes into two groups of equal size and determine how much communication can take place between the two halves, or alternatively, the minimum number of links that need to be removed so that the two groups can't communicate. The bisection width of a p × p crossbar is p and the bisection width of an omega network is p/2.
Latency and bandwidth
Any time data is transmitted, we’re interested in how long it will take for the data to reach its destination. This is true whether we’re talking about transmitting data between main memory and cache, cache and register, hard disk and memory, or between two nodes in a distributed-memory or hybrid system. There are two figures that are often used to describe the performance of an interconnect (regardless of what it’s connecting): the latency and the bandwidth. The latency is the time that elapses between the source’s beginning to transmit the data and the destination’s starting to receive the first byte. The bandwidth is the rate at which the destination receives data after it has started to receive the first byte. So if the latency of an interconnect is l seconds and the bandwidth is b bytes per second, then the time it takes to transmit a message of n bytes is
message transmission time = l + n/b.
Beware, however, that these terms are often used in different ways. For example, latency is sometimes used to describe total message transmission time. It's also often used to describe the time required for any fixed overhead involved in transmitting data. For example, if we're sending a message between two nodes in a distributed-memory system, a message is not just raw data. It might include the data to be transmitted, a destination address, some information specifying the size of the message, some information for error correction, and so on. So in this setting, latency might be the time it takes to assemble the message on the sending side (the time needed to combine the various parts) and the time to disassemble the message on the receiving side (the time needed to extract the raw data from the message and store it in its destination).
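A small sketch of the transmission-time formula above; the latency, bandwidth, and message size used in main are invented purely for illustration.

#include <stdio.h>

/* message transmission time = latency + message_size / bandwidth */
double transmission_time(double latency_s, double bandwidth_bytes_per_s,
                         double n_bytes) {
    return latency_s + n_bytes / bandwidth_bytes_per_s;
}

int main(void) {
    /* Example: 2 microseconds of latency, 1 GB/s of bandwidth, 1 MB message. */
    double t = transmission_time(2.0e-6, 1.0e9, 1.0e6);
    printf("transmission time = %g seconds\n", t);   /* about 0.001002 s */
    return 0;
}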
4. Cache coherence
Recall that CPU caches are managed by system hardware: programmers don't have direct control over them. This has several important consequences for shared-memory systems. To understand these issues, suppose we have a shared-memory system with two cores, each of which has its own private data cache. See Figure 2.17. As long as the two cores only read shared data, there is no problem. For example, suppose that x is a shared variable that has been initialized to 2, y0 is private and owned by core 0, and y1 and z1 are private and owned by core 1. Now suppose the following statements are executed at the indicated times:

Time   Core 0                           Core 1
0      y0 = x;                          y1 = 3*x;
1      x = 7;                           statement(s) not involving x
2      statement(s) not involving x     z1 = 4*x;
Then the memory location for y0 will eventually get the value 2, and the memory location for y1 will eventually get the value 6. However, it's not so clear what value z1 will get. It might at first appear that since core 0 updates x to 7 before the assignment to z1, z1 will get the value 4 × 7 = 28. However, at time 0, x is in the cache of core 1. So unless for some reason x is evicted from core 0's cache and then reloaded into core 1's cache, it actually appears that the original value x = 2 may be used, and z1 will get the value 4 × 2 = 8.
Note that this unpredictable behavior will occur regardless of whether the system is using a write-through or a write-back policy. If it’s using a write-through policy, the main memory will be updated by the assignment x = 7. However, this will have no effect on the value in the cache of core 1. If the system is using a write-back policy, the new value of x in the cache of core 0 probably won’t even be available to core 1 when it updates z1.
Clearly, this is a problem. The programmer doesn't have direct control over when the caches are updated, so her program cannot execute these apparently innocuous statements and know what will be stored in z1. There are several problems here, but the one we want to look at right now is that the caches we described for single processor systems provide no mechanism for ensuring that when the caches of multiple processors store the same variable, an update by one processor to the cached variable is "seen" by the other processors. That is, that the cached value stored by the other processors is also updated. This is called the cache coherence problem.
Snooping cache coherence
There are two main approaches to ensuring cache coherence: snooping cache coherence and directory-based cache coherence. The idea behind snooping comes from bus-based systems: When the cores share a bus, any signal transmitted on the bus can be "seen" by all the cores connected to the bus. Thus, when core 0 updates the copy of x stored in its cache, if it also broadcasts this information across the bus, and if core 1 is "snooping" the bus, it will see that x has been updated and it can mark its copy of x as invalid. This is more or less how snooping cache coherence works. The principal difference between our description and the actual snooping protocol is that the broadcast only informs the other cores that the cache line containing x has been updated, not that x has been updated.
A couple of points should be made regarding snooping. First, it’s not essential that the interconnect be a bus, only that it support broadcasts from each processor to all the other processors. Second, snooping works with both write-through and write-back caches. In principle, if the interconnect is shared—as with a bus—with write-through caches there’s no need for additional traffic on the interconnect, since each core can simply “watch” for writes. With write-back caches, on the other hand, an extra communication is necessary, since updates to the cache don’t get immediately sent to memory.
Directory-based cache coherence
Unfortunately, in large networks broadcasts are expensive, and snooping cache coherence requires a broadcast every time a variable is updated (but see Exercise 2.15). So snooping cache coherence isn't scalable, because for larger systems it will cause performance to degrade. For example, suppose we have a system with the basic distributed-memory architecture (Figure 2.4). However, the system provides a single address space for all the memories. So, for example, core 0 can access the variable x stored in core 1's memory, by simply executing a statement such as y = x.
(Of course, accessing the memory attached to another core will be slower than accessing "local" memory, but that's another story.) Such a system can, in principle, scale to very large numbers of cores. However, snooping cache coherence is clearly a problem since a broadcast across the interconnect will be very slow relative to the speed of accessing local memory.
Directory-based cache coherence protocols attempt to solve this problem through the use of a data structure called a directory. The directory stores the status of each cache line. Typically, this data structure is distributed; in our example, each core/memory pair might be responsible for storing the part of the structure that specifies the status of the cache lines in its local memory. Thus, when a line is read into, say, core 0's cache, the directory entry corresponding to that line would be updated indicating that core 0 has a copy of the line. When a variable is updated, the directory is consulted, and the cache controllers of the cores that have that variable's cache line in their caches invalidate their copies of the line.
Clearly there will be substantial additional storage required for the directory, but when a cache variable is updated, only the cores storing that variable need to be contacted.
It's important to remember that CPU caches are implemented in hardware, so they operate on cache lines, not individual variables. This can have disastrous consequences for performance. As an example, suppose we want to repeatedly call a function f(i,j) and add the computed values into a vector:
int i, j, m, n;
double y[m];
/* Assign y = 0 */
. . .
for (i = 0; i < m; i++)
   for (j = 0; j < n; j++)
      y[i] += f(i,j);
We can parallelize this by dividing the iterations in the outer loop among the cores. If we have core_count cores, we might assign the first m/core_count iterations to the first core, the next m/core_count iterations to the second core, and so on.
/* Private variables */
int i, j, iter_count;

/* Shared variables initialized by one core */
int m, n, core_count;

iter_count = m/core_count;

/* Core 0 does this */
for (i = 0; i < iter_count; i++)
   for (j = 0; j < n; j++)
      y[i] += f(i,j);

/* Core 1 does this */
for (i = iter_count; i < 2*iter_count; i++)
   for (j = 0; j < n; j++)
      y[i] += f(i,j);

. . .
Now suppose our shared-memory system has two cores, m = 8, doubles are eight bytes, cache lines are 64 bytes, and y is stored at the beginning of a cache line. A cache line can store eight doubles, and y takes one full cache line. What happens when core 0 and core 1 simultaneously execute their codes? Since all of y is stored in a single cache line, each time one of the cores executes the statement y[i] += f(i,j), the line will be invalidated, and the next time the other core tries to execute this statement it will have to fetch the updated line from memory! So if n is large, we would expect that a large percentage of the assignments y[i] += f(i,j) will access main memory, in spite of the fact that core 0 and core 1 never access each others' elements of y. This is called false sharing, because the system is behaving as if the elements of y were being shared by the cores.
Note that false sharing does not cause incorrect results. However, it can ruin the performance of a program by causing many more accesses to memory than necessary. We can reduce its effect by using temporary storage that is local to the thread or process and then copying the temporary storage to the shared storage. We’ll return to the subject of false sharing in Chapters 4 and 5.
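A hedged sketch of that fix for the example above: each core accumulates into a variable that is private to it and writes into the shared vector y only once per row, so the cores no longer update y's cache line on every call to f. The function f and the details of how the cores are launched (e.g., with Pthreads or OpenMP) are assumed to be defined elsewhere.

double f(int i, int j);   /* assumed to be defined elsewhere */

/* Work assigned to one core: my_first and my_last delimit this core's
   block of m/core_count rows of the computation. */
void core_work(double y[], int my_first, int my_last, int n) {
    int i, j;
    double tmp;
    for (i = my_first; i < my_last; i++) {
        tmp = 0.0;              /* private to this core: no false sharing */
        for (j = 0; j < n; j++)
            tmp += f(i, j);
        y[i] += tmp;            /* only one write to the shared line per row */
    }
}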
5. Shared-memory versus distributed-memory
Newcomers to parallel computing sometimes wonder why all MIMD systems aren't shared-memory, since most programmers find the concept of implicitly coordinating the work of the processors through shared data structures more appealing than explicitly sending messages. There are several issues, some of which we'll discuss when we talk about software for distributed- and shared-memory. However, the principal hardware issue is the cost of scaling the interconnect. As we add processors to a bus, the chance that there will be conflicts over access to the bus increases dramatically, so buses are suitable for systems with only a few processors. Large crossbars are very expensive, so it's also unusual to find systems with large crossbar interconnects. On the other hand, distributed-memory interconnects such as the hypercube and the toroidal mesh are relatively inexpensive, and distributed-memory systems with thousands of processors that use these and other interconnects have been built. Thus, distributed-memory systems are often better suited for problems requiring vast amounts of data or computation.
|
Zika virus was first discovered in Africa back in the 1940s in a monkey with a mild fever. Since then, the disease has spread all over the world. In humans, infection during pregnancy can cause a birth defect called microcephaly, which means 'small brain'. In animals, the virus has been found primarily in non-human primates. Most exposed monkeys and apes show no signs of illness; a small number develop a mild, short-lived fever. The virus tends to appear in monkeys and apes that live close to infected humans. A recent study of Brazil's monkeys identified the virus in a small number of animals. So far, no monkey or ape babies have been born with microcephaly from Zika. It is unclear at this time whether the monkeys and apes are getting the virus from humans or vice versa. The prevalence of the virus in non-human primates is also unknown.
Other than in non-human primates, there is no evidence of Zika virus infection causing disease in other animals. One study from Indonesia performed in the 1970s found that the virus could infect livestock and bats, but there are no documented cases of any of these animals transmitting Zika virus to humans. More research is needed to determine whether Zika is a zoonotic disease, meaning animals can infect people (examples are rabies, ringworm and leptospirosis), or a reverse zoonotic disease, meaning people infect animals (an example is MRSA).
Like dengue fever, yellow fever and West Nile virus, Zika virus is transmitted by mosquitos of the Aedes genus. Female mosquitos need the protein contained in blood to lay eggs. When mosquitos bite, they inject saliva into the wound that contains an anticoagulant to keep the victim's blood from clotting. Their saliva can contain all kinds of infectious agents, including viruses, bacteria and parasites (heartworm disease, malaria, etc.) contracted from prior victims. Once infected, a single mosquito can transmit disease to many animals and/or people. When monkeys and apes are infected with Zika, they develop antibodies against the virus in approximately 14 days. The antibodies clear the virus out of the bloodstream, stopping the spread of the disease. Since monkeys and apes are quarantined in screened-in facilities for 31 days when entering the United States, this should prevent the disease from spreading into local mosquitos. Currently, it is unknown if monkeys and apes are reservoirs for the disease.
The bottom line is that Zika virus is not a threat to dogs and cats. There are no studies that show canines or felines can be infected with the virus or spread it to humans.
-‘Questions and Answers: Zika Virus and Animals’, ARIZONA VETERINARY NEWS, April 2016.
-‘Zika and Animals: What we know.’ CENTERS FOR DISEASE CONTROL AND PREVENTION, Update June 8, 2016.
|
|Grinding of Simple Tools - Course: Technique for Manual Working of Materials. Trainees' Handbook of Lessons|
Grinding wheels are manufactured in different forms and structural compositions.
Straight grinding wheels:
The most commonly used grinding wheel for all kinds of tools; it is available in various widths. Since the grinding operation takes place only at the circumference of the wheel, the result is always hollow grinding.
Special grinding wheel for the pointing of drills.
Grinding wheels with flaring or straight (cylindrical) outer face, which - due to their flat end faces - are especially suited for grinding surfaces that must not be hollow-ground, such as lathe tools and planing tools.
3.2. Structural compositions
Grinding wheels consist of abrasives (natural or synthetic) and binding agents.
Abrasives are produced in grain sizes ranging from very coarse to dust-fine and showing different void spaces between the grains - from very wide to very narrow - in the structure of the grinding wheel.
Binding agents may consist of elastic or inelastic materials, which - by the stability of their coherence with the grains of the abrasive - determine the hardness of the grinding wheel.
- If the abrasive grains are to remain in place for a long time, because a soft material is being ground and the edges of the grains therefore wear only slightly, a hard binding agent is used, i.e. a hard wheel.
- If a hard material is to be ground, a soft binding agent is used, so that the rapidly dulling abrasive grains can tear loose quickly to make room for the following sharp grains: a soft grinding wheel.
Since the pressure exerted on the wheel during off-hand grinding differs greatly and the abrasive grains tear loose more quickly, mostly hard grinding wheels are used.
3.3. Selection of the grinding wheels for off-hand sharpening
Corundum (aluminium oxide) wheels:
Soft to medium hard with medium grain size for tool steel and high-speed steel.
Silicon carbide wheels:
Hard with medium to fine grain size for tools with carbide cutting edges.
What kind of wheels are mainly used for off-hand grinding?
In which cases are cup wheels used?
Which kind of wheel is used for sharpening tools made of tool steel?
|
Myths abound with stories of giants, from the frost and fire giants of Norse legends to the Titans who warred with the gods in ancient Greek mythology. However, giants are more than just legend. The supposed remains of Sa-Nakht, a pharaoh of ancient Egypt, may be the oldest known human giant.
As part of ongoing research into mummies, scientists investigated a skeleton found in 1901 in a tomb near Beit Khallaf in Egypt, dating to around 2700 B.C., during the Third Dynasty of Egypt.
Prior work suggested that the skeleton of the man — who would have stood at up to 6 feet 1.6 inches (1.9 meters) tall — may have belonged to Sa-Nakht, a pharaoh during the Third Dynasty. Previous research on ancient Egyptian mummies suggested the average height for men around this time was about 5 feet 6 inches (1.7 m).
Ancient Egyptian kings were likely better fed and in better health than commoners of the era, so they could be expected to grow taller than average. Still, the over-6-foot-tall remains the scientists analyzed would have towered over Ramesses II, the tallest recorded ancient Egyptian pharaoh, who lived more than 1,000 years after Sa-Nakht and was only about 5 feet 9 inches (1.75 m) tall.
In the new study, Habicht and his colleagues reanalyzed the alleged skull and bones of Sa-Nakht. The skeleton’s long bones showed evidence of “exuberant growth,” which are “clear signs of gigantism,” Habicht said.
These findings suggest that this ancient Egyptian probably had gigantism, making him the oldest known case of this disorder in the world. No other ancient Egyptian royals were known to be giants.
|
The eye is a complex organ composed of many small parts, each vital to normal vision. The ability to see clearly depends on how well these parts work together.
Light rays bounce off all objects. If a person is looking at a particular object, such as a tree, light is reflected off the tree to the person's eye and enters the eye through the cornea (clear, transparent portion of the coating that surrounds the eyeball).
Next, light rays pass through an opening in the iris (colored part of the eye), called the pupil. The iris controls the amount of light entering the eye by dilating or constricting the pupil. In bright light, for example, the pupils shrink to the size of a pinhead to prevent too much light from entering. In dim light, the pupil enlarges to allow more light to enter the eye.
Light then reaches the crystalline lens. The lens focuses light rays onto the retina by bending (refracting) them. The cornea does most of the refraction and the crystalline lens fine-tunes the focus. In a healthy eye, the lens can change its shape (accommodate) to provide clear vision at various distances. If an object is close, the ciliary muscles of the eye contract and the lens becomes rounder. To see a distant object, the same muscles relax and the lens flattens.
Behind the lens and in front of the retina is a chamber called the vitreous body, which contains a clear, gelatinous fluid called vitreous humor. Light rays pass through the vitreous before reaching the retina. The retina lines the back two-thirds of the eye and is responsible for the wide field of vision that most people experience. For clear vision, light rays must focus directly on the retina. When light focuses in front of or behind the retina, the result is blurry vision.
The retina contains millions of specialized photoreceptor cells called rods and cones that convert light rays into electrical signals, which are transmitted to the brain through the optic nerve. Rods and cones provide the ability to see in dim light and to see in color, respectively.
The macula, located in the center of the retina, is where most of the cone cells are located. The fovea, a small depression in the center of the macula, has the highest concentration of cone cells. The macula is responsible for central vision, seeing color, and distinguishing fine detail. The outer portion (peripheral retina) is the primary location of rod cells and allows for night vision and seeing movement and objects to the side (i.e., peripheral vision).
The optic nerve, located behind the retina, transmits signals from the photoreceptor cells to the brain. Each eye transmits signals of a slightly different image, and the images are inverted. Once they reach the brain, they are corrected and combined into one image. This complex process of analyzing data transmitted through the optic nerve is called visual processing.
Eye movement and stabilization are accomplished by six extraocular muscles that attach to each eyeball and produce its horizontal and vertical movements and rotation. These muscles are controlled by impulses from the cranial nerves that tell the muscles to contract or to relax. When certain muscles contract and others relax, the eye moves.
The six muscles and their function are listed here:
- Lateral rectus–moves the eye outward, away from the nose
- Medial rectus–moves the eye inward, toward the nose
- Superior rectus–moves the eye upward and slightly outward
- Inferior rectus–moves the eye downward and slightly inward
- Superior oblique–moves the eye inward and downward
- Inferior oblique–moves the eye outward and upward
There are five different types of eye movements:
- Saccades–looking from object A to object B
- Pursuit–smoothly following a moving object
- Convergence/divergence–both eyes turning inward/outward simultaneously
- Vestibular–eyes sensing and adjusting to head movement via connections with nerves in the inner ear
- Fixation maintenance–minute eye movements during fixation
Eyelids, Eyelashes, Conjunctiva
The eyelids are moveable folds of skin that protect the front surface of the eyes. They close the eyes and blink, which moistens the surface of the eyes and removes debris. The eyelashes (also called cilia) are hairs that grow at the edge of the eyelids and remove minute particles of debris away from the surface of the eyes. The conjunctiva is the thin, transparent, mucous membrane that lines the eyelids and covers the front surface of the eyeballs. The section that lines the eyelids appears red in color because it contains many blood vessels. The section that covers the cornea appears white because of the sclera behind it.
Tear Production and Elimination
Tears perform vitally important functions:
- Carry bacteria-fighting compounds to the eye
- Carry nutrients to and waste products away from the eye
- Keep the eye moist
- Provide a smooth refracting surface
- Remove debris from the eye
Tear components are produced by the lacrimal gland, several other small glands, and cells within the eyelid. As the eyelid closes, tears are swept downward, toward the nose, and enter the puncta (openings in the upper and lower lids, close to the nose). As the eyes blink, tears are forced through narrow channels into the lacrimal sac. Once the muscles relax and the eye opens, the tears move from the sac to the nasolacrimal duct and into the nose. This accounts for stuffy, runny noses when crying.
Aqueous Humor Production and Elimination
Aqueous humor is a nutritive, watery fluid produced by the ciliary body through the ciliary body processes and secreted into the posterior chamber (i.e., the space between the iris and the lens). It maintains pressure and provides nutrients to the lens and cornea. Aqueous humor diffuses through the pupil into the anterior chamber (between the lens and cornea) and is reabsorbed into the venous system by two routes:
- Through the trabecular meshwork (collagen cords that form a spongelike, three-dimensional net) into the canal of Schlemm, which carries it into the venous system. Responsible for 80–90 percent of aqueous drainage.
- Through the anterior ciliary body directly into larger blood vessels (called uveal-scleral outflow pathway). Responsible for 10–20 percent of aqueous drainage.
|
The early explorers brought Longhorn cattle of Spanish ancestry to the island settlements in the "New World." In 1521, Cortes stocked his Mexican ranch with cattle from the island settlements of Cuba and Santo Domingo. In 1541, Coronado seems to have brought the first cattle from Mexico to the area that is now Texas.
LaSalle planted cattle in what is now Texas near Lavaca Bay (literally, "The Cow Bay") in 1685. DeLeon brought cattle to what is now Texas on his four expeditions from 1687 to 1690. Subsequent expeditions spread the cattle herds even farther across present-day Texas.
The two previous paragraphs describe how the Longhorn cattle population arrived; however, nothing extensive was done with it until after the mid-1860s, when the U.S. Civil War had left cattle herds decimated and the Texas fever tick plagued the cattle drives of the day.
The Texas Longhorn could survive the long drive to Abilene, Kansas, for sale and was also seemingly immune to the bite of the fever tick. This immunity seems to have developed over the generations and was a genetic trait in the herds of that time.
There are seven foundation families of the Texas Longhorn:
1. The Milby Butler - started with cattle rescued from shipment to Cuba by way of the family-owned stock pens in League City, Texas. The original herd included duns from the Gulf Coast and white cattle with colored points from East Texas. Butler had up to 600 head of Longhorns, which are unique in several ways. These cattle are bred for horns. The cows were carefully selected and put with bulls he thought would produce length, base and corkscrew shape in the horns of the offspring. Butler cattle possess several unique skeletal differences, including topline, tailhead attachment, "crocodile eyes" and colors not found in other families of Longhorn.
2. The M.P. Wright - begun in the early 1920s as Wright selected out cattle that were brought to his slaughterhouse. Pure Wright cattle lines are very rare today, and probably the most distinctive of the seven. The herd was nearly all duns, reds and line-backed cattle, long in body but not especially tall. They share with their ancestors a tendency toward long, well-shaped horns.
3. The Emil Marks - becoming very rare, found its beginnings on Marks' Barker Ranch just west of Houston, Texas. Marks valued the traditional functionality of the Texas Longhorn, and the Marks line contributed much of this to the present breed. When asked what makes a good cow, Marks would reply, "A good cow has a good long body, a long hip, she stands up, she can travel and she has good legs. She has a calf every year."
4. The Cap Yates - originated mostly from West Texas and northern Mexico, where "survival of the fittest" is a daily reality. Cap Yates strongly believed the purity of his Longhorns was of utmost importance. They are generally beefier and larger-framed, with relatively shorter horns. Horns are typically fairly "high," with an upward twist second to none. The other dominant characteristics of the Yates cattle are ruggedness, good survival instincts and an independent streak. This family really exemplifies the spirit of the Old West and what all Texas Longhorns truly stand for.
5. The Wichita Refuge - mostly from West Texas and northern Mexico, with additions over the years made from the Cap Yates herd or Mexican cattle from Clower. This family lives on the Wichita Mountains Wildlife Refuge in Oklahoma. The Refuge mandate is specifically conservation of the traditional Texas Longhorn.
6. The Jack Phillips - began on the Battle Island Ranch near Columbia, Texas. Jack learned a great deal from his friend, Graves Peeler, who brought him some cows and one of the dun-colored bulls he had found in Mexico. They all had pretty good horns and were big cattle. Throughout the 1930s, Phillips accumulated more "typical" Longhorns from area ranchers. He looked for long-bodied, long-headed cows with a high tailhead and a tail with a heavy brush dragging the ground. Jack also liked the old Texas-twisty horns.
7. The Graves Peeler - reflects the characteristics of the man who started it. Peeler, a true legend among Longhorn breeders, chose his cattle more for "personality" than for any one physical attribute. Peeler liked his Longhorns tough and independent, and did not object if there was a little wildness thrown in. Also important was productivity: a cow that failed to calve was not likely to stay long.
|Name: Texas Longhorn|
|Scientific name: Bos taurus|
|Range:Southwestern and Midwestern United States Grasslands|
|Status: Not threatened|
|Diet in the wild: Prairie grass|
|Diet in the zoo: Herbivore Diet|
|Location at the Zoo: Texas Wild! Hill Country|
|Physical Description: At four years old, weighs about 800 pounds (365 kg); reaches maturity at 8 to 10 years old. Height about 60" at the shoulder; length up to 100". Various colorations are present in Longhorns.|
|General Information:The Longhorns live in herds consisting of males, females and their progeny.|
|Special anatomical, physiological or behavioral adaptations: Resistant to ticks and able to travel over 1,000 miles (1,600 km) while feeding along the way; described as "a wild, fierce breed with huge horns and long legs," with a long and slender build, tucked-up flanks, catlike hams and a thick, tough hide.|
|Personal Observations: These cows are somewhat calm and persistent animals. Longhorns have no real problems in the wild besides predatory animals and the need to avoid habitats during droughts or frosts with inadequate forage to support a herd. These problems are often offset by a rancher or farmer providing for the animals.|
Materials and related links:
The Longhorn Crossing by Walter Long
|
Teaching kids to identify trees is a worthwhile, but challenging task – especially at the outset.
Adults rarely know where to start.
Most online resources are too difficult for children to follow. They're full of obscure terminology, and they primarily focus on minutiae that barely interest even dendrologists. Field guides are often valuable for those who are already familiar with trees, but they're rarely user-friendly for novices.
Fortunately, you don’t need these things to teach your kid how to identify trees.
Whether you are trying to teach a scout group, classroom or your own kids, you only have to worry about doing three things when teaching children to identify trees:
Follow these steps and your kids will be tree-identifying machines in no time.
The following five species are common in the Piedmont region, and to a lesser extent, the rest of the eastern U.S. They all possess pretty distinctive characteristics and remain relatively easy to identify throughout the year.
The beech is a large and majestic species that typically stands out in the forest. But beeches are not only beautiful, they are important too, as rodents, deer, pigs and birds all consume the edible nuts the trees produce.
The massive root systems of beech trees often produce a number of secondary stems, so you’ll often see dozens of small beech trees spring up around a larger, central tree. Technically, these trees are all part of the same organism, and each one is actually a stem, rather than an individual tree.
This type of clonal growth pattern means that beech trees often dominate the forests in which they live. And because they’re a long-lived species – several specimens have been documented living for 350 years or more — they play a huge role in the forest’s development.
The smooth grey bark of beech trees makes them a breeze for kids to learn. And while they’re easy to identify in any season, they’re especially easy to recognize in the winter, as they typically retain their dead leaves until the following spring.
Unfortunately, the smooth bark of beech trees makes them the targets of vandals who have nothing better to do than carve messages into the bark. A fact you’ll no doubt notice as you start paying more attention to them. I encourage you to discuss this problem with your kids and try to convey the importance of respecting the natural world.
Pine trees are exceedingly common throughout the eastern U.S. Most pines are early-succession species that quickly colonize fields and other wide-open habitats. But as forests age, many of the pines are pushed out by the oaks, hickories and other late-succession species that move in after the pines.
But while they grow naturally in the eastern U.S., they’re also a favorite of land managers and developers, who plant them by the truckload. Given the fact that pines are cheap, hardy and grow quickly, it is easy to see why they provide value in these contexts.
Pines are highly valuable to wildlife, as they not only produce edible pine nuts, but they provide excellent nest sites for many birds. Also, because some pines are vulnerable to wood-softening rots as they age, they are very important for woodpeckers and other cavity-nesting species.
The biggest challenge most kids will have while learning to identify pine trees is learning to distinguish them from red cedars (Juniperus virginiana) and other conifers.
Just take some time and concentrate on the fact that pines have bundled needles. None of the other conifers, including hemlocks, arborvitaes and every species between, bear clustered needles.
With practice, they’ll learn to associate the growth habit and bark of pines with the clustered needles, which will allow them to identify pines from a distance.
River birches are iconic trees that most commonly grow in the floodplains and forests bordering lakes and rivers. In fact, they’re ideally suited for such low-lying habitats. They not only cope well with flooded conditions but their seeds float, which helps them to colonize distant shores.
But while river birch trees naturally grow in riparian areas, they are also common far from water, as they’re popular with many homeowners, developers and landscapers. As long as the region receives adequate rain or the landowner provides supplemental irrigation, river birches will often thrive in these types of upland locations.
You can identify river birch trees by noting their thin, papery bark.
It is helpful to emphasize that the bark is peeling away from the trunk. Using the word "peeling," as opposed to "rough" or "shaggy," paints a more vivid picture for young tree lovers to wrap their heads around.
Additionally, river birches frequently grow as multi-trunk clusters, rather than as single-stemmed trees. This can make them easy to recognize before you even get close enough to note the peeling bark.
Southern magnolias are stately trees, who reach moderately large sizes on good sites.
They can be seen growing naturally in various types of forests, but they’re also commonly planted as shade or ornamental trees in the southern and eastern United States. This means you probably won’t have to walk through the woods to find magnolias — just keep your eyes peeled while driving through the suburbs.
Magnolias are important food sources for animals ranging from squirrels to opossums to deer, and many insects relish the pollen found in their flowers. Magnolias provide some of the densest shade of any native tree, although their tendency to retain lower branches means that this shade is hard to access.
You can teach youngsters to identify southern magnolias by noting their thick, dark green leaves, which cling to the tree all year long. Once your kids develop a strong mental image of the basic magnolia tree aesthetic, they’ll have no trouble spotting them at a glance.
And although children are unlikely to need to consider additional characteristics when identifying magnolia trees, the trees’ summer-blooming flowers are gigantic and easy to recognize from a distance.
The American holly is another common evergreen that is native to eastern forests. Although individuals of this species can and do grow as trees, many amount to little more than shrubs. Hollies are critically important for several songbirds, who feed on their berries and make nests amid the dense foliage.
Armed with prickly evergreen leaves, the holly would be easy to recognize even if it didn't produce bright red berries. The trunk is typically pale and moderately distinctive, but it is not the best criterion for youngsters to consider when trying to identify the tree.
It is important to note that dozens of horticultural varieties (called cultivars) are planted on residential and commercial properties. Many of these lack the wild holly’s pointy leaves, so you’ll want to introduce them to your kids in a forest setting to avoid confusion.
If you teach your youngsters to look for the characteristics mentioned above, they should be able to learn all five of these species without much trouble. Just remember to start slow and wait for them to master one species before moving on to another one.
There are plenty of other species that youngster could learn to recognize. Palm trees and sycamores are common in some areas, and both exhibit pretty distinctive characteristics. You could also select a locally abundant ornamental species, like crepe myrtles or pear trees.
Ultimately, it doesn’t matter which species you select, as long as you focus on a distinctive characteristic and give them plenty of time to practice spotting it.
|
As educators, we know the power of a good rubric. Well-crafted rubrics facilitate clear and meaningful communication with our students and help keep us accountable and consistent in our grading. They’re important and meaningful classroom tools.
Usually when we talk about rubrics, we’re referring to either a holistic or an analytic rubric, even if we aren’t entirely familiar with those terms. A holistic rubric breaks an assignment down into general levels at which a student can perform, assigning an overall grade for each level. For example, a holistic rubric might describe an A essay using the following criteria: “The essay has a clear, creative thesis statement and a consistent overall argument. The essay is 2–3 pages long, demonstrates correct MLA formatting and grammar, and provides a complete works cited page.” Then it would list the criteria for a B, a C, etc.
An analytic rubric would break each of those general levels down even further to include multiple categories, each with its own scale of success—so, to continue the example above, the analytic rubric might have four grade levels, with corresponding descriptions, for each of the following criteria points: thesis, argument, length, and grammar and formatting.
Both styles have their advantages and have served many classrooms well. However, there’s a third option that introduces some exciting and game-changing potential for us and our students.
The single-point rubric offers a different approach to systematic grading in the classroom. Like holistic and analytic rubrics, it breaks the aspects of an assignment down into categories, clarifying to students what kinds of things you expect of them in their work. Unlike those rubrics, the single-point rubric includes only guidance on and descriptions of successful work—without listing a grade, it might look like the description of an A essay in the holistic rubric above. In the example below, you can see that the rubric describes what success looks like in four categories, with space for the teacher to explain how the student has met the criteria or how he or she can still improve.
A single-point rubric outlines the standards a student has to meet to complete the assignment; however, it leaves the categories outlining success or shortcoming open-ended. This relatively new approach creates a host of advantages for teachers and students. Implementing new ideas in our curricula is never easy, but allow me to suggest six reasons why you should give the single-point rubric a try.
1. It gives space to reflect on both strengths and weaknesses in student work. Each category invites teachers to meaningfully share with students what they did really well and where they might want to consider making some adjustments.
2. It doesn’t place boundaries on student performance. The single-point rubric doesn’t try to cover all the aspects of a project that could go well or poorly. It gives guidance and then allows students to approach the project in creative and unique ways. It helps steer students away from relying too much on teacher direction and encourages them to create their own ideas.
3. It works against students’ tendency to rank themselves and to compare themselves to or compete with one another. Each student receives unique feedback that is specific to them and their work, but that can’t be easily quantified.
4. It helps take student attention off the grade. The design of this rubric emphasizes descriptive, individualized feedback over the grade. Instead of focusing on teacher instruction in order to aim for a particular grade, students can immerse themselves in the experience of the assignment.
5. It creates more flexibility without sacrificing clarity. Students are still given clear explanations for the grades they earned, but there is much more room to account for a student taking a project in a direction that a holistic or analytic rubric didn’t or couldn’t account for.
6. It’s simple! The single-point rubric has much less text than other rubric styles. The odds that our students will actually read the whole rubric, reflect on given feedback, and remember both are much higher.
You’ll notice that the recurring theme in my list involves placing our students at the center of our grading mentalities. The ideology behind the single-point rubric inherently moves classroom grading away from quantifying and streamlining student work, shifting student and teacher focus in the direction of celebrating creativity and intellectual risk-taking.
If you or your administrators are concerned about the lack of specificity involved in grading with a single-point rubric, Jennifer Gonzalez of Cult of Pedagogy has created an adaptation that incorporates specific scores or point values while still keeping the focus on personalized feedback and descriptions of successful work. She offers a brief description of the scored version along with a very user-friendly template.
While the single-point rubric may require that we as educators give a little more of our time to reflect on each student’s unique work when grading, it also creates space for our students to grow as scholars and individuals who take ownership of their learning. It tangibly demonstrates to them that we believe in and value their educational experiences over their grades. The structure of the single-point rubric allows us as educators to work toward returning grades and teacher feedback to their proper roles: supporting and fostering real learning in our students.
|
Spanish is an adjective relating to Spain, its people, or their culture. The demonym for a denizen of Spain is “Spaniard”, but collectively the population of Spain may be referred to as “the Spanish”. Spanish also describes the chief language of Spain and many areas now or formerly under Spanish control. Other commonly used terms using this word include the Spanish Civil War, the Spanish‑American War, the Spanish Armada, and Spanish fly (a green beetle thought to be an aphrodisiac). Famous figures in Spanish history include Pablo Picasso, El Cid and Manuel de Falla.
|
Peach canker is a fungus disease common on apricot, prune, plum, and sweet cherry trees as well as on peach trees. The disease is common in peach orchards and is a frequent cause of limb dying and death of peach trees. Other common names for peach canker are perennial canker and Valsa canker. The fungi that cause this disease enter the plant through wounds. Infection results in dead and weakened twigs and branches, and in reduced yields.
|Figure 1. Peach canker in a narrow-angled crotch. Gummy exudate is present.|
The first symptoms appear in early spring as gummy drops of sap around wounded bark. The diseased inner bark begins to break down, causing the cankered surface to appear depressed. Black specks, which are fungal spore producing bodies, appear on the bark surface or under the bark tissue. During wet periods spores ooze out of these fungal bodies in tiny orange or amber colored, curled strands.
During the summer, healthy bark (callus tissue) grows over the edges of the narrow, oval shaped cankers. In the fall, the fungus resumes growth, attacking and killing the new callus tissue. Over a period of years, a series of dead callus ridges form as the canker gets larger. Eventually, the canker may completely surround a branch. The portion of the branch beyond the canker then dies. Large amounts of gum are usually produced around cankered areas.
Peach canker is often confused with other problems which cause cankering and gumming. Among these are bacterial canker, insect borer injury and mechanical injury. When insects are involved, chewed-up wood dust is usually visible under the gum. Mechanical injury can often be verified by carefully reviewing recent operations in the area.
Peach canker is caused by the fungi, Cytospora leucostoma and Cytospora cincta. These fungi are weak pathogens and generally do not attack healthy, vigorous peach bark. Winter injury, insect damage, and mechanical injury are common types of wounds serving as entry points. The fungi survive the winter in cankers or in dead wood. During spring and summer, spores produced in the cankers are spread by wind and rain to wounds on the same or nearby trees. The spores are not blown over long distances in the wind. Infection and canker development depend on temperature and the species of fungus involved. Cytospora cincta is favored by lower temperatures than Cytospora leucostoma. Because of the manner of infection and development of this disease, no single control measure is adequate. Most known control methods act indirectly by reducing points of entry or by reducing the level of inoculum. Fungicides are generally ineffective for controlling this disease.
Control can be facilitated by following these guidelines:
|Figure 2. Peach canker.|
- Prune young trees carefully to avoid weak, narrow-angled crotches. Narrow-angled crotches are frequent sites of breakage and winter injury.
- Delay pruning until early spring. This promotes quick healing. Remove cankered branches and dead wood while pruning. Do not leave protruding pruning stubs. Cut flush to the next larger branch.
- Eradicate cankers and remove badly cankered limbs, branches or trees. Burn or remove all cankered limbs soon after pruning. These limbs or branches serve as a reservoir for the disease causing fungi. Sanitation is critical, especially during the early life of the orchard.
- Do not plant new peach trees near established trees with canker.
- Avoid mechanical and insect injury.
- Promote vigorous, healthy peach trees with proper fertilization, pruning, and water.
- Do not over-fertilize late in the season. Winter injury is more common on these trees because winter hardening is delayed.
- White latex paint applied to the southwest side of trunks and lower scaffold branches may help avoid cold injury.
- Maintain a good control program for other diseases and insect pests, especially borers.
For the most current spray recommendations, commercial growers are referred to Bulletin 506, Midwest Fruit Pest Management Guide, and backyard growers are referred to Bulletin 780, Controlling Diseases and Insects in Home Fruit Plantings. These publications can be obtained from your county Extension office or the CFAES Publications online bookstore at estore.osu-extension.org.
This fact sheet was originally published in 2008.
|
Seeds and oils from the plant that produces the loofah sponge could help purify wastewater and prevent the spread of waterborne diseases in the developing world, according to a scientist speaking today at the 17th Annual Green Chemistry & Engineering Conference in Bethesda, Md. The low-cost, biodegradable seeds and substances made from oils of these seeds are particularly effective at absorbing heavy metals and other potentially harmful organic compounds from polluted water, he said.
Adewale Adewuyi, Ph.D., a lecturer at Redeemer’s University in Mowe, Nigeria, notes that rain water, rivers and streams are the most common direct sources of drinking water in many developing countries. Often, this water is polluted with substances from factories and agricultural runoff, which can harm both people and animals. In 2010, for instance, lead poisoning in Nigeria — which was later linked to industrial wastewater — claimed the lives of more than 500 children in less than seven months, he reported.
Absorbents, such as activated carbon, can help absorb these pollutants from water. However, they work slowly, are only effective in a limited pH range and are expensive. To help overcome this problem, Adewuyi turned to the seeds of the Luffa cylindrica plant. This plant, commonly known as sponge gourd, produces sponge-like fruit — loofahs — that are used as bathing brushes by millions of people worldwide. But Adewuyi says the seeds, which are plentiful, are considered environmental waste. As a result, they are underutilized.
In laboratory tests, he isolated oil from L. cylindrica seeds and used it to produce detergent-like substances called surfactants. These surfactants enhanced the seed’s absorption capacity in cleaning wastewater. He found the new product was cheaper and more effective than existing absorbents. Adewuyi is currently exploring whether other underutilized seeds and oils could have the same effect.
“It’s a win-win process,” he says. “It’s cost-effective, green, reproducible and, of course, applicable in developing countries because it is very easy to start up and maintain.”
Article appearing courtesy Celsias.
|
The basic reason for this is that diverting corn to ethanol production means that there is less corn to be made into animal feed. When this happens, the price of that corn goes up. When the price of corn goes up, the price of animal feed goes up. When the price of animal feed goes up, the price of the meat raised on that feed goes up. All of these price increases come about because of decreases in supply.
Supply is defined as the amount of a product that producers can and will sell at a given price. One of the things that makes supply decrease is when the producers have an alternative use for the product. That is what is happening with the corn for animal feed. When corn is diverted to ethanol, the supply of corn for animal feed goes down. When supply goes down (all else being equal) price goes up. So, the price of corn for animal feed rises.
Another thing that causes supply to decrease is an increase in the cost of inputs. Corn is an input of animal feed. Therefore, if the price of corn for animal feed rises, the supply of animal feed drops and its price rises. Animal feed is an input for meat. When the price of animal feed rises, the supply of meat falls and its price increases.
Through these steps, an increase in production of corn ethanol raises the price of meat.
|
The Indo-European languages include some 150 languages spoken by about 3 billion people and cover most of the major language groups of Europe and western Asia, all belonging to a single superfamily.
The hypothesis that this was so was first proposed by Sir William Jones, who noticed similarities between four of the oldest languages known in his time: Sanskrit, Latin, Greek and Persian. Systematic comparison of these and other old languages conducted by Franz Bopp supported this theory. In the 19th century, scholars used to call the group "Indo-Germanic languages". However, when it became apparent that the connection is relevant to most of Europe's languages, the name was expanded to Indo-European. An example of this was the strong similarity discovered between Sanskrit and older dialects of Lithuanian.
The common ancestral (reconstructed) language is called Proto-Indo-European (PIE). There is disagreement as to the geographic location where it originated, with Armenia and the area to the north or west of the Black Sea being prime candidates.
The various subgroups of the Indo-European family include:
Most spoken European languages belong to the Indo-European superfamily. There are, however, language families which do not. The Finno-Ugric language family, which includes Hungarian, Estonian, Finnish and the languages of the Saami, is an example. The Caucasian language family is another. The Basque language is unusual in that it appears to be separate from all other language families.
The Maltese language and Turkish are two examples of languages spoken in Europe that have definite non-European origins: Turkish is Turkic, and Maltese is largely derived from Arabic.
It has been proposed that Indo-European languages are part of the hypothetical Nostratic language superfamily; this theory is controversial.
|
The root is generally a non-green (it contains no chlorophyll), underground part of the plant. It is positively geotropic (grows toward gravity), positively hydrotropic (grows toward a source of water) and negatively phototropic (grows away from the source of light). Roots bear no nodes, internodes, leaves, flowers or fruits. Buds are absent, except when roots take part in vegetative propagation, as in Ipomoea batatas, Dalbergia, Populus, Dahlia, etc.
Structure of the root - The root consists of a root cap and, in the subapical region, fine thread-like root hairs. Root branches are endogenous in origin (developing from the pericycle, in between two protoxylem points).
Parts of the root - In a typical root there are five distinctive zones, running from the apex to the base.
These parts are as follows:
1. Root cap - A cap-like covering over the tip of the root that protects it against friction and against wear and tear from soil particles. The root cap also helps in graviperception. In different situations, root caps are modified to give protection: in aquatic plants, root pockets are found instead of root caps; they are non-renewable and act as balancers. Multiple root caps are observed on the aerial roots of some plants, for example Pandanus.
2. Growing point of the root - A region of meristematic cells, about 1 mm long. This growing part produces new cells for the root cap and for the basal part of the root.
3. Zone of elongation - Lies behind the tip and is 4-8 mm long. It is responsible for the elongation of the root, so the cells of this area elongate rapidly.
4. Root hair zone - This is the zone of cell maturation, where xylem and phloem differentiate. The region is 1-6 cm long. To increase the absorptive area, the outer cells give rise to tubular root hairs.
5. Zone of mature cells - The bulk of the root is made up of this zone.
Primary Functions of Root:
1. It anchors the plant body firmly in the soil.
2. Water, which is essential for plants, and dissolved nutrients are absorbed and transported with the help of the root.
3. Water and dissolved minerals form a continuous column through the xylem of the root, aiding uptake by the plant.
4. Roots prevent soil erosion, as they hold loose soil together.
5. Nitrogenous salts, which are essential for plants, are absorbed only through the root.
Special functions of roots:
Roots can act as a storehouse of food for the plant, a reproductive organ, a pillar or supporting structure for the plant, an aid for climbers, etc.
Root system – A complex of one type of root and its branches is called a root system. Root systems are of two types:
1. Tap root system - A complex formed by the tap root and the different types of branches it bears, e.g. mango, peepal.
2. Adventitious root system – A complex formed by roots that develop from parts of the plant other than the primary root or its branches.
It can develop from any part of the plant.
|
While the Apollo 11 landing was on the cutting-edge of technology in 1969, today it's a demonstration of how much could be accomplished with so little.
The computing technology of the average cell phone far exceeds the combined computing power of the two spacecraft that got humans to the moon and home safely.
That doesn't make Apollo 11's technological feats any less impressive, however. The lunar module, for example, flew only twice with astronauts inside before Apollo 11. The hand-stitched, walkable spacesuits that Neil Armstrong and Buzz Aldrin wore on the moon when they stepped onto its face for the first time 45 years ago this month were not used before landing on the lunar surface. [Apollo 11 Moon Landing 45th Anniversary: Complete Coverage]
Here's a brief look at some of the technology that got the United States to the moon and back:
Saturn V rocket
There were several modes of transportation that NASA could have chosen to take to the moon. For example, engineers could have launched two big rockets and then docked the various spacecraft and components in Earth orbit. But it was lunar-orbit rendezvous that made the Saturn V rocket possible.
Several NASA engineers proposed the concept over the years, but one of the most famous is John Houbolt, then the assistant chief of the Dynamic Loads Division at NASA Langley. NASA said the main benefit of the approach was that the lunar lander only had to be a small craft, since an Earth-orbit rendezvous required more fuel to get back home.
The decision to go with this kind of mission made it possible to launch the entire mission on a single Saturn V rocket. Even at that, it was a monster. The three-stage rocket stood 363 feet (111 meters) tall — taller than the Statue of Liberty — and fully fuelled, it weighed about 6.2 million pounds (2.8 million kilograms).
NASA tested the Saturn V twice without humans, in 1967 (Apollo 4) and 1968 (Apollo 6). That same year, NASA elected to use the Saturn V to send humans all the way to the moon. That mission, Apollo 8, made its crew the first humans to go to the moon and orbit it.
After that successful flight, the Saturn V flew the rest of the Apollo missions, including Apollo 11. The last flight of the rocket was in 1973, when (without a crew) it launched the Skylab space station to orbit. [The Future of Moon Exploration]
The lunar module (LM) was the first spacecraft designed to operate on another world. Unlike a spacecraft made to function in the atmosphere (like the space shuttle, which looked like a plane) the LM was all strange angles and bumps.
This is because in space, there would be no atmosphere to worry about. Shaving the aerodynamic features helped save weight, cutting down the costs of launching the spacecraft all the way to the moon.
The LM was first tested with humans on Apollo 9, which dubbed its spacecraft "Spider." The astronauts flew it away from the command module (which was designed to orbit the moon) and practiced a docking. The LM then flew on every subsequent mission.
During a simulated lunar landing approach on Apollo 10, the LM spun for a few moments just as it was supposed to make its way back to the command module. The astronauts quickly got the craft, which (through a series of small errors) was pointing the wrong way, back under control.
Armstrong famously took control of his LM, dubbed "Eagle," during the landing when he saw the guidance system was moving it towards a rock field. He landed safely with very little fuel left.
The LM successfully made it to the moon on the rest of its missions, save one. On Apollo 13, the command module "Odyssey" experienced an oxygen tank explosion. The astronauts didn't land on the moon, but they did use the LM "Aquarius" as a "lifeboat" to keep them alive and to make course corrections to bring them back to Earth safely.
Odyssey was the only spacecraft with a heat shield, but Aquarius kept the astronauts safe long enough for them to transfer back to Odyssey for their landing in the Pacific Ocean.
While the Apollo missions are best remembered for the computers the astronauts operated, there were several other important computers used for the mission. One example is the Saturn V computer that was used to guide the rocket into Earth orbit, automatically. NASA also had large computers on the ground that it could use for things like navigation corrections.
On lunar missions, however, the bulk of the attention was focused on the command module computer and the lunar module computers. The CM was in charge of navigating the crew between the Earth and the moon, while the LM did landings, ascents and rendezvous, according to NASA.
The CM and the LM each had a computer (with different software, but the same design) called the Primary Guidance, Navigation, and Control System (PGNCS, pronounced "pings"). The LM also had a computer which was a part of the Abort Guidance System, to give a backup if the PGNCS failed during the landing.
"Ground systems backed up the CM computer and its associated guidance system so that if the CM system failed, the spacecraft could be guided manually based on data transmitted from the ground," NASA stated. "If contact with the ground were lost, the CM system had autonomous return capability."
Unlike the space shuttle's spacesuit, each Apollo suit was custom-tailored for the individual astronaut who would wear it. The suits were designed to be fully operational in the vacuum of space and also for walking around on the moon.
According to NASA, each mission required 15 suits. The main (prime) crew had nine suits between them, with one used for flight, one for training and one as a back-up in case something happened with the flight suit. The backup crew of three people required six suits between them: one for flight and one for training.
The construction of the spacesuit changed over the missions as the requirements of astronauts became more complex. For example, designers changed the suit waist for Apollo 15, 16 and 17 so that the astronauts could use lunar rovers more easily, allowing them to do more complex geological expeditions farther from the lunar module. [How the Apollo 11 Moon Landing Works (Infographic)]
There were several layers to the suit. The inside was a sort of "long john" garment that included cooling water tubes sewn to the material, to keep the astronauts cool while working on the lunar surface. After that were several layers of nylon, Kapton, glass-fiber cloth, Mylar and Teflon to maintain pressure and protect the astronauts from radiation and micrometeoroids.
Lunar gloves and boots were included for walking around on the moon's surface and picking up rocks. To help the astronauts "feel" objects as they picked them up, the glove fingertips included silicone rubber.
Attached to the suit was a polycarbonate helmet, which attached using a neck ring that stayed in place as the astronaut moved his head. Another important supplement to the suit was the portable life support system, a backpack that allowed astronauts to breathe and maintain suit pressure for up to seven hours on the surface.
|
2. Land Classifications
3. Forest Parameters
5. Additional terms
Forests are crucial for the well-being of humanity. They provide foundations for life on earth through ecological functions, by regulating the climate and water resources, and by serving as habitats for plants and animals. Forests also furnish a wide range of essential goods such as wood, food, fodder and medicines, in addition to opportunities for recreation, spiritual renewal and other services.
Today, forests are under pressure from expanding human populations, which frequently leads to the conversion or degradation of forests into unsustainable forms of land use. When forests are lost or severely degraded, their capacity to function as regulators of the environment is also lost, increasing flood and erosion hazards, reducing soil fertility, and contributing to the loss of plant and animal life. As a result, the sustainable provision of goods and services from forests is jeopardized.
FAO, at the request of the member nations and the world community, regularly monitors the world's forests through the Forest Resources Assessment Programme. The next report, the Global Forest Resources Assessment 2000 (FRA 2000), will review the forest situation by the end of the millennium. FRA 2000 will include country-level information based on existing forest inventory data, regional investigations of land-cover change processes, and a number of global studies focusing on the interaction between people and forests. The FRA 2000 report will be made public and distributed on the world wide web in the year 2000.
The Forest Resources Assessment Programme is organized under the Forest Resources Division (FOR) at FAO headquarters in Rome. Contact persons are:
Robert Davis FRA Programme Coordinator email@example.com
Peter Holmgren FRA Project Director firstname.lastname@example.org
or use the e-mail address: email@example.com
The Global Forest Resources Assessment 2000 (FRA 2000) will report on the state of the world's forests by the year 2000. The framework for FRA 2000 was set by the Expert Consultation held in Kotka, Finland (Kotka III) in June 1996 (Nyyssönen & Ahti 1996). The objectives of the meeting were to agree on the FRA 2000 agenda and on how to respond to new information requirements for the year 2000 assessment.
This paper reflects considerable effort made to develop common terms and definitions that can be applied to forest resources assessment. The process began at the Kotka III meeting, where a preliminary set of definitions was reviewed and edited by 32 experts from both developing and industrialized countries, and continued in the Team of Specialists meetings intended to harmonize definitions at the global level. In some cases, compromises between, or adjustments of, existing terms have been necessary.
FRA 2000 Information Content
Land cover (forest and other wooded land)
Forest area for wood supply
Felling and Removals
Non-wood forest products and forest services
A hierarchic scheme has been defined for classification of land cover. The scheme focuses on forest and other wooded land and does not distinguish sub-classes within, for instance, agricultural land. The general classification is defined below and in Figure 1. The sub-classes within forest and other wooded land are described in sections 2.1.2 and 2.1.3 respectively.
|Land cover class||Definition|
|Total area1||Total area (of country), including area under inland water bodies, but excluding offshore territorial waters.|
|Forest||Land with tree crown cover (or equivalent stocking level) of more than 10 percent and area of more than 0.5 hectares (ha). The trees should be able to reach a minimum height of 5 meters (m) at maturity in situ. May consist either of closed forest formations, where trees of various storeys and undergrowth cover a high proportion of the ground, or open forest formations with a continuous vegetation cover in which tree crown cover exceeds 10 percent. Young natural stands and all plantations established for forestry purposes which have yet to reach a crown density of 10 percent or tree height of 5 m are included under forest, as are areas normally forming part of the forest area which are temporarily unstocked as a result of human intervention or natural causes but which are expected to revert to forest.
Includes: forest nurseries and seed orchards that constitute an integral part of the forest; forest roads, cleared tracts, firebreaks and other small open areas; forest in national parks, nature reserves and other protected areas such as those of specific scientific, historical, cultural or spiritual interest; windbreaks and shelterbelts of trees with an area of more than 0.5 ha and width of more than 20 m; plantations primarily used for forestry purposes, including rubberwood plantations and cork oak stands.
Excludes: land predominantly used for agricultural practices.|
|Other wooded land||Land either with a crown cover (or equivalent stocking level) of 5-10 percent of trees able to reach a height of 5 m at maturity in situ; or a crown cover (or equivalent stocking level) of more than 10 percent of trees not able to reach a height of 5 m at maturity in situ (e.g. dwarf or stunted trees); or with shrub or bush cover of more than 10 percent.|
|Other land||Land not classified as forest or other wooded land as defined above. Includes agricultural land, meadows and pastures, built-on areas, barren land, etc.|
|Inland water||Area occupied by major rivers, lakes and reservoirs.|
1) The Total land area is defined as the Total area, but excluding Inland water.
Figure 1. Land classification, general level
Note: The definition of forest applied in FRA 2000 has a minimum crown cover requirement and may be quite different from a legal definition of forest (or forest land) (i.e. Legal definitions may designate an area to be forest under a Forest Act or Ordinance without regard to the actual presence of forest cover).
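As a quick illustration of how these thresholds interact, the following C sketch applies the general land-cover rules above to a single land unit. It is not part of the FAO document: the struct fields, function name and example values are assumptions, and it covers only the numeric thresholds (it ignores the special cases for plantations, temporarily unstocked areas and the closed/open distinction).

#include <stdio.h>

typedef struct {
    double area_ha;            /* size of the land unit in hectares */
    double tree_crown_cover;   /* percent crown cover of trees */
    double mature_height_m;    /* height trees can reach at maturity in situ */
    double shrub_cover;        /* percent cover of shrubs or bushes */
    int    is_inland_water;    /* major river, lake or reservoir */
} LandUnit;

const char *classify(const LandUnit *u) {
    if (u->is_inland_water)
        return "Inland water";
    /* Forest: >10% tree crown cover, >0.5 ha, trees able to reach 5 m in situ */
    if (u->tree_crown_cover > 10.0 && u->area_ha > 0.5 && u->mature_height_m >= 5.0)
        return "Forest";
    /* Other wooded land: 5-10% cover of trees able to reach 5 m, or >10% cover
       of trees not able to reach 5 m, or >10% shrub or bush cover */
    if ((u->tree_crown_cover >= 5.0 && u->tree_crown_cover <= 10.0 && u->mature_height_m >= 5.0) ||
        (u->tree_crown_cover > 10.0 && u->mature_height_m < 5.0) ||
        (u->shrub_cover > 10.0))
        return "Other wooded land";
    return "Other land";
}

int main(void) {
    LandUnit unit = { 120.0, 35.0, 12.0, 20.0, 0 };   /* hypothetical woodland tract */
    printf("%s\n", classify(&unit));                  /* prints "Forest" */
    return 0;
}

The classification at the general level reduces to a handful of threshold comparisons; the finer distinctions in the following tables (plantation versus natural forest, closed versus open, degree of disturbance) require additional criteria that are not captured by crown cover and height alone.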
The general forest definition above refers both to natural forests and forest plantations. In most tropical and subtropical countries a clear distinction is made between these two categories, and for the purpose of the FRA 2000 this distinction is used as a first level of subdivision (Figure 2).
|Plantation||Forest stands established by planting and/or seeding in the process of afforestation or reforestation. They are either:
· of introduced species (all planted stands), or
· intensively managed stands of indigenous species, which meet all the following criteria: one or two species at plantation, even age class, regular spacing.
See also afforestation and reforestation in section 4.1.
Note Area statistics on forest plantations provided by countries should reflect the actual forest plantations resource, excluding replanting. Replanting is the re-establishment of planted trees, either because afforestation or reforestation failed, or tree crop was felled and regenerated. It is not an addition to the total plantation area.
|Natural forest||Natural forests are forests composed of indigenous trees not planted by man; in other words, forests excluding plantations. Natural forests are further classified using the following criteria:
· forest formation (or type): closed/open
· degree of human disturbance or modification
· species composition.|
|Closed forest||Formations where trees in the various storeys and the undergrowth cover a high proportion (> 40 percent) of the ground and do not have a continuous dense grass layer (cf. The following definition). They are either managed or unmanaged forests, primary or in advanced state of reconstitution and may have been logged-over one or more times, having kept their characteristics of forest stands, possibly with modified structure and composition. Typical examples of tropical closed forest formations include tropical rainforest and mangrove forest.|
|Open forest||Formations with discontinuous tree layer but with a coverage of at least 10 percent and less than 40 percent. Generally there is a continuous grass layer allowing grazing and spreading of fires. (Examples are various forms of "cerrado", and "chaco" in Latin America, wooded savannas and woodlands in Africa).|
The division between closed and open forest is more ecological (referring to the climax vegetation of a particular location) than physiognomic, and thus is not characterized only by the percentage of crown cover. For instance, a rainforest may, after logging, appear as open forest on the single criterion of crown cover. However, in this case the forest should be classified as semi-natural closed forest rather than open forest, unless there are permanent changes in flora, fauna and soil condition. Such changes are generally due to repeated fire, grazing, etc., which maintain the forest in a sub-climax condition. Certain forest formations, e.g. Miombo woodland in Southern Africa, are on the threshold between closed and open formations: the wetter types (northern distribution) could be classified as closed forest (according to crown closure) and the drier types in the southern distribution as open forest.
Three categories of natural forest are defined according to the degree of human disturbance or modification:
|Natural Forest undisturbed by man||Forest which shows natural forest dynamics such as natural species composition, occurrence of dead wood, natural age structure and natural regeneration processes, the area of which is large enough to maintain its natural characteristics and where there has been no known human intervention or where the last significant human intervention was long enough ago to have allowed the natural species composition and processes to have become re-established.|
|Natural Forest disturbed by man||Includes:
· logged-over forests associated with various intensities of logging;
· various forms of secondary forest resulting from logging or abandoned cultivation.|
|Semi-natural forest||Managed forests modified by man through silviculture and assisted regeneration.|
Closed forests are further distinguished according to their composition into the following types:
|Broadleaved forest||Forest with a predominance (more than 75 percent of tree crown cover) of trees of broadleaved species.|
|Coniferous forest||Forest with a predominance (more than 75 percent of tree crown cover) of trees of coniferous species.|
|Bamboo/Palms formations||Forest in which more than 75 percent of the tree crown cover consists of tree species other than coniferous or broadleaved species (e.g. tree-form species of the bamboo, palm and fern families).|
|Mixed forest||Forest in which neither coniferous, nor broadleaved, nor palms, bamboos, account for more than 75 percent of the tree crown cover.|
Open forests are further distinguished into broadleaved, coniferous and mixed types, using the same definitions that are applied to closed forests.
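The crown cover and composition thresholds above amount to a simple decision rule. The sketch below is a minimal, hypothetical illustration of how those thresholds (more than 10 percent crown cover for forest, 5-10 percent for other wooded land, more than 40 percent for closed forest, and 75 percent of tree crown cover for the composition classes) could be applied to plot-level data; it is not part of FRA 2000 itself, and the function and parameter names are assumptions.

```python
# Hypothetical illustration of the FRA 2000 thresholds described above.
# Function and parameter names are assumptions, not FRA specifications.

def land_cover_class(crown_cover_pct, mature_height_m, area_ha):
    """General land cover class from crown cover, attainable height and area."""
    if area_ha < 0.5:
        return "other land"            # below the 0.5 ha minimum area
    if crown_cover_pct > 10 and mature_height_m >= 5:
        return "forest"                # > 10 % crown cover of trees able to reach 5 m
    if 5 <= crown_cover_pct <= 10 and mature_height_m >= 5:
        return "other wooded land"     # 5-10 % crown cover of trees able to reach 5 m
    if crown_cover_pct > 10:
        return "other wooded land"     # > 10 % cover of trees/shrubs unable to reach 5 m
    return "other land"

def forest_formation(crown_cover_pct):
    """Closed (> 40 %) versus open (10-40 %) forest formation."""
    return "closed forest" if crown_cover_pct > 40 else "open forest"

def composition(broadleaved_pct, coniferous_pct, other_pct):
    """Composition class from shares of total tree crown cover."""
    if broadleaved_pct > 75:
        return "broadleaved forest"
    if coniferous_pct > 75:
        return "coniferous forest"
    if other_pct > 75:
        return "bamboo/palm formation"
    return "mixed forest"

if __name__ == "__main__":
    print(land_cover_class(crown_cover_pct=55, mature_height_m=20, area_ha=3.0))  # forest
    print(forest_formation(55))        # closed forest
    print(composition(80, 15, 5))      # broadleaved forest
```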
The FRA 2000 forest classification has the primary objective of allowing standardized and comparable reporting on the world's forests and is not meant to replace existing national classifications. National inventories and the terms and definitions used by them have specific purposes and are geared to suit the country's ecological setting and/or the functions and use of the forests. In order to make the 2000 assessment process transparent, each country's classification and its relationship to the FRA 2000 classification will be reported.
Agreement on common classifications and definitions involves compromises. For example, the threshold of 40% crown cover to distinguish closed from open forest is frequently debated. During Kotka III, one non-governmental organization recommended a threshold of 70% crown cover for defining closed forests. However, it should be recalled that there is no single classification system that will serve and satisfy all needs. What is essential is that the classification criteria are clear and can be applied objectively.
The FRA 2000 will attempt to report not only on the quantity of forest but also on the condition of forests. The latter aspect is reflected in the distinction between undisturbed natural forest, disturbed natural forest and semi-natural forest.
Figure 2. Forest classification
Other wooded land includes shrubs for all countries and an additional category, forest fallow, which occurs only in developing countries. The two types of other wooded land are defined as follows.
|Shrubs||Refer to vegetation types where the dominant woody elements are shrubs i.e. woody perennial plants, generally of more than 0.5 m and less than 5 m in height on maturity and without a definite crown. The height limits for trees and shrubs should be interpreted with flexibility, particularly the minimum tree and maximum shrub height, which may vary between 5 and 7 meters approximately.|
|Forest fallow system||Refers to all complexes of woody vegetation deriving from the clearing of natural forest for shifting agriculture. It consists of a mosaic of various reconstitution phases and includes patches of uncleared forests and agriculture fields, which cannot be realistically segregated and accounted for area-wise, especially from satellite imagery. Forest fallow system is an intermediate class between forest and non-forest land uses. Part of the area may have the appearance of a secondary forest. Even the part currently under cultivation sometimes has appearance of forest, due to presence of tree cover. Accurate separation between forest and forest fallow may not always be possible.|
Excluded: Areas having the tree, shrub or bush cover specified above but of less than 0.5 ha and width of 20 m, which are classed under "other land".
Other wooded land is divided into undisturbed and disturbed, according to the definitions that are applied to natural forest. Other wooded land undisturbed by man typically includes natural shrub formations, i.e. thickets, bushes, shrubs. Disturbed other wooded land includes forest fallow systems and shrub formations deriving from the degradation of previous forest formations.
Following the Kotka III recommendations, the FRA 2000 will include information on areas with protection status within the general land cover classes (i) Forest and (ii) Other wooded land. Protected areas refer to areas designated for conservation by law or other regulations.
Within FRA 2000, the International Union for Conservation of Nature (IUCN) categories for nature protection (IUCN 1984) (below) will be used. The FRA 2000 will group these categories into two main classes:
1. Strictly protected areas, which include IUCN categories I and II; and
2. Protected areas with integrated management, which include IUCN categories III, IV, V and VI.
IUCN categories for nature protection:
|I - Strict nature reserve / wilderness area.||Protected area managed mainly for science or wilderness protection. These areas possess some outstanding ecosystems, features and/or species of flora and fauna of national scientific importance, or they are representative of particular natural areas. They often contain fragile ecosystems or life forms, areas of important biological or geological diversity, or areas of particular importance to the conservation of genetic resources. Public access is generally not permitted. Natural processes are allowed to take place in the absence of any direct human interference, tourism and recreation. Ecological processes may include natural acts that alter the ecological system or physiographic features, such as naturally occurring fires, natural succession, insect or disease outbreaks, storms, earthquakes and the like, but necessarily excluding man-induced disturbances.|
|II - National Park.||Protected area managed mainly for ecosystem protection and recreation. National parks are relatively large areas, which contain representative samples of major natural regions, features or scenery, where plant and animal species, geomorphological sites, and habitats are of special scientific, educational and recreational interest. The area is managed and developed so as to sustain recreation and educational activities on a controlled basis. The area and visitors' use are managed at a level which maintains the area in a natural or semi-natural state.|
|III - Natural monument.||Protected area managed mainly for conservation of specific natural features. This category normally contains one or more natural features of outstanding national interest being protected because of their uniqueness or rarity. Size is not of great importance. The areas should be managed to remain relatively free of human disturbance, although they may have recreational and touristic value.|
|IV - Habitat/species management area.||Protected area managed mainly for conservation through management intervention. The areas covered may consist of nesting areas of colonial bird species, marshes or lakes, estuaries, forest or grassland habitats, or fish spawning or seagrass feeding beds for marine animals. The production of harvestable renewable resources may play a secondary role in the management of the area. The area may require habitat manipulation (mowing, sheep or cattle grazing, etc.).|
|V - Protected landscape/ seascape.||Protected areas managed mainly for landscape/seascape conservation and recreation. The diversity of areas falling into this category is very large. They include those whose landscapes possess special aesthetic qualities which are a result of the interaction of man and land or water, traditional practices associated with agriculture, grazing and fishing being dominant; and those that are primarily natural areas, such as coastline, lake or river shores, hilly or mountainous terrains, managed intensively by man for recreation and tourism.|
|VI - Managed resource protection area.||Protected area managed for the sustainable use of natural ecosystems. Normally covers extensive and relatively isolated and uninhabited areas having difficult access, or regions that are relatively sparsely populated but are under considerable pressure for colonization or greater utilization.|
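The grouping of the six IUCN categories into the two FRA 2000 reporting classes is a fixed mapping. A minimal sketch of that grouping is shown below; the dictionary layout is an illustrative assumption, not an FRA data format.

```python
# Grouping of IUCN protection categories into the two FRA 2000 reporting classes.
# The data structure is an illustrative assumption, not an FRA specification.
FRA_PROTECTION_CLASS = {
    "I": "strictly protected",
    "II": "strictly protected",
    "III": "protected with integrated management",
    "IV": "protected with integrated management",
    "V": "protected with integrated management",
    "VI": "protected with integrated management",
}

def fra_protection_class(iucn_category: str) -> str:
    """Return the FRA 2000 reporting class for an IUCN category (I-VI)."""
    return FRA_PROTECTION_CLASS[iucn_category.upper()]

print(fra_protection_class("ii"))   # strictly protected
print(fra_protection_class("V"))    # protected with integrated management
```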
Land ownership classes for Forest and Other wooded land are defined below. Land ownership shall be reported for Forest area as a whole or by Natural Forest and Plantations respectively.
|Public ownership||Belonging to the State or other public bodies.|
|Owned by the State||Owned by national, state and regional governments or by government-owned corporations.|
|Owned by other public institutions||Belonging to cities, municipalities, villages and communes. Includes: any publicly owned forest and other wooded land not elsewhere specified.|
|Owned by indigenous and tribal peoples||Owned by indigenous and tribal peoples in independent countries, defined as those who:
1. are regarded as indigenous on account of their descent from the populations which inhabited the country, or a geographical region to which the country belongs, at a time of conquest or colonization or the establishment of present state boundaries and who, irrespective of their legal status, retain some or all of their own social, economic, cultural and political institutions;
2. are tribal peoples whose social, cultural and economic conditions distinguish them from other sections of the national community, and whose status is regulated wholly or partly by their own customs or traditions or by special laws and regulations.
For both categories (1) and (2) self-identification as indigenous or tribal shall be regarded as the fundamental criterion for determining the groups. (Source: ILO Convention No. 169 on "indigenous and tribal peoples".)|
|Private ownership||Forest and other wooded land owned by individuals, families, co-operatives or corporations engaged in agriculture or other occupations as well as forestry; private forest (wood-processing) industries; private corporations and other institutions (religious and educational institutions, pension or investment funds, etc.).|
|Owned by individuals||Forest and other wooded land owned by individuals and families, including those who have formed themselves into companies, including companies that combine forestry and agriculture (farm forests). Includes cases where owners do not live on or near their forest holdings (absentee owners).|
|Owned by forest industries||Forest and other wooded land owned by private forestry or wood-processing industries.|
|Owned by other private institutions||Forest and other wooded land owned by private corporations, co-operatives or institutions (religious, educational, pension or investment funds, nature conservation societies, etc.).|
FRA 2000 will analyze and report forest state and change by ecological zone. The classification is based on climatic factors and altitude, which to a large extent determine distribution of forest formations. Ecological zones shall be reported for forest area as a whole or by natural forest and plantations respectively.
The generated information will help to assess and analyze forest changes, i.e. the impacts of deforestation or reforestation on ecosystem biological diversity, and the impacts of biomass changes on the carbon cycle.
The forest area that is inaccessible for wood supply due to (a) legal, (b) economic or (c) environmental restrictions should be identified. Work Group 3 in Kotka III preferred the wording available for wood supply rather than "available for wood production" as "production" could imply biological production (yield) rather than harvest or fellings, which was intended.
The definitions of the terms are as follows:
|Forest available for wood supply||Forest where any legal, economic, or specific environmental restrictions do not have a significant impact on the supply of wood. Includes: areas where, although there are no such restrictions, harvesting is not taking place, for example areas included in long-term utilization plans or intentions.|
|Forest not available for wood supply||Forest where legal, economic or specific environmental restrictions prevent any significant supply of wood.
· Forest with legal restrictions or restrictions resulting from other political decisions, which totally exclude or severely limit wood supply, inter alia for reasons of environmental or biological diversity conservation, e.g. protection forest, national parks, nature reserves and other protected areas, such as those of special environmental, scientific, historical, cultural or spiritual interest;
· Forest where physical productivity or wood quality is too low or harvesting and transport costs are too high to warrant wood harvesting, apart from occasional cuttings for autoconsumption.|
Information on the volume and biomass of trees is important to indicate the role of forests in carbon storage. The growing stock volume of forest available for wood supply is also an important indicator of the forest's (economic) potential.
Volume and Biomass terms and definitions are:
|Growing stock||Stem volume of all living trees more than 10 cm diameter at breast height (or above buttresses if these are higher), over bark measured from stump to top of bole. Excludes: all branches|
|Commercial growing stock||Part of the growing stock that consists of species considered as actually or potentially commercial under current local and international market conditions, at the reported reference diameter (d.b.h.). Includes: species which are currently not utilized but are potentially commercial. Note: When most species are merchantable, i.e. in the temperate and boreal zones, the commercial growing stock in a given area or for a country can be close to the total growing stock. In the tropics, however, where only a fraction of all species are merchantable, it may be much smaller.|
|Woody biomass||The mass of the woody part (stem, bark, branches, twigs) of trees, alive and dead, shrubs and bushes. Includes: Above ground woody biomass, stumps and roots. Excludes: foliage, flowers and seeds.|
|Above-ground woody biomass||The above ground mass of the woody part (stem, bark, branches, twigs) of trees, alive or dead, shrubs and bushes, excluding stumps and roots, foliage, flowers and seeds.|
Further discussion on volume and biomass
Volume and biomass were discussed at length during Kotka III. It was noted that attempts to obtain international agreement on standard minimum diameters for the measurement of growing stock have been unsuccessful so far. In the industrial (temperate and boreal) countries, 7 cm is the most commonly used minimum diameter, both for the top and at breast height; some countries use 0 cm, while a 10 cm diameter is used in many developing countries. Accordingly, the 10 cm diameter will be applied in the FRA 2000 assessment for developing countries for practical reasons.
Primary data on woody biomass are scarce, particularly in the tropics, and the available figures are generally derived from volume (growing stock) data, and refer to above ground biomass. Therefore the reported FRA 2000 tropical country figures will refer to the above ground woody biomass.
Information on fellings and removals is important to provide information on the volumes of wood being cut and harvested annually, as an indication of the wood utilization of the forest.
|Fellings||Average volume of all trees, living or dead, measured over bark to a minimum diameter of 10 cm (d.b.h.), that are felled during a given period (e.g. annually), whether or not they are removed from the forest or other wooded land. Includes: silvicultural and pre-commercial thinnings and cleanings of trees more than 10 cm (d.b.h.) left in the forest, and natural losses of trees above 10 cm (d.b.h.).|
|Removals||(Annual) removals that generate revenue for the owner of the forest or other wooded land or trees outside the forest. They refer to "Volume Actually Commercialized" (VAC), i.e. volume under-bark actually cut and removed from the forest. This volume may include wood for industrial purposes (e.g. sawlogs, veneer logs, etc.) and for local domestic use (e.g. rural uses for construction). Includes: removals during the given reference period of trees felled during an earlier period and removal of trees killed or damaged by natural causes (natural losses), e.g. fire, wind, insects and diseases. Excludes: removals for fuelwood. Note: Removals as defined above refer to commercial removals, i.e. harvested timber, both for industrial and local domestic uses. In many developing countries, removals for fuelwood make up a considerable part of the total harvested wood. However, data on fuelwood removals are generally scarce and/or unreliable, and need to be reported separately when national or local data are available.|
The purpose of this section is to provide qualitative and, where available, quantitative information on the importance of the role of forest and other wooded land in providing non-wood forest products and certain social, cultural and environmental services.
The following categories can be distinguished:
|Non-wood forest products||Products for human consumption: food, beverages, medicinal plants, and extracts (e.g. fruits, berries, nuts, honey, game meats, mushrooms, etc.)|
|Fodder and forage (grazing, range)|
|Other non-wood products (e.g. cork, resin, tannins, industrial extracts, wool and skins, hunting trophies, Christmas trees, decorative foliage, mosses and ferns, essential and cosmetic oils, etc.)|
|Forest services||Protection (against soil erosion by air or water, avalanches, mud and rock slides, flooding, air pollution, noise, etc.); social and economic values (e.g. hunting and fishing, other leisure activities, including recreation, sport and tourism); aesthetic, cultural, historical, spiritual and scientific values (including landscape and amenity).|
Three categories are distinguished concerning forest cover change, of which deforestation and forest plantations are usually reflected in the country forest statistics. Forest degradation, on the other hand, refers to a partial loss of forest cover which is not sufficient to change the classification from forest to other land cover classes, thus is not reflected in increases or decreases in forest area. However, degradation is an important process to assess, especially in relation to biomass and biological diversity changes.
|Deforestation||Refers to change of land cover with depletion of tree crown cover to less than 10 percent. Changes within the forest class (e.g. from closed to open forest) which negatively affect the stand or site and, in particular, lower the production capacity, are termed forest degradation.|
|Forest degradation||Takes different forms, particularly in open forest formations, deriving mainly from human activities such as over-grazing, over-exploitation (for firewood or timber), repeated fires, or due to attacks by insects, diseases, plant parasites or other natural sources such as cyclones. In most cases, degradation does not show as a decrease in the area of woody vegetation but rather as a gradual reduction of biomass, changes in species composition and soil degradation. Unsustainable logging practices can contribute to degradation if the extraction of mature trees is not accompanied with their regeneration or if the use of heavy machinery causes soil compaction or loss of productive forest area.|
|Afforestation||Artificial establishment of forest on lands which previously did not carry forest within living memory.|
|Reforestation||Artificial establishment of forest on lands which carried forest before.|
This section is intended to provide information about the extent of fire damage and the average fire size in forest areas and to provide information on historical trends regarding fires.
|Forest fire||Fire that breaks out and spreads on forest and other wooded land, or which breaks out on other land and spreads to forest and other wooded land. Excludes: prescribed or controlled burning, usually with the purpose of reducing or eliminating the quantity of accumulated fuel on the ground.|
|Broadleaved tree||All trees classified botanically as Angiospermae. They are sometimes referred to as "non-coniferous" or "hardwoods".|
|Coniferous tree||All trees classified botanically as Gymnospermae. They are sometimes referred to as "softwoods".|
|Endangered species||Species classified by an objective process (e.g. national "Red Book") as being in IUCN categories "critically endangered" and "endangered". A species is considered to be a critically endangered when it is facing an extremely high risk of extinction in the wild in the immediate future. It is considered "endangered" when it is not critically endangered but is still facing a very high risk of extinction in the wild in the near future.|
|Endemic species||Species is endemic when found only in a certain strictly limited geographical region, i.e. restricted to a specified region or locality|
|Indigenous tree species||Tree species which have evolved in the same area, region or biotope where the forest stand is growing and are adapted to the specific ecological conditions predominant at the time of the establishment of the stand. May also be termed native species or autochthonous species.|
|Introduced tree species||Tree species occurring outside their natural vegetation zone, area or region. May also be termed non-indigenous species. Includes: Hybrids|
|Managed forest / Other wooded land||Forest and other wooded land that is managed in accordance with a formal or an informal plan applied regularly over a sufficiently long period (five years or more).|
|Protection||The function of forest/other wooded land in providing protection of soil against erosion by water or wind, prevention of desertification, the reduction of risk of avalanches and rock or mud slides; and in conserving, protecting and regulating the quantity and quality of water supply, including the prevention of flooding. Includes: Protection against air and noise pollution.|
|Tree||A woody perennial with a single main stem, or, in the case of coppice, with several stems, having a more or less definite crown. Includes: bamboos, palms and other woody plants meeting the above criterion.|
FAO. 1995. Forest resources assessment 1990, Global synthesis. FAO Forestry Paper 124. ISSN 0258-6150.
IUCN. 1984. Categories, Objectives and Criteria for Protected Areas. In: National Parks, Conservation and Development: The role of Protected Areas in sustaining society, eds, J.A. McNeely and K.R. Miller. IUCN/Smithsonian Institution Press, Washington.
Nyyssönen, A. & Ahti, A. (Editors) 1996. Proceedings of FAO Expert Consultation on Global Forest Resources Assessment 2000 in Cooperation with ECE and UNEP (Kotka III). The Finnish Forest Research Institute, Research Paper 620. ISBN 951-40-1541-X.
UN-ECE/FAO. 1997. Temperate and boreal forest resources assessment 2000, Terms and definitions. United Nations, New York and Geneva.
1 The UN-ECE in Geneva, Switzerland has distributed a companion document for industrialized countries (UN-ECE/FAO 1997).
|
Gum disease, also known as periodontal disease, is a chronic infection that can result in a number of health problems, from mild inflammation to severe gum damage to tooth loss, if left untreated. In addition, gum disease can affect your overall health, and has been linked to an increased risk of heart disease and stroke.
Gum disease develops in the space between your gum line and your teeth. It causes tissue inflammation and damage that can eventually cause your gums to recede. The severity of gum disease is determined by the depth of the excess space, or so-called "pockets," that form as your gum tissue recedes.
The National Institute of Dental and Craniofacial Research estimates that 80 percent of adults in the United States have some degree of gum disease.
Types of Gum Disease
Gum disease is classified as either gingivitis or periodontitis. Gingivitis is the first stage of gum disease and is reversible with treatment. But it can also develop into the more serious oral health problem, periodontitis.
- Gingivitis results in swollen, irritated gums that bleed easily. Good oral health habits, including daily flossing and brushing, as well as getting regular professional teeth cleanings can prevent and help to reverse this disease, which typically doesn't result in the loss of gum tissue or teeth.
- Periodontitis occurs as a result of untreated gingivitis. In periodontitis, the gums significantly recede from the teeth, leading to the formation of infected pockets. As your body's immune system struggles to fight off these infections, tissues and bones may start to break down. Without proper treatment, the gums, connective tissue, and jaw bones that support your teeth may all deteriorate and begin to compromise your overall oral health. Eventually, the teeth will loosen and either fall out or have to be removed.
Signs of Gum Disease
Your oral health is critical to your overall health. If you notice any of the following symptoms, seek care from a dentist who is knowledgeable about treating gum disease:
- A sour taste in your mouth or persistently bad breath
- A change in how your partial dentures fit
- A change in how your teeth fit together when you bite down
- Bleeding gums
- Gum tissue that pulls away from your teeth
- Loose teeth or increasing spaces between your teeth
- Pain when chewing
- Unusually sensitive teeth
- Swollen and tender gums
Causes of Gum Disease
In addition to poor oral health habits, other factors associated with gum disease include:
- Smoking and chewing tobacco — tobacco products irritate the gums and make gum disease more difficult to treat.
- Systemic diseases that affect the immune system, such as cancer, diabetes, and HIV/AIDS
- Taking certain medications, including some blood pressure drugs, antidepressants, steroids, and oral contraceptives, that can cause dry mouth. The lack of saliva in your mouth makes you more susceptible to gum disease since one of its main functions is to help wash away food particles and bacteria.
- Crooked teeth
- Dental bridges that don't fit properly
- Old and defective fillings
- Hormonal fluctuations, particularly those that occur during pregnancy
- Genetic differences may make some people more susceptible to gum disease
- Stress, which can reduce your body's defenses when it comes to fighting off any infection, including gum infections
Consequences of Untreated Gum Disease
Untreated gum disease has been associated with an increased risk for heart disease and stroke, and for women, an increased chance of delivering a baby with a low birth weight. Gum disease has also been linked to trouble controlling blood sugar among diabetics.
Gum Disease Treatment Options
Gum disease can be treated in several ways, depending on whether you have gingivitis or periodontitis. The primary goal is to manage the chronic infection that leads to gum damage. Treatment options include:
- Regular professional deep cleanings
- Medications that are either taken orally or are inserted directly into infected tissue pockets
- Surgery, in more severe cases of gum disease. One type, called flap surgery, involves pulling up the gum tissue in order to remove tartar and then stitching the tissue back in place for a tight fit around the teeth. Tissue grafts can also be used to replace severely damaged bone or gum. In bone grafting, for instance, a small piece of mesh-like material is placed between the bone and gum tissue, enabling the supportive tissue and bone to regenerate.
While it's good to know there are treatments, it's better to avoid gum disease in the first place, by brushing and flossing at least twice a day, eating a balanced diet, and visiting your dentist regularly for exams and cleanings.
|
A typical Black Scabbard fishing vessel
The Black Scabbard is estimated to reside between 600 and 1,600 metres below sea level, a remarkable range of depths for a single fish to swim. The theory is that the fish rises to around 600 metres below sea level when it is dark and then descends a couple of hundred metres further down when it is light. Some connection to solar rays has been surmised, but the mystery of why it is easier to catch the Black Scabbard at night remains unresolved, especially since common wisdom holds that sunlight does not penetrate the ocean beyond about 400 metres below sea level.
For most of the nineteenth and the twentieth centuries the Black Scabbard was considered unique to Madeira. Recent discoveries of the same fish in places as far afield as Southern Ireland, and even Japan, have dispelled that special qualification for Madeira. The fish has been caught successfully for some years off the shores of North Africa, Portugal, and even the Canary Islands. However, it seems that the only locale where the fish is caught at sustainable economic or industrial levels is the village of Câmara de Lobos itself.
That unique status of the village has led to the creation of some equally unusual craft for catching the Espada fish. The colourful fishing boats used to catch the fish are all made of wood, with four or more oars that one or two people use to row the boat out to sea or back to shore. Each boat also carries a single sail and is otherwise left to the whims of the ocean currents to change its position once the general target area for the fishing grounds is reached.
|
MATH: MAFS.4.MD.3.6 MD.3.7 4.G.1.1
- Measure angles in whole number degrees using a protractor. Sketch angles of specific measure.
- Recognize angle measures as additive- that the whole angle is equal to the sum of its parts.
- Draw points, lines, line segments, rays, angles (right, acute, obtuse). Identify these in two dimensional figures.
SCIENCE: SC.4.L.17.2 SC.4.L.17.3
- Explain that animals, including humans, cannot make their own food and that when animals eat plants or other animals, the energy stored in the food source is passed to them.
- Trace the flow of energy from the Sun as it is transferred along the food chain through the producers to the consumers.
SOCIAL STUDIES: SS.4.A.8.3 4.A.8.4
- Describe the effect of the United States space program on Florida’s economy and growth.
- Explain how tourism affects Florida’s economy and growth.
|Learning Targets and Learning Criteria|
The student will:
- Measure an angle to the nearest whole number degrees using a protractor.
- Use a protractor to sketch an angle given a specific measurement.
- Recognize angle measure as additive, explaining that the angle measurement of a larger angle is the sum of the angle measures of its decomposed parts.
- Solve addition and subtraction problems to find unknown angles on a diagram in real world and mathematical problems.
- Draw points, lines, line segments, rays, right angles (exactly 90 degrees), acute angles (less than 90 degrees), and obtuse angles (91-179 degrees).
- Identify points, lines, line segments, rays, angles (acute, right, obtuse) in two-dimensional shapes.
- Review that all living things need energy to survive.
- Explain that plants make their own food and are called producers.
- Explain that animals, including humans, cannot make their own food and are called consumers.
- Explain that when animals eat plants or other animals, the energy stored in the food source is passed to them.
- Describe that all life on Earth is dependent upon the Sun.
- Trace the flow of energy from the sun as it is transferred along the food chain through the producers to the consumers (e.g. SUN –> GRASS –> RABBIT –> FOX)
- Explain that some energy is lost from one organism to the next in the form of heat.
- Classify consumers as herbivores, carnivores, or omnivores.
- Describe the relationship between plants as producers and animals as consumers.
- Discuss the development of the national space program.
- Identify how the national space program impacts Florida’s economy and population.
- Identify the major components of Florida’s tourist industry, including cultural sites, eco-tourism, beaches, natural wonders, and amusement parks.
- Explain how tourism impacts Florida’s economy.
Directions to Access Canvas Courses
- Students will log onto their vPortal account via the Volusia County student page https://www.vcsedu.org/students
- Select “Canvas” icon
- They should see 4 courses: English Language Arts, Math, Science, Social Studies
- Modules represent each week and include individual pages with directions for each day of the week in the form of a “To Do” list for students on the right hand side of the page.
- All assignments, videos, and activities will be linked directly onto the day.
- Students can e-mail me directly through the course by selecting the inbox icon on the left hand side and composing a new message. My e-mail is: email@example.com
- I can also be contacted through REMIND and my Ivy Hawn email: firstname.lastname@example.org
- Assignments that are on paper (for example, from the Math text) can be screen shotted and emailed to me also.
- Please scroll down to ASSIGNMENTS DUE for a list of websites/media that students may be using. Please let me know ASAP if they are having difficulty logging in to any sites for assignments.
- I will schedule ZOOM meetings if needed so that I can stay in touch with students and answer any questions they may have.
Please see CANVAS individual courses for assignments. Below is a list of websites that students have used and may be used in assignments.
List of Websites
Student user name log-ins are either alpha code email or just alpha code. Passwords are 8 digit birthdates.
- Studies Weekly (SS and Science Newspaper) app.studiesweekly.com
- FloridaStudents.org Tutorials for students in Math, Science, SS, ELA that follow state standards. Besides lessons assigned, this is a great site for students to review standards that they have learned.
- Brainpop Fun videos on all subjects- educational games Brainpop.com We have free access so student log in is: User Name: mandalorian Password: theyoda22
- Study Jams Fun videos on all subjects and silly karaoke. Studyjams.scholastic.com
- Flipgrid Students can make quick videos for assignments. User code generated for each assignment. Flipgrid.com
- Seesaw A sharing platform where students can use all types of media to do an assignment. They can also see each other’s work. app.seesaw.me
- Nearpod A site where students can find teacher created assignment modules. An access code is generated for each module. Nearpod.com
- Newsela Timely news articles and topics for research to encourage kids to read. There are comprehension questions and vocabulary as well. Newsela.com
- Discovery Education Website that accompanies our Science text. app.discoveryeducation.com
- ZOOM video conferencing platform zoom.com Meeting code and password are generated for each meeting.
- Khan Academy Tutorials for Math khanacademy.org A new student roster was recently created so when I assign anything from this site I will send usernames that were auto-generated (and I can’t change). Password for all is: Manning1
- IXL Practice for Math, Science, ELA Students receive practice sections each week to do.
- I-READY Math and ELA site that works off of diagnostics and gives students lessons based on their skill levels. Students practice each week and we have seen wonderful gains in learning!
|
Conductivity and resistivity are widely used parameters for water purity analysis, monitoring of reverse osmosis, cleaning procedures, control of chemical processes and in industrial wastewater. Reliable results for these varied applications depend on choosing the right conductivity/resistivity sensor.
Electrical conductivity has been measured for more than 100 years and it is still an important parameter today. The high reliability, sensitivity, fast response, and relatively low cost of the equipment make conductivity a valuable, easy-to-use tool for quality control. In some applications, the same measurement is expressed as resistivity, which is the reciprocal of conductivity and is reported in reciprocal units.
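Because resistivity is simply the reciprocal of conductivity, converting between the two is a one-line calculation. The sketch below is a generic illustration, not material from the guide; the unit choices are assumptions, and the 0.055 µS/cm example is the commonly cited conductivity of ultrapure water at 25 °C.

```python
# Conductivity <-> resistivity conversion: resistivity is the reciprocal of conductivity.
# Unit choices are illustrative assumptions, not taken from the guide.

def conductivity_to_resistivity(conductivity_uS_per_cm: float) -> float:
    """Convert conductivity in microsiemens/cm to resistivity in megaohm*cm."""
    siemens_per_cm = conductivity_uS_per_cm * 1e-6
    ohm_cm = 1.0 / siemens_per_cm
    return ohm_cm / 1e6            # megaohm*cm

# Ultrapure water at 25 degC has a conductivity of about 0.055 uS/cm,
# which corresponds to the familiar 18.2 megaohm*cm resistivity figure.
print(round(conductivity_to_resistivity(0.055), 1))   # ~18.2
```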
This guide provides the basics for a good understanding of conductivity and resistivity determination. Important factors that influence measurements and possible sources of error are discussed. Both theoretical and practical aspects are covered to enable reliable calibration and measurement in various applications.
Our complimentary 48-page guide is a comprehensive reference and training tool based on decades of industry leadership in this measurement. Topics covered include:
- What is conductivity?
- What are the tools for conductivity measurement?
- How do you select the correct conductivity sensor?
- What can interfere with conductivity measurements?
- How do you maintain and store conductivity sensors?
- What method of conductivity measurement should be used for specialized industries?
- And more…
|
Wind turbines are big on power and marvels to look at. They're also rather inefficient. So some scientists thought about it, talked, and figured out a way to increase their power output tenfold. All by staring at fish!
John Dabiri, professor of aeronautics and bioengineering at Caltech, and his colleagues set out to test the hypothesis this time last year at his Field Laboratory for Optimised Wind Energy. They'd observed that current Horizontal Axis Wind Turbines, or HAWTs, require a lot of space, since interference from adjacent turbines' wakes, or wind disturbance, causes them to perform less optimally. More space means more money.
But Dabiri noticed something about schools of fish that he could bend to his work:
"I became inspired by observations of schooling fish, and the suggestion that there is constructive hydrodynamic interference between the wakes of neighbouring fish," says Dabiri... "It turns out that many of the same physical principles can be applied to the interaction of vertical-axis wind turbines."
In other words, the fish used the cruising paths of their neighbours to their advantage. The same could be applied to wind turbines. Vertical Axis Wind Turbines to be exact, which look more like merry-go-rounds than they do fans. Pack these guys together and the clockwise rotation of one turbine will help power the counter-clockwise rotation of another. And so on and so forth until you really see the results:
The six VAWTs generated from 21 to 47 watts of power per square meter of land area; a comparably sized HAWT farm generates just 2 to 3 watts per square meter.
That's incredible. The next logical step would be to push for these idea to be implemented as cheaply as possible in areas that could use them. And, at the very least, I now have a newfound appreciation for tuna. [Caltech via Green Car Congress]
|
June 1, 2010
Editor’s Note: Scientific American’s George Musser will be chronicling his experiences installing solar panels in Solar at Home (formerly 60-Second Solar). Read his introduction here and see all posts here.
In the first installment of this post, Arnold Mckinley of Xslent Energy Technologies described how "reactive power" — that is, power stored momentarily by electrical appliances and then released — destabilizes the electrical grid. Here he explains how home solar arrays can help.
Electricity has traditionally been distributed using a wheel and spoke grid: power travels from a large central generator to loads distributed around it. In some cases, energy travels very long distances, perhaps 500 to 1,000 miles, before being used. That model is changing. Since solar and wind inject energy at numerous local points, the grid is coming to look more like a network than like a wheel — making it even harder than it already is to keep power flowing smoothly. Two recent developments promise to help. The first is a new generation of microinverters, and the second is the growth of the interconnected smart grid.
A solar panel generates DC power, which gets converted to AC power by a device known as an inverter. Most inverters require a certain minimum threshold voltage to work. Therefore the panels must be wired together in electrical series to raise the voltage high enough. Experience has shown that this setup is less than optimally efficient, as an earlier Solar at Home post talked about. A cloud shading a single panel reduces the efficiency of the entire string. Moreover, each panel has slightly different electrical characteristics, creating a mismatch that reduces the power generated. Finally, if the voltage from the string is too low, the inverter never turns on; so on rainy or foggy days, the system generates no power at all. The solution to all three problems is to fit each panel with its own low-voltage inverter, or microinverter. It turns on as soon as light falls on the panel and automatically compensates for the panels’ electrical differences.
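To make the series-string limitation concrete, the sketch below compares a conventional string inverter, which needs the summed panel voltage to clear a turn-on threshold, with per-panel microinverters. All numbers (panel voltage, power, threshold, shading factors) and the simplified shading model are assumptions for illustration, not specifications of any real product.

```python
# Illustrative comparison of a series string feeding one central inverter versus
# per-panel microinverters. All voltages, powers and thresholds are assumptions.

PANEL_VMP = 30.0          # assumed panel voltage at maximum power (V)
PANEL_PMAX = 200.0        # assumed panel power in full sun (W)
STRING_TURN_ON_V = 150.0  # assumed minimum DC voltage for the central inverter

def string_inverter_output(shading):
    """shading: per-panel fraction of full sun, e.g. 1.0 = unshaded."""
    string_voltage = sum(PANEL_VMP * s for s in shading)
    if string_voltage < STRING_TURN_ON_V:
        return 0.0                              # inverter never turns on
    # In this simplified model the worst panel limits the whole series string.
    return PANEL_PMAX * min(shading) * len(shading)

def microinverter_output(shading):
    # Each panel is converted independently, so shading on one panel
    # does not drag down the others.
    return sum(PANEL_PMAX * s for s in shading)

sunny_with_one_cloud = [1.0, 1.0, 1.0, 1.0, 1.0, 0.2]   # one panel 80 % shaded
print(string_inverter_output(sunny_with_one_cloud))      # 240.0 W (string dragged down)
print(microinverter_output(sunny_with_one_cloud))        # 1040.0 W

foggy_morning = [0.1] * 6                                # very low light everywhere
print(string_inverter_output(foggy_morning))             # 0.0 W (below turn-on voltage)
print(microinverter_output(foggy_morning))               # 120.0 W
```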
But if microinverters were also able to produce reactive power, they could ship the excess beyond local load consumption back into the grid, as they already do with active power, and help out the utilities.
The figure at left gives an example of a basic setup where household appliances draw 1000 watts of active power and 600 volt-amps of reactive power. If a solar array can generate 1200 W of DC power, then it is capable of producing 1200 W of active AC power and 1200 VA of reactive AC power. That is enough not only to power the house but also to feed some active and reactive power into the grid. All it requires is the right microinverter.
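A quick way to sanity-check figures like these is the power-triangle relation S = sqrt(P^2 + Q^2), where P is active power (watts), Q is reactive power and S is apparent power (volt-amps). The sketch below simply applies that relation to the numbers quoted above; it is a generic calculation for illustration, not Xslent's design or firmware.

```python
# Power-triangle arithmetic for the example above: a household drawing
# 1000 W of active power and 600 VA of reactive power, fed by a 1200 W array.
import math

def apparent_power(active_w, reactive_va):
    """Apparent power S = sqrt(P^2 + Q^2)."""
    return math.hypot(active_w, reactive_va)

def power_factor(active_w, reactive_va):
    """Power factor = P / S."""
    return active_w / apparent_power(active_w, reactive_va)

house_active, house_reactive = 1000.0, 600.0
array_active = 1200.0

print(round(apparent_power(house_active, house_reactive)))   # ~1166 VA drawn by the house
print(round(power_factor(house_active, house_reactive), 2))  # ~0.86 power factor
print(array_active - house_active)                           # 200.0 W of active power left for the grid
```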
When the ordinary inverters and microinverters were first developed, the designers paid no attention to reactive power generation. Because consumers pay only for active power, the goal was to produce as much of that as possible. Today it is clear that reactive-producing solar can help stabilize the grid, and microinverters are being designed to produce both. In fact, physics is helping us out here. Since no energy is required to produce reactive power, an inverter can produce it without sacrificing active power or requiring more solar panels.
When I first learned that reactive power can be produced without affecting the active component, I was surprised. To see that this is reality and not fantasy, the figure at the right shows two days of power production at a typical solar facility. On the first day, the microinverter was set to produce both active power (green line) and reactive power (red line); on the second, it was set to produce only active power. The switch did not affect the active power production at all. My company’s website has more details on this issue.
In the past, the big problem preventing microinverters from producing reactive power was the need for weighty capacitors to store the energy temporarily. But new designs pull off the trick simply by changing the shape of the AC wave. This significantly reduces the cost and the size of the devices.
What is more, microinverters are also evolving to communicate with other grid devices, much as smart meters are already doing. Networked microinverters can report data for display on internet browsers, but some also have two-way communication, allowing operators to control their active/reactive power generation mix. Eventually, on-board intelligence will adjust the mix on the fly, providing the best economic benefit to consumers based on their rate structures (which will eventually include reactive power pricing). Such intelligence will allow these distributed networks to separate from the main continental grid and form localized microgrids, so that all the electricity we need is generated where we need it.
Photo and Diagram courtesy of Arnold Mckinley
|
Social Media Usage Provides Educational Benefits Research Shows
Social media usage in schools is no longer a head turner, and while some teachers worry that it may be a distraction, many are also finding it to be increasingly useful as a way to connect with other educators, share information on a larger scale and enable students to learn more interactively.
One study, which was carried out by researchers from the University of Science & Technology of China and the City University of Hong Kong, found that social networking sites can help students to become academically and socially integrated, and may even improve learning outcomes.
The researchers held discussions with college students in order to gain an insight into their online social networking experiences and attitude towards using social media for education. They found that networking websites were used for both social and educational purposes.
Students reported that social media enhanced their relationships, helped them maintain friendships and enabled them to build and establish virtual relationships.
On the learning side, they reported that social networks allowed them to connect with faculty, share knowledge and commentary, and collaborate with other students through discussions, course scheduling, project management, and educational applications to organize learning activities.
The study, which collected data from students aged between 16 and 18 over a period of six months, found that social networking sites helped students to practice their technology skills, develop creativity and communication skills, and be more open to diverse views.
Christine Greenhow, a learning technologies researcher for the University of Minnesota’s College of Education and Human Development, commented that social networking sites enable students to practice and develop the kinds of 21st century skills that will help them be successful.
“Students are developing a positive attitude towards using technology systems, editing and customizing content and thinking about online design and layout,” she said. “They’re also sharing creative original work like poetry and film and practicing safe and responsible use of information and technology. The Web sites offer tremendous educational potential.”
Greenhow also pointed out that the study’s results have implications for educators, who now have the opportunity to support what their students are learning through social media.
“As educators, we always want to know where our students are coming from and what they’re interested in so we can build on that in our teaching. By understanding how students may be positively using these networking technologies in their daily lives and where the as yet unrecognized educational opportunities are, we can help make schools even more relevant, connected and meaningful to kids.”
The study also found that many students are still unaware of the academic and professional opportunities that social media can offer, highlighting the need for teachers to spend more time pointing out these opportunities and working with students to enhance their experiences on social networking sites.
Of course, there are also drawbacks to social media usage, and Wisconsin Center for Educational Research (WCER) researcher, Mark Connolly, points out in a WCER news release that although social media can be valuable in educational settings, it is important to use them prudently.
Researchers have found that heavy internet use may result in greater impulsivity, less patience, less tenacity and weaker critical thinking skills, which may result from the need to rapidly shift attention from object to object online, as this can weaken an individual’s ability to control their focus.
Connolly believes that it is important to help students learn how to use social media in an instrumental way, learn how to think deliberately about their use, and consider the sorts of outcomes for which using social media are proper.
He points out that knowing when, where, and with whom to use social media, may be the most important learning outcome of all.
|
As children develop, they require healthy foods that contain more vitamins and minerals to support growing bodies. This means whole grains (whole wheat, oats, barley, rice, millet, quinoa); a wide variety of fresh vegetables (5 or more helpings per day) and some fruits; calcium for growing bones (milk or substitutes if lactose intolerant); and healthy proteins.
Healthy eating and being physically active are particularly important for children and adolescents. This is because their nutrition and lifestyle influence their wellbeing, growth and development. The nutritional requirements of children and adolescents are high in relation to their size because of the demands of growth, in addition to the requirements of body maintenance and physical activity. Physical activity has a major impact on health at all stages of life. In children and young people, physical activity is particularly important to maintain energy balance and therefore a healthy bodyweight, for bone and musculoskeletal development, for reducing the risk of diabetes and hypertension, and for numerous psychological and social aspects. Too little time in the sun will also reduce vitamin D production.
Trends in overweight and obesity in children and adolescents are associated with an increased risk of various conditions once seen mainly in adulthood, yet these changes are now observed in children. Obese children have been shown to already have many of the changes associated with vascular disease in adults, including insulin resistance and high blood pressure. Type 2 diabetes mellitus has become a far more common occurrence in children and adolescents.
Oral health has clearly improved since the 1970s, mainly due to improved oral hygiene and nutrition. A sufficient supply of calcium and vitamin D, as well as being physically active, is important for healthy bone development. There is a high incidence of perceived food allergies and intolerances. It has also been suggested that contemporary diets and food quality affect mental health, including cognitive function and depression.
|
This article introduces the idea of generic proof for younger children and illustrates how one example can offer a proof of a general result through unpacking its underlying structure.
How many possible necklaces can you find? And how do you know you've found them all?
This challenge encourages you to explore dividing a three-digit number by a single-digit number.
What happens when you add the digits of a number then multiply the result by 2 and you keep doing this? You could try for different numbers and different rules.
EWWNP means Exploring Wild and Wonderful Number Patterns Created by Yourself! Investigate what happens if we create number patterns using some simple rules.
What happens when you add three numbers together? Will your answer be odd or even? How do you know?
Are these statements always true, sometimes true or never true?
This challenge combines addition, multiplication, perseverance and even proof.
Ram divided 15 pennies among four small bags. He could then pay any sum of money from 1p to 15p without opening any bag. How many pennies did Ram put in each bag?
This task combines spatial awareness with addition and multiplication.
Look at three 'next door neighbours' amongst the counting numbers. Add them together. What do you notice?
Investigate the different numbers of people and rats there could have been if you know how many legs there are altogether!
Can you put the numbers 1-5 in the V shape so that both 'arms' have the same total?
A game for two people, or play online. Given a target number, say 23, and a range of numbers to choose from, say 1-4, players take it in turns to add to the running total to hit their target.
Four bags contain a large number of 1s, 3s, 5s and 7s. Pick any ten numbers from the bags above so that their total is 37.
This game can replace standard practice exercises on finding factors and multiples.
|
Effect of Different Colored Lights on Photosynthesis
Anastasia Rodionova, Cassidy Davis, Sara Cucciniello
CU Boulder, Fall 2003
Our experiment tested which color of light (red, blue, or green) would cause the plant to carry out the greatest amount of photosynthesis. There are four main photosynthetic pigments found in the chloroplast of the plant: chlorophyll a, chlorophyll b, xanthophylls, and carotenes. All of these pigments absorb light and possibly utilize the light energy in photosynthesis. Light energy is essential for photosynthesis. An initial experiment showed that, at peak absorbance, the pigments absorbed violet/blue light at the highest level, orange/red light at the second highest level, and yellow/green light at the lowest level. We hypothesized that photosynthesis was affected by the light absorption rate.
To test this we used about 5 grams of leaves for each trial and placed them in a gas chamber. On two sides of the gas chamber we placed two clear containers filled with water to serve as temperature regulators. Behind the water containers we placed lights directed at the plant. We ran three trials for each different leaf we used. Each trial consisted of measuring the amount of CO2, with a CO2 gas sensor, under blue light, red light, and green light. We made sure to switch the order of colors in each trial as an experimental control, to minimize error. Since we know that photosynthesis requires CO2, and we know that blue light pigments absorb the most light energy, we predicted that the most CO2 would be used under blue light.
Our results showed the lowest CO2 rate under blue light (mean: -8.1 ppm/min/g), an intermediate rate under red light (mean: -1.04 ppm/min/g), and the highest rate under green light (mean: 4.7 ppm/min/g). However, our t-tests showed that these differences were not statistically significant (p > 0.05).
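A two-sample t-test of this kind is easy to reproduce with standard tools. The sketch below uses hypothetical replicate rates, three per light color, invented so that the group means match the reported -8.1 and 4.7 ppm/min/g but with an arbitrary spread; it illustrates the procedure only and does not reproduce the actual trial data.

```python
# Two-sample (Welch) t-test comparing CO2 exchange rates (ppm/min/g) under two light colors.
# The replicate values below are hypothetical placeholders for illustration only;
# the actual per-trial measurements are not reproduced in this abstract.
from scipy import stats

blue_light = [-20.0, 5.5, -9.8]    # invented rates, mean = -8.1 ppm/min/g
green_light = [18.0, -8.0, 4.1]    # invented rates, mean = 4.7 ppm/min/g

t_stat, p_value = stats.ttest_ind(blue_light, green_light, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
if p_value > 0.05:
    print("Difference not statistically significant at the 0.05 level.")
else:
    print("Difference statistically significant at the 0.05 level.")
```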
Our results contradict our hypothesis, based on our statistical analysis. There were several problems with our experiment that could have affected the outcome. First, when taking respiration rates the foil wasn't covering the chamber all the way, letting some external light in. Second, the colored cellophane only allows approximately 70% of light through; this might have prevented the plant from absorbing the amount of light energy needed to produce a significant photosynthetic rate. Third, the fast-paced switching between trials cost time and efficiency. By having short trials (2.5 min.) we might not have allowed the plant enough time to adjust its photosynthetic rate to the different wavelengths of light energy. In addition, moving the plant and switching from cellophane to foil (or vice versa) exposed it to white light, which might have disrupted its photosynthetic cycle.
Other students who did the same experiment also obtained results suggesting that blue light was not responsible for the rate of photosynthesis. Experiments by Warsh et al. (2001), Bahramzadeh et al. (2001), and Whorley and Weaver (2002) all showed that red light produced much higher rates of photosynthesis than blue, although these experiments had fewer or shorter trials. Many other students also used white light as a control; including it in our experiment would have provided a useful baseline for comparison.
Considering that not all absorbed light energy is used for photosynthesis, we propose an alternative hypothesis. In a previous experiment the pigment xanthophylls absorbed significant amounts of blue light. Newer research suggests that this pigment could be an important component in a process called energy dissipation rather than photosynthesis. Rather than overwhelming the plant's photosynthetic and respiratory machinery, this photon energy may go to other functions or structures of the plant. Further research on the function of xanthophylls will be needed in order to understand these processes of plant function.
|
Without the process of domestication, humans would still be hunters and gatherers, and modern civilization would look very different. Fortunately, for all of us who do not relish the thought of spending our days searching for nuts and berries, early civilizations successfully cultivated many species of animals and plants found in their surroundings. Current studies of the domestication of various species provide a fascinating glimpse into the past.
A recent article by Dr. Seung-Chul Kim and colleagues in the June 2009 issue of the American Journal of Botany explores the domestication of chiles. These hot peppers, found in everything from hot chocolate to salsa, have long played an important role in the diets of Mesoamerican people, possibly since as early as ~8000 B.C. Capsicum annuum is one of five domesticated species of chiles and is notable as one of the primary components, along with maize, of the diet of Mesoamerican peoples. However, little has been known regarding the original location of domestication of C. annuum, the number of times it was domesticated, and the genetic diversity present in wild relatives.
To answer these questions, Dr. Kim and his team examined DNA sequence variation and patterns at three nuclear loci in a broad selection of semiwild and domesticated individuals. Dr. Kim et al. found a large amount of diversity in individuals from the Yucatan Peninsula, making this a center of diversity for chiles and possibly a location of C. annuum domestication. Previously, the eastern part of central Mexico had been considered to be the primary center of domestication of C. annuum. On the basis of patterns in the sequence data, Dr. Kim et al. hypothesize that chiles were independently domesticated several times from geographically distant wild progenitors by different prehistoric cultures in Mexico, in contrast to maize and beans which appear to have been domesticated only once.
Geographical separation among cultivated populations was reflected in DNA sequence variation. This separation suggests that seed exchange among farmers from distant locations is not significantly influencing genetic diversity, in contrast to maize and bean seeds, which are traded by farmers across long distances. Less genetic diversification was seen in wild populations of C. annuum from distant locales, perhaps as a result of long-distance seed dispersal by birds and mammals.
Across the three loci studied, Dr. Kim and colleagues found an average reduction in diversity of 10% in domesticated individuals compared with the semiwild individuals. Domesticated chiles grown in traditional agricultural habitats, however, harbor unique gene pools and serve as important reservoirs of genetic diversity for conserving biodiversity.
This work was conducted primarily by Araceli Aguilar-Meléndez as her dissertation project under the guidance of Drs. Kim and Mikeal Roose in the Department of Botany and Plant Sciences at the University of California at Riverside. The research was supported by the University of California Institute for Mexico and the United States (UC MEXUS), El Consejo Nacional de Ciencia y Tecnología (CONACYT), and a gift from the McIlhenny Company. Aguilar-Meléndez, Kim, and their colleagues plan to continue research on this remarkably variable and economically important spice in Mesoamerica.
The Botanical Society of America (www.botany.org) is a non-profit membership society with a mission to promote botany, the field of basic science dealing with the study and inquiry into the form, function, development, diversity, reproduction, evolution, and uses of plants and their interactions within the biosphere. It has published the American Journal of Botany (www.amjbot.org) for nearly 100 years. For further information, or for full access to this article, please contact the AJB staff at email@example.com.
|
English Society in the Later Middle Ages
What was the social structure of England in the period 1200 to 1500? What were the basic forms of social inequality? To what extent did such divisions generate social conflict? How significantly did English society change during this period and what were the causes of social change? Is it useful to see medieval social structure in terms of the theories and concepts produced within the medieval period itself? What does modern social theory have to offer the historian seeking to understand English society in the later middle ages? These are the questions which this book seeks to answer. Beginning with an analysis of the class structure of medieval England, Part One of this book asks to what extent class conflict was inherent within class relations and discusses the contrasting successes and outcomes of such conflict in town and country. Part Two of the book examines to what extent such class divisions interacted with other forms of social inequality, such as those between orders (nobility and clergy), between men and women, and those arising from membership of a status-group (the Jews). Dr Rigby's discussion of medieval English society is located within the context of recent historical and sociological debates about the nature of social stratification and, using the work of social theorists such as Parkin and Runciman, offers a synthesis of the Marxist and Weberian approaches to social structure. The book should be extremely useful to undergraduates beginning their studies of medieval England, while its new interpretative framework for examining social structure should also interest historians who are more familiar with this period.
|
# What is Bronchitis
# Symptoms of Bronchitis
# Causes of Bronchitis
# Risk Factors That Cause Bronchitis
# Bronchitis Detection
# Bronchitis Treatment
# Prevention of Bronchitis
What is Bronchitis
Bronchitis is an inflammation of the lining of the bronchial tubes, the airways that connect the windpipe (trachea) to the lungs. The respiratory system is covered and protected by a mucus-producing lining. When a person contracts bronchitis, it often becomes painful and difficult for air to pass in and out of the lungs when breathing. Bronchitis is contagious and can be spread by direct or indirect contact.
There are two main types of bronchitis: Acute and Chronic
Acute bronchitis comes on quickly and can cause severe symptoms. This kind of bronchitis usually persists for about 10 days. Acute bronchitis is most often caused by a virus that infects the respiratory tract and attacks the bronchial tubes. It usually develops from a severe cold. It can also follow or accompany the flu, or it may begin without any infection at all. In severe cases it may cause chest pain and malaise. Most people have had or will contract acute bronchitis at some point in their lives.
Chronic bronchitis, on the other hand, can be mild to severe and is longer lasting; it can persist from several months to years. Sufferers of chronic bronchitis are more susceptible to bacterial infections of the lungs and airways, like pneumonia. With chronic bronchitis the bronchial tubes remain inflamed and irritated over time. Most sufferers of chronic bronchitis are smokers or people who are exposed to secondhand smoke.
|
Battle of Tlatelolco
The burning temple of Tlatelolco and the death of Moquihuixtli, as depicted in the Codex Mendoza (early 16th century).
The Battle of Tlatelolco was an attack in 1473 on the Mexica altepetl (city-state) of Tlatelolco by Tenochtitlan and its allies. It resulted in a Tenochca victory, and the deaths of Moquihuixtli, tlatoani ("ruler" or "king") of Tlatelolco and Xilomantzin, tlatoani of Culhuacan, who had conspired to conquer Tenochtitlan.
|
TenMarks teaches you how to solve two step rational equations.
Learn About Two Step Rational Equations Let’s learn about two-step equations where we’ve got to apply two different steps to solve it. We’re given two problems. The first one is r+7/4=5. So, step one is we’ve got a variable which is being divided by four. So, the first thing we do is we write r+7/4 and we multiply the left side by four and the right side by four. I took the denominator because we need to remove that. We multiply the left hand side of the equation and the right hand side of the equation by four. What do I get? We get r+7=4/4 is 1, 5x4=20. Now that I have r+7=20, the variable can be separated by subtracting seven from the left and seven from the right, r+7-7 is r, 20-7 is 13 so the answer is r=13. Now, let’s try the second problem which is -1=5b/8+3/8. So, the second problem is -1=5b/8+3/8. Let’s try and solve this. So, what do we see? -1=5b/8+3/8, I’m just going to flip this around to make it easy. 5b/8+3/8=-1, I just moved the right hand side to left and left to right. No change or whatsoever. So, the first thing that I see is the variable part has something being added to it. So, we have to subtract 3/8 from the left and from the right. What am I left with? What I’m left with is 5b+3/8-3/8=5b/8=-1-3/8, I just subtracted this from this. Now, since the right side has got a whole number and a fraction, let’s convert the whole number to fractions. -1 can be multiplied by 8 on top and 8 on the bottom,-3/8. All I’m trying to do is get this to have the same denominator as 3/8. So, -1x8 is -8/8-3/8. I can subtract this, -8, -3 is -11/8 since the denominators were the same. So now, what I have is 5b/8=-11/8. Since 5b/8=-11/8, 5b is being divided by eight so let’s multiply it by eight on the left and multiply it by eight on the right. What am I left with? 5b on the left because 8/8 is 1 and -11 x 8/8 is -11. 8 and 8 cancel out. So 5b=-11, b is being multiplied by five so let’s divide it by five in both sides. You notice what I’m doing is I’m just looking at the variable, the action or the constant next to it and if it’s being multiplied to b, I’m dividing it. So 5b/5 is b, -11/5 is -11/5, so the final answer is b=-11/5.
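For reference, the two solutions worked through in the transcript can be written compactly (in LaTeX notation) as:
\[ \frac{r+7}{4} = 5 \;\Rightarrow\; r + 7 = 20 \;\Rightarrow\; r = 13 \]
\[ \frac{5b}{8} + \frac{3}{8} = -1 \;\Rightarrow\; \frac{5b}{8} = -\frac{11}{8} \;\Rightarrow\; 5b = -11 \;\Rightarrow\; b = -\frac{11}{5} \]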
|
End–Ordovician extinction event
The End–Ordovician extinction event is the third-largest extinction event of the Phanerozoic eon. The Ordovician period followed the Cambrian and was followed by the Silurian. There were no living things on the land except for bacteria and perhaps some single-celled algae. The biota was almost entirely marine.
The extinction came in two steps, at the start and the finish of the Hirnantian stage, which was the last stage of the Ordovician.
- 1. Pre-event: warm climate; deep-ocean anoxic event. Ocean bottoms were anoxic (little or no oxygen). Black shales were laid down in deep ocean strata; carbonates were laid down on oxygenated continental shelves.
- 2. First step: the climate turns cold and the water in the seas turns over. Rising anoxic water kills most of the plankton, and shrinking seas remove habitats. This was a cold stage with clear evidence of widespread glaciation.
- 3. Second step: a warm ocean is re-established; glaciers melt, and anoxic conditions reach the continental shelves and kill the fauna again.
Basic mechanism: the climate changed from very warm to very cold and back to very warm. Changes in ocean circulation were the result of these climate changes. Both benthic (ocean-bottom) and pelagic fauna were faced with conditions they were unable to cope with.
More than 100 invertebrate families became extinct in the End–Ordovician extinction event, and a total of almost half the genera. The brachiopods and bryozoans were decimated, along with many of the trilobite, conodont and graptolite families.
|
Executable programs sometimes do not record the directories of the source files from which they were compiled, just the names. Even when they do, the directories could be moved between the compilation and your debugging session. gdb has a list of directories to search for source files; this is called the source path. Each time gdb wants a source file, it tries all the directories in the list, in the order they are present in the list, until it finds a file with the desired name.
For example, suppose an executable references the file /usr/src/foo-1.0/lib/foo.c, and our source path is /mnt/cross. The file is first looked up literally; if this fails, /mnt/cross/usr/src/foo-1.0/lib/foo.c is tried; if this fails, /mnt/cross/foo.c is opened; if this fails, an error message is printed. gdb does not look up the parts of the source file name, such as /mnt/cross/src/foo-1.0/lib/foo.c. Likewise, the subdirectories of the source path are not searched: if the source path is /mnt/cross, and the binary refers to foo.c, gdb would not find it under /mnt/cross/usr/src/foo-1.0/lib.
Plain file names, relative file names with leading directories, file names containing dots, etc. are all treated as described above; for instance, if the source path is /mnt/cross, and the source file is recorded as ../lib/foo.c, gdb would first try ../lib/foo.c, then /mnt/cross/../lib/foo.c, and after that—/mnt/cross/foo.c.
Note that the executable search path is not used to locate the source files.
Whenever you reset or rearrange the source path, gdb clears out any information it has cached about where source files are found and where each line is in the file.
When you start gdb, its source path includes only ‘cdir’ and ‘cwd’, in that order. To add other directories, use the directory command.
The search path is used to find both program source files and gdb script files (read using the ‘-command’ option and ‘source’ command).
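For example, a session that adds the cross-compilation tree from the example above to the front of the source path might look like the following; the path is illustrative, and the echoed output can vary slightly between gdb versions:
(gdb) directory /mnt/cross
Source directories searched: /mnt/cross:$cdir:$cwd
(gdb) show directories
Source directories searched: /mnt/cross:$cdir:$cwd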
In addition to the source path, gdb provides a set of commands that manage a list of source path substitution rules. A substitution rule specifies how to rewrite source directories stored in the program's debug information in case the sources were moved to a different directory between compilation and debugging. A rule is made of two strings, the first specifying what needs to be rewritten in the path, and the second specifying how it should be rewritten. In set substitute-path, we name these two parts from and to respectively. gdb does a simple string replacement of from with to at the start of the directory part of the source file name, and uses that result instead of the original file name to look up the sources.
Using the previous example, suppose the foo-1.0 tree has been moved from /usr/src to /mnt/cross, then you can tell gdb to replace /usr/src in all source path names with /mnt/cross. The first lookup will then be /mnt/cross/foo-1.0/lib/foo.c in place of the original location of /usr/src/foo-1.0/lib/foo.c. To define a source path substitution rule, use the set substitute-path command (see set substitute-path).
To avoid unexpected substitution results, a rule is applied only if the from part of the directory name ends at a directory separator. For instance, a rule substituting /usr/source into /mnt/cross will be applied to /usr/source/foo-1.0 but not to /usr/sourceware/foo-2.0. And because the substitution is applied only at the beginning of the directory name, this rule will not be applied to /root/usr/source/baz.c either.
In many cases, you can achieve the same result using the directory command. However, set substitute-path can be more efficient in the case where the sources are organized in a complex tree with multiple subdirectories. With the directory command, you need to add each subdirectory of your project. If you moved the entire tree while preserving its internal organization, then set substitute-path allows you to direct the debugger to all the sources with one single set substitute-path command.
set substitute-path is also more than just a shortcut command. The source path is only used if the file at the original location no longer exists. On the other hand, set substitute-path modifies the debugger behavior to look at the rewritten location instead. So, if for any reason a source file that is not relevant to your executable is located at the original location, a substitution rule is the only method available to point gdb at the new location.
You can configure a default source path substitution rule by configuring gdb with the ‘--with-relocated-sources=dir’ option. The dir should be the name of a directory under gdb's configured prefix (set with ‘--prefix’ or ‘--exec-prefix’), and directory names in debug information under dir will be adjusted automatically if the installed gdb is moved to a new location. This is useful if gdb, libraries or executables with debug information and corresponding source code are being moved together.
You can use the string ‘$cdir’ to refer to the compilation directory (if one is recorded), and ‘$cwd’ to refer to the current working directory. ‘$cwd’ is not the same as ‘.’—the former tracks the current working directory as it changes during your gdb session, while the latter is immediately expanded to the current directory at the time you add an entry to the source path.
set substitute-path from to
For example, if the file /foo/bar/baz.c was moved to /mnt/cross/baz.c, then the command
(gdb) set substitute-path /foo/bar /mnt/cross
will tell gdb to replace ‘/foo/bar’ with ‘/mnt/cross’, which will allow gdb to find the file baz.c even though it was moved.
In the case when more than one substitution rule has been defined, the rules are evaluated one by one in the order in which they were defined. The first one that matches, if any, is selected to perform the substitution.
For instance, if we had entered the following commands:
(gdb) set substitute-path /usr/src/include /mnt/include
(gdb) set substitute-path /usr/src /mnt/src
gdb would then rewrite /usr/src/include/defs.h into /mnt/include/defs.h by using the first rule. However, it would use the second rule to rewrite /usr/src/lib/foo.c into /mnt/src/lib/foo.c.
unset substitute-path [path]
If no path is specified, then all substitution rules are deleted.
show substitute-path [path]
If no path is specified, then print all existing source path substitution rules.
If your source path is cluttered with directories that are no longer of interest, gdb may sometimes cause confusion by finding the wrong versions of source. You can correct the situation as follows:
- Use directory with no argument to reset the source path to its default value.
- Use directory with suitable arguments to reinstall the directories you want in the source path. You can add all the directories in one command (see the sketch below).
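A minimal sketch of that correction, assuming the only directories still of interest are the two foo-1.0 directories from the earlier example (the paths are illustrative, and gdb normally asks for confirmation before clearing the list):
(gdb) directory
(gdb) directory /mnt/cross/usr/src/foo-1.0/lib /mnt/cross/usr/src/foo-1.0/include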
|
Anti-interventionists used to deride World War I as “Mr. Wilson’s War.” They called it right. The United States entered the war in April 1917 because Woodrow Wilson decided to take the country in. Germany’s submarine attacks, renewed two months earlier, had brought shrill demands for immediate intervention from such war hawks as Theodore Roosevelt and Henry Cabot Lodge, but every indicator of public and congressional opinion found most people clinging to what Wilson once called “the double wish of our people” to stand up to Germany and yet not get drawn into war.
Wilson chose war because, like some other presidents in similar situations, he believed there were no truly good choices. Neutrality seemed to him no refuge from the war’s damaging reverberations. He had already ordered naval protection for and the arming of merchant vessels against the submarines, which was tantamount to waging a naval war. Also, with the tsarist regime recently overthrown in Russia, the Allies now looked like a marginally more palatable set of victors. Above all, Wilson recoiled from being just a bystander. By becoming a belligerent and helping to decide the outcome of the war, he was wagering that he could play a leading part in shaping the peace to follow.
Shortly before the submarine attacks, Wilson had unveiled his grand design for a new world order to be achieved through a compromise settlement—“peace without victory”—and future guarantees of all nations’ independence, territorial integrity, and freedom from aggression through a league of nations. He dearly wished to pursue those goals as a mediator, but he now believed he could pursue them only as a belligerent. In his wartime diplomacy, he avoided proclaiming common cause with the Allies—as Theodore Roosevelt wanted to do then and Franklin Roosevelt did later—and he sought to shape and limit their war aims, particularly through the Fourteen Points.
Wilson’s diplomatic misfortune was that he succeeded too soon. The prospect of a less punitive peace prompted the Germans to sue for the Armistice in November 1918. The fighting ended just as the massive influx of Doughboys was about to give the United States the upper hand in managing the war. If the war had lasted for another five or six months, an American-led invasion of Germany would have looked like what happened a quarter century later. There would have been many more deaths and much greater destruction, but Wilson would have been in a position to dictate the terms of the peace settlement. As it was, he did not have a strong hand to play at the peace conference, but he was able to resist the harshest demands of the British and French and to establish the League of Nations.
The war’s end also left Wilson politically vulnerable at home. Negotiations leading to the Armistice prevented him from making a nationwide speaking tour in the fall of 1918 to inform the public about his vision for peace and answer critics who were demanding a Carthaginian treatment of Germany. Likewise, he was contemplating ways to restrain overzealous federal prosecutors and judges in their draconian enforcement of wartime restrictions on dissent. Wilson’s failure to educate the public about his design for peace and his permissiveness toward repression of civil liberties deservedly remain blots on his historical reputation. But his greatest failings, particularly in shaping the peace settlement and in bringing the United States into a collective security system, stemmed from bad luck. His worst misfortune came when he suffered a massive stroke just after a belated and foreshortened speaking tour to sell the public on the League of Nations. It left him a broken man, whose impaired judgment turned him into a major element in the spiteful stalemate that kept the America out of the League of Nations.
Would things have been different if Wilson had not decided to go to war in 1917? Yes, because Germany would almost certainly have won by the end of that year. Military disasters in Russia and Italy, grievous shipping losses inflicted by the submarines, and an untenable financial situation (the British had run out of credit in the U. S. to sustain their massive war orders), and no prospect of American troops eventually coming to their rescue—all these added up to a recipe for Allied defeat. Europe dominated by a victorious Germany would almost certainly have been more benign than the Nazi-conquered continent following the Fall of France in 1940. But how much more benign? The settlement imposed on the Bolsheviks at Brest-Litovsk in 1918 leaves the question open. Likewise, what impact would such a victory have had on the long march toward the end of colonialism that began with the League of Nations mandate system?
What might have happened if Wilson had won his wager? What if he had been able to shape the peace more to his liking and get his country wholeheartedly into an empowered League of Nations? During World War II, he enjoyed a posthumous apotheosis as a prophet whose unheeded warnings could have avoided that second global conflict and all its attendant horrors. Few historians accept that scenario, but it is hard to deny that things could and probably would have gone better if Wilson had won his wager on a new world order. For that wager to have had any chance of success, this country had to go into World War I.
|
Electroplating is the coating of an object with a metal. It is done by immersing the object and a bar of the metal in a solution containing the metal ions. Electric current is then applied; the positive goes to the metal and the negative goes to the object. The metal bar dissolves in the solution and plates out on the object, forming a thin but durable coating of metal. It is often used to gold-plate objects for decoration or to stop corrosion. Normally the metal becomes fragile, and is only used for display.
A commonly cited example of electroplated metal is the German planes of World War II. The planes were partly fragile, but at the time they were believed to be stronger. They were easily destroyed by the Allies, which was one of the reasons they lost the war.
|
Effective School Library Programs in Canada
Acknowledgment - Canadian Library Association (CLA): Approved November 25, 2000
A major goal of education in Canada is to develop students who are informed, self-directed and discriminating learners. To be effective citizens in a society rich in information, students need to learn skills which will allow them to locate and select appropriate information, to analyze that information critically, and to use it wisely. With the demands growing from across society for information-literate and technologically competent citizens, there is a strong need for an educational program which emphasizes the information literacy skills that are crucial to the processes of critical thinking and problem solving.
The school library, and its instructional program, are essential components of the educational process, contributing to the achievement of these educational goals and objectives through programs and services that implement and support the instructional programs of the school. The role and responsibility of the school library lies in the development of resource-based programs that will ensure that all the young people in our schools have the opportunity to learn the skills that will enable them to become competent users of information. The school library also houses and provides access to resources in a variety of formats and in sufficient breadth and number to meet the demands of the curriculum and the varied capabilities and interests of the students. These materials provide the essential support for resource-based teaching and learning.
The school library program is most effective when it is an integral part of the instructional program of the school and when information and media literacy skills are integrated in a developmental and sequential way with subject-specific skills and content. The program is developed jointly by teachers and teacher-librarians, who work collaboratively to plan, implement and evaluate resource-based units of study. Through such planned and purposeful activities, students learn how to retrieve, evaluate, organize, share and apply information objectively, critically, and independently. As well, they are given opportunities to grow intellectually, aesthetically and personally.
The school library exists within a particular context and is shaped by policy set at national, provincial and local levels, by professional standards and research, by educational objectives and curriculum requirements, and by the expectations of the administration, the staff and the community. Basic levels of support are required in order to develop library programs and services that are congruent with the educational goals of the school, the curriculum and the needs of the learners. Support from the provincial ministry of education, from the local school district, and from the administration and teaching staff of the school are all important to the success of the program. This support involves the development of policies and procedures related to the school library, and the provision of qualified personnel, multi-functional facilities, diverse learning resources, and an adequate annual budget. Each of these factors has an impact on the richness of the program that can be offered. As the number of qualified teacher-librarians increases, services and programs become more extensive, and they affect the educational goals of the school more significantly. As collections of resources increase in quantity, size and scope, students' individual learning styles and needs are met more effectively. Adequate and consistent budgets ensure that school library collections remain current and capable of meeting diverse learning needs. The provision of provincial and district services support the program in the local school by enabling library personnel to spend more of their time working with teachers and students.
The role of the school library program in the education of our young people is a crucial one. As support increases, more effective program development is possible. As programs expand, the impact of resource-based learning on student achievement is more pronounced. All students in our schools should have access to effective school library programs. All our young people must have the opportunity to develop the information and media literacy skills they require to reach their fullest potential, to become independent, lifelong learners, and to live as active, responsible members of society.
|
In the past there was very little demand for electrical energy, and a single small generating unit could meet the localized demand. Nowadays the demand for electrical energy is increasing tremendously along with the modernization of human lifestyles. To meet this increasing load demand, we have to establish quite a large number of big power plants.
But from an economic point of view, it is not always possible to build a power plant near the load centers. We define load centers as the places where the density of consumers or connected loads is quite high compared to other parts of the country. It is economical to establish a power plant near a natural source of energy such as coal, gas or water. Because of that and many other factors, we often have to construct an electrical generating station far away from the load centers.
Thus we have to establish electrical network systems to bring the generated electrical energy from the power generating station to the consumer end. Electricity generated in the generating station reaches the consumers through systems which we can divide into two main parts, referred to as transmission and distribution.
We call the network through which the consumers get electricity from the source the electrical supply system. An electrical supply system has three main components: the generating stations, the transmission lines and the distribution systems. Power generating stations produce electricity at a comparatively low voltage level. Producing electricity at a lower voltage level is economical in many respects.
The step-up transformers connected at the beginning of the transmission lines increase the voltage level of the power. Electrical transmission systems then transmit this higher-voltage electrical power to the nearest possible zone of the load centres. Transmitting electrical power at higher voltage levels is advantageous in many respects. High-voltage transmission lines consist of overhead or/and underground electrical conductors. The step-down transformers connected at the end of the transmission lines decrease the voltage of the electricity to the desired low values for distribution purposes. The distribution systems then distribute the electricity to the various consumers according to their required voltage levels.
We usually adopt an AC system for generation, transmission and distribution purposes. For ultra-high-voltage transmission we often use a DC transmission system. Both the transmission and distribution networks can be either overhead or underground. As the underground system is much more expensive than an overhead system, the latter is preferable wherever possible from the economic point of view. We use a three-phase 3-wire system for AC transmission and a three-phase 4-wire system for AC distribution.
We can divide both the transmission and distribution systems into two parts: primary and secondary transmission, and primary and secondary distribution. This is a generalized view of an electrical network; we should note that not all transmission and distribution systems have these four stages of the electrical supply system.
Depending on the requirements of the system, many networks may not have a secondary transmission or secondary distribution stage, and in many localized electrical supply systems the entire transmission stage can be absent. In those localized systems, generators distribute the power directly to the different consumption points.
Let us discuss a practical example of the electrical supply system. Here the generating station produces three-phase power at 11 kV. An 11/132 kV step-up transformer associated with the generating station then steps this power up to the 132 kV level. The transmission line transmits this 132 kV power to a 132/33 kV step-down substation, consisting of 132/33 kV step-down transformers, situated on the outskirts of the town. We call the portion of the electrical supply system from the 11/132 kV step-up transformer to the 132/33 kV step-down transformer the primary transmission. Primary transmission is a 3-phase 3-wire system, which means there are three conductors, one for each phase, in each line circuit.
After that point in the supply system, the secondary power of the 132/33 kV transformer is transmitted by a 3-phase 3-wire transmission system to different 33/11 kV downstream substations situated at strategic locations around the town. We refer to this portion of the network as secondary transmission.
The 11 kV 3-phase 3-wire feeders running along the roadsides of the town carry the secondary power of the 33/11 kV transformers of the secondary transmission substations. These 11 kV feeders comprise the primary distribution of the electrical supply system.
The 11/0.4 kV transformers in consumer localities step the primary distribution power down to 0.4 kV, or 400 V. These transformers are called distribution transformers, and they are pole-mounted. From the distribution transformers, the power goes to the consumer ends through a 3-phase 4-wire system, in which three conductors are used for the three phases and the fourth conductor is used as the neutral wire for neutral connections.
A consumer can take the supply either in three phase or single phase, depending on the requirement. For a three-phase supply the consumer gets 400 V phase-to-phase (line) voltage, and for a single-phase supply the consumer gets 400/√3, or about 231 V, phase-to-neutral voltage at the supply mains. The supply mains are the end point of the electrical supply system: the terminals installed at the consumer's premises from which the consumer takes a connection. We refer to the portion of the system from the secondary of the distribution transformer to the supply mains as secondary distribution.
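As a quick check of that last figure, the phase-to-neutral voltage of a balanced three-phase system is the line voltage divided by the square root of three:
\[ V_{ph} = \frac{V_L}{\sqrt{3}} = \frac{400\,\mathrm{V}}{\sqrt{3}} \approx 231\,\mathrm{V} \]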
|
What Is Hammertoe?
Hammertoe is a contracture (bending) deformity of one or both joints of the second, third, fourth or fifth (little) toes. This abnormal bending can put pressure on the toe when wearing shoes, causing problems to develop.
Hammertoes usually start out as mild deformities and get progressively worse over time. In the earlier stages, hammertoes are flexible and the symptoms can often be managed with noninvasive measures. But if left untreated, hammertoes can become more rigid and will not respond to nonsurgical treatment.
Because of the progressive nature of hammertoes, they should receive early attention. Hammertoes never get better without some kind of intervention.
The most common cause of hammertoe is a muscle/tendon imbalance. This imbalance, which leads to a bending of the toe, results from mechanical (structural) or neurological changes in the foot that occur over time in some people.
Hammertoes may be aggravated by shoes that do not fit properly. A hammertoe may result if a toe is too long and is forced into a cramped position when a tight shoe is worn. Occasionally, hammertoe is the result of an earlier trauma to the toe. In some people, hammertoes are inherited.
Common symptoms of hammertoes include:
- Pain or irritation of the affected toe when wearing shoes.
- Corns and calluses (a buildup of skin) on the toe, between two toes or on the ball of the foot. Corns are caused by constant friction against the shoe. They may be soft or hard, depending on their location.
- Inflammation, redness or a burning sensation
- Contracture of the toe
- In more severe cases of hammertoe, open sores may form.
Although hammertoes are readily apparent, to arrive at a diagnosis, the foot and ankle surgeon will obtain a thorough history of your symptoms and examine your foot. During the physical examination, the doctor may attempt to reproduce your symptoms by manipulating your foot and will study the contractures of the toes. In addition, the foot and ankle surgeon may take x-rays to determine the degree of the deformities and assess any changes that may have occurred.
Hammertoes are progressive—they do not go away by themselves and usually they will get worse over time. However, not all cases are alike—some hammertoes progress more rapidly than others. Once your foot and ankle surgeon has evaluated your hammertoes, a treatment plan can be developed that is suited to your needs.
There is a variety of treatment options for hammertoe. The treatment your foot and ankle surgeon selects will depend on the severity of your hammertoe and other factors.
A number of nonsurgical measures can be undertaken:
- Padding corns and calluses. Your foot and ankle surgeon can provide or prescribe pads designed to shield corns from irritation. If you want to try over-the-counter pads, avoid the medicated types. Medicated pads are generally not recommended because they may contain a small amount of acid that can be harmful. Consult your surgeon about this option.
- Changes in shoewear. Avoid shoes with pointed toes, shoes that are too short, or shoes with high heels—conditions that can force your toe against the front of the shoe. Instead, choose comfortable shoes with a deep, roomy toebox and heels no higher than two inches.
- Orthotic devices. A custom orthotic device placed in your shoe may help control the muscle/tendon imbalance.
- Injection therapy. Corticosteroid injections are sometimes used to ease pain and inflammation caused by hammertoe.
- Medications. Oral nonsteroidal anti-inflammatory drugs (NSAIDs), such as ibuprofen, may be recommended to reduce pain and inflammation.
- Splinting/strapping. Splints or small straps may be applied by the surgeon to realign the bent toe.
When Is Surgery Needed?
In some cases, usually when the hammertoe has become more rigid and painful or when an open sore has developed, surgery is needed.
Often, patients with hammertoe have bunions or other foot deformities corrected at the same time. In selecting the procedure or combination of procedures for your particular case, the foot and ankle surgeon will take into consideration the extent of your deformity, the number of toes involved, your age, your activity level and other factors. The length of the recovery period will vary, depending on the procedure or procedures performed.
|
Table of contents:
- How do you write an ethos argument?
- How do you analyze ethos?
- Why is ethos important in writing?
- What are Aristotle's three main means of persuasion?
- Why is the rhetorical situation represented as a triangle?
- What does rhetorical mean in writing?
- What are the five elements of a rhetorical situation?
- What are rhetorical concepts?
- What does it mean when someone says rhetorically speaking?
How do you write an ethos argument?
Ethos, or the ethical appeal, is based on the character, credibility, or reliability of the writer. To build ethos: use only credible, reliable sources to build your argument and cite those sources properly; respect the reader by stating the opposing position accurately; and establish common ground with your audience.
How do you analyze ethos?
When you evaluate an appeal to ethos, you examine how successfully a speaker or writer establishes authority or credibility with her intended audience. You are asking yourself what elements of the essay or speech would cause an audience to feel that the author is (or is not) trustworthy and credible.
Why is ethos important in writing?
It is important for professional writing to use ethos because it established the writer's credibility. In using ethos, writers exemplify their expertise on the topic and draw themselves as respectable authority figures who their audience can trust to receive reliable information.
What are Aristotle's three main means of persuasion?
According to Aristotle, rhetoric is: "the ability, in each particular case, to see the available means of persuasion." He described three main forms of rhetoric: Ethos, Logos, and Pathos. In order to be a more effective writer and speaker, you must understand these three terms.
Why is the rhetorical situation represented as a triangle?
The rhetorical situation Aristotle argued was present in any piece of communication is often illustrated with a triangle to suggest the interdependent relationships among its three elements: the voice (the speaker or writer), the audience (the intended listeners or readers), and the message (the text being conveyed).
What does rhetorical mean in writing?
Rhetoric is the art of persuasion through communication. It is a form of discourse that appeals to people's emotions and logic in order to motivate or inform. The word “rhetoric” comes from the Greek “rhetorikos,” meaning “oratory.”
What are the five elements of a rhetorical situation?
The five central elements of a rhetorical situation are the text, the author, the audience, the purpose(s) and the setting.
What are rhetorical concepts?
These rhetorical situations can be better understood by examining the rhetorical concepts that they are built from. The philosopher Aristotle called these concepts logos, ethos, pathos, telos, and kairos – also known as text, author, audience, purposes, and setting.
What does it mean when someone says rhetorically speaking?
If you ask a rhetorical question it means you don't necessarily expect an answer, but you do want an occasion to talk about something. Rhetoric is the art of written or spoken communication. But nowadays if we say something is rhetorical, we usually mean that it's only good for talking.
|
Imagine a world in which you see numbers and letters as colored even though they're printed in black, in which music or voices trigger a swirl of moving, colored shapes, in which words and names fill your mouth with unusual flavors. Jail tastes like cold, hard bacon while Derek tastes like earwax. Welcome to synesthesia, the neurological phenomenon that couples two or more senses in 4% of the population. A synesthete might not only hear my voice, but also see it, taste it, or feel it as a physical touch. Sharing the same root with anesthesia, meaning no sensation, synesthesia means joined sensation. Having one type, such as colored hearing, gives you a 50% chance of having a second, third, or fourth type. One in 90 among us experience graphemes, the written elements of language, like letters, numerals, and punctuation marks, as saturated with color. Some even have gender or personality. For Gail, 3 is athletic and sporty, 9 is a vain, elitist girl. By contrast, the sound units of language, or phonemes, trigger synesthetic tastes. For James, college tastes like sausage, as does message and similar words with the -age ending. Synesthesia is a trait, like having blue eyes, rather than a disorder because there's nothing wrong. In fact, all the extra hooks endow synesthetes with superior memories. For example, a girl runs into someone she met long ago. "Let's see, she had a green name. D's are green: Debra, Darby, Dorothy, Denise. Yes! Her name is Denise!" Once established in childhood, pairings remain fixed for life. Synesthetes inherit a biological propensity for hyperconnecting brain neurons, but then must be exposed to cultural artifacts, such as calendars, food names, and alphabets. The amazing thing is that a single nucleotide change in the sequence of one's DNA alters perception. In this way, synesthesia provides a path to understanding subjective differences, how two people can see the same thing differently. Take Sean, who prefers blue tasting food, such as milk, oranges, and spinach. The gene heightens normally occurring connections between the taste area in his frontal lobe and the color area further back. But suppose in someone else that the gene acted in non-sensory areas. You would then have the ability to link seemingly unrelated things, which is the definition of metaphor, seeing the similar in the dissimilar. Not surprisingly, synesthesia is more common in artists who excel at making metaphors, like novelist Vladimir Nabokov, painter David Hockney, and composers Billy Joel and Lady Gaga. But why do the rest of us non-synesthetes understand metaphors like "sharp cheese" or "sweet person"? It so happens that sight, sound, and movement already map to one another so closely, that even bad ventriloquists convince us that the dummy is talking. Movies, likewise, can convince us that the sound is coming from the actors' mouths rather than surrounding speakers. So, inwardly, we're all synesthetes, outwardly unaware of the perceptual couplings happening all the time. Cross-talk in the brain is the rule, not the exception. And that sounds like a sweet deal to me!
|
Forward Error Correction (FEC) is a type of error correction that involves encoding a message in a redundant way, which allows the receiver to reconstruct lost bits without the need for retransmission.
How Forward Error Correction Works
FEC works by adding “check bits” to the outgoing data stream. Adding more check bits reduces the amount of available bandwidth by increasing the overall block size of the outgoing data, but also enables the receiver to correct for more errors without receiving any additional transmitted data.
This dynamic makes FEC ideal when bandwidth is plentiful, but retransmission is costly or impossible.
The “check bits,” or redundant bits, that the sender adds to the data stream are coded into the data in a very specific way, which allows for efficient error correction by the receiving device.
Many different types of FEC coding have been developed.
A simple example is the triple redundancy code, also known as a (3,1) repetition code, in which each bit of data is simply transmitted three times. The receiver compares the three copies of each bit and takes a majority vote to decide the corrected result, which protects against occasional bit flips caused by noise in the transmission.
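As a rough sketch of how such a repetition code behaves (illustrative Python, not a production FEC implementation; the function names are invented for this example):
def encode_repetition(bits):
    # Transmit each data bit three times: 1 -> 1,1,1 and 0 -> 0,0,0.
    return [b for bit in bits for b in (bit, bit, bit)]

def decode_repetition(received):
    # Take a majority vote over each group of three received bits;
    # any single flipped bit within a triplet is corrected.
    decoded = []
    for i in range(0, len(received), 3):
        triplet = received[i:i + 3]
        decoded.append(1 if sum(triplet) >= 2 else 0)
    return decoded

message = [1, 0, 1]
sent = encode_repetition(message)   # [1, 1, 1, 0, 0, 0, 1, 1, 1]
noisy = sent[:]
noisy[4] = 1                        # noise flips one bit of the middle triplet
assert decode_repetition(noisy) == message
Sending each bit three times triples the block size, which is exactly the bandwidth-for-reliability trade-off described above.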
Other more advanced coding systems that are in use today include Reed-Solomon coding, a customizable coding scheme that is often used in DVB.
Applications of Forward Error Correction
Reed-Solomon coding is notable for its use in CD, DVD, and hard disk drives. Although these drives are not transmitting data in the traditional sense, FEC coding allows for error correction on bits that become corrupted through damage to the physical medium of the drive.
Many types of multicast transmissions also make use of FEC.
Forward Error Correction is particularly well suited for satellite transmissions, both for consumer and space exploration applications, where bandwidth is reasonable but latency is significant.
Forward Error Correction vs. Backward Error Correction
Forward Error Correction protocols impose a greater bandwidth overhead than backward error correction protocols, but are able to recover from errors more quickly and with significantly fewer retransmissions.
Forward Error Correction also places a higher computational demand on the receiving device because the redundant information in the transmission must be interpreted according to a predetermined algorithm.
Overall, Forward Error Correction is more suitable for single, long-distance, and relatively high-noise transmissions, rather than situations where smaller batches of information can be sent repeatedly and easily. In these cases, Backward Error Correction is much more likely to be suitable.
|
Drill and practice is a behavioral approach to acquiring language. Through the frequent use of drills, students will hopefully uncover the pattern and structure of the language.
Drill and practice has been criticized for its focus on memorization and for students' common inability to generate language on their own, but this method is still used frequently in language teaching.
The purpose of this post is to provide several drill and practice activities that can be used in teaching language. In particular, we will look at the following activities.
Inflection involves the modification of a word in one sentence in another sentence
I bought the dog —–> I bought the dogs
Replacement is the changing of one word for another
I ate the apple —–> I ate it.
Restatement is the rewording of a statement so that it is addressed to someone else
Convert the sentence from 2nd person to third person
Where are you going?—–>Where is he going?
Completion is when the student hears a sentence and is required to finish it.
The woman lost _____ shoes—–>The woman lost her shoes
A change in word order is needed when a word is added to the sentence
I am tired. (add the word so)—–>I am so tired.
A single word replaces a phrase or clause
Put the books on the table—–>Put the books there
Two separate sentences are combined
They are kind. This is nice—–>It is nice that they are kind
These are responses to something that is said. A general answer based on a theme is expected from the student
Example say something polite
Example agree with someone
I think you are right
The student is given several words and they need to combine them into a sentence
boy/playing/toy—–>The boy is playing with the toy
The examples in this post provide some simple ways in which English can be taught to students. These drill and practice tools are one of many ways to support ESL students in their language acquisition.
|
Roman soldiers came from all parts of the Roman Empire. They were loyal, because they chose to become soldiers. Also, after at least twenty years’ service, soldiers were given either some land or a large amount of money, or both.
The Roman army developed some advanced weapons, which also helped them conquer new lands. They used powerful sling-shot catapults to smash the walls of castles. Rocks were loaded into a sling at the end of a long wooden post. This was then pulled back like a spring and released.
|
Be Alert About Asthma
Know what to do to breathe easy.
Asthma is a disease that affects your lungs. An asthma attack happens in your body’s airways. During an asthma attack, the sides of the airways in your lungs swell and the airways shrink. Less air gets in and out of your lungs, and mucous clogs up the airways even more. An asthma attack may include coughing, chest tightness, wheezing, and trouble breathing.
Asthma affects 24 million people living in the United States, including more than 6 million children. It causes 3 in 5 people to limit their physical activity or miss days at school and work. Asthma is the leading cause of missed school days related to chronic illness. It causes more than 10 million missed school days a year. Asthma is also expensive, costing the nation $56 billion each year.
- Follow Your Doctor’s Advice
- With your healthcare provider’s help, make your own asthma action plan. Decide who should have a copy of your plan and where they should keep it.
- Use Inhalers and Take Your Medicine
- Asthma medicines come in two types: quick-relief and long-term control. Quick-relief medicines control the symptoms of an asthma attack. If you need to use your quick-relief medicines more and more, visit your doctor to see if you need a different medicine. Long-term control medicines taken daily help you have fewer and milder attacks, but they don’t help you while you are having an asthma attack.
- Avoid Triggers
- Your triggers can be very different from those of someone else with asthma. Know yours and learn how to avoid them. Watch out for an attack when you can’t avoid your triggers. Some of the most common ones are tobacco smoke, dust mites, outdoor air pollution, cockroach allergen, pets, mold, and smoke from burning wood or grass.
Learn how to control your #asthma: https://www.cdc.gov/asthma/faqs.htm #kNOwAsthma
1 in 13 people live with #asthma in the U.S. Learn more: https://www.cdc.gov/asthma/default.htm #kNOwAsthma
24 million Americans have asthma. Learn more: https://www.cdc.gov/asthma/default.htm #kNOwAsthma
#Asthma is expensive. It costs the nation $56 billion per year. Learn more:
#Asthma causes more than 10 million missed school days each year. https://www.cdc.gov/asthma/default.htm #kNOwAsthma
#Asthma is the leading cause of missed school days related to chronic illness.
Have #asthma? Plan outdoor activities for when air pollution levels will be low.
People who have #asthma are at higher risk from wildfire smoke. Be ready!
Have #asthma? Work with your doctor or other medical professional to create an asthma action plan.
Control your #asthma. Know the warning signs and avoid things that trigger an attack.
|
When stars collapse, they can create black holes, which are everywhere throughout the universe and therefore important to be studied. Black holes are mysterious objects with an outer edge called an event horizon, which traps everything including light. Einstein’s theory of general relativity predicted that once an object falls inside an event horizon, it ends up at the center of the black hole called a singularity where it is completely crushed. At this point of singularity, gravitational attraction is infinite and all known laws of physics break down including Einstein’s theory. Theoretical physicists have been questioning if singularities really exist through complex mathematical equations over the past several decades with little success until now. LSU Department of Physics & Astronomy Associate Professor Parampreet Singh and collaborators LSU Postdoctoral Researcher Javier Olmedo and Abhay Ashtekar, the Eberly Professor of Physics at Penn State developed new mathematical equations that go beyond Einstein’s theory of general relativity overcoming its key limitation—the central singularity of black holes. This research was published recently in Physical Review Letters and Physical Review D and was highlighted by the editors of the American Physical Society.
Theoretical physicists developed a theory called loop quantum gravity in the 1990s that marries the laws of microscopic physics, or quantum mechanics, with gravity, which explains the dynamics of space and time. Ashtekar, Olmedo and Singh’s new equations describe black holes in loop quantum gravity and show that the black hole singularity does not exist.
“In Einstein’s theory, space-time is a fabric that can be divided as small as we want. This is essentially the cause of the singularity where the gravitational field becomes infinite. In loop quantum gravity, the fabric of space-time has a tile-like structure, which cannot be divided beyond the smallest tile. My colleagues and I have shown that this is the case inside black holes and therefore there is no singularity,” Singh said.
Instead of a singularity, loop quantum gravity predicts a funnel to another branch of space-time.
“These tile-like units of geometry—called ‘quantum excitations’— which resolve the singularity problem are orders of magnitude smaller than we can detect with today’s technology, but we have precise mathematical equations that predict their behavior,” said Ashtekar, who is one of the founding fathers of loop quantum gravity.
“At LSU, we have been developing state-of-the-art computational techniques to extract physical consequences of these physical equations using supercomputers, bringing us closer to reliably testing quantum gravity,” Singh said.
Einstein’s theory fails not only at the centers of black holes but also in explaining how the universe was created from the Big Bang singularity. Therefore, a decade ago, Ashtekar, Singh and collaborators began to extend physics beyond the Big Bang and make new predictions using loop quantum gravity. Using the mathematical equations and computational techniques of loop quantum gravity, they showed that the Big Bang is replaced by a “Big Bounce.” But the problem of overcoming the black hole singularity is exceptionally complex.
“The fate of black holes in a quantum theory of gravity is, in my view, the most important problem in theoretical physics,” said Jorge Pullin, the Horace Hearne professor of theoretical physics at LSU, who was not part of this study.
The research was supported by the U.S. National Science Foundation, the Urania Stott Fund of the Pittsburgh Foundation, the Penn State Eberly College of Science and the Ministry of Economy and Competitiveness, or MINECO, in Spain.
- Quantum Transfiguration of Kruskal Black Holes, Physical Review Letters: https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.121.241301
- Quantum extension of the Kruskal spacetime, Physical Review D: https://journals.aps.org/prd/abstract/10.1103/PhysRevD.98.126003
|
Possessive Pronouns with Examples
What are Possessive Pronouns?
Possessive pronouns indicate ownership (possession) of something and, like other pronouns, function simply as replacements for nouns or noun phrases that would otherwise risk sounding repetitive.
The possessive pronouns in the English language are:
mine, yours, his, hers, ours, and theirs. Possessive pronouns help us show possession in a sentence, and are used quite frequently, as the following examples show:
- My bus is delayed by an hour.
- Your breakfast is on the table.
- Could you please bring her coffee out to her?
- I would have invited them over, but their house is farther away than ours.
Types of Possessive Pronouns:
There are two types of possessive pronouns:
Strong (or absolute) possessive pronouns refer back to a noun or noun phrase already used, replacing it to avoid repetition. These pronouns are mine, yours, his, hers, its, ours, yours, and theirs. For example: "I said that bicycle was mine."
Weak possessive pronouns (or possessive adjectives) function as determiners in front of a noun to describe who something belongs to. These pronouns are my, your, his, her, its, our, your, and their. For example: "I said that's my bicycle."
Possessive Pronouns for Consistency
Possessive pronouns help us to be more concise when explaining an idea. The fewer words you use, the better the listener or reader will follow what you are trying to communicate. Consider the following examples:
- Those are his Legos. They are not your Legos.
- Those are his Legos. They are not yours.
- I forgot my pen for class today. Can I borrow your pen?
- I forgot my pen for class today. Can I borrow yours?
- I like my ice cream with sprinkles. Do you like sprinkles as well?
- I like my ice cream with sprinkles. Do you like them as well?
- We traded our car for a new one. This is our car now.
- We traded our car for a new one. This is ours now.
As you can see, possessive pronouns make each example easier to follow by avoiding repetition.
|
Nearly every cell contains an organism's entire genome. Yet not every gene is expressed all the time. To express only the genes necessary, the cell must regulate gene expression. The packing of chromosomes and the existence of nucleosomes, the structures formed by DNA winding around histone proteins, allow for this regulation.
For a gene to be read by the enzyme RNA polymerase, and thus transcribed into a message that can leave the nucleus, that gene must be accessible to the enzyme. Genes deeply packed into chromosomes are not accessible. The tightly packed form of chromatin is called heterochromatin. To alter the accessibility of genes, cells use a chromatin remodeling complex, a group of proteins bound together that adjusts the binding of nucleosomes to make any region of the DNA more or less accessible. These complexes can bind to the nucleosome, the DNA strand, or both, making them either more or less accessible as needed. The less tightly packed form of chromatin is called euchromatin. By adjusting between heterochromatin and euchromatin, chromatin remodeling complexes can alter gene expression.

Alternatively, the cell can chemically modify the histones to alter gene expression. Two common modifications are methylation (the addition of a methyl group, –CH3) and acetylation (the addition of an acetyl group, –C2H3O). These modifications can change the way histones bind DNA and each other, either condensing or relaxing the chromatin structure. They can also recruit certain other proteins to the site. These proteins may, in turn, condense or relax chromatin, or they may express or silence nearby genes. The mechanisms that modify histones in this manner can be inherited, leading to heritable changes in gene expression that do not rely on changes to the genes themselves. Epigenetics, the study of heritable changes in gene expression that occur without changes to the underlying DNA sequence, further analyzes the mechanisms that modify histones and the resulting effects of these changes.
|
Suchomimus (Crocodile mimic)
Named By : Paul Sereno et al. - 1998
Diet : Piscivore / Carnivore
Size : Estimated 10 meters long
Type of Dinosaur : Large Theropod
Type Species : S. tenerensis (type)
Found in : Niger - Elhraz Formation
When it Lived : Early Cretaceous, 121-112 million years ago
Suchomimus (meaning “crocodile mimic”) is a genus of spinosaurid dinosaur that lived between 125 and 112 million years ago, during the Aptian to early Albian stages of the Early Cretaceous period, in what is now Niger. The genus was identified and described by palaeontologist Paul Sereno and colleagues in 1998, based on a partial skeleton from the Elrhaz Formation. The long, narrow, crocodile-like skull of Suchomimus earned it its generic name, while the specific name of Suchomimus tenerensis refers to the place where its first remains were found, the Ténéré Desert.
Suchomimus was 9.5 to 11 meters (31 to 36 feet) long and weighed an estimated 2.5 to 5.2 tonnes (2.8 to 5.7 short tons), although the holotype specimen may not have been fully grown. The species’ narrow skull was perched on a short neck, and its forelimbs were powerfully built, each bearing a giant claw on the thumb. Along the midline of the animal’s back ran a low dorsal sail, built from the long neural spines of its vertebrae. As with other spinosaurids, its diet probably consisted of fish and small prey animals.
Some palaeontologists consider the animal to be an African species of the European spinosaurid Baryonyx, as B. tenerensis. Suchomimus might also be a junior synonym of the contemporaneous spinosaurid Cristatusaurus lapparenti, although the latter taxon is based on much more fragmentary remains. Suchomimus lived in a fluvial floodplain environment alongside a variety of other dinosaurs, as well as pterosaurs, crocodylomorphs, fish, turtles, and bivalves.
In 1997, American palaeontologist Paul Sereno and his team discovered fossils at Gadoufaoua in Niger comprising around two-thirds of a large theropod skeleton. The first find, a massive thumb claw, was made on December 4, 1997, by David Varricchio. In 1998, Sereno, Allison Beck, Didier Dutheil, Boubacar Gado, Hans Larsson, Gabrielle Lyon, Jonathan Marcot, Oliver Rauhut, Rudyard Sadleir, Christian Sidor, David Varricchio, Gregory Wilson and Jeffrey Wilson named and described the species Suchomimus tenerensis. The generic name Suchomimus (“crocodile mimic”) derives from the Ancient Greek souchos, the Greek name for the Egyptian crocodile god Sobek, and mimos, “mimic”, after the shape of the animal’s head. The specific name tenerensis refers to the Ténéré Desert, where the animal was discovered.
The holotype, MNN GDF500, was discovered in the Tegama Beds of the Elrhaz Formation and consists of a fragmentary skeleton lacking the skull. It includes three neck ribs, parts of 14 dorsal (back) vertebrae, 10 dorsal ribs, gastralia (“belly ribs”), fragments of sacral vertebrae, parts of 12 caudal (tail) vertebrae and their chevrons (bones that form the underside of the tail), a scapula (shoulder blade), a coracoid, an incomplete forelimb, most of the pelvis (hip bones), and parts of the hindlimb. Part of the skeleton was found articulated, while the rest consisted of disarticulated bones. The skeleton’s parts had been exposed on the desert surface and suffered erosion damage. In addition, several specimens were designated as paratypes: MNN GDF501 to 508 comprise a snout, a quadrate from the rear of the skull, three dentaries (tooth-bearing bones of the lower jaw), an axis (second neck vertebra), a rear cervical vertebra, and a rear dorsal vertebra, while MNN GDF510 and 511 are two caudal vertebrae. The entire original collection of fossils is housed in the Palaeontological Collection of the Musée National du Niger. The original description of Suchomimus was preliminary; in 2007 the furcula (wishbone), found on an expedition in 2000, was described in detail.
S. tenerensis may be a junior synonym of another spinosaurid from the Elrhaz Formation, Cristatusaurus lapparenti, which had been named slightly earlier in the same year on the basis of tooth-bearing jaw fragments as well as vertebrae. Those skull components were deemed indistinguishable from Baryonyx walkeri, from the Barremian of England, by British palaeontologists Alan Charig and Angela Milner. In 1998, while describing S. tenerensis, Sereno and his colleagues backed this conclusion, considering Cristatusaurus a dubious name (nomen dubium). In 2002, German palaeontologist Hans-Dieter Sues and colleagues concluded that Suchomimus was the same animal as Cristatusaurus lapparenti and, because Cristatusaurus had been named slightly before Suchomimus, suggested that it be treated as a second species of Baryonyx, Baryonyx tenerensis. In a 2003 study, German palaeontologist Oliver Rauhut agreed with this.
|
What are Hives?
Hives are itchy skin rashes that are usually caused by an allergic reaction. Initially, hives appear as raised, red bumps. Hives can appear anywhere on the body, including the tongue or throat, and they range in size from as small as ¼” to 10”. The duration of an outbreak of hives may be as short as a few hours.
Hives develop when histamines are produced in the body as a reaction. Histamine production causes the tiny blood vessels, known as capillaries, to leak fluid. When the fluid accumulates under the skin, it causes the red bumps that are hives. Hives often itch uncomfortably.
There are several different types of hives, the most common of which include:
Acute urticaria: Caused by ingesting certain foods or medication or through infections. Insect bites may also be a cause.
Chronic urticaria and angioedema: Usually affects internal organs and the exact causes are unknown, except that it is allergy related.
Physical urticaria: Caused by direct physical stimulation of the skin, such as extreme heat or sun exposure, this form of hives develops in about an hour.
Dermatographism: Hives formed after aggressive stroking or scratching of the skin.
What are the symptoms of Hives?
The skin swellings associated with hives are called wheals and are usually round, pink or red bumps. The skin surrounding the wheals may also be red.
Symptoms of chronic urticaria and angioedema include muscle soreness, shortness of breath, vomiting, and diarrhea.
Who gets Hives?
Though allergic symptoms differ with each person, anyone who has allergic tendencies can get hives. There are no tests for hives, only for the allergens that may trigger them.
Hives Treatment Options
The best treatment for hives is to identify the allergic component and eliminate it from the patient’s lifestyle. In the case of medications, however, this may not be feasible. Antihistamines may provide temporary relief and seem to be preventive when taken regularly, not just when an outbreak occurs. For chronic hives, oral corticosteroids may be prescribed.
|
Because the sun doesn’t always shine, solar utilities need a way to store extra charge for a rainy day. The same goes for wind power facilities, since the wind doesn’t always blow. To take full advantage of renewable energy, electrical grids need large batteries that can store the power coming from wind and solar installations until it is needed. Some of the current technologies that are potentially very appealing for the electrical grid are inefficient and short-lived.
University of Utah and University of Michigan chemists, participating in the U.S. Department of Energy’s Joint Center for Energy Storage Research, predict a better future for a type of battery for grid storage called redox flow batteries. Using a predictive model of molecules and their properties, the team has developed a charge-storing molecule around 1,000 times more stable than current compounds. Their results are reported today in the Journal of the American Chemical Society.
“Our first compound had a half-life of about eight to 12 hours,” says U chemist Matthew Sigman, referring to the time period in which half of the compound would decompose. “The compound that we predicted was stable on the order of months.”
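The practical difference between those two half-lives is easier to see with a quick calculation: under simple exponential decay, the fraction of a compound remaining after a given time is 0.5 raised to the power of (elapsed time / half-life). Below is a minimal Python sketch using assumed round-number half-lives (10 hours versus roughly 90 days) rather than the study's exact figures:

```python
def fraction_remaining(elapsed_hours, half_life_hours):
    """Fraction of a compound left after elapsed_hours, assuming
    simple exponential (first-order) decay."""
    return 0.5 ** (elapsed_hours / half_life_hours)

week = 7 * 24  # hours in one week

# Compound with a ~10-hour half-life: essentially gone after a week.
print(fraction_remaining(week, 10))        # ~9e-06
# Compound with a ~90-day half-life: almost entirely intact after a week.
print(fraction_remaining(week, 90 * 24))   # ~0.95
```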
Not your ordinary battery
For a typical residential solar panel customer, electricity must be either used as it’s generated, sold back to the electrical grid, or stored in batteries. Deep-cycle lead batteries or lithium ion batteries are already on the market, but each type presents challenges for use on the grid.
All batteries contain chemicals that store and release electrical charge. However, redox flow batteries aren’t like the batteries in cars or cell phones. Redox flow batteries instead use two tanks to store energy, separated by a central set of inert electrodes. The tanks hold the solutions containing molecules or charged atoms, called anolytes and catholytes, that store and release charge as the solution “flows” past the electrodes, depending on whether electricity is being provided to the battery or extracted from it.
“If you want to increase the capacity, you just put more material in the tanks and it flows through the same cell,” says University of Michigan chemist Melanie Sanford. “If you want to increase the rate of charge or discharge, you increase the number of cells.”
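The design property Sanford describes, with energy capacity set by the tanks and power set by the cell stack, can be sketched with a toy model. The parameter names and numbers below are made up purely for illustration and are not taken from the study:

```python
def energy_capacity_kwh(tank_volume_liters, energy_density_wh_per_liter):
    """Stored energy scales with how much electrolyte the tanks hold."""
    return tank_volume_liters * energy_density_wh_per_liter / 1000.0

def power_kw(num_cells, power_per_cell_kw):
    """Charge/discharge rate scales with the number of cells in the stack."""
    return num_cells * power_per_cell_kw

# Doubling the tanks doubles stored energy without touching the cell stack...
print(energy_capacity_kwh(1000, 25), energy_capacity_kwh(2000, 25))  # 25.0 50.0
# ...while doubling the cells doubles power without touching the tanks.
print(power_kw(20, 0.5), power_kw(40, 0.5))                          # 10.0 20.0
```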
Current redox flow batteries use solutions containing vanadium, a costly material that requires extra safety in handling because of its potential toxicity. Formulating the batteries is a chemical balancing act, since molecules that can store more charge tend to be less stable, losing charge and rapidly decomposing.
Molecular bumper cars
Sanford began collaborating with Sigman and U electrochemist Shelley Minteer through the U.S. Department of Energy’s Joint Center for Energy Storage Research (JCESR), an Energy Innovation Hub dedicated to creating next-generation battery technologies. Sanford’s lab developed and tested potential electrolyte molecules, and sought to use predictive technology to help design better battery compounds. Minteer contributed expertise in electrochemistry and Sigman employed a computational method, which uses the structural features of a molecule to predict its properties. A similar approach is widely used in drug development to predict the properties of candidate drugs.
The team’s work found that a candidate compound decomposed when two molecules interacted with each other. “These molecules can’t decompose if they can’t come together,” Sanford says. “You can tune the molecules to prevent them from coming together.”
Tuning a key parameter of those molecules, a factor describing the height of a molecular component, essentially placed a bumper or deflector shield around the candidate molecule.
The most exciting anolyte reported in the paper is based on the organic molecule pyridinium. It contains no metals and is intended to be dissolved in an organic solvent, further enhancing its stability. Other compounds exhibited longer half-lives, but this anolyte provides the best combination of stability and redox potential, which is directly related to how much energy it can store.
Sharing skills to build batteries
Sigman, Minteer and Sanford are now working to identify a catholyte to pair with this and future molecules. Other engineering milestones lay ahead in the development of a new redox flow battery technology, but determining a framework for improving battery components is a key first step.
“It’s a multipart challenge, but you can’t do anything if you don’t have stable molecules with low redox potentials,” Sanford says. “You need to work from there.”
The team attributes their success thus far to the application of this structure-function relationship toolset, typically used in the pharmaceutical industry, to battery design. “We bring the tools of chemists to a field that was traditionally the purview of engineers,” Sanford says.
Find the full study here.
Funding for the project was provided by the Joint Center for Energy Storage Research, a Department of Energy Innovation Hub supported by DOE’s Office of Science.
The Joint Center for Energy Storage Research (JCESR), a DOE Energy Innovation Hub, is a major partnership that integrates researchers from many disciplines to overcome critical scientific and technical barriers and create new breakthrough energy storage technology. Led by the U.S. Department of Energy’s Argonne National Laboratory, partners include national leaders in science and engineering from academia, the private sector, and national laboratories. Their combined expertise spans the full range of the technology-development pipeline from basic research to prototype development to product engineering to market delivery.
|
The equals sign, =, is a symbol with multiple meanings in BBC BASIC:
- an assignment statement, to set a variable to the value of an expression;
- an equality operator, to test whether two values are equal;
- a statement to return from a function with its value.
Availability: Present in all original versions of BBC BASIC.

Syntax (BASIC I-V):
    <num-var> = <numeric>
    <string-var> = <string>
    <numeric> = <numeric>
    <string> = <string>
    = <numeric> (or = <string>)

Description (BASIC I-V): In the first two forms, assigns the value of the <numeric> or <string> to the <num-var> or <string-var>. In the third and fourth forms, compares the values of the two operands and returns TRUE if they are equal, otherwise FALSE. In the fifth form, evaluates the <numeric> or <string> and exits the current function (see FN), returning that value to the caller.
Assignment statement

= evaluates the expression on the right-hand side to obtain its value, and sets the variable named on the left-hand side to this value. The target may be an ordinary variable (which is created if it does not exist), an array element, an indirection operation (such as $) or a system variable.

If the same variable is named on both sides, its value is preserved if it exists, or it is initialised without error if not. See Forcing a variable to exist.

If one attempts to assign a string value to a numeric variable, or a numeric value to a string variable, a Type mismatch error occurs. If this is intentional then EVAL may be used to convert to the correct type.

In some other BASICs the keyword LET is needed, but it is optional in BBC BASIC. When setting the system variables, LET is in fact forbidden.

Assignment is a statement, not an operator. It is not possible to daisy-chain assignments, for example X = Y = 0 to set X and Y to 0, because X = ... expects a value from Y = 0, and so the latter is evaluated as an expression, not a statement (its = becomes an equality operator, see below).
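For readers more familiar with languages that do allow chained assignment, the contrast can be made concrete. The sketch below is Python, not BBC BASIC, and is only an illustration of the parsing difference described above:

```python
# In Python, "x = y = 0" is a chained assignment: both x and y become 0.
x = y = 0
print(x, y)      # -> 0 0

# In BBC BASIC, "X = Y = 0" does NOT assign 0 to Y.  Only the first "=" is an
# assignment; "Y = 0" is evaluated as an expression, so its "=" acts as the
# equality operator and X receives the result of the comparison.  A rough
# Python equivalent of what BBC BASIC actually does:
y = 5            # Y keeps whatever value it already held (5 here, for example)
x = (y == 0)     # X gets the Boolean result of comparing Y with 0
print(x, y)      # -> False 5
```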
Equality operator

= evaluates the expressions on either side and compares their values. If the values are equal, it returns the Boolean value TRUE, otherwise it returns FALSE. It is usually used as the condition of an IF statement, but it is equally valid to assign the result to a variable.

It is not possible to daisy-chain (in)equality tests (for example IF bool% >= X = Y THEN ...), as the Group 5 operators do not associate. That is, BASIC will not use the result of one test as the operand of another, unless parentheses force it to. Instead, the expression stops after the first test, and the second test's operator and the code following are treated as a statement. If the second operator is = then it becomes a function return statement (see below), otherwise a Syntax error occurs.

This means the following idiom is valid:

DEF FNcrc16(start%,length%) IF length% = 0 = 0 ...

which allows the function to return zero immediately if length% is 0.
Function return statement
= evaluates the expression on the right-hand side to obtain its value and exits the current function (as ENDPROC does for procedures). If the statement is met outside a function definition, a No FN error results.

BASIC then continues evaluating the expression that made the FN call, substituting the call with the obtained value.
|
Uranium is found naturally in nearly all soil, rock, and water. Uranium decays by emitting alpha particles, so high concentrations of uranium can also lead to high concentrations of alpha particles.
When uranium decays, it eventually becomes radium, so high uranium concentrations can ultimately lead to high levels of radium as well.
|
Diabetes is a chronic disease that occurs because the body is unable to use blood sugar, or glucose, properly. The exact cause of this malfunction is unknown, but risk factors include genetics, age, pregnancy (gestational diabetes), poor diet, lack of exercise, obesity, behaviors such as smoking, and environmental factors. Lack of insulin production is the primary cause of type 1 diabetes. It occurs when insulin-producing cells are damaged or destroyed and stop producing insulin, which is needed to move blood sugar into cells throughout the body.
The resulting insulin deficiency leaves too much sugar in the blood and not enough in the cells for energy. Insulin resistance is specific to type 2 diabetes. Pre-diabetes occurs when insulin is produced normally in the pancreas, but the body is still unable to move glucose into the cells for fuel. At first, the pancreas will create more insulin to overcome the body’s resistance, causing blood-sugar levels higher than normal but still not diagnosable as full-fledged diabetes. Eventually, as insulin resistance increases, the cells “wear out” and stop producing insulin, leaving too much glucose in the blood and marking the transition to type 2 diabetes.
According to the American Diabetes Association, it’s estimated that up to 70 percent of people with prediabetes go on to develop type 2 diabetes. Fortunately, progressing from prediabetes to diabetes isn’t inevitable. The third type of diabetes is gestational diabetes, which occurs only during pregnancy and is caused by insulin-blocking hormones.
More than 30.3 million Americans have diabetes, or 9.4 percent of the population; of those, about 7.2 million don’t know they have it. Experts estimate that number will double to 60 million by 2050, with 1.5 million new cases diagnosed every year, which is why it’s important to know the foods to avoid for diabetes. Diabetes is the leading cause of blindness and kidney failure among adults. It causes mild to severe nerve damage that, coupled with diabetes-related circulation problems, can often lead to the loss of a foot or limb.
Diabetes Increases the Risk of Heart and Cardiovascular Disease.
The American Heart Association draws a clear connection between diabetes and heart disease, per these statistics:
At least 68 percent of people age 65 or older with diabetes die from some form of heart disease, and 16 percent die of stroke.
The American Heart Association considers diabetes to be one of the seven major controllable risk factors for cardiovascular disease.
And it’s the seventh leading cause of death in the U.S., directly causing about 79,535 deaths each year and contributing to thousands more, for a total of 252,806 death certificates listing diabetes as an underlying or contributing cause of death, according to the American Diabetes Association (ADA). The good news is that diabetes is largely preventable: 90 percent of cases could be avoided simply by making healthy choices like keeping your weight under control, exercising more, eating a healthy diet, and not smoking.
Genetics Influence. According to the American Diabetes Association, genetics are an uncontrollable risk factor; statistics show that if you have a parent or sibling with diabetes, your odds of developing it yourself increase. Cystic fibrosis and hemochromatosis, both inheritable, can each damage the pancreas, leading to a higher likelihood of developing diabetes, according to a 2016 Journal of Pathology study reviewed by the National Institutes of Health (NIH).
Overweight and Obesity-Related. Being overweight or obese is the biggest risk factor for type 2 diabetes. However, diabetes risk is higher if you tend to carry your weight around your abdomen, called visceral fat, as opposed to your hips and thighs. A lot of belly fat surrounds the abdominal organs and liver and is closely linked to insulin resistance and diabetes.
Excess visceral fat promotes inflammation and insulin resistance, which significantly increase the risk of diabetes, according to research. A 2012 JAMA study (NIH) found that visceral fat and insulin resistance, but not general adiposity, were independently associated with incident prediabetes and type 2 diabetes mellitus in obese adults.
A 2016 Yonsei Medical Journal study reviewed by NIH found that visceral fat mass has stronger associations with diabetes and prediabetes than other obesity indicators among Korean adults. One Diabetes Care study of more than 1,000 people with prediabetes, confirmed by the NIH, found that for every kilogram (about 2.2 pounds) of weight participants lost, their risk of diabetes fell by 16 percent, up to a maximum reduction of 96 percent.
Calories obtained from fructose found in sugary beverages such as soda, energy and sports drinks, coffee, and processed foods like doughnuts, muffins, cereal, candy and granola bars, are more likely to add weight around your abdomen.
Foods To Avoid For All Types Of Diabetes
Poor Diet: High-Sugar Foods and Refined Carbohydrates. Carbohydrates are the foods that have the biggest effect on your blood glucose levels. After you eat carbohydrates, your blood glucose rises almost immediately. Fruit, sweet foods and drinks, starchy foods such as bread, potatoes and rice, and milk and milk products all contain carbohydrates. Carbohydrates are important for health.
But when you eat too many carbohydrates at once, your blood glucose can go too high, as confirmed by this NIH review on carbohydrates. This is even more likely if you don’t have or take enough insulin for that food. Not all carbohydrates can be broken down and absorbed by your body. Foods with more non-digestible carbohydrates (fiber), less processed foods, and nutrient-dense foods are healthier and less likely to push your blood sugar out of the safe range, according to this Harvard T.H. Chan School of Public Health study.
These non-digestible carbs include foods such as beans, oatmeal, brown rice, non-starchy vegetables, and 100 percent whole grains. Simple or processed carbohydrates raise blood glucose more than others (they have a higher glycemic index). These include potatoes, sweets, white bread, and most processed foods. Cutting back on sugary foods and refined carbohydrates can mean a slimmer waistline as well as a lower risk of diabetes, per this Harvard T.H. Chan School of Public Health study.
Many studies have shown a link between the frequent consumption of sugar or refined carbs and the risk of diabetes. What’s more, replacing them with foods that have less of an effect on blood sugar may help reduce your risk. Here are a few examples on the research.
One 2014 BMC Public Health study reviewed by the NIH found independent associations between diabetes mellitus prevalence rates and per capita sugar consumption, both worldwide and with special regard to the Asian region.
An American Journal of Clinical Nutrition study with NIH review showed that increasing intakes of refined carbohydrates (corn syrup), together with decreasing intakes of fiber, paralleled the upward trend in the prevalence of type 2 diabetes observed in the United States during the 20th century.
A 2013 PLOS ONE study reviewed by NIH determined that declines in sugar exposure correlated with significant subsequent declines in diabetes rates, independently of other socioeconomic, dietary and obesity prevalence changes. According to one 2015 Journal of Nutrition study confirmed by the NIH, good dietary substitutes for refined carbs are diets rich in low-GI carbohydrates, cereal fiber, resistant starch, fat from vegetable sources or unsaturated fat, and lean sources of protein, whereas refined sugars and grains (high-GI carbohydrates) should be avoided in order to lower the risk of type 2 diabetes.
And finally, a detailed analysis of 37 studies published in the Journal of Clinical Nutrition reviewed by NIH found that people with the highest intakes of fast-digesting carbs were 40 percent more likely to develop diabetes than those with the lowest intakes. According to the NIH, losing just 5 percent to 7 percent of your total weight can help you lower your blood sugar, blood pressure, and cholesterol levels. Losing weight and eating healthier “gut-feeling-foods” can also have a profound effect on your mood, energy, and sense of well being according to a Harvard Medical School review.
Processed carbs like white rice, white pasta, and white bread are missing the fiber from the original grain, so they raise blood glucose higher and faster than their intact, unprocessed counterparts. In a six-year study of 65,000 women, those with diets high in refined carbohydrates from white bread, white rice, and pasta were 2.5 times as likely to be diagnosed with type 2 diabetes compared to those who ate lower-glycemic-load foods, such as intact whole grains and whole wheat bread.
An analysis of four prospective studies on white rice consumption and diabetes found that each daily serving of white rice increased the risk of diabetes by 11 percent.
Added or Hidden Sugars. We look at these separately because hidden sugars are found in foods you may be eating unintentionally. Since diabetes is characterized by abnormally elevated blood glucose levels, it is of course wise to avoid foods that cause dangerously high spikes in blood glucose, primarily refined or processed foods such as sugar-sweetened drinks, which are devoid of the fiber that slows the absorption of glucose into the blood. Fruit juices and sugary processed foods and desserts have similar effects. These foods promote hyperglycemia and insulin resistance, and promote the formation of advanced glycation end products (AGEs) in the body.
The average American eats 22 teaspoons of added sugar per day, according to the American Heart Association. You’re likely not adding that much sugar to food yourself, so could you really be eating that much? Well, yes, says Erin Gager, R.D., L.D.N., a dietitian at The Johns Hopkins Hospital, because sugar is in a lot more foods than you may think.
To identify hidden sugars look for ingredients on food labels with words that end in “ose”, like fructose, sucrose, maltose, dextrose. Other examples of added sugar include fruit nectar, concentrates of juices, honey, agave and molasses.
High Sodium Foods. Having diabetes doesn’t mean you have to cut salt and sodium from your diet. However, people with diabetes should cut back on their sodium intake since they are more likely to have high blood pressure, a leading cause of heart disease, than people without diabetes. Aim for less than 2,300 mg of sodium a day. Your doctor may suggest you aim for even less if you have high blood pressure. It is estimated that about 75 percent or more of the sodium Americans eat comes from processed, packaged foods, which is why you should avoid these types of foods.
Many companies are slowly trying to lower the amount of sodium in their products, but there is still much work to be done. A 2017 Diabetologia study reviewed by ScienceDirect found that sodium intake may be linked to an increased risk, as much as 43 percent, of developing both type 2 diabetes and latent autoimmune diabetes in adults. A sodium-restricted diet has long been a first line of intervention for people with hypertension and is particularly important in those with type 2 diabetes, according to a 2014 Clinical Diabetes study reviewed by NIH. In 2010, an American Heart Association (AHA) study recommended that those at risk of heart disease, including all people with type 2 diabetes, further limit their dietary sodium to 1,500 mg/day.
Fried Foods. Overdoing it on greasy, fried foods can lead to weight gain and wreak havoc on your blood sugar. French fries, potato chips, and doughnuts are particularly bad choices for diabetics because they’re made with carb-heavy, starchy ingredients, which can cause blood glucose levels to shoot up. They are also most likely laden with unhealthy trans fats, which is doubly bad, because they’ve been deep-fried in hydrogenated oils.
A 2014 Harvard T.H. Chan School of Public Health review warned that eating fried foods is tied to an increased risk of diabetes and heart disease. In 2014, an American Journal of Clinical Nutrition study reviewed by the NIH found that frequent fried-food consumption was significantly associated with risk of incident type 2 diabetes and moderately with incident coronary artery disease.
Saturated Fats. Among the foods to avoid for diabetes are high-fat dairy products and animal proteins such as butter, fatty beef, and processed meats like hot dogs, sausage and bacon. Also limit coconut and palm kernel oils. Because saturated fat raises blood cholesterol levels, and people with diabetes are already at high risk for heart disease, limiting your saturated fat can help lower your risk of having a heart attack or stroke, according to the American Diabetes Association. Unhealthy fats cause a buildup of excess fat in the cells of the body, contributing to insulin resistance.
Fat build-up inside muscle, liver, and pancreas cells creates toxic fatty breakdown products and free radicals that ‘block’ the insulin-signaling process, close the ‘glucose gate,’ and make blood sugar levels rise. Saturated fats have been associated with heart disease and diabetes, per an NIH study.
Trans Fats. Avoid trans fats found in processed snacks, baked goods, shortening and stick margarine. Trans fat is a strong dietary risk factor for cardiovascular disease. Studies, like this NIH study, have shown that even small amounts of trans fats increase risk. Trans fats also reduce insulin sensitivity, leading to higher insulin and glucose levels, and diabetes. Trans fats were banned in the U.S. by the FDA in 2015.
Cholesterol. Cholesterol is important to overall health, but when levels are too high, cholesterol can be harmful by contributing to narrowed or blocked arteries. Unfortunately, people with diabetes are more prone to unhealthy, high cholesterol levels, which contribute to cardiovascular disease, according to the American Heart Association (AHA). Cholesterol sources include high-fat dairy products and high-fat animal proteins, egg yolks, liver, and other organ meats. Examples of high-fat animal proteins, eggs, and dairy include grain-fed or corn-fed meat, poultry and dairy products, which we’ll cover shortly. Diabetes is directly worsened by a high-fat, high-cholesterol diet. Aim for no more than 200 milligrams (mg) of cholesterol a day.
Caged Eggs. Truth be told, the research has been totally confusing about eggs and diabetes. While a few studies have suggested that dietary cholesterol might increase the risk for diabetes, others show that eating eggs actually improves sensitivity to insulin, which protects against diabetes.
It doesn’t have to be confusing. Improving insulin sensitivity is more likely the truer picture, unless you’re eating “caged” or housed white eggs, which are depleted of healthy polyunsaturated fats like omega-3 fatty acids and higher in omega-6 fatty acids.
Caged eggs, containing more omega-6s than omega-3s, will increase inflammation, weight gain, and cholesterol. In this study, the NIH said that the increase in the ratio of omega-6 fatty acids to omega-3 fatty acids in the last few decades is directly related to the increase in inflammatory diseases like non-alcoholic fatty liver disease, cardiovascular disease, obesity, inflammatory bowel disease, rheumatoid arthritis, and Alzheimer’s disease.
Processed Meats. Processed red meat is especially bad for your health. It is believed, and backed up by research, that the preservatives, additives and chemicals such as nitrites, nitrates, and sodium that are added to the meat during manufacturing can harm your pancreas, the organ that produces insulin, and increase insulin resistance. One 2015 Journal of Alzheimer’s Disease study confirmed the damaging effects of chemical additives in processed foods.
In one study, researchers from the Harvard School of Public Health (HSPH) have found that eating processed meat, was associated with a 42 percent higher risk of heart disease and a 19 percent higher risk of type 2 diabetes. Processed meat was defined as any meat preserved by smoking, curing or salting, or with the addition of chemical preservatives; examples include bacon, salami, sausages, hot dogs or processed deli or luncheon meats. The American Diabetes Association in this study said the following:
Our data indicate that higher consumption of total red meat, especially various processed meats, may increase risk of developing type 2 diabetes in women.
A 2014 Lancet (NIH) study found that cutting back on packaged and processed foods that are high in vegetable oils, refined grains and additives may help reduce the risk of diabetes. A 2016 Cornell University study found that a “whole food natural” approach of high consumption of coffee, whole grains, fruits and vegetables, and nuts is independently associated with a reduced risk of type 2 diabetes in high-risk, glucose-intolerant individuals. Another 2016 Public Health Nutrition study reviewed by NIH found that poor-quality diets high in processed foods increased the risk of diabetes by 30 percent. However, including nutritious whole foods helped reduce this risk.
Corn-Fed Feedlot Meats. Industrial factory-farmed livestock (feedlots or housed) has been forced, trained, genetically engineered, or whatever euphemism the industry chooses to use, to feast on corn, soy and other grains, in addition to a repulsive mixture of other chemicals, hormones, antibiotics and liquid-solutions.
These food animals include cattle, sheep, pigs, chickens, turkeys, and ducks. Growth promoters, including hormonal substances and antibiotics, are used legally and illegally in food-producing animals to promote the growth of livestock, according to a Toxicological Research study reviewed by the NIH, and are critical risk factors affecting human health.
Nutritionally deprived of healthy, polyunsaturated omega 3 fatty acids, which could only be gotten from eating natural grass, these food animals are also forced to live in crowded feedlots, in unsanitary conditions, stressing the animals out, causing them to be unhealthy and less resistant to infections and the spread of disease.
Although feedlot animal meats do contain some omega-6s, there is no healthy balance of omega-3s to omega-6s, because the omega-3s are virtually non-existent, making the meats less lean and promoting weight gain (belly fat), inflammation, and insulin resistance. And because of these unsanitary conditions, the animals are routinely given antibiotics, which encourages the development of unhealthy antibiotic-resistant bacteria in the meats as well. A Harvard T.H. Chan study recommended animal fat from naturally grass-fed meats, not corn- or grain-fed ones, as a great source of healthy omega-3 fatty acids.
According to this Mayo Clinic study on Grass-Fed beef, you should NOT be eating corn or grain-fed meats, but should be eating grass-fed meats. Enough said.
Housed Dairy Cattle Products. The same applies to dairy cows. Most milk sold in America today comes from cows that have been fed corn or grains. It cheaply fattens the animals up, but because cows’ multi-compartmented stomachs can’t properly digest corn, it also makes them more susceptible to E. coli, a pathogenic bacteria that can spread to humans.
Milk from conventionally raised dairy cows is higher in inflammation-causing omega-6 fatty acids and lower in healthy, inflammation-fighting omega-3 fatty acids. Milk from housed dairy cattle is also lower in beta carotene and other essential vitamins and minerals.
In this study, the Diabetes Council had this to say about the best form of milk to drink for diabetes:
choose full-fat milks that come from grass-fed animals (so you don’t get a not-so-nice dose of antibiotics and hormones in your milk) and that are preferably raw (unpasteurized). The next best choice would be full-fat, grass-fed milk.
It’s simple, if it doesn’t say organic grass-fed on the label, don’t buy it.
Farm-Raised Fish and Seafood. In people with diabetes, inflammation-reducing omega-3s can lower the risk of heart disease, raise HDL (“good”) cholesterol, and improve triglyceride levels; they may also help prevent diabetes in the first place.
Research has proven time and again that omega-3s reduce inflammation; they may also play a role in lowering the risk of arthritis, cancer, and other chronic diseases. Farmed fish are raised in controlled conditions, in pens of water with other fish, and fed pellets of food; they are thus largely devoid of healthy polyunsaturated omega-3s and higher in less healthy, inflammation-causing omega-6s. The result of eating omega-6s is weight gain, usually around the stomach area, high inflammation, and insulin resistance. Research done by the Cleveland Clinic confirms that eating farm-raised fish is not as safe, or healthy, as eating natural, wild-caught fish.
The American diet already contains high amounts of omega-6s from many food sources, so there’s really no advantage to eating farm-raised fish. What the American diet is missing is healthy omega-3s, which farm-raised fish are deficient in.
Farm-raised fish may have as much as 20 percent less protein than wild-caught fish. A small fillet of wild salmon also has 131 fewer calories and about half the fat of the same amount of farmed salmon, which carries 20.5 percent more saturated fat.
Studies have also shown that farmed fish have 10 times the amount of PCBs (polychlorinated biphenyls, a toxic chemical) and dioxin that wild fish do, plus other contaminants, according to research such as an EWG study, a Mayo Clinic review study, and an NIH review.
Then there is the problem of the crowded conditions in which farm-raised fish are kept: they are routinely treated with antibiotics to help prevent infection, and, again as with feedlot meats, this promotes the development of dangerous antibiotic-resistant bacteria in humans, as confirmed by a New York Medical College study whose findings were reported in the July issue of ‘Environmental Microbiology’.
Lack of exercise. Exercise increases the insulin sensitivity of your cells. So when you exercise, less insulin is required to keep your blood sugar levels under control. One 2014 Journal of Endocrinology Metabolism study in people with prediabetes found that moderate-intensity exercise increased insulin sensitivity by 51 percent and high-intensity exercise increased it by 85 percent.
However, this effect only occurred on workout days. Just 30 minutes of moderate exercise a day is also beneficial and is recommended by the Department of Health and Human Services; even two 15-minute periods of physical activity will work. Many types of physical activity have been shown to reduce insulin resistance and blood sugar in overweight, obese and prediabetic adults.
These include aerobic exercise, high-intensity interval training and strength training. A 2014 American Geriatrics Society study confirmed by the NIH found that obesity and insulin resistance decrease with exercise and weight loss, suggesting that exercise training is a necessary component of lifestyle modification in obese postmenopausal women.
According to a 2012 Journal of Gerontology study, reviewed by NIH, a 12-week resistance exercise program improves muscle strength and muscle function to a similar extent in healthy, prediabetic, and Type 2 diabetes elderly people.
Two weeks of HIIT (high-intensity interval training) and GFatmax training are effective for improving aerobic fitness during exercise in these classes of obesity, and the decreased levels of resting fatty acids seen only in the GFatmax group may be involved in that group’s decreased insulin resistance, found an obesity study out of Switzerland confirmed by the NIH. A couple of choices for a healthy diet are the Mediterranean diet and a heart-healthy diet. Even if you’ve already developed diabetes, it’s not too late to make a positive change. The bottom line is that you have more control over your health than you may think.
Types Of Nutrient-Dense Foods to Eat For Diabetes
Taking steps to prevent or control diabetes doesn’t mean living in deprivation; it means eating a tasty, balanced diet that will also boost your energy and improve your mood. You don’t have to give up sweets entirely or resign yourself to a lifetime of bland food. With the healthy recommendations we’ll cover, you can still take pleasure from your meals without feeling hungry or deprived.
Whether you’re trying to prevent or control diabetes, your nutritional needs are virtually the same as everyone else, so no special foods are necessary. But you do need to pay attention to some of your food choices, most notably the types of carbohydrates you eat.
Human cells use glucose converted from food, particularly carbohydrates, for energy. The glucose ends up in the bloodstream, and there are mechanisms that keep it in balance, preventing glucose levels from getting too low or spiking too high. Any rise in blood sugar signals the pancreas to make and release the hormone insulin.
Insulin serves the purpose of instructing cells which require glucose, to absorb the glucose. Diabetes occurs when the body can’t make enough insulin or can’t properly use the insulin it makes.
Having type 2 diabetes means that your body doesn’t control blood glucose well. When blood glucose stays too high for too long, serious health problems can develop. Type 1 diabetes, affecting 5 percent to 10 percent of those diagnosed with diabetes, occurs when the body’s immune system attacks the insulin-producing cells in the pancreas, stopping the production of insulin.
Type 2 diabetes is more stealthy, coming on gradually, sometimes taking years to develop into diagnosable diabetes. It starts when cells resist absorbing glucose, which then remains in the bloodstream. The body continues to produce insulin, almost uncontrollably, trying to force cells to absorb glucose, and eventually the insulin-producing cells become exhausted and fail; that is when full-blown type 2 diabetes occurs.
When we consistently take in large amounts of calories, our body has mechanisms to process all of this material, which works well in the short term but over the long haul can reduce insulin sensitivity and eventually wear out our insulin-making cells,
says Scott Keatley, a registered dietitian with Keatley Medical Nutrition Therapy in New York.
By having a diet in balance, we can avoid many of these long-term issues.
Pre-diabetes occurs when blood-sugar levels get high but the condition is still correctable and reversible by changing your diet, getting exercise, and losing weight. Typically, blood-glucose levels return to normal if these changes are made.
In one study, 101 men with pre-diabetes were given a self-administered diabetes prevention program over a 6-month period, reducing their portion sizes of potato and meat and improving their variety of healthy foods. Losing an average of 12 pounds and registering better blood sugar levels, they reduced the proportion of energy coming from junk food by 7.6 percent more than the group who didn’t change their diet, and achieved a four-point increase in their scores.
In women, behavioral and lifestyle changes, correcting issues such as excess weight, lack of exercise, a less-than-healthy diet, smoking, and abstaining from alcohol, are at least 90 percent effective in preventing type 2 diabetes, according to data from Nurses’ Health Study. Similar results occurred in men, as well. A follow-up NIH Health Professional study showed the typical American diet, combined with lack of physical activity and excess weight, dramatically increases the risk of type 2 diabetes in men.
Information from several clinical trials strongly supports the idea that type 2 diabetes is preventable in both men and women. In the group assigned to weight loss and exercise, there were 58 percent fewer cases of diabetes after almost three years than in the group assigned to usual care.
Even after the program to promote lifestyle changes ended, the benefits continued,
and the risk of diabetes was reduced, albeit to a lesser degree, over a 10-year period. Quoting the study:
In conclusion, favouring plant and egg products appeared to be beneficial in preventing T2D.
These types of results were also confirmed in Finnish and Chinese studies. One study published in the Chinese Medical Journal (NIH) found that the conversion rate from prediabetes to diabetes can be markedly decreased by effective interventions such as weight loss and exercise.
When it comes to food choices, the trick is balancing the right protein, healthy carbohydrates and unsaturated fats, dietitians say. This combination helps you stay full longer without spiking your blood sugar too high. Balance also means watching both the quality and quantity of what you eat.
The American Diabetes Association advocates a plate that is half-filled with non-starchy organic vegetables and fruit, a quarter-filled with lean animal and vegetable protein, and a quarter-filled with healthy complex carbohydrates or whole grains. And the more bright colors the fruits and veggies have, the better!
Briefly, the types of healthy nutrient-dense foods (A) you should be eating, are the exact opposite of the foods we’ve listed in the Foods To Avoid For Diabetes category above. These nutrient-dense foods containing omega 3s, essential amino acids, anti-inflammatories, vitamins and minerals, and antioxidants, include:
Certified Organic Grass-Fed Finished Meats, (A) such as beef, bison, and wild game.
Certified Organic Free-Range Finished Meats, (A) such as lamb and pork.
Wild-Caught Fish and Seafood, (A) such as salmon, halibut, tuna, and sturgeon.
Fresh Certified Organic Raw Nuts and Edible Flower Seeds (A)
Polyunsaturated Vegetable Oil, such as extra virgin olive oil, or avocado oil (A)
No-Sugar Drinks. Filtered water and healthy antioxidant drinks (A), such as infused water (made with fruit), 100 percent juice with no added sugar, low-fat 1 percent milk, coffee, and green, black or herbal tea, are recommended by the American Diabetes Association. (A) Natural fruit smoothies such as a blueberry maca smoothie are another great choice, very simple and easy to make and quite a “quick pick-me-upper.” (A)
Water is by far the healthiest natural drink you can have! LADA (latent autoimmune diabetes of adults) is a form of type 1 diabetes that occurs in people over 18 years of age. Unlike the acute symptoms seen with type 1 diabetes in childhood, LADA develops slowly, requiring more treatment as the disease progresses, according to a Diabetologia study reviewed by the NIH, and drinking filtered water is the surest way to avoid being tempted into drinking sweet, fruity drinks.
A large 2016 observational Journal of Endocrinology study looked at the diabetes risk of 2,800 people and found that those who consumed more than two servings of sugar-sweetened beverages per day had a 99 percent increased risk of developing LADA and a 20 percent increased risk of developing type 2 diabetes. By contrast, consuming water may provide benefits. A 2016 British Journal of Nutrition study confirmed by the NIH found that increased water consumption may lead to better blood sugar control and insulin response.
One 24-week 2015 American Journal of Clinical Nutrition (NIH) study showed that overweight adults who replaced diet sodas with water while following a weight loss program experienced a decrease in insulin resistance and lower fasting blood sugar and insulin levels. A Diabetologia study (NIH) reported that drinking coffee on a daily basis reduced the risk of type 2 diabetes by 8–54 percent, with the greatest effect generally seen in people with the highest consumption.
One 2015 European Journal of Clinical Nutrition (NIH) study highlighted the significance of long-term habitual coffee drinking in preventing the onset of diabetes. The anti-inflammatory effect of several coffee components may be responsible for this protection.
Another review of several studies, such as one in Annals of Internal Medicine that included caffeinated tea and coffee, found similar results, with the largest risk reduction in Japanese women and overweight men.
Complex Carbohydrates (Whole Grains) and High-Fiber Foods. Natural whole grains and sprouted grains are nutrient-dense foods high in micronutrients and plant polyphenols; examples include buckwheat, maize, whole wheat, millet, oats, sorghum, rye, quinoa, and Ezekiel bread (A). Ezekiel bread is made from a variety of sprouted grains.
Studies have shown that sprouting also partially breaks down the starch, since the seed uses the energy in the starch to fuel the sprouting process, according to an NIH review. For this reason, sprouted grains have slightly fewer carbohydrates, which is great for diabetes. Numerous studies in obese, elderly and prediabetic individuals have shown that fiber-rich, minimally processed grain helps keep blood sugar and insulin levels low. Here are some examples:
A European Journal of Nutrition study reviewed by NIH found that insulin and triglyceride concentrations are influenced by dietary fiber-rich meals from oats, rye bran, and sugar beet fiber. According to another Journal of Nutrition study (NIH), in the digestive tract soluble fiber and water form a gel that slows down the rate at which food is absorbed, leading to a more gradual rise in blood sugar levels. Insoluble fiber has also been linked to reductions in blood sugar levels and a decreased risk of diabetes, although exactly how it works is not clear, according to a Diabetes Care study confirmed by NIH.
Dietary fiber consumption, according to a 2008 Journal of Nutrition (NIH) study, contributes in a positive way to a number of unexpected metabolic effects independent from changes in body weight, which include improvement of insulin sensitivity, modulation of the secretion of certain gut hormones, and effects on various metabolic and inflammatory markers that are associated with the metabolic syndrome.
Antioxidant-Rich Herbs and Spices. A multitude of healthy antioxidant-rich herbs and spices can help, such as basil, turmeric, bay leaves, hot peppers, and oregano. (A)
Natural Fermented Foods. Healthy fermented foods such as kefir, sauerkraut, pickles, yogurt, kimchi, and goat cheese. (A)
All-Natural Peruvian Maca Supplement. Read also What Is In Maca Root? Use these links to learn all about this incredible all-natural, organic, whole-food, nutrient-dense supplement and its ingredients. (A)
All the food items recommended above are the main ingredients of the Mediterranean Diet.
According to the NIH, the Mediterranean Diet serves as a model for healthy functional foods capable of managing diabetes; it may be associated with enhanced anti-oxidant, anti-inflammatory, insulin-sensitivity, and anti-cholesterol functions, which are considered integral to preventing and managing T2DM.
(A) Use these links for more in-depth information and documented studies on the benefits, and to buy any of these incredible nutrient-dense foods that will be beneficial in preventing and treating various forms of diabetes. We hope the information on Foods To Avoid For Diabetes was helpful. Should you have any questions or comments, please leave them below.
|
As the hysteria behind coronavirus spreads beyond Wuhan to Chicago and other parts of the world, many people remain relatively uninformed about the true nature of the disease due to the lack of experts on the topic.
The term “coronavirus” actually refers to a family of ribonucleic acid (RNA) viruses, with its main characteristic being that these viruses are in animals and can spread to humans, and humans can then spread them to each other. Coronaviruses are pervasive in that they have been around for a long time, first being identified in the 1960s, and cause about 20% to 25% of the common cold, which often has symptoms including a runny nose, cough, sneezing and fatigue. According to analysis of the genetic tree, coronaviruses originated in bats, although it is unknown whether there was an intermediary animal host.
Some notable coronaviruses include SARS and MERS, both of which have also caused outbreaks in the past.
Compared with earlier coronaviruses such as SARS and MERS, COVID-19 has a lower mortality rate, and nobody in the U.S. has died from it. There are currently only 15 cases in the U.S.
“The high end is 2% and the low end is less than 1%,” said Robert Murphy, Northwestern professor of medicine and biomedical engineering. “However, the Spanish flu had a mortality rate, less than 1%, 20 million people died because hundreds of millions of people got it.”
Murphy said compared to previous outbreaks of older coronaviruses, COVID-19 has spread to many more people but causes fewer deaths percentage-wise.
The novel coronavirus is capable of taking the life of a perfectly healthy individual, yet the disease sometimes presents with no obvious symptoms. At least one report has shown people contracting the disease despite having no symptoms, though COVID-19 typically causes cold-like symptoms.
COVID-19, like other members of its family, is spread through respiratory droplets, which is why a mask may help reduce the spreading of the virus. There is a debate, however, over whether wearing a mask actually protects people from getting sick.
“The virus itself is very small,” Murphy said. “And it’s smaller than the filter ability of the mask, and most of those masks you see that you can kind of see people’s cheeks on the side - those do almost no good.”
Currently, scientists are developing a vaccine for COVID-19, but the work is in its early stages. Murphy said to be careful with these drugs because they can have “toxicities and complications of their own.” Since researchers already know a good deal about the virus, there is a reasonable chance of developing an effective drug for the disease.
There are still not many informed experts on the disease, since research funding tends to follow a disease's death rate, but the world is taking public health measures, such as implementing travel bans and isolating sick patients, to help prevent the coronavirus from spreading further.
For more information about COVID-19, check out resources from Northwestern University Health Service, World Health Organization, the Centers for Disease Control and Prevention and a podcast from Northwestern University Feinberg School of Medicine.
Editor's Note: This article was updated on Feb. 23. A previous version of the article used the word "suck" instead of "sick."
Thumbnail licensed under Creative Commons Attribution-Share Alike 4.0 International License.
|
CNC machines are hugely versatile pieces of equipment, in large part thanks to the range of cutting tools they can accommodate. From end mills to thread mills, there’s a tool for every operation, allowing a CNC machine to perform a variety of cuts and incisions in a workpiece.
Getting to know these cutting tools is a great way to understand CNC machining in general. And a better understanding of machining will help you design parts that are better suited to the manufacturing process.
This article looks at some of the most widely used CNC machining cutting tools, though there are many more out there besides those discussed.
Cutting tool basics
A cutting tool is a device used to remove material from a solid block of material. It is fitted to the spindle of a CNC machine, which follows computer instructions to guide the cutting tool where it needs to go.
Cutting tools remove material from the workpiece by a process of shear deformation. That is, the sharp tool rotates at high speed and cuts from the workpiece many tiny chips, which are then ejected away from the workpiece. Some tools make contact with the workpiece at one point only, while others, such as end mills, hit the material at multiple points.
Most CNC machine cutting tools feature multiple flutes, which are helical grooves that run down the exterior of the tool. The flutes can be thought of as the troughs of the cutting tool, while the teeth, the sharp ridges between each flute, are its peaks. Chips cut from the workpiece travel down the flutes as they are ejected.
The ideal number of flutes on a cutting tool depends on the workpiece material. A tool with fewer flutes is preferable for soft materials, since the increased flute width means bigger chips can be ejected. A higher flute count can increase speed and is suitable for harder materials, but can lead to chip jamming, since each flute is narrower.
The type of cutting tool will affect the size of chip removed from the workpiece, and so will the spindle speed and feed rate.
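To make the relationship concrete, the standard feed-per-tooth (chip load) formula divides the feed rate by the spindle speed times the flute count. The short Python sketch below is purely illustrative and not from the original article; the example numbers are hypothetical.

def chip_load(feed_rate_mm_per_min, spindle_rpm, num_flutes):
    # Feed per tooth (mm): how much material each cutting edge removes per revolution
    return feed_rate_mm_per_min / (spindle_rpm * num_flutes)

# Hypothetical example: a 4-flute end mill fed at 800 mm/min while spinning at 8,000 RPM
print(chip_load(800, 8000, 4))  # -> 0.025 mm per tooth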
Cutting tool materials
In order to cut through the solid workpiece, cutting tools must be made from a harder material than the workpiece material. And since CNC machining is regularly used to create parts from very hard materials, this limits the number of available cutting tool materials.
Common cutting tool materials include:
Carbon steel
Carbon steel is an affordable steel alloy containing 0.6-1.5% carbon, as well as silicon and manganese.
High-speed steel (HSS)
The more expensive HSS is harder and tougher than carbon steel thanks to its blend of chromium, tungsten and molybdenum.
Carbide
Usually sintered with another metal like titanium, carbide tools are wear-resistant and heat-resistant, providing an excellent surface finish.
Ceramic
Used to cut superalloys, cast iron and other strong materials, ceramic tools are resistant to corrosion and heat.
Cutting tool coatings
The function of a cutting tool depends on its shape and material, but can also be adjusted with a coating over the main material.
These coatings can make tools harder, increase their lifespan or enable them to cut at faster speeds without compromising the part.
Common cutting tool coatings include:
Titanium Nitride (TiN)
TiN is a general-purpose coating with a high oxidation temperature that increases the hardness of a cutting tool.
Titanium Carbo-Nitride (TiCN)
TiCN adds surface lubricity and hardness to a cutting tool.
Super-life Titanium Nitride (Al-TiN)
Al-TiN adds heat resistance to carbide cutting tools, especially when minimal coolant is employed.
Diamond
Diamond provides a high-performance coating for cutting abrasive materials.
Chromium Nitride (CrN)
CrN adds corrosion resistance and hardness to cutting tools.
1. End mill
The end mill is the most widely used tool for vertical CNC machining. With cutting teeth at one end and on the sides, end mills can remove large amounts of material in a short space of time.
End mills come in many forms. Some have just a single flute, while some may have up to eight or even more. (Beyond four flutes, however, chip removal may become an issue.)
Types of end mill include:
- Flat: General purpose flat-faced tool suitable for 2D features
- Ball nose: Tool with ball-shaped end that is suitable for 3D contours and curves
- Bull nose: Tool with flat bottom and rounded corner suitable for fillets and roughing
2. Roughing end mill
A roughing end mill is a kind of end mill used for removing larger amounts of material with less precision than a standard end mill.
The tool has serrated teeth that remove large sections of material but leave a rough finish on the part. It produces small chips which are easy to clear.
3. Face mill
Face mills consist of a solid body with interchangeable cutter inserts, usually made from carbide. They are used to make flat sections on the workpiece, often before another kind of cutter is used to make detailed features.
Since the cutting edges of face mills are found on its sides, cutting must be done horizontally.
However, face mills can be more cost-effective than other cutting tools, since variations in cutting profile can be achieved by replacing the small cutter inserts rather than the entire tool.
4. Fly cutter
Fly cutters comprise one or two tool bits contained within a solid body. The tool bits of a fly cutter make broad, shallow cuts, producing a smooth surface finish.
It is more common to find fly cutters with one tool bit, but those with two tool bits — sometimes called “fly bars” — provide a larger swing.
Less expensive than face mills, fly cutters can nonetheless be used for similar purposes.
5. Thread mill
Many engineers prefer to make threads using taps, but threads can also be made with a CNC machine fitted with a thread mill.
Thread mills can cut internal or external threads, and may be better than taps for penetrating very hard metals or asymmetrical parts.
6. Drill bit
CNC machines can be fitted with a variety of drill bits for various cutting operations. Drill bits have one or more flutes and a conical cutting point.
Drill bits used in CNC machining include:
- Twist drill: Used to make holes in the workpiece
- Center drill: Used to precisely locate a hole before drilling
- Ejector drill: Used for deep hole drilling
7. Reamer
Reamers are used to widen existing holes in the workpiece, providing an exact hole diameter and an excellent surface finish.
Reamers can create holes with much tighter tolerances than other cutting tools.
8. Hollow mill
Hollow mills are pipe-shaped cutting tools that are like inverted end mills. Their cutting edges are on the inside of the pipe shape, and they can be used to create shapes like full points and form radii.
9. Side-and-face cutter
Side-and-face cutters have teeth both on their side and around their circumference, and are suitable for unbalanced cuts.
These cutting tools can be used for cutting slots and grooves with fast feed rates. Their teeth can be straight or staggered.
10. Gear cutter
CNC mills are sometimes used to create metal gears for the manufacturing industry. Specific gear cutting tools can be used to make these gears.
Cutting gears sometimes requires a special kind of milling machine known as a hobbing machine.
11. Slab mill
Slab cutters or plain milling cutters are used to mill flat surfaces, usually with the target surface mounted parallel to the machine table.
These cutting tools have no side teeth, and can be used for general or heavy-duty machining operations.
3ERP provides professional CNC machining services for your prototyping and production needs. Get in touch for a fast quote.
|
Emerging America built this digital resource to provide ongoing support for K-12 teachers of history, social studies, and humanities to challenge and nurture struggling learners. Resources for teachers and students to increase accessibility in the classroom are being added to this page regularly. Please share with us additional resources that you find helpful, and be sure to check our current course offerings.
One of the most important and challenging aspects of inquiry and historical thinking is to learn to ask and pursue meaningful and effective questions, and to teach in a way that encourages students to ask and pursue their own questions. (A thought-provoking article on supporting students’ questions is Alfie Kohn’s Who’s Asking.)
Teaching inquiry strategies is an important part of being a skilled history and social sciences educator. Primary sources play a central role in this process, a point emphasized in both state and national academic standards. See the National Council for the Social Studies (NCSS) FRAMEWORK and the Massachusetts 2018 FRAMEWORK.
This page offers a variety of strategies and tools to help teachers and students develop the skills necessary to deepen analysis and investigation, with a focus on primary sources. Find the strategies that work best for your classroom and students.
Use Library of Congress teacher resources such as the Primary Source Analysis Tool to help students learn through inquiry.
Observe: Students should make no inferences about the primary source. Rather, they should question only what they see. Teach the difference between questioning what you see and making assumptions about what is happening.
What do you notice first?
What kind of structures do you see?
What do you notice about the people?
Is there anything you notice because it is NOT there?
Reflect: Students should now speculate about the structures and people in the primary source. Speculate on the purpose of the structure and the role of the people.
Can you tell anything about the details of the source?
Why was the source created? Photograph taken?
What type of building is this?
Question: Build on the questions that students have presented. Encourage students to go deeper into the source. Brainstorm how students can find the answers to their questions.
What would you ask the person who took the photo?
Where is the building?
When was this happening?
Who owned the buildings?
Where did the people come from?
Wrap up the Observe-Reflect-Question sequence by having students consider whether and how the event represented by the primary source has impacted history. The teacher should help students figure out how they can find the answers to their questions. Students should be guided to appropriate and reliable sources.
A way to spur inquiry and close observation is by examining one quarter of the primary source at a time. This 6 minute exercise gives students a chance to focus in on particular details of the source. Having students write notes about each quadrant helps students to generate ideas and text fragments they can use in their writing; the partial view makes it easier for students to make notes without self-criticism. The process is a way to introduce students to the benefits of taking their time when interpreting sources, and to finding tools to delay drawing conclusions before looking closely and noticing as much as possible.
Introduce this exercise by showing an image for the first time without a caption or identifying information, for only 60 seconds, asking students to write nothing, just look at the image. After the 60 seconds in which students are shown the whole image, show just one quarter of the image for 60 seconds, and encourage students to write what they see.
Every 60 seconds, show a different quarter of the image asking the students to repeat the process
Finish the observation by again showing students the entire image
Once this 6-minute exercise is complete, students can be directed to share their observations with a partner and to complete a variety of tasks, depending on the teaching goals. For example:
- What are the three most important details you and your partner noticed?
- What was unique in each quarter? How did the divided image differ from the whole?
- If you were to give this image a title, what would it be?
- Write a thought bubble for a person in this image. What are they thinking?
The whole class discussion following sharing with partners can provide opportunities for groups to share their observations, and to post titles and/or thought bubbles on the board for all to see. Discussion can turn to the historical particulars of the image, including
- Who is the audience for this image? Who made it, and why?
- What other questions do you have about this image? What would you need to know to understand more about it?
The exercise can serve as an introduction to new content or new methods, among many possible purposes.
Prior to investigating a source, students examine the variety of people and groups that would interpret the source differently. This strategy helps with developing a context around a primary source. It also helps for students to appreciate different viewpoints and the purposes of creating a particular document.
Social Studies educators can use this activity to develop a safe environment for discussing difficult topics in history. Educators develop discussion guidelines based on the varying perspectives from the Circle Activity. Teachers can then create a system of hand signals indicating comfort level with discussing a topic and incorporate time for discussion and reflection. Students will benefit from a discussion that is student-driven and not fully directed by the teacher.
Example: A Circle Thinking Map on a primary source about lynching. https://blogs.loc.gov/teachers/2016/04/selecting-and-using-primary-sources-with-difficult-topics-civil-rights-and-current-events/
The Stripling Inquiry Model provides an image to help students make sense of the inquiry process. It is a six-step model.
Connect: Provide detailed context to the sources and connect to the major themes of historical study.
Wonder: Develop focus questions at different levels of thought and connect to larger themes for the unit of study.
Investigate: Determine the main ideas and details. Investigate the purpose of the source and the author’s point of view.
Construct: Draw conclusions about the evidence that has been compiled.
Express: Apply new ideas to share with others.
Reflect: After every investigation, short or long, pause to ask what we learned about the inquiry process. What new skills? What approaches? What pitfalls? Also take a moment to identify new or still unanswered questions to take learning to a higher level.
Injuries and Disability in 19th Century Industry – Stripling model and Read and Analyze Non-fiction (RAN) chart
Immigration: The Making of America – Quadrant Analysis
Propaganda Posters in the Spanish Civil War – Observe-Reflect-Question (ORQ) tool
Facing History has a page that links to dozens of teaching strategies to use in lessons, including many inquiry-based activities. They have videos on specific lessons and lesson types in their on-demand professional development section. Examples include two-column note taking for inquiry, the think-pair-share process, gallery walk, and more.
Right Question Institute. http://rightquestion.org/
- Joshua Beer, Goshen-Lempster Institute, New Hampshire on Question Formulation Technique (9:45 mins)
- Dan Rothstein, co-author of Just Make One Change: Teach Students to Ask Their Own Questions, TED Talk (13:40 mins)
- https://www.youtube.com/user/RightQInstitute YouTube channel with multiple videos.
- https://www.youtube.com/watch?v=9Dhg13QBOBM&t=1s intro (3:23 minutes)
See also assessment strategies.
|
Getting Started with Data Visualization in Python and a Few Tricks
Data Visualization is about taking data and representing it visually to make large data interpretable to humans. In this tutorial, we will be using matplotlib and seaborn.
Data Visualization is about taking data and representing it visually to make large datasets interpretable to humans. Data visualization also allows us to look at trends and patterns in the data to facilitate decision making.
Python has a lot of data visualization libraries for common types of visualizations. Some of the major libraries are matplotlib, seaborn, and plotly.
In this tutorial, we will be using matplotlib and seaborn and looking at common types of plots. We will also be looking at easy ways to make graphs prettier.
Common Types of Charts
- Bar Plot - A bar chart or bar graph is a chart or graph that presents categorical data with rectangular bars with heights or lengths proportional to the values that they represent. The bars can be plotted vertically or horizontally. A vertical bar chart is sometimes called a column chart.
- Line Plot - A line chart or line plot or line graph or curve chart is a type of chart which displays information as a series of data points called 'markers' connected by straight line segments. It is a basic type of chart common in many fields.
- Pie Chart - A pie chart (or a circle chart) is a circular statistical graphic, which is divided into slices to illustrate numerical proportion. In a pie chart, the arc length of each slice (and consequently its central angle and area), is proportional to the quantity it represents.
- Box Plot - A box plot is a method for graphically depicting groups of numerical data through their quartiles. Box plots may also have lines extending from the boxes (whiskers) indicating variability outside the upper and lower quartiles, hence the terms box-and-whisker plot and box-and-whisker diagram. Outliers may be plotted as individual points.
- Scatter Plot - A scatter plot is a type of plot used to visualize relationship between two variables. The data are displayed as a collection of points, each having the value of one variable determining the position on the horizontal axis and the value of the other variable determining the position on the vertical axis.
- Histogram - A histogram is an approximate representation of the distribution of numerical or categorical data. A histogram can be a line histogram or a bar histogram.
Note: Definitions taken straight from Wikipedia.
Matplotlib has 3 different layers, and each layer has a different level of customization.
Different layers of matplotlib are:
- Scripting Layer
- Artist Layer
- Backend Layer
We will be looking at the scripting layer since it's the easiest to use. The scripting layer is accessed through matplotlib.pyplot.
Importing the Libraries
Let's start by importing the libraries. We'll import pandas for reading the data and matplotlib for plotting.
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib import style
Importing the Data
We are going to use two datasets, just for demonstrating the things we can do with matplotlib and seaborn.
df1=pd.read_csv("India GDP from 1961 to 2017.csv")
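The text mentions a second dataset with a species column and sepal measurements, but its read_csv line isn't shown. As a stand-in, this sketch loads seaborn's bundled iris data; the column names sepal_length, sepal_width and species used in later examples are therefore assumptions.

import seaborn as sns  # used here only to fetch a sample dataset

# Stand-in for the second CSV (columns: sepal_length, sepal_width, petal_length, petal_width, species)
df2 = sns.load_dataset("iris")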
Basic Usage of pyplot
If you want to plot something quickly, you can use the pyplot.plot function. If you pass a single column, it'll plot it against the index. If you pass two columns, you'll get a line graph by default, and a scatter plot if you pass the string 'o'. We can also change the color of the data points by passing two-character strings like 'ro'. We can also use different markers such as '+'.
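A minimal sketch of those calls; the df1 column names 'Year' and 'GDP growth (%)' are assumed, since the real headers aren't shown.

plt.plot(df1['GDP growth (%)'])                     # single column, plotted against the index
plt.plot(df1['Year'], df1['GDP growth (%)'])        # two columns: line graph by default
plt.plot(df1['Year'], df1['GDP growth (%)'], 'o')   # scatter-style markers
plt.plot(df1['Year'], df1['GDP growth (%)'], 'ro')  # red circles
plt.plot(df1['Year'], df1['GDP growth (%)'], '+')   # plus markers
plt.show()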
You might have noticed that our plots do not look visually pleasing. One easy trick to make them instantly look better is to use matplotlib.style. It comes with a large number of styles, so feel free to experiment. In this tutorial, we are going to use the style 'ggplot', which is based on the famous R library ggplot2.
We can use style.available to get a list of all the styles.
Now, let us set style by using style.use(). We'll use 'ggplot' as argument.
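A quick sketch of both calls (style was imported from matplotlib above):

print(style.available)  # list all built-in style sheets
style.use('ggplot')     # apply the ggplot look to every plot that follows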
We can notice that our plot looks instantly better.
Plotting Basic Plots in Matplotlib
Scatter plot can be generated by using plt.scatter() and passing two arguments for x and y.
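A minimal sketch, using the assumed iris column names:

plt.scatter(df2['sepal_length'], df2['sepal_width'])
plt.show()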
Now, let's look at coloring the data points by their species. To do this, we need more than a single line of code. First, we create groups using pandas, and then we can plot the scatter plot using a for loop. We shall also go ahead and add labels for the x and y axes and add a title, using the xlabel, ylabel and title methods in pyplot.
groups = df2.groupby('species')  # dataframe and column names are assumed; they are not shown in the original
for name, group in groups:
    plt.plot(group['sepal_length'], group['sepal_width'], 'o', label=name)
plt.title('Sepal Length VS Sepal Width')
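The axis labels and legend can then be added before displaying the plot; a short continuation under the same assumed column names:

plt.xlabel('Sepal Length')
plt.ylabel('Sepal Width')
plt.legend()
plt.show()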
We can plot a bar plot by using plt.bar() and passing x and y values. The x-axis generally contains categorical values and the y-axis a numerical value.
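For example, plotting the (assumed) species counts as bars:

counts = df2['species'].value_counts()   # categorical x, numerical y
plt.bar(counts.index, counts.values)
plt.show()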
Plotting a pie chart in matplotlib is relatively easy. First, let us extract the counts for the species column and then plot them using matplotlib. We shall add the legend as well and set its position using the bbox_to_anchor parameter.
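A minimal sketch, again assuming an iris-style species column:

counts = df2['species'].value_counts()
plt.pie(counts.values, labels=counts.index)
plt.legend(counts.index, bbox_to_anchor=(1.25, 1))  # push the legend outside the axes
plt.show()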
Line plots are pretty easy in matplotlib. We can use plt.plot to generate them. We shall visualize India's GDP growth percentage from 1961 to 2017.
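A short sketch; the df1 column names are assumed:

plt.plot(df1['Year'], df1['GDP growth (%)'])
plt.xlabel('Year')
plt.ylabel('GDP growth (%)')
plt.title("India's GDP growth, 1961-2017")
plt.show()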
We can generate histogram using plt.hist() method and pass a numerical column as parameter, we can also set the number of bins and play with a few style options.
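For example, with the assumed sepal_length column:

plt.hist(df2['sepal_length'], bins=20, edgecolor='black')
plt.show()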
We can generate a boxplot by using plt.boxplot(). However, if we want to group the boxplot by species like we did with the scatter plot, we can use pandas' implementation via df.boxplot() instead, since it's pretty straightforward.
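A sketch of both variants, with assumed column names:

plt.boxplot(df2['sepal_length'])   # single, ungrouped box plot
plt.show()

df2.boxplot(column='sepal_length', by='species')   # pandas version, grouped by species
plt.show()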
Seaborn is a plotting library built on top of matplotlib. It provides an easy way to produce good-looking plots. Since it is built upon matplotlib, both can be used together to enhance plots generated by the other. We will be looking at a few examples of using seaborn to enhance matplotlib plots.
Let's start by using seaborn's aesthetics methods to style matplotlib plots. We generally use the alias 'sns' for seaborn.
sns.set() will set seaborn styling to matplotlib plots.
import seaborn as sns
sns.set_style() can be used to set the grid style. The following example shows how to use it.
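The original snippet isn't shown, so this is a reconstruction; any of the built-in grid styles ('darkgrid', 'whitegrid', 'dark', 'white', 'ticks') can be passed:

sns.set_style('whitegrid')
plt.scatter(df2['sepal_length'], df2['sepal_width'])
plt.show()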
sns.set_palette() can be used to select color palettes. Seaborn has some built-in palettes, and custom palettes can also be used. We will be looking into this later in the tutorial.
Plotting Basic Plots in Seaborn
Now, let's look at how to plot different types of charts in seaborn. We will be looking at the same types of charts that we looked at when we were using matplotlib.
A scatter plot can be generated using sns.scatterplot(). We can also group data points into categories just by passing the hue parameter with the column containing categorical values.
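A minimal sketch with the assumed iris columns:

sns.scatterplot(x='sepal_length', y='sepal_width', hue='species', data=df2)
plt.show()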
Let us create a custom palette and use it. It is as simple as creating a list of colors in hexadecimal values.
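For example (the hex values here are arbitrary):

custom_palette = ['#2a9d8f', '#e9c46a', '#e76f51']
sns.set_palette(custom_palette)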
Now, let us alter the scatter plot and add a fourth variable which we will represent using 'size' parameter. The resulting plot is called a bubble chart.
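A sketch using petal_length as the assumed fourth variable:

sns.scatterplot(x='sepal_length', y='sepal_width', hue='species',
                size='petal_length', sizes=(20, 200), data=df2)
plt.show()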
Bar plot can be generated using sns.barplot and passing x and y values.
Seaborn also provides the countplot method, which shows the counts of a categorical column.
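The two calls might look like this, with the assumed iris columns:

sns.barplot(x='species', y='sepal_length', data=df2)   # mean sepal length per species
plt.show()

sns.countplot(x='species', data=df2)                   # counts per category
plt.show()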
Box plot can be generated using sns.boxplot() method. Passing a single column will generate a single box plot. Passing the categorical variable with the column will group the values by the categorical variable values. We can pass it in any order and it'll only result in horizontal and vertical alignment differences.
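A sketch of both forms:

sns.boxplot(x=df2['sepal_length'])                      # single box plot
plt.show()

sns.boxplot(x='species', y='sepal_length', data=df2)    # grouped by the categorical column
plt.show()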
Histogram can be generated using sns.distplot(). Seaborn provides both bar histogram and line histograms by default.
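For example (note that newer seaborn releases replace distplot with histplot/displot):

sns.distplot(df2['sepal_length'])               # bar histogram with a KDE line
plt.show()

sns.distplot(df2['sepal_length'], hist=False)   # line (KDE) only
plt.show()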
Line plot can be generated using sns.lineplot and passing x and y values. As simple as that.
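A short sketch with the assumed df1 columns:

sns.lineplot(x='Year', y='GDP growth (%)', data=df1)
plt.show()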
Unfortunately, seaborn does not support pie charts. But, we can still use seaborn's styling to generate pie charts using matplotlib.
A good selection of colors can really enhance the data visualization experience. Although both matplotlib and seaborn come with built-in styling and palettes, and we can always use custom palettes, it is really handy to have a wide selection of existing palettes available. Enter palettable. It is a library which contains a wide variety of palettes from different data visualization tools and libraries that can be used alongside matplotlib and seaborn. Let us see how we can use it.
We will be using one of my favorite palettes called Prism. Here are a few examples:
from palettable.cartocolors.qualitative import Prism_10
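Palettable palettes expose mpl_colors (and hex_colors), which can be handed straight to seaborn; a short sketch using the assumed iris columns:

sns.set_palette(Prism_10.mpl_colors)   # use the Prism colors for subsequent plots
sns.scatterplot(x='sepal_length', y='sepal_width', hue='species', data=df2)
plt.show()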
There are multiple palettes available in palettable so feel free to mess around.
Saving Plots in High Resolution
We can save the generated graphs in high resolution using plt.savefig() method. We can set the figure size and dpi using plt.figure(). If we want a transparent background, we can use transparent=True in savefig() method.
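A minimal sketch; the figure size, dpi and file name are arbitrary:

plt.figure(figsize=(12, 8), dpi=300)
plt.scatter(df2['sepal_length'], df2['sepal_width'])
plt.savefig('scatter.png', dpi=300, transparent=True)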
That's it for this tutorial. In the future, I'll be covering interactive plots using plotly. Thanks for reading and have fun 'plotting'.
|
Week 7: Learning Math Skills with Sudoku
I love math. I’ve loved it all my life. I majored in math. It was the stepping stone to my introduction to computer science, which led to my career at Microsoft, and my creation of The Trip Clip.
Seriously, I love math. I just needed to get that out.
I also love puzzles. Puzzles of all kinds. And wouldn’t you know it? Puzzles are a wonderful stepping stone to math! Puzzles can teach a surprising number of math and computer science concepts, and the best part is that kids will think they’re playing, not learning.
Sudoku is one of many puzzles that can teach and hone math concepts, from very simple ones, to much more advanced ones, and The Trip Clip has 4×4, 6×6, and 9×9 Sudoku puzzles for kids.
- At its most basic, sudoku familiarizes kids with numbers. If you watch any beginning sudoku solver, you will see them whispering the numbers to themselves in order as they start with the 1’s, then the 2’s, then the 3’s, and work their way through each number, which means they do a lot of number sequence repetition and number recognition.
- Another benefit for very young players is simply practice writing the numbers. The 4×4 grids on The Trip Clip website can be solved by surprisingly young children, and they have nice big boxes to allow them to practice writing those tricky 2’s and 4’s.
- Sudoku is also a strong lesson in how to use process of elimination and deduction to solve a problem. These simple 4×4 puzzles help introduce kids into how this process of elimination works, and you may be surprised how quickly they can understand how to solve their first sudoku puzzles!
- Sudoku also teaches basic computer programming algorithms. No matter what approach they take, at some point kids will develop a consistent routine that they use to methodically work through the empty squares and find numbers they can insert into the puzzle. Their algorithm will tackle each column, or row, or box in an order that makes sense to them so they can find numbers to insert.
- And once they figure out what some of those basic algorithms are, they will invent extrapolation all on their own as they apply those algorithms to harder, more involved puzzles, and find out that it works!
And what makes Sudoku a truly great puzzle, is the deeper learning that can happen next.
- Kids will develop what feels like intuition about how to tackle each new puzzle they see, but what they think of as intuition is really a deepening ability to recognize patterns. They will start to quickly spot the row or square that is more filled in than the others. Or maybe they’ll glance at a puzzle and notice there are hardly any 2’s but an awful lot of 8’s, and they’ll know where to start.
- And the fun really begins when they start solving harder sudoku puzzles, where they may have to go down multiple paths before they can establish which one is the right path. Some much more advanced problem solving starts to occur at this point. Kids are essentially working through computer programming if/then statements to solve the sudoku puzzle.
- Kids will also be strengthening their memorization skills as they hold multiple paths in their head.
- These harder sudoku puzzles will also really stretch their problem solving skills, including their ability to reason and deduce.
- Last but not least, sudoku teaches focus, concentration, and perseverance, with a great payoff of a feeling of satisfaction for sticking with it and knowing for sure at the end that you’ve solved the puzzle correctly.
|