The intention of the following learning experience is to share, facilitate, and develop an understanding of biodiversity through inquiry, related to the overall Science expectations, as well as some of the expectations in Language and Social Studies. The process begins with critical questions related to students’ own lives and the impact they have on their local environment. The purpose of the critical questions is to engage students to guide their own learning, and gain new knowledge through the triangulation of data - observation, documentation, and conversation (Watt & Colyer). This provides a framework for investigation through inquiry. Although this resource suggests a somewhat linear framework for learning, and learning goals are somewhat static in nature, authentic inquiry-based learning occurs in a very organic way, honouring the distinct culture of a classroom, and is always responsive to students’ needs. Various experiences may include, but are not limited to, experiential learning, research of content (online or otherwise), and documentation. Assessment as learning focuses on small group discussions, ongoing feedback, sharing of new knowledge, research, self-assessment, and further inquiry/questions. Culminating learning experiences may take on various forms/formats (i.e., digital). Students are also invited to share with others, enabling student agency in their school and local community.
Welcome to Ms. Lindo's Music Page

Within my classroom students are encouraged to strengthen their musical understanding and explore their budding musical ability through writing, singing, playing, and listening to music. Students tackle the role of both composer and musician as they engage in various activities focused on fostering a healthy appreciation for music and good musicianship that will last throughout their lives. Learners are introduced to different technologies, musical instruments, and eras for development and inspiration. They delve into the lives of famed composers and other creative individuals as they deepen their musical knowledge through synergy and independent study. Creativity and imagination are prized, along with a healthy respect for all of their fellow musicians as students learn while following the S.T.A.R.S expectations.
Researchers have been able to watch the interior cells of a plant synthesize cellulose for the first time by tricking the cells into growing on the plant's surface. "The bulk of the world's cellulose is produced within the thickened secondary cell walls of tissues hidden inside the plant body," says University of British Columbia Botany PhD candidate Yoichiro Watanabe, lead author of the paper published this week in Science. "So we've never been able to image the cells in high resolution as they produce this all-important biological material inside living plants." Cellulose, the structural component of cell walls that enables plants to stay upright, is the most abundant biopolymer on earth. It's a critical resource for pulp and paper, textiles, building materials, and renewable biofuels. "In order to be structurally sound, plants have to lay down their secondary cell walls very quickly once the plant has stopped growing, like a layer of concrete with rebar," says UBC botanist Lacey Samuels, one of the senior authors on the paper. "Based on our study, it appears plant cells need both a high density of the enzymes that create cellulose, and their rapid movement across the cell surface, to make this happen so quickly." This work, the culmination of years of research by four UBC graduate students supervised by UBC Forestry researcher Shawn Mansfield and Samuels, was facilitated by a collaboration with the Nara Institute of Technology in Japan to create the special plant lines, and researchers at the Carnegie Institution for Science at Stanford University to conduct the live cell imaging. "This is a major step forward in our understanding of how plants synthesize their walls, specifically cellulose," says Mansfield. "It could have significant implications for the way plants are bred or selected for improved or altered cellulose ultrastructural traits - which could impact industries ranging from cellulose nanocrystals to toiletries to structural building products." The researchers used a modified line of Arabidopsis thaliana, a small flowering plant related to cabbage and mustard, to conduct the experiment. The resulting plants look exactly like their non-modified parents, until they are triggered to make secondary cell walls on their exterior.
FOR the first time, scientists are hot on the trail of a gene that shapes the development of human language. The discovery may help explain why many speech and language disorders arise. Scientists know that our genetic heritage can strongly affect our ability to communicate. For example, identical twins tend to share linguistic ability or disability. Geneticists have tried to hunt down the genes that influence language abilities by studying language disorders that can be inherited, such as stuttering. But such disorders tend to have complex patterns of inheritance, suggesting that many genes are involved, each making a small contribution. And so far, they have not snared a single solid candidate. But now British researchers say they are close to identifying a gene they call
A maggot is a larva of the common fly. Maggots have soft bodies and no legs, so they look a bit like worms. They usually have a reduced head that can retract into the body. Maggot commonly refers to larvae that live on rotting flesh or tissue debris of animals and plants. Some species eat healthy animal tissue and living plant matter. Some people choose to eat maggots intentionally. Maggots may be fried and eaten in places where eating bugs is commonplace. They can also be used to make a Sardinian delicacy. “Casu marzu” translates to maggot cheese or rotten cheese. It’s an Italian cheese that’s prepared specially to turn into breeding grounds for maggots. While casu marzu may be described as a fermented Pecorino cheese, it’s actually decomposing. It’s said that the cheese is safe to eat as long as the maggots are still living. It’s also possible to eat maggots by mistake since they’re often found around food, though usually they’re found around contaminated food that you’d avoid. However, eating maggots poses a few risks of which you need to be aware. It may be safe to consume maggots themselves, but you may be susceptible to whatever they’ve eaten or been exposed to, such as feces or rotting flesh. Fruit infested with maggots is likely to be rotting and ridden with bacteria. Other risks include the following: Myiasis is an infection that occurs when maggots infest and feed on the living tissue of animals or humans. It’s most common in tropical and subtropical countries. People who have difficulty maintaining good oral hygiene are particularly at risk. Larvae can settle in areas of the mouth where hygiene is poor. Eating maggots is also thought to leave the internal organs and tissue susceptible to the larvae, although myiasis is more commonly something that occurs under the skin. The maggots that cause myiasis can live in the stomach and intestines as well as the mouth. This can cause serious tissue damage and requires medical attention. Myiasis is not contagious. Symptoms of myiasis in your gastrointestinal tract include stomach upset, vomiting, and diarrhea. In the mouth, the larvae are typically visible. Eating maggots or maggot-infested food can cause bacterial poisoning. Most foods that have maggots aren’t safe to eat, especially if the larvae have been in contact with feces. Some houseflies use animal and human feces as breeding sites. They also breed on garbage or rotting organic material. It’s possible for maggots to become contaminated with Salmonella enteritidis and Escherichia coli bacteria. Symptoms of an E. coli infection include fever, diarrhea, nausea or vomiting, and cramping. Symptoms of salmonella are similar. Both conditions can also cause bloody stool and fatigue. Some people may be allergic to maggots. Certain types of larvae have been shown to cause respiratory and asthmatic symptoms in people who handled the larvae to use as live fishing bait or who are occupationally exposed. Contact dermatitis has also been reported. It’s been suggested that you may have an allergic reaction if you eat larvae that have been exposed to or consumed foods you’re allergic to. Scientific research is needed to clarify this view. Eating dried, cooked, or powdered maggots is safer than eating whole, unprocessed larvae. The processing would get rid of microbes, parasites, and bacterial spores. Producing larvae in this way would have less of an environmental impact than producing meat for human consumption. However, at present, risks still exist and likely outweigh potential benefits.
Call your doctor if you develop any unusual symptoms that you think are related to eating maggots. This is especially important if you’re in the tropics or traveling in a country with unsafe food conditions. Overall, it’s unlikely you’ll be exposed to large amounts of maggots. If you accidentally eat one in an apple, you’ll probably be fine. You may choose to eat fried maggots or casu marzu at your own discretion. To prevent maggots and flies from developing in your home, follow these tips:
- Keep your house and kitchen as sanitary as possible.
- Keep an eye on all of your fruits, vegetables, and meats to ensure they’re not becoming breeding grounds.
- Cover your fruits and vegetables with a net or store them in the refrigerator, especially if you live in a warmer climate.
- Keep your garbage can covered and take it out as often as possible.
In somewhat simpler terms, static friction is why your keyboard doesn't move across the desk while you type on it, and why everything you touch doesn't go flying across the room. It's also why cars have low gears and why you can keep your balance standing on the ground. Static friction is like a threshold force that has to be overcome in order for an object to move. Your keyboard is not moving while you type on it (well, it is, but infinitesimally). But if you firmly push the keyboard from one of its sides, it will slide a short way. The minimum amount you have to push in order to make it move is essentially equal to the force of static friction (with adjustments for gravity, etc). How much static friction is there for an object? It depends mostly on the object's weight. The weight of an object is determined not only by its mass, but also by the force of gravity, as well as all other forces acting in the vertical axis. The table that the keyboard is resting on exerts an upward force on the keyboard. It has to, if you think about it -- otherwise, the keyboard would simply sink into the table. This is an example of Newton's third law: "for every action, there is an equal and opposite reaction." The force of the table pushing upward on the keyboard is called the "normal force" (in physics and mathematics, the word "normal" is used to mean "perpendicular"). The normal force is equal to the object's weight, and acts in the opposite direction of the weight. This is why you can easily slide the keyboard across the table, but you cannot easily slide a parked car that has its brakes on. The weight of the car makes the force required very large. Briefly consider the normal force exerted on the keyboard by the table or desk. It would be equal to the keyboard's weight, which is equal to the acceleration of gravity (9.8 m/s^2) times the mass of the keyboard (1.3 kg, for a typical keyboard). Remember that force is measured in newtons, and is equal to the mass of an object times its acceleration. So we have:

W = (9.8 m/s^2)(1.3 kg) = 12.74 N

The normal force is therefore:

N = -W = -12.74 N

There is one more factor to consider before we can calculate the static friction, which is both obvious and a pain. Different objects have different degrees of slipperiness. It takes much less force to get a 1.3 kg block of ice moving across the table than a 1.3 kg block of plastic (like a keyboard). Similarly, it takes more force if the keyboard has sandpaper on the bottom instead of a piece of plastic. This difference is reflected in a quantity called the "coefficient of static friction". It does not vary with an object's weight, but with the material it is made out of. The only way to determine the coefficient of static friction for a particular object is to consult a table or conduct an experiment. Furthermore, we would also need to figure out the coefficient of static friction for the table as well, since the bottom of the keyboard and the top of the table are in contact. Coefficients of friction get smaller as objects get slipperier. Teflon on teflon is very close to 0, while sandpaper on sandpaper is very large. We will disregard the coefficient of static friction for the table in this small example, assuming it to be close to 0*. The coefficient of static friction for the bottom of the keyboard might be around .4**.
The force of static friction is equal to the coefficient of static friction times the normal force:

f = μN = (.4)(-12.74 N) = -5.10 N

Note that the coefficient of friction was unitless, because it's only a coefficient by which to increase or decrease the action of the normal force. According to this calculation***, you would need to exert around 5.1 newtons of force sideways on the keyboard to make it move.

An experiment to figure out the coefficient of static friction for two surfaces could be as follows:
- Gather up a block (made of material A), a pulley, a bunch of weights, a rope or string, and a scale, plus a horizontal surface made of material B.
- Determine the mass of the block by weighing it on the scale, and multiply by gravity (9.8 m/s^2) to calculate the weight.
- Tie one end of the rope to the block.
- Put the block on the surface (which needs to be elevated above the ground -- on a table is perfect. Of course, you can measure the coefficient for the table itself).
- Run the rope through the pulley and dangle it over the edge of the table.
- Add weights to the dangly end until the block just begins to move.
- Record the amount of weight you had to add, then add weights on top of the block, and repeat.
- Divide each added-weight number by the total weight pressing the block onto the surface (the block's weight plus any weights stacked on top for that trial). These are approximate coefficients of static friction (if you performed the experiment correctly) for surface A against surface B. Average them to get a better approximation.

This is only one method for determining the coefficient of static friction. More accurate methods are available.

* This is inaccurate.
** This is inaccurate -- I guesstimated.
*** The results of the calculation are grossly inaccurate for the real world, but correct in principle. The force REALLY required could be anything, as many factors have been glossed over or assumed out of existence.
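To tie the numbers above together, here is a minimal Python sketch of the same calculation (a sketch only: the 1.3 kg mass and the 0.4 coefficient are the guesstimates from the text, and the function name is invented for illustration):

# Maximum static friction on a horizontal surface: f = mu_s * N, with N = m * g.
G = 9.8  # acceleration of gravity, m/s^2

def max_static_friction(mass_kg, mu_s):
    # On a horizontal surface, the normal force equals the object's weight.
    normal = mass_kg * G
    return mu_s * normal

keyboard_mass = 1.3  # kg, typical keyboard (guesstimate from the text)
mu_static = 0.4      # coefficient of static friction (guesstimate from the text)
print(keyboard_mass * G)                              # weight: 12.74 N
print(max_static_friction(keyboard_mass, mu_static))  # about 5.1 N to start it sliding

The experiment described above runs the same formula in reverse: dividing the dangling weight that just starts the block moving by the total weight pressing the block onto the surface gives an estimate of mu_s.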
What is Concrete? Concrete is a composite construction material composed of cement (commonly Portland cement) and other cementitious materials such as fly ash and slag cement, aggregate (generally a coarse aggregate made of gravels or crushed rocks such as limestone, or granite, plus a fine aggregate such as sand), water, and chemical admixtures. The word concrete comes from the Latin word “concretus” (meaning compact or condensed), the perfect passive participle of “concresco”, from “com-” (together) and “cresco” (to grow). Concrete solidifies and hardens after mixing with water and placement due to a chemical process known as hydration. The water reacts with the cement, which bonds the other components together, eventually creating a robust stone-like material. Concrete is used to make pavements, pipe, architectural structures, foundations, motorways/roads, bridges/overpasses, parking structures, brick/block walls and footings for gates, fences and poles. Concrete is used more than any other man-made material in the world. As of 2006, about 7.5 cubic kilometres of concrete are made each year—more than one cubic metre for every person on Earth. Concrete powers a US$35 billion industry, employing more than two million workers in the United States alone. More than 55,000 miles of highways in the United States are paved with this material. Reinforced concrete, prestressed concrete and precast concrete are the most widely used types of concrete functional extensions in modern days.
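As a quick sanity check of that per-person figure, here is a one-off unit conversion in Python; the world population value is my assumption (roughly 6.6 billion in 2006), not a number from the text:

# 7.5 cubic kilometres of concrete per year, expressed per person.
volume_m3 = 7.5 * 1000**3          # 1 km^3 = 10^9 m^3, so 7.5e9 m^3
world_pop_2006 = 6.6e9             # assumed mid-2006 world population
print(volume_m3 / world_pop_2006)  # about 1.14 m^3 per person per year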
Object-oriented Programming Terminology
- Object: either a class or an instance, but typically refers to an instance.
- Class: a description of a set of similar objects, the instances. This description typically includes:
  - Instance variables: the names of variables that are assumed to be defined for every subclass or instance of the class.
  - Methods: definitions of the messages to which members of the class respond.
- Instance: an individual member of a class. Typically an instance must be a member of exactly one class. An instance is a data structure that:
  - can be identified as being an object
  - denotes the class to which the object belongs
  - contains values of the instance variables
- Superclass: a class to which a given class belongs. Sometimes a class may have more than one superclass.
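A minimal Python sketch may make the vocabulary concrete; the Animal and Dog classes are invented for illustration:

class Animal:                 # a class: describes a set of similar objects
    def __init__(self, name):
        self.name = name      # instance variable: defined for every instance

    def speak(self):          # method: a message members of the class respond to
        return self.name + " makes a sound"

class Dog(Animal):            # Animal is the superclass of Dog
    def speak(self):          # overrides the superclass's method
        return self.name + " barks"

rex = Dog("Rex")              # an instance: an individual member of class Dog
print(rex.speak())            # "Rex barks"
print(type(rex).__name__)     # "Dog": the instance denotes the class it belongs to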
In Australia in 2015 it was estimated that 209,000 people were living with chronic hepatitis C and 239,000 with chronic hepatitis B. In the ACT approximately 3,600 people are living with hepatitis C and 4,000 with hepatitis B. The burden of disease and mortality associated with hepatitis B and hepatitis C continues to increase, and preventable infections continue to occur.

Communities at risk

Hepatitis C is preventable and treatable, yet is one of the most commonly notified diseases in Australia. There is no vaccine for hepatitis C. Hepatitis C is not confined to any one age group, social or economic group, race or culture. In Australia the majority of infections are attributable to unsterile injecting drug use. Other risk factors include incarceration, receipt of blood products in Australia prior to 1990, unsterile tattooing or body piercing, medical and dental procedures in developing countries, and less commonly household transmission through shared personal effects, vertical transmission (from mother to child) and needle stick injuries. Hepatitis B infection is preventable including through vaccination. The majority of people living with chronic hepatitis B in Australia were infected at birth or in their early childhood in hepatitis B endemic regions such as the Asia Pacific, sub-Saharan Africa, or rural and remote Aboriginal and Torres Strait Islander communities. Most people (95%) infected as adults will recover completely from acute hepatitis B infection. Most infected at birth will develop chronic hepatitis B infection. As many as 38% of people living with hepatitis B in Australia are undiagnosed; however, effective treatment is available for those chronically affected. Many people live with hepatitis C or hepatitis B for many years without being aware. Unfortunately some are diagnosed only when they develop serious liver problems. When people are aware of their hepatitis infection they are able to make lifestyle choices to enhance their health, protect their liver and help prevent serious liver damage, prevent viral transmission to others, and to seek treatment. In some cases, being diagnosed may explain why people have felt fatigued or why certain foods or alcohol makes them feel unwell.

Hepatitis A is a highly contagious form of viral hepatitis with effects ranging from mild to severe inflammation of the liver. Transmission occurs through faecal-oral exposure and no chronic (long term) infection persists. There is an average incubation period of four weeks, after which symptoms may occur. Once a person has been exposed to hepatitis A through infection or vaccination, they develop immunity to future infection.

Stigma and discrimination

Stigma and discrimination experienced by people living with conditions such as hepatitis B & C is associated with negative health outcomes for affected populations. These outcomes also include negative influences on a person’s mental, psychological and emotional health. A range of policy responses (including laws) to stigma and discrimination are in place nationally and in the ACT, however people living with hepatitis B & C continue to report stigma and discrimination. Stigma and discrimination within the health care sector is commonly reported by people living with hepatitis B and C. For those affected, the negative effects can impact on receipt of care and can create unnecessary barriers to disclosure, diagnosis, management and treatment.
Stigma occurs as a result of perceiving a characteristic of a person as deviant (different) from the norms and expectations of the majority. Stigmatised people are labelled as different. The source of the labelling can be external (e.g. another person, an organisation) or internal (i.e. internalised or self-stigma). Discrimination occurs when labelling, perceptions and stigma begin to affect how a person or group is treated. Discrimination is unfavourable treatment because of a particular characteristic or difference. Sometimes discrimination is obvious, and sometimes it is more insidious. Regardless of whether an instance or pattern of stigma and discrimination is intended or unintended, actual or perceived, it has the same effect on the affected person. It is against the law to discriminate against a person living with hepatitis B or C in most circumstances. The Federal Disability Discrimination Act 1992 makes it an offence to discriminate on the basis of a person’s disability (including chronic viral hepatitis) anywhere in Australia. In the ACT there is also the Discrimination Act 1991. Discrimination laws cover acts of discrimination occurring in public life.

Rights and responsibilities

When a hepatitis B or C affected person must disclose

There are very few instances when an infected person is legally required to disclose viral hepatitis:
- If giving blood to the blood bank and you know you have hepatitis B or C, you are required to disclose this to them and your blood will not be accepted for donation. When blood is donated it is also screened for a range of infections, including hepatitis B and C. You may also be required to disclose if donating bodily organs or other bodily fluids, such as sperm.
- If you are a health care worker who conducts exposure prone procedures and you have hepatitis you may be required to notify your employer. Disclosure requirements differ from state to state. Hepatitis ACT, your state/territory health department and your professional body or union will be able to provide you with more information about local requirements.
- Some insurance policies, particularly life insurance, require that you disclose any infections, disabilities or illnesses you have that might influence the insurance company’s decision to insure you. If you don’t disclose this information it may affect future claims you may make. Be sure to read all insurance policies carefully and seek advice if you feel you need to.
- If you are a member of the Australian Defence Force and you have hepatitis B or C, you will have to disclose this. You are also required to disclose any existing medical conditions on application to enter the Australian Defence Force.

Should an affected person disclose to others?

Some people find that disclosing their hepatitis status is daunting and it is common to worry about how others will react. It is possible that a person disclosing their hepatitis status will be treated differently or discriminated against once people know. On the other hand there can be benefits to telling selected people. Disclosing can allow others a greater understanding of the health condition and can enable friends and family to be a source of support. In most situations, whether or not to disclose hepatitis status is entirely up to the affected person.
In making the decision whether or not to disclose, it may help to consider how people might react to the news, how that reaction might impact on the affected person, and how best to respond to any negative reactions. Points to consider when thinking about disclosure to others:
- It can help to find out as much as possible about hepatitis before telling others. Providing people with accurate information about hepatitis B or C can help correct misconceptions they may have.
- Some people find it useful to practice disclosing in their mind or to a friend, confidant, counsellor or hepatitis worker, before disclosing to others in their life.
- There are better times than others to share with someone new information about a health condition. It is important to have the discussion when both parties can give the matter sufficient consideration.
- If possible, having a supportive person on hand or easily contactable when disclosing can help.
- It can be a shock for friends or family to find out that a loved one has hepatitis B or C. It is important to give the person time to come to terms with this new information. Having a contact number on hand to access further information or support is important (Hepatitis Infoline: 1300 437 222).
- Remember that different people will react differently when told about hepatitis B or C. If the outcome is negative, it is important to remember that this is not a reflection on the worth of the affected person.
A lecture that attempts to explain the functional details, the operation and the purpose of use of an ancient astronomical mechanism, built about 2000 years ago. The Antikythera Mechanism was found by chance, in a shipwreck, close to the small Greek island of Antikythera, in April 1900, by sponge divers. The shipwreck was dated between 86 and 67 BCE (coins from Pergamon). Later the Mechanism was stylistically dated, around the second half of the 2nd century BCE (200 – 100 BCE). It was a portable (laptop-size), geared mechanism which calculated and displayed, with good precision, the movement of the Sun and the Moon in the sky and the phase of the Moon for a given epoch. It could also calculate the dates of the four-year cycle of the Olympic Games and predict eclipses! Its 30 precisely cut gears were driven by a manifold, with which the user could select, with the help of a pointer, any particular epoch. While doing so, several pointers were synchronously driven by the gears, to show the above mentioned celestial phenomena on several accurately marked spiral dials. It contained an extensive user’s manual. The exact function of the gears has finally been decoded and a large portion of the manual has been read after 2000 years by a major new investigation, using state of the art equipment. New results concerning the construction of the spirals and the pointers will be presented and the ability of ancient Greeks to use hard metals and cutting tools will be examined.

NB! The lecture is recorded but not webcast, like all Academic Training lectures.
- Antikythera1.png: Copyright National Archaeological Museum of Athens
- Antikythera2.png: Credit Professor K. Efstathiou, Aristotle University
- Antikythera3.png: Credit Dr. M. Anastasiou, Aristotle University
- Antikythera4.png: Credit Dr. M. Anastasiou, Aristotle University

Sponsor: Maria Dimou / 200 participants
When Nobel Peace Prize Laureate Wangari Maathai was a girl in Kenya, her mother taught her that the wild fig tree was a tree of God. When gathering firewood, she was instructed: "Don't pick any dry wood out of the fig tree, or even around it." Her mother told her, "We don't use it. We don't cut it. We don't burn it." This was an early lesson in conservation for Maathai, who would grow up to become the first African woman to receive the Nobel Peace Prize for her work planting trees. Ms. Maathai died of cancer on September 25. She was 71. Her life and her legacy remind us that peace comes after justice, and justice means a right relationship not only with human beings but also with the natural world. In her memoir, Unbowed, Ms. Maathai explained why the Kikuyu tradition considered the wild fig tree to be God's tree: "I later learned that there was a connection between the fig tree's root system and the underground water reservoirs. The roots burrowed deep into the ground, breaking through the rocks beneath the surface soil and driving into the underground water table. The water travelled up along the roots until it hit a depression or weak place in the ground and gushed out as a spring. Indeed wherever these trees stood, there were likely to be streams." The trees also prevented soil erosion, and when this traditional wisdom was no longer taught, when the idea of the holiness of trees and the biodiversity of the environment was lost, the people suffered. Women especially suffered. Forests were burned to make space for cash crops - coffee and tea. Trees that were not native to Kenya were planted because they grew faster, but they did not have a beneficial effect on the environment. This meant that rural women were forced to walk longer distances to find firewood. They started to eat a less healthy diet of foods that required less wood to cook. Land used for cash crops was not used to plant food that would be consumed locally. Malnutrition set in. Soil erosion led to muddy streams and a lack of clean fresh water. Ms. Maathai saw the connection between this environmental devastation, the subjugation of women and political corruption. In 1977, she established the Green Belt Movement, an effort to teach rural women how to plant trees. It was a difficult beginning. The movement started with great ceremony and planted seven trees. Two of the original seven survived to grow large and to provide shade. Ms. Maathai's life and work are examples of the truth of the adage, "Nothing is more powerful than a made up mind." She made up her mind that planting trees is a way to make life better for rural women and for all of humankind. She wanted to plant one tree for every person in Kenya. And the Green Belt Movement has planted tens of millions of trees. Trees are holy. In Biblical literature they very often represent prosperity and peace. Wisdom, understanding and righteousness are referred to as a tree of life. Eden means abundance, plenty, fullness. Paradise was a garden where the trees provided food for humankind. When we forget the holiness of trees and of nature, we fall into the conceptual error that human beings are not a part of nature and that nature is not a part of us. This error may lead us to privilege short-term economic gain over the long-term health of the earth, of the trees and of ourselves. Wangari Maathai did not make such a mistake. Dr.
Valerie Elverton Dixon is an independent scholar who publishes lectures and essays at JustPeaceTheory.com. She received her Ph.D. in religion and society from Temple University and taught Christian ethics at United Theological Seminary and Andover Newton Theological School.
Digital tools (free course)

Historians are increasingly required to understand and make use of complex digital tools in order to analyse and present their research in new and exciting ways. These modules and case studies are intended as introductions to various common tools and techniques. Historians may wish to use these but be uncertain of what they are, what they can do, and why they are useful. Topics covered include visualisation, linked data, and cloud computing, with more extensive training provided for semantic markup and text mining.

There are various digital tools available to historians to enable them to better analyse, present and carry out research in their fields of interest. These training modules and case studies are intended as introductions to some of these tools, giving historians an idea and starting point for what the tool is for, what it can do for them, and how they might start going about making use of it. Digital research areas covered are:
- Semantic Markup (training module; case studies; tool audit)
- Text Mining (training module; case study; tool audit)
- Visualisation tools (case study; tool audit)
- Linked data (case study; tool audit)
- Cloud computing (tool audit)

There are three primary components. First, a tools audit held on History Online (link: http://www.history.ac.uk/history-online/tools) which lists software which can be used for the five areas mentioned above. Second, a series of case studies looking in more detail at how historians have already made use of the tool. Third, two training modules which introduce historians to semantic markup and text mining, assuming no prior knowledge. These are described in more detail below:

Semantic markup: an introduction for historians: This module guides you in easy steps through the process of why you might want to mark up a text, how to do it, and what you need to consider throughout the process. The module looks primarily at what XML is and how to use it and gives you some ideas of how to use it for historical research using texts.

Text Mining: an introduction for historians: If you have ever wanted to search a broad range of texts or a large text and analyse it in complex ways then text mining tools might well be what you need. This module begins with a simple guide to what text mining is, and how it can be used in its most basic forms for historical research. From there the module gets more in-depth, with introductory training in using natural language processing, named entity recognition, and topic modelling. Both modules are introductions to their respective topics, but will give you enough knowledge to move forward with your own research materials in new and exciting ways.
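As a hedged taste of what the text mining module covers "in its most basic forms", here is a word-frequency sketch in Python; the input filename is hypothetical, and the more advanced topics (natural language processing, named entity recognition, topic modelling) would call for dedicated NLP libraries:

# Basic text mining: count the most frequent words in a historical text.
import re
from collections import Counter

with open("speeches.txt", encoding="utf-8") as f:  # hypothetical source file
    text = f.read().lower()

words = re.findall(r"[a-z']+", text)    # crude tokenization: runs of letters
counts = Counter(words)
for word, n in counts.most_common(20):  # the 20 most frequent words
    print(word, n)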
Common Reed (Phragmites)

What is it? Common reed, also known as phragmites, is a large perennial grass or reed with creeping rhizomes. It typically is found in or near wetlands but also can be found in sites that hold water, such as roadside ditches and depressions. Phragmites form dense stands, which include both live stems and standing dead stems from previous years. The plant spreads horizontally by sending out rhizome runners, which can grow 10 or more feet in a single growing season, rapidly crowding out native grasses.

Is it here yet? Yes. Extensive stands exist in both eastern and western Washington in marshes and along river edges and shores of lakes and ponds.

Why should I care? The Washington State Noxious Weed Control Board has listed common reed as a Class B noxious weed. The goals are to contain the plants where they already are widespread and prevent their spread into new areas. Cutting has been used successfully for control. Because it is a grass, cutting several times during a season, at the wrong times, may increase stand density. However, if cut just before the end of July, most of the food reserves produced that season are removed with the aerial portion of the plant, reducing the plant's vigor.

What should I do if I find one? How can we stop it? Do not purchase, plant, or trade this species.

What are its characteristics?
- Common reed is a perennial wetland grass that is able to grow to heights of 15 feet or more.
- Leaves are 8-16 inches long, .2 to 1.5 inches wide.
- Leaf blade is smooth and lanceolate (lance-shaped), tapering from a rounded base toward its top.
- Their hollow stems can reach 12 feet tall and have a rough texture.
- The flowers are dense, silky, floral spikelets and grow from 1–16 inches long. These feathery plumes are purplish in color and flower in late July and August.

How do I distinguish it from native species? Non-native common reed may be confused with native populations of phragmites. Native genotypes are less dense and the stems are thin and shiny. Native phragmites flowers are also less dense.
Prevention of high blood pressure – diet and exercise

Much of the disease burden of high blood pressure is experienced by people who are not labeled as hypertensive. Consequently, population strategies are required to reduce the consequences of high blood pressure and reduce the need for antihypertensive medications. Lifestyle changes are recommended to lower blood pressure, before starting medications. The 2004 British Hypertension Society guidelines proposed lifestyle changes consistent with those outlined by the US National High BP Education Program in 2002 for the primary prevention of hypertension:
- maintain normal body weight for adults (e.g. body mass index 20–25 kg/m2)
- reduce dietary sodium intake to <100 mmol/day (<6 g of sodium chloride or <2.4 g of sodium per day)
- engage in regular aerobic physical activity such as brisk walking (≥30 min per day, most days of the week)
- limit alcohol consumption to no more than 3 units/day in men and no more than 2 units/day in women
- consume a diet rich in fruit and vegetables (e.g. at least five portions per day)

Effective lifestyle modification may lower blood pressure as much as an individual antihypertensive medication. Combinations of two or more lifestyle modifications can achieve even better results. There is considerable evidence that reducing dietary salt intake lowers blood pressure, but whether this translates into a reduction in mortality and cardiovascular disease remains uncertain. Estimated sodium intake ≥6 g/day and <3 g/day are both associated with high risk of death or major cardiovascular disease, but the association between high sodium intake and adverse outcomes is only observed in people with hypertension. Consequently, in the absence of results from randomized controlled trials, the wisdom of reducing levels of dietary salt intake below 3 g/day has been questioned.

Management of high blood pressure

According to one review published in 2003, reduction of the blood pressure by 5 mmHg can decrease the risk of stroke by 34%, of ischemic heart disease by 21%, and reduce the likelihood of dementia, heart failure, and mortality from cardiovascular disease.

Target blood pressure

Various expert groups have produced guidelines regarding how low the blood pressure target should be when a person is treated for hypertension. These groups recommend a target below the range 140–160 / 90–100 mmHg for the general population. Cochrane reviews recommend similar targets for subgroups such as people with diabetes and people with prior cardiovascular disease. Many expert groups recommend a slightly higher target of 150/90 mmHg for those over somewhere between 60 and 80 years of age. The JNC-8 and American College of Physicians recommend the target of 150/90 mmHg for those over 60 years of age, but some experts within these groups disagree with this recommendation. Some expert groups have also recommended slightly lower targets in those with diabetes or chronic kidney disease with protein loss in the urine, but others recommend the same target as for the general population. The issue of what is the best target and whether targets should differ for high risk individuals is unresolved, although some experts propose more intensive blood pressure lowering than advocated in some guidelines. The first line of treatment for hypertension is lifestyle changes, including dietary changes, physical exercise, and weight loss.
Though these have all been recommended in scientific advisories, a Cochrane systematic review found no evidence for effects of weight loss diets on death, long-term complications or adverse events in persons with hypertension. The review did find a decrease in blood pressure. Their potential effectiveness is similar to, and at times exceeds, that of a single medication. If hypertension is high enough to justify immediate use of medications, lifestyle changes are still recommended in conjunction with medication. Dietary changes shown to reduce blood pressure include diets with low sodium, the DASH diet, vegetarian diets, and green tea consumption. Increasing dietary potassium has a potential benefit for lowering the risk of hypertension. The 2015 Dietary Guidelines Advisory Committee (DGAC) stated that potassium is one of the shortfall nutrients which is under-consumed in the United States. Physical exercise regimens which are shown to reduce blood pressure include isometric resistance exercise, aerobic exercise, resistance exercise, and device-guided breathing. Stress reduction techniques such as biofeedback or transcendental meditation may be considered as an add-on to other treatments to reduce hypertension, but do not have evidence for preventing cardiovascular disease on their own. Self-monitoring and appointment reminders might support the use of other strategies to improve blood pressure control, but need further evaluation.
Unless you're a tardigrade, you need water to survive. For many creatures, this means lapping up or drinking water up through the mouth. Others, like those in desert environments, get it from the food they eat or by relying on other adaptations, like gathering moisture on their bodies. Snakes have their own particular adaptation as well. They open their mouths and just soak in the H2O. And it's kind of adorable when they do. Snakes don't lap up water with their tongues. It'd be pretty difficult to do that, after all, considering that snakes don't open their mouths up wide enough when they flick out their tongues. Additionally, snakes' tongues actually go into sheaths when they're not in use, gathering up scents to give the snake a sense of their environment. So if the tongue can't help a snake get water, what does? For a while, we believed that snakes simply sucked in water through a small hole in their mouths. Think of it as a sort of built-in straw. This method, called the buccal-pump model, relies on the snakes, particularly boa constrictors, alternating the negative and positive pressure in their oral cavities to make a flow of water. They depress their jaws, creating negative pressure to draw in the water and then seal up their mouths on the side to create positive pressure and push the water into the rest of their bodies.

Except that's not how it works. A 2012 study published in the Journal of Experimental Zoology Part A debunked this particular assumption, at least in regards to some snake species. The mouth sealing process, so important to the buccal-pump model, wasn't always found in snakes, leaving the issue of how the snakes consumed water up in the air. Mouth sealing, it turned out, was incidental to the whole process. "One thing that didn't fit the model was that these species don't seal the sides of their mouth," David Cundall, a biologist at Lehigh University in Pennsylvania, explained in a 2012 statement released by the university. "From there, it took a long time for me to realize that the anatomy of the system and the lining of the lower jaw suggested a sponge model." Yes, a sponge model. It turns out that at least four species — the cottonmouth, the Eastern hognose snake, gray rat snake and the diamond-backed watersnake — move water through their mouths thanks to the sponge-like properties of their lower jaw. When snakes open their mouths to eat, they "unfold a lot of the soft tissues," according to Cundall, and the folding of this soft tissue creates a number of sponge-like tubes that water flows through. Muscle action then forces the water into the snake's gut. Cundall and his team used synchronized video and electromyographic recordings of muscle activity in three of those species and pressure recordings in the jaws and esophagus of a fourth to come to this conclusion. So sip on, snakes. And thanks for the quick lesson in biomechanics.
Assembly Language (finally!)

CS 301 Lecture, Dr. Lawlor

OK, so in the last two weeks, we've looked at bits, bit operations, hexadecimal, tables, and finally machine code (in excruciating detail). Together, these are everything you need to know in order to understand assembly language. Assembly language is, simply, a line-by-line copy of machine code transcribed into human-readable words. For example, we've been using the "move into register 0" instruction (0xb8) a lot. In an assembler, you can emit the same machine code with this little assembly language program:

mov eax, 5
ret

(Try this in NetRun now!)

The assembler (NASM, in this case) will then spit out the following machine code:

0: b8 05 00 00 00    mov eax,0x5
5: c3                ret
6: c3                ret

Note the middle column contains the same 0xb8 and so on that in HW2, we wrote by hand. (The duplicate "ret" instructions are because NetRun always puts in a spare "ret" instruction at the end, in case you forget one.)

The big advantage of using an assembler is that you don't need to remember all the funky arcane numbers, like 0xb8 or 0xc3 (these are "opcodes"). Instead, you remember a human-readable name like "mov" (short for "move"). This name is called an "opcode mnemonic", but it's always the first thing in a CPU "instruction", so I usually will say "the mov instruction" rather than "the instruction that the mov opcode mnemonic stands for".

There are several parts to this line:
- "mov" is the "opcode", "instruction", or "mnemonic". It corresponds to the first byte (or so!) that tells the CPU what to do, in this case move a value from one place to another. The opcode tells the CPU what to do.
- "eax" is the destination of the move, also known as the "destination operand". It's a register, register number 0, and it happens to be 32 bits wide, so this is a 32-bit move.
- 5 is the source of the moved data, also known as the "source operand". It's a constant, so you could use an expression (like "2+3*1") or a label (like "foo") instead.
- A semicolon indicates the start of a comment. Unlike in C/C++/Java/C#/..., semicolons are OPTIONAL in assembly!
- A newline. Unlike in C/C++/Java/C#/..., you MUST have a newline after each line of assembly. Assembly is line-oriented, so an instruction split across several lines WILL NOT WORK. Yup, line-oriented stuff is indeed annoying. Be careful that your editor doesn't mistakenly add newlines to long lines of text!

A list of all possible x86 instructions can be found in the x86 instruction set references. The really important opcodes are listed in my cheat sheet. Most programs can be written with mov, the arithmetic instructions (add/sub/mul), the function call instructions (call/ret), the stack instructions (push/pop), and the conditional jumps (cmp/jmp/jl/je/jg/...). We'll learn about these over the next few weeks!

Here are the commonly-used x86 registers:
- rax. This is the register that stores a function's return value.
- rax, rcx, rdx, rsi, rdi. "Scratch" registers you can always overwrite with any value. Note that "ebx" is NOT scratch!
- rdi, rsi, rdx, rcx, ... In 64-bit mode, these registers contain function arguments, in left-to-right order.
- rsp, rbp. Registers used to run the stack. Be careful with these!

Each of these registers is available in several sizes. Curiously, you can write a 64-bit value into rax, then read off the low 32 bits from eax, or the low 16 bits from ax -- it's just one register, but they keep on extending it!
- rax is the 64-bit, "long" size register. It was added in 2003.
- eax is the 32-bit, "int" size register. It was added in 1985.
- ax is the 16-bit, "short" size register. It was added in 1979.
- al and ah are the 8-bit, "char" size parts of the register. They're original, going back to 1971.

mov rcx, 0xf00d00d2beefc03 ; load 64-bit constant
mov eax, ecx ; pull out low 32 bits

(Try this in NetRun now!)

Arithmetic In Assembly

Here's how you add two numbers in assembly:
- Put the first number into a register
- Put the second number into a register
- Add the two registers
- Return the result

Here's the C/C++ equivalent:

int a = 3;
int c = 7;
a += c;

And finally here's the assembly code:

mov eax, 3
mov ecx, 7
add eax, ecx

(executable NetRun link)

Here are the x86 arithmetic instructions. Note that they *all* take just two registers, the destination and the source. Be careful doing these! Assembly is *line* oriented, so you can't say:

add (sub eax,ecx),edx

but you can say:

sub eax, ecx
add eax, edx

In assembly, arithmetic has to be broken down into one operation at a time!
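If it helps to see the register-size trick outside of assembly, here is a small Python sketch of what reading the low bits of the 64-bit constant from the mov rcx example gives; the bit-masking is my illustration, not part of the lecture:

rcx = 0xf00d00d2beefc03   # the 64-bit constant loaded above
eax = rcx & 0xFFFFFFFF    # the "eax" view: only the low 32 bits
ax = rcx & 0xFFFF         # the "ax" view: only the low 16 bits
print(hex(eax), hex(ax))  # 0x2beefc03 0xfc03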
What are Economies of Scope?

Economies of scope describe situations in which the long-run average and marginal cost of a company, organization, or economy decreases, due to the production of some complementary goods and services. An economy of scope means that the production of one good reduces the cost of producing another related good. While economies of scope are characterized by efficiencies formed by variety, economies of scale are characterized by volume. The latter involves the reduction of the average cost, or the cost per unit, that stems from increasing production for one single type of product. Economies of scale helped drive corporate growth in the 20th century, for example through assembly line production.

- Economies of scope describe situations when producing two or more goods or services together results in a lower cost than producing them separately.
- Economies of scope differ from economies of scale, in that the former means producing a variety of different products together to reduce costs while the latter means producing more of the same good in order to reduce costs by increasing efficiency.
- Economies of scope can result from goods that are co-products or complements in production, goods that have complementary production processes, or goods that share inputs to production.

Understanding Economies of Scope

Economies of scope are economic factors that make the simultaneous manufacturing of different products more cost-effective than manufacturing them on their own. Economies of scope can occur because the products are co-produced by the same process, the production processes are complementary, or the inputs to production are shared by the products. Economies of scope can arise from co-production relationships between the final products. In economic terms these goods are complements in production. This is when the production of one good automatically produces another good as a byproduct or a kind of side-effect of the production process. Sometimes one product might be a byproduct of another, but have value for use by the producer or for sale. Finding a productive use or market for the co-products can reduce costs or increase revenue. For example, dairy farmers separate milk into whey and curds, with the curds going on to become cheese. In the process they also end up with a lot of whey, which they can use as a high protein feed for livestock to reduce their feed costs or sell as a nutritional product to fitness enthusiasts and weightlifters for additional revenue. Another example of this is the black liquor produced by the processing of wood into paper pulp. Instead of being just a waste product that might be costly to dispose of, black liquor is burned as an energy source to fuel and heat the plant, saving money on other fuels, or can even be processed into more advanced biofuels for use on-site or for sale. Producing and using the black liquor saves costs on producing the paper.

Complementary Production Processes

Economies of scope can result from direct interaction of the production processes. Companion planting in agriculture is a classic example here, such as the Three Sisters historically farmed by Native Americans. By planting corn, pole beans, and ground trailing squash together, the Three Sisters method can increase the yield of each crop, while also improving the soil.
The tall corn stalks provide a structure for the bean vines to climb up; the beans fertilize the corn and the squash by fixing nitrogen in the soil; and the squash shades out weeds among the crops with its broad leaves. All three plants benefit from being produced together, so the farmer can grow more crops at lower cost. A more modern example might be a co-operative training program between an aerospace manufacturer and an engineering school, where students at the school also work part time at the business. The manufacturer can reduce its overall costs by obtaining low cost access to skilled labor, and the engineering school can reduce its instructional costs by effectively outsourcing some instructional time to the manufacturer's training managers. The final goods being produced (airplanes and engineering degrees) might not seem to be direct complements or share many inputs, but producing them together reduces the cost of both.

Because productive inputs (land, labor, and capital) usually have more than one use, economies of scope can often come from common inputs to the production of two or more different goods. For example, a restaurant can produce both chicken fingers and French fries at a lower average expense than what it would cost two separate firms to produce each of the goods separately. This is because chicken fingers and French fries can share the use of the same cold storage, fryers, and cooks during production. Procter & Gamble is an excellent example of a company that efficiently realizes economies of scope from common inputs since it produces hundreds of hygiene-related products from razors to toothpaste. The company can afford to hire expensive graphic designers and marketing experts who can use their skills across all of the company's product lines, adding value to each one. If these team members are salaried, each additional product they work on increases the company's economies of scope, because the average cost per unit decreases.

Different Ways to Achieve Economies of Scope

Economies of scope are essential for any large business, and a firm can go about achieving such scope in a variety of ways. First, and most common, is the idea that efficiency is gained through related diversification. Products that share the same inputs or that have complementary productive processes offer great opportunities for economies of scope through diversification. Horizontally merging with or acquiring another company is another way to achieve economies of scope. Two regional retail chains, for example, may merge with each other to combine different product lines and reduce average warehouse costs. Goods that can share common inputs like this are very suitable for generating economies of scope through horizontal acquisitions.
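One common way to put a number on this is the degree of economies of scope, SC = (C(A) + C(B) - C(A,B)) / C(A,B): a positive SC means joint production is cheaper than separate production. Here is a minimal Python sketch with invented cost figures for the restaurant example above:

def scope_economies(cost_a, cost_b, cost_joint):
    # Positive result: producing the two goods together is cheaper.
    return (cost_a + cost_b - cost_joint) / cost_joint

# Hypothetical annual costs: chicken fingers and French fries.
fingers_alone = 600_000  # a firm producing only chicken fingers
fries_alone = 400_000    # a firm producing only French fries
together = 800_000       # one kitchen producing both
print(scope_economies(fingers_alone, fries_alone, together))  # 0.25 > 0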
(Phys.org) —Declines of the food resources that feed lake organisms are likely causing dramatic changes in the Great Lakes, according to a new study. The study, led by the U.S. Geological Survey and co-authored by three University of Michigan researchers, found that since 1998, water clarity has been increasing in most Great Lakes, while phytoplankton (the microscopic water organisms that feed all other animals), native invertebrates and prey fish have been declining. These food web changes fundamentally affect the ecosystem's valuable resources and are likely caused by decreasing levels of lake nutrients, and by growing numbers of invasive species such as zebra and quagga mussels. The study found that inputs of phosphorus—the nutrient that limits phytoplankton growth—have declined in the Great Lakes since 1972, when the Great Lakes Water Quality Agreement was signed. The growing numbers of invasive species, such as zebra and quagga mussels, have caused phosphorus levels to decline in the offshore waters of some lakes over the last decade by filtering out organic material and nutrients. These decreases in nutrients have the potential to affect the smallest organisms up to the top predators. In Lake Huron, for example, plankton and fish appear to be controlled by declining nutrients or food. "Our study provides a comprehensive ecological report card that highlights existing gaps in scientific understanding and monitoring of the complex Great Lakes ecosystems," said David "Bo" Bunnell, lead author of the study and a U.S. Geological Survey scientist at the Great Lakes Science Center in Ann Arbor. "Ideally, it will spur future research to more rigorously test some of the predictions born from our relatively simple analyses." The U-M co-authors are Thomas H. Johengen of the Cooperative Institute for Limnology and Ecosystems Research, Thomas F. Nalepa of the U-M Water Center and Catherine M. Riseng of Michigan Sea Grant. "The study is significant since it uniquely explores reasons for the huge, recent changes in the Great Lakes food web by placing these changes into a mathematical framework," Nalepa said. "By using long-term data sets collected in all the lakes, it provides insights into the relative importance of the major drivers of ecosystem structure: nutrient inputs, invasive species and top fish predators." The Great Lakes provide valuable ecosystem services to the 30 million people that live within the watershed, but portions have been degraded since the industrial era. In 2010, the U.S. government initiated the Great Lakes Restoration Initiative, the largest investment in the Great Lakes in two decades, investing approximately $1 billion over the past four years. "These findings provide critical information to help decision-makers understand changes that are affecting the Great Lakes fishery that generates about $7 billion for the economy each year," said Suzette Kimball, acting director of the Geological Survey. "The work is the result of a strong public-private collaboration and greatly contributes to managers' ability to deal effectively with the changes occurring in these unique and vast freshwater ecosystems so important to our nation." The study was published online Dec. 5 in the journal BioScience.
Supporting Spelling at Home - Have your child write the spelling words: - On paper with pencils, pens, markers or paint - With chalk on a sidewalk - With dry erase markers on a white board or mirror - Type on the computer - With his/her fingers in a plate of pudding or in whipped cream/shaving cream on a counter - On post-it notes and stick on the bathroom mirror - Have your child spell the words out loud when you are in the car, walking to school, in the store, waiting in line, etc. - Spell words using blocks, Scrabble tiles or flash cards. You can make your own using index cards – one card for each letter. Consider using different coloured letters for vowels. - If the list of words to learn is long, have your child choose 4-5 to focus on at a time. - Combine spelling with physical activity - Your child can do jumping jacks, saying one letter per jump - When walking up the stairs, get him/her to say one letter per step - Your child can hula hoop and say one letter per loop - Download a dictionary app on your devices and keep a dictionary at home to help your child look up unfamiliar words. - Encourage your child to read. Often, good readers are good spellers. Information taken from For the Teachers
Macular degeneration is damage or breakdown of the macula. The macula is a small area at the back of the eye that allows us to see fine details clearly. When the macula doesn't function correctly, we experience blurriness or darkness in the center of our vision. Macular degeneration affects both distance and close vision and can make some activities - like threading a needle or reading - difficult or impossible. Although macular degeneration reduces vision in the central part of the retina, it does not affect the eye's side or peripheral vision. For example, you could see the outline of a clock but not be able to tell what time it is. Macular degeneration alone does not result in total blindness. People continue to have some useful vision and are able to take care of themselves. Many older people develop macular degeneration as part of the body's natural aging process. The two most common types of age-related macular degeneration are "dry" (atrophic) and "wet" (exudative). "Dry" macular degeneration: Most people have dry macular degeneration. It is caused by aging and the thinning of the tissues of the macula. Vision loss is usually gradual. "Wet" macular degeneration: Wet macular degeneration accounts for about 10% of all cases. It results when abnormal blood vessels form at the back of the eye. These new blood vessels leak fluid or blood and blur central vision. Vision loss may be rapid and severe. Macular degeneration can cause different symptoms in different people. The condition may be hardly noticeable in its early stages. Sometimes only one eye loses vision while the other eye continues to see well for many years. When both eyes are affected, the loss of central vision may be noticed more quickly. The following are some common ways that vision loss is detected: - Words on a page look blurred - A dark or empty area appears in the center of vision - Straight lines look distorted Many people do not realize that they have a macular problem until blurred vision becomes obvious. Your ophthalmologist can detect early stages of macular degeneration during a medical eye examination that includes the following: - Viewing the macula with an ophthalmoscope - A simple vision test where you look at a grid resembling graph paper - Sometimes special photographs called angiograms are taken to find abnormal blood vessels under the retina. Fluorescent dye is injected into your arm and your eye is photographed as the dye passes through the blood vessels in the back of the eye. Once abnormal blood vessels form, several treatments are available. However, patients do better if we catch the vessels in the early stages of the disease. This is why people with macular degeneration are asked to check their maculas by doing Amsler grid testing separately in each eye every day. The first proven treatment involves burning the abnormal blood vessels with a conventional laser. It, however, causes a permanent dark spot, and the blood vessels often come back even after successful treatment. Nonetheless, this treatment is still used when we can easily define the blood vessels and they are a certain distance away from the very center. A newer treatment for patients with wet macular degeneration and well-defined abnormal blood vessels involves injecting the patient with a medicine called Visudyne and then applying a low-intensity "cold laser" to the area. The blood vessels tend to recur, so patients are evaluated every 3 months.
Most patients stabilize after 4 to 6 treatments done over 1 to 1 1/2 years. Several experimental treatments are currently being evaluated in controlled national studies, including new laser treatments and the use of medications. Hopefully, in the future, some of these experimental techniques will become available and other options will also allow us to better treat this very common and devastating disease.
The enigmatic nawamis have been known since Edward H. Palmer's early explorations in the Sinai. During the years 1972 to 1982, Ofer Bar-Yosef, Avner Goren, and other Israeli scholars made intensive field investigations of these structures. It is generally agreed that these well-preserved stone-built structures were constructed during the 4th millennium B.C. by indigenous pastoralists and used as tombs at the end of the Chalcolithic and Early Bronze IA. The nawamis generally have the same rounded plan, 3 to 6 m in diameter and approximately 2 m in height. They are usually double walled and built of local stone, either sandstone slabs or granitic or metamorphic boulders. A wide range of burial offerings was found in the nawamis. These include beads (mostly dentalium, conus, carnelian, faience, bone, and ostrich eggshell), mother-of-pearl pendants, lambis-shell bracelets, transverse arrowheads, tabular scrapers, and some copper points. Bar-Yosef suggested that the nawamis were used as graves for family units. Both primary and secondary burials were found in the numerous fully excavated nawamis.
The human body is made up of trillions of cells. Cells of the central nervous system (CNS) are called neurons. Neurons are electrically excitable cells that process and transmit information. They communicate with each other via chemical and electrical synapses, in a process known as synaptic transmission. Neurons are the core components of the brain, spinal cord, and peripheral nerves, and the human brain has approximately 100 billion of them. Neurons are among the oldest and longest cells in the body. Unlike many of the other types of cells in your body, you have many of the same neurons for your whole life. Neurons come in all shapes and sizes. Some of the smallest neurons have cell bodies that are just 4 microns (millionths of a meter) wide. The largest ones have cell bodies that are 100 microns wide. Some of them, such as corticospinal neurons or primary afferent neurons, can be several feet long. If you would like to learn more about neuron anatomy and how they function in the human body, ask your doctor or nurse. He or she may be able to provide you with additional information.
Sea Level Rise Term Paper (excerpt): Rising sea levels, resulting from global warming, may have a potentially important impact on human culture. Recent evidence supports the contention that increases in greenhouse gases are linked to rising sea levels. One important impact of climate change and rising sea levels is increased rates of extinction across the globe. Further, changes in sea level will have a significant impact on outlying coastal areas, both in terms of physical changes and in terms of events such as storm surges. Rising sea levels in the United States and across the world will have significant economic and cultural impacts, and may influence human health and the environment through the flooding of toxic waste disposal sites. Warrick, in his 1993 book, Climate and Sea Level Change: Observations, Projections and Implications, notes that there are many important uncertainties in predicting both global climate change and changes in sea level. The factors that can impact global climate change include greenhouse gas concentrations and their associated impact on oceanic thermal expansion, ice sheets in Greenland and Antarctica, and mountain glaciers (Warrick, 1993). In the simplest scenario, greenhouse gases like carbon dioxide warm the Earth by absorbing outgoing infrared radiation (Titus et al., 1991). In any discussion of the environmental impact of sea level changes, particularly in the context of global warming, it should be noted that changes in sea level and climate are natural occurrences. Warrick (1993) notes, "Change in climate and sea level is the rule, not the exception. Natural variations in sea level are clearly evident over a large range of time and space scales, from the pulse of diurnal tides to globally coherent variations in sea level occurring over many millennia ... The Earth is a naturally strongly interactive, dynamic system" (p. 3). As Warrick (1993) points out, the fact that changes in sea level and climate are natural occurrences is often overlooked in the context of discussions of global change. In recent years, atmospheric concentrations of a number of greenhouse gases such as methane, carbon dioxide, chlorofluorocarbons, and nitrous oxide have increased dramatically. As such, these fast and dramatic increases have led to widespread speculation that they can be linked to changes in the global environment. However, a correct and useful assessment of the problem requires that such changes be considered within the context of normal natural variation in sea level and climate (Warrick, 1993). In the same breath, the speed and magnitude of the increasing greenhouse gas concentrations give "rise to legitimate concerns about the future" (Warrick, 1993, p. 3). These include concerns that humankind may be a significant and new factor in global environmental change that may dominate natural changes, and that changes in sea level and climate could accelerate at unknown rates. Further, there are concerns that the human impact on the global climate may have dramatic consequences, and that accelerated rates of global environmental change may exceed the human ability, and the ability of the natural world, to adapt to such changes (Warrick, 1993). Based on evidence from coral reefs and oxygen isotopes, the geological time scale suggests that sea level and climate are linked.
Sea level fluctuations are associated with transitions between warm interglacial periods and cold glacial periods. For example, during the last interglacial period, approximately 120,000 years before today, the mean global temperature was likely warmer, and the mean global sea level was likely 5 to 6 m higher than today. Similarly, about 18,000 years ago, during the last glacial maximum, global temperature was about four to five degrees colder than today, and the sea level was close to 100 m lower (Warrick, 1993). In the relatively recent past, sea level changes have been relatively slow and stable. In the last 1000 years, the rise in sea level has been approximately 0.1 to 0.2 mm per year (Warrick, 1993). Current evidence indicates that glaciers in Antarctica are melting and potentially may contribute to increases in sea level. A recent study published in the journal Science notes that glaciers in the Amundsen Sea sector of West Antarctica are discharging approximately 250 cubic km of ice into the ocean each year. This is approximately 60% more than the amount accumulated in their catchment basins. The authors note that this discharge alone could increase the sea level by over 0.2 mm per year. Further, the authors note that glacial thinning rates observed in 2002 to 2003 are larger than those seen during the 1990s (Thomas, R. et al., 2004). Interestingly, notes Warrick (1993), different oceanic regions of the world can be affected in different ways by global climate change. This can be linked to redistribution of ice mass during interglacial and glacial cycles (Warrick, 1993). While evidence clearly seems to indicate that climate change is occurring, and that sea levels are rising, the implications of these changes for the environment and humankind are still somewhat unclear. An investigation reveals that such changes may potentially have catastrophic impacts on biodiversity and on low-lying coastal areas in particular. Recent evidence suggests that climate change may pose a significant risk of increased extinction rates. In a study published in 2004 in the journal Science, researchers project that 15 to 37% of species in sample regions and taxa will be "committed to extinction" by 2050 (Thomas, C.D. et al., 2004, p. 145). The most positive projection of the study, based on minimal climate warming scenarios, suggests that approximately 18% of species will be committed to extinction by 2050. The authors assessed sample regions that made up approximately 20% of the Earth's terrestrial surface (Thomas, C.D. et al., 2004). Note the authors, "minimizing greenhouse gas emissions and sequestering carbon emissions to realize minimum, rather than mid-range or maximum, expected climate warming could save a substantial percentage of terrestrial species from extinction" (Thomas, C.D. et al., 2004, p. 147). Low-lying coastal areas around the world will perhaps be some of the regions most highly impacted by changes in sea levels. These regions may feel the effects of climatic change in terms of severe tropical storms and storm surges. Regions most likely to be impacted by rising sea levels include the Netherlands, China's eastern coast and Hong Kong, South America, the Mississippi deltas, the Ganges-Brahmaputra delta, Egypt, Bangladesh, and the Norfolk coast of the United Kingdom (Warrick, 1993). A significant rise in mean sea level will have a direct physical effect on coastal areas.
These include: "coastal or ocean shoreline inundation owing to higher normal tide levels plus increased temporary surge levels during storms, and saltwater intrusion primarily into estuaries and groundwater aquifers" (Sorensen, in Barth and Titus, Chapter 6). The human impact of increasing sea levels will also be felt in socioeconomic terms, in addition to physical and environmental changes (Warrick, 1993). If sea level rises at what is an estimated to be a 50 to 200 cm in the next century, the financial impact on the United States could be significant. In total, the cost for a one meter rise in sea level during that time would run 270 to 475 billion dollars. This would include the cost of protecting emotion resort communities by raising barrier islands and pumping sand onto beaches, the cost of using dikes and bulkheads to protect developed areas along sheltered waters, and the loss of undeveloped lowlands and coastal wetlands (Titus et al., 1991). However, the environmental consequences of such actions may prove prohibitive. For example, levees and bulkheads along sheltered waters may eventually eliminate much of America's wetland shorelines (Titus et al., 1991). Further, there are potentially significant social and cultural applications to measures taken in response to rising sea levels. The coast and local features are tied closely to the mythology of many cultures, and the lifestyles of many people. Importantly, "many people in developed and developing nations view the sea, coasts, reefs, and beaches as…
The 5 Paragraph Essay Format The First Paragraph: Begins with a topic sentence that introduces a general topic or theme. Follows the topic sentence with sentences that narrow the focus of the topic or theme, so that it is less general. Introduces the author of the text you are writing about (If applicable. If not, move on to the next bullet point). Introduces the title of that text (If applicable. If not, move on to the next bullet point). Narrows the discussion of the topic by identifying an issue or problem. Finishes with a thesis statement, which must be the final sentence of the introductory paragraph. Students will be graded on the quality of the thesis statement, and whether it is in its proper location: the final sentence of the introductory paragraph. Three Body Paragraphs: Begin with topic sentences that clearly relate to the topic, or issue, or problem, that was identified in the introductory paragraph. Sentences that elaborate on the focus laid out in the introductory paragraph, and demonstrate a clear connection to the thesis statement. A focus that consistently reflects the focus that was promised in the thesis statement, and an analysis that actually engages in comparison-contrast, or cause-effect, depending on what type of paper the student has been assigned to write. The Concluding Paragraph: Avoid phrases like "In conclusion, . . . ." Begins with a topic sentence that clearly relates to the topic, or issue, or problem, that was identified in the introductory paragraph. Sentences that make connections with, or revisit, points from the introductory paragraph and the body paragraphs. These points now serve to close the paper's argument. A final comment, or intellectual conclusion of sorts, that points out the larger significance of your argument. More About Thesis Statements: The thesis statement must make clear to readers the focus of the paper, and what the paper seeks to accomplish. This semester, students will write different types of papers, such as comparison-contrast, cause-effect, and an argumentative paper. Each of these different types of papers will have different types of thesis statements. For example, a comparison-contrast paper might have a thesis that states: "The purpose of this paper is to compare and contrast . . . ." A cause-effect paper might have a thesis that states: "The purpose of this paper is to examine . . . , the cause of which is . . . ," or "The purpose of this paper is to examine . . . the effect of which is . . . ." An argumentative paper, however, is quite different because the definition of a thesis statement for an argumentative paper is as follows: A debatable claim. Here students must be careful to include a thesis statement that is indeed a debatable claim, as opposed to a statement of fact, and this is where students often experience difficulties that have a negative impact on the grades papers can receive. Remember that the purpose of an argumentative paper is to persuade readers, so that by the end of the paper, readers agree with the paper's position on the issue at hand. Basically, to prove a statement of fact does not require much persuasion, but to prove a debatable claim requires much persuasion. For example, if a student is assigned to write an argumentative paper about alternate-fuel technologies for cars, here are two example thesis statements: Example of a poor thesis statement: "Biofuels and electric cars are two technologies that present strong possibilities for the future of transportation."
Example of an acceptable thesis statement: "Biofuels and electric cars are two technologies that present strong possibilities for the future of transportation, but the choice that makes more sense is biofuels because battery technology is not sufficiently developed, the internal combustion engine would not have to be replaced, and the United States already has a fuel-delivery system available to the masses in the form of gasoline service stations." Above, the example of a poor thesis is indeed poor because it is simply a statement of fact. It is entirely true that "Biofuels and electric cars are two technologies that present strong possibilities for the future of transportation," so a paper with this thesis statement sets out to prove a point that is pretty much a given; however, a paper that uses the example of an acceptable thesis must prove to readers that biofuels are the better choice for the future. Moreover, the acceptable thesis statement even goes so far as to lay out for readers the three main points that the paper's body paragraphs will cover. Always Use Academic Prose: I will grade all your course work, in part, on how well you apply the following requirements to your writings: Do not use contractions. Do not use first-person pronouns such as "I" "me" "my." Do not use second-person pronouns such as "you" "your" "yours." Do not engage in personal stories, meaning stories of your own life experiences, or the experiences of friends, family, and so on. Do not begin sentences with conjunctions: but, and, or, nor, for, so, yet. Do not pose any questions in any assignments. This means, quite literally, not to use questions. Write sentences in the form of statements instead. Do not quote the bible or make allusions to religion in any way. Avoid any form of direct address to the reader, such as "think about the fact that . . ." Avoid too casual of a prose style, such as sentences that begin with words like "well, sure, now, yes, no." Do not use phrases such as, "a lot," "lots" or "lots of," which can usually be replaced with one of the following words: many, most, much, often. Do not use exclamation points, for they are almost always unnecessary. Periods and commas should be inside of quotation marks, but other forms of punctuation go outside of quotation marks. Do not use the word "okay" when words like "acceptable" could be used instead. Do not use the word "nowdays," "nowadays," or any slight variation thereof.
Think of the distance from one integer to another as being one step. If you start at zero and take two steps to the right, you get to 2. If you start at 0 and take two steps to the left, you get to -2. If you take two steps forward, we take two steps back - we come together, cuz opposites attract. In any case, we go the same distance. Each integer (except 0) has two pieces of information: its distance from 0, and its direction from 0. The distance of an integer from 0 is called its magnitude or its absolute value. We indicate "the absolute value of #" by putting vertical bars around #, like this: |#|. Because absolute value is basically a measurement rather than a number, it is never negative. Can you imagine how difficult it would be, for example, trying to track down drapes that would fit a window measuring -3 feet long? Not even the helpful people at Target would be able to help you out with that one. The sign of the number is there to tell us which direction from 0 we are stepping. By the way, don't freak - there is actually very little physical activity involved in solving these problems. If there is no sign in front of the number, it means the number is positive. If there is one negative sign in front of the number, it means we theoretically reflected the number into the mirror, so the number is negative. Also, it looks like it's getting ready to brush its teeth. Of course, things can get more complicated than that. We can go crazy with the negative signs and write things like "-(-(-3))". Ever slipped up and said something like, "I don't want no socks for Christmas," and then your grammar-stickler uncle gets you socks just to prove a point? What you did was use a double-negative - because you said that you do not want no socks, that must mean that you do want socks. Same thing with a number like -(-3). However, with three negative signs as in the earlier example, the number would once again become negative. Now it's as if you're saying, "I don't want none of no socks for Christmas." Wow. You really need to spend some more quality time with your uncle. Remember, one negative sign meant we reflected a number into the mirror once. Two negative signs means we reflect the negative number back again, so now we're back to the right: And we could reflect again to get the following: Be Careful: Taking the negative of a number doesn't always give us a negative number, as the previous examples demonstrate. So before assuming a number is negative just because you see a negative sign, make sure that there's only one of them. If there are multiple negative signs, the number may be negative or positive, depending on how many negative signs there are. We know that you may not don't never can't no want to deal with too many negative signs, but it's just a part of life. If we want to be extra clear that a number is positive, we can write an extra "+" in front of it. +5 = 5. However, if there's no + or - sign, the number is understood to be positive. If a number is preceded by this symbol: ©, then it is copyrighted. You may not reproduce, retransmit or rebroadcast this number without the express written consent of Major League Baseball. You didn't pull anything doing those exercises, did you? We warned you to stretch first...
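One last bit for the calculator-inclined: here's a tiny Python sketch (our own toy, not part of the original lesson) that applies negative signs one at a time, so you can watch the sign flip back and forth just like the mirror reflections above:

```python
# Apply k negation signs to a number and watch the sign flip back and forth.
def negate(x: int, times: int) -> int:
    """Return x with `times` leading negative signs applied."""
    for _ in range(times):
        x = -x
    return x

for k in range(4):
    print(f"{'-' * k}(3) = {negate(3, k)}")
# -> (3) = 3, -(3) = -3, --(3) = 3, ---(3) = -3
# An even number of negative signs leaves the number positive;
# an odd number makes it negative. abs(-3) == abs(3) == 3 either way.
```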
Petroleum, consisting of crude oil and natural gas, seems to originate from organic matter in marine sediment. Microscopic organisms settle to the seafloor and accumulate in marine mud. The organic matter may partially decompose, using up the dissolved oxygen in the sediment. As soon as the oxygen is gone, decay stops and the remaining organic matter is preserved. Continued sedimentation - the process of deposits settling on the sea bottom - buries the organic matter and subjects it to higher temperatures and pressures, which convert the organic matter to oil and gas. As muddy sediments are pressed together, the gas and small droplets of oil may be squeezed out of the mud and may move into sandy layers nearby. Over long periods of time (millions of years), accumulations of gas and oil can collect in the sandy layers. Both oil and gas are less dense than water, so they generally tend to rise upward through water-saturated rock and sediment. Oil pools are valuable underground accumulations of oil, and oil fields are regions underlain by one or more oil pools. When an oil pool or field has been discovered, wells are drilled into the ground. Permanent towers, called derricks, used to be built to handle the long sections of drilling pipe. Now portable drilling machines are set up and are then dismantled and removed. When the well reaches a pool, oil usually rises up the well because of its density difference with water beneath it or because of the pressure of expanding gas trapped above it. Although this rise of oil is almost always carefully controlled today, spouts of oil, or gushers, were common in the past. Gas pressure gradually dies out, and oil is pumped from the well. Water or steam may be pumped down adjacent wells to help push the oil out. At a refinery, the crude oil from underground is separated into natural gas, gasoline, kerosene, and various oils. Petrochemicals such as dyes, fertilizer, and plastic are also manufactured from the petroleum. As oil becomes increasingly difficult to find, the search for it is extended into more-hostile environments. The development of the oil field on the North Slope of Alaska and the construction of the Alaska pipeline are examples of the great expense and difficulty involved in new oil discoveries. Offshore drilling platforms extend the search for oil to the ocean's continental shelves - those gently sloping submarine regions at the edges of the continents. More than one-quarter of the world's oil and almost one-fifth of the world's natural gas come from offshore, even though offshore drilling is six to seven times more expensive than drilling on land. A significant part of this oil and gas comes from under the North Sea between Great Britain and Norway. Of course, there is far more oil underground than can be recovered. It may be in a pool too small or too far from a potential market to justify the expense of drilling. Some oil lies under regions where drilling is forbidden, such as national parks or other public lands. Even given the best extraction techniques, only about 30 to 40 percent of the oil in a given pool can be brought to the surface. The rest is far too difficult to extract and has to remain underground. Moreover, getting petroleum out of the ground and from under the sea and to the consumer can create environmental problems anywhere along the line. Pipelines carrying oil can be broken by faults or landslides, causing serious oil spills.
Spillage from huge oil-carrying cargo ships, called tankers, involved in collisions or accidental groundings (such as the one off Alaska in 1989) can create oil slicks at sea. Offshore platforms may also lose oil, creating oil slicks that drift ashore and foul the beaches, harming the environment. Sometimes, the ground at an oil field may subside as oil is removed. The Wilmington field near Long Beach, California, has subsided nine meters in 50 years; protective barriers have had to be built to prevent seawater from flooding the area. Finally, the refining and burning of petroleum and its products can cause air pollution. Advancing technology and strict laws, however, are helping control some of these adverse environmental effects.
What is a definition of the nitrogen cycle? The nitrogen cycle is the biogeochemical cycle describing how nitrogen moves through the biosphere and atmosphere. Like the carbon cycle or the water cycle, the nitrogen cycle describes how nitrogen is converted into different forms as it moves through the cycle. The majority of nitrogen on Earth is actually stored in the atmosphere. Atmospheric nitrogen is not readily available for most organisms to use, though. Conversion of atmospheric nitrogen is called nitrogen fixation and is largely done by certain bacteria, although nitrogen can also be converted industrially through the Haber-Bosch process. After this conversion process, the resulting nitrates and ammonia can be used by plants. When living organisms die, nitrogen returns to the soil through ammonification. Nitrification by bacteria transforms ammonia in the soil into nitrates so that it is available for plants to use again. There are also denitrifying bacteria, which transform nitrates in the soil back into atmospheric nitrogen.
Difference between Anxiety and Anxiety Disorders Anxiety is a normal psychological and emotional response to danger in the present – the so-called fight or flight response. It is our natural response to a situation that we find stressful, such as the jitters that we might feel before a job interview or an important exam. We may also experience these unpleasant feelings when we are worried about our finances, or when we have a family member suffering from an illness. For some of us, our performance can be improved by some level of stress, while others find that experiencing severe anxiety on a day-to-day basis can interfere with their life. Anxiety disorders are a serious mental health condition that causes significant feelings of fear or worry. They refer more to a perceived anticipated threat in the future, whether that perceived danger is real or imagined. A key difference between anxiety and fear-related disorders is the stimulus or situation that triggers the fear or anxiety. Treatment for anxiety disorders typically involves psychotherapy and counseling, often alongside some form of medication. Of course, it is perfectly normal to experience some degree of anxiety when we face a particularly difficult situation, but many people find that anxiety interferes with their normal everyday life. Acute anxiety may be linked to other psychiatric conditions, for example, depression. Anxiety is not considered normal when: - It is present even when there is no stressful event - It interferes with normal everyday activities such as socializing and work - It is considered severe and prolonged Physical Symptoms of Anxiety Disorders Symptoms of anxiety are triggered by the brain, which sends messages to different areas of the body in preparation for the 'fight or flight' response. Certain organs in the body such as the lungs and heart work faster, whilst the brain releases an increased amount of stress hormones such as adrenaline. As a result, certain physical symptoms can occur, such as: - Abdominal pain or discomfort - Increased/rapid heartbeat and/or palpitations - Pain and tightness in the chest - Shortness of breath - Dry mouth - Swallowing difficulties - More frequent urination Psychological Symptoms of Anxiety Disorders In addition to physical symptoms, there are also psychological symptoms, which can include: - Inability to sleep (insomnia) - Anger and irritability - Inability to maintain concentration - Not feeling like one can control one's actions (depersonalisation) - Feeling unreal - A fear of madness There are several types of anxiety disorders. They are often associated with a physical condition such as a thyroid disorder; the anxiety usually improves when the physical illness is treated. Anxiety is also the main symptom of the mental illnesses known as anxiety disorders. It is very often the symptom of a further mental health problem, for example depression, alcohol misuse, personality disorder or withdrawal from long-term use of tranquilizers. Some sufferers experience what is known as Acute Stress Reaction, where the symptoms develop quickly following the event. This type of reaction usually occurs following an unexpected event such as bereavement. For some, this reaction may occur before the event, for example an exam. This is known as situational anxiety, and the symptoms usually disappear fairly quickly and no treatment is required.
- When it comes to answering the question of what an anxiety disorder is, most sufferers would define it as a flurry of emotions, including fear and worry, that do not seem to go away.
- People with anxiety find it hard to identify the causes of their disorder, but sometimes asking themselves why they have anxiety can reveal things they did not realise were causing them stress.
- Physicians have categorized the types of anxiety disorders into three broad groups: generalized anxiety disorder, social phobia disorder and panic attacks.
- Obsessive compulsive disorder (OCD) is a serious psychiatric condition and a type of anxiety disorder characterized by obsessions and/or compulsions that can have a serious impact on an individual's daily life.
- As well as being a mental disorder that affects the mind, there are physical symptoms of stress and anxiety, such as sleep disorders and depression, that can affect the body.
- Generalized anxiety disorder in children is characterized by their worrying about small things such as past conversations or incidents as well as upcoming events or schoolwork.
- Although it may seem difficult to differentiate between everyday anxiety, which affects everybody, and an anxiety disorder by clinical definition, there is a difference.
- Suffering from anxiety does not mean spending a small fortune on psychiatrists and medications, since there are several anxiety coping strategies that cost very little to implement.
- Many folks with high levels of nervousness, worry and nervous tension need to find more effective ways of anxiety disorder and stress management to deal with the daily pressures they face.
- Folks who suffer from continual anxiety, stress and anger management issues can help themselves by practicing the old adage of taking a deep breath and counting to ten.
- Treatments for depression and anxiety attacks typically include medication to control the symptoms of the anxiety disorder together with psychotherapy with a psychiatrist.
- Many anxiety sufferers would prefer treating anxiety naturally rather than using prescription medications. There are several natural remedies for anxiety, including kava kava, St John's Wort and valerian root.
- While a good many health care providers are wary of suggesting St John's Wort for depression and anxiety, they also have trouble denying that many depressives find relief with this natural remedy.
- The difference between phobias and anxiety disorders is that a phobia is an irrational and extreme fear, while an anxiety disorder can be caused by a phobia.
- A phobia is an extreme irrational fear of something, while anxiety panic attacks are an irrational disorder brought on by an overreaction to a series of thoughts or an outside stimulus.
- Although some phobias do tend to sound bizarre, they are able to develop due to a certain little-used human instinct that is intended to help us survive, but occurs when fear is misplaced.
- In many cases, people with phobias realize how silly their fear is, but since it is an irrational fear, they are powerless to escape once the phobic reaction is triggered.
- Various techniques for treating phobias allow therapists to free people from the grip of their fears, but phobia therapy only works long term when the sufferer deals with the root cause of the fear.
- Even though there are hundreds of different phobias, the vast majority of them fall into five categories called specific phobias.
- Agoraphobia is based on a fear of having a phobic reaction or panic attack in public, and literally means fear of the marketplace or public spaces.
- Panic attack symptoms and causes are often the result of some underlying reason, and it is therefore important to look at those causes to prevent them from continuously recurring in the future.
- Symptoms of a panic attack include periods of disabling fear or anxiety brought on by a certain trigger, and can be accompanied by shortness of breath, dizziness, and even severe chest pains.
- Men are many times more likely to commit suicide than women, but the early signs of depression in men that lead to these suicidal thoughts are often ignored.
- Depression in children is characterized in much the same way as in adults who are depressed, although you might wonder what a child has got to be depressed about.
- Separation anxiety in young children tends to occur around the six-month mark, when babies begin to understand the principle of object permanence.
- Post Traumatic Stress Disorder (PTSD) is a disorder that can develop in some people who have experienced a terrifying, distressing or traumatic event.
To ‘acknowledge’ means to recognise the importance or quality of something. This concept and its application in Australia have been imperative to the slow repair of the disparities that exist between First Nations People and non-Indigenous Australians. By acknowledging Aboriginal and Torres Strait Islander people and, in turn, giving them the opportunity to welcome us to Country, we promote an awareness of the history and culture of Indigenous people, as well as contribute to mending a long history of dispossession and colonisation. Further to this, by incorporating welcoming and acknowledgment into official meetings and events, we recognise First Nations Peoples as the First Australians and Traditional Custodians of this land. The difference between an Acknowledgment of Country and a Welcome to Country is not often well understood. In light of this, today we seek to explain their meanings, tradition and importance. “An Acknowledgement of Country is an opportunity for anyone to show respect for Traditional Owners and the continuing connection of Aboriginal and Torres Strait Islander peoples to Country. It can be given by both non-Indigenous people and Aboriginal and Torres Strait Islander people.” There is no set script for an Acknowledgment of Country; personalising and localising an acknowledgment is encouraged, as this helps to make it as meaningful as possible. “A Welcome to Country occurs at the beginning of a formal event and can take many forms including singing, dancing, smoking ceremonies or a speech in traditional language or English. A Welcome to Country is delivered by Traditional Owners, or Aboriginal and Torres Strait Islander people who have been given permission from Traditional Owners, to welcome visitors to their Country.”
It sounds like science fiction: giant solar power stations floating in space that beam down enormous amounts of energy to Earth. And for a long time, the concept – first developed by the Russian scientist Konstantin Tsiolkovsky in the 1920s – was mainly an inspiration for writers. A century later, however, scientists are making huge strides in turning the concept into reality. The European Space Agency has realised the potential of these efforts and is now looking to fund such projects, predicting that the first industrial resource we will get from space is “beamed power”. Climate change is the greatest challenge of our time, so there’s a lot at stake. From rising global temperatures to shifting weather patterns, the impacts of climate change are already being felt around the globe. Overcoming this challenge will require radical changes to how we generate and consume energy. Renewable energy technologies have developed drastically in recent years, with improved efficiency and lower cost. But one major barrier to their uptake is that they don’t provide a constant supply of energy. Wind and solar farms only produce energy when the wind is blowing or the sun is shining – but we need electricity around the clock, every day. Ultimately, we need a way to store energy on a large scale before we can switch to renewable sources. The ‘switch to solar’ is as much a serious consideration as it is an inevitable necessity. And while more and more efforts are being made on an everyday basis to use as much solar power as possible on Earth, it seems like the concept is about to take a turn that can best be described as ‘sci-fi.’ Solar power in space has a long history: satellites have carried solar panels since the late 1950s, and we have been harnessing the sun for use in space for more than six decades now. And yet, beaming that power to the ground has taken far longer to become practical. There are many reasons for this, with the most prominent one being that sending solar-generated electricity from space to the earth can seem like a logistical nightmare. After all, we can’t really use a wire in this scenario, can we? But there might be a solution that can make this a reality. Let’s dive in, shall we? A solar space station will have one job – to orbit the earth in such a way that it faces the sun 24 hours a day, seven days a week, for every year of its operation (the satellite's orbit will have to be chosen for this) and collect solar power. A setup like this faces none of the usual setbacks, such as bad weather or a lack of sunlight during nighttime. The station will first convert solar energy to electricity, just as any photovoltaic device does. Now, the question remains: how can it transfer that energy to earth? Well, the solution is simple enough. The solar space station will convert this electricity into microwave energy and beam it down to the planet. The energy will be received and converted back into electricity. The transfer works best at millimetre wavelengths (the band also used for 5G communication). The first challenge lies in the satellite’s position itself. For the satellite to effectively beam down waves, it will have to be in a geosynchronous orbit, just like a telecommunications satellite is. 36,000 km above the earth’s surface is a good place for such a setup, and a receiving centre will be built at a fixed point on Earth.
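That 36,000 km figure is easy to sanity-check with Kepler's third law. The short Python sketch below is my own illustration using standard physical constants, not material from any space agency's documentation:

```python
import math

# Sanity check: derive the geostationary altitude from Kepler's third law,
# T^2 = 4 * pi^2 * r^3 / (G * M), using standard physical constants.
G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24       # mass of Earth, kg
R_EQUATOR = 6.378e6      # equatorial radius of Earth, m
T_SIDEREAL = 86_164.1    # one sidereal day (Earth's true rotation period), s

# Solve for the orbital radius r, then subtract Earth's radius for altitude.
r = (G * M_EARTH * T_SIDEREAL**2 / (4 * math.pi**2)) ** (1 / 3)
altitude_km = (r - R_EQUATOR) / 1000
print(f"geostationary altitude: {altitude_km:,.0f} km")  # ~35,786 km
```

An orbit at that altitude completes exactly one revolution per sidereal day, which is why the satellite appears to hover over one spot and can keep its beam fixed on a single receiving station.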
However, problems could arise if the satellite's position is not maintained precisely for years on end, so that the point where it aims the beam does not slowly shift. The second option would be to put the satellite in a low-earth orbit and build several receiving stations. However, scientists have noted that these challenges aren’t as difficult to solve as one might think, which gives us at the Swiss Institute for Disruptive Innovation a lot of hope about seeing this operational in the coming decades. The other issue can be logistics. Millions of solar panels will have to be transported up to space, which is anything but a simple or cost-effective task. Here, companies like SpaceX and Blue Origin can prove to be the innovators that provide reusable rockets to get the job done with much greater efficiency. The U.S. is not alone. In March 2019, a story was published in Forbes magazine revealing that China also has plans to build and launch a space solar station. While China hasn’t confirmed the story yet, it seems reasonable for them to turn to space for energy. With this concept on the table, the future looks bright indeed. It is only a matter of time till we run out of the fuels we use to make electricity. Space solar stations can advance humankind by leaps and bounds – and if you want to be a part of the revolution in the future, our Space and Architecture course would be a good place to start!
Astronomers discover tiny, planet-sized star An international team of astronomers has discovered the smallest star found to date – a tiny red dwarf only slightly larger than the planet Saturn. Known as EBLM J0555-57Ab, it sits roughly 600 light years from Earth, and is so small that it was discovered using a technique ordinarily used for hunting exoplanets. Low-mass bodies such as red dwarfs are the most prevalent star types in the galaxy. Their diminutive size and luminosity in comparison to a stellar body such as our Sun, however, make them relatively hard to detect. Despite their seemingly unimpressive nature, red dwarfs represent some of the most exciting stars in the Milky Way, as they are thought to be the stellar bodies with the greatest chance of harboring Earth-sized exoplanets, themselves capable of hosting liquid water on their surfaces. One such red dwarf, known as TRAPPIST-1, generated a massive amount of excitement in February, when scientists sensationally announced that it played host to seven Earth-sized exoplanets, three of which orbited in the star's habitable zone. EBLM J0555-57Ab first came to the attention of the team in October 2008, when the exoplanet-hunting experiment, Wide Angle Search for Planets (WASP), spotted a dip in the output of light from the Sun-like star EBLM J0555-57, as the red dwarf made a transit across its disk. Follow-up observations from the CORALIE spectrograph mounted on the 1.2-m (3.9-ft) Euler telescope, and the smaller TRAPPIST telescope, both located at the La Silla observatory in Chile, confirmed that the two stars belonged to an eclipsing stellar binary system. The red dwarf transits across the disk of the primary Sun-like star EBLM J0555-57 once every 7.75 days. "This star is smaller, and likely colder than many of the gas giant exoplanets that have so far been identified," said lead author of the study Alexander Boetticher. "While a fascinating feature of stellar physics, it is often harder to measure the size of such dim low-mass stars than for many of the larger planets. Thankfully, we can find these small stars with planet-hunting equipment, when they orbit a larger host star in a binary system. It might sound incredible, but finding a star can at times be harder than finding a planet." The newly-discovered red dwarf boasts a mass similar to that of TRAPPIST-1, despite being only a little larger than the planet Saturn, which has a diameter of about 72,000 miles (115,872 km), nearly 10 times that of Earth. It is also important to remember that small is contextual. Though only the size of Saturn, if, by some miracle, you found yourself standing on the "surface" of the red dwarf, its gravity would exert a force 300 times that of Earth's. So, you know, don't go there. That said, the star is so small that it sits only slightly above the mass threshold at which a stellar body is capable of turning hydrogen nuclei into helium through nuclear fusion. Were EBLM J0555-57Ab to be any smaller, there would not be enough pressure at the heart of the star to maintain the fusion process, and it would transform into a failed stellar body, known as a brown dwarf. A paper on the findings has been published in the journal Astronomy and Astrophysics. Source: University of Cambridge
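As a postscript, that 300-times-Earth gravity figure can be sanity-checked with Newton's law of gravitation. The Python sketch below uses my own assumed round numbers (a mass of about 85 Jupiter masses, similar to TRAPPIST-1, and a Saturn-like radius); they are not values quoted from the paper:

```python
# Sanity check: surface gravity of a Saturn-sized star carrying ~85 Jupiter masses.
# The mass and radius are assumed round numbers, not figures from the paper.
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_JUPITER = 1.898e27   # mass of Jupiter, kg
R_SATURN = 5.8232e7    # mean radius of Saturn, m
G_EARTH = 9.81         # Earth's surface gravity, m/s^2

mass = 85 * M_JUPITER               # ~0.08 solar masses, similar to TRAPPIST-1
g = G * mass / R_SATURN**2          # Newtonian surface gravity, g = GM/r^2
print(f"surface gravity: ~{g / G_EARTH:.0f} times Earth's")  # ~320 times
```

Squeezing a TRAPPIST-1-like mass into a Saturn-sized sphere is what drives the gravity so high: the mass rises by a factor of ~85 over Jupiter's while the radius shrinks, and gravity scales as mass over radius squared.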
Conch, or conque, also known as a "seashell horn" or "shell trumpet", is a wind instrument that is made from a seashell – the shell of one of several different kinds of very large sea snails. The shells of large marine gastropods are prepared by cutting a hole in the spire of the shell near the apex, and then blowing into the shell as if it were a trumpet, as with a blowing horn. Sometimes a mouthpiece is used, but some shell trumpets are blown without one. Various species of large marine gastropod shells can be turned into "blowing shells", but some of the best-known species are: the sacred chank or shankha, Turbinella pyrum; the "Triton's trumpet", Charonia tritonis; and the queen conch, Strombus gigas. Shell trumpets have been known since the Magdalenian period (Upper Paleolithic), one example being the Marsoulas conch, an archaeological shell trumpet which is on display at the Muséum de Toulouse. India and Tibet The sacred chank, Turbinella pyrum, is known in India as the shankha. In Tibet it is known as "Dung-Dkar". For the Hindu context, see the article shankha. Throughout Mesoamerican history, conch trumpets were used, often in a ritual context. In Ancient Maya art, such conches were often decorated with ancestral images; scenes painted on vases show hunters and hunting deities blowing the conch trumpet. The queen conch Strombus gigas was, and sometimes still is, used as a trumpet in the West Indies and other parts of the Caribbean. The Pacific Ocean area The Triton shell, also known as "Triton's trumpet", Charonia tritonis, is used as a trumpet in Melanesian and Polynesian culture, and also in Korea and Japan. In Japan this kind of trumpet is known as the horagai. In Korea it is known as the nagak. In some Polynesian islands it is known as "pu". Conch shell trumpets were historically used throughout the South Pacific, in countries such as Fiji. In resorts in Fiji they still blow the shell as a performance for tourists. The Fijians also used the conch shell when the chief died: the chief's body would be brought down a special path and the conch would be played until the chief's body reached the end of the path. In Malta the instrument is called a bronja, colloquially known as tronga. The shell of a sea snail is modified with a hole at one end, and when blown it creates a strong noise that carries a long distance across a Maltese village. The bronja was generally used to inform the people that the windmills on the islands were operating that day, the wind being strong enough to mill wheat and other grains. The American jazz trombonist Steve Turre also plays conches, notably with his group Sanctified Shells. A partially echoplexed Indian conch was featured prominently as the primary instrument depicting the extraterrestrial environment of the derelict spaceship in Jerry Goldsmith's score for the film Alien. Director Ridley Scott was so impressed by the eerie effect that he requested its use throughout the rest of the score, including the Main Title.
Say African American or Black, but first acknowledge the persistence of structural racism In February, Urban Institute researchers writing on Urban Wire will explore racial disparities in housing and criminal justice and the structural barriers that continue to disadvantage the black population in the United States. In February, the United States and several other countries celebrate black history. In the United States, this time is also called African American History Month. The evolution of this celebration and its names over the years reflect differences in views of the population and how we perceive the legacy of racism and structural disadvantage faced by black people in the United States. The designation of February as the month to celebrate African American achievements originated with Carter G. Woodson and the Association for the Study of Negro Life and History. The monthlong celebration grew out of Negro History Week, chosen as the second week in February because it coincided with the birthdays of Abraham Lincoln and Frederick Douglass. Yet some cynics have suggested that February was selected because it was the shortest month of the year, and once again, black people were being shortchanged even as they were being celebrated. Today, some people view “black” and “African American” interchangeably. But many have strong opinions that “African American” is too restrictive for the current US population. In part, the term African American came into use to highlight that the experiences of the people here reflect both their origins in the African continent and their history on the American continent. But recent immigrants from Africa and the Caribbean have different combinations of history and experience, so some have argued that the term “black” is more inclusive of the collective experiences of the US population. About 10 percent of the 46.8 million black people in the United States are foreign born. Contrary to the perception of some, immigrants from Africa are more likely to have a college education than the black population as a whole and the US population as a whole. Data on children of immigrants reveal that 44 percent of children in the United States with parents from Africa and the West Indies have parents with at least a college degree, compared with 24 percent of all black children and 40 percent of all children with native-born parents. Structural racism disadvantages black people in America Diversity of opinion and disparities in educational attainment are not the only differences within the black population. People often focus only on the median income of black households ($39,490) versus non-Hispanic white households ($65,041), which leaves the impression that all black households have lower incomes. But the range of household incomes within the black population is broader than within the white population. Mean 2016 household income for the bottom fifth of black households was $7,334, while it was $153,249 for the top fifth, nearly 21 times the income of the bottom 20 percent. This ratio is higher than the ratio (15 times) for white households. Nevertheless, black households in the highest income group within their race have mean incomes about 68 percent of white households in the highest income group. Structural barriers that stand in the way of black achievements can help explain these disparities. For example, in housing and criminal justice, race predicts outcomes even after controlling for income and education.
In a series of blog posts that will be released this month, the structural racism initiative at Urban will focus on housing and criminal justice because they indicate how structural racism and racial stereotypes keep black people from reaching goals comparable with their white counterparts. Housing disparities lead to other disparities Low levels of homeownership and limited access to decent and affordable housing keep black households from building wealth and accessing high-quality neighborhood services, such as education. Less than half of black households are homeowners, compared with nearly three-quarters of white households. Consequently, high-achieving black families find it harder to pass on economic and educational advantages at the same rate as white families. Strategies to change that dynamic have been offered but are not part of the national agenda. Recommendations for new initiatives will soon be released by the US Partnership on Mobility from Poverty. The criminal justice system is also racially unequal Black people are more likely to become involved with the criminal justice system. Many of these disparities arise from differences in how certain actions by black people versus white people are treated by schools, police, hospitals, and other institutions, with serious consequences for black people. Black people make up more than one-third of people in federal and state prisons, nearly three times their representation in the population. Once they become involved, black people are more likely to carry the consequences throughout their lives. We can implement strategies that change how people are treated after they have been incarcerated and strategies that reduce encounters with law enforcement and the negative consequences that follow those encounters. Some of these strategies will be explored in the structural racism blog series this month. While we have much to celebrate about black achievement in the United States, many structural barriers continue to keep black people from achieving more. As a country, we should not be content to recognize the many accomplishments of black people, but we should acknowledge how much more we have to do to promote equitable outcomes. Protesters at the March on Washington. Photo by Wally McNamee/Corbis via Getty Images.
You might think of the Universe as infinite, and quite honestly, it might truly be infinite, but we don't think we'll ever know for sure. Thanks to the Big Bang -- the fact that the Universe had a birthday, or that we can only go back a finite amount of time -- and the fact that the speed of light is finite, we're limited in how much of the Universe we can see. By the time you get to today, the observable Universe, at 13.8 billion years old, extends for 46.1 billion light years in all directions from us. So how big was it all the way back then, some 13.8 billion years ago? Let's look to the Universe we see to find out. When we look out at the distant galaxies, as far as our telescopes can view, there are some things that are easy to measure, including:
- what its redshift is, or how much its light has shifted from an inertial frame-of-rest,
- how bright it appears to be, or how much light we can measure from the object at our great distance,
- and how big it appears to be, or how many angular degrees it takes up on the sky.
These are very important, because if we know what the speed of light is (one of the few things we know exactly), and how intrinsically bright or big the object we're looking at is (which we think we know; more in a second), then we can use this information all together to know how far away any object actually is. In reality, we can only make estimates of how bright or big an object truly is, because there are assumptions that go into this. If you see a supernova go off in a distant galaxy, you assume that you know how intrinsically bright that supernova was based on the nearby supernovae that you've seen, but you also assume that the environment in which that supernova went off was similar, the supernova itself was similar, and that there was nothing in between you and the supernova that changed the signal you're receiving. Astronomers call these three classes of effects evolution (if older/more distant objects are intrinsically different), environmental (if the locations of these objects differ significantly from where we think they are) and extinction (if something like dust blocks the light) effects, in addition to the effects we may not even know are at play. But if we're right about the intrinsic brightness (or size) of an object we see, then based on a simple brightness/distance relation, we can determine how far away those objects are. Moreover, by measuring their redshifts, we can learn how much the Universe has expanded over the time the light has traveled to us. And because there's a very well-specified relationship between matter-and-energy and space-and-time -- the exact thing Einstein's General Relativity gives us -- we can use this information to figure out all the different combinations of all the different forms of matter-and-energy present in the Universe today. But that's not all!
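(As a quick aside, the brightness/distance relation is just the inverse-square law. The minimal Python sketch below uses hypothetical numbers, an assumed supernova luminosity and a made-up measured flux, to show how a distance falls out; it is an illustration, not the article's actual calculation.)

import math

# Inverse-square sketch: F = L / (4 * pi * d^2), so d = sqrt(L / (4 * pi * F)).
# Both numbers below are hypothetical stand-ins, not measurements.
L = 5e9 * 3.828e26      # assumed Type Ia supernova peak luminosity, watts
F = 1.3e-12             # assumed measured flux at Earth, W/m^2

d_m = math.sqrt(L / (4 * math.pi * F))   # luminosity distance in meters
print(f"distance ~ {d_m / 9.461e15:.2e} light years")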
If you know what your Universe is made out of, which is:
- 0.01% — Radiation (photons)
- 0.1% — Neutrinos (massive, but ~1 million times lighter than electrons)
- 4.9% — Normal matter, including planets, stars, galaxies, gas, dust, plasma, and black holes
- 27% — Dark matter, a type of matter that interacts gravitationally but is different from all the particles of the Standard Model
- 68% — Dark energy, which causes the expansion of the Universe to accelerate,
you can use this information to extrapolate backwards in time to any point in the Universe's past, and find out both what the different mixes of energy density were back then, as well as how big it was at any point along the way. Because of how illustrative they are, I'm going to plot these on logarithmic scales for you to view. As you can see, dark energy may be important today, but this is a very recent development. For most of the first 9 billion years of the Universe's history, matter -- in the combined form of normal and dark matter -- was the dominant component of the Universe. But for the first few thousand years, radiation (in the form of photons and neutrinos) was even more important than matter! I bring these up because these different components, radiation, matter and dark energy, all affect the expansion of the Universe differently. Even though we know that the Universe is 46.1 billion light years in any direction today, we need to know the exact combination of what we have at each epoch in the past to calculate how big it was at any given time. Here's what that looks like. Here are some fun milestones, going back in time, that you may appreciate:
- The diameter of the Milky Way is 100,000 light years; the observable Universe had this as its radius when it was approximately 3 years old.
- When the Universe was one year old, it was much hotter and denser than it is now. The mean temperature of the Universe was more than 2 million Kelvin.
- When the Universe was one second old, it was too hot to form stable nuclei; protons and neutrons were in a sea of hot plasma. Also, the entire observable Universe would have a radius that, if we drew it around the Sun today, would enclose just the seven nearest star systems, with the farthest being Ross 154.
- The observable Universe was once just the radius of the Earth-Sun distance, which happened when the Universe was about a trillionth (10⁻¹²) of a second old. The expansion rate of the Universe back then was 10²⁹ times what it is today.
If we want to, we can go back even farther, of course, to when inflation first came to an end, giving rise to the hot Big Bang. We like to extrapolate our Universe back to a singularity, but inflation takes the need for that completely away. Instead, it replaces it with a period of exponential expansion of indeterminate length to the past, and it comes to an end by giving rise to a hot, dense, expanding state we identify as the start of the Universe we know. We are connected to the last tiny fraction of a second of inflation, somewhere between 10⁻³⁰ and 10⁻³⁵ seconds' worth of it. Whenever that time happens to be, where inflation ends and the Big Bang begins, that's when we need to know the size of the Universe. Again, this is the observable Universe; the true "size of the Universe" is surely much bigger than what we can see, but we don't know by how much.
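(As a rough illustration of that extrapolation -- a minimal sketch assuming approximate density parameters and a flat Universe, not the precise calculation behind the figures above -- one can integrate the Friedmann equation numerically and scale today's 46.1-billion-light-year radius back by the scale factor:)

import numpy as np

# Flat-Universe Friedmann sketch: H(a) = H0 * sqrt(Om_r/a^4 + Om_m/a^3 + Om_L),
# and cosmic time t(a) = integral of da / (a * H(a)). Values are assumed/rough.
H0 = 67.7 / 3.086e19                  # Hubble constant in s^-1 (from km/s/Mpc)
Om_r, Om_m, Om_L = 9e-5, 0.32, 0.68   # radiation, matter, dark energy (assumed)
R_today = 46.1                        # radius of the observable Universe today, Gly

def hubble(a):
    return H0 * np.sqrt(Om_r / a**4 + Om_m / a**3 + Om_L)

a = np.logspace(-12, 0, 200000)       # scale factor, normalized to 1 today
t = np.cumsum(np.gradient(a) / (a * hubble(a)))   # time in seconds (crude sum)

for a_target in (1e-10, 1e-6, 1e-3, 1.0):
    i = np.searchsorted(a, a_target)
    print(f"a = {a_target:.0e}: t ~ {t[i]:.2e} s, radius ~ {R_today * a_target:.2e} Gly")

(Reassuringly, this crude sum reproduces the milestones above to order of magnitude: at a of a few times 10⁻¹⁰, corresponding to roughly one second, the observable radius comes out at around ten light years.)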
Our best limits, from the Sloan Digital Sky Survey and the Planck satellite, tell us that if the Universe does curve back in on itself and close, the part we can see is so indistinguishable from "uncurved" that it must be at least 250 times the radius of the observable part. In truth, it might even be infinite in extent, as whatever the Universe did in the early stages of inflation is unknowable to us, with everything but the last tiny fraction-of-a-second of inflation's history being wiped clean from what we can observe by the nature of inflation itself. But if we're talking about the observable Universe, and we know we're only able to access somewhere between the last 10⁻³⁰ and 10⁻³⁵ seconds of inflation before the Big Bang happens, then we know the observable Universe is between 17 centimeters (for the 10⁻³⁵ second version) and 168 meters (for the 10⁻³⁰ second version) in size at the start of the hot, dense state we call the Big Bang. The smallest conceivable answer -- 17 centimeters -- is about the size of a soccer ball! The Universe couldn't have been much smaller than that, since the constraints we have from the Cosmic Microwave Background (the smallness of the fluctuations) rule that out. And it's very conceivable that the entire Universe is substantially larger than that, but we'll never know by how much, since all we can observe is a lower limit on the true size of the actual Universe. So how big was the Universe when it was first born? If the best models of inflation are right, somewhere between the size of a human head and a skyscraper-filled city block. Just give it time -- 13.8 billion years in our case -- and you wind up with the entire Universe we see today.
Every spring and every fall, we diligently change the time on our clocks forward or back, lamenting the loss of an hour or rejoicing in additional time to sleep. As a longstanding part of American tradition, it's easy to change the clock one way or the other without taking the time to consider why we make these adjustments each and every year.

Maximizing Daylight Hours

Daylight saving time, as the name implies, is designed to maximize daylight hours in accordance with the seasons. In the fall, as the days become shorter and the weather gets cooler, we "fall back," or set the time back an hour. Then, in the spring, as the weather starts to warm up and the days grow longer, we "spring forward," moving the time up an hour so that daylight lasts longer into the evening. 2:00 AM is the official time at which DST starts and ends in the United States. Due to its inconvenience, most individuals have historically changed their clocks the night before, prior to going to sleep. Today, however, cell phone and computer clocks often change automatically, ensuring each time change happens promptly and accurately. Although it's now second nature, the U.S. didn't always change the time twice a year. DST was first proposed by Benjamin Franklin in 1784 after a trip to France. While visiting, he noticed that, although there was no official time change procedure, the French woke up earlier in the winter and later in the summer to maximize the daylight hours. At the time, however, the government wasn't interested in such a radical idea.

Daylight Saving Catches On

During World War One, the idea of daylight saving caught on in European countries. Germany was the first to adopt the practice in 1916, implementing a time change twice a year in hopes of saving money on coal. In 1918, the United States followed suit. The time change schedule held steady until 2007, when the government elected to extend DST by several weeks, with time changes now occurring in March and November rather than April and October. Today, daylight saving time is observed in all states except Hawaii and most of Arizona, and by dozens of countries around the world. In the United States, the time changes forward one hour at 2:00 AM on the second Sunday in March and back one hour at 2:00 AM on the first Sunday in November, allowing Americans to enjoy as much sunlight as possible all year round.
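(To make that schedule concrete, here is a small Python sketch, not part of the original article, that computes the two change dates for a given year; the year chosen is just an example.)

from datetime import date, timedelta

def nth_sunday(year, month, n):
    # Date of the nth Sunday of a month; weekday() runs Mon=0 ... Sun=6.
    d = date(year, month, 1)
    d += timedelta(days=(6 - d.weekday()) % 7)   # advance to the first Sunday
    return d + timedelta(weeks=n - 1)

year = 2024  # hypothetical year for illustration
print("DST starts:", nth_sunday(year, 3, 2))   # second Sunday in March
print("DST ends:  ", nth_sunday(year, 11, 1))  # first Sunday in November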
When calculating electrical circuits, including for modeling purposes, Kirchhoff's laws are widely applied; they make it possible to completely determine a circuit's mode of operation. Before moving on to Kirchhoff's laws themselves, we define the branches and nodes of an electric circuit. A branch of an electric circuit is a section that consists only of series-connected EMF sources and resistances, along which the same current flows. A node of an electric circuit is a point where three or more branches connect. By traversing branches connected at nodes, one can form a closed loop of the electric circuit. Each loop is a closed path passing through several branches, with each node occurring no more than once in the loop under consideration.

Kirchhoff's first law applies to nodes and is formulated as follows: the algebraic sum of currents at a node is zero: ∑i = 0, or in complex form ∑I = 0. Kirchhoff's second law applies to the closed loops of an electric circuit and is formulated as follows: in any closed loop, the algebraic sum of the voltages across the resistances included in that loop is equal to the algebraic sum of the EMFs: ∑Z ∙ I = ∑E.

The number of equations written for an electric circuit according to Kirchhoff's first law is Nn – 1, where Nn is the number of nodes. The number of equations written according to Kirchhoff's second law is Nb – Nn + 1, where Nb is the number of branches. The number of second-law equations is easy to determine from the circuit diagram: it is enough to count the number of "windows" in the circuit, with one clarification: a "window" containing a current source is not counted.

Let us describe the methodology for composing equations according to Kirchhoff's laws, using the electrical circuit shown in Fig. 1 as an example.

Fig. 1. The electrical circuit under consideration

To begin, specify arbitrary directions for the currents in the branches and the directions of the loop traversals (Fig. 2).

Fig. 2. Setting the directions of the currents and the directions of loop traversal for the electrical circuit

The number of equations according to Kirchhoff's first law is 5 – 1 = 4 in this case. The number of equations according to Kirchhoff's second law is 3, even though there are 4 "windows"; recall that the "window" containing the current source J1 is not counted. We compose the equations according to Kirchhoff's first law, taking currents flowing into a node with a "+" sign and currents flowing out with a "–" sign. Hence, for node "1 n." the equation according to Kirchhoff's first law reads: I1 – I2 – I3 = 0; for node "2 n.": –I1 – I4 + I6 = 0; for node "3 n.": I2 + I4 + I5 – I7 = 0; for node "4 n.": I3 – I5 – J1 = 0. No equation needs to be written for node "5 n." We then compose the equations according to Kirchhoff's second law. In these equations, currents and EMFs are taken as positive if their directions coincide with the direction of the loop traversal.
For loop "1 c." the equation according to Kirchhoff's second law reads: ZC1 ∙ I1 + R2 ∙ I2 – ZL1 ∙ I4 = E1; for loop "2 c.": –R2 ∙ I2 + R4 ∙ I3 + ZC2 ∙ I5 = E2; for loop "3 c.": ZL1 ∙ I4 + (ZL2 + R1) ∙ I6 + R3 ∙ I7 = E3, where ZC = –j/(ωC) and ZL = jωL. Thus, in order to find the sought currents, it is necessary to solve a system of 7 equations with 7 unknowns. To solve this system of equations, it is convenient to use Matlab. To do this, we represent the system of equations in matrix form and use the following script:

>> syms R1 R2 R3 R4 Zc1 Zc2 Zl1 Zl2 J1 E1 E2 E3;
>> A = [1 -1 -1 0 0 0 0; -1 0 0 -1 0 1 0; 0 1 0 1 1 0 -1; 0 0 1 0 -1 0 0; Zc1 R2 0 -Zl1 0 0 0; 0 -R2 R4 0 Zc2 0 0; 0 0 0 Zl1 0 (R1+Zl2) R3];
>> b = [0; 0; 0; J1; E1; E2; E3];
>> I = A\b

As a result, we obtain a column vector I of seven elements containing the sought currents in general (symbolic) form. We see that the Matlab software package can significantly simplify the solution of complex systems of equations composed according to Kirchhoff's laws.
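(As a cross-check, the same linear system can also be solved numerically. Below is a Python/NumPy sketch with the same coefficient matrix; the element values, impedances, and source values are made up purely for illustration and are not taken from the circuit in Fig. 1.)

import numpy as np

# Hypothetical element values (ohms, volts, amperes); the impedances are
# phasor-domain values Zc = -j/(w*C) and Zl = j*w*L at some assumed frequency.
R1, R2, R3, R4 = 10.0, 20.0, 15.0, 5.0
Zc1, Zc2 = -25j, -40j
Zl1, Zl2 = 30j, 12j
E1, E2, E3 = 10.0, 5.0, 12.0
J1 = 0.5

# Same coefficient matrix as the Matlab script: four KCL rows, three KVL rows.
A = np.array([
    [1, -1, -1, 0, 0, 0, 0],
    [-1, 0, 0, -1, 0, 1, 0],
    [0, 1, 0, 1, 1, 0, -1],
    [0, 0, 1, 0, -1, 0, 0],
    [Zc1, R2, 0, -Zl1, 0, 0, 0],
    [0, -R2, R4, 0, Zc2, 0, 0],
    [0, 0, 0, Zl1, 0, R1 + Zl2, R3],
], dtype=complex)
b = np.array([0, 0, 0, J1, E1, E2, E3], dtype=complex)

I = np.linalg.solve(A, b)   # branch currents I1..I7 as complex phasors
for k, current in enumerate(I, start=1):
    print(f"I{k} = {current:.4f} A")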
Use-Cases of this Tutorial
- Know the correct tag for specifying definitions in HTML pages
- Know about the HTML Definition element <dfn>

The HTML Definition element, represented by <dfn>, is the standard markup used to highlight terms being defined in webpages.

<p>A <dfn>synonym</dfn> is a word having almost the same meaning as a different word.</p>

Using standard markup elements brings semantic meaning to the page, especially for automated machines like web crawlers. A web crawler can read a <dfn> tag and learn that a term is being defined there. This term and its definition can then be used by the crawler for specific purposes. Note that the <dfn> tag holds the term being defined, not the definition. The associated definition needs to appear nearby for the <dfn> element to be meaningful.

Identifying the Term Being Defined

The following rules are used to identify the term being defined:

If the title attribute is present in the <dfn> element, its value is considered to be the term being defined (the element should still contain text to be displayed).

<!-- "CSS" is the defined term -->
<p><dfn title="CSS">CSS</dfn> stands for Cascading Style Sheets.</p>

If <dfn> contains no inner text and an <abbr> element is the only child element in it, then the title attribute of the <abbr> element is considered the term being defined (if the title attribute is present).

<!-- "PHP" is the defined term -->
<p><dfn><abbr title="PHP"></abbr></dfn> stands for Hypertext Preprocessor.</p>

In the remaining cases, the text content of <dfn> is considered to be the term being defined.

<!-- "CSS" is the defined term -->
<p><dfn>CSS</dfn> stands for Cascading Style Sheets.</p>

Identifying the Complete Definition of the Term

The complete definition for the term represented by <dfn> can be found in the nearest enclosing <p> or <section>, or in a <dt> / <dd> pair.

<!-- "CSS" is the defined term -->
<!-- "CSS stands for Cascading Style Sheets." is the definition -->
<p><dfn>CSS</dfn> stands for Cascading Style Sheets.</p>
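For the <dt> / <dd> case, a sketch (not from the original tutorial) might look like this, with the <dd> carrying the definition:

<!-- "HTML" is the defined term; the <dd> holds its definition -->
<dl>
  <dt><dfn>HTML</dfn></dt>
  <dd>The standard markup language used to structure content on the web.</dd>
</dl>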
How can teachers determine whether students are making appropriate progress?

Page 3: Select a Measure

The first step in the progress monitoring process is to select a measure. Recall that these measures should include sample items for all skills across the entire academic year. Often, the mathematics program selected by the school or district will include grade-level progress monitoring measures. In other instances, specific GOM measures might be chosen by school, district, or state administrators. This is typically the case when a school is using an MTSS or RTI framework for instruction. Teachers can also decide independently to use GOM measures to monitor student progress and make instructional decisions. Regardless of who is making the choice, it's important to keep several factors in mind when selecting a GOM measure:
- Does it align with the grade-level mathematics skills?
- Is the measure reliable and valid?
- Does the measure have sufficient alternate versions?
- Is the measure relatively quick (e.g., two to ten minutes) and easy to administer?
- Is the measure designed to be administered to individual students or to groups? (Group-administered tests are often more convenient than individually administered ones.)
- Are versions of the test available in languages other than English?

For the earlier grade levels (e.g., kindergarten, 1st grade), teachers will most likely need to assess early numeracy skills, such as number identification. However, for students who have mastered these basic skills, the teacher should administer two types of mathematics probes: computation probes and concepts and applications probes. Click on the links below to view samples of each.

Computation Probe: Measures students' procedural knowledge (e.g., ability to add fractions).
Concepts and Applications Probe*: Assesses conceptual understanding of mathematics or students' ability to apply mathematics knowledge (e.g., to make change from a purchase).

The sample secondary computation probe below is designed to assess students' basic algebraic skills. Note that a normal probe would contain 60 questions and allow students five minutes to complete them. This 30-item example, the first page of a two-page probe, is presented here for the sake of brevity and illustrative purposes.

Project AAIMS. (2014). Project AAIMS algebra progress monitoring measures [Algebra Basic Skills, Algebra Foundations]. Ames, IA: Iowa State University, College of Human Sciences, School of Education, Project AAIMS.

* No valid middle or high school concepts and applications probes are available at this time.

For Your Information

Though a variety of GOM measures are commercially available in mathematics for grades K through 12, tests for secondary students are limited. These standardized measures typically include the tests, administration procedures, and scoring guides that have been developed to produce reliable and valid scores. Additionally, student benchmarks and expected rate of improvement (ROI) are often provided by the developer. The National Center on Intensive Intervention (NCII) provides a tools chart that presents information about commercially available progress monitoring probes that have been reviewed by a panel of experts and rated on key features. Click here to use this tools chart. For information specifically about algebra measures, see the Project AAIMS website.

There is a lack of available validated measures to assess the mathematics skills of high school students.
This is especially true of measures that assess students' conceptual understanding. David Allsopp discusses an option for assessing this type of understanding (time: 1:56).

David Allsopp, PhD
Assistant Dean for Education and Partnerships
University of South Florida

Selecting Measures for Struggling Students

Sam is a 4th-grade student.
- Sam was performing at the 2nd-grade level at the end of last year.
- The teacher administers 2nd-grade math probes on two separate days. Sam's average on the two probes is 12.
- The teacher will administer 2nd-grade probes for the year.

Grade-level GOM measures are appropriate for typically achieving students as well as for many who are struggling. However, these measures might not be appropriate for students who are consistently not performing at grade level. These students might require a measure designed for a different grade level. Sometimes, commercially available measures include directions on selecting an appropriate grade-level probe for these students. If they do not, however, teachers can use the following procedure (a code sketch of the rule follows this list):
- Identify the grade level at which the student was performing at the end of the prior academic year.
- On two separate days, administer a GOM test at the grade level at which the student was performing at the end of the previous year.
- If the average of the two scores is less than 10, then use probes one grade level below where the student was performing at the end of the prior year.
- If the average of the two scores is between 10 and 15, use probes at this grade level.
- If the average of the two scores is greater than 15, use probes one grade level above where the student was performing at the end of the previous year.
- Maintain the appropriate grade-level probes for the entire year.
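(For readers who like to see the decision rule spelled out, here is a minimal Python sketch of the procedure above; the function name and inputs are illustrative, not part of any published tool.)

def choose_probe_level(prior_grade, score1, score2):
    # Average the two screening probes, then apply the thresholds above.
    avg = (score1 + score2) / 2
    if avg < 10:
        return prior_grade - 1   # probe one grade level below
    elif avg <= 15:
        return prior_grade       # probe at this grade level
    else:
        return prior_grade + 1   # probe one grade level above

# Sam: performing at the 2nd-grade level, with probe scores averaging 12.
print(choose_probe_level(2, 12, 12))   # -> 2, so 2nd-grade probes all year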
Antibiotic Resistance, Food, and Food-Producing Animals

Antibiotics are medicines that kill or stop the growth of bacteria. Antibiotics save lives, but any time antibiotics are used, they can contribute to the development and spread of antibiotic resistance. Antibiotic resistance happens when bacteria develop the ability to survive or grow despite being exposed to antibiotics designed to kill them. Antibiotic resistance spreads through people, animals, and the environment. Improving antibiotic use, including reducing unnecessary use, can help stop resistance from spreading. Read on to learn what CDC is doing to help stop antibiotic-resistant infections from food and animals, and how you can protect yourself and your family.

Animals and Food

Animals carry bacteria in their guts just as people do. Some of these bacteria may be antibiotic resistant. Antibiotic-resistant bacteria can get into food in several ways:
- They can spread to meat and poultry when animals are slaughtered and processed.
- Animal waste (poop) can contain resistant bacteria and get into the surrounding environment.
- Fruits and vegetables can get contaminated through contact with soil, water, or fertilizer that contains animal waste.

Transmission of Antibiotic-Resistant Intestinal Infections to People

People can get antibiotic-resistant intestinal infections from various sources, including food. People also can get these infections by handling or eating contaminated food or coming in contact with animal waste (poop), either through direct contact with animals and animal environments or through contaminated drinking or swimming water. People with intestinal infections usually do not need antibiotics to get better. However, people with severe infections (or those at risk of severe infections) may need antibiotics. People at risk for serious disease or complications include infants, people who are 65 and older, and people who have health problems or take medicines that lower the body's ability to fight germs and sickness. If an infection is antibiotic resistant, some types of antibiotics might not effectively treat it. Infections with resistant organisms can cause more severe or dangerous illness.

What CDC Is Doing

CDC is working to prevent infections caused by antibiotic-resistant bacteria by:
- Tracking resistant infections and studying how resistance emerges and spreads.
- Detecting and investigating antibiotic-resistant outbreaks quickly to identify their sources and stop and prevent their spread.
- Determining the sources of antibiotic-resistant infections that are commonly spread through food and animals.
- Strengthening the ability of state and local health departments to detect, respond to, and report antibiotic-resistant infections.
- Educating the public and food workers on prevention methods, including safe food handling, safe contact with animals, and proper handwashing.
- Promoting the improved use of antibiotics in people and animals.

Foodborne Illnesses: Protect Yourself and Your Family

You can take steps to help protect yourself and your family from antibiotic-resistant foodborne illnesses.
- Take antibiotics only when needed, and take them exactly as prescribed.
- Follow simple food safety tips:
- CLEAN. Wash your hands after touching raw meat, poultry, seafood, or their juices, or uncooked eggs. Wash your work surfaces, cutting boards, utensils, and dishes before, during, and after cooking.
- SEPARATE.
Germs from raw meat, poultry, seafood, and eggs can spread to fruits, vegetables, and other ready-to-eat foods unless you keep them separate. Use one cutting board to prepare raw meats and another for foods that will not be cooked before they’re eaten. Don’t put cooked meat on a plate that had raw meat on it. - COOK. Use a food thermometer to ensure that foods are cooked to a safe internal temperature: 145°F for whole cuts of beef, pork, lamb, and veal, such as steaks, chops, and roasts, 160°F for ground red meats, and 165°F for poultry, including ground chicken and turkey. - CHILL. Keep your refrigerator below 40°F and refrigerate foods within 2 hours of cooking (refrigerate within 1 hour if the outdoor temperature is above 90°F). - Wash your hands after touching pets and other animals, or their food, water, poop, belongings (such as toys and bowls), or habitats (such as beds, cages, tanks, coops, stalls, and barns). - Report suspected illness from food to your local health department. - Review CDC’s Traveler’s Health recommendations when preparing for international travel. For more information on antibiotic resistance and food safety, visit CDC’s Antibiotic Resistance, Food and Animals page.
In the past, I've taught biomimicry to 5th grade students. Biomimicry is innovation and design that takes nature as its inspiration. This concept was simple for older students, but I noticed it aligned best with first grade science standards. What I didn't expect was for it to be as challenging for the 1st grade students as it was. We began by deciphering the word. Next, we looked at tons of examples of product design that demonstrated biomimicry. We played a guessing game, viewing the product and guessing what part of nature it mimicked. Next, we reversed the game. I showed students a photo of something from nature and they thought of design ideas. This is where it got tricky. I heard responses such as "The cat reminds me of a dog" or "The vines look like a tree." Students had a very difficult time distinguishing living things in nature from inanimate objects. Students also seemed very stuck when inventing their own ideas. Many of their designs were examples we had shared as a whole group. One solution I found after this day was to break down the characteristics of the inspiration. I had students think of their favorite animal, draw it, then describe various parts of it and compare each part to a non-living thing. For example, "The tail is ___, like a ___," and a student might fill in "The tail is fluffy, like a duster." This step helped guide students along. After a couple of sessions of talking, giving examples, drawing, and sharing our ideas, students were getting the hang of it. They soon prototyped their creations, and they turned out fantastic! Check out some of their ideas!
Diana Baumrind, a developmental psychologist, is known for her research on parenting styles. Parenting styles represent approaches to how parents manage their children's behavior, which in turn influences their development. This lesson explores the four different approaches and uses clips from television and movies to test students' understanding of them.
- Show students the PPT and ask them to describe the characteristics of each object shown. Objects include: a marshmallow, a tennis ball, a rock, and an old fish. Ask students to share their characteristics and ask what the objects might have in common with parenting.
- Projector & Screen
- Glue & Scissors
- Discuss Diana Baumrind and her theory of parenting styles. Explain that parenting styles represent approaches to how parents manage their children's behavior, which in turn influences their development.
- Have students read the article "Parenting Styles: Four Styles of Parenting" and have students create a four-quadrant foldable to take notes on. Ask students to match up the objects from the introductory activity with each style and draw each object in the quadrant it best represents.
- Our school district has just adopted the Collins Writing method, so I included some writing prompts based on that method. You can adapt the prompts to fit the method your school uses.
- What kind of parents do you have? Explain why you believe this. (Type I)
- If you could choose any of the parenting styles for your parents to use, which would you choose and why? Provide at least 3 details about your chosen parenting style in your explanation. (Type II)
- Reinforce this information by having students complete the "Parenting Styles at a Glance" cut-and-paste activity (found in the lesson plan below). Students may refer to their grid or the article to help them if necessary. Cut out characteristics and glue them into the correct parenting style category.
- Practice or quiz students by showing them various clips from television and movies of parents and children. Students are to identify the parenting style represented in each. Stop and discuss each along the way and have students justify their responses, or show one right after the other as a quiz.
- Parenting Styles Lesson Plan PDF
- Parenting Styles Intro Activity PPT
- Parenting Styles YouTube Quiz–Key (Word)
- Parenting-Styles-at-a-Glance-Key PDF
- Picture Courtesy of Dreamstime
6. Deforestation and desertification in developing countries

R. K. Pachauri and Rajashree S. Kanetkar

This paper takes a fresh look at two of the major environmental hazards affecting the planet, namely deforestation and desertification, in terms of the nature and magnitude of the problem as faced by the developing world, and their causes and effects. The Indian scenario and the various measures that have been adopted so far to combat the problem are reviewed. The role of forestry in controlling desertification and strategies for sound economic development while conserving the global environment are also discussed.

Much of the earth is degraded, is being degraded, or is at risk of degradation. Marine, freshwater, atmospheric, near-space, and terrestrial environments have suffered and continue to suffer degradation. This paper focuses on terrestrial degradation - which may be defined as the loss of utility or potential utility, or its reduction, or the loss or change of features or organisms that cannot be replaced (Barrow, 1991) - and on deforestation and desertification in particular. The processes of deforestation and desertification, which are widespread, discrete when caused by human actions, and continuous when they occur naturally, are two of the major environmental concerns addressed by Agenda 21, developed for the United Nations Conference on Environment and Development (UNCED) in June 1992.

The forests that occupy more than a quarter of the world's land area are of three broad types - tropical moist and dry, temperate, and degraded. The rapid loss of tropical forests, due to competing land uses and forms of exploitation that often prove to be unsustainable, is a major contemporary environmental issue. The main concern globally is with tropical forests that are disappearing at a rate that threatens the economic and ecological functions that they perform. Deforestation in developing countries is more recent, with tropical forests having declined by nearly one-fifth so far in this century. Areas of forests and woodlands at the end of 1980 as assessed by the Food and Agriculture Organization of the UN (FAO) are shown in figure 6.1.

Deforestation is a much-used, ill-defined, and imprecise term that tends to imply quantitative loss of woody vegetation. There can also be qualitative changes in forests, from, say, species-diverse tropical forests to single-species eucalyptus or pine plantations, or to less species-rich secondary (regrowth) forests. Each year, around 4 million hectares (ha) of virgin tropical forests are converted into secondary forests (Barrow, 1991). However, there is little distinction in most of the literature between vegetation loss that will "heal" and that which will not.

Fig. 6.1 Area of forests and woodlands by continent, end 1980 (million km²)

Deforestation profiles - no simple stereotypes

According to the 1989 World Development Report (World Bank, 1989), the 14 developing countries in South America, Africa, and South-East Asia where more than 250,000 ha of tropical forests are destroyed annually represent a wide range of third world development problems.
They defy easy stereotypes: populations ranged from 11 million in Ecuador to 853 million in India; the percentage of the total population living in rural areas ranged from 15 per cent in Argentina to 82 per cent in Thailand; per capita GNP ranged from US$170 in Zaire to over US$2,600 in Argentina; total debt owed to foreign institutions varied from US$9 billion by Zaire to over US$110 billion by Brazil; and this debt as a percentage of total GNP ranged from 31 per cent in Peru to 140 per cent in Zaire (Wood, 1990). More recent statistics on deforestation suggest that, for tropical forests, the overall annual rate in the 1980s was 0.9 per cent. This is also the rate in Latin America, with Asia's rate somewhat higher (1.2 per cent) and Africa's somewhat lower (0.8 per cent) (World Bank, 1992). However, current rates of deforestation do not provide an indication of the current condition of forests on these continents, because damage may already have taken place at a very high level in the past, leaving highly devastated areas wherein the scope for further damage is low. Figures 6.2 and 6.3 illustrate the extent of forest cover and deforestation rates in developing countries in the Asia and Pacific region.

The causes of contemporary deforestation

Severe human pressures on forests in many tropical developing countries, especially those resulting from a need to provide for the welfare of numerous poor rural dwellers, will continue to threaten the existence of these resources. In parallel, forests continue to be lost in many developed countries owing to over-harvesting, inadequate regeneration, clearance for agriculture and urbanization, and air pollution. The major causes of deforestation are discussed below.

Human population growth, agricultural expansion, and resettlement

Forest degradation and loss from the spontaneous expansion of people's activities into forest lands is notoriously difficult to quantify. Shifting agriculture is the primary cause of deforestation, accounting for about 45 per cent of the 7.5 million ha of tropical forest losses in 1976-1980. In 1980 it accounted for 35 per cent of deforestation in Latin America, 70 per cent in Africa, and 49 per cent in South-East Asia (notably Sri Lanka, Thailand, north-east India, Laos, Malaysia, and the Philippines) (Tolba et al., 1992).

Fig. 6.2 Area of forests and woodlands in the top 10 countries in the Asia and Pacific region, 1983 (million hectares)

Fig. 6.3 Annual deforestation by country in the Asia and Pacific region, 1976-1980 ('000 hectares)

Grazing and ranching

Domestic animals in tropical woodlands and forests reduce regeneration through grazing, browsing, and trampling. India alone has about 15 per cent of the world's cattle, 46 per cent of its buffaloes, and 17 per cent of its goats. The spread of irrigated and cultivated land in India has forced livestock owners into forest areas, where 90 million of the estimated 400 million cattle now reside, whereas the carrying capacity is estimated at only 31 million (Government of India, 1987).

Fuelwood and charcoal

Exploitation for fuelwood and charcoal is mainly a problem of tropical and subtropical woodlands, although there are examples of closed forests being severely affected (notably in India, Sri Lanka, and Thailand). On a global scale, increasing demand for industrial roundwood accounts for marginally less exploitation than fuelwood, although it remained high at around 1.7 billion m³ in 1989 (Barrow, 1991).
A smaller, but none the less significant, reason for the removal of natural forests is the planting of tropical tree crops such as rubber, oil palm, and eucalyptus, a concern heightened by the fact that the rate of plantation establishment is lower than the rate of deforestation. The overall stress of pollution brings about nutrient deficiencies, thereby rendering the vegetation vulnerable to droughts, insects, and pests. The growth, development, and decline of forests have always reflected the integrated effects of many variables. Acid rain has now been added to this list. It may not be possible to establish definite proof of the link between acid precipitation and damage to vegetation. The body of circumstantial evidence is large, however, and supports the view that the terrestrial environment is under some threat from acid rain. In addition to the above, the expansion of communications, the construction of large dams, the failure to assist the poor, and climatic anomalies of fire and drought only aggravate this problem further.

The development of desert-like conditions where none existed previously has been described in many ways. Definitions of desertification are usually broad, including losses of vegetative cover and plant diversity that are attributable in some part to human activity, as well as the element of irreversibility. These definitions are not confined to advancing frontiers of sand that engulf pastures and agricultural land, as often shown visually in the media. Various indicators of this phenomenon are listed in table 6.1 (Barrow, 1991).

Table 6.1 Indicators of desertification

Physical indicators:
- Decrease in soil depth
- Decrease in soil organic matter
- Decrease in soil fertility
- Soil crust formation/compaction
- Appearance/increase in frequency/severity of dust and sandstorms, dune formation and movement
- Decline in quality and quantity of ground and surface water
- Increased seasonality of springs and small streams
- Alteration in relative reflectance of land (albedo change)

Vegetation indicators:
- Decrease in cover
- Decrease in above-ground biomass
- Decrease in yield
- Alteration of key species distribution and frequency
- Failure of species to reproduce successfully

Animal indicators:
- Alteration in key species distribution and frequency
- Change in population of domestic animals
- Change in herd composition
- Decline in livestock production
- Decline in livestock yield

Social/economic indicators:
- Change in land use/water use
- Change in settlement pattern (e.g. abandonment of villages)
- Change in population (biological) parameters (demographic evidence, migration statistics, public health information)
- Change in social process indicators: increased conflict between groups/tribes, marginalization, migration, decrease in incomes and assets, change in relative dependence on cash crops/subsistence crops

Sources: Reining (1978) and Kassas (1987).

Fig. 6.4 World arid lands by continent (Source: UNEP, 1991)

The nature and scope of the problem

The UNCED defined desertification as land degradation in the arid, semi-arid, and sub-humid areas resulting from various factors, including climatic variations and human activities. These areas are subject to serious physical constraints linked to inadequate water resources, low plant formation productivity, and the general vulnerability of biological systems and functions.
Whereas on an individual basis animal and plant species are each a model of adjustment and resistance, ecological associations and formations are easily disturbed by the pressures exerted by rapidly growing populations and their livestock. Desertification has become a longstanding and increasingly severe problem in many parts of the world, and in developing countries in particular. According to a UNEP (1984) estimate, 35 per cent of the earth's land surface (4.5 billion ha) - an area approximately the size of North and South America combined - and the livelihoods of the 850 million people who inhabit that land are under threat from desertification. Currently, each year some 21 million ha are reduced to a state of near or complete uselessness. Projections to the year 2000 indicate that loss on this scale will continue if nations fail to step up remedial action (World Bank, 1992). The distribution of the world's arid lands by continent is shown in figure 6.4.

Trends in desertification

Global statistics on trends in desertification are scanty. However, estimates of trends are possible for areas where detailed assessments have been made at the national or local level. These are highlighted in table 6.2 (Tolba et al., 1992).

Table 6.2 Some examples of desertification trends

Kenya: At Lake Baringo, an area of 360,000 ha, the annual rate of land degradation/desertification between 1950 and 1981 was 0.4%. At Marsabit, an area of 1.4 million ha, it was 1.3% for the period 1956-1972.

Mali: In the three localities of Nara, Mordiah, and Yonfolia, with a total area of some 195,000 ha, the average annual rate of loss during the past 30-35 years has been of the order of 0.1%.

Tunisia: The annual rate of desertification during the past century was of the order of 10%, and about 1 million ha were lost to the desert between 1880 and the present.

China: The present average annual rate of desertification/land degradation for the country is of the order of 0.6%, while in such places as Boakong County, north of Beijing in Hebei Province, it rises to 1.3%, and to 1.6% in Fenging County.

USSR: The annual desertification/sand encroachment rate in certain districts of Kalmykia, north-west of the Caspian Sea, was recently estimated at a level as high as 10%, while in other localities it was 1.5-5.4%. The desert growth around the drying-out Aral Sea was estimated at about 100,000 ha per year during the past 25 years, which gives an annual average desertification rate of 4%.

Syria: An annual rate of land degradation of 0.25% was found in the 500,000 ha area of the Anti-Lebanon Range north of Damascus for the period 1958-1982.

Yemen: The country's average annual rate of abandonment of cultivated land owing to soil degradation increased from 0.6% in 1970-1980 to about 7% in 1980-1984.

Sahara: An analysis using a satellite-derived vegetation index shows steady expansion of the Sahara between 1980 and 1984 (an increase of approximately 1,350,000 km²) followed by a partial recovery up to 1990 (Tucker et al., 1991).

Source: UNEP (1991).

The causes and the process

Much damage has been inflicted on the economic activities in the arid regions, leading to a great deal of hardship for the majority of the people there. Very few parts of the arid zone have been spared. What accounts for this unhappy situation? The answer is twofold. Firstly, human pressure in the dry zones has grown enormously in recent decades owing to an increase in population.
The needs for food, water, fuel, raw materials, and other natural resources have grown accordingly, exceeding the carrying capacity of the land in most cases. Secondly, many recent years have seen protracted drought, sometimes lasting for over 20 years. Under natural conditions, such failures of expected rainfall would have had little effect, but coupled with the human pressures they have produced disastrous results. Soil erosion, caused naturally by prolonged droughts and by various activities that abuse and over-exploit the natural resources, is, in essence, responsible for the advance of deserts. Advancing deserts feed back into the root causes, thereby accelerating the process of desertification further. This is illustrated in figure 6.5. Whatever the causes, the processes of degradation or desertification involve damage to the vegetation cover.

4. The implications of deforestation and desertification

The environmental hazards of desertification and deforestation, though distinct, provide mutual feedbacks and are far from being independent of each other. They consequently have similar implications and solutions. Desertification and deforestation involve a drastic change in microclimates. For instance, if shrubs and trees are felled, the noonday sun will fall directly on hitherto shaded soil; the soil will become warmer and drier, and organisms living on or in the soil will move away to avoid the new harshness. The organic litter on the surface - dead leaves and branches, for example - will be quickly oxidized, the carbon dioxide being carried away. So too will be the small store of humus in the soil. All these changes in microclimate also bring about ecological changes. The ecosystem is being altered, in most cases adversely. Hence, these processes result not only in a loss of biological productivity but also in the degradation of surface microclimates. Phenomena such as global warming and the greenhouse effect, which have their origin in deforestation and desertification, among many other causes, are more serious, global in scope, and therefore potentially more threatening.

Deforestation and desertification adversely affect agricultural productivity, the health of humans as well as of livestock, and economic activities such as eco-tourism. Hence, they have serious socio-economic implications too. In Asia, some 30 million people living in the coldest zones of the Himalayas were unable to ensure their energy supply in 1980, according to estimates for that year, despite overutilization of all the wood available. Approximately 710 million people were in a situation of decidedly inadequate fuelwood supplies, mainly in the highly populated zones of the Ganges and Indus plains and in the lowlands and islands of South-East Asia. It is estimated that by the year 2000, if present trends continue, 1.4 billion people in this region will be living in zones where fuelwood supplies are completely inadequate to cover their minimum energy needs (World Bank, 1992).

Fig. 6.5 The causes and development of desertification

Another indirect implication of these two hazards is that the resources needed to combat them are going to be very large. In 1982, it was estimated that between then and A.D. 2000, US$1.8 billion per year would be required to combat desertification (Tolba, 1987). Ahmad and Kassas (1987) estimated that a 20-year worldwide programme to arrest desertification would cost (at 1987 prices) roughly US$4.5 billion a year, US$2.4 billion of that needed in developing countries.
Such sums are well beyond 1987 levels of donor assistance to the third world for everything.

5. Citizen action to counter deforestation and desertification

The Chipko Movement in India, which began in 1972, is a people's ecology movement professing non-violence and non-cooperation. It has been significant in protecting forest and woodland on the Indian subcontinent. Also in India, a centuries-old practice is being rediscovered, adapted, and promoted: deeply rooted, hedge-forming vetiver grass, planted in contour strips across hill slopes, slows water run-off dramatically, reduces erosion, and increases the moisture available for crop growth. Since 1987 a quiet revolution has been taking place, and today 90 per cent of soil conservation efforts in India are based on such biological systems. In the Philippines, non-governmental organizations (NGOs) and the Catholic Church have been active in supporting and promoting citizen groups seeking to protect existing trees and to plant new forests. In the Sahel, simple technologies involving the construction of rock bunds along contour lines for soil and moisture conservation have succeeded where sophisticated measures once failed. Bunded fields yield an average of 10 per cent more produce than traditional fields in a normal year, and in the drier years almost 50 per cent more.

6. The role of forestry in combating desertification

The problem of developing arid lands and improving the well-being of the people living on them is one of both magnitude and complexity: magnitude in terms of the large area involved, and complexity in that their development cannot be dissociated from their ecological, social, and economic characteristics. Forestry has a major role to play in such a development strategy: one of its fundamental roles is the maintenance of the soil and water base for food production, through shelterbelts, windbreaks, and scattered trees, and through soil enrichment; it contributes to livestock production through silvipastoral systems, particularly the creation of fodder reserves or banks in the form of fodder trees or shrubs, to cushion the calamities of drought; it produces fuelwood, charcoal, and other forest products through village and farm woodlots; it contributes to rural employment and development through cottage industries based on raw materials derived from wild plants and animals and through the development of wildlife-based tourism; and it provides food from wildlife as well as from plants in the form of fruits, leaves, roots, and fungi.

Scenario of deforestation and desertification in India

During British rule, the area under reserved forests was progressively increased at the cost of the areas set aside to meet the needs of village populations. Increasing population pressure, shrinking areas of land accessible to meet the domestic requirements of agriculture and animal husbandry, and, above all, the creation of open access meant continuing degradation of these areas. Simultaneously, in the reserve forests, the whole emphasis was on a few commercially valuable species such as teak. The trend everywhere was to harvest the more accessible larger timber that had commercial value, with little thought for long-term sustainability. Making biomass available to influential groups in society (merchants, contractors, etc.) at highly subsidized prices was carried to its extreme after independence, when the forest-based industry, which originated under British rule, really took off (Gadgil, 1989).
The latest statistics related to forest cover in India show that 19.44 per cent of the total geographical area (639,182 km²) is covered by forests (Government of India, 1991). The estimated annual rate of deforestation during 1981-1985 was 147,000 ha, and the area annually deforested as a percentage of the total forest area in the country was 0.25 per cent (Maheshwari, 1989). The arid zone of India covers about 12 per cent of the geographical area, including 31.9 million ha of hot desert located in parts of Rajasthan (61 per cent), Punjab and Haryana (9 per cent), and Andhra Pradesh and Karnataka (10 per cent). The cold arid tracts are located in the north-west Himalayas, namely Ladakh, Kashmir, and Lahaul Spiti (Himachal Pradesh). The Indian arid zone is by far the most populated arid zone in the world. The statewise distribution of arid zones in India is shown in figure 6.6, and they are mapped in figure 6.7.

Fig. 6.6 Arid and semi-arid zones in India, by state (Source: Maheshwari, 1989)

The programme for combating desertification in India was started in 1977-1978 and is being implemented in 18 affected districts of Rajasthan, Gujarat, Haryana, Jammu & Kashmir, and Himachal Pradesh. During recent years there has been growing recognition of the failure of traditional forest management systems in India. The social forestry programme of the State Forest Departments and various community and agro-forestry projects, funded internally as well as internationally, are actively countering deforestation. Recognizing the potential of participatory forest management, the Tata Energy Research Institute (TERI) is implementing the Joint Forest Management Programme (JFMP) in the State of Haryana in collaboration with the Haryana Forest Department (HFD) and with the active participation of the local communities. With the first three-year phase completed successfully, implementation of the second phase has now begun. TERI's primary objectives in implementing this programme are: to facilitate the development of participatory forest management systems for adoption by the HFD; to orient the forestry staff and local communities so as to bring about attitudinal changes regarding the JFMP through regular meetings, workshops, training, and extension activities; to assist in research on the institutional, economic, social, and ecological aspects of joint forest management; and to disseminate information concerning the effects of joint forest management on ecological regeneration, economic productivity, and environmental security.

Fig. 6.7 Map showing arid and semi-arid zones of India

Various strategies and incentive mechanisms adopted for implementing the programme are: the provision of various non-timber forest products to local communities at concessionary rates; the organization of meetings, field training, and workshops emphasizing micro-planning and women's participation to sensitize, motivate, and orient the target groups; and regular documentation and dissemination of publicity and extension material. So far, 38 Hill Resource Management Societies have been formed in villages adjoining the forests in the Haryana Shiwaliks (lower Himalayas). The target group comprises marginal farmers and traditional graziers. Since 1990, there has been a remarkable change in the livestock pattern (numbers have gone down whereas quality has improved), shifting the emphasis from open grazing to stall feeding.
Agricultural yields have increased up to fourfold owing to the provision of irrigation water through the construction of water-harvesting structures. There has been a significant increase in the yield of commercial as well as fodder grasses as a result of the social fencing offered by the local communities in return.

Measures undertaken to counter the problems of deforestation and desertification

The FAO and the United Nations Development Programme (UNDP) have been mapping and monitoring deforestation and desertification, especially since 1979. Geographical Information Systems (GIS) are being used to map and monitor the amount and degree of damage caused by these hazards, and extensive databases are being established. In 1977, the United Nations Conference on Desertification (UNCOD) adopted a Plan of Action to Combat Desertification (PACD), which was endorsed by the UN General Assembly in the same year. The worldwide programme was aimed at stopping the process of desertification and at rehabilitating affected land. In 1985, the World Resources Institute, the World Bank, and the UNDP published a Tropical Forestry Action Plan (TFAP). This had taken several years to prepare through the combined efforts of governments, forestry agencies, UN agencies, and NGOs. These plans, however, stated only what should be done, not how it could be achieved, and efforts undertaken so far have not been adequate to cope with the magnitude of the problem. So the damage continues. The general conclusion is that the plans failed to generate enough political support, although the proposals were probably quite sound. Despite the limited success of PACD, several countries have adapted their national plans to come within the scope of PACD implementation. Particularly significant measures have been undertaken in the countries of the Sudano-Sahelian belt of Africa, and in India, China, Iran, and the former USSR. In both developed and developing countries much could be achieved through a change in attitudes toward forests - from seeing the potential for exploitation to seeing the need and desirability to conserve, and to make exploitation more rational, more sustained, and less wasteful. Various multinational development agencies and philanthropic foundations (e.g. the FAO, the World Bank, the Ford Foundation) are now supporting efforts to encourage management by smaller groups more closely associated with particular forest tracts, and to give them responsibility for forests in good condition, as well as for degraded land. For example, the Joint Forest Management Programme in India, implemented in 12 states since 1990, is being funded by such organizations and aims at evolving and establishing systems of sustainable forest management jointly by the government and the local people.

7. Possible solutions

In the face of the above hazards, with all the social, economic, political, and environmental problems that they imply, it is inevitable that two questions have received increased attention in recent years: Can the damage be prevented? Can the damage that has already occurred be reversed? The answer to both is a qualified yes.
Preventive measures for combating drought and halting the spread of the deserts, as highlighted by UNCED in Agenda 21, include:

- strengthening the knowledge base and developing information and monitoring systems for regions prone to desertification and drought, including the economic and social aspects of these fragile ecosystems;
- combating land degradation through, inter alia, intensified soil conservation, afforestation, and reforestation activities;
- developing and strengthening integrated development programmes for the eradication of poverty and the promotion of alternative livelihood systems in areas prone to desertification;
- developing comprehensive anti-desertification programmes and integrating them into national development plans and national environmental planning;
- developing comprehensive drought-preparedness and drought-relief schemes, including self-help arrangements for drought-prone areas and the design of programmes to cope with environmental refugees;
- encouraging and promoting popular participation and environmental education, focusing on desertification control and management of the effects of drought (UNCED, 1992).

The overriding need of the next few decades is to evolve strategies that inextricably tie conservation and development together. Policies for resource management will have to include the following essential components:

- a recognition of the true value of natural resources, because they ultimately are in finite supply;
- institutional responsibility for resource management hand in hand with a matching accountability for results;
- better knowledge of the extent, quality, and potential of the resource base, while accelerating the diffusion of existing technology that can expand output in environmentally sound ways.

In developing countries, where the actions of local people are the root cause of deforestation, the alternatives that offer some hope for slowing tropical deforestation, and at the same time are cheap and fast enough to be worthy of consideration, are: conservation to help natural forests regenerate, better management of forests, better fire control measures, reforestation and afforestation, fuelwood/energy plantations and woodlots, agro-forestry, farm and village woodlots, cash-crop tree farming, and, last but not least, non-conventional methods of forest management.

The causal chain of land degradation and possible interventions at various stages to reverse the process are illustrated in figure 6.8. As is evident from this figure, the development of science and technology occupies a prominent position among the possible interventions. Land-use planning, dryland cropping strategies, appropriate forest management technologies that optimize the resource potential, the standardization of harvesting techniques for non-timber forest products, fuelwood-supply plantations, and renewable energy technologies are some of the potential areas for research that need further probing. The following are a few suggestions appropriate for immediate action against deforestation and desertification in a reasonably long-term perspective: strengthen the planning and organization of ecological, silvicultural, and socio-economic research; strengthen research on particular subjects (such as certain social and cultural aspects of rural life, natural resource accounting, etc.) that appear to be weak at present in view of development and resource conservation objectives; strengthen research on non-traditional methods in forestry, e.g.
community forestry programmes emphasizing people's participation (like the JFMP) and the use of biotechnology in the breeding of tree species for desired characteristics; decentralize research work according to ecological and socio-economic conditions; and determine and carry out a systematic programme for the advanced training of research workers and research administrators.

Fig. 6.8 Interventions related to the causation of land degradation (Source: Winpenny, 1990)

To assume that massive aid inflows through the multilateral development banks and bilateral agencies for international development can solve the developing countries' problems is to ignore decades of documentation demonstrating that such aid has compounded environmental degradation. The history of development assistance suggests that its success depends on getting it to the right people. NGOs such as TERI that are actively involved in a participatory approach at the grass roots, research on biomass, and biotechnology could all play a vital role in developing and implementing the above strategy. The urgency of the problem is accentuated by the fact that the pressure on natural resources is fast getting out of hand owing to unprecedented population growth and increasing densities. The time for action is running out as the environmental damage caused by deforestation and desertification expands, threatening new areas and new societies, while countermeasures tend to be long term and time consuming. The cost of countermeasures is escalating from year to year because the area affected is growing, the degree of damage is growing, and world prices of rehabilitative measures are rising. Off-site (and social) costs, too, continue to increase. Other environmental and economic problems are likely to become serious, tending to divert the attention of international funding agencies to other issues (e.g. sea-level rise). However, if the process of desertification and deforestation is not arrested soon, the world shortage of food will increase dramatically. Past experience shows that the success of programmes to combat desertification and deforestation will depend on the institutional arrangements, the dissemination of information, the creation of awareness, the development of assessment methodology, and adaptive research. Funding and implementing agencies, both national and international, must give priority to programmes for combating desertification and countering deforestation. The necessary assistance - in cash as well as in kind - will have to be provided to developing countries affected by these hazards. Any countermeasures will have to be fully integrated into programmes of socio-economic development, instead of being considered only as rehabilitation measures, and the affected populations will have to be fully involved in the planning and implementation of these programmes.

References

Ahmad, Y. J., and M. Kassas. 1987. Desertification: Financial Support for the Biosphere. West Hartford, Conn.: Kumarian Press.

Barrow, C. J. 1991. Land Degradation: Developments and Breakdown of Terrestrial Environments. Cambridge: Cambridge University Press.

Gadgil, M. 1989. "Deforestation: Problems and prospects." Foundation Day Lecture, Society for Promotion of Wastelands Development, 12 May, New Delhi. Centre of Ecological Sciences and Theoretical Studies, Indian Institute of Science, Bangalore.

Government of India. 1987. State of Forest Report 1987. Forest Survey of India, Dehradun.

Government of India. 1991.
State of Forest Report 1987-1989. Forest Survey of India, Dehradun.

Kassas, M. 1987. "Drought and desertification." Land Use Policy 4(4): 389-400.

Kemp, D. D. 1990. Global Environmental Issues: A Climatological Approach. London: Routledge.

Maheshwari, J. K. 1989. "Processing and utilization of perennial vegetation in the arid zone of India." In Role of Forestry in Combating Desertification. FAO Conservation Guide 21. Rome: FAO, pp. 137-172.

Reining, P. 1978. Handbook on Desertification Indicators. Washington, D.C.: American Association for the Advancement of Science.

Tolba, M. K. 1987. Sustainable Development: Constraints and Opportunities. London: Butterworth.

Tolba, M. K., O. A. El-Kholy, et al. 1992. The World Environment 1972-1992: Two Decades of Challenge. London: Chapman & Hall.

Tucker, C. J., H. E. Dregne, and W. W. Newcomb. 1991. "Expansion and contraction of the Sahara Desert from 1980 to 1990." Science 253.

UNCED (United Nations Conference on Environment and Development). 1992. Agenda 21. United Nations Conference on Environment and Development, Brazil, 3-14 June 1992.

UNEP (United Nations Environment Programme). 1984. General Assessment of Progress in the Implementation of the Plan of Action to Combat Desertification, 1978-1984. GC-12/9.

UNEP (United Nations Environment Programme). 1991. Status of Desertification and Implementation of the United Nations Plan of Action to Control Desertification. Nairobi: UNEP.

Winpenny, J. T. (ed.). 1990. Development Research: The Environmental Challenge. Boulder, Colo.: Westview Press, for the ODI.

Wood, W. B. 1990. Tropical Deforestation: Balancing Regional Development Demands and Global Environmental Concerns.

World Bank. 1989. World Development Report 1989. Oxford: Oxford University Press.

World Bank. 1992. World Development Report 1992. Oxford: Oxford University Press.
In non-signing communities, home sign is not a full language, but closer to a pidgin. Home sign is amorphous and generally idiosyncratic to a particular family, where a deaf child does not have contact with other deaf children and is not educated in sign. Such systems are not generally passed on from one generation to the next. Where they are passed on, creolization would be expected to occur, resulting in a full language. However, home sign may also be closer to full language in communities where the hearing population has a gestural mode of language; examples include various Australian Aboriginal sign languages and gestural systems across West Africa, such as Mofu-Gudur in Cameroon.

A village sign language is a local indigenous language that typically arises over several generations in a relatively insular community with a high incidence of deafness, and is used both by the deaf and by a significant portion of the hearing community, who have deaf family and friends. Deaf-community sign languages, on the other hand, arise where deaf people come together to form their own communities. These include school sign, such as Nicaraguan Sign Language, which develops in the student bodies of deaf schools that do not use sign as a language of instruction, as well as community languages such as Bamako Sign Language, which arise where generally uneducated deaf people congregate in urban centers for employment. At first, Deaf-community sign languages are not generally known by the hearing population, in many cases not even by close family members. However, they may grow, in some cases becoming a language of instruction and receiving official recognition, as in the case of ASL.

Both contrast with speech-taboo languages such as the various Aboriginal Australian sign languages, which are developed by the hearing community and only used secondarily by the deaf. It is doubtful whether most of these are languages in their own right, rather than manual codes of spoken languages, though a few, such as Yolngu Sign Language, are independent of any particular spoken language. Hearing people may also develop sign to communicate with speakers of other languages, as in Plains Indian Sign Language; this was a contact signing system or pidgin that was evidently not used by deaf people in the Plains nations, though it presumably influenced home sign.

Contact occurs between sign languages, between sign and spoken languages (contact sign, a kind of pidgin), and between sign languages and gestural systems used by the broader community. One author has speculated that Adamorobe Sign Language, a village sign language of Ghana, may be related to the "gestural trade jargon used in the markets throughout West Africa" in vocabulary and areal features, including prosody and phonetics. The only comprehensive classification along these lines that goes beyond a simple listing of languages distinguishes between primary and auxiliary sign languages, as well as between single languages and names that are thought to refer to more than one language.

Sign languages vary in word-order typology. Influence from the surrounding spoken languages is not improbable. Sign languages tend to be incorporating classifier languages, where a classifier handshape representing the object is incorporated into those transitive verbs which allow such modification. For a similar group of intransitive verbs (especially motion verbs), it is the subject which is incorporated.
Only in a very few sign languages (for instance, Japanese Sign Language) are agents ever incorporated. Brentari classifies sign languages as a whole group, determined by the medium of communication (visual instead of auditory), as one group with the features monosyllabic and polymorphemic. That means that one syllable (i.e. one sign) can express several morphemes. Another aspect of typology that has been studied in sign languages is their systems for cardinal numbers.

Children who are exposed to a sign language from birth will acquire it, just as hearing children acquire their native spoken language. The critical period hypothesis suggests that language, spoken or signed, is more easily acquired as a child at a young age than as an adult because of the plasticity of the child's brain. In a study done at McGill University, researchers found that American Sign Language users who acquired the language natively from birth performed better when asked to copy videos of ASL sentences than ASL users who acquired the language later in life. They also found that there are differences in the grammatical morphology of ASL sentences between the two groups, all suggesting that there is a very important critical period in learning signed languages.

The acquisition of non-manual features follows an interesting pattern: when a word that always has a particular non-manual feature associated with it (such as a wh-question word) is learned, the non-manual aspects are attached to the word but don't have the flexibility associated with adult use. At a certain point, the non-manual features are dropped and the word is produced with no facial expression. After a few months, the non-manuals reappear, this time being used the way adult signers would use them.

Sign languages do not have a traditional or formal written form. Many deaf people do not see a need to write their own language. So far, there is no consensus regarding the written form of sign language. Except for SignWriting, none is widely used. Maria Galea writes that SignWriting "is becoming widespread, uncontainable and untraceable. In the same way that works written in and about a well developed writing system such as the Latin script, the time has arrived where SW is so widespread, that it is impossible in the same way to list all works that have been produced using this writing system and that have been written about this writing system."

For a native signer, sign perception influences how the mind makes sense of their visual language experience. For example, a handshape may vary based on the other signs made before or after it, but these variations are arranged in perceptual categories during its development. The mind detects handshape contrasts but groups similar handshapes together in one category. The mind ignores some of the similarities between different perceptual categories, at the same time preserving the visual information within each perceptual category of handshape variation.

When deaf people constitute a relatively small proportion of the general population, Deaf communities often develop that are distinct from the surrounding hearing community.
Black American Sign Language, for example, developed in the Black Deaf community as a variant during the American era of segregation and racism, when young Black Deaf students were forced to attend separate schools from their white Deaf peers.

On occasion, where the prevalence of deaf people is high enough, a deaf sign language has been taken up by an entire local community, forming what is sometimes called a "village sign language" or "shared signing community". Famous examples include Martha's Vineyard Sign Language in the United States and Kata Kolok in Bali. In such communities deaf people are generally well integrated in the general community and not socially disadvantaged, so much so that it is difficult to speak of a separate "Deaf" community.

Many Australian Aboriginal sign languages arose in a context of extensive speech taboos, such as during mourning and initiation rites. They are or were especially highly developed among the Warlpiri, Warumungu, Dieri, Kaytetye, Arrernte, and Warlmanpa, and are based on their respective spoken languages.

Plains Indian Sign Language was used by hearing people to communicate among tribes with different spoken languages, as well as by deaf people. There are users today especially among the Crow, Cheyenne, and Arapaho. Unlike Australian Aboriginal sign languages, it shares the spatial grammar of deaf sign languages. In the 1500s, a Spanish expeditionary, Cabeza de Vaca, observed natives in the western part of modern-day Florida using sign language, and in the mid-16th century Coronado mentioned that communication with the Tonkawa using signs was possible without a translator. Signs may also be used by hearing people for manual communication in secret situations, such as hunting, in noisy environments, underwater, through windows, or at a distance.

Some sign languages have obtained some form of legal recognition, while others have no status at all. Sarah Batterbury has argued that sign languages should be recognized and supported not merely as an accommodation for the disabled, but as the communication medium of language communities.

The Internet now allows deaf people to talk via a video link, either with a special-purpose videophone designed for use with sign language or with "off-the-shelf" video services designed for use with broadband and an ordinary computer webcam. The special videophones that are designed for sign language communication may provide better quality than "off-the-shelf" services and may use data compression methods specifically designed to maximize the intelligibility of sign languages. Some advanced equipment enables a person to remotely control the other person's video camera, in order to zoom in and out or to point the camera better to understand the signing.

In order to facilitate communication between deaf and hearing people, sign language interpreters are often used. Such activities involve considerable effort on the part of the interpreter, since sign languages are distinct natural languages with their own syntax, different from any spoken language. Interpreters who can translate between signed and spoken languages that are not normally paired (such as between LSE and English) are also available, albeit less frequently. With recent developments in artificial intelligence, some deep-learning-based machine translation algorithms have been developed which automatically translate short videos containing sign language sentences (often simple sentences consisting of only one clause) directly into written language.
Interpreters may be physically present with both parties to the conversation but, since the technological advancements of the early 2000s, provision of interpreters in remote locations has become available. In video remote interpreting (VRI), the two clients (a sign language user and a hearing person who wish to communicate with each other) are in one location, and the interpreter is in another. The interpreter communicates with the sign language user via a video telecommunications link, and with the hearing person by an audio link. VRI can be used for situations in which no on-site interpreters are available. However, VRI cannot be used for situations in which all parties are speaking via telephone alone. With video relay service (VRS), the sign language user, the interpreter, and the hearing person are in three separate locations, thus allowing the two clients to talk to each other on the phone through the interpreter.

Sign language is sometimes provided for television programmes. The signer usually appears in the bottom corner of the screen, with the programme being broadcast full size or slightly shrunk away from that corner. Typically for press conferences, such as those given by the Mayor of New York City, the signer appears to stage left or right of the public official to allow both the speaker and signer to be in frame at the same time. Paddy Ladd initiated deaf programming on British television in the 1980s and is credited with getting sign language on television and enabling deaf children to be educated in sign. In traditional analogue broadcasting, many programmes are repeated, often in the early hours of the morning, with the signer present, rather than having the signer appear at the main broadcast time. Some emerging television technologies allow the viewer to turn the signer on and off in a similar manner to subtitles and closed captioning. Legal requirements covering sign language on television vary from country to country. In the United Kingdom, the Broadcasting Act 1996 addressed the requirements for blind and deaf viewers, but has since been replaced by the Communications Act 2003.

As with any spoken language, sign languages are also vulnerable to becoming endangered. For example, a sign language used by a small community may be endangered and even abandoned as users shift to a sign language used by a larger community, as has happened with Hawai'i Sign Language, which is almost extinct except for a few elderly signers. There are a number of communication systems that are similar in some respects to sign languages, while not having all the characteristics of a full sign language, particularly its grammatical structure. Many of these are either precursors to natural sign languages or are derived from them. When Deaf and hearing people interact, signing systems may be developed that use signs drawn from a natural sign language but used according to the grammar of the spoken language. In particular, when people devise one-for-one sign-for-word correspondences between spoken words (or even morphemes) and signs that represent them, the system that results is a manual code for a spoken language, rather than a natural sign language.
Such systems may be invented in an attempt to help teach Deaf children the spoken language, and generally are not used outside an educational context.

It has become popular for hearing parents to teach signs from ASL or some other sign language to young hearing children. Since the muscles in babies' hands grow and develop more quickly than their mouths, signs can be a beneficial option for better communication. This reduces the confusion for parents when trying to figure out what their child wants. When the child begins to speak, signing is usually abandoned, so the child does not progress to acquiring the grammar of the sign language. This is in contrast to hearing children who grow up with Deaf parents, who generally acquire the full sign language natively, the same as Deaf children of Deaf parents.

Informal, rudimentary sign systems are sometimes developed within a single family. For instance, when hearing parents with no sign language skills have a deaf child, the child may develop a system of signs naturally, unless repressed by the parents. The term for these mini-languages is home sign (sometimes "kitchen sign"). Home sign arises due to the absence of any other way to communicate.
How can we reduce bycatch without threatening the economic viability of the fishery?

Groundfish include over 90 species of flatfish, roundfish, and rockfish, most of which spend their time near the bottom of the ocean. In the 1970s and 80s, many groundfish populations were devastated by overfishing, especially by deepwater trawls that drag along the bottom, damage seafloor habitats, and collect a variety of non-targeted marine life, or bycatch. By the 1990s, federal fishery policy changed to demand an end to overfishing, and the U.S. Pacific Fishery Management Council declared ten groundfish species to be overfished. Plans were instituted to immediately halt overfishing of these species and to give them time to recover. This led to the creation, in 2002, of Rockfish Conservation Areas (RCAs) that were closed to bottom trawling for any species. One of these RCAs runs the entire length of the California coast along the edge of the continental shelf, where it begins to slope downward to the seafloor. A fishing quota program was also implemented for the West Coast groundfish trawl sector that included hard caps on catches. This meant that after a set number of restricted groundfish were caught, the entire fishery could be closed for the rest of the season to avoid risk of further bycatch. Independent observers were also required on each boat to ensure compliance.

These measures had their intended effect of immediately reducing the catch rates of the overfished rockfish. However, they also had two unintended impacts. One was that, by reducing catch rates so drastically and by not allowing catch from the prime habitats of these species, few data streams remained for assessing their recovery. Scientists and managers could no longer monitor the distribution, abundance, or size structure of overfished species, limiting their ability to adaptively manage the fishery. Additionally, the RCA closures and quotas prevented fishermen from catching other fish species that were still plentiful. These limitations led to low catch levels of abundant stocks, with total catches ranging from 16-21% of the allowable catch for species like lingcod, for example. Many people's livelihoods, and whole communities, suffered as a result.

In response, Conservancy scientists partnered with fishermen to find a solution that would work for nature and people. First, we worked with Moss Landing Marine Laboratories, Marine Applied Research and Exploration, the Monterey Bay Aquarium Research Institute, the fishing industry, and agency partners to develop a stereo underwater camera system that would allow us to monitor overfished species. This stereo video lander is lowered down to rocky habitats where overfished rockfish like to congregate, and enables us to collect size and abundance data on many species, especially overfished stocks like cowcod and yelloweye rockfish. We also collaborated with academic partners, central coast fishermen, and regulatory agencies to conduct experimental fishing inside the Rockfish Conservation Areas under a special permit. Instead of trawling, the fishermen used hook-and-line gear that had been modified. A comparison of fishing and video surveys revealed that fishermen could fish with the modified gear and catch the abundant species while rarely catching the overfished species, which were known to be present based on the video surveys. The video surveys provided much-needed information on the abundance, sizes, and habitat associations of the overfished rockfish populations.
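The article does not describe the lander's internals, but size estimates from a stereo camera generally rest on standard two-view triangulation. Below is a minimal illustrative sketch, assuming an idealized, rectified stereo pair with known focal length and baseline; all function names and numbers are hypothetical and not taken from the project.

```python
# Illustrative only: estimating fish length from a rectified stereo pair.
# Assumes idealized pinhole cameras with focal length f (pixels) and a
# horizontal baseline (metres); pixel coordinates are relative to the
# optical centre. None of these values come from the actual lander.

def triangulate(x_left, x_right, y, f, baseline):
    """Back-project one matched point into camera coordinates (metres)."""
    disparity = x_left - x_right              # pixels; larger = closer
    if disparity <= 0:
        raise ValueError("point must lie in front of both cameras")
    z = f * baseline / disparity              # depth
    return (x_left * z / f, y * z / f, z)

def fish_length(snout_l, snout_r, tail_l, tail_r, f, baseline):
    """Euclidean distance between the triangulated snout and tail points."""
    s = triangulate(snout_l[0], snout_r[0], snout_l[1], f, baseline)
    t = triangulate(tail_l[0], tail_r[0], tail_l[1], f, baseline)
    return sum((a - b) ** 2 for a, b in zip(s, t)) ** 0.5

# A fish roughly 2 m from the cameras measures about 0.45 m snout to tail.
length = fish_length((300, 400), (90, 400), (600, 420), (395, 420),
                     f=1400, baseline=0.3)
print(f"{length:.2f} m")  # -> 0.45 m
```

Repeating this for every fish in a frame is what turns a single lander drop into the size-structure data the managers were missing.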
This type of cost-effective monitoring approach can help improve the accuracy of stock assessments, inform catch limits and spatial closures with better data, and reduce bycatch of overfished species - and thereby help to maintain the economic viability of California's fisheries.
William Bradford, the second governor of Plymouth colony, elected to fill the place of the deceased John Carver, was responsible for the infant colony's success through great hardships. The Pilgrims were part of a strain of Puritanism known as Separatism, which denoted the aim to secede completely from the Church of England. The Pilgrims held to a Congregational rather than a Presbyterian form of church government. Not all of the Plymouth colony were Christians, however, and some spoke of using their liberty in defiance of the Pilgrims. Unless they could be held together in unity, there was little hope they would survive. The success of Plymouth was based on covenantalism - the belief that men could form compacts or covenants in the sight of God as a basis for government without the consent of a higher authority. The church of the Pilgrims was already bound by a strict mutual covenant. But to include those outside of the church, a civil compact was drawn up - the constitution and foundation of a Christian democratic republic in the New World. The Mayflower Compact acknowledged the right of everyone who signed it to share in the making and administering of laws, and the right of the majority to rule. It was the constitution of a pure democracy, the principle of Congregational church government applied to the state. This was all the law they had for several years. It worked because they chose Christians as their leaders and all understood that they were to be self-governing under the moral law of God.
Dutch elm disease

Dutch elm disease is a fungal disease of elm trees which is spread by the elm bark beetle. Although believed to be originally native to Asia, it has been accidentally introduced into America and Europe, where it has devastated native populations of elms that had not had the opportunity to evolve resistance to the disease. The name Dutch elm disease refers to the identification of the disease in the 1920s in the Netherlands; the disease is not specific to the Dutch elm hybrid.

The causative agents of Dutch elm disease are ascomycete microfungi. Three species are now recognized: Ophiostoma ulmi, which afflicted Europe from 1910 and reached North America on imported timber in 1928; Ophiostoma himal-ulmi, a species endemic to the western Himalaya; and a third, extremely virulent species, Ophiostoma novo-ulmi, which was first described in Europe and North America in the 1940s and has devastated elms in both areas since the late 1960s (Spooner & Roberts, 2005). The origin of O. novo-ulmi remains unknown (Spooner & Roberts, 2005), but it may have arisen as a hybrid between O. ulmi and O. himal-ulmi. The new species was widely believed to have originated in China, but a comprehensive survey there in 1986 found no trace of it, although elm bark beetles were very common.

The disease is spread by two species of bark beetles (Family: Curculionidae, Subfamily: Scolytinae): the native elm bark beetle, Hylurgopinus rufipes, and the European elm bark beetle, Scolytus multistriatus. Both act as vectors for infection. In an attempt to block the fungus from spreading further, the tree reacts to its presence by plugging its own xylem tissue with gum and tyloses, bladder-like extensions of the xylem cell wall. As the xylem (one of the two types of vascular tissue produced by the vascular cambium, the other being the phloem) delivers water and nutrients to the rest of the plant, these plugs prevent them from travelling up the trunk of the tree, eventually killing it.

The first symptom of infection is usually an upper branch of the tree with leaves starting to wither and yellow in summer, months before the normal autumnal leaf shedding. This progressively spreads to the rest of the tree, with further dieback of branches. Eventually, the roots die, starved of nutrients from the leaves. Often, not all the roots die: the roots may put up small suckers. These may grow for some years into small elm trees, but after a decade or so the new trunks become large enough to support the bark beetles, and with their inevitable arrival the fungus returns, and the new tree dies.

Practical information for the elm tree owner: The disease is caused by a fungus. It is primarily spread in three ways: 1) by beetle vectors which carry the fungus from tree to tree (the beetle doesn't kill the tree; the fungus it carries does); 2) through direct contact of an infected tree's roots with those of a neighboring healthy tree; 3) by pruning of a healthy tree with saws which have been used to take down diseased trees. This third method of spread is common and not recognized by many tree pruning and removal services. Arborists at Kansas State University state that cleaning blades with a 10% solution of household bleach will prevent this type of spread. Owners of healthy trees should be vigilant about the companies they hire to prune healthy trees. Be certain blades are disinfected between use on dead trees and use on healthy trees.
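As a quick worked example of that guidance (the arithmetic is ours, not Kansas State's): a 10% solution is one part bleach to nine parts water. A minimal sketch with hypothetical names:

```python
# Illustrative helper: volumes for a 10% (v/v) bleach disinfecting solution,
# i.e. one part household bleach to nine parts water. Names are hypothetical.

def bleach_mix(total_ml, strength=0.10):
    """Return (bleach_ml, water_ml) for a solution of the given strength."""
    bleach_ml = total_ml * strength
    return bleach_ml, total_ml - bleach_ml

# One litre of disinfectant: 100 ml bleach topped up with 900 ml water.
print(bleach_mix(1000))  # (100.0, 900.0)
```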
Dutch elm disease was first noticed in Europe in 1910, and spread slowly, reaching Britain in 1927. This first strain was a relatively mild one, which killed only a small proportion of elms, more often just killing scattered branches, and had largely died out by 1940. The fungus was isolated in Holland in 1921 by Marie Beatrice Schwarz, a pioneering Dutch phytopathologist, and this discovery would lend the disease its name.

In about 1967, a new, far more virulent strain arrived in Britain on a shipment of rock elm logs from North America, and this strain proved both highly contagious and lethal to all of the European native elms; more than 25 million trees died in the UK alone. By 1990-2000, very few mature elms were left in Britain or much of northern Europe. One of the most distinctive English countryside trees, the English elm U. procera Salisb. (see, e.g., John Constable's painting The Hay Wain), is particularly susceptible. Thirty years after the epidemic, these magnificent trees, which often grew to more than 45 m high, are long gone. The species still survives in hedgerows, as the roots are not killed and send up root sprouts ("suckers"). These suckers rarely reach more than 5 m tall before succumbing to a new attack of the fungus. However, established hedges kept low by clipping have remained apparently healthy throughout the nearly 40 years since the onset of the disease in the UK. The largest concentration of mature elm trees remaining in Britain is found in Brighton, where 15,000 elms still stand (2005 figures). Their survival is due to a concerted effort by local authorities to identify and remove infected sections of trees as soon as they show signs of the disease, saving the tree and preventing further spread.

The United States

The disease was first reported in the United States in 1928, with the beetles believed to have arrived in a shipment of logs from the Netherlands destined for the Ohio furniture industry. The disease spread slowly from New England westward and southward, almost completely destroying the famous elms in the 'Elm City' of New Haven, reaching the Detroit area in 1950, the Chicago area by 1960, and Minneapolis by 1970.

Dutch elm disease reached eastern Canada during the Second World War, and spread to Manitoba in 1975 and Saskatchewan in 1981. In Toronto, Ontario, as much as 80% of the elm trees have been lost to Dutch elm disease, and many more fell victim to the disease in Ottawa, Montreal, and other cities during the 1970s and 1980s. Alberta and British Columbia are the only provinces that are currently free of Dutch elm disease, although an elm tree in southeastern Alberta was found diseased in 1998 and was immediately destroyed before the disease could spread any further; it was thus an isolated case. Today, Alberta has the largest number of elms unaffected by Dutch elm disease in the world. Aggressive measures are being taken to prevent the spread of the disease into Alberta, as well as its further progression in other parts of Canada. The City of Edmonton has banned elm pruning from March 31 to October 1, since fresh pruning wounds will attract the beetles during the warmer months.

The first fungicide used for preventive treatment of Dutch elm disease was Lignasan BLP (carbendazim phosphate), which was introduced in the 1970s. This had to be injected into the base of the tree using specialized equipment, and was never especially effective. It is still sold under the name "Elm Fungicide".
Arbotect (thiabendazole hypophosphite) became available some years later, and it has been proven effective. Arbotect must be injected every 2 to 3 years to provide ongoing control; the disease generally cannot be eradicated once a tree is infected. Alamo (propiconazole) has become available more recently and shows some promise, though several university studies show it to be less effective than Arbotect treatments; Alamo is primarily recommended for treatment of oak wilt. Treatment of diseased trees is costly and at best will prolong the life of the tree, perhaps by as many as five or ten years. It is usually only justified when a tree has unusual symbolic value or occupies a particularly important place in the landscape.

Research to select resistant cultivars and varieties began in the Netherlands in 1928, and has continued in the USA since the disease became endemic there. Initial efforts in the Netherlands involved crossing varieties of U. minor and U. glabra, but later included the Himalayan or Kashmir elm, U. wallichiana, as a source of anti-fungal genes. Early efforts in the USA involved the hybridization of the Chinese elm with the American elm, and produced a resistant tree that lacked the beauty, traditional shape, and landscape value of the American elm. Few were planted. Three major groups of resistant cultivars are commercially available now:

- The Princeton elm, a cultivar selected in 1922 by Princeton Nurseries for its landscape value. By happy coincidence, this cultivar was revealed, in inoculation studies carried out by the USDA in the early 1990s, to be highly resistant to Dutch elm disease. Because mature trees planted in the 1920s still remain, the properties of the mature plant are well known.
- The Liberty elm, a set of five cultivars produced through selection over several generations starting in the 1970s. Marketed as a single variety, nurseries selling the "Liberty elm" actually distribute the five cultivars at random. Two of the cultivars are covered by patents.
- The Valley Forge elm and some related cultivars, which have demonstrated resistance to Dutch elm disease approximately equal to that of the Princeton elm cultivar in controlled USDA tests.

Even resistant cultivars can become infected, particularly if the tree is under stress from drought and other environmental conditions, and if the disease pressure is high. With the exception of the Princeton elm, no trees have yet been grown to maturity: the oldest Liberty elm was planted in about 1980, and the trees cannot be said to be mature until they have reached an age of sixty years.

There have been many attempts to breed disease-resistant cultivar hybrids, and they have usually involved a genetic contribution from Asian elm species, which have demonstrable resistance to this fungal disease. Much of the early work in Europe was undertaken in the Netherlands. The Dutch research programme ended in 1992, after raising two complex hybrids, later released as 'Columella' and 'Nanguen' (Lutèce™), found to be effectively immune to the disease when inoculated with unnaturally high doses of the fungus. The patent for the Lutèce™ clone was purchased by the French Institut National de la Recherche Agronomique (INRA), which subjected the tree to 20 years of field trials in the Bois de Vincennes, Paris, before releasing it for sale in 2002.
In Italy, research is continuing at the Istituto per la Protezione delle Piante, Florence, to produce a wide range of disease-resistant trees using a variety of Asiatic species crossed with the early Dutch hybrid 'Plantyn' as a safeguard against any future mutation of the disease. Two trees with very high levels of resistance, 'San Zanobi' and 'Plinio', were released in 2003. Both feature the Siberian elm U. pumila as the male parent.

There is also the example of the European white elm, which has little innate resistance to Dutch elm disease but is avoided by the vector bark beetles and so only rarely becomes infected. Research published in the Canadian Journal of Forest Research has indicated that it is the presence of certain organic compounds, such as triterpenes and sterols, that serves to make the tree bark unattractive to the beetle species that spread the disease. The red elm U. rubra is less susceptible to Dutch elm disease than many elms, but this quality seems to have largely escaped the attention of the resistance programmes.

In 2001, English elm was genetically engineered to resist the disease in experiments at Abertay University, Dundee, by transferring anti-fungal genes into the elm genome using minute DNA-coated ball bearings. However, there are no plans to release the trees into the countryside. Trees in the genus Zelkova, closely related to elms, are also planted as resistant substitutes for susceptible elms. Zelkova serrata, the Japanese zelkova, the most commonly planted Zelkova tree, is similar to the American elm in size and its vase-shaped crown.

Possible earlier occurrences

- There is something wrong with elm trees. In the early part of this summer, not long after the leaves were fairly out upon them, here and there a branch appeared as if it had been touched with red-hot iron and burnt up, all the leaves withered and browned on the boughs. First one tree was thus affected, then another, then a third, till, looking round the fields, it seemed as if every fourth or fifth tree had thus been burnt. [...] Upon mentioning this I found that it had been noticed in elm avenues and groups a hundred miles distant, so that it is not a local circumstance.

This suggestion remains largely speculative, and there is no proof that it was caused by a fungus related to Dutch elm disease. From analysis of pollen in peat samples, it is apparent that the elm all but disappeared from Europe during the mid-Holocene period, about 6,000 years ago. Examination of sub-fossil elm wood has suggested that Dutch elm disease may have been responsible.

References

- Brasier, C. M. 1996. New horizons in Dutch elm disease control. Pages 20-28 in: Report on Forest Research, 1996. Forestry Commission. HMSO, London, UK.
- Forestry Commission. Dutch elm disease in Britain. UK.
- Institut National de la Recherche Agronomique. Lutèce®, a resistant variety brings elms back to Paris. Paris, France.
- Macmillan Science Library: Plant Sciences. Dutch Elm Disease.
- Martín-Benito, D., Concepción García-Vallejo, M., Alberto Pajares, J., and López, D. 2005. Triterpenes in elms in Spain. Can. J. For. Res. 35: 199-205.
- Santini, A., Fagnani, A., Ferrini, F., and Mittempergher, L. 2002. San Zanobi and Plinio elm trees. HortScience 37(7): 1139-1141. American Society for Horticultural Science, Alexandria, VA, USA.
- Santini, A., Fagnani, A., Ferrini, F., Mittempergher, L., Brunetti, M., Crivellaro, A., and Macchioni, N. 2004. Elm breeding for DED resistance, the Italian clones and their wood properties.
Invest Agrar: Sist Recur For 13(1): 179-184.
- Spooner, B., and Roberts, P. 2005. Fungi. Collins New Naturalist series No. 96. HarperCollins Publishers, London.

External links

- Elm Recovery Project - Guelph University (Canada)
- Dutch elm disease - info from the Government of Alberta
- Dutch elm disease - info from the Government of British Columbia
- DED info from Rainbow Treecare Scientific Advancements
- The Mid-Holocene Ulmus decline: a new way to evaluate the pathogen hypothesis
It's important to understand the difference between intolerance and other types of food reaction. This is an important first step in helping clients develop a plan and approach for dealing with complex food intolerances. Unlike allergies and coeliac disease, which are immune reactions to food proteins, intolerances don't involve the immune system at all. They are triggered by food chemicals which cause reactions by irritating different parts of the body. The chemicals involved in food intolerances are found in many different foods, so the approach involves identifying them and reducing your intake of groups of foods, all of which contain the same offending substances. By contrast, protein allergens are unique to each food (for example, soy, egg, fish, milk and peanut), and dealing with a food allergy involves identifying and avoiding all traces of that particular food. Similarly, gluten, the protein involved in coeliac disease, is only found in certain grains (wheat, barley, rye), and their elimination is the foundation of a gluten-free diet.

Dietary elimination, particularly in children, requires the close oversight of a dietitian with paediatric training. Children require a wide variety of micro- and macro-nutrients to promote growth and development, and a dietitian can support you through this process whilst also ensuring your child is getting enough nutrition. In a nutshell - there is definitely a difference between food allergies and intolerances. Just because your child doesn't test positive to a food allergy, it is still worthwhile discussing the possibility of a food intolerance with your GP or specialist. Alternatively, you can seek support from a paediatric trained dietitian.

Fussy eating is defined as a "spectrum of feeding difficulties". The scientific literature has provided a number of useful definitions:

"It is characterised by an unwillingness to eat familiar foods or to try new foods, as well as strong food preferences"

"Consumption of an inadequate variety of food through rejection of a substantial number of foods that are familiar, as well as unfamiliar; this may include an element of food neophobia, and can be extended to include rejection of specific food textures"

"Restricted intake of food, especially of vegetables, and strong food preferences, leading parents to provide a different meal from the rest of the family"

"Unwillingness to eat familiar foods or try new foods, severe enough to interfere with daily routines to an extent that is problematic to the parent, child, or parent-child relationship"

"Consumption of an insufficient amount or inadequate variety of food through rejection of food items"

"Limited number of food items in the diet, unwillingness to try new foods, limited intake of vegetables and some other food groups, strong food preferences (likes/dislikes), and special preparation of foods required"

There are a number of key themes within these definitions that help us identify when your child's fussy eating is beyond the normal "fussiness" that we would expect in a child aged between 11 and 36 months of age. During this time it's normal for toddlers to become cautious, erratic, picky and fickle... however, if it is impacting on the following then it might be time to seek some additional support. There are definitely strategies that you can put in place to help protect your parent-child relationship at meal times and encourage your children to expand their variety and meet their nutritional needs.
Amy offers both home visits and Skype consults to families to provide education and support around children's feeding behaviours and perspectives. She has popped some key starting points below.

Constipation is also typically common during the introduction of solids to the diet, during toilet training, and also at school entry.

- Positioning will help with passing bowel motions.
- Implement structured "toilet sits" of up to 5 minutes, three times a day, preferably after meals.
- Maintain a chart, diary or calendar in the toilet to reinforce positive behaviour and record the frequency of motions.
- Delay toilet training attempts until your child is passing painless stools.
- Increasing dietary fibre and water can help alleviate constipation.
- Excessive dairy/milk intake can exacerbate constipation in children.

If your child has ongoing or recurrent episodes of constipation we would recommend that you contact your GP or Child Health Nurse for an assessment. Furthermore, we recommend that you seek dietary advice from a trained paediatric dietitian to rule out possible factors that could be exacerbating your child's constipation.

What is Vitamin C?

Vitamin C is a water soluble vitamin that is naturally present in fruits and vegetables! Citrus fruits, tomatoes and potatoes are MAJOR sources of vitamin C. Other sources include capsicum, kiwifruit, broccoli and berries. Some foods such as breads and cereals can also be "fortified" with Vitamin C. This is when food manufacturers add Vitamin C to foods that typically aren't very high in Vitamin C to increase the Vitamin C content. Unfortunately, unlike most animals, we (humans) are unable to synthesize or produce Vitamin C endogenously, or from within our own bodies. This means that we have to rely on eating a variety of different fruits and vegetables.

Why is Vitamin C important?

Vitamin C is required in the biosynthesis of collagen, L-carnitine and certain neurotransmitters, and it also plays a very important role in protein metabolism and wound healing. Vitamin C is a very powerful antioxidant and plays an important role in supporting the immune system. Vitamin C also plays an important role in facilitating and improving the absorption of iron. This is super important for families with kids suffering from low iron levels or iron deficiency.

How much Vitamin C do kids need?

- 0-6 months: 25 mg/day
- 7-12 months: 30 mg/day

Do I need to use a Vitamin C supplement?

If your kids are eating 4-5 serves of colourful fruits and vegetables every day then they are at low risk of a Vitamin C deficiency. I always suggest "food first strategies" because there are so many other important and amazing components to the foods we eat to fuel our bodies!! If your child has low iron levels and also doesn't eat a great deal of fruit then a vitamin supplement might be best in the short term, at least until your child is willing to eat a wider variety and range of fruits and vegetables. Remember to always check with a health professional prior to commencing vitamin supplementation. Remember to keep food fun and mix up the way you are offering the fruits and vegetables. If you are having difficulties getting your kids to eat a wide variety of foods then it might be time to see a paediatric dietitian for support and advice; alternatively, discuss some options with your GP.
Schizophrenia is a severe psychotic disorder. (Learn More - What Is Schizophrenia?) The major symptoms of schizophrenia include positive symptoms, negative symptoms, disorganized behavior, and impaired cognition. (Learn More - What Are the Symptoms of Schizophrenia?) In addition to the formal symptoms of the disorder, many people with schizophrenia share other features that are not used to diagnose the disorder. (Learn More - Other Features of the Disorder)

Medication is the primary approach to treat schizophrenia. (Learn More - Treatment of Schizophrenia) The medications are divided into two groups: older medications, or typical antipsychotics (Learn More - Typical Antipsychotics), and newer medications, or atypical antipsychotics. (Learn More - Second-Generation or Atypical Antipsychotics) The newer medications are less likely to have serious side effects. (Learn More - Severe Side Effects From Typical Antipsychotics) However, there is the potential to develop side effects from either typical or atypical antipsychotics. (Learn More - Other Side Effects of Antipsychotic Medications)

Although therapy is not the first-line treatment for schizophrenia, people with the disorder can benefit from psychotherapy, especially to help them adjust to living with the disease, stay on medication, and get support. (Learn More - Therapy for Schizophrenia)

What Is Schizophrenia?

Schizophrenia is not multiple personality disorder, in which a person has more than one functioning personality. Instead, schizophrenia is a psychotic disorder, in which psychosis represents a loss of contact with reality for the person. According to the American Psychiatric Association (APA), schizophrenia is a chronic brain disorder that results in a person having psychotic episodes. Psychotic episodes occur when a person loses contact with reality. The APA says that the prevalence of schizophrenia is less than 1% worldwide.

There are multiple factors that may contribute to the development of schizophrenia, but no single cause of the disorder has been identified. It is believed that there is a large genetic component associated with the development of the disorder, but environmental factors like stress may affect the onset of the disorder and how it presents itself.

What Are the Symptoms of Schizophrenia?

Severe psychotic symptoms can be quite distressing to a person with schizophrenia. The symptoms of schizophrenia fall into several different categories.

- Positive symptoms of schizophrenia include hallucinations, exaggerated or distorted perceptions or beliefs, and delusions, which are often of a paranoid nature.
- Negative symptoms are reductions in behavior, such as an inability to express emotions, experience pleasure, speak, or develop and initiate plans of action.
- Disorganized symptoms include confused speech, disordered speech, difficulty with rational or logical thinking, abnormal movements, or bizarre behaviors.
- Impairments in cognition also occur in people with schizophrenia. These include difficulty with attention and concentration, memory issues, and impaired intellectual capabilities.

Other Features of the Disorder

Symptoms of schizophrenia typically appear in early adulthood. Men display symptoms in their late teenage years or early 20s, and women often show signs in their 20s or early 30s. Early signs may be subtler, such as issues with performance and motivation in school or problems with relationships. Early warning signs of psychosis can sometimes occur in younger people.
They can include delusional beliefs and even hallucinations in people who are predisposed to develop schizophrenia.

Treatment of Schizophrenia

The main treatment for schizophrenia is the use of antipsychotic medications. These medications address the symptoms of schizophrenia, such as hallucinations and delusions. They typically work better on the positive symptoms, but they can be used to treat some of the negative symptoms. In general, there are two different groups of antipsychotic medications: typical antipsychotics and atypical antipsychotics.

Typical Antipsychotics

The first group of antipsychotic medications is the older antipsychotics, also known as typical antipsychotics. They may also be referred to as first-generation antipsychotics or conventional antipsychotics. They primarily work on the neurotransmitter dopamine. Drugs in this group include, for example, haloperidol and chlorpromazine. These drugs are associated with some significant side effects (see below).

Second-Generation or Atypical Antipsychotics

Newer medications are referred to as atypical antipsychotics or second-generation antipsychotics. These medications work on dopamine and also on other neurotransmitters like serotonin. They may be better able to control both the positive and negative symptoms of schizophrenia, without some of the more serious side effects that can occur with antipsychotic medications. Drugs in this group include, for example, risperidone, olanzapine, quetiapine, and clozapine.

Severe Side Effects From Typical Antipsychotics

Typical antipsychotics may increase levels of a hormone known as prolactin, which can affect the menstrual cycle of women, cause the growth of breast tissue in men (and in women), and lead to decreased sex drive and mood swings. Typical antipsychotics are also believed to have a higher probability of producing extrapyramidal side effects. These effects include issues with coordination and motor control that often display as uncontrollable movements. The following extrapyramidal side effects may occur as a result of antipsychotics:

- Parkinsonism refers to symptoms that are similar to Parkinson's disease but are caused by the use of medication. They include slow thinking, slow movements, tremors, rigid muscles, facial stiffness, and difficulty speaking.
- Akathisia is a significant feeling of restlessness, making it very hard for the person to sit or stay still. They will often rock back and forth, twirl their fingers, or continually cross and uncross their legs.
- Tardive dyskinesia is a serious movement disorder caused by typical antipsychotics. This movement disorder can affect the tongue, facial muscles, or neck muscles. The person may display tics or uncontrollable movements. If not addressed soon enough, the disorder may become permanent.

Atypical antipsychotics may have these side effects as well, but clinicians and researchers believe they are far less likely to cause them than typical antipsychotics.

Other Side Effects of Antipsychotic Medications

Other side effects can occur with both typical and atypical antipsychotics. These include:

- Weight gain
- Restlessness or drowsiness
- Sexual problems
- Blurred vision
- Low blood pressure
- Low white blood cell count

Therapy for Schizophrenia

The use of psychotherapy as the sole and direct treatment for schizophrenia is not recommended, but if the person's symptoms are controlled by medication, therapy can help. It can be particularly helpful for managing stress, developing social skills, tracking medication compliance, and recognizing the warning signs of relapse.
Schizophrenia often develops in a person's formative years, so the person may not have been able to develop the skills that make them employable. Therapy can help them develop life management skills and may even help them benefit from training that allows them to get a job. Therapy can also help to address family and relationship issues for people with schizophrenia. Support groups allow people with schizophrenia to learn from one another and share their experiences, which can promote overall well-being. With a combination of medication, therapy, and support, living with schizophrenia is possible.

What Is Schizophrenia? (July 2017). American Psychiatric Association.
Understanding the Symptoms of Schizophrenia. (December 2017). Medical News Today.
Schizophrenia. (February 2016). National Institute of Mental Health.
American Psychiatric Association Practice Guidelines for the Treatment of Patients With Schizophrenia. (May 2019). American Psychiatric Association.
Extrapyramidal Side Effects of Antipsychotics Are Linked to Their Association Kinetics at Dopamine D2 Receptors. (October 2017). Nature Communications.
Fresnel mirrors [frā′nel ′mir·ərz] (also Fresnel bimirror), two plane mirrors that form a dihedral angle a few angular minutes less than 180°, used for observing the interference of coherent light beams; proposed by A. J. Fresnel in 1816. When the mirrors I and II (see Figure 1) are illuminated by a source S, the beams of rays reflected from them may be considered to have originated from coherent sources S1 and S2, which are virtual images of S. Interference occurs in the region where the beams cross. If S is linear (a slit) and parallel to the edge of the Fresnel mirrors, then upon illumination by monochromatic light an interference pattern in the form of parallel, evenly spaced dark and light bands, or fringes, is observed on the screen M, which can be set up anywhere in the region where the beams cross. The wavelength of the light can be determined from the distance between the bands. Experiments conducted with Fresnel mirrors were one of the crucial proofs of the wave nature of light.
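For reference, that last step can be made quantitative with the standard two-source interference relation (a sketch; the formula is not spelled out in the entry itself). If the virtual sources S1 and S2 are separated by a distance $d$ and the screen M lies at a distance $L \gg d$ from them, the fringe spacing is

$$\Delta x = \frac{\lambda L}{d}, \qquad \text{so that} \qquad \lambda = \frac{d\,\Delta x}{L}.$$

Measuring $\Delta x$, $d$, and $L$ therefore yields the wavelength $\lambda$ of the light.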
Each chapter in each book has its own assignment. Each assignment includes a vocabulary list of words used in that chapter. We present the words not in alphabetical order, but in the order you will hear them. Instead of providing definitions, we thought you might want the listener to look each word up, or put the words in alphabetical order, or write their own definitions based on what they think each word might mean (from context) and then look it up and see how they did. Each vocabulary list is followed by questions related to the content of the chapter. Our goal is to ask at least one question in each of the following categories: History, Geography, Language, and Character. After the questions, we provide suggestions for activities the listener might do to reinforce the information in that chapter.
The Quebec Act

Britain's 1774 implementation of the Quebec Act is often recognized as a source of increased American resentment towards British rule in North America. Along with other British legislation, such as the Tea Act (1773) and the Coercive Acts (1774), the Quebec Act helped spur American colonists towards independence. Traditionally, colonial resentment towards the Quebec Act has been attributed to the increased British control of religion, land distribution, and colonial government in North America granted by the Act. While these fears were legitimate, only one significantly goaded the American colonists into believing actions taken by the British in Canada were a substantial threat to their freedom. It was the fear of Parliamentary supremacy that made the Quebec Act a lightning rod for colonial anger. The Quebec Act proved to American colonists what they already believed—the British were not afraid to restrict colonial governments in order to secure their possessions in North America. Consequently, the Quebec Act's impact extended well past British Canada. It had global ramifications—particularly in the thirteen American colonies.

British Rationale and American Apprehension

Before examining American apprehension about the Quebec Act, it's important to consider the historical environment that produced the Act. For the British, the 1763 Treaty of Paris signaled the end of the Seven Years' War against the French and Spanish, but it also heightened tensions in the American colonies. The war proved very costly, forcing the British to collect more tax revenue from the Americas. Moreover, the British needed to limit American westward expansion to prevent colonists from provoking another war with Native American tribes. American colonists viewed these measures as attempts by the British to expand their imperial authority—thus sowing the seeds for future conflict between Americans and Britons. More specifically, the Treaty of Paris set the stage for British rule in Canada. Article IV of the accord ceded control of the Canadian territory from France to Britain. It also accommodated the religious differences between French Catholics and British Protestants: as Canada changed from French to British hands, the British ensured in the Treaty that Catholic subjects in Canada were allowed to worship freely and received the same rights as Protestants. For the largely Protestant American populace, this perceived promotion of Catholicism was viewed as a dangerous precedent. Eleven years after acquiring Canada in the aftermath of the Seven Years' War, Britain issued the Quebec Act to govern the territory more effectively. With revolution in the American colonies a real possibility, members of Parliament feared that, if war were to break out, French Canadians would support the American rebels. From the British perspective, the Quebec Act was not an attack on the American colonies. Instead, it was a measure to secure the allegiance of Britain's Canadian subjects.

Reforming Canadian Governance

The Quebec Act focused on reforming three areas of Canadian governance—religion, territorial claims, and governmental power structure.

1. To appease the Canadians, the Quebec Act increased political freedoms for Catholics by removing the requirement that government officials in Canada swear an oath making specific reference to Protestantism.
This meant that French-Catholic Canadians could participate in colonial government without abandoning their faith. Furthermore, the Quebec Act offered special privileges for Catholics, such as permitting the collection of tithes and allowing previously banned Jesuit priests into the province. These exceptions frightened American colonists, who feared not only an increase in Catholic power—American colonists were predominantly Protestant—but also that Britain would pursue similar policies by meddling in American religious policy.

2. The Quebec Act consolidated British control in Canada by increasing the size of the province. Article I of the Act outlines the expansion of the Canadian colony into western American territories. These additions nearly tripled the territory of the old French province and granted Canadian colonists much more land for settlement. However, an increase in Canadian territory amounted to a decrease in the land available to American colonists. Americans saw this distribution of western territories as an unfair restraint on American expansion, as well as evidence that Parliament might soon start regulating the borders of the American colonies.

3. The Quebec Act specified the structure of a new Canadian provincial civil government. Between the years 1763 and 1774, Britain made few attempts to structure the civil government of its new colonial possession. With the Quebec Act, though, Parliament set forth a plan for a fairly autocratic government. Per the Act, the head of the Canadian government was to be appointed by the British Crown, and no provisions were put in place for an elected legislative assembly to represent the Canadian people.

Reforms like these terrified the American colonists. At a time when Americans felt underrepresented in government, the existence of a Canadian province that offered no representation to its people was startling. If the British were willing to take these actions in Canada, what would become of the Americans' beloved colonial assemblies? It is not difficult to understand why the American colonists objected strongly to the Quebec Act. Religion, land distribution, and structure of government were all issues that, alone, the Americans could tolerate. However, when British involvement crossed the threshold into absolute rule and threatened the authority of the American colonial assemblies, colonists reacted strongly.

America's Colonial Relationship with Britain

In many respects, American opposition to the Quebec Act differed little from already contentious issues. The distribution of western land, religious affiliation, and structure of government were concerns that the American colonists had long fought over, both with the British and amongst themselves. What was it about the Quebec Act that motivated American colonists so strongly to accelerate their drive for independence? The answer lies in the apprehension and distrust colonists felt towards the British. The Quebec Act set a precedent for British absolute rule in North America—exactly what Americans feared most. The colonists quickly recognized intensified British rule in Canada and deeply feared that it would spread to the American colonies. In one sense, the greatest aspect of Empire, for the Americans, was that it left them alone. Despite being under the rule of British royal governors, American colonists enjoyed effective independence throughout much of the eighteenth century. The governors issued orders, but the colonists often ignored these proclamations and did as they pleased.
The colonists were not entirely satisfied with British governance in the American colonies, but they were largely free and prosperous, so they did not complain. Samuel Adams' 1772 Rights of the Colonists stressed the rights of the colonists in three different respects: as men, as Christians, and as British subjects. Adams' piece is telling because it demonstrates that the colonists saw themselves as proud British citizens who shared a cultural heritage and political principles with their mother country. Tensions with Britain arose not because of differences in principle, but rather over who was implementing those principles—British Parliament or American colonial assemblies. American colonists took a pragmatic approach to their relationship with Britain, as evidenced by their response to the Townshend Acts (1767). The Townshend Acts were a series of duties (taxes) enforced on certain British imports coming into America, such as paper, paint, lead, glass, and tea. The burden of this tariff fell on American colonists. Resistance to the Townshend Acts took the form of boycotting British goods as well as public remonstrations, such as the protests that led to the Boston Massacre. Due to the strong American opposition, Britain repealed almost all of the duties in 1770—except for the duty on tea. After the repeal of the majority of the Townshend duties, the colonists largely accepted British rule. The colonists recognized their shared history with Britain, but more heavily valued the economic prosperity they enjoyed as colonies. As long as the British did not encroach on their relative political and economic autonomy, the American colonies were content as British subjects.

A Threat to American Freedom

The Coercive Acts, also known as the Intolerable Acts, were a series of British laws passed in the spring of 1774, intended to assert control over the increasingly unruly American colonies. The Coercive Acts consisted of four separate pieces of legislation that all greatly increased British power in the colonies. The Acts explicitly affected the colonies by:
- Closing Boston's ports in retaliation for the Boston Tea Party
- Allowing the quartering of British soldiers in private American homes
- Exempting British officials from having to stand trial in America
- Limiting the powers of colonial assemblies, while increasing the powers of royal appointees

The glaring difference between the Quebec Act and the rest of the Coercive Acts is that the Quebec Act had no direct effect on American colonists. It makes sense that measures such as the Quartering Act—which allowed British soldiers to be housed in American homes—angered the colonists, because they directly diminished the colonists' control over their own lives. The colonists, however, deemed the Quebec Act equally intolerable because they perceived it as a direct threat to their colonial governments and the freedom they had previously enjoyed under British rule. If the colonists had been primarily motivated by anything other than economic and political freedom, the Quebec Act would not have been regarded as being as egregious as the rest of the Coercive Acts. These complaints are prominent in the Declaration of Independence.
As one of the first accusations hurled at the British king, the Continental Congress attacks King George III, claiming that "he has refused to pass other Laws for the accommodation of large districts of people, unless those people would relinquish the right of Representation in the Legislature, a right inestimable to them and formidable to tyrants only." This passage is clearly referencing Britain's treatment of her Canadian province. The Declaration of Independence does not mince words when it comes to making direct complaints against King George. If the colonists were incensed by British religious or land policy in Canada, then one would expect such frustrations to make an appearance in the Declaration. However, the only allusion to the Quebec Act in the Declaration is the previously mentioned excerpt, which reflects the colonial fear that Britain had a history of tyranny and of denying her subjects appropriate representation. By itself, the reference to the Quebec Act in the Declaration of Independence carries a lot of weight. The Declaration unified colonial complaints against Britain and lodged them against the king all at once. Since fear over the Quebec Act made it into the Declaration, the concern was both real and significant. This concern is amplified when coupled with other evidence proving that the Americans valued the economic and political freedom that their relationship gave them, as well as the divided history the colonies had with respect to religious and land policies. All put together, it becomes clear that the worry that Britain would restrict American political and economic freedoms was the greatest and most significant reason the Quebec Act drove the American colonies towards independence.

For more information:
- Visit the U.S. History Scene reading list for the American Revolution.
Credit & Copyright: Martin Pugh

Explanation: South of Antares, in the tail of the nebula-rich constellation Scorpius, lies emission nebula IC 4628. Nearby hot, massive stars, millions of years young, radiate the nebula with invisible ultraviolet light, stripping electrons from atoms. The electrons eventually recombine with the atoms to produce the visible nebular glow. This narrowband image adopts a typical false-color mapping of the atomic emission, showing hydrogen emission in green hues, sulfur as red, and oxygen as blue. At an estimated distance of 6,000 light-years, the region shown is about 80 light-years across. The nebula is also cataloged as Gum 56 for Australian astronomer Colin Stanley Gum, but seafood-loving astronomers might know this cosmic cloud as The Prawn Nebula.
Tutor profile: Alyn S.

How is Microeconomics relevant to students today?

Microeconomics is a fundamental building block for studying the relationship between markets and goods and services. It is the branch of economics that analyzes the decisions made by an individual selling a single good. Through the use of supply and demand, economists are able to determine equilibrium prices and quantities in a given scenario. This study is relevant to students because it is a basic concept that many other subjects build upon. The idea of supply and demand in a world of limited resources is a very real problem that we face today. With resource scarcity becoming an increasingly large problem, it is crucial for students to be equipped with the knowledge necessary to make educated decisions in the future.

How is Macroeconomics relevant to students today?

In today's world, nearly every corner of the planet is connected via the internet. With this kind of development come new trades and markets. Macroeconomics is the study of broad movements in the aggregate economy. In other words, it pertains to the entire economy as a whole. As global trade relationships become increasingly complex, macroeconomics allows us to measure changes in economic growth, activity, and policy, to name a few. Students today would gain knowledge of the way capital flows on a large scale through the study of macroeconomics.

How is English grammar relevant to students today?

Grammar is one of the essential building blocks of any language. Although common speech may have drifted toward slang, correct grammar is still a vital part of formal education today. Learning to develop a concise thesis and clear organization makes a huge difference in a student's academic life as well as their future career.
Goals: The student will
- Learn to distinguish between weight and mass. Mass is really just another name for inertia. Both mass and weight are properties of matter, and all observations suggest that they are proportional to each other. Weight is the force by which an object is attracted by gravity. Mass is the extent to which it resists acceleration.
- Learn that in oscillations of a mass against an elastic spring--in the absence of gravity, or in horizontal motion--the length of the oscillation period is proportional to the square root of the mass. This makes it possible to compare masses without the use of gravity.
- Learn about space station Skylab and the measurement of astronaut mass conducted aboard it.

Terms: mass, weight, inertia, zero-g

Stories and extras: The story of Skylab and studies of weight loss by its crew members.

Hands-on activities: A simple experiment with a clamped hacksaw blade, containing some elements of the Skylab measurements.

Notes to the teacher:
- These linked sections are relatively free of mathematics, because they stress the intuitive distinction between weight and mass, a subject on which many students and even some teachers are unclear. It is hoped that the distinction is made clear by approaching it in more than one way, and illustrating it by as many examples as possible.
- Some teachers still maintain that a two-pan balance measures weight, while a spring balance measures mass. This is misleading and should be avoided: both devices rely on gravity, and therefore both measure weight. The way they do so differs. The two-pan balance compares the weight of the object being examined to that of a set of standard weights in the other pan. The spring balance, on the other hand, compares that weight to the pull of a calibrated spring. Thus on the Moon, where gravity is only 1/6 of its value on Earth, the spring balance will record a smaller weight, but the two-pan balance will not. That is because on the Moon, the pull of the spring is unchanged, but the balancing weights in the other pan also weigh only 1/6 of what they weigh on Earth. In both cases, however, what is measured is weight, not mass, because gravity is involved.

Today we return to two concepts already discussed in a previous lesson--weight and mass. They often get confused! Since both are measured in kilograms or pounds, many students (even some teachers) feel both represent the same thing.

(Write the following on the board)
(End of words copied from the board)

Galileo showed by experiments that (disregarding air resistance) big stones fell no faster than small ones. Newton asked himself: why? If they were pulled down by a stronger force, why didn't they fall any faster? He guessed the reason. All material also resists acceleration. A big stone with 10 times the weight of a small one also has 10 times the resistance, and therefore it does not fall any faster. Newton named the resistance to acceleration inertia. We call it mass. If the only use of the concept of mass were to explain why big and small stones, in free fall, accelerate at the same rate, it would not be very useful. However, there also exist many motions in which gravity plays no role--horizontal motions on Earth, and motions in "zero g" in space. Weight does not drive such motions, but inertia remains an important factor. Continue with examples from the lesson, of a rolling bowling ball and a rolling wagon--both starting their motion and stopping it.
Examples already mentioned: we read about train locomotives hitting cars which stalled on railroad tracks, because those trains were too massive to stop quickly. Supertankers (aka "large crude oil carriers"), ships of 200,000 tons and more, are even harder to stop when fully loaded, taking several miles to do so.

[This is probably no longer topical, relating to matrix printers, widely used in the 1980s, before laser printers and ink-jet printers entered wide use. They used tiny metal slugs which pushed an inked ribbon against the paper output from a computer. The slugs were arranged in an array (matrix) and, depending on which of them were pushed (by magnets), letters and symbols were formed. They are still used in industry to mark boxes. The slugs had to be tiny and very light, so they could hit and rebound very rapidly.]

Then go over the Skylab story. As a project, some students may prepare a presentation on Skylab, based on the October 1974 article about it in "National Geographic." All past issues of that magazine are available on compact disks, or in paper copies in libraries. The hands-on experiment in section (17b) may be performed as the teacher chooses--together with the Skylab discussion, before it, or afterwards.

Guiding questions and additional tidbits with suggested answers.

-- What is the difference between the mass and weight of a bowling ball?
The ball has both weight and mass. Its weight makes it hard to lift. Its mass makes it hard to get rolling, and also hard to stop.

-- What do we mean by the ball's weight?
Its weight is the force by which gravity pulls the ball down.

-- What do we mean by the ball's mass?
The ball's mass is its inertia, its resistance to acceleration.

-- Suppose that some time, in the far future, a bowling alley is built on the Moon, where gravity is 1/6 of what it is on Earth. Would it be easier there to roll the ball down the alley?
It would be easier to lift the ball off the floor, but not any easier to get it rolling.

-- An astronaut in a space suit, in the space shuttle bay, tries to push a one-ton scientific satellite out of the bay, but the satellite proves very hard to move. If it is weightless, why should it be so?
In the moving frame of reference of the space shuttle, it has no weight, but it has one ton of mass.

-- Should the astronaut give up trying to push it?
Not necessarily. If he keeps pushing it will accelerate--it just does so very slowly. In a minute it might be moving fast enough to float out of the bay. At this point, however, the astronaut had better be ready to let it float away--trying to stop it would be just as hard!

-- On Earth we drop from a high point a bowling ball and a marble. The marble has only 1/1000 of the weight of the ball, but it falls just as fast. Why?
The marble also has only 1/1000 of the inertia or mass of the bowling ball. By Newton's law, a = F/m. Both F and m for the marble are 1/1000 times less, but their ratio is the same as for the bowling ball, and therefore the marble accelerates at the same rate.

-- If the Earth's gravity reaches up to the Moon (which is held by it), how can we have a "zero gravity" environment aboard a space station that orbits a mere 300 miles above ground?
Gravity does act on the space station, too--that is what keeps it in its orbit. In fact, gravity is the only external force acting on it and on the astronauts inside (same as it is in free fall). That means that inside the station, no additional force pulls objects towards Earth.
In the reference frame of the space station it feels like "zero g", because no outside force is evident. ["Stargazers" returns to this matter in a later section, where frames of reference are discussed.]

-- Before electronic wrist-watches were introduced (around 1980), mechanical ones were used. How were they designed to operate in any position?
They obviously could not depend on gravity, so they too used a spring and an oscillating mass. The mass was a balance wheel, which rotated back and forth against a spiral spring. [It might be possible to show the class an old mechanical alarm clock with its back removed, provided the balance wheel is clearly visible, which often is not the case.]

(Questions about the "Skylab" section, #17a)
-- How can mass be measured in "zero g"?
-- How was astronaut mass measured aboard Skylab?
-- What did measurements of astronaut mass aboard Skylab reveal?

(Question after the experiment in section #17b)
-- Suppose the same hacksaw blade described in the author's experiment in section #17b was also used in the mass-measuring device aboard Skylab. If the device carried an astronaut with a mass of about 70 kg, what would be its back-and-forth period?
In the notation of that section, for m1 = 50 g, the blade gave a period T1 of 0.5 seconds. If m2 = 70 kg, then m2/m1 = 1400 and SQRT(m2/m1) = 37.42; multiply by T1 = 0.5 sec to give T2 ~ 18.7 seconds.
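The same arithmetic generalizes to any mass, and a short script may help students check variants of this exercise. Here is a minimal sketch in Python (the function name is illustrative; the 50 g / 0.5 s defaults are simply the reference values from the answer above):

```python
import math

def period_for_mass(m_kg, ref_mass_kg=0.050, ref_period_s=0.5):
    # The period scales with the square root of the oscillating mass:
    # T2 = T1 * sqrt(m2 / m1)
    return ref_period_s * math.sqrt(m_kg / ref_mass_kg)

print(round(period_for_mass(70), 1))  # ~18.7 seconds for a 70-kg astronaut
```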
Celastrales, small order of flowering plants that includes 3 families, some 100 genera, and about 1,350 species. In the Angiosperm Phylogeny Group II (APG II) botanical classification system, Celastrales is placed in the Rosid I clade (see angiosperm).

Celastraceae, or the bittersweet family, contains about 90 genera with some 1,300 species of trees, lianas, and herbs found throughout temperate and especially tropical regions. Maytenus (including Gymnosporia) contains about 200 species, Salacia about 150 species, and Hippocratea (including Loesneriella) about 120 species; all are pantropical. Euonymus (130 species) is more temperate, while Celastrus (32 species) is intermediate in its preferences. Fresh leaves of Catha edulis (khat), a plant native to Africa and Arabia, yield a stimulant when chewed. It is especially popular in much of the Middle East. Species of Celastrus and Euonymus in particular are commonly cultivated as ornamental shrubs. Many species in Celastraceae secrete gutta, or latex, and, if a leaf is pulled apart transversely, the two halves often hold together by thin strands of dried latex. The leaves can be spiral or opposite on a single plant, and in some species they are two-ranked. The leaf margins often have teeth, although these may be minute, and there are stipules that are quite often fringed. The flowers are usually small, with three or five stamens facing inward or outward and borne on the outside or inside of the prominent nectary disc. The ovary often has three compartments. The fruits and seeds are variable; the former may be fleshy or dry, and the latter may be winged, with a fleshy appendage or aril, or simply rounded.

Parnassiaceae, with two genera, includes annual to perennial herbs. Parnassia contains 50 species that grow in the north temperate to Arctic region. Lepuropetalon spathulatum, the only species of its genus, occurs in the southeastern United States and Mexico. The leaves in the family have no stipules, and the flowers are single or obviously cymose. There are five stamens and five staminodes, or nonfunctional stamens. The latter are opposite the corolla, and at least sometimes nectar is secreted at their bases. The ovules are borne on the walls of the ovary, and the fruit is a capsule containing many small seeds. L. spathulatum is one of the smallest flowering plants growing on land. It is often less than 2 cm (about 0.8 inch) tall, and its flowers lack petals. The staminodes of Parnassia are branched, each branch ending in a shiny yellow knob; these appear to mimic nectaries.

Lepidobotryaceae is a small family of two genera and two or three species of trees, Lepidobotrys staudtii being known from East Africa and Ruptiliocarpon caracolito growing in Central and South America. They have simple two-ranked leaves that are jointed at the base of the blade and have small paired leafy structures, or stipels, as well as ordinary stipules where the leaf joins the stem. The inflorescence seems to arise opposite the leaves. The flowers are small, male and female flowers being borne on different plants. The sepals and petals are free, and the 10 stamens are joined at their bases. The fruit wall has a layer with distinctive radial fibres that separate from a horny inner layer.
Egg binding refers to a common and potentially serious condition where a female bird is unable to pass an egg that may be stuck near the cloaca, or further inside the reproductive tract. Even though egg binding can occur in any female bird, it is most common in smaller birds such as lovebirds, cockatiels, budgies, and finches. The potential for an egg breaking inside the tract is high, which can then result in an infection or damage to internal tissue and, if left untreated, death. The bound egg may be gently massaged out; failing this, it may become necessary for a vet to break the egg inside and remove it in parts. If broken, the oviduct should be cleaned of shell fragments and egg residue to avoid damage or infection.

Suspected causes for egg binding include:
- Low calcium levels, or hypocalcaemia syndrome, associated with low calcium levels in the blood. Supplementing the breeding hen with a diet rich in calcium and vitamin D is an important factor in preventing this problem. You could provide a dish filled with crushed egg shell (from boiled eggs, to kill any bacteria) and/or attach a calcium/mineral block to the cage. In areas where access to natural sunlight is limited (such as in the northern hemisphere during the winter months), full spectrum lamps can be used to provide the UVA and UVB rays that support natural vitamin D production. Potentially discuss supplementation with your vet. Supplementation needs to be carefully screened and supervised by a vet, since an excess of vitamin D (in the form of a supplement) causes kidney damage and retards growth. Relevant article: Natural Calcium for Birds - Sources and Absorbability.
- Malnutrition caused by seed-only or low-protein diets.
- Sedentary lifestyle: often the case when birds are kept in enclosures/cages that are too small for them. The lack of exercise causes poorly developed muscles and obesity.
- Sick and old birds are at particular risk.
- Pet birds can also develop this problem, as birds don't need a mate to lay eggs. (Obviously, solitary egg-laying females won't produce fertile eggs.)

Symptoms include loss of appetite, depression, abdominal straining, and sitting fluffed on the bottom of the cage. Some hens may pass large wet droppings, while others may not pass any droppings due to the egg's interfering with normal defecation. If you suspect that your bird is egg-bound, she should be seen by a vet immediately. The veterinarian may be able to feel the egg in the bird's abdomen. An X-ray may be necessary to confirm the diagnosis. Sometimes medical treatment will enable the hen to pass her egg. Occasionally surgery is necessary. Complications from being egg bound can include swelling, bleeding, or prolapse of the oviduct. If in doubt as to whether the hen is egg bound or not, a few vet sites recommend separation, warmth, a warm bath, and calcium for all hens in lay that seem distressed. This is a life-threatening condition and should be addressed by a qualified avian vet.

Your vet may discuss:
- Calcium shots - an immediate solution to help the egg shell harden, allowing the hen to hopefully pass it
- Lupron shots to stop hens from going into breeding condition
- Spaying your hen as a permanent solution

The following are samples of actions that have resolved this problem for some birds (please note: not all hens can be saved, especially if the situation is critical by the time the problem is discovered and no vet is available or can be reached in time).
Egg-bound hens go into profound cardiovascular collapse and may not be able to put in the effort to push the egg out without intervention.
- Suspected egg binding: Keep her in a warm area. Provide supportive care.
- Place the bird into a steamy room, such as a bathroom with the shower on until the bathroom mirrors and windows steam up. Desired temperature: 85-90 degrees Fahrenheit / humidity: 60%. Place the bird on a wet towel. The warmth relaxes the hen so that the vent can dilate more, allowing the egg to pass.
- A warm water bath can also be of great help (shallow water, of course; you don't want to drown the hen). This relaxes her muscles, and often the hen will pass the egg into the water. Make the water as warm as you would like to take a long soak in.
- Massage the muscles in that area with olive oil. In many cases, this has led to a successful passing of the egg. Note: there is a risk associated with massaging this area. It could cause the egg inside to break, which is life-threatening. Be very careful! If in doubt, it's always best to have the vet take care of it.
- Even if the cause is not hypocalcaemia in this hen's case, it will not hurt her to have more calcium.
- Applying a personal lubricant, such as KY jelly, to her vent may also be helpful.
- To reduce swelling of her vent, some breeders have reported success in applying Preparation H.
- Successful passing of the egg: Following passing of the egg, keep the hen in a warm and quiet area, separate from the others, until she is out of shock and back to eating and drinking well.
- Prevention: Provide the bird with high-calorie, high-calcium food to help strengthen future eggs and prevent egg binding.
Why are armadillos used for research? Odd though it may seem, armadillos might someday help cure leprosy (Hansen’s disease). Researchers have found that the core body temperature of the armadillo is low enough to favor the growth of the leprosy-causing bacterium Mycobacterium leprae. While this microorganism has been grown in other types of animal tissue, no animal model had previously been found that regularly contracted the most virulent form of the disease (lepromatous leprosy). Because the bacillus only tends to grow in cooler parts of the body, such as the feet, nose and ears, large amounts of bacteria could not be grown (attempts to grow the microorganism in vitro have not been successful). The armadillo, however, has a lower body temperature than most mammals, resulting in rapid development of the disease following inoculation. Because of the armadillo, scientists have been able to develop a vaccine against leprosy. The nine-banded armadillo has become the principal source of M. leprae in biochemical and immunological research. Although there has been some concern about humans contracting leprosy from wild armadillos, this is not a common occurrence. My understanding is that most instances of humans contracting leprosy from armadillos involve people who have eaten undercooked armadillo meat. (You can read more about this on the Armadillos as Food page.) Because of their unique double-twinning, nine-banded armadillos are also studied to learn more about multiple births and other reproductive issues. Some researchers have also explored the possibility of using armadillos in HIV studies. In the past, the nine-banded armadillo has been used for skin and organ transplant experiments, tests of cancer-causing agents, and experiments on drug metabolism. The fact that one animal produces four identical young has been very helpful to scientists, because no experiment is acceptable without proper controls. Identical animals means that any differences seen between an experimental animal and the control animal are a result of the treatment, and not due to different genetic makeups.
One of the many conditions treated by the Beaumont Children's pediatric surgeons, Hirschsprung's disease occurs when some of the nerve cells that are normally present in the intestine do not form properly while a baby is developing during pregnancy. Hirschsprung's disease occurs in 1 out of every 5,000 live births. As food is digested, muscles move it forward through the intestines in a movement called peristalsis. When we eat, nerve cells that are present in the wall of the intestines receive signals from the brain telling the intestinal muscles to move food forward. In children with Hirschsprung's disease, a lack of nerve cells in part of the intestine interrupts the signal from the brain and prevents peristalsis in that segment of the intestine. Because stool cannot move forward normally, the intestine can become partially or completely obstructed (blocked) and begins to expand to a larger than normal size. The problems a child will experience with Hirschsprung's disease depend on how much of the intestine has normal nerve cells present.

Causes of Hirschsprung's Disease

Between the 4th and the 12th weeks of pregnancy, while the fetus is growing and developing, nerve cells form in the digestive tract, beginning in the mouth and finishing in the anus. For unknown reasons, the nerve cells do not grow past a certain point in the intestine in babies with Hirschsprung's disease.

Symptoms of Hirschsprung's Disease

Most children with Hirschsprung's disease show symptoms in the first few weeks of life. Children who only have a short segment of intestine that lacks normal nerve cells may not show symptoms for several months or years. The following are the most common symptoms of Hirschsprung's disease. However, each person may experience symptoms differently. Symptoms may include:
- not having a bowel movement in the first 48 hours of life
- gradual bloating of the abdomen
- vomiting green or brown fluid

Children who do not have early symptoms may also present the following:
- constipation that becomes worse with time
- loss of appetite
- delayed growth
- passing small, watery, bloody stools
- loss of energy

Symptoms of Hirschsprung's disease may resemble other conditions or medical problems. Please consult your child's doctor for a diagnosis.

How is Hirschsprung's disease diagnosed?

A doctor will examine your child and obtain a medical history. Other tests may be done to evaluate whether your child has Hirschsprung's disease. These tests may include:
- Abdominal X-ray. A diagnostic test which may show a lack of stool in the large intestine or near the anus, and dilated segments of the large and small intestine.
- Barium enema. A procedure performed to examine the large intestine for abnormalities. A fluid called barium (a metallic, chalky liquid used to coat the inside of organs so that they will show up on an X-ray) is given into the rectum as an enema. An X-ray of the abdomen shows strictures (narrowed areas), obstructions (blockages), and dilated intestine above the obstruction.
- Anorectal manometry. A test that measures nerve reflexes which are missing in Hirschsprung's disease.
- Biopsy of the rectum or large intestine. A test that takes a sample of the cells in the rectum or large intestine and then looks for nerve cells under a microscope.
Treating Hirschsprung's Disease

Specific treatment for Hirschsprung's disease will be determined by your child's doctor based on the following:
- The extent of the problem
- Your child's age, overall health, and medical history
- Your child's tolerance for specific medications, procedures, or therapies
- Expectations for the course of the disorder
- The opinion of the health care providers involved in the child's care
- Your opinion and preference

An operation is usually necessary to deal with intestinal obstruction caused by Hirschsprung's disease. The surgeon removes the portion of the rectum and intestine that lacks normal nerve cells. When possible, the remaining portion is then connected to the anal opening. This is known as a pull-through procedure. At times, a child may need to have a colostomy done so stool can leave the body. With a colostomy, the upper end of the intestine is brought through an opening in the abdomen known as a stoma. Stool will pass through the opening and then into a collection bag. The colostomy may be temporary or permanent, depending on the amount of intestine that needs to be removed. After a healing period, many children can have the intestine surgically reconnected to the anal opening and have the colostomy closed.
The simple answer is that deep inside the core of the Sun, enough protons can collide into each other with enough speed that they stick together to form a helium nucleus and generate a tremendous amount of energy at the same time. This process is called nuclear fusion. Every second, a star like our Sun converts 4 million tons of its material into heat and light through the process of nuclear fusion.

The Details

Our Sun has provided an essentially constant amount of heat and light to Earth for about 4.5 billion years. But just what generates this energy for such a long period of time? Scientists of the 19th century believed that the Sun was powered by chemical reactions. However, calculations showed that a star powered on chemical energy would only last maybe a thousand years or so. In the mid-1800s two physicists, Lord Kelvin and Hermann von Helmholtz, put forward the idea that the huge weight of the Sun's outer layers should cause the Sun to gradually contract. As it contracts, the gases in its interior become compressed, and when a gas is compressed its temperature increases. Kelvin and Helmholtz argued that gravitational contraction would cause the Sun's gases to become hot enough to radiate heat energy into space. This process does in fact happen in the protostar phase of stellar formation. However, this kind of contraction cannot be the main source of stellar energy for billions of years; a hundred million, maybe, but not a billion. A clue to the source of stellar energy was provided by Albert Einstein. In 1905, while developing his special theory of relativity, Einstein showed that mass can be converted into energy and vice versa. These quantities are related by the mass-energy relation E = mc², where E is the energy released (in units called joules) from the conversion of a mass m (in units of kg), and c is the speed of light (in meters per second). In 1920, British astronomer Arthur Eddington proposed that the Sun and other stars are powered by nuclear reactions. Hans Bethe realized that a proton smashing into another proton with enough force could be the reaction to power the Sun. In 1938, Bethe and his colleagues presented a fully developed proton-proton chain of reactions that converts hydrogen into helium, which would allow the Sun to shine for about 10 billion years. The proton-proton cycle, as it is now called, is known to be responsible for about 98% of the Sun's energy production in its inner core. Bethe won the 1967 Nobel Prize in Physics for his work concerning energy production in stars.

Even More Details

Planet Earth, our bodies, and shining stars are all made of the same basic elements of matter. To understand why stars shine, we must first understand the tiny particles that make up matter. Scientists have studied matter in their laboratories for many, many years. What they have learned is that matter is made up of different kinds of atoms: hydrogen atoms, carbon atoms, and iron atoms, for example. Each kind of atom has a certain unique number of particles called protons, neutrons, and electrons in it. The protons and neutrons cluster together in the center of the atom in what is called the nucleus. The electrons orbit around the nucleus. Atoms are very, very small. A hundred million atoms placed side by side in a row would only be about 1 inch long! The simplest atom is hydrogen. The nucleus of a hydrogen atom consists of a single proton. Around this proton orbits a single electron. There is more hydrogen in the universe than any other kind of atom.
Helium is the second lightest (or simplest) atom. It consists of a nucleus containing 2 protons and 2 neutrons. Around the nucleus orbit 2 electrons. A helium nucleus can be created by smashing enough protons into each other that they fuse, or stick, together. Heat and pressure inside the core, or center, of the Sun are so high that protons can hit each other hard enough to bond together. If four protons are smashed together, the result is two protons, two neutrons, two positrons, and some energy. A positron is a small particle similar to an electron, but with a positive electric charge. (Remember that protons have a positive charge, electrons have a negative charge, and neutrons have no charge.)

[Figure: The stages of the proton-proton cycle.]

Why the extra energy? Einstein showed us that E = mc², which tells us that any loss in mass (m) results in the appearance of energy (E). c is the speed of light and is equal to 300,000 km/second. A helium nucleus is only 99.3% as heavy as four protons. The missing mass is converted into energy. It is this energy which causes the star to shine and stops it from collapsing due to the pull of gravity. Does the conversion of this tiny bit of mass into energy create enough energy to account for the Sun's output of 390,000,000,000,000,000,000,000,000 watts? Sure it does - perform the calculation!
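Taking up that invitation, here is the check as a short Python sketch (using the round numbers quoted above; 4 million metric tons per second is 4 × 10⁹ kg per second):

```python
# E = m * c**2 applied to the Sun's quoted rate of mass-to-energy conversion
c = 3.0e8                 # speed of light, m/s
mass_per_second = 4.0e9   # ~4 million metric tons per second, in kg
power_watts = mass_per_second * c**2
print(f"{power_watts:.1e} W")  # ~3.6e26 W
```

The result, roughly 3.6 × 10²⁶ watts, matches the quoted output of about 3.9 × 10²⁶ watts to within the rounding of the input figures.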
Millions of people in the United States are affected by different types of allergies. Knowing what causes or triggers an allergic response is important in providing appropriate and effective treatment for the allergy. With the development of new technologies, a number of new medical tests have been introduced to identify allergens. Reports of allergy tests are then evaluated alongside symptoms and a detailed medical history to identify the allergy-causing substances. Skin tests and blood tests are the two commonly performed tests to find the allergens that trigger a response. Skin tests are more common, as they are less expensive and provide faster results. In skin tests, a small amount of an allergen is placed on the skin to test for a reaction. There are different types of skin tests:
- Skin prick test – In this test, a small drop of the allergen is placed on the skin. The solution containing the allergen enters the skin through small scratches or pricks made on the skin. The development of raised, itchy, red skin at the testing area indicates an allergic response.
- Intradermal test – This test is more sensitive than the skin prick test and is usually done if the skin prick results are negative but the substance is still suspected to be an allergen. In this test, a small amount of the solution containing the allergen is injected into the skin.
- Skin patch test – In this test, a small amount of the allergen is placed on a pad attached to the skin. The skin patch test is usually used to detect allergic dermatitis.

Skin prick tests are usually used to test for the presence of mold, dust, feathers, pet dander, and food allergens. They also help in identifying whether a person is allergic to medicines. Although it is not as sensitive as skin testing, a blood test is conducted for people who are unable to have skin tests for allergies. One of the most common types of blood tests conducted for allergy is the enzyme-linked immunosorbent assay, or ELISA. This test is used to measure the amount of immunoglobulin E (IgE) in the blood; these antibodies are produced in excess in people with allergies. Blood tests are conducted in cases of skin conditions, like eczema and hives, and of severe allergic reactions.
The use of asbestos in building materials has decreased in the United States during the past several decades due to its worldwide recognition as a deadly carcinogen. At the same time, however, American workers may still find themselves at risk for asbestos exposure due to the fact that many old buildings still contain asbestos. Further, while asbestos is banned from use in some products, there is no federal asbestos ban. While the Environmental Protection Agency (EPA) did enact such a ban in 1989, it was overturned two years later by a federal appeals court. Without a permanent asbestos ban in place, millions of workers involved in construction trades, ship repair, renovating old homes, or disaster response continue to face a greater risk of asbestos-related diseases such as mesothelioma. It is crucial for affected workers to understand the dangers of asbestos exposure. While government safety regulations have been in place for many years, they have not yet been sufficient to stop the estimated 12,000-15,000 asbestos-related deaths occurring annually.

Asbestos Exposure: Where It Happens and the Federal Protection Guidelines in Place

Federal asbestos regulations are stated in the Occupational Safety and Health Administration's (OSHA) Asbestos General Standard and Asbestos Construction Standard. These government standards dictate the legal amount of exposure workers may have to asbestos and the procedures that must be used when employees are at risk of exposure. Among other safety measures, employers are required to provide protective clothing and to use filtration systems to reduce the volume of airborne asbestos. The standards also dictate proper asbestos disposal and mandate that workers have medical examinations after prolonged exposure. Workers can develop mesothelioma even after very low levels of asbestos exposure, so it is crucial to monitor their health very closely. Pleural mesothelioma is a type of cancer caused by the inhalation of asbestos fibers that enter tiny airways in the lungs. These fibers can irritate the pleural lining of the chest, which, over time, can lead to mesothelioma. Even though asbestos has not been used widely in the United States since the 1980s, many workers in construction trades, such as carpentry, renovation, demolition, roofing, flooring, and general labor, to name a few, were exposed to asbestos over a period of many years. Importantly, the risk of developing mesothelioma does not decrease following exposure. Individuals who develop mesothelioma face a devastating prognosis. If the disease is found at Stage I, the average survival time is 21 months; if it is detected at Stage IV, that estimate decreases to a mere 12 months, and often less. The five-year survival rate is only 5 to 10 percent.

Recent CDC Study Points to Continued Asbestos Exposure

Even with OSHA guidelines in place, a recent CDC study of mesothelioma deaths between 1999 and 2015 suggests asbestos exposure continues to be a threat for workers. The report noted that while there were 2,479 total mesothelioma deaths in 1999, that number increased to 2,597 in 2015. A majority of these deaths were among individuals who worked in or around construction sites. The study's authors noted that the mesothelioma risk for construction workers is still much higher than it should be, particularly given the number of protections employers are mandated to use on job sites.
The data indicated an increase in the number of deaths of individuals over 85 years old, yet "the continuing occurrence of mesothelioma deaths among persons aged <55 years suggests ongoing occupational and environmental exposures to asbestos fibers and other causative EMPs." The researchers also suggested, "new cases might result from occupational exposure to asbestos fibers during maintenance activities, demolition and remediation of existing asbestos in structures, installations, and buildings if controls are insufficient to protect workers." Even with protective measures in place, the amount of airborne asbestos on construction sites has still been shown to exceed OSHA guidelines. As stated in the study, for example: "20 percent of air samples collected in the construction industry in 2003 for compliance purposes exceeded the OSHA permissible exposure limit." What the recent CDC study stresses is that workplace exposure also puts the families of workers at risk for developing asbestos-related diseases. Since asbestos fibers can stick to clothing, workers may take them home if their garments are not properly cleaned after being on a worksite.

Not a Problem of the Past

Contrary to the hope that asbestos-related deaths would decline due to decreased use in building materials and increased agency regulation, the CDC study indicates this is not the case. Continued exposure to asbestos still occurs today. Employers in the construction trades must be aware of the risks and take all necessary precautions to ensure employee and public safety.
The main effect of an electromagnetic wave is basically that the electric field in the wave shoves charged particles (ions and/or electrons) around. That's called "electric dipole coupling". Electric dipole coupling is almost always much stronger than the other effects of the electromagnetic wave. For example, the electromagnetic wave has a magnetic field too, which can exert forces on molecules. This "magnetic dipole coupling" is a much smaller effect than the electric dipole coupling. Shoving an electron around (the electric dipole coupling) does not directly change or rotate the electron's spin. It only changes the spin a little bit, due to "spin-orbit coupling", an effect related to special relativity (in the "rest frame" of the moving electron, the electric fields get converted to magnetic fields, which torque the spin). Therefore, since spin-orbit coupling is a weak effect (unless the electron is traveling near the speed of light), the coupling between electromagnetic waves and molecules cannot usually involve a change of spin. That means that a spin-singlet molecule absorbing light almost definitely goes into a spin-singlet excited state. Conversely, it is exceedingly unlikely for a spin-triplet excited state to emit light while changing into a spin-singlet ground state. By the way, another possibility is for the magnetic field of the light wave to directly torque the spin of the electron. This effect is normally even weaker than the effect the wave has via spin-orbit coupling.
Sedimentation is a physical water treatment process that uses gravity to remove suspended solids from water. Solid particles entrained by the turbulence of moving water may be removed naturally by sedimentation in the still water of lakes and oceans. More generally, sedimentation is the tendency for particles in suspension to settle out of the fluid in which they are entrained and come to rest against a barrier, as a result of forces, such as gravity, acting on them. A sediment is naturally occurring material that is broken down by processes of weathering and erosion and is subsequently transported by the action of fluids such as wind, water, or ice.
The Sedimentator is a sealed, clear plastic demonstration tube, about 1 inch in diameter and 10-1/2 inches tall, containing water and a variety of soil types, from fine clays to coarse gravels. Because it is completely sealed, it is mess-free and 100% reusable, and it can be used alone or with a stream table, a sedimentary rock kit, or a fossil kit. Students can watch the processes of sedimentation and rock formation unfold before their eyes: the tube demonstrates river erosion, the deposition and layering of sediments, and the embedding of organic material that can form fossils.
A typical Sedimentator lab runs as follows. Students first predict the order, from first to last, in which six sediment types (clay and silt, animal and plant remains, ocean salts, boulders, gravel, and sand) would settle out of a river, and write a hypothesis explaining why they expect that order. They then gently shake the Sedimentator to loosen the sediments, stand it upright on one end, and flip it over so that it stands on the other end. As the sediments settle, students diagram and label the layers, using a description table of sediments to name each type and a ruler to measure the largest particles. To model the flow of water in a river, students pick up the Sedimentator and tilt it slightly. Lab reports and group discussions should reflect an understanding of sediment depositional environments and sediment sources.
Sedimentation is very important: without it we wouldn't have any dinosaur fossils. It is the building up of layers of small particles like sand or mud. The easiest place to see this is the beach, which is made up of lots of sand that has been deposited, or left behind, by the sea.
Pannus (chronic superficial keratitis) is an inflammatory disease of the dog’s cornea, or front of the eye, that worsens over time, and it usually affects both eyes at the same time. Scar tissue and blood vessels slowly invade the cornea from the corner of the eye and can turn it black in severe cases. Experts believe that it is caused by an unknown immune response that increases in severity with exposure to high levels of ultraviolet (UV) sunlight. It is most often seen in middle-aged German Shepherds, Belgian Tervurens, Siberian Huskies, and Greyhounds, but it can occur in other breeds of dogs. If left untreated, this disease can eventually lead to blindness. The direct cause of pannus is unknown, but there are a few contributing factors: 1. An increased occurrence in certain breeds indicates a genetic component to the disease. 2. The severity of the disease increases with exposure to UV light and/or high altitudes. It occurs most often in dogs living in the Rocky Mountains and in the Southwest (Arizona, New Mexico, etc.). 3. Chronic superficial keratitis is thought to be an autoimmune disease. This means that the dog’s body responds in the wrong way and attacks the tissues of the cornea as something foreign. 4. There is some evidence that irritants in the air and underlying eye problems, such as entropion (inward-turned eyelids), may also contribute to this disease. Signs and Symptoms Pannus usually begins at the corner edge of the cornea, advances inward and may extend across the entire eye, resulting in blindness in untreated and severe cases. If your dog develops pannus, he may exhibit the following signs and symptoms: 1. Red, blood-shot eyes. 2. Excessive tearing and weeping. 3. A grayish-pink film covering the eyes. 4. Coloring of the cornea, usually dark brown or black. 5. Opaque cloudiness of the eye. 6. Thickening and loss of color of the third eyelids. 7. Although the disease occurs in both eyes, each eye may be at a different stage of the disease. A diagnosis of pannus is normally based on clinical symptoms and your dog’s medical history. To make a thorough diagnosis of your dog’s eye condition, your veterinarian will also perform a complete physical exam and eye exam. Additionally, he/she may perform one or more of the following diagnostic tests to rule out any other eye diseases: 1. Fluorescent staining of the eye (rules out corneal lacerations and ulcers). 2. A Schirmer Tear Test (measures tear production). 3. Obtaining a sample of the cornea and/or lining of the eye (the conjunctiva) for microscopic examination of corneal or conjunctival cells. 4. Testing of the pressure behind the eyeball [intraocular pressure (IOP) testing]. Your veterinarian may also recommend certain blood tests to help determine any underlying issues. These may include: 1. Complete blood count (CBC) to check for anemia, inflammation, and infection. 2. Chemistry tests to check blood sugar levels and liver, kidney, and pancreatic functions. 3. Infectious disease tests, such as those for Lyme disease, Leptospirosis, and Ehrlichiosis. 4. Bacterial cultures. 5. PCR testing. Therapy and Disease Management Unfortunately, there is no cure for pannus at this time. If caught early, the progress of the disease can be suspended and the symptoms can be managed over the long term. However, scarring and discoloration of the cornea are usually irreversible.
Therapy strategies are implemented to reverse the invasion of blood vessels into the cornea and to prevent further scarring and pigmentation of the cornea. Types of therapy: 1. Topical immunotherapy - Immunosuppressant drugs, such as tacrolimus or cyclosporine eye drops or eye ointment, may improve symptoms. These can be used in coordination with corticosteroid therapy. Because immunosuppressant drugs lower a dog’s natural resistance to certain bacterial infections, antibiotics are usually given in conjunction with immunotherapy. 2. Corticosteroid therapy - recurrent injections under the conjunctiva of the eyes, in conjunction with continuous application of eye drops or eye ointments. This is the main line of defense against progression of the disease. Therapy is usually successful but must be continued lifelong. Even short periods of interrupted therapy, for example 2 to 4 weeks, may cause severe recurrence with devastating effects on the dog’s vision. 3. Surgery - In cases of severe scarring and pigmentation, surgical removal of a surface layer (superficial keratectomy) from the affected eyes may improve vision. Unfortunately, this procedure cannot be repeated and recurrence of the disease is high, so this method remains a last resort. 4. Radiation therapy - Beta irradiation is a last-ditch effort to forestall the progress of the disease. This therapy is only used when medicinal and/or surgical therapies have failed. As with any medication, there are potential side effects. Some drug complications to be aware of are inflammation of the conjunctiva (conjunctivitis), corneal ulceration, and corneal mineralization, just to name a few. Any of these complications may result in permanent blindness, so it is important for the dog owner to be vigilant when his/her dog is undergoing medical treatments for pannus. The majority of dogs do very well with topical medications. Some dogs with more severe cases of pannus may need a referral to a licensed veterinary ophthalmologist for more intensive treatments. Long Term Outlook and Prevention Because there is no cure, pannus is managed with symptom therapies. Treatment must be maintained for the life of the dog and is necessary to maintain vision. Limited exposure to ultraviolet light is also recommended for long-term control of the disease. Sheltering your dog while outdoors, walking him in the early morning hours, after dusk, or in shaded areas, and keeping him in your house and out of direct sunlight will help keep the disease in check. There are also specialty canine sun goggles that can be used if your dog has to be outdoors for an extended period of time. In addition to strict adherence to the long-term therapies prescribed by your veterinarian, regular veterinary examinations and an immediate visit with the veterinarian when acute symptoms occur are all necessary to keep pannus under control and preserve your dog’s sight.
by Matt Johnson One of the problems that electrical engineers have grappled with over the last few decades is the long-distance transmission of power. As the shift towards renewable energy continues, more and more electricity is being generated farther and farther away from consumers. With an unavoidable power loss directly related to transmission distance, engineers have found themselves in a tough situation. The Economist (2017) dives into one technology, ultra-high-voltage direct-current (UHVDC) connectors, as a particularly promising solution. Electric power grids were standardized on alternating current (AC) in the late 1880s and 1890s, and have stayed that way ever since. Alternating current travels like a wave: the energy shimmies back and forth through a conducting medium. As transmission distances increase, it takes more and more energy to push this wave through, and the more energy you put in, the more that is lost. Direct current, on the other hand, is a steady flow of energy with no oscillation. Therefore, over transcontinental distances, direct-current power lines are much more efficient. The power lines are also cheaper to build, because a smaller wire can carry more power, reducing weight and cost. Whereas the transformers for AC are relatively cheap, the comparable thyristors for voltage conversion in DC are pricey; but these prices are justified by increased transmission efficiency, especially over long distances. The US has found itself a laggard in the adoption of this new technology. China already has a handful of UHVDC lines built, and more under construction. Its biggest project is a power connector 3,400 kilometers long, carrying power equivalent to the average usage of Spain. European utilities also have plans for trans-European connectors, especially useful considering the hydroelectric opportunities present. As the transition towards green energy continues, UHVDC connectors will hopefully lead in the economic transmission of clean, cheap power. Rise of the supergrid: Electricity now flows across continents, courtesy of direct current. (2017, January). The Economist. Retrieved from: http://www.economist.com
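To make the efficiency argument concrete, here is a minimal back-of-the-envelope sketch in Python (all numbers are illustrative assumptions, not figures from the article) of resistive line loss, which scales as the square of the current:

    # Back-of-the-envelope line-loss comparison (illustrative values only).
    # Conductor loss is P_loss = I^2 * R, and I = P / V for delivered power P
    # at transmission voltage V, so raising the voltage slashes the loss.

    def line_loss_fraction(power_w, voltage_v, resistance_ohm):
        """Fraction of transmitted power lost to conductor resistance."""
        current_a = power_w / voltage_v
        return (current_a ** 2 * resistance_ohm) / power_w

    POWER = 8e9          # 8 GW, roughly the scale of a large UHVDC link (assumed)
    RESISTANCE = 1.0     # total conductor resistance in ohms (assumed)

    for voltage in (400e3, 800e3, 1100e3):   # representative HV and UHV levels
        frac = line_loss_fraction(POWER, voltage, RESISTANCE)
        print(f"{voltage / 1e3:>6.0f} kV -> {frac:.1%} lost")

Doubling the voltage cuts resistive loss by a factor of four, and DC adds further savings because it avoids skin effect and reactive losses; this is the arithmetic behind the ultra-high-voltage part of UHVDC.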
From CreationWiki, the encyclopedia of creation science The lobster, once considered a poor man's food, is now considered a delicious delicacy. All lobsters are invertebrates with an exoskeleton. They exhibit bilateral symmetry, unless a mutation or mutilation prohibits it from being exhibited (see below). On the head of the lobster are located the antennae, which it uses to feel and sense its environment. The head of the lobster is in the thorax section of its body. From the thorax extend the different types of claws (see below). The posterior of the lobster is its tail, which has flippers on the end; on a tasty note, this is where most of the meat is when you eat this succulent dish. Lobsters have a decentralized nervous system that has been compared to that of a grasshopper. Some interesting notes on this nervous system: it seems that gradually increasing the temperature will lull the lobster to sleep, and turning it over on its dorsal side will induce a state of unconsciousness. The two main nerve clusters are located in the head of the lobster. Animal rights issues: The question of whether lobsters may feel pain has prompted many animal rights activists to be concerned with the ethical treatment of cooking these animals. There have been studies on whether lobsters feel pain; one said yes, while another released at the same time said no. Lobsters have exhibited that they can have their crusher claw on either the right side or the left. The other type of claws are the spiny legs that protrude from the thorax. Lobsters from the east coast have pronged ends on their “feet,” while spiny lobsters on the west coast taper to a single point on the end. Lobsters' antennae pick up the scents of their prey by using the small hairs covering the four antennae. Appendage deformities: In 1998 lobsters started emerging with gonoped deformities; scientists have yet to discover the cause of this mutation. Also, since 1800, lobsters have been appearing with claws that look like they have multiple claws branching out from them. Lobsters come in a variety of colors, ranging from blue to yellow to multi-colored phantoms. But all lobsters turn red when cooked properly. The unique design of the lobster eye has been intensely studied to help understand how it allows some organisms to see in low light and murky waters. Rather than bending (refracting) the light to focus the image on the retina, several of the long-bodied decapod crustaceans (shrimps, prawns, crayfish and lobsters) possess “reflecting” compound eyes. Unlike the more common compound eyes of insects, which have hexagonal facets, this unique eye design incorporates square facets that are arranged radially, forming an optic array with a 180° field of view. The geometric assemblage of facets has all of the hallmarks of intelligent design and defies attempts to explain it through natural mechanisms. Simply put, these facets are tiny square-shaped tubes with walls that act as mirrors to reflect the incoming light. The walls of each facet are perfectly aligned so that the reflected light is focused flawlessly toward the receptor layer, all merging at the same point. The design creates an intensified, superpositioned image because the light from many facets combines to form a single image.
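As a toy illustration of the mirror geometry described above (my own construction, not from the article), the key property is that a ray reflecting off two perpendicular mirror walls leaves anti-parallel to how it arrived, which is what lets an array of radially arranged square channels redirect light from many facets toward a common focal region:

    # Corner-reflector property behind the lobster-eye geometry (2D sketch).

    def reflect(direction, normal):
        """Reflect a 2D direction vector off a mirror with the given unit normal."""
        dx, dy = direction
        nx, ny = normal
        dot = dx * nx + dy * ny
        return (dx - 2 * dot * nx, dy - 2 * dot * ny)

    ray = (0.8, 0.6)                                  # incoming ray direction
    after_wall_1 = reflect(ray, (1.0, 0.0))           # bounce off a vertical wall
    after_wall_2 = reflect(after_wall_1, (0.0, 1.0))  # then off a horizontal wall

    print(ray, "->", after_wall_2)                    # (0.8, 0.6) -> (-0.8, -0.6)
    # Both components flip sign, so the ray exits anti-parallel to its arrival;
    # on the animal's curved eye, radially oriented channels exploit this to
    # merge reflected rays from many facets at the receptor layer.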
As many as 3000 reflective facets are found in some species such as the Norway lobster (Nephrops norvegicus), giving increases in sensitivity of up to 1000 times above the more common apposition-type eye (where light remains within a single facet/ommatidium). The ability of the decapod’s eye to intensify an image captured from a broad field of view has intrigued engineers since the mechanism was first made known. Investigating biological systems or processes for potential use in technology is a rapidly expanding field known as biomimicry. Several technological developments are now based on the unique geometric design of the lobster eye (see Lobster eye biomimicry). Researchers have developed a cosmic imaging device for use on space satellites, and a handheld imaging system was built that can view through walls of various thicknesses and materials and identify contents. Ears that hear and eyes that see—the LORD has made them both. Proverbs 20:12 This is probably the most in-depth research into lobster mating habits, and trying to summarize it just won’t do it justice. There are some fishermen who, despite fishing laws, will scrub the fertilized eggs off of female lobsters so that they do not have to release them again. - Main Article: Molting Molting is an essential part of growing: when the lobster’s biological clock hits a certain time, the process ensues. One of the first physical signs of molting is the chemicals and hormones permeating the lobster on a cellular level, from a multitude of “molting” glands. At first the tail will crack open, very slightly. While different hormones trigger different stages around the lobster's body, the crack opens even more. Then the lobster slowly wiggles its body from the old shell, finally wiggling free within about 8-12 hours. The lobster will go through a rapid growth phase until its new shell hardens. Molting in a trap: For reasons that remain speculative, lobsters will begin the molting process with the thorax while in a fishing trap with other lobsters. Humans have turned eating lobsters into an industry. Men have braved the ocean looking for lobsters, to turn into cash, to buy the boats to look for more lobsters. The lobster fishing industry has formed its own triangle trade. Since the Puritans landed, lobster fishing has been crucial to our trade and food supply. For an in-depth look at how this arthropod has been entangled in our economy, go to "Lobster history". - ↑ Lobster by Wikipedia - ↑ The Design in Nature by Harun Yahya. - ↑ 3.0 3.1 New Design Innovations from Biomimetics: Lobster Recruited in the War on Terrorism by Chris Ashcraft, Creation 32(3):21-23, July 2010. - ↑ Land, Michael F., Eyes with mirror optics, Journal of Optics A: Pure and Applied Optics. 2 R44-R50, 2000. doi: 10.1088/1464-4258/2/6/204 - ↑ Sarfati, Jonathan. Lobster eyes—brilliant geometric design. Creation 23(3):12–13, June 2001. - ↑ Land, Michael F., Superposition Images Are Formed by Reflection in the Eyes of Some Oceanic Decapod Crustacea, Nature, October 28, 1976, Volume 263, pp. 764-765. doi:10.1038/263764a0 - ↑ Gaten, Edward. Eye structure and phylogeny: is there an insight? The evolution of superposition eyes in the Decapoda (Crustacea). Contributions to Zoology, 67: 223-235. 1998 - ↑ Lobster Telescope Has An Eye For X-Rays ScienceDaily April 5, 2006. - ↑ U.S. Department of Homeland Security. Eye of the Lobster. S&T Spotlight, Volume 1, Issue 7. November 2007. - ↑ Ecdysis by Wikipedia
What to do with this activity? Activities that encourage concentration and control of small finger and hand movements are very good for your child's brain development. One craft activity that your child might enjoy is stitching. We think that stitching on stiff card (or similar) is easier than stitching on cloth, so start with that. Make sure you have a suitable large needle and coloured thread. Make it a maths measuring exercise by keeping the stitch holes the same distance apart for neat patterns. Here are some tips and projects that your child might enjoy. Notice that this is an activity for both boys and girls. 1) Stitching on polystyrene or paper plates - inspiration from "Make it & love it". 2) Stitch heart shapes on paper plates - from "Happy Hooligans". 3) Ideas and tips on how to stitch onto card from "Kiwi Crate". 4) Make stitched book marks as presents, from "Grey lustre girl". Children learn numbers and maths in a natural way through play and everyday activities. It’s different to school and should always be fun and practical – that way your child will enjoy working with numbers. Your child also develops a sense of patterns and what time means in everyday life. This is important for helping your child to manage everyday activities – going places, how long they have to wait and understanding when things will happen in the future. Talking about numbers helps your child’s fluency in counting, estimating and understanding numbers and money in everyday life. It takes time for children to understand addition and subtraction so use objects when helping them understand this or when doing their homework.
The Enlightenment or ‘Age of Reason’ was a period in the late seventeenth and eighteenth centuries when a group of philosophers, scientists and thinkers advocated new ideas based on reason. This period saw a decline in the power of absolute monarchies, a reduction in the pre-eminence of the Church and a rise of modern political ideologies, such as liberalism, republicanism and greater independence of thought. Enlightenment ideas were influential forces behind the American and French revolutions. Francis Bacon (1561 – 1626) English philosopher, statesman, orator and scientist. Bacon is considered the ‘father of empiricism’ for his work and advocacy of scientific method and methodical scientific inquiry in investigating scientific phenomena. He encouraged an empirical approach both through his own example and philosophically. A key figure in the Scientific Revolution of the 17th century. Rene Descartes (1596 – 1650) Rene Descartes was a French philosopher and mathematician. Descartes made a significant contribution to the philosophy of rationalism. Descartes’ Meditations was ground-breaking because he was willing to doubt previous certainties and tried to prove their validity through logic. Later empiricists disagreed with Descartes’ methods, but his philosophy opened up many topics to greater discussion. Although Descartes ‘proved’ the existence of God, his doubt was an important step in promoting reason over faith. Descartes also made significant discoveries in analytical geometry, calculus and mathematics. Baruch Spinoza (1632-1677) Spinoza was a Jewish-Dutch philosopher. He was an influential rationalist, who saw the underlying unity of the universe. He was critical of religious scriptures and promoted a view that the Divine was in all, and the Universe was ordered, despite its apparent contradictions. His philosophy influenced later philosophers, writers and romantic poets, such as Shelley and Coleridge. Immanuel Kant (1724 – 1804) Immanuel Kant was an influential German philosopher whose ‘Critique of Pure Reason’ sought to unite reason with experience and move philosophy on from the debate between rationalists and empiricists. Kant’s philosophy was influential on future German idealists and philosophers, such as Schelling and Schopenhauer. John Locke (1632 – 1704) Locke was a leading philosopher and political theorist, who had a profound impact on liberal political thought. He is credited with ideas such as the social contract – the idea that government requires the consent of the governed. Locke also argued for liberty, religious tolerance and rights to life and property. Locke was an influential figure for those involved in the American and French revolutions, such as Jefferson, Madison and Voltaire. Sir Isaac Newton (1642-1726) Newton made studies in mathematics, optics, physics, and astronomy. In his Principia Mathematica, published in 1687, he laid the foundations for classical mechanics, explaining the Law of Gravity and the Laws of Motion. Voltaire (1694 – 1778) – French philosopher and critic. Best known for his work Candide (1759), which epitomises his satire and criticisms of social convention. Voltaire was instrumental in promoting Republican ideas due to his criticism of the absolute monarchy of France. Jean-Jacques Rousseau (1712-1778) Rousseau was a political philosopher widely known for his ‘Social Contract’ (1762), which sought to promote a more egalitarian form of government by consent and formed the basis of modern republicanism.
His ideas were influential in the French and American revolutions. Benjamin Franklin (1706-1790) One of the American Founding Fathers of the United States. He was an author, politician, diplomat, scientist and statesman. He was a key figure in the American Enlightenment, which saw major breakthroughs in science and ideas of political republicanism. Franklin was an early supporter of colonial unity and the United States. Adam Smith (1723-1790) was a Scottish social philosopher and pioneer of classical economics. He is best known for his work ‘The Wealth of Nations’, which laid down a framework for the basis of classical free-market economics. Smith is often referred to as the ‘Father of Economics.’ Smith’s work makes a strong case for free-market economics, but he was also aware of situations where the free market could work against the public interest, for example monopolies. Thomas Jefferson (1743-1826) was an American Founding Father and the principal author of The Declaration of Independence (1776). In this declaration, Jefferson laid out the fundamental principles of America, calling for equality and liberty. He also advocated ending slavery and promoting religious tolerance. Citation: Pettinger, Tejvan. “Famous people of The Enlightenment”, Oxford, www.biographyonline.net, 4th June 2013. Last updated 21st February 2018.
Tapping into Latin American food culture Did you know that while volunteering in Latin America you can sample appetising and nutritious meals based on the food traditions of ancient civilizations? The Maya, Inca, and Aztec people built prominent civilizations throughout Mexico and Central and South America. These civilizations form the basis of today’s culinary and cultural traditions in the region. This valuable culinary heritage was based mainly on food they cultivated themselves. The food traditions of the Aztec and Mayan people were closely related due to their proximity: they were located where Mexico, Guatemala, Belize and northern El Salvador are now. The Incas, on the other hand, emerged in South America. Latin American food traditions These communities lived mainly in farming villages. Corn was the basis of their food, along with beans and other vegetables such as squash and many varieties of peppers. Although conditions were often harsh, these farmers were entirely self-sufficient. The Inca people also grew potatoes and a small grain called quinoa. The Aztec and the Maya people focused on the production of avocados, tomatoes and a great variety of fruit. However, for pre-Columbian civilisations, large-scale agricultural production was rather challenging due to the environmental and geographical conditions they faced. Reduced amounts of rainfall, shallow soil deposits, poor soil quality or, in some cases, lack of land were some of the obstacles they had to overcome. Despite inhabiting these rather harsh environments, they adapted and developed the agricultural skills that were necessary to sustain their own food culture. While the Mayan people were jungle inhabitants, the Aztecs lived in many areas surrounded by lakes and water. The Inca populated the mountainous Andes. These ancient Latin American civilizations became skilful at developing effective techniques like crop rotation for cultivating in large fields, and at building terraces and steps on the mountainside. In some cases, barges were built around lakes or water surfaces to create more arable land. Latin American food traditions in medicine and religion To Maya, Aztec and Inca people, food was significant for more than just eating. In some cases, it was considered medicinal. Herbal remedies were commonly used for rituals and as medicine. They were either ingested, smoked or rubbed on the skin, depending on the specific case. Fresh vegetation was sometimes applied directly to the skin to cure illnesses. Mayan people also made various kinds of drinks by mixing cacao extract with ground corn and a touch of pepper. They drank this during special celebrations and festivals as part of their food tradition. How have ancient traditions influenced Central American cuisine? Mesoamerican people used corn as a main ingredient in their meals. In fact, the tortilla (a sort of thin and savoury corn pancake) is a basic traditional ingredient in almost every meal. The importance tortillas have in a typical Mexican meal cannot be overstated. Prepared and enjoyed in many different ways since early times, they are a must at every table. A side of corn tortillas When prepared as a side dish, tortillas can be served along with any main course, such as fajitas (spicy grilled meat, complete with peppers and onions). Depending on personal preference, they can also be served with chiles en nogada, a meat-stuffed pepper bathed in a walnut cream and garnished with pomegranate seeds and cilantro.
Frijoles refritos, or refried black beans, are another traditional accompaniment to tortillas. Alternatively, corn tortillas can be prepared as part of a main dish. When preparing enchiladas, corn tortillas are wrapped around different kinds of ingredients, ranging from seasoned potatoes to cheese, beans, various meats and other vegetables. Last but not least, this luscious meal is covered with a spicy homemade tomato sauce, chopped lettuce, and fresh cream. These rich ingredients can be topped off with a soft guacamole. Tacos are fairly similar to enchiladas, as they are usually folded around a filling. As one of the most popular Mexican dishes, tacos consist of a corn tortilla rolled around a tasty, warm filling of meat, vegetables or cheese. They can be topped with a chilli sauce, or even eaten plain as you walk down Av Rojo Gomez in Puerto Morelos and buy one. Since the tortilla is the cornerstone of the Maya and Aztec food traditions, you should also try the supreme tortilla soup. Made up of healthy ingredients and fresh herbs, this dish is flavoursome, aromatic and full of character. The essence of the dish comes from the chillies, beans, cilantro and chicken, which simmer together for a while. Later, avocado, tortilla bits, chopped onions and cheese are added to the bowl. The result is a real banquet of flavours! Where do Peruvian delicacies come from? The Inca Empire arose in the Andean highlands of Peru, establishing its capital in the city of Cusco (where GVI is based). Because of this, the Latin American food culture of the Incas differed from that of the Aztecs and Mayans. Even though all three civilizations regarded corn as an important food, it was only possible for the Aztec and Mayan people to cultivate this crop in such enormous quantities due to their geographical conditions. Due to their higher altitude, Inca crops needed to resist low temperatures. That’s why root vegetables became central to their diets. The Inca people also included various grains in their diet, such as corn and amaranth. Different tubers and potato varieties were also common in their rather healthy dishes. One variety, the oca, was particularly popular. Oca is high in protein and has good nutritional value. It was usually boiled in soups and stews, but was sometimes roasted. Oca was also used as a sweetener. This ochre tuber is sometimes called “the lost crop of the Incas” as, with time, it became the second most popular tuber after the potato. According to the strict hierarchy of early Incan society, food was more plentiful and varied for the upper classes than for the lower classes. Along with many plants and vegetables, the Incas raised llamas and alpacas as a source of meat and milk. Being close to the Pacific coastline, which is one of the richest fisheries in the world, they also caught fish and used them as a primary food source. A fusion of all these ingredients is ceviche, a popular Peruvian dish. Ceviche is a typical seafood dish made from fresh raw fish. The fish is cured in citrus juice, preferably lime, and spiced with different chili peppers, chopped onions and cilantro. It can also be garnished with diced tomatoes, avocado and popped or roasted corn. Ceviche contains the perfect blend of textures and flavour: soft from the citrus, with added zest from the cilantro! The Inca people also had their own kind of drink, called chicha. Chicha is made from grains, corn or fruit. It can contain alcohol, and is prepared in many ways according to region.
Chicha morada, which is made with purple corn, is a very popular refreshment in Bolivia and Peru. What are you waiting for? Dive into a spicy and cultural experience of local cuisine while you volunteer in Latin America.
In graph theory, an Eulerian trail (or Eulerian path) is a trail in a finite graph which visits every edge exactly once. Similarly, an Eulerian circuit or Eulerian cycle is an Eulerian trail which starts and ends on the same vertex. They were first discussed by Leonhard Euler while solving the famous Seven Bridges of Königsberg problem in 1736. The problem can be stated mathematically like this:
- Given the graph in the image, is it possible to construct a path (or a cycle, i.e. a path starting and ending on the same vertex) which visits each edge exactly once?
Euler proved that a necessary condition for the existence of Eulerian circuits is that all vertices in the graph have an even degree, and stated without proof that connected graphs with all vertices of even degree have an Eulerian circuit. The first complete proof of this latter claim was published posthumously in 1873 by Carl Hierholzer.
The term Eulerian graph has two common meanings in graph theory. One meaning is a graph with an Eulerian circuit, and the other is a graph with every vertex of even degree. These definitions coincide for connected graphs.
For the existence of Eulerian trails it is necessary that zero or two vertices have an odd degree; this means the Königsberg graph is not Eulerian. If there are no vertices of odd degree, all Eulerian trails are circuits. If there are exactly two vertices of odd degree, all Eulerian trails start at one of them and end at the other. A graph that has an Eulerian trail but not an Eulerian circuit is called semi-Eulerian.
An Eulerian cycle, Eulerian circuit or Euler tour in an undirected graph is a cycle that uses each edge exactly once. If such a cycle exists, the graph is called Eulerian or unicursal. The term "Eulerian graph" is also sometimes used in a weaker sense to denote a graph where every vertex has even degree. For finite connected graphs the two definitions are equivalent, while a possibly unconnected graph is Eulerian in the weaker sense if and only if each connected component has an Eulerian cycle. The definition and properties of Eulerian trails, cycles and graphs are valid for multigraphs as well.
An Eulerian orientation of an undirected graph G is an assignment of a direction to each edge of G such that, at each vertex v, the indegree of v equals the outdegree of v. Such an orientation exists for any undirected graph in which every vertex has even degree, and may be found by constructing an Euler tour in each connected component of G and then orienting the edges according to the tour. Every Eulerian orientation of a connected graph is a strong orientation, an orientation that makes the resulting directed graph strongly connected.
- An undirected graph has an Eulerian cycle if and only if every vertex has even degree, and all of its vertices with nonzero degree belong to a single connected component.
- An undirected graph can be decomposed into edge-disjoint cycles if and only if all of its vertices have even degree. So, a graph has an Eulerian cycle if and only if it can be decomposed into edge-disjoint cycles and its nonzero-degree vertices belong to a single connected component.
- An undirected graph has an Eulerian trail if and only if exactly zero or two vertices have odd degree, and all of its vertices with nonzero degree belong to a single connected component.
- A directed graph has an Eulerian cycle if and only if every vertex has equal in-degree and out-degree, and all of its vertices with nonzero degree belong to a single strongly connected component.
Equivalently, a directed graph has an Eulerian cycle if and only if it can be decomposed into edge-disjoint directed cycles and all of its vertices with nonzero degree belong to a single strongly connected component.
- A directed graph has an Eulerian trail if and only if at most one vertex has (out-degree) − (in-degree) = 1, at most one vertex has (in-degree) − (out-degree) = 1, every other vertex has equal in-degree and out-degree, and all of its vertices with nonzero degree belong to a single connected component of the underlying undirected graph.
Constructing Eulerian trails and circuits
Fleury's algorithm is an elegant but inefficient algorithm which dates to 1883. Consider a graph known to have all edges in the same component and at most two vertices of odd degree. The algorithm starts at a vertex of odd degree, or, if the graph has none, it starts with an arbitrarily chosen vertex. At each step it chooses the next edge in the path to be one whose deletion would not disconnect the graph, unless there is no such edge, in which case it picks the remaining edge left at the current vertex. It then moves to the other endpoint of that edge and deletes the edge. At the end of the algorithm there are no edges left, and the sequence from which the edges were chosen forms an Eulerian cycle if the graph has no vertices of odd degree, or an Eulerian trail if there are exactly two vertices of odd degree. While the graph traversal in Fleury's algorithm is linear in the number of edges, i.e. O(|E|), we also need to factor in the complexity of detecting bridges. If we are to re-run Tarjan's linear time bridge-finding algorithm after the removal of every edge, Fleury's algorithm will have a time complexity of O(|E|²). A dynamic bridge-finding algorithm of Thorup (2000) allows this to be improved to O(|E| · log³|E| · log log |E|), but this is still significantly slower than alternative algorithms.
Hierholzer's 1873 paper provides a different method for finding Euler cycles that is more efficient than Fleury's algorithm:
- Choose any starting vertex v, and follow a trail of edges from that vertex until returning to v. It is not possible to get stuck at any vertex other than v, because the even degree of all vertices ensures that, when the trail enters another vertex w there must be an unused edge leaving w. The tour formed in this way is a closed tour, but may not cover all the vertices and edges of the initial graph.
- As long as there exists a vertex u that belongs to the current tour but that has adjacent edges not part of the tour, start another trail from u, following unused edges until returning to u, and join the tour formed in this way to the previous tour.
By using a data structure such as a doubly linked list to maintain the set of unused edges incident to each vertex, to maintain the list of vertices on the current tour that have unused edges, and to maintain the tour itself, the individual operations of the algorithm (finding unused edges exiting each vertex, finding a new starting vertex for a tour, and connecting two tours that share a vertex) may be performed in constant time each, so the overall algorithm takes linear time, O(|E|).
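To make the construction above concrete, here is a minimal sketch of Hierholzer's algorithm in Python (an iterative, stack-based variant; the names are my own, and it assumes the input multigraph is connected with every vertex of even degree, so an Eulerian circuit exists):

    from collections import defaultdict

    def euler_circuit(edges):
        """Return an Eulerian circuit, as a vertex list, of a connected
        multigraph in which every vertex has even degree."""
        # Each edge gets an id so both endpoints share one "used" flag,
        # which also handles parallel edges correctly.
        adj = defaultdict(list)
        for i, (u, v) in enumerate(edges):
            adj[u].append((v, i))
            adj[v].append((u, i))
        used = [False] * len(edges)

        stack, circuit = [edges[0][0]], []
        while stack:
            v = stack[-1]
            # Lazily discard edges already traversed from the other endpoint.
            while adj[v] and used[adj[v][-1][1]]:
                adj[v].pop()
            if adj[v]:
                # Extend the current sub-tour along an unused edge.
                w, i = adj[v].pop()
                used[i] = True
                stack.append(w)
            else:
                # Dead end: vertex is finished, splice it into the circuit.
                circuit.append(stack.pop())
        return circuit[::-1]

    # Example: K5, where every vertex has degree 4 (even), so a circuit exists.
    edges = [(0, 1), (1, 2), (2, 0), (0, 3), (3, 1),
             (1, 4), (4, 2), (2, 3), (3, 4), (4, 0)]
    print(euler_circuit(edges))   # a closed walk of 11 vertices using all 10 edges

The explicit stack plays the role of the sub-tour splicing in the prose description: whenever a vertex has no unused edges left, it is popped into the finished circuit, and with the lazily pruned adjacency lists the whole procedure runs in O(|E|) time, matching the linear bound quoted above.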
Counting Eulerian circuits
The number of Eulerian circuits in digraphs can be calculated using the so-called BEST theorem, named after de Bruijn, van Aardenne-Ehrenfest, Smith and Tutte. The formula states that the number of Eulerian circuits in a digraph is the product of certain degree factorials and the number of rooted arborescences. The number of rooted arborescences can be computed as a determinant, by the matrix tree theorem, giving a polynomial time algorithm. The BEST theorem was first stated in this form in a "note added in proof" to the van Aardenne-Ehrenfest and de Bruijn paper (1951). The original proof was bijective and generalized the de Bruijn sequences. It is a variation on an earlier result by Smith and Tutte (1941).
Counting the number of Eulerian circuits on undirected graphs is much more difficult. This problem is known to be #P-complete. In a positive direction, a Markov chain Monte Carlo approach, via the Kotzig transformations (introduced by Anton Kotzig in 1968), is believed to give a sharp approximation for the number of Eulerian circuits in a graph, though as yet there is no proof of this fact (even for graphs of bounded degree).
Eulerian trails are used in bioinformatics to reconstruct the DNA sequence from its fragments. They are also used in CMOS circuit design to find an optimal logic gate ordering. There are some algorithms for processing trees that rely on an Euler tour of the tree (where each edge is treated as a pair of arcs).
In infinite graphs
In an infinite graph, the corresponding concept to an Eulerian trail or Eulerian cycle is an Eulerian line, a doubly-infinite trail that covers all of the edges of the graph. It is not sufficient for the existence of such a trail that the graph be connected and that all vertex degrees be even; for instance, the infinite Cayley graph shown, with all vertex degrees equal to four, has no Eulerian line. The infinite graphs that contain Eulerian lines were characterized by Erdős, Grünwald & Weiszfeld (1936). For a graph or multigraph G to have an Eulerian line, it is necessary and sufficient that all of the following conditions be met:
- G is connected.
- G has countable sets of vertices and edges.
- G has no vertices of (finite) odd degree.
- Removing any finite subgraph S from G leaves at most two infinite connected components in the remaining graph, and if S has even degree at each of its vertices then removing S leaves at most one infinite connected component.
- Eulerian matroid, an abstract generalization of Eulerian graphs
- Five room puzzle
- Handshaking lemma, proven by Euler in his original paper, showing that any undirected connected graph has an even number of odd-degree vertices
- Hamiltonian path – a path that visits each vertex exactly once.
- Route inspection problem, search for the shortest path that visits all edges, possibly repeating edges if an Eulerian path does not exist.
- Veblen's theorem, that graphs with even vertex degree can be partitioned into edge-disjoint cycles regardless of their connectivity
- N. L. Biggs, E. K. Lloyd and R. J. Wilson, Graph Theory 1736–1936, Clarendon Press, Oxford, 1976, 8–9, ISBN 0-19-853901-0.
- C. L. Mallows, N. J. A. Sloane (1975). "Two-graphs, switching classes and Euler graphs are equal in number". SIAM Journal on Applied Mathematics. 28 (4): 876–880. doi:10.1137/0128070. JSTOR 2100368.
- Some people reserve the terms path and cycle to mean non-self-intersecting path and cycle. A (potentially) self-intersecting path is known as a trail or an open walk; and a (potentially) self-intersecting cycle, a circuit or a closed walk. This ambiguity can be avoided by using the terms Eulerian trail and Eulerian circuit when self-intersection is allowed.
- Jun-ichi Yamaguchi, Introduction of Graph Theory.
- Schaum's outline of theory and problems of graph theory By V. K. Balakrishnan.
- Schrijver, A. (1983), "Bounds on the number of Eulerian orientations", Combinatorica, 3 (3–4): 375–380, doi:10.1007/BF02579193, MR 0729790.
- Fleury, M. (1883), "Deux problèmes de Géométrie de situation", Journal de mathématiques élémentaires, 2nd ser. (in French), 2: 257–261.
- Fleischner, Herbert (1991), "X.1 Algorithms for Eulerian Trails", Eulerian Graphs and Related Topics: Part 1, Volume 2, Annals of Discrete Mathematics, 50, Elsevier, pp. X.1–13, ISBN 978-0-444-89110-5.
- Brightwell and Winkler, "Note on Counting Eulerian Circuits", 2004.
- Brendan McKay and Robert W. Robinson, Asymptotic enumeration of eulerian circuits in the complete graph, Combinatorica, 10 (1995), no. 4, 367–377.
- M.I. Isaev (2009). "Asymptotic number of Eulerian circuits in complete bipartite graphs". Proc. 52-nd MFTI Conference (in Russian). Moscow: 111–114.
- Pevzner, Pavel A.; Tang, Haixu; Waterman, Michael S. (2001). "An Eulerian trail approach to DNA fragment assembly". Proceedings of the National Academy of Sciences of the United States of America. 98 (17): 9748–9753. Bibcode:2001PNAS...98.9748P. doi:10.1073/pnas.171285098. PMC 55524. PMID 11504945.
- Roy, Kuntal (2007). "Optimum Gate Ordering of CMOS Logic Gates Using Euler Path Approach: Some Insights and Explanations". Journal of Computing and Information Technology. 15 (1): 85–92. doi:10.2498/cit.1000731.
- Tarjan, Robert E.; Vishkin, Uzi (1985). "An efficient parallel biconnectivity algorithm". SIAM Journal on Computing. 14 (4): 862–874. CiteSeerX 10.1.1.465.8898. doi:10.1137/0214061.
- Berkman, Omer; Vishkin, Uzi (Apr 1994). "Finding level-ancestors in trees". J. Comput. Syst. Sci. 2. 48: 214–230. doi:10.1016/S0022-0000(05)80002-9.
- Komjáth, Peter (2013), "Erdős's work on infinite graphs", Erdős centennial, Bolyai Soc. Math. Stud., 25, János Bolyai Math. Soc., Budapest, pp. 325–345, doi:10.1007/978-3-642-39286-3_11, MR 3203602.
- Bollobás, Béla (1998), Modern graph theory, Graduate Texts in Mathematics, 184, Springer-Verlag, New York, p. 20, doi:10.1007/978-1-4612-0619-4, ISBN 0-387-98488-7, MR 1633290.
- Erdős, Pál; Grünwald, Tibor; Weiszfeld, Endre (1936), "Végtelen gráfok Euler vonalairól" [On Euler lines of infinite graphs] (PDF), Mat. Fiz. Lapok (in Hungarian), 43: 129–140. Translated as Erdős, P.; Grünwald, T.; Vázsonyi, E. (1938), "Über Euler-Linien unendlicher Graphen" [On Eulerian lines in infinite graphs] (PDF), J. Math. Phys. (in German), 17: 59–75, doi:10.1002/sapm193817159.
- Euler, L., "Solutio problematis ad geometriam situs pertinentis", Comment. Academiae Sci. I. Petropolitanae 8 (1736), 128–140.
- Hierholzer, Carl (1873), "Ueber die Möglichkeit, einen Linienzug ohne Wiederholung und ohne Unterbrechung zu umfahren", Mathematische Annalen, 6 (1): 30–32, doi:10.1007/BF01442866.
- Lucas, E., Récréations Mathématiques IV, Paris, 1921.
- T. van Aardenne-Ehrenfest and N. G. de Bruijn (1951) "Circuits and trees in oriented linear graphs", Simon Stevin 28: 203–217.
- Thorup, Mikkel (2000), "Near-optimal fully-dynamic graph connectivity", Proc. 32nd ACM Symposium on Theory of Computing, pp. 343–350, doi:10.1145/335305.335345.
- W. T. Tutte and C. A. B. Smith (1941) "On Unicursal Paths in a Network of Degree 4", American Mathematical Monthly 48: 233–237.
During the American Civil War, illustrated journalism and cartoons in print media became available to the American public for the first time. “Many factors contributed to this sudden flowering: the growth of the population and the news market, the solving of many technological problems by men trained in English and American picture publishing, and an aroused popular attention to news events of national concern. Between 1855 and 1860 the American lithographing and engraving industries flourished, several illustrated comic weeklies, including Vanity Fair, began offering their cartoon wares to the public, and most significantly three enterprising publishers established weekly illustrated newspapers: Harper’s Weekly, Frank Leslie’s Illustrated Newspaper, and the New York Illustrated News”. Illustrations of events going on in the world around them captivated American audiences, who energetically embraced this new form of information. One result was that images alongside news stories were, for the first time, able to depict war as it unfolded, as seen in the American Civil War. What ensued was civil war propaganda on a mass scale. Abraham Lincoln was inaugurated President of the United States on March 4, 1861, and in response 11 southern slave states declared their secession from the United States and formed the Confederate States of America (the Confederacy). The other 25 states supported the federal government (the Union). The civil war went on from 1861 to 1865, and after roughly four years of warfare, mostly within the Southern states, the Confederacy surrendered and slavery was outlawed. The causes and aims of the civil war were numerous and complex. However, the primary factors involved were slavery and its abolition, and these issues within the larger question of states' rights. Both the Union and the Confederates used the new developments in print media to further their causes. Besides visual propaganda being common in print news, during the Civil War there was a popular period of pictorial envelopes being produced and used. This style of patriotism is unique in American history, as it would only be utilized to such a widespread extent for a few short years at the beginning of the Civil War, and, unlike envelopes of a similar nature in other countries, these were produced independently (not by any governmental organization). These envelopes were produced in both the North and the South and were used most commonly in letter writing. As this was the dominant form of communication, hundreds of thousands were produced. “New York City was considered the printing capital of the United States from about 1825 and continued to be so during the Civil War years. Despite New York’s official status as a Union state, many residents of New York City were not so assuredly in favor of the Union, or even of the war itself. In 1863 the city saw riots in Union Square and elsewhere protesting the draft and other war hardships. Such commotion in the nation’s biggest city may have added more weight to the need for distribution of pro-Union propaganda”. Besides newsprint, Americans also consumed a large number of almanacs. While produced with varying purposes, pro-Union almanacs, such as the Anti-Slavery Almanac, were widely popular in the North. “Almanacs were widely popular publications, read and used by the great majority of literate American adults. The Anti−Slavery Almanac was intended to instruct, persuade and horrify its readers about the evils of the American slave system and discrimination against people of color.
Each of its 13 woodcuts—one for each of 12 months and one for the cover—presented an image of the evils of slavery and racism”. In its portrayals of ‘the evils of slavery and racism’, it heavily relied upon classic propaganda techniques, most commonly demonizing the enemy and name calling (making the South, as a whole, appear immoral). In the following example from the Anti-Slavery Almanac, by relying on emotional sympathy from Northerners, the artist attempts to win the audience's denouncement of slavery and support for the Union as it demonizes the South and its practice of slavery. The Confederacy would use similar techniques to the North, but one technique that the South utilized significantly more was the appeal to fear. This is exemplified most clearly in Southern propaganda featuring miscegenation – “the mixing of different racial groups through marriage, cohabitation, sexual relations, and procreation”. The following image from Edward Williams Clay is titled “The Amalgamation Waltz” and it depicts a ballroom dance where African-American men dance with Caucasian women, while their would-be Caucasian male escorts watch from the balcony. This image insinuates that the abolition of slavery and the amalgamation of freed slaves would result in what its audience would have considered horrifying: miscegenation. Thus, The Amalgamation Waltz cartoon clearly draws on its audience’s sense of fear. Many cartoons regarding the Civil War were less serious and fear-inspiring than those above; often they were intended to be humorous or satirical. Despite being more light-hearted in nature, these cartoons still served to reinforce an audience's attitude about a given topic. At the time of the Civil War, the vast majority of Southerners were against African-American conscription in the military, though there was a contingent of those that believed they should be able to enlist. The following cartoon depicts, in the foreground, two men who, presumably, were slaves that were then entered into service. The men in the foreground are featured in cartoonish and stereotypically racist ways, having a chat. The numerous men in the background appear to be engaging in similar light-hearted banter. This is problematic because one of the men represents the Union forces, and the other represents Confederate forces. The title “The Black Conscription” is followed with the sentence “When black meets black then comes the end of war”. “To modern sensibilities, this is one of the most offensive of Tenniel’s cartoons, as its theme is the notion that black men are incapable of becoming good soldiers”. This cartoon was intended to be humorous, but in all seriousness it served to cement the belief that black men should not be permitted to fight alongside white men, and was a blatant use of appeal to racial prejudice and stereotyping; it also used that stereotyping in transfer (a technique of projecting the perceived negative qualities of one group onto another in order to discredit it). In both the North and the South, Civil War recruitment posters, like most other war recruitment posters, were infused with various propaganda elements. Civil War recruiting posters frequently employed patriotic appeals (flag waving), slogans, and virtue words (patriotism, courage, honor, etc.). “Patriotic imagery contributed to the plea, and might feature eagles with wings spread, cavalry officers with raised swords, battle scenes, or pictures of George Washington and other national figures.
Most posters were intended for a broad-based audience but some targeted specific segments of the population, such as posters written in German or French or decorated with harps and shamrocks to appeal to Irish-Americans”. Though the specifics of each poster were combined in all manner of ways, one thing they all have in common is that they attempt to convince their audience to support the war effort, and the most common way they did this (both in the North and the South) was through appeals to patriotism. The propaganda imagery produced during the American Civil War era generally relied upon patriotic fervor to advance the aims of the Union and the Confederacy. It is concretely true that many southern states seceded and practiced slavery, while the North generally supported Lincoln (enough to not, as a whole, threaten secession) and did not own slaves. It was not set in stone, however, that because one lived in the North one would automatically support the Union, support the abolition of slavery, oppose secession, or support these so strongly as to fight on their behalf (or vice versa for the South). This, primarily, is the purpose propaganda served during the American Civil War: the solidification of North vs. South identity, pro-abolition vs. anti-abolition. The efficacy of each side's propaganda can still be felt today, close to 150 years since the war's occurrence. The KKK (which formed in the South as a result of the Union’s victory) is still active, as are many other similar groups concentrated most densely in the Southeast. The song “Dixie” (often protested as a racist relic of the Confederacy and a reminder of decades of white domination and segregation) as well as the Confederate flag are still sung and flown popularly throughout the South as symbols of Southern pride and heritage. In the modern United States, a felt North-South divide still lingers that Union and Confederate propagandists sought to instill nearly 150 years ago.
Thomson Jr., William Fletcher. “Pictorial Propaganda and the Civil War.” Editorial. The Wisconsin Magazine of History, Autumn 1962. Web. 7 Feb. 2012. <http://www.jstor.org/pss/4633807>.
“Guide to the Patriotic Envelope Collection [1861-1865], 1898 PR 117.” The New-York Historical Society. Web. 7 Feb. 2012. <http://dlib.nyu.edu/findingaids/html/nyhs/envelopes_content.html>.
Sasser, Patricia. “The Persuasive Eloquence of the Sunny South.” Digital Collection: South Carolina and the Civil War. University Libraries, University of South Carolina, July 2009. Web. 7 Feb. 2012. <http://digital.tcl.sc.edu/cdm/singleitem/collection/civilwar/id/360/rec/9>.
“24 May 1861: Col. Ellsworth “House Breaker and Thief”.” Weblog post. The Civil War Day by Day. UNC University Library, 24 May 2011. Web. 7 Feb. 2012. <http://www.lib.unc.edu/blogs/civilwar/index.php/2011/05/24/24-may-1861-col-ellsworth-house-breaker-and-thief/>.
“The Anti-Slavery Almanac.” Teach US History. Web. 7 Feb. 2012. <http://www.teachushistory.org/second-great-awakening-age-reform/resources/anti-slavery-almanac>.
“Miscegenation.” Dictionary.com. Web. 7 Feb. 2012. <http://dictionary.reference.com/browse/miscegenation>.
Common Name: Rice seed midge
Scientific name: Chironomus spp.
Adult midges resemble small mosquitoes. They swarm over rice fields, levees, and roadside ditches. Eggs are laid in strings on the surface of open water. After emerging, the larvae move to the soil surface, where they live in spaghetti-like tubes constructed from secreted silk, plant debris and algae. The larvae develop through four instars before pupating under water in tubes. The life cycle from egg to adult requires one to two weeks.
Damage caused by the midge: Larvae injure rice by feeding on the embryo of germinating seeds or on the developing roots and seeds of very young seedlings. Injury from the midge can range from insignificant to very severe. Injury can also be localized, making damage assessment difficult. In some instances, whole fields may need to be replanted; in others, only parts of fields may require reseeding. Midge injury is indicated by the presence of chewing marks on the seed, roots and shoots and by the presence of hollow seeds.
Facts: Larvae feed on the embryo of germinating seeds or the developing roots of young seedlings. Midge injury occurs in water-seeded rice and is usually not important once seedlings are several inches tall. If midge injury is present and the plant stand has been reduced to fewer than 15 plants per square foot, treatment may be necessary.
What should you look for: Rice seed midge is a problem only for rice seeds and seedlings in water-seeded fields. Fields should be scouted for midge injury 5-7 days after seeding. Check for hollow seeds and chewing marks on the seed, roots and shoots. Repeat scouting at 5- to 7-day intervals until rice seedlings are about 3 inches tall. Midges are not a problem once rice is more than 2 to 4 inches tall. Midge presence is indicated by larval tubes on the soil surface. There are many midge species, most of which do not attack rice, and the presence of midge tubes alone does not indicate the need to treat a given field.
How you can manage rice seed midge: Drain fields to reduce numbers of midge larvae. Reseeding of heavily infested fields may be necessary. Avoid holding water in rice fields for more than two to three days before seeding, to discourage the buildup of large midge numbers before seeding. Pre-sprout seed and avoid planting in cool weather to help speed rice through the vulnerable stage and reduce the chance of serious damage.
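The fact sheet's treatment rule can be captured in a few lines of code. The sketch below is illustrative only: the function name and messages are invented, while the 15-plants-per-square-foot threshold and the 5- to 7-day scouting interval come straight from the text above.

```python
def recommend_midge_action(injury_present: bool, plants_per_sqft: float) -> str:
    """Apply the fact sheet's rule of thumb: treat only when midge injury
    is present AND the stand has fallen below 15 plants per square foot."""
    if injury_present and plants_per_sqft < 15:
        return "treatment may be necessary; reseed if heavily infested"
    if injury_present:
        return "injury present but stand adequate; keep scouting every 5-7 days"
    return "no action; rescout in 5-7 days until seedlings are about 3 inches tall"

print(recommend_midge_action(True, 12))   # treatment may be necessary; ...
print(recommend_midge_action(True, 22))   # injury present but stand adequate; ...
```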
Common belief and scientific consensus hold that black holes pull matter in rather than spewing it out. But NASA has just found some curious evidence around a supermassive black hole named Markarian 335. Two of NASA's telescopes, including the Nuclear Spectroscopic Telescope Array (NuSTAR), observed what is believed to be the black hole's corona launching away from the black hole itself. That event was then followed by a large pulse of X-ray energy. “This is the first time we have been able to link the launching of the corona to a flare,” said Dan Wilkins of Saint Mary's University. “This will help us understand how supermassive black holes power some of the brightest objects in the universe.” Fiona Harrison, the principal investigator of NuSTAR, admits that the energy source is “mysterious”. According to Harrison, recording the event should, in theory, provide clues about the size and structure of Markarian 335, as well as about the nature of black holes more generally. Markarian 335 is 324 million light-years from Earth. This diagram shows how a shifting feature, called a corona, can create a flare of X-rays around a black hole. The corona (represented in purplish colors) gathers inward (left), becoming brighter, before shooting away from the black hole (middle and right). Astronomers don't know why coronas shift, but they have learned that this process leads to a brightening of X-ray light that can be observed by telescopes.
Summary: BMI is an important measurement that indicates your risk of hypertension, heart disease and premature death. Do you know your Body Mass Index?
The Body Mass Index (BMI) is a number that represents body weight adjusted for height and is used worldwide to gauge obesity. BMI is broken down into four categories: underweight, normal, overweight, and obese. The formula for calculating adult (20 years old and over) BMI is not a difficult one: simply divide your weight in pounds by your height in inches squared and multiply by 703. Even easier – just go to the Centers for Disease Control and Prevention website and plug your height and weight into their BMI calculator. As BMI increases, so does the risk for diseases such as hypertension, diabetes, heart disease, and premature death, so it's important to know whether you're just packing a few extra pounds or are at seriously increased risk for any of these diseases. While there are other methods of measuring obesity, such as underwater weighing and skin calipers, using the BMI is reasonably accurate, is inexpensive, and does not require special instrumentation.
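As a rough illustration of the formula described above (weight in pounds divided by height in inches squared, times 703), here is a minimal sketch. The function names are arbitrary; the category cut-offs are the standard adult ones used by the CDC (under 18.5, 18.5-24.9, 25-29.9, and 30 or more).

```python
def bmi(weight_lb: float, height_in: float) -> float:
    """Adult BMI from weight in pounds and height in inches (the x703 factor
    converts the imperial units to the metric kg/m^2 definition)."""
    return weight_lb / height_in ** 2 * 703

def category(bmi_value: float) -> str:
    """Map a BMI value to the four standard adult categories."""
    if bmi_value < 18.5:
        return "underweight"
    if bmi_value < 25:
        return "normal"
    if bmi_value < 30:
        return "overweight"
    return "obese"

value = bmi(160, 68)                      # 160 lb, 5 ft 8 in
print(round(value, 1), category(value))   # 24.3 normal
```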
The Ins and Outs of Tides
Compare predicted and observed tides using data from NOAA.
- Describe the forces causing tides.
- Appraise the accuracy of tidal predictions.
Key terms: gravitational pull, centrifugal force
The rhythmic ebb and flow of the oceans subject coastlines to constant change. Tides dictate the lives of the marine organisms which live within their reach, as well as the plans of those who live, work, and play near the coast. Understanding tides is crucial for safe maritime navigation, coastal zone management and coastal engineering projects, and for fisherpeople, boaters, surfers and other water sport enthusiasts. Tides vary widely from place to place, so tide prediction data for specific areas are very useful. This is especially true for the Bay of Fundy in Nova Scotia, Canada, which has the highest tides in the world. The bay is very narrow, so water from the ocean rushes in and out, changing the water level by up to 20 meters a day! Tides are caused by two forces. One is the gravitational pull of the sun and the moon on the earth; the moon has more influence on tides than the sun because, although the sun is far more massive, the moon is about 400 times closer, and tide-raising forces weaken rapidly with distance. The other is the centrifugal effect of the earth revolving around the earth-moon system's common center of mass. Together these produce bulges in the ocean that follow the moon as it revolves around the earth, one bulge directly under the moon and the other on the opposite side of Earth. There are usually 2 high tides and 2 low tides in each lunar day (every 24 hours 50 minutes). During full moon and new moon phases we experience spring tides, and during the moon's quarter phases we have neap tides. Tide predictions provided by the National Oceanic and Atmospheric Administration (NOAA) are based upon analyses of tidal observations for periods of at least one year and are generated using a complex computer program. Predicted tidal heights are those expected under average weather conditions, since it would be impossible to predict the effect that wind, rain, freshwater runoff, and other short-term meteorological events will have on the tides. Generally, prolonged onshore winds or low atmospheric pressure can produce higher tides than predicted, while the opposite conditions can result in lower tides than those predicted. When planning around the tides, one must be aware that actual water levels may vary from those predicted when weather differs from what is considered average for that location. The accuracy of tide predictions varies with location. Periodically NOAA does a comparison of the predicted tides versus the observed tides for a calendar year. The information generated is compiled in a Tide Prediction Accuracy Table. Look at the average yearly differences in estimated tidal times and heights to get an idea of the accuracy of the predictions. At the NOAA Tides Online site, you can access near real-time data and see how close the predictions were to the observed tides at various sites (select State Maps, then choose your state). Suppose you want to photograph marine organisms at low tide, or you are planning to scuba dive from a beach and need to know what the tides will be doing. Use the NOAA Tide Predictions page to find tidal predictions for the entire year at your site of interest. Generate a graph of tidal predictions for a location near you by using the WWW Tide and Current Predictor, a site containing different station locations. Once you have chosen your site and obtained the data in tabular form, scroll down to Prediction Options.
In the first section of options, select Graphic Plot. In the next section, select 3 days for the Length of Time to Display. Start with the default colors; you can change them later if you wish. Label the horizontal lines on your graph. Notice the gradual shift in the times that highs and lows occur from day to day. How does this correspond with the moon's position and phase? (NOTE: These tidal predictions are not official NOAA predictions.)
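If you download predicted and observed tides as tables, you can compute the same kind of average differences that NOAA's Tide Prediction Accuracy Table reports. A minimal sketch, using made-up numbers rather than real NOAA data:

```python
# Hypothetical predicted vs. observed high tides at one station:
# (time in hours from the start of the record, height in meters).
predicted = [(2.4, 1.31), (14.9, 1.42), (27.3, 1.28)]
observed  = [(2.6, 1.25), (15.1, 1.50), (27.2, 1.33)]

# Absolute timing and height errors for each high tide.
time_errs   = [abs(o[0] - p[0]) for p, o in zip(predicted, observed)]
height_errs = [abs(o[1] - p[1]) for p, o in zip(predicted, observed)]

print(f"mean time difference:   {sum(time_errs) / len(time_errs) * 60:.0f} minutes")
print(f"mean height difference: {sum(height_errs) / len(height_errs):.2f} m")
```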
Alexandra shares an excellent activity that can be used by parents and teachers alike. My name is Alexandra Berube, I am a former Kindergarten teacher and continue to tutor students in K-8th grade, in all subject areas and test preparation. When children are beginning to understand that letters carry meaning, they will use one or two letters to convey an idea--usually the first letter and/or the last. It is very tiring for children to try to write long ideas if this is their current skill capacity. Children love to draw, whether it’s representational (“That’s a tree with a rainbow”), or symbolic/action-based (“That’s how I run around and that’s where I jump from...”). If you give them a comic book template (blank squares side by side, large enough that they can draw in each one), they can ‘write’ out their story. Once they’ve drawn in each square, they can narrate what’s happening. Depending on their ability, you can either write for them, sounding it out as you go (modeling); you can help them sound out key words and then write the other words yourself; or you can help them sound out all the words to their best ability of invented spelling. This builds meaning into the process of writing, because it serves the purpose of narrating their story. Young children often forget what they are trying to write about as they go, because they are so focused on the letters. This gives them the chance to first put down their story in pictures, and then write the best they can without losing their idea.
Development of the Holocaust
The Holocaust of European Jewry can be divided into four periods of time:
1. 1933-1939: The aim of the Nazis during this time was to “cleanse” Germany of her Jewish population (Judenrein). By making the lives of the Jewish citizenry intolerable, the Germans indirectly forced them to emigrate. The Jewish citizens were excluded from public life, were fired from public and professional positions, and were ostracized from the arts, humanities, and sciences. The discrimination was anchored in German anti-Jewish legislation such as the Nuremberg Laws of 1935. At the end of 1938, the government initiated a pogrom against the Jewish inhabitants on a particular night which came to be known as Kristallnacht. This act legitimized the spilling of Jewish blood and the taking of Jewish property. The annexation of Austria in 1938 (Anschluss) subjected the Jewish population there to the same fate as that in Germany.
2. 1939-1941: During this time, the Nazi policy took on a new dimension: the option of emigration (which was in any case questionable because of the lack of countries willing to accept Jewish refugees) was brought to a halt. The Jew-hatred, which was an inseparable part of Nazi policy, became even more extreme with the outbreak of World War II. As the Nazis conquered more land in Europe, more Jewish populations fell under their control: the Jews of Poland, Ukraine, Italy, France, Belgium, Holland, etc. The Jews were placed in concentration camps and compelled to do forced labor. Ghettos were set up in Poland, Ukraine, and the Baltic states in order to segregate the Jewish population. In the camps and ghettos, great numbers of Jews perished because of impossible living conditions, hard labor, starvation, or disease. Hitler's political police force, the Gestapo, had been founded two months after the Nazi rise to power. It became the most terrifying and deadly weapon of the Nazi government, and was used for the destruction of millions of Jews.
3. June 1941 – fall 1943: This was the time during which the Nazis began carrying out the "Final Solution to the Jewish problem". Systematic genocide of the Jewish people became official Nazi policy as a result of the Wannsee Conference (Jan. 1942). Special task forces, known as Einsatzgruppen, would follow behind the German army and exterminate the Jewish population of newly conquered areas. In this manner, entire Jewish communities were wiped out. At this point, many concentration camps which had been set up shortly after the Nazi rise to power became death camps used for the mass murder of Jews in gas chambers. Some of the more well-known extermination camps were Auschwitz, Bergen-Belsen, Treblinka, and Belzec.
4. 1943 – May 1945: The beginning of 1943 was a turning point in the war. This time saw the gradual collapse of the Third Reich until its ultimate surrender on May 7th, 1945. Despite their weakened position, the Nazis continued with their plan of destruction of the Jewish population in the ghettos and camps still under their control. As the Soviet army proceeded westward, the Nazis hastened the destruction of the Jews and then of their own facilities in order to cover the tracks of their crimes. In the fall of 1944, the Nazis began the evacuation of Auschwitz, and in January 1945, Himmler commanded that all camps toward which the Allied forces were advancing be evacuated on foot. In these so-called “death marches”, tens of thousands more Jewish lives perished.
A bladder infection is a common type of urinary tract infection. It is the result of an invasion of bacteria into the bladder. A bladder infection is also sometimes called cystitis, although cystitis is a general term for an inflammation of the bladder that can occur without a bacterial infection. The bladder is a muscular organ of the urinary tract whose function is to temporarily store urine until it is expelled from the body through the urethra. Normally, the bladder, the urethra and the rest of the urinary tract, including the urine, the ureters and the kidneys, are sterile. This means that they contain no bacteria or other microorganisms. However, bacteria can get into the bladder from outside the body through the urethra. Bacteria can also come from other parts of the body that are infected, spreading through the bloodstream into the urinary tract. A bladder infection results in symptoms that typically include burning with urination, difficulty urinating, an urge to urinate frequently, and bloody urine. Symptoms may vary between individuals in character and intensity. Bladder infections can lead to potentially serious, even life-threatening complications in some people, especially if left untreated. These include pyelonephritis, kidney damage, sepsis and problems with a pregnancy, such as premature birth and having a low-birth-weight baby. Bladder infections occur more commonly in women than in men, because the urethra in women is shorter than a man's. This makes it easier for bacteria to get into the female bladder. Women who are sexually active, who use diaphragms for birth control, and/or are past menopause are at an increased risk for a bladder infection. Certain other populations are also at a higher risk for developing bladder infections. They include older adults and the elderly and people with a history of kidney stones, kidney disease, or chronic conditions that affect the immune system, such as HIV/AIDS and diabetes. People who have an indwelling catheter in their bladder are also at risk. Making a diagnosis of a bladder infection begins with taking a thorough personal and family medical history, including symptoms, and completing a physical examination. It also includes performing a urinalysis test, which checks for the presence of pus, white blood cells, and bacteria in the urine, all of which point to a bladder infection. A urine culture and sensitivity is usually performed to find the exact microorganism that is causing the infection and to determine the most effective antibiotic to treat it. A diagnosis of a bladder infection can easily be missed or delayed in older populations. This is because some symptoms, such as fatigue and weakness, may not be noticed or might be attributed to aging. Mild bladder infections that are not accompanied by complications, such as pyelonephritis, are generally treated with oral antibiotic medications. People are also encouraged to drink plenty of water to help flush bacteria out of the bladder. The prognosis and the chance for a cure without complications are good for people who are generally healthy. More serious infections, such as those in people with HIV/AIDS or in the elderly, may require hospitalization, especially if there are complications, such as sepsis.
People who are young or middle-aged are often diagnosed and treated quickly for a bladder infection, because the symptoms are often so painful and/or inconvenient that they seek prompt medical care. In the elderly population, however, symptoms of a bladder infection can be easily overlooked, delaying a diagnosis. The first step in treating a bladder infection is prevention. Prevention measures include drinking plenty of fluids, urinating as soon as possible when the urge is felt, and drinking cranberry juice, which may have infection-fighting qualities. For women, prevention measures include urinating promptly after sexual intercourse and wiping the genital area from front to back. Two related points are worth noting. Antibiotics often cause some level of diarrhea, because they kill off not only the “bad” bacteria but also the “good” bacteria. And interstitial cystitis is an under-diagnosed bladder condition that can be mistaken for overactive bladder or other conditions.
U.S. Geological Survey -- EROS Data Center
The map below displays a gray relief of the flow direction grid for the 1 KM African DEM. The boxed-in region identifies the Congo-Zaire flow channels and displays the complex amount of information that can be extracted from the DEM. The movement of water across the land surface is called runoff. The direction runoff flows across a DEM surface is called the flow direction. Flow directions are determined by finding the steepest descent from each cell in the DEM. The distance and direction of descent are calculated using the eight-neighborhood process (ESRI, 1992) adapted from Jenson and Domingue (1988). Once flow directions are developed, basin delineation can be accomplished.
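A minimal sketch of the eight-neighborhood (D8) steepest-descent calculation described above. It assumes a square-cell DEM held in a NumPy array and uses the common ESRI direction encoding (powers of two, east = 1 clockwise to northeast = 128); diagonal neighbors lie farther away by a factor of the square root of 2, which the distance term accounts for. This is an illustration of the general technique, not the EROS Data Center's production code.

```python
import math
import numpy as np

# ESRI D8 codes: E=1, SE=2, S=4, SW=8, W=16, NW=32, N=64, NE=128
NEIGHBORS = [(0, 1, 1), (1, 1, 2), (1, 0, 4), (1, -1, 8),
             (0, -1, 16), (-1, -1, 32), (-1, 0, 64), (-1, 1, 128)]

def d8_flow_directions(dem: np.ndarray) -> np.ndarray:
    """Steepest-descent (D8) flow direction code for each interior cell."""
    rows, cols = dem.shape
    flow = np.zeros_like(dem, dtype=int)
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            best_code, best_slope = 0, 0.0
            for dr, dc, code in NEIGHBORS:
                dist = math.sqrt(2) if dr and dc else 1.0  # diagonal vs. edge
                slope = (dem[r, c] - dem[r + dr, c + dc]) / dist
                if slope > best_slope:
                    best_code, best_slope = code, slope
            flow[r, c] = best_code  # 0 marks a pit or flat cell
    return flow

# A small bowl draining toward the bottom edge.
dem = np.array([[5, 5, 5, 5],
                [5, 4, 3, 5],
                [5, 3, 2, 5],
                [5, 5, 1, 5]], dtype=float)
print(d8_flow_directions(dem))
```

Once every cell has a direction code, basin delineation amounts to tracing these pointers downstream, which is why the flow direction grid is computed first.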
In this planets worksheet, students cut apart 9 full-color cards with unlabeled pictures of the planets. There are no directions for students on the page.
* Workers spend all their time outdoors, sometimes in poor weather and often in isolated areas.
* Most jobs are physically demanding and can be hazardous.
* Little to no change in overall employment is expected.
Nature of the Work
The Nation’s forests are a rich natural resource, providing beauty and tranquility, varied recreational benefits, and wood for commercial use. Managing and harvesting the forests and woodlands require many different kinds of workers. Forest and conservation workers help develop, maintain, and protect the forests by growing and planting new seedlings, fighting insects and diseases that attack trees, and helping to control soil erosion. Timber-cutting and logging workers harvest thousands of acres of forests each year for the timber that provides the raw material for countless consumer and industrial products. Forest and conservation workers perform a variety of tasks to reforest and conserve timberlands and to maintain forest facilities, such as roads and campsites. Some forest workers, called tree planters, use digging and planting tools called “dibble bars” and “hoedads” to plant seedlings in reforesting timberland areas. Forest workers also remove diseased or undesirable trees with power saws or handsaws, spray trees with insecticides and fungicides to kill insects and to protect against disease, and apply herbicides on undesirable brush to reduce competing vegetation. In private industry, forest workers, usually working under the direction of professional foresters, paint boundary lines, assist with controlled burning, aid in marking and measuring trees, and keep tallies of trees examined and counted. Those who work for State and local governments or who are under contract with them also clear away brush and debris from camp trails, roadsides, and camping areas. Some forest workers clean kitchens and restrooms at recreational facilities and campgrounds. Other forest and conservation workers work in forest nurseries, sorting out tree seedlings and discarding those not meeting standards of root formation, stem development, and condition of foliage. Some forest workers are employed on tree farms, where they plant, cultivate, and harvest many different kinds of trees. Their duties vary with the type of farm. Those who work on specialty farms, such as farms growing Christmas or ornamental trees for nurseries, are responsible for shearing treetops and limbs to control the growth of the trees under their care, to increase the density of limbs, and to improve the shapes of the trees. In addition, these workers’ duties include planting the seedlings, spraying to control surrounding weed growth and insects, and harvesting the trees. Other forest workers gather, by hand or with the use of handtools, products from the woodlands, such as decorative greens, tree cones and barks, moss, and other wild plant life. Still others tap trees for sap to make syrup or chemicals. Logging workers are responsible for cutting and hauling trees in large quantities. The timber-cutting and logging process is carried out by a logging crew. A typical crew might consist of one or two tree fallers or one tree harvesting machine operator to cut down trees, one bucker to cut logs, two logging skidder operators to drag cut trees to the loading deck, and one equipment operator to load the logs onto trucks. Specifically, fallers, commonly known as tree fallers, cut down trees with hand-held power chain saws or mobile felling machines.
Usually using gas-powered chain saws, buckers trim off the tops and branches and buck (cut) the resulting logs into specified lengths. Choke setters fasten chokers (steel cables or chains) around logs to be skidded (dragged) by tractors or forwarded by the cable-yarding system to the landing or deck area, where the logs are separated by species and type of product, such as pulpwood, saw logs, or veneer logs, and loaded onto trucks. Rigging slingers and chasers set up and dismantle the cables and guy wires of the yarding system. Log sorters, markers, movers, and chippers sort, mark, and move logs, based on species, size, and ownership, and tend machines that chip up logs. Logging equipment operators use tree harvesters to fell the trees, shear the limbs off, and then cut the logs into desired lengths. They drive tractors mounted on crawler tracks and operate self-propelled machines called skidders or forwarders, which drag or transport logs from the felling site in the woods to the log landing area for loading. They also operate grapple loaders, which lift and load logs into trucks. Some logging equipment operators, usually at a sawmill or a pulp-mill woodyard, use a tracked or wheeled machine similar to a forklift to unload logs and pulpwood off of trucks or gondola railroad cars. Some newer, more efficient logging equipment has state-of-the-art computer technology, requiring skilled operators with more training. Log graders and scalers inspect logs for defects, measure logs to determine their volume, and estimate the marketable content or value of logs or pulpwood. These workers often use hand-held data collection devices to enter data about individual trees; later, the data can be downloaded or sent from the scaling area to a central computer via modem. Other timber-cutting and logging workers have a variety of responsibilities. Some hike through forests to assess logging conditions. Some clear areas of brush and other growth to prepare for logging activities or to promote the growth of desirable species of trees. Most crews work for self-employed logging contractors who have substantial logging experience, the capital to purchase equipment, and the skills needed to run a small business successfully. Many contractors work alongside their crews as supervisors and often operate one of the logging machines, such as the grapple loader or the tree harvester. Some manage more than one crew and function as owner-supervisors. Although timber-cutting and logging equipment has greatly improved and operations are becoming increasingly mechanized, many logging jobs still are dangerous and very labor intensive. These jobs require various levels of skill, ranging from the unskilled task of manually moving logs, branches, and equipment to skillfully using chain saws to fell trees, and heavy equipment to skid and load logs onto trucks. To keep costs down, many timber-cutting and logging workers maintain and repair the equipment they use. A skillful, experienced logging worker is expected to handle a variety of logging operations. Forestry and logging jobs are physically demanding. Workers spend all their time outdoors, sometimes in poor weather and often in isolated areas. The increased use of enclosed machines has decreased some of the discomforts caused by inclement weather and has generally made tasks much safer. Most logging occupations involve lifting, climbing, and other strenuous activities, although machinery has eliminated some heavy labor. Loggers work under unusually hazardous conditions. 
Falling branches, vines, and rough terrain are constant hazards, as are the dangers associated with tree-felling and log-handling operations. Special care must be taken during strong winds, which can even halt logging operations. Slippery or muddy ground, hidden roots, or vines not only reduce efficiency, but also present a constant danger, especially in the presence of moving vehicles and machinery. Poisonous plants, brambles, insects, snakes, heat, humidity, and extreme cold are everyday occurrences where loggers work. The use of hearing protection devices is required on logging operations because the high noise level of felling and skidding operations over long periods may impair one’s hearing. Workers must be careful and use proper safety measures and equipment such as hardhats, eye and ear protection, safety clothing, and boots to reduce the risk of injury. The jobs of forest and conservation workers generally are much less hazardous than those of loggers. It may be necessary for some forestry aides or forest workers to walk long distances through densely wooded areas to accomplish their work tasks. Source: bls.gov, pjcj.net, nrc.umass.edu, hwforests.com,
The topic yellow-ridged toucan is discussed in the following articles: toucan beaks are composed of lightweight bone covered with keratin—the same material as human fingernails. The common names of several species, such as the chestnut-mandibled toucan, the fiery-billed aracari, and the yellow-ridged toucan, describe their beaks, which are often brightly coloured in pastel shades of green, red, white, and yellow. This coloration is probably used by the birds for species recognition.
The American Engineers’ Council for Professional Development (ECPD) defines engineering thus: “The creative application of scientific principles to design or develop structures, machines, apparatus, or manufacturing processes, or works utilizing them singly or in combination; or to construct or operate the same with full cognizance of their design; or to forecast their behavior under specific operating conditions; all as respects an intended function, economics of operation or safety to life and property” ~Engineers’ Council for Professional Development (1,2)
That’s a pretty complex definition! What this really means is that engineers design, build, and operate many different things in our world, and try to find ways to make those things better. Some of these things include buildings, roads, machines, and even processes (like the process for building a car). Because there are so many different things an engineer could do, engineering as a field is divided into many different types. Below are just a few of the different types of engineering. Engineers use not only scientific information to design things, but also their practical knowledge and understanding of the society they work in. So, what do we already know a boat needs?
- It needs to be waterproof, so it doesn’t leak.
- It needs to float. This sounds simple, but it means the boat must displace (push away) an amount of water equal to its weight (please visit our blog on buoyancy for more information, and see the buoyancy sketch after the reference list below!). This is what makes the shape of a boat so important.
- It may need to transport cargo. This could be people, or objects, or both! This means it must be durable, and it must displace even more water when something else is placed in it.
What you’ll need:
1. Paper! Any kind of paper will do—newspaper, magazines, scratch paper from a copier; just make sure that no one needs the information on the paper! Getting some from a recycling bin would be best!
2. Other things to build your boat with. Be creative! Think about what your boat needs to have in order to float and not leak. Here are some suggestions as to what else you might want to use:
a. Waterproof tape
b. Thin wire or flexible sticks
3. Some small object that is heavy for its size, like a small rock.
4. A bathtub or sink full of water.
Here’s what to do:
- Plan! Write down how you plan to build your boat using the materials provided. ONE OF THE MATERIALS YOU USE MUST BE A PAPER PRODUCT!
- Assemble your boat.
- Test your boat in the tub or sink: see whether it floats for 20 minutes and whether it can hold your heavy object.
What happened? Did your boat float? Did it keep water out for the whole 20 minutes? Could it hold the heavy object? Do you think you could improve your design? Write down what happened and how you might improve your boat in your journal. Try building your boat again, and see if you can make those improvements! See if you can make your boat float longer!
For further reading, check out these sites!
Ancient Egyptian Engineers: http://www.history.com/images/media/pdf/engineering_empire_egypt_study_guide.pdf
Ancient Roman Engineers: http://www.historylearningsite.co.uk/roman_engineering.htm
Types of Engineering: http://www.aboriginalaccess.ca/adults/types-of-engineering
1) Engineers' Council for Professional Development. (1947). Canons of ethics for engineers
2) Engineers' Council for Professional Development definition on Encyclopædia Britannica (Includes Britannica article on Engineering)
3) Cooney, Michael (2008). “What are the 14 Greatest Engineering Challenges for the 21st Century?”, Network World, www.networkworld.com.
4) Maehlum, Mathias Aarre (2014).
“Solar Energy Pros and Cons.”, Energy Informative, www.energyinformative.org.
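A quick way to see the displacement rule from the boat-design list above is to compare the weight of the boat plus its cargo with the weight of water the hull could displace. This sketch assumes an idealized rectangular hull and fresh water; the dimensions and masses are illustrative, not part of the activity.

```python
WATER_DENSITY = 1000.0  # kg per cubic meter (fresh water)

def floats(hull_length_m: float, hull_width_m: float,
           hull_depth_m: float, total_mass_kg: float) -> bool:
    """A hull floats if the water it could displace when fully submerged
    outweighs boat + cargo. Real hulls are not boxes, so this is generous."""
    max_displaced_mass = WATER_DENSITY * hull_length_m * hull_width_m * hull_depth_m
    return total_mass_kg <= max_displaced_mass

# A 20 cm x 10 cm x 4 cm paper hull can displace up to 0.8 kg of water,
# so a 50 g boat carrying a 100 g rock should still float.
print(floats(0.20, 0.10, 0.04, 0.150))  # True
```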
Europe has suffered repeated heatwaves in recent summers. Now, scientists have revealed that climate change was mainly responsible for the scorching temperatures across the Mediterranean nations of Europe during the summer months. Scientists have further warned that, if global warming is not checked, then by 2050 Europe could be facing heatwaves with temperatures soaring over 40 degrees Celsius. The World Weather Attribution (WWA) group put forward this new analysis by studying, in particular, the “Lucifer” heatwave that struck Italy, Croatia, and south-east France during early August. The researchers found that human-caused climate change made the Lucifer heatwave four times more likely. A member of the WWA, Friederike Otto of the University of Oxford, said that summers are becoming hotter because heatwaves have intensified in recent years. If nothing is done to check or reduce greenhouse gas emissions, this type of scorching heatwave will become normal in the coming years. Climate change cannot be directly blamed for any single extreme event, because random extremes also occur naturally. But researchers say that if we compare those extremes with historical measurements as well as computer models of a climate unaltered by carbon emissions, we can observe that global warming has already become the main culprit behind extreme and dangerous weather conditions. In June, the WWA showed how extreme heatwaves resulted in deadly forest fires in Spain and Portugal; it was found that global warming made this event ten times more likely to occur. In Australia, research revealed that the bleaching of the Great Barrier Reef was made 175 times more likely by greenhouse gas emissions, and that the recent hot winter was made 60 times more likely by global warming. Although thorough analysis of hurricanes is more complex and time-consuming than that of heatwaves, scientists have hinted that climate change might have strengthened the giant super-storms that devastated the Caribbean islands and parts of the US. The researchers examined the Lucifer heatwave thoroughly during its three-day peak in early August and found that the intensity of such heatwaves has risen by 1-2 degrees Celsius since 1950 and that climate change has made these intense heatwaves four times more likely to take place than they would otherwise have been.
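The "four times more likely" statements above are probability (risk) ratios: the chance of the event in today's climate divided by its chance in a climate without human greenhouse gas emissions. A minimal sketch with illustrative numbers, not the WWA's actual figures:

```python
def probability_ratio(p_with_warming: float, p_without_warming: float) -> float:
    """How many times more likely an event has become (the risk ratio
    attribution studies report)."""
    return p_with_warming / p_without_warming

# Illustrative only: if a Lucifer-strength heatwave had a 0.5% chance per
# summer in a pre-industrial climate and a 2% chance today, it is now
# 4 times more likely -- the kind of statement the WWA analysis makes.
print(probability_ratio(0.02, 0.005))  # 4.0
```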
Plasma shock waves
Launched by two Russian rockets in the summer of 2000, the four Cluster spacecraft then used their own thrusters to reach an elliptical orbit with a closest distance to the Earth of nearly 20 000 km and a farthest distance of nearly 120 000 km: nearly a third of the distance to the moon. Cluster's task is to sample the Earth's magnetosphere. Cluster senses many different physical processes, including the shockwave generated when the solar wind slams into the Earth. Our planet and its magnetic field are an obstacle to the supersonic plasma as it flows away from the Sun. Therefore a ‘bow shock’ forms to slow down and deflect the plasma around the Earth. Shocks form wherever an obstacle sits in a supersonic plasma flow. The Hubble Space Telescope has captured images of a bow shock, about 0.25 light-years across, formed ahead of a star ploughing through the Orion Nebula. Astrophysical shocks are very energetic (they're the source of some of the highest-energy particles we know about), but to understand how those particles get such huge amounts of energy, we need to know more in general about how shocks work in plasmas. The Earth's bow shock is a good case to study, because it's close enough that we can get plenty of information back about the physics happening there. Cluster is not the first spacecraft to visit the Earth's magnetosphere. So what makes it so different? The answer is that the four spacecraft fly in close formation, giving us four tracks through whatever physical process is happening around the spacecraft. It is therefore possible to sample a volume of space and make measurements in three dimensions. Imperial College London developed 3-axis magnetometers, University College London's Mullard Space Science Laboratory designed and built electron detectors, and Sheffield University developed the wave processors for each satellite. Find out more about the Cluster mission at the European Space Agency's website!
According to Peterson, recent outbreaks of toxic algae blooms in Quebec lakes and off Sweden's Baltic Sea coast are prime examples of ecosystem flips, the consequence of nutrients from fertilizers permeating the soil and running off into streams, lakes and oceans. “As you get more and more nutrients in the soil you eventually get to a point where you can even completely stop farming and all the nutrients will still be there,” explained Bennett, an assistant professor at McGill's Department of Natural Resource Sciences and the School of Environment. “You go past a tipping point where it's very difficult to reverse.” Ecosystem flips can have significant and sometimes devastating effects on human well-being, as populations suddenly lose resources they depend on, said the researchers. “Some of the most vulnerable areas on Earth are places like the drylands of sub-Saharan Africa. In some of these regions we risk two types of ecosystem flips, one that causes rapid soil degradation with dramatic effects on yields and farmers' livelihoods, and another that affects rainfall and therefore also vegetation growth,” Gordon said. “These are the places where populations are growing the fastest, people have the least amount of water per capita and are the poorest of any of the biomes of the world. They are also the regions most likely to be affected by climate change,” Peterson added. As global demands for agriculture and water continue to grow, concluded the authors, it is increasingly urgent for scientists and managers to develop new ways to build resilience by anticipating, analyzing and managing changes in agricultural landscapes. Managing the green water component of the hydrological cycle is also important, as is encouraging more diverse agricultural practices. Regime shifts were a key issue at the Resilience 2008 Conference, Stockholm, Sweden, April 14-17, 2008.
A new analysis of the mineral composition of meteorites suggests that theories concerning the development of the early solar system may need revision. Announcing their results today in the journal Science, researchers conclude that it took the earth only 20 million years to form from material floating around the early sun. Previous estimates, in contrast, had placed that figure at around 50 million years. The findings also re-open the debate over which types of supernovae could have produced our solar system. Measuring the amounts of an isotope of the element niobium (niobium-92) and its daughter isotope zirconium-92 in two meteorite samples provided the researchers with a kind of radioactive chronometer capable of estimating the timing of events in the early solar system. The earlier calculation of 50 million years for the formation of the earth was obtained using the same technique. But this time, the experimenters made sure to avoid contamination of their samples. By paying greater attention to maintaining the purity of the samples, says study co-author Brigitte Zanda-Hewins of Rutgers University, the team was able to produce a more accurate estimate. Additionally, the new, lower figures for the abundance of niobium-92 (which is generated by supernovae) in the early solar system, Zanda-Hewins says, loosen the constraints on the types of supernovae that could have spawned the solar system. The floor is once again open for candidates.
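The "radioactive chronometer" works because niobium-92 decays away at a known rate, so the measured ratio dates the events that set it. A minimal sketch of the arithmetic only: the half-life used here is a commonly cited literature value (roughly 35 million years) and should be treated as an assumption, and the ratios are illustrative, not the study's data.

```python
import math

NB92_HALF_LIFE_MYR = 34.7  # commonly cited value; treat as an assumption

def elapsed_time_myr(initial_ratio: float, later_ratio: float,
                     half_life_myr: float = NB92_HALF_LIFE_MYR) -> float:
    """Time for a niobium-92 abundance ratio to decay from initial_ratio
    to later_ratio: delta_t = t_half * log2(R0 / R)."""
    return half_life_myr * math.log2(initial_ratio / later_ratio)

# Illustrative only: a factor-of-2 drop in the Nb-92 ratio between two
# samples corresponds to one half-life, about 35 million years.
print(round(elapsed_time_myr(1.0e-5, 0.5e-5)))  # 35
```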
Multiplication can Increase or Decrease a Number
Question: Does multiplication always increase a number?
Answer 1: Yes it does; take the number 8, for example:
2 × 8 = 16
3 × 8 = 24
4 × 8 = 32
In each case the result is larger, so, yes, multiplication clearly increases a number.
Answer 2: No, it increases a number only under certain conditions. Multiplying any positive number by a whole number greater than 1 will always increase its value, as in the examples above; but consider ½ × 8 = 4; here the number 8 is reduced. So, multiplying can have a reducing effect when multiplying a positive number by a fraction which is less than one.
But this can still be confusing. While we accept the above, the concept of '½ times 8' continues to be perceived as an increase. How then can we attach a meaning to ½ × 8 so that this will be perceived as decreasing? When multiplying by a whole positive number, e.g. 6 times 5, we understand this as 5 added over and over again, however many times (six times in this example). But this interpretation of 'times' does not quite work with fractions. If we ask what it means to add 8 'half a time', the answer is "not even once". Again we need to put the term multiplying into a context with which we can identify, and which will then make the situation clear.
We want to buy 30 roses which are sold in bunches of 5, so we ask for "6 of the 5-rose bunches". In this way, the word 'times' also often means 'of'. If we try using the word 'of' when 'times' appears to have an unclear meaning, we get ½ of 8 rather than ½ times 8. Indeed, we know what ½ of 8 means: it is 4. So, by using 'of' instead of 'times' we are able to understand the concept of multiplying by a fraction and how this can have a reducing effect when the fraction is smaller than 1. This also helps us to understand how we multiply by a fraction, and why the method works: the 4 which results from ½ × 8 (or ½ of 8) can be reached by dividing 8 by 2; similarly, the 5 which results from ⅓ × 15 (or ⅓ of 15, a third of fifteen) can be reached by dividing 15 by 3. Generalising this result gives: multiplying a number by 1/n is the same as dividing that number by n.
When your bank balance is +4 pounds you have £4. When your bank balance is −4 pounds you owe £4. Owing is the opposite of having, so we find that we can associate the concept of 'minus' with '(the) opposite (of)'. This also works in reverse. Thus, (−4) × 8 means "owing £4, eight times over", or "owing £32", which is −£32. Now −32 is smaller than 8, so we have illustrated another case where multiplying has a reducing effect, i.e. when multiplying by a negative number. Note that, using the method shown above, it follows that −1 × 8 = −8, and vice versa.
Exercise: Are the following statements always true, always false, or sometimes true and sometimes false?
1. Multiplication of a positive number by a number greater than 1 always increases the number.
2. Multiplication of a positive number by a positive number between 0 and 1 always increases the number.
3. Multiplication of a negative number by a positive number always increases the first number.
Answers:
1. Always true.
2. Always false, as multiplication of a positive number by a number between 0 and 1 will always reduce the number.
3. Sometimes false and sometimes true; e.g. for the number −8, 2 × (−8) = −16, so the number is decreased, whereas ½ × (−8) = −4 is greater than −8, so there the number is increased.
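The cases worked through above can be checked directly with exact fraction arithmetic; the snippet below simply replays the article's examples.

```python
from fractions import Fraction

n = 8
print(2 * n)                  # 16: multiplying by a whole number > 1 increases 8
print(Fraction(1, 2) * n)     # 4: "half of 8" -- multiplying by 1/2 divides by 2
print(Fraction(1, 3) * 15)    # 5: "a third of 15" -- multiplying by 1/3 divides by 3
print(-4 * n)                 # -32: owing 4 pounds eight times over
print(Fraction(1, 2) * -8)    # -4: greater than -8, so the negative number increased
```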
On this day in history, October 21, 1797, the USS Constitution is launched in Boston Harbor. The Constitution is the world's oldest commissioned naval ship still sailing. It was one of six frigates authorized by the US Congress to deal with the Barbary pirates of North Africa. At the close of the American Revolution, the Continental Navy was shut down. In the 1790s, increasing pirate activity by the North African states of Morocco, Tunisia, Algeria and Libya threatened American shipping in the Mediterranean, leading Congress to create a small navy to deal with the threat. President George Washington named the USS Constitution, which was launched on October 21, 1797. The ships were larger and stronger than typical naval ships of the era because the United States could not afford to build a very large fleet. Instead, Congress decided to make a few ships and make them strong. The Constitution served in the Quasi-War against France in the late 1790s, capturing French ships in the Americas and the West Indies. During the First Barbary War, the Constitution served as the flagship of Captain Edward Preble, who forced the Barbary states into submission. These battles are the subject of the line, "From the halls of Montezuma to the shores of Tripoli," in the United States Marine Corps hymn. The Constitution earned its reputation mostly from the War of 1812, during which she made numerous captures of British ships. The Constitution successfully outran 5 British ships in July of 1812. She captured and destroyed HMS Guerriere in August of that year off Nova Scotia. This battle was the source of the Constitution's nickname, "Old Ironsides," when British cannonballs were seen to bounce off her sides. The Constitution was involved in the last fighting between British and American subjects during a battle with HMS Cyane and HMS Levant on February 20, 1815, a battle which the Constitution won. After the War of 1812, the Constitution served for years on patrol missions and diplomatic missions in places as far away as Africa, Brazil and the Mediterranean. She received such dignitaries as Emperor Pedro II of Brazil, King Ferdinand II of the Two Sicilies and Pope Pius IX. After significant renovations, the Constitution was recommissioned and sailed around the world in the 1840s, docking in such places as Madagascar, Zanzibar, Singapore, Vietnam, China and Hawaii. As the ship aged and became less seaworthy, she spent years as a training vessel, a classroom and even a dormitory, in such places as Annapolis, Philadelphia and Norfolk. In 1931, restoration efforts to make the Constitution seaworthy culminated in a 90-city tour of American ports. The Constitution traveled all the way from Bar Harbor, Maine, through the Panama Canal and north to Bellingham, Washington, though not under her own power: the ship was towed. After the tour, the Constitution sat in Boston Harbor, serving as a museum, a brig for those awaiting court-martial and a training vessel. After more extensive restoration, the Constitution set sail under her own power in 1997, her first sailing under her own power in 116 years. The USS Constitution still sits in Boston Harbor today, serving as a museum and educational facility that teaches about the US Navy. It is manned by 60 officers and sailors who are active-duty United States Navy personnel and is open year round for tours.
I saw this lesson on Artsonia in several variations.
Materials:
- drawing sheet A4 size
- white pencil or silver gel pen
- black paper for background
Start the lesson with a class discussion about witches. How can you recognize a witch? What things belong to a witch? What can you say about the clothing of a witch? Students draw in pencil the lower half of the body of a witch: skirt and legs. Around this body they draw things that belong to witches. Draw a horizon line at about 1/3 from the bottom. The drawing should be coloured with markers. Colour the background with markers or chalk pastel; the latter is obviously faster than colouring the background with markers. Paste the drawing on a black background and decorate the rim with theme-related little drawings, using a white pencil or a silver gel pen. In the debriefing it should become clear that you only need half a drawing to recognize a witch: which witch is this?
Made by students of grade 3
Module 6: Conversations about race This is the sixth in a series of six short online courses about decolonising and diversifying the curriculum. We recommend engaging with the modules in order. Get started with Module 1 here. Learn about this series of courses: Content in these modules has been developed by, and with feedback from, a range of expert teachers, leaders and researchers. The courses will support you to increase your knowledge of the multiplicity of British histories and identities, select literature and resources that foster belonging, address the perspectives represented in the curriculum, consider how lenses that decolonise and diversify can be applied to the curriculum, and have confident conversations with pupils about race. Decolonising and diversifying the curriculum is a lifetime endeavour that involves significant critical reflection, learning and action. These modules are designed as a starting point and not the answer. We intend for your learning on this course to be a catalyst for further growth and development. Each of the modules will signpost you to further learning sources and engage you in considering future actions. As an organisation, we recognise the need to continue our own learning in this space and we commit to revisiting the content of these modules regularly. As a practising teacher or leader who’s already invested in, or beginning to consider, decolonising and diversifying your curriculum, we hope that these modules will provide a space for you to reflect on your decision making and approaches, and engage in dialogue with other educators before planning next steps for your practice. What will you learn in this short online course? By the end of this sixth module, you’ll be able to examine how language choices can influence conversations about race and you’ll identify approaches to practise and apply to future conversations about race in your own context. Looking forward to learning
The African Fish Eagle, also known as the African Sea Eagle, is a large bird that is widely found across sub-Saharan Africa, where bodies of water and food sources are abundant. It is the national bird of four African countries: Namibia, Zambia, South Sudan, and Zimbabwe. This bird species is classified under the genus Haliaeetus, the Latin term for “sea eagles.” The African Fish Eagle's close relatives are the Sanford's Sea Eagle, the Bald Eagle, the critically endangered Madagascan Fish Eagle, Pallas's Fish Eagle, and the White-tailed Eagle. Like most fish eagles, the African Fish Eagle is a white-headed species. Its binomial name, Haliaeetus vocifer, was given by the French naturalist François Levaillant, who called this bird “the vociferous one.” Since the population of this bird species appears to be rising continuously across sub-Saharan Africa, the International Union for Conservation of Nature (IUCN) Red List has tagged this sea bird as Least Concern. Its seven levels of scientific classification are as follows: Kingdom: Animalia; Phylum: Chordata; Class: Aves; Order: Accipitriformes; Family: Accipitridae; Genus: Haliaeetus; Species: H. vocifer
The physical characteristics of an African Fish Eagle
This is a relatively large bird, and the female is usually larger than the male, weighing 3.2-3.6 kg against the male's 2-2.5 kg. The species is sexually dimorphic; that is, the two sexes exhibit different physical characteristics beyond their sexual organs. Male African Fish Eagles usually have a wingspan of about 6.6 ft., while females reach about 7.9 ft. With its pure white head, neck, tail, and chest, this bird species is easily recognizable. It also has dark chestnut-brown and black primaries and secondaries. Its tail is short. The cere and feet are yellow, and the eyes are dark brown. Juveniles usually have brown plumage, with paler eyes compared to adults. Their feet have rough soles and powerful talons that can grasp aquatic prey.
Distribution and habitat of African Fish Eagles
As mentioned before, African Fish Eagles are native to sub-Saharan Africa, ranging from Mauritania, Niger, Chad, Mali, Sudan, and northern Eritrea to the western Atlantic Ocean, eastern Indian Ocean, and South Africa. Non-breeding African Fish Eagles can be found in southwestern Africa, central Africa, and some parts of western Africa. These birds frequent freshwater lakes, reservoirs, rivers, river mouths, and lagoons. They are common on the Orange River in South Africa, in the Okavango Delta in Botswana, and on Lake Victoria and Lake Malawi. They also take refuge in grasslands, swamplands, tropical rainforests, and fynbos. They are absent from arid and desert zones, as they need plenty of fish to eat and trees to nest in.
The behavior of African Fish Eagles
This bird species mates for life. Breeding happens once a year, during the dry season when water levels are low. A pair of African Fish Eagles build two or more nests that can be reused for many years; they build their abodes by collecting twigs, pieces of wood, and sticks and situating the nest in a large, tall tree. A female African Fish Eagle lays one to three eggs that are primarily white in color with red speckles. The pair takes turns incubating the eggs. The incubation period lasts an average of 45 days. Chicks fledge at 64-75 days. About 8 weeks after fledging, young African Fish Eagles fly away from their parents. African Fish Eagles are very territorial when it comes to their home turf.
African Fish Eagles are most often seen perched alone, in pairs, or in small groups, although some sightings suggest that these birds congregate in flocks of more than 75 individuals. These birds are also known for their distinctive, loud cry, considered one of the most iconic sounds of Africa. An African Fish Eagle’s diet As its name suggests, the African Fish Eagle’s diet consists mostly of a wide variety of fish. An African Fish Eagle does not submerge its head in the water to catch prey. Instead, it waits for the prey to appear near the surface of the water, snatches it using its strong talons, and flies up to a perch to eat it. Other than fish, it also feeds on flamingos, small turtles, lizards and other small reptiles, crocodile hatchlings, and monkeys. These birds can also steal prey caught by other predatory birds, a behavior called kleptoparasitism.
Carboxylic acids are mainly prepared by the oxidation of a number of different functional groups, as the following sections detail. Oxidation of alkenes Alkenes are oxidized to acids by heating them with solutions of potassium permanganate (KMnO₄) or potassium dichromate (K₂Cr₂O₇). Ozonolysis of alkenes The ozonolysis of alkenes produces aldehydes that can easily be further oxidized to acids. The oxidation of primary alcohols and aldehydes The oxidation of primary alcohols leads to the formation of aldehydes that undergo further oxidation to yield acids. All strong oxidizing agents (potassium permanganate, potassium dichromate, and chromium trioxide) can easily oxidize the aldehydes that are formed. Remember: Mild oxidizing agents such as manganese dioxide (MnO₂) and Tollens’ reagent [Ag(NH₃)₂⁺ OH⁻] are only strong enough to oxidize alcohols to aldehydes. The oxidation of alkyl benzenes Alkyl groups that contain benzylic hydrogens (hydrogens on a carbon α to a benzene ring) undergo oxidation to acids with strong oxidizing agents. For example, t-butylbenzene does not contain a benzylic hydrogen and therefore doesn’t undergo oxidation. Hydrolysis of nitriles The hydrolysis of nitriles, which are organic molecules containing a cyano group, leads to carboxylic acid formation. These hydrolysis reactions can take place in either acidic or basic solutions. The mechanism for these reactions involves the formation of an amide followed by hydrolysis of the amide to the acid. Under acidic conditions, the mechanism follows these steps: 1. The nitrogen atom of the nitrile group is protonated. 2. The carbocation generated in Step 1 attracts a water molecule. 3. The oxonium ion loses a proton to the nitrogen atom, forming an enol. 4. The enol tautomerizes to the more stable keto form, giving the amide. 5. The amide is protonated by the acid, forming a carbocation. 6. A water molecule is attracted to the carbocation. 7. The oxonium ion loses a proton. 8. The amine group is protonated. 9. An electron pair on one of the oxygens displaces the ammonium group from the molecule. The carbonation of Grignard reagents Grignard reagents react with carbon dioxide to yield acid salts, which, upon acidification, produce carboxylic acids. Synthesis of substituted acetic acids via acetoacetic ester Acetoacetic ester (ethyl acetoacetate, CH₃COCH₂CO₂C₂H₅) is an ester formed by the self-condensation of ethyl acetate via a Claisen condensation. The hydrogens on the methylene unit located between the two carbonyl functional groups are acidic due to the electron-withdrawing effects of the carbonyl groups. Either or both of these hydrogens can be removed by reaction with strong bases. The resulting carbanions can participate in typical Sɴ (nucleophilic substitution) reactions that allow the placement of alkyl groups on the chain. Hydrolysis of the resulting product with concentrated sodium hydroxide solution liberates the sodium salt of the substituted acid, and addition of aqueous acid liberates the substituted acid itself. The second hydrogen on the methylene unit of acetoacetic ester can also be replaced by an alkyl group, creating a disubstituted acid. To accomplish this conversion, the monosubstituted product is reacted with a very strong base to create a carbanion. This carbanion can participate in a typical Sɴ reaction, allowing the placement of a second alkyl group on the chain. Hydrolysis using concentrated aqueous sodium hydroxide leads to the formation of the sodium salt of the disubstituted acid. Addition of aqueous acid liberates the disubstituted acid. 
The acid formed has a methyl and an ethyl group in place of two hydrogens of acetic acid and is therefore often referred to as a disubstituted acetic acid. If dilute sodium hydroxide were used instead of concentrated, the product formed would be a methyl ketone. This ketone results because dilute sodium hydroxide is strong enough to hydrolyze the ester functional group but not strong enough to cleave the ketone functional group. Concentrated sodium hydroxide is strong enough to attack both the ester functional group and the ketone functional group and therefore forms the substituted acid rather than the ketone. A reaction between a disubstituted acetoacetic ester and dilute sodium hydroxide thus hydrolyzes only the ester group, giving, after acidification, a β-keto acid. Upon heating, the β-keto acid becomes unstable and decarboxylates, leading to the formation of the methyl ketone. A Claisen condensation of ethyl acetate prepares acetoacetic ester. The Claisen condensation reaction occurs by a nucleophilic addition to an ester carbonyl group, which follows these steps: 1. An α hydrogen on the ester is removed by a base, which leads to the formation of a carbanion that is resonance stabilized. 2. Acting as a nucleophile, the carbanion attacks the carbonyl carbon of a second molecule of ester. 3. A pair of unshared electrons on the alkoxide oxygen moves toward the carbonyl carbon, helping the ethoxy group to leave. Synthesis of substituted acetic acids via malonic ester Malonic ester is an ester formed by reacting an alcohol with malonic acid (propanedioic acid); diethyl malonate has the structure CH₂(CO₂C₂H₅)₂. The hydrogen atoms on the methylene unit between the two carboxyl groups are acidic, like those in acetoacetic ester, and strong bases can remove them. The resulting carbanion can participate in typical Sɴ reactions, allowing the placement of an alkyl group on the chain. A second alkyl group can be placed on the compound by reacting the product formed in the previous step with a very strong base to form a new carbanion, which can likewise participate in a typical Sɴ reaction to place a second alkyl group on the chain. Hydrolysis of the resulting product with concentrated aqueous sodium hydroxide produces the sodium salt of the disubstituted acid, and addition of aqueous acid converts the salt into its conjugate acid. Upon heating, this substituted malonic acid (a 1,3-dicarboxylic acid) becomes unstable and decarboxylates, forming a disubstituted acetic acid. α-Halo acids, α-hydroxy acids, and α,β-unsaturated acids The reaction of aliphatic carboxylic acids with bromine in the presence of phosphorus produces α-halo acids. This reaction is the Hell–Volhard–Zelinski reaction. α-Halo acids can be converted to α-hydroxy acids by hydrolysis. α-Halo acids can be converted to α-amino acids by reacting with ammonia. α-Halo acids and α-hydroxy acids can be converted to α,β-unsaturated acids by dehydrohalogenation and dehydration, respectively.
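As a compact summary, the overall transformations for two of these routes can be written as balanced equations. This is a sketch with a generic R group; reagents and workup are abbreviated, and under basic hydrolysis conditions the nitrile actually delivers the carboxylate salt until acid is added.

```latex
% Summary equations (sketch; R = generic alkyl or aryl group).
% Requires the amsmath package for align* and \xrightarrow.
\begin{align*}
  \text{Carbonation of a Grignard reagent:}\quad
    &\mathrm{RMgBr + CO_2 \longrightarrow RCO_2MgBr}
      \xrightarrow{\;\mathrm{H_3O^+}\;} \mathrm{RCO_2H}\\[4pt]
  \text{Acidic hydrolysis of a nitrile:}\quad
    &\mathrm{RC{\equiv}N + 2\,H_2O + H^+ \longrightarrow RCO_2H + NH_4^+}
\end{align*}
```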
Bronchitis is an inflammation of the lining of your bronchial tubes, which carry air to and from your lungs. Coughing and shortness of breath are the major symptoms, and people who have bronchitis often cough up thickened mucus, which can be discolored. Bronchitis may be either acute or chronic. Often developing from a cold or other respiratory infection, acute bronchitis is very common. Chronic bronchitis, a more serious condition, is a constant irritation or inflammation of the lining of the bronchial tubes, often due to smoking. Acute bronchitis, also called a chest cold, usually improves within a week to 10 days without lasting effects, although the cough may linger for weeks. However, if you have repeated bouts of bronchitis, you may have chronic bronchitis, which requires medical attention. Chronic bronchitis is one of the conditions included in chronic obstructive pulmonary disease (COPD). For either acute bronchitis or chronic bronchitis, signs and symptoms may include: - Cough - Production of mucus (sputum), which can be clear, white, yellowish-gray or green in color; rarely, it may be streaked with blood - Shortness of breath - Slight fever and chills - Chest discomfort If you have acute bronchitis, you might have cold symptoms, such as a mild headache or body aches. While these symptoms usually improve in about a week, you may have a nagging cough that lingers for several weeks. Chronic bronchitis is defined as a productive cough that lasts at least three months, with recurring bouts occurring for at least two consecutive years. If you have chronic bronchitis, you're likely to have periods when your cough or other symptoms worsen. At those times, you may have an acute infection on top of chronic bronchitis. When to see a doctor See your doctor if your cough: - Lasts more than three weeks - Prevents you from sleeping - Is accompanied by fever higher than 100.4 F (38 C) - Produces discolored mucus - Produces blood - Is associated with wheezing or shortness of breath Acute bronchitis is usually caused by viruses, typically the same viruses that cause colds and flu (influenza). Antibiotics don't kill viruses, so this type of medication isn't useful in most cases of bronchitis. The most common cause of chronic bronchitis is cigarette smoking. Air pollution and dust or toxic gases in the environment or workplace also can contribute to the condition. Factors that increase your risk of bronchitis include: - Cigarette smoke. People who smoke or who live with a smoker are at higher risk of both acute bronchitis and chronic bronchitis. - Low resistance. This may result from another acute illness, such as a cold, or from a chronic condition that compromises your immune system. Older adults, infants and young children have greater vulnerability to infection. - Exposure to irritants on the job. Your risk of developing bronchitis is greater if you work around certain lung irritants, such as grains or textiles, or are exposed to chemical fumes. - Gastric reflux. Repeated bouts of severe heartburn can irritate your throat and make you more prone to developing bronchitis. Although a single episode of bronchitis usually isn't cause for concern, it can lead to pneumonia in some people. Repeated bouts of bronchitis, however, may mean that you have chronic obstructive pulmonary disease (COPD). To reduce your risk of bronchitis, follow these tips: - Avoid cigarette smoke. Cigarette smoke increases your risk of chronic bronchitis. 
- Get vaccinated. Many cases of acute bronchitis result from influenza, a virus. Getting a yearly flu vaccine can help protect you from getting the flu. You may also want to consider vaccination that protects against some types of pneumonia. - Wash your hands. To reduce your risk of catching a viral infection, wash your hands frequently and get in the habit of using alcohol-based hand sanitizers. - Wear a surgical mask. If you have COPD, you might consider wearing a face mask at work if you're exposed to dust or fumes, and when you're going to be among crowds, such as while traveling. During the first few days of illness, it can be difficult to distinguish the signs and symptoms of bronchitis from those of a common cold. During the physical exam, your doctor will use a stethoscope to listen closely to your lungs as you breathe. In some cases, your doctor may suggest the following tests: - Chest X-ray. A chest X-ray can help determine if you have pneumonia or another condition that may explain your cough. This is especially important if you ever were or currently are a smoker. - Sputum tests. Sputum is the mucus that you cough up from your lungs. It can be tested to see if you have illnesses that could be helped by antibiotics. Sputum can also be tested for signs of allergies. - Pulmonary function test. During a pulmonary function test, you blow into a device called a spirometer, which measures how much air your lungs can hold and how quickly you can get air out of your lungs. This test checks for signs of asthma or emphysema. Most cases of acute bronchitis get better without treatment, usually within a couple of weeks. Because most cases of bronchitis are caused by viral infections, antibiotics aren't effective. However, if your doctor suspects that you have a bacterial infection, he or she may prescribe an antibiotic. In some circumstances, your doctor may recommend other medications, including: - Cough medicine. If your cough keeps you from sleeping, you might try cough suppressants at bedtime. - Other medications. If you have allergies, asthma or chronic obstructive pulmonary disease (COPD), your doctor may recommend an inhaler and other medications to reduce inflammation and open narrowed passages in your lungs. If you have chronic bronchitis, you may benefit from pulmonary rehabilitation — a breathing exercise program in which a respiratory therapist teaches you how to breathe more easily and increase your ability to exercise. To help you feel better, you may want to try the following self-care measures: - Avoid lung irritants. Don't smoke. Wear a mask when the air is polluted or if you're exposed to irritants, such as paint or household cleaners with strong fumes. - Use a humidifier. Warm, moist air helps relieve coughs and loosens mucus in your airways. But be sure to clean the humidifier according to the manufacturer's recommendations to avoid the growth of bacteria and fungi in the water container. - Consider a face mask outside. If cold air aggravates your cough and causes shortness of breath, put on a cold-air face mask before you go outside. You're likely to start by seeing your family doctor or a general practitioner. If you have chronic bronchitis, you may be referred to a doctor who specializes in lung diseases (pulmonologist). What you can do Before your appointment, you may want to write a list that answers the following questions: - Have you recently had a cold or the flu? - Have you ever had pneumonia? - Do you have any other medical conditions? 
- What drugs and supplements do you take regularly? - Are you exposed to lung irritants at your job? - Do you smoke or are you around tobacco smoke? You might also want to bring a family member or friend to your appointment. Sometimes it can be difficult to remember all the information provided. Someone who accompanies you may remember something that you missed or forgot. If you've ever seen another physician for your cough, let your present doctor know what tests were done, and if possible, bring the reports with you, including results of a chest X-ray, sputum culture and pulmonary function test. What to expect from your doctor Your doctor is likely to ask you a number of questions, such as: - When did your symptoms begin? - Have your symptoms been continuous or occasional? - Have you had bronchitis before? Has it ever lasted more than three weeks? - In between bouts of bronchitis, have you noticed you are more short of breath than you were a year earlier? - Do your symptoms affect your sleep or work? - Do you smoke? If so, how much and for how long? - Have you inhaled illicit drugs? - Do you exercise? Can you climb one flight of stairs without difficulty? Can you walk as fast as you used to? - Does anything improve or worsen your symptoms? - Does cold air bother you? - Do you notice that you wheeze sometimes? - Have you received the annual flu shot? - Have you ever been vaccinated against pneumonia? If so, when?
Linear And Quadratic Equations Worksheet – Expressions and Equations Worksheets are created to help children learn faster and more efficiently. The worksheets include interactive exercises and challenges based on the order of operations, making it easy for children to master both simple and complex concepts in a short time. You can download these free documents in PDF format to help your child learn and practice math equations. These resources are useful for students in the 5th through 8th grades. Download Free Linear And Quadratic Equations Worksheet The two-step word problems are built around fractions and decimals, and each worksheet contains ten problems. You can access them through any online or print resource. These worksheets are a fantastic opportunity to practice rearranging equations, and they help students understand equality and inverse operations. They are also well suited to students who struggle to calculate percentages: there are three kinds of questions to choose from, including one-step problems with whole or decimal numbers and word-based approaches to fractions or decimals, with ten equations on each page. These worksheets can also be used to learn fraction calculation and other concepts in algebra. Some of the worksheets let you choose from three kinds of problems: numerical, word-based, or a combination of both. The type of problem matters, as each presents a different kind of challenge, and every page is filled with ten problems, which makes the sets an excellent resource for students from 5th to 8th grade. These worksheets help students understand the relationships between variables and numbers, let them test their skills at solving polynomial equations, and show how to apply equations to problems in everyday life. They also introduce the various kinds of mathematical problems and the symbols used to represent them, which can be helpful for children in the early grades as they master graphing and solving equations. The worksheets are ideal for practicing with polynomial variables and will help you factor and simplify them. There are plenty of worksheets you can use to teach kids about equations, and the most effective way to learn is to complete the work yourself. A variety of worksheets cover quadratic equations, with several worksheets for each level of equation at every stage. These worksheets let you practice solving problems up to the fourth level; once you have completed a level, you can move on to different kinds of equations or continue working on problems at the same level, for instance by revisiting the same problem in an extended form.
Bermudas Islands Crow (Corvus sp.) “Neither hath the aire for her part been wanting with due supplies of many sorts of Fowles, as the gray and white Hearne, the gray and greene Plover, some wilde Ducks and Malards, Coots and Red-shankes, Sea-wigions, Gray-bitterns, Cormorants, numbers of small Birds like Sparrowes and Robins, which have lately beene destroyed by the wilde Cats, Wood-pickars, very many Crowes, which since this Plantation are kild, the rest fled or seldome seene except in the most uninhabited places, from whence they are observed to take their flight about sun set, directing their course towards the North-west, which makes many coniecture there are some more Ilands not far off that way.” This is part of an account from 1623 that reports some of the bird life inhabiting the Bermudas Islands at that time. Given the remote location of the islands, the crows mentioned here very likely were of an endemic form, whether a full species or a subspecies; the text even tells us how this crow population went extinct: the birds were killed by the British settlers, who considered them a pest to their crops. John Smith: The Generall Historie of Virginia, New-England, and the Summer Isles: with the Names of the Adventurers, Planters, and Governours from their first beginning, An: 1584. to this present 1624. With the Procedings of Those Severall Colonies and the Accidents that befell them in all their Journyes and Discoveries. Also the Maps and Descriptions of all those Countryes, their Commodities, people, Government, Customes, and Religion yet knowne. Divided into Sixe Bookes. By Captaine Iohn Smith, sometymes Governour in those Countryes & Admirall of New England. London: printed by I. D. and I. H. for Michael Sparkes 1624
The human mind can rapidly absorb and analyze new information as it flits from thought to thought. These quickly changing brain states may be encoded by synchronization of brain waves across different brain regions, according to a new study. The researchers found that as monkeys learn to categorize different patterns of dots, two brain areas involved in learning — the prefrontal cortex and the striatum — synchronize their brain waves to form new communication circuits. “We’re seeing direct evidence for the interactions between these two systems during learning, which hasn’t been seen before. Category-learning results in new functional circuits between these two areas, and these functional circuits are rhythm-based, which is key because that’s a relatively new concept in systems neuroscience,” says Earl Miller, the Picower Professor of Neuroscience at MIT and senior author of the study, which appears in the June 12 issue of Neuron. There are millions of neurons in the brain, each producing its own electrical signals. These combined signals generate oscillations known as brain waves, which can be measured by electroencephalography (EEG). The research team focused on EEG patterns from the prefrontal cortex — the seat of the brain’s executive control system — and the striatum, which controls habit formation. The phenomenon of brain-wave synchronization likely precedes the changes in synapses, or connections between neurons, believed to underlie learning and long-term memory formation, Miller says. That process, known as synaptic plasticity, is too time-consuming to account for the human mind’s flexibility, he believes. “If you can change your thoughts from moment to moment, you can’t be doing it by constantly making new connections and breaking them apart in your brain. Plasticity doesn’t happen on that kind of time scale,” says Miller, who is a member of MIT’s Picower Institute for Learning and Memory. “There’s got to be some way of dynamically establishing circuits to correspond to the thoughts we’re having in this moment, and then if we change our minds a moment later, those circuits break apart somehow. We think synchronized brain waves may be the way the brain does it.” The paper’s lead author is former Picower Institute postdoc Evan Antzoulatos, who is now at the University of California at Davis.
Insects, as well as plants and several species of animals, communicate with the world and with each other through biocommunication, which occurs most often through odor, sight, touch, or hearing. Thanks to biocommunication, insects are able to locate host plants, choose places to lay their eggs, identify their prey, and recognize their sexual partners. The intelligent use of biocommunication allows the behavioral control of insects, making it possible to prevent them from becoming pests of agricultural crops. Because biocommunicators are substances already found in nature, they are safe and have low environmental impact. They can be used for an exclusive purpose, acting on one species without harming the others. Biocommunicators also have the advantage of triggering an immediate and intense reaction: small amounts are enough to control large populations in a field. HOW THEY WORK Insect antennae carry sensory structures called sensilla, which capture the odors present in the environment. The sensilla of each species react only to the smells important for the life of that species, ignoring the others. Pheromones, for example, are used by female moths to signal to males that they are available for copulation. This biocommunication is essential for the survival of insects and plants, and thanks to the scientific knowledge already acquired, it is possible to use it to control insect populations and protect crops.
One of the key benefits of using the Singleton design pattern is that it helps to prevent the creation of multiple instances of a class, which can lead to memory and performance issues. This is particularly important in situations where the class is responsible for managing shared resources, such as database connections or system resources. By ensuring that only one instance of the class exists, we can avoid conflicts and ensure that the shared resources are managed efficiently. There are several different ways to implement the Singleton design pattern in a software system. One common approach is to use a static method that returns the single instance of the class. This method can be called by any other class in the system, and will return the same instance each time it is called. Another approach is to use a private constructor and a static member variable to store the single instance of the class. This approach ensures that the class cannot be instantiated from outside of the class itself, and provides a convenient way for other classes to access the single instance of the class. An example of the Singleton design pattern in Swift, using a private constructor and a static member variable to store the single instance of the class, is sketched below. To use the logger, you call the log function on the shared instance, as the usage snippet after the sketch shows. The Singleton design pattern is a useful tool for managing shared resources in a software system. By ensuring that only one instance of a class exists, and providing a convenient way for other classes to access it, the Singleton design pattern can help to improve the performance and maintainability of a software system.
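The original code blocks did not survive in this copy of the article, so the following is a minimal sketch of what such a logger might look like. The `Logger` name, the `log(_:)` method, and the timestamp format are illustrative assumptions, not the article's exact code.

```swift
import Foundation

// Hypothetical Singleton logger (a sketch, not the article's original code).
final class Logger {
    // The static member variable holding the single instance.
    // Swift initializes static stored properties lazily and thread-safely.
    static let shared = Logger()

    // The private constructor prevents instantiation from outside the
    // class, so `Logger.shared` is the only instance that can ever exist.
    private init() {}

    // Writes a timestamped message to standard output.
    func log(_ message: String) {
        let timestamp = ISO8601DateFormatter().string(from: Date())
        print("[\(timestamp)] \(message)")
    }
}
```

Usage is then a single call on the shared instance from anywhere in the system:

```swift
Logger.shared.log("Application started")   // every caller gets the same instance
```

Note that in Swift a `static let` stored property effectively combines the two approaches the article describes: it is the static member variable that stores the instance, and accessing it plays the role of the static method that returns that instance.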
A windmill is a mill that converts the energy of wind into rotational energy by means of vanes called sails or blades, according to Wikipedia. From a physics perspective, it converts the energy of the moving air into kinetic energy of its blades, setting them in motion. When it rotates, each point on a blade undergoes circular motion, and the faster it rotates, the more kinetic energy it has. Windmills have a very long history. Centuries ago, windmills were usually used to mill grain and pump water. Modern windmills are used to produce electricity and to pump water for land drainage or to extract groundwater. Windmills are therefore an important sign that human beings are making good use of renewable green energy. Once I asked myself: “Well, the wind provides energy and forces the windmill to rotate. But in what way can we change how fast it rotates?” THE RESEARCH QUESTION Last week, I found a windmill in my old toy box. I blew the dust off it, and it rotated really fast. But after I stood it on my table, it rotated slowly and peacefully when I blew at it from my bed. I thought of the question I had wondered about for many years, since I was a child: how does the distance from the origin of the wind to the windmill affect its rotational speed? I could guess the answer, but I needed to prove it, so I designed an experiment to examine the specific relationship. Since I only want to know how the distance affects the rotational speed, I have to eliminate other factors that may influence the results of the experiment. For example, the original speed of the wind must be constant, and the angle made by the direction of the wind and the front of the windmill must be constant too. Other factors, such as air resistance and the external temperature, must also be kept unchanged. In this experiment, the independent variable is the distance from the origin of the wind to the windmill, and the dependent variable is the rotational speed of the windmill. I change the independent variable, the distance, and examine how the dependent variable, the rotational speed, changes in response. The purpose of this experiment is to test the relationship between the distance from the origin of the wind to the windmill and the rotational speed of the windmill. I prepared all of the following equipment for this experiment: - To make sure the original speed of the wind is constant, I need a hair drier, always set to the LOW power output. (Though I do not know the exact speed of the wind, that does not matter, because it is held constant rather than varied in this experiment.) - To measure the distance from the hair drier to the windmill, I need a 1-meter-long ruler with clear scales on it. - I need a motion sensor to measure the rate of rotation of the windmill. - Two retort stands are needed to fix the positions of all the equipment. - And of course, I have a colorful toy windmill with four blades. - First, I attach the windmill to the first retort stand. - Then I set up the sensor on the second retort stand and make sure that the four blades of the windmill can pass the sensor as they rotate without touching it. - After that, I attach the ruler to the first stand in the horizontal direction. The ruler must be at a right angle to the rod of the stand. - In addition, I need to make sure the front opening of the hair drier (where the wind comes from) points exactly at the mid-point of the four blades so that the wind direction does not change. 
- Finally, I power on the hair drier, set it to LOW power to make sure the wind speed is unchanged, and change the distance by hand according to the readings on the ruler. - I choose a distance every 10 centimeters: the distances I pick are 20 cm, 30 cm, 40 cm, 50 cm, 60 cm and 70 cm. - I measure the rotational speed 3 times for the same distance and then use the average; repeating trials makes my data and results more reliable. The sensor is connected to the Vernier LabPro interface, which is in turn connected to the computer. The Logger Pro software automatically detects the motion sensor and displays graphs of rotational speed against time. Mean rotational speed = (V₁ + V₂ + V₃) ÷ 3 For example, the mean rotational speed of the 0.20 m trial is (0.756 + 0.760 + 0.765) ÷ 3 = 0.760 Uncertainty = (Vmax – Vmin) ÷ 2 For example, the uncertainty of the 0.30 m trial is (0.688 – 0.594) ÷ 2 = 0.047 The mean rotational speed is graphed against the distance in the following diagram: And the ln(mean speed of rotation), or lnV, is graphed against the distance in the following diagram: From the first graph plotted, we can deduce that the rotational speed of a windmill has an exponential relationship with the distance from the origin of the wind to the windmill: as the distance increases, the rotational speed decreases. If we graph the ln value of the average rotational speed against distance, we find that lnV has a linear relationship with the distance, which also confirms the exponential relationship (the model is written out explicitly at the end of this report). Though the data does show an exponential relationship, it is not accurate enough. While collecting the rotational speed data, we found that, for the same distance, the rotational speed fluctuates a lot: every trial differs noticeably from the others at the same distance. Because we do not have any equipment to fix the position of the hair drier on the ruler, we have to hold it by hand. This means we cannot make sure that the front opening of the drier (where the wind comes from) points precisely at the mid-point of the four blades. In addition, human hands may shake a lot during the experiment, so we cannot guarantee that the direction of the wind is always constant, and the rotational speed may change due to changes in wind direction. Furthermore, I did not take the distance from the wind origin to the opening of the hair drier into consideration as part of the whole distance; instead, I used only the distance from the opening to the windmill when calculating. This may have a slight influence on the results of my experiment. I can make some improvements to make the data in the experiment more precise: - I can do the experiment in a perfectly windless indoor room to avoid fluctuations caused by stray airflow. - I should use equipment to fix the position of the hair drier during the experiment, so that the wind direction does not change and the front opening of the drier (where the wind comes from) points precisely at the mid-point of the four blades. - I should take the distance from the wind origin to the opening of the hair drier into consideration as part of the whole distance, instead of using only the distance from the opening to the windmill when calculating. 
Therefore, the real distance should include both the distance from the wind origin to the opening of the hair drier and the distance from the opening to the windmill. - I can also do further experiments in the future to find out how other factors, such as wind direction, influence the rotational speed of the windmill.
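To make the deduced relationship explicit, the model and its linearization can be written as follows. This is a sketch: V₀ and k are assumed fit parameters (the speed extrapolated to zero distance and a decay constant), not quantities measured directly in this report.

```latex
% Assumed exponential model: V = rotational speed, d = distance,
% V_0 and k are hypothetical fit parameters introduced for illustration.
\[
  V(d) = V_0 \, e^{-k d}
  \qquad \Longrightarrow \qquad
  \ln V = \ln V_0 - k\,d
\]
% A straight line in the (d, ln V) graph is therefore consistent with
% the exponential form: its slope is -k and its intercept is ln V_0.
```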