Dinosaur tracks are fossilized footprints or other impressions that are typically found in sedimentary rock. Ichnology (from the Greek ichnos, meaning footprint) is the study of tracks, trails, burrows and other traces left by living animals. Dinosaur track fossils (ichnofossils) are found at multiple locations around the world, and their general characteristics offer strong evidence that they were formed under catastrophic conditions. Furthermore, dinosaur tracks have been found together with impressions made by organisms that are believed to have evolved millions of years after the dinosaurs went extinct. Unconfirmed human-like tracks have even been found alongside, or within the same depositional plain as, those made by dinosaurs.

Within North America there are several "megatrack" sites, that is, track sites with more than 100 dinosaur tracks. At many of these sites, like the Red Fleet Megatrack Site in Utah, the dinosaur tracks predominantly head northeast, suggesting the animals were fleeing in a common direction. In August 2007, Ian Juby studied the Red Fleet site and counted 113 tracks above water at that time. Of those, 111 were heading northeast, 7 were traveling southwest (the exact opposite direction) and two were heading south. These tracks were on six different stratigraphic levels. These characteristics have been interpreted as the result of the dinosaurs moving in herds; however, modern herds leave meandering trail systems. Furthermore, the tracks always run in the same direction through several strata layers that are supposed to represent different geological ages. Some sites have tracks going in all directions, such as Dinosaur State Park and the Paluxy River where Dinosaur Valley State Park is located. However, even in these cases, the dinosaur tracks always follow straight trails.

Dinosaur tracks have several other consistent oddities about them. Sauropod tracks are rare; the predominant tracks found are tridactyl, like those at Red Fleet. Juvenile tracks are also rare; Glen Rose is one place that has both juvenile tracks and sauropod tracks. Usually the tracks are found in multiple layers, with no evidence of any long period of time in between the layers (i.e., no paleosols).

Shallow Sea Model: To fit these tracks into the uniformitarian paradigm, it is suggested that the limestone layers were a shallow sea bed or shoreline. Dinosaurs would walk through the mud, which then was slowly covered and hardened. There are several problems with this model, some of which will be discussed under the other models presented. One that will be mentioned here is that even gentle waves destroy footprints in mud, even in shallow water. No footprints are being preserved anywhere in the world today in this way; the present therefore cannot be the key to the past. Secondly, the tracks are often in limestone, which either hardens quickly (like cement) or doesn't harden at all. Lastly, this model does not explain why the tracks tend to run in straight lines and in common directions.

Tidal Bore Model: In Glen Rose, the limestone containing the tracks is quite thick, and had to have hardened quite quickly. The limestone and the marls beneath it contain many fossil clams, buried alive in the closed position, yet never in the living position; they have been tumbled. This indicates strong depositional currents, not a gentle, shallow sea bed.
The Paluxy, like many other track sites, has multiple layers of dinosaur tracks. The Bible is quite clear that the flood was forty days upon the earth before Noah's Ark floated (Genesis 7:17-18), and the water kept rising after that. So it would appear that the onset of the flood took at least a month and a half to fully encroach on the land. During this time the tides would have been active, growing stronger day by day as less and less land was exposed to hinder them. As a result, twice daily the water would encroach higher upon the land, then recede. During high tide, a tidal bore would deposit marls and limestone, such as we see at the Paluxy. During low tide, the dinosaurs would forage for food, then return to high ground at high tide. In the case of the Paluxy, this would explain why the dinosaur tracks there predominantly run away from and towards the Llano Uplift; this was probably the high ground.

High Speed Floodwaters: Dr. John Baumgardner and Daniel Barnett were performing computer modeling of a global flood scenario. Serendipitously, they found that the flood waters would break down into giant, high-speed swirls driven by the Coriolis effect. Such extreme speeds were achieved that, theoretically, there could be patches of land in the centers of the swirls where dinosaurs could have been trapped, making tracks in the freshly laid sediments.

Tracks that resemble human footprints have been found in many locations, such as the former Soviet Union, where scientists reported a layer of rock containing more than 2,000 dinosaur footprints. Reports of human-like tracks at the Paluxy River and at two sites in Dinosaur Valley State Park (the 'Taylor Site' and the 'Shelf Site') triggered a heated debate. But further analysis revealed several problems with the interpretation. The size of the footprints and the length of the stride (1 meter) were much greater than those of modern humans, and the anatomical ratios would not positively identify them with modern human prints. Creation geologist Emil Silvestru considers it possible that the ichnofossils found at the Paluxy River were from a dinosaur and that the human-like appearance was the result of mud collapse. Silvestru analyzed similar findings in the Upper Cretaceous Dunvegan Formation of British Columbia, where human-like tracks were found by local creationists in an area that had previously yielded tracks of theropods as well as ankylosaurs. After further investigation, however, it was concluded that the tracks were eroded metatarsal dinosaur footprints of an ornithopod. At present, a confirmed human footprint in the same strata as those of dinosaurs remains elusive.

Unique and Unusual Tracks

In a few places, dinosaur tracks have been found in coal. Two primary places are the coal mines in Price, Utah, and at Grande Cache, Alberta. The photo shows an actual fossil footprint from the Price coal mine. The vegetation was laid down (quite thick) and then a dinosaur trod upon it, making an impression of its feet. Sediments then covered the vegetation and hardened into rock, the vegetation turned into coal, and now, when the coal is removed, solid rock "dinosaur tracks" can fall out of the ceiling of the mine. At the Tyrrell Museum in Drumheller, Alberta, there are numerous fossil dinosaur tracks on display. One of the displays is a large slab with four dinosaur tracks on it, numerous "burrows", and wave ripples.
The dinosaur was walking at an oblique angle to the current, and the influence of the current consequently altered the orientation of the dinosaur's feet. This reveals a number of things: a) the sediments were soft and influenced by the water; b) the sediments must have hardened very shortly after these tracks were made, or else the tracks would have been obliterated by the current; c) the same also applies to the burrows within these layers: the animals had to have burrowed out quickly, in order to escape the mud before it hardened. There are many impressions still within the rock that appear to be animals that did not escape in time.

- Dinosaur Valley State Park, Glen Rose, Texas
- Dinosaur State Park, Rocky Hill, Connecticut
- Red Fleet Megatrack site, Utah, near Dinosaur National Monument
- There are two dinosaur track sites in the immediate vicinity of Arches National Park and Moab, Utah. The park has directions and pamphlets to both.
- Tuba City, Arizona: take Highway 160 and look for the signs. Not far from the Grand Canyon.
- The Tyrrell Museum in Drumheller, Alberta, is well worth seeing, and it happens to have a few dinosaur tracks on display, including the ones described above.
- Red Gulch track site, near Greybull, Wyoming: see the BLM website for directions.

Recording Track Tips
- Photographing a depression in rock is very difficult, so visit the site in early morning or early evening to get cross-shadows.
- A diffuser for sunlight helps photographs.
- If you can't take a photograph under proper lighting conditions, try getting low to the ground to take a photograph "across" the track.

References
- Brown, Walt. "What about the Dinosaurs?" In the Beginning: Compelling Evidence for Creation and the Flood, 8th edition (2008).
- Silvestru, Emil. "Human and dinosaur fossil footprints in the Upper Cretaceous of North America?" Journal of Creation 18(2):114-120, August 2004.
What Is an Electric Dipole?

The electric dipole is an important entity in the study of the electric field in dielectric media, which in turn is important for the analysis of electromagnetic optical waves in waveguides. A system of two equal and opposite charges q separated by a small distance L is called an electric dipole.

Electric Dipole Moment

An electric dipole's strength and orientation are described by the dipole moment p, a vector that points from the negative charge -q toward the positive charge +q and has the magnitude p = qL. Equivalently, p = qL (as a vector equation), where L is the displacement vector pointing from the negative charge to the positive charge. The electric dipole moment is a measure of the separation of positive and negative electrical charges in a system of electric charges, that is, a measure of the charge system's overall polarity. The SI unit is the coulomb-meter (C·m); however, the most commonly used unit is the debye (D).

The electric field on the axis of the dipole, at a point a great distance |x| away, is in the same direction as p and has the magnitude

E = (1/(4πε₀)) · 2p/|x|³.

At a point far from a dipole in any direction, the magnitude of the electric field is proportional to the magnitude of the dipole moment and decreases with the cube of the distance. In a plot of the dipole's field, the equipotential lines are drawn as solid lines and the electric field lines as dashed lines. The electric field lines are rotationally symmetric about the z-axis (independent of the azimuthal angle φ) and are everywhere normal (perpendicular) to the equipotential lines.

Equipotential and Electric Field Lines of an Electric Dipole
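To make the far-field behaviour concrete, here is a minimal numerical sketch (not from the original article; the charge, separation, and observation distances are illustrative values) comparing the exact on-axis field of the two point charges with the dipole approximation E = 2kp/|x|³:

```python
import numpy as np

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def dipole_field_on_axis(q, L, x):
    """Exact on-axis field of two point charges +q and -q separated
    by L, evaluated at distance x from the dipole centre."""
    k = 1 / (4 * np.pi * EPS0)
    return k * q / (x - L / 2)**2 - k * q / (x + L / 2)**2

q, L = 1e-9, 1e-3          # illustrative: 1 nC charges separated by 1 mm
p = q * L                  # dipole moment magnitude, C*m
for x in [0.1, 0.2, 0.4]:  # observation distances in metres, all >> L
    exact = dipole_field_on_axis(q, L, x)
    approx = (1 / (4 * np.pi * EPS0)) * 2 * p / x**3
    print(f"x={x} m  exact={exact:.4e} V/m  2kp/x^3={approx:.4e} V/m")
```

The two columns agree to many digits once x is much larger than L, and doubling x reduces the field by a factor of eight, confirming the 1/|x|³ falloff stated above.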
A new analysis of comet ice and dust collected by Rosetta, a spacecraft launched by the European Space Agency, finds that comets made a significant contribution to Earth's atmosphere. Bernard Marty (Centre de Recherches Pétrographiques et Géochimiques, France), a member of the DCO Reservoirs and Fluxes Community and Scientific Steering Committee, in collaboration with 29 other researchers from six countries, made a detailed analysis of the trace gas xenon in the cloud surrounding comet 67P/Churyumov-Gerasimenko, sampled by Rosetta. In a new paper in the journal Science, the group reports that, based on the characteristics detected in 67P's xenon, they estimate that about 22% of the xenon in Earth's atmosphere came from comets. Considering that comets are also rich in organic carbon, these findings suggest that they may have carried substantial amounts of carbon and other volatile compounds to the early Earth.

Rosetta was the first spacecraft to orbit a comet, giving researchers the opportunity to monitor trace gases and volatile components like carbon, nitrogen, and water. The volatiles are trapped in icy regions of the comet but, when heated by the sun's energy, they change directly from a solid into a vapor in a process called sublimation. These gases form a cloud called the coma, which the orbiting spacecraft sampled. The researchers could then identify the collected particles by their mass, using an onboard high-resolution mass spectrometer called ROSINA (Rosetta Orbiter Spectrometer for Ion and Neutral Analysis).

The researchers took a special interest in the different isotopes of the rare noble gas xenon. Isotopes of an element are atoms whose nuclei contain the same number of protons but different numbers of neutrons; different isotopes have different masses but identical chemical properties. Xenon in Earth's atmosphere contains smaller proportions of "heavy" isotopes with additional neutrons than xenon in the solar wind or in meteorites, which scientists have struggled to explain. "We were interested in xenon isotopes because it has nine isotopes so it's potentially very rich in terms of information," said Marty, "but it's the least abundant noble gas." The low levels of xenon in the coma required close and extended sampling of the comet. The researchers worked with the flight engineers to establish a flight path that placed the spacecraft within five to 10 kilometers of the four-kilometer-wide comet for three weeks. "That's always a little bit risky," said Marty. "The flight engineers were not very happy."

The researchers then compared the isotope profile of the comet to Earth's atmosphere, the solar wind, and meteorites. They saw that the heavy xenon in Earth's atmosphere can be explained by the addition of cometary xenon, which makes up more than one fifth of the xenon in the atmosphere. "The comet contains an isotope signal that fits quite well with atmospheric xenon," said Marty. "We could establish a genetic link between atmospheric xenon and cometary xenon." As described in an earlier paper by the group, the comet also contained the amino acid glycine, and the precursor molecules methylamine and ethylamine, which can serve as building blocks for life. Based on the new paper's estimates of cometary xenon in the atmosphere, Marty roughly calculates that comets may have contributed organic carbon in amounts comparable to the organic carbon currently present in the biosphere.
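An estimate like "about 22%" comes from an isotope mass balance between two end-members. As a rough illustration of the arithmetic only (the ratio values below are invented for the example and are not the paper's measurements), a two-end-member mixing calculation looks like this:

```python
# Hypothetical two-end-member mixing sketch. The ratio values below are
# ILLUSTRATIVE numbers, not data from Marty et al.

def cometary_fraction(r_atm, r_comet, r_solar):
    """Fraction f of cometary xenon satisfying the mass balance
    r_atm = f * r_comet + (1 - f) * r_solar for one isotope ratio."""
    return (r_atm - r_solar) / (r_comet - r_solar)

# Invented isotope-ratio values (e.g. a heavy/light xenon ratio, arbitrary units):
r_solar, r_comet, r_atm = 2.00, 1.20, 1.824
f = cometary_fraction(r_atm, r_comet, r_solar)
print(f"implied cometary fraction: {f:.0%}")  # 22% with these illustrative numbers
```

In practice the study fits all nine xenon isotopes simultaneously, but the principle is the same: the atmospheric composition is expressed as a weighted mixture of the cometary and solar end-members, and the weight is the cometary fraction.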
In future work, Marty plans to build a comet in the lab by simulating cometary ice and irradiating it to release the volatiles trapped inside. He wants to investigate the processes of cometary formation and to observe how sublimation affects the isotope profiles of the released gases. The Rosetta spacecraft ended its mission in 2016 by landing on 67P/Churyumov-Gerasimenko, but Marty hopes that in the future a spacecraft will return comet samples to Earth. Though this type of mission will be complicated and expensive, the samples will be invaluable for illuminating the prebiotic materials from comets that may have led to life on Earth. Both NASA and the Japan Aerospace Exploration Agency (JAXA) are considering such a mission, said Marty, but none is currently scheduled. The Rosetta mission poster showing the spacecraft and its deployment of the Philae lander to comet 67P/Churyumov-Gerasimenko. Credit: ESA/ATG medialab; Comet image: ESA/Rosetta/Navcam
The African Fish Eagle, also known as the African Sea Eagle, is a large bird found widely across sub-Saharan Africa wherever bodies of water and food sources are abundant. It is the national bird of four African countries: Namibia, Zambia, South Sudan, and Zimbabwe. The species is classified in the genus Haliaeetus, whose name derives from the Greek for "sea eagle." The African Fish Eagle's close relatives are Sanford's Sea Eagle, the Bald Eagle, the critically endangered Madagascan Fish Eagle, Pallas's Fish Eagle, and the White-tailed Eagle. Like most fish eagles, the African Fish Eagle is a white-headed species. Its binomial name, Haliaeetus vocifer, goes back to the French naturalist François Levaillant, who called this bird "the vociferous one." Since the population of this species appears to be rising steadily across sub-Saharan Africa, the International Union for Conservation of Nature (IUCN) Red List classifies it as Least Concern. Its seven levels of scientific classification are as follows:

Kingdom: Animalia
Phylum: Chordata
Class: Aves
Order: Accipitriformes
Family: Accipitridae
Genus: Haliaeetus
Species: H. vocifer

The physical characteristics of an African Fish Eagle

This is a relatively large bird, and the species is sexually dimorphic: the two sexes differ in physical characteristics beyond their sexual organs. A female African Fish Eagle is usually larger than a male, weighing 3.2-3.6 kg against the male's 2-2.5 kg. Males usually have a wingspan of 6.6 ft, while females reach 7.9 ft. With its pure white head, neck, tail, and chest, this species is easily recognizable. Its body is a dark chestnut brown, and its primaries and secondaries are black. Its tail is short, the cere and feet are yellow, the eyes are dark brown, and the face is featherless. Juveniles usually have brown plumage, with paler eyes than adults. The feet have rough soles and powerful talons that can grasp slippery aquatic prey.

Distribution and habitat of African Fish Eagles

As mentioned before, African Fish Eagles are native to sub-Saharan Africa. Their range extends from Mauritania, Niger, Chad, Mali, Sudan, and northern Eritrea in the north down to South Africa, and from the Atlantic coast in the west to the Indian Ocean coast in the east. Non-breeding African Fish Eagles can be found in southwestern Africa, central Africa, and parts of western Africa. These birds frequent freshwater lakes, reservoirs, rivers, river mouths, and lagoons. They are common along the Orange River in South Africa, in the Okavango Delta in Botswana, and at Lake Victoria and Lake Malawi. They also take refuge in grasslands, swamplands, tropical rainforests, and fynbos. They are absent from arid and desert zones, as they need plenty of fish to eat and trees to nest in.

The behavior of African Fish Eagles

This species mates for life. Breeding happens once a year, during the dry season when water levels are low. A pair of African Fish Eagles builds two or more nests that can be reused for many years, collecting twigs, sticks, and pieces of wood and placing them in a large, tall tree. The female lays one to three eggs that are primarily white with red speckles. The pair takes turns incubating the eggs, and the incubation period lasts an average of 45 days. Chicks fledge at 64-75 days, and about eight weeks after fledging the young African Fish Eagles fly away from their parents. African Fish Eagles are very territorial when it comes to their home turf.
You will often see a bird perched alone, in pairs, or in small flocks, although some sightings suggest that these birds congregate in flocks of more than 75 individuals. These birds are also known for their very distinct, loud cry, considered one of the most iconic sounds of Africa.

An African Fish Eagle's diet

As its name suggests, the African Fish Eagle's diet usually consists of a wide variety of fish. An African Fish Eagle does not submerge its head in the water to catch prey. Instead, it waits for the prey to appear near the surface of the water, snatches it with its strong talons, and flies up to a perch to eat. Other than fish, it also feeds on flamingos, small turtles, lizards and other small reptiles, crocodile hatchlings, and monkeys. These birds also steal prey caught by other predatory birds, a behavior called kleptoparasitism.
A small upheaval in the field of archaeology: the settlement of North America dates back at least 30,000 years, roughly twice as long ago as previously estimated, according to archaeological research whose results were published on Wednesday. The artifacts collected, including 1,900 stone tools, demonstrate a human occupation of Chiquihuite Cave (in northern Mexico) going back as far as 33,000 years and lasting some 20,000 years, according to two studies published in the journal Nature. "Our research provides new evidence of an early human presence in the Americas, the last continent to be occupied by humans," archaeologist Ciprian Ardelean, lead author of one of the two studies, told Agence France-Presse. The oldest specimens found in this high-altitude cave, which has been excavated since 2012, have been radiocarbon (carbon-14) dated to between 33,000 and 31,000 years ago. "They are few in number, but they are there," commented the researcher from the Universidad Autónoma de Zacatecas, Mexico. They reveal a previously unknown lithic industry of stone tools cut into thin flakes. "Either denial or strong approval": although no human bones or DNA have been found at the site, "it is likely that humans used it as a relatively fixed base, probably during recurring seasonal events in the context of broader migration movements," the study says. The origins of the first occupants of the Americas are hotly debated among anthropologists and archaeologists. For decades, the most widely accepted thesis placed the first settlement at about 13,000 years ago, corresponding to the period known as "Clovis," long regarded as the first American culture and the source of the ancestors of Native Americans. This Clovis-first theory has been called into question over the past 20 years by new discoveries that pushed back the age of the earliest settlements, but only to about 16,000 years. The results of this new research are likely to be keenly contested. "This happens as soon as someone finds sites older than 16,000 years: the first reaction is either denial or strong approval," said the researcher.
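For readers curious how radiocarbon ages of this magnitude are obtained: a conventional carbon-14 age follows directly from the measured fraction of surviving carbon-14. Here is a minimal sketch of that standard formula (the fraction used below is illustrative, not a measurement from the Chiquihuite study, and real dates are further calibrated to calendar years):

```python
import math

LIBBY_MEAN_LIFE = 8033  # years, derived from the Libby half-life of 5568 yr

def conventional_radiocarbon_age(fraction_modern):
    """Conventional 14C age (years BP) from the measured fraction of
    modern carbon remaining in a sample. Laboratories then calibrate
    this against curves such as IntCal to obtain calendar ages."""
    return -LIBBY_MEAN_LIFE * math.log(fraction_modern)

# A sample retaining ~1.9% of its original 14C dates to roughly 31,800 BP:
print(round(conventional_radiocarbon_age(0.019)))
```

The exponential decay means samples in the 30,000-year range retain only a couple of percent of their original carbon-14, which is why such dates demand careful measurement and are often contested.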
Climate change means the climate of Earth changing. Climate change is now a big problem. Climate change this century and last century is sometimes called global warming, because the surface of the Earth is getting hotter because of humans. But thousands and millions of years ago the climate was sometimes very cold, as in the ice ages and snowball Earth. Climate change describes changes in the state of the atmosphere over time scales ranging from decades to millions of years. These changes can be caused by processes inside the Earth, forces from outside (like more or less sunshine) or, more recently, human activities.

Climate change is any significant long-term change in the weather of a region (or the whole Earth) over a significant period of time. Climate change is about abnormal variations to the climate, and the effects of these variations on other parts of the Earth. Examples include the melting of ice caps at the South Pole and North Pole. These changes may take tens, hundreds or perhaps millions of years. In recent use, especially in environmental policy, climate change usually refers to changes in modern climate (see global warming).

Some people have suggested trying to keep Earth's temperature increase below 2 °C (3.6 °F). On February 7, 2018, The Washington Post reported on a study by scientists in Germany. The study said that if the world built all of the coal plants that were currently planned, carbon dioxide levels would rise so much that the world would not be able to keep the temperature increase below this limit.

History of climate change studies

Joseph Fourier in 1824, Claude Pouillet in 1827 and 1838, Eunice Foote (1819-1888) in 1856, Irish physicist John Tyndall (1820-1893) from 1863 onwards, Svante Arrhenius in 1896, and Guy Stewart Callendar (1898-1964) are credited with discovering the importance of CO2 in climate change. Foote's work was not appreciated, and not widely known. Tyndall proved there were other greenhouse gases as well. Nils Gustaf Ekholm coined the term "greenhouse" for this effect in 1901.

- Rosen, Julia; Parshina-Kottas, Yuliya. "A Climate Change Guide for Kids". The New York Times. ISSN 0362-4331. Retrieved 2021-05-29.
- "If the world builds every coal plant that's planned". Washington Post. February 7, 2018. Retrieved January 29, 2019.
- Tyndall, J. (1863). Heat as a Mode of Motion. London & New York.
- Easterbrook, Steve. "Who first coined the term 'Greenhouse Effect'?". Serendipity. Retrieved 11 November 2015.
- Ekholm, N. (1901). "On the variations of the climate of the geological and historical past and their causes". Quarterly Journal of the Royal Meteorological Society 27(117): 1-62. doi:10.1002/qj.49702711702.
The Polish language belongs to the West Slavic group and is part of the Indo-European language family. It is estimated that Polish is the mother tongue of about 44 million people worldwide: Poles and Polish citizens residing abroad. It is also one of the official languages of the European Union. Polish is written in the Polish alphabet, which adds 9 letters to the basic Latin script (ą, ć, ę, ł, ń, ó, ś, ź, ż). Polish is closely related to Kashubian, Lower Sorbian, Upper Sorbian, Czech and Slovak.

"Although the Austrian, German and Russian administrations exerted much pressure on the Polish nation (during the 19th and early 20th centuries) following the Partitions of Poland, which resulted in attempts to suppress the Polish language, a rich literature has regardless developed over the centuries, and the language currently has the largest number of speakers of the West Slavic group. It is also the second most widely spoken Slavic language, after Russian and just ahead of Ukrainian. Historically, Polish has been an important language, both diplomatically and academically, in Central and Eastern Europe. Today, Polish is spoken by over 38.5 million people as their first language in Poland. It is also spoken as a second language in western parts of Belarus, Lithuania and Ukraine, as well as in northern parts of the Czech Republic and Slovakia. Because of emigration from Poland during different time periods, most notably after World War II, millions of Polish speakers can be found in countries such as Australia, Brazil, Canada, the United Kingdom and the United States. There are 55 million Polish language speakers around the world."
Macroeconomics is the branch of economic analysis that studies aggregates relating to the economy as a whole, such as national income, total production, total consumption, total savings, the wage level, and the general price level. Macroeconomics deals with economic affairs "in the large." It looks at the total size, shape, and functioning of the "elephant" of economic experience, rather than the workings or dimensions of its individual parts. To alter the metaphor, it studies the character of the forest, independently of the trees which compose it. In short, macroeconomics studies problems that relate to the whole economy.

Types of Macroeconomic Analysis

Macroeconomics is concerned with the study of aggregates or groups. The following are the types of macroeconomic analysis:

1. Macro Static Analysis
This deals with the equilibrium of macroeconomic variables at a given point in time, such as total consumption and total investment in the country. Macro statics describes the final equilibrium, the point E at which national income (Y) equals total consumption (C) plus total investment (I) plus government expenditure (G), which can be expressed as: Y = C + I + G.

2. Macro Comparative Static Analysis
This deals with the comparison of two macro static equilibria. Suppose the initial equilibrium is at point E, where consumption, investment and government expenditure together equal income. After an additional injection of expenditure by the government (ΔG), a new equilibrium is attained at point E1, where income equals Y2 = C + I + G + ΔG. The level of income changes from OY1 to OY2, and the point of equilibrium shifts from E to E1. The study of the change between two equilibrium points, without tracing the path between them, is called macro comparative static analysis. (A worked numerical sketch is given at the end of this article.)

3. Macro Dynamic Analysis
This deals with the process or path of change between the original equilibrium and the new equilibrium, and with the forces that brought the change about. The change from the initial equilibrium (E) to the new equilibrium (E1) is not sudden; it works through a process with time lags. For example, when the government increases its expenditure (ΔG), the result may be more employment, higher productivity and a higher level of income, and these variables may in turn motivate the government to undertake additional expenditure.

- Macroeconomic Developments Report
- Macroeconomic analyses
- Macroeconomics Analysis and Policy of India
- World Bank report
- National Macroeconomics and Health Reports (WHO)

Limitations of Macroeconomics

The following are the limitations of macroeconomics:

1. No Importance Given to Individual Units
The analysis is incomplete because the whole economy is studied collectively instead of through its individual units, so importance is given only to the aggregate.
2. Possibility of Wrong Predictions
Policies framed on the basis of the whole economy may sometimes be harmful to particular firms or commodities. For example, if the general price level is fixed, it cannot be concluded that the prices of individual commodities are also fixed: the general price level can remain fixed while the prices of some commodities rise and those of others fall.

3. Difficulty in Measuring Macro Quantities
Macro quantities are difficult to measure. Index numbers, for instance, are imperfect because of the weights assigned to their components. It is therefore very difficult to obtain accurate data on total investment, total savings, total consumption, and so on.

4. No Attention to the Structure and Composition of the Group
In macroeconomics, attention is given only to groups and totals, not to the structure and composition of the group.

Features of Macroeconomics

The following are the characteristics of macroeconomics:
- Macro units are treated as variable (dynamic), whereas micro units are treated as static.
- A macro quantity is not always the total of micro quantities, nor can an individual quantity be obtained by dividing a macro quantity by the number of individual units. Micro and macro quantities are determined by different methods.
- The benefit of society as a whole is kept in view in macroeconomic analysis.
- Macroeconomics studies policies and problems that relate to the whole economy, and the effects of these policies are seen not on individual units but on society as a whole.

Thus, now you know about the types and limitations of macroeconomics.
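To make the comparative-static exercise above concrete, here is a minimal numerical sketch of the Y = C + I + G equilibrium. It assumes a linear consumption function C = a + bY; that functional form and all the numbers are assumptions of this example, not taken from the article:

```python
# Minimal Keynesian-cross sketch of macro comparative statics.
# The consumption function C = a + b*Y and all values are illustrative.

def equilibrium_income(a, b, I, G):
    """Solve Y = a + b*Y + I + G for the equilibrium income Y."""
    return (a + I + G) / (1 - b)

a, b = 50.0, 0.8      # autonomous consumption; marginal propensity to consume
I, G = 100.0, 100.0   # private investment; government expenditure

y1 = equilibrium_income(a, b, I, G)         # initial equilibrium E
y2 = equilibrium_income(a, b, I, G + 20.0)  # new equilibrium E1 after dG = 20

print(round(y1), round(y2), round(y2 - y1))  # 1250 1350 100
```

Comparative statics compares only the two printed equilibria (income rises by the multiplier 1/(1-b) = 5 times the injection ΔG = 20); macro dynamic analysis would instead trace the period-by-period path by which income moves from 1250 to 1350.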
The linkage between malnutrition and susceptibility to viral infections becomes especially important in the face of the coronavirus pandemic.

The whole world is reeling under the impact of a pandemic caused by the newly discovered coronavirus, the infection more commonly known as COVID-19. Coronaviruses, including this new virus, are a group of viruses that belong to the family Coronaviridae1. The virus originated in the Chinese city of Wuhan and rapidly spread to other countries; it has affected more than 210 countries and territories around the world2. Infected persons usually show respiratory or flu-like symptoms such as cough, fever, and shortness of breath within 2-14 days after exposure to the virus3. The good news is that most patients experience only mild to moderate illness and recover without requiring special treatment. However, older patients and those with pre-existing medical problems like cardiovascular disease, diabetes, chronic respiratory disease, or cancer are more likely to develop serious illness4, and the disease may even prove fatal. The virus spreads primarily through droplets of saliva or discharge from the nose when an infected person coughs or sneezes3. As there are no specific vaccines or treatments currently available, the only ways to reduce the spread of the disease are practicing respiratory etiquette, observing social distancing, maintaining hygiene, and washing your hands thoroughly. As of April 14th, 2020, 1,926,149 people around the world had been infected, of whom 119,724 had lost their lives2. As researchers around the world desperately try to develop a vaccine and prophylaxis, governments have been working on a war footing to control the spread of the virus and minimize the number of fatalities resulting from it. Although it is too early to confirm, the virus is said to share certain similarities with other virus strains such as the Severe Acute Respiratory Syndrome (SARS) and Middle East Respiratory Syndrome (MERS-CoV) coronaviruses. As scientists are still working out the exact nature of these similarities, certain common precautionary measures have been advised to the general public. It is also worth assessing malnutrition and susceptibility to viral infections through the lens of case studies of other similar viral diseases, epidemics, and endemics. As COVID-19 is newly discovered, confirmed information about it is limited; however, a study of experience with previous episodes of infection can help develop measures to ensure preparedness for the future.

Malnutrition and immunodeficiency

Malnutrition is a major factor responsible for increased morbidity and mortality in a population. Malnutrition usually results from disordered nutrient assimilation or from recurrent infections and chronic inflammation, which could be the result of an underlying immune defect5. Studies have shown that immune dysfunction can be both a cause and a consequence of malnutrition. Immune dysfunction can directly drive pathological processes in malnutrition, including malabsorption, increased metabolic demand, growth hormone dysregulation, and greater susceptibility to infection5. Malnutrition is not just a result of inadequate food intake but also of improper nutrient intake and poor diets, with consequences such as obesity and diabetes.
Characterizing pathogenesis across the spectrum of malnutrition is essential to underpin novel therapeutic approaches5 and to support international goals to improve nutrition, health, and well-being6. Even common infections can be fatal to undernourished children, implying that mortality is related to underlying immunodeficiency even in mild forms of undernutrition8. Infections are also more common and more severe in people with obesity9. Immune dysfunction can also arise before birth via developmental pathways, compounded by environmental and behavioural factors, particularly those experienced during early life5. A condition that results from a genetic or developmental defect in the immune system is called a primary immunodeficiency10. Secondary or acquired immunodeficiency is the loss of immune function that results from a variety of extrinsic factors10.

Immune Defects in Undernourished Children

Studies have documented impaired immune parameters in undernourished children (ages 0-5 years)7. Because the characterisation of immunodeficiency has been limited, especially for mild and moderate malnutrition, the precise nature of immunodeficiency in undernutrition remains uncertain. However, available evidence does show that both innate and adaptive immunity are impaired by malnutrition5. The innate immune system is the first line of defence, confining infection in the early hours of exposure11. The adaptive immune system provides protection from infections that cause disease and death by defending the body from pathogens; unlike innate immune responses, adaptive responses are highly specific to the particular pathogen that induced them12. Defects in innate immune function include impaired epithelial barrier function of the skin and gut, reduced granulocyte microbicidal activity, fewer circulating dendritic cells, and reduced complement proteins, although leukocyte numbers and the acute phase response are preserved5. Defects in adaptive immune function involve reduced levels of soluble IgA in saliva and tears, lymphoid organ atrophy, reduced delayed-type hypersensitivity responses, fewer circulating B cells, and so on, but lymphocyte and immunoglobulin levels in peripheral blood are preserved5. However, most malnourished children respond adequately to vaccination, although the timing, quality, and longevity of vaccine-specific responses may be impaired13.

Correlation between micronutrient deficiencies and susceptibility to infections

Malnutrition can be a consequence of an energy deficit (protein-energy malnutrition, PEM) or of a micronutrient deficiency, and in both cases it is still a major burden in developing countries. It is considered the most relevant risk factor for illness and death, particularly affecting hundreds of millions of pregnant women and young children10. A seemingly healthy child might also be more susceptible to viruses if suffering from micronutrient deficiencies, which often go unnoticed. Together, infections and micronutrient deficiencies can induce immunodeficiency in otherwise healthy children, increasing their susceptibility to viral infections as well as other ailments. A sick person's nutritional status is further aggravated by diarrhoea, malabsorption, loss of appetite, the diversion of nutrients for the immune response, and so on. All of these lead to nutrient losses and further hinder the body's defence mechanisms, with fever also increasing both energy and micronutrient requirements14. Malnutrition thus magnifies the effect of disease, and vice versa.
Malnutrition is the leading cause of immunodeficiency in human beings, and multiple studies have shown that the immune system cannot function optimally in its presence15. There is a direct relationship between malnutrition and immunodeficiency, as reflected in the susceptibility to infections caused by the influenza and Zika viruses.

Case study 1: Vitamin A helps maintain the integrity of the epithelium in the respiratory and gastrointestinal tracts, and its deficiency increases the risk of diarrhoea, Plasmodium falciparum malaria, measles, and overall mortality16. Measles, estimated to kill two million children per year17, is closely linked to Vitamin A deficiency. Children who are already deficient in Vitamin A are at a much greater risk of dying from measles; thus, vaccination against measles often includes a high dose of Vitamin A. Furthermore, measles also depletes the body's supply of Vitamin A and is likely to aggravate other existing nutritional deficiencies17. Studies have shown that Vitamin A deficiency also increases the risk of developing respiratory diseases19. Alarmingly, according to the World Health Organization (WHO), an estimated 250 million preschool children are Vitamin A deficient, and it is likely that in Vitamin A deficient areas a substantial proportion of pregnant women are also deficient20.

Case study 2: Zika virus, the cause of Zika fever, is an arthropod-borne virus first discovered in Uganda in the late 1940s21. It is transmitted to people through the bite of infected mosquitoes of the Aedes species22. The virus first spread rapidly in countries with high rates of malnutrition resulting from an energy deficit (protein-energy malnutrition) or a micronutrient deficiency21. In 2007, the first documented outbreak of Zika virus disease was reported in Yap State, Federated States of Micronesia, where about 73% of the population aged three years and over was estimated to have been infected21. Subsequent outbreaks occurred in Southeast Asia and the Western Pacific. In December 2015, Brazil's Ministry of Health estimated that 440,000-1,300,000 suspected cases of Zika virus disease had occurred in Brazil that year23. The virus spread to different regions of Africa and Southeast Asia, in addition to diverse Pacific islands, South America, and the Caribbean.

Way forward: nutritional security as the first line of defence

A malnourished person has more severe disease episodes, more complications, and spends more time ill in each episode. Malnutrition and susceptibility to infectious diseases are thus not only closely linked but form a vicious cycle of infection, reduced immunity, and deteriorating nutritional status, leading to consequences such as impaired child development, compromised immunity leading to infections and diseases, reduced productivity, poverty, impaired development of education and health systems, and socio-economic and political instability24.

- Use government mechanisms such as the PDS (Public Distribution System), the Mid-day Meal scheme, and the ICDS (Integrated Child Development Services) for the distribution of vitamin supplements, especially to the most vulnerable populations.
- Run extensive public awareness campaigns involving government, media, public figures, etc. about the significance of good nutrition, especially with regard to micronutrients, to improve immunity against the virus.
These can be along the lines of the current awareness campaigns encouraging sanitary practices among the public to protect against coronavirus infection.
- The lessons from the current coronavirus outbreak should be used to tackle similar situations in the future by increasing our preparedness, especially with respect to measures by the government (rapid testing, early detection and isolation, sufficient medical equipment, etc.) and by citizens (increasing personal immunity, improving the nutritional quality of available food, personal hygiene, etc.).
Remember dropping your milk teeth? After a lot of wiggling, the tooth finally dropped out. But in your hand was only the enamel-covered crown: the entire root of the tooth had somehow disappeared. In a paper published in Nature, a team of researchers from Uppsala University and the ESRF in France apply synchrotron X-ray tomography to a tiny jawbone of a 424-million-year-old fossil fish in order to illuminate the origin of this strange system of tooth replacement. Teeth are subject to a lot of wear and tear, so it makes sense to be able to replace them during the lifetime of the animal. Surprisingly, however, the teeth of the earliest jawed vertebrates were fixed to the jaw bones and could not be shed. Tooth shedding eventually evolved independently on two occasions, using two quite different processes. In sharks and rays, the fibres that anchor the tooth to the skin of the jaw dissolve and the whole tooth simply falls out. In bony fish and land vertebrates, the developing tooth becomes attached directly to the jaw bone by a special tissue known as "bone of attachment", and when it is time for the tooth to be shed this attachment must be severed: specialized cells come in and resorb the dentine and bone of attachment until the tooth comes loose. That is why our milk teeth lose their roots before they are shed. But when did this process evolve? The authors of the new study decided to investigate a jaw bone of the 424-million-year-old fossil fish Andreolepis from Gotland in Sweden, a fish close to the common ancestor of all living bony fish and land vertebrates. The jaw is a tiny thing, less than a centimetre in length, but it hides a wonderful secret: the internal microstructure of the bone is perfectly preserved and contains a record of its growth history. Until recently it has only been possible to see internal structures by physically cutting thin sections from the fossil and viewing them under the microscope, but this destroys the specimen and provides only a two-dimensional image that is hard to interpret. However, at the European Synchrotron Radiation Facility (ESRF) in Grenoble, France, it is now possible to make tomographic scans that capture the same level of microscopic detail, in three dimensions, without damaging the specimen. Donglei Chen, first author of the study, has spent several years painstakingly 'dissecting' the scan data on the computer screen, building up a three-dimensional map of the entire sequence of tooth addition and loss, the first time an early fossil dentition has been analyzed in such detail. "Every time a tooth was shed, the resorption process created a hollow where it had been attached. When the succeeding replacement tooth was cemented in place by bone of attachment, the old resorption surface remained as a faint buried scar within the bone tissue. I found up to four of these buried resorption surfaces under each tooth, stacked on top of each other like plates in a cupboard. This shows that the teeth were replaced again and again during the life of the fish," explains Donglei Chen. This is the earliest known example of tooth shedding by basal resorption, and it seems most similar to the process of tooth replacement seen today in primitive bony fish such as the gar (Lepisosteus) and the bichir (Polypterus). As in these fish, new replacement teeth developed alongside the old ones, rather than underneath them as in us. "The amount of biological information we get from the scans is simply astonishing. 
We can follow the process of growth and resorption right down to cellular level, almost like in a living animal. As we apply this technique to more early vertebrates, we will come to understand their life processes much better - and no doubt we will be in for some major surprises," says Per Ahlberg, one of the leaders of the project.
Scrap paper was the primary ingredient in Mme. Webster's class's fruit salad this November. The grade 6 students took up the challenge of "closing the loop", using waste material to create something new. In this case, they diverted scrap paper from the recycling and used it to make new paper, in the shape of fruits including watermelon and strawberries. To make homemade paper, students first ripped the scraps into small pieces before putting them in a blender with water. Once it reached a smoothie-like consistency, the mixture was filtered and shaped into whatever their creative minds could think of. Finally, the new paper was rolled out and left to dry. What a fun way to learn first-hand how old things can become new again!
Comic books and graphic novels can be a good place to start reading for children who find pages of unbroken text difficult. Images can help children read and understand ideas that they might find tricky. Not to mention, reading comics is lots of fun! We’ve put together some of our top tips to get your child interested in comics: - Explore – There are so many comics out there that you can find titles to interest every child. Our book list has lots of different suggestions for children of different ages and covers lots of topics. - Magazines – If your child might find a whole comic book too daunting to begin with, magazines like The Beano and The Phoenix can be a great introduction. Both magazines are published weekly. Once your child has got to grips with the style of comics, they can explore the genre further. - Visit the library – Many libraries now have comics sections for you and your child to explore. You can ask your librarian for their favourites. Many bookshops also have great comic book sections too. - Make your own comic strip – Making your own comic is a great way to boost writing and storytelling skills. We’ve created our own tool to help your child embrace their comic genius. Judd Winick, the comic book artist and author of the Hilo series, shares his top tip for engaging children with comics: “Find one book that your child will fall in love with. You'll be very surprised how a child who doesn't like reading will cling to a comic or a graphic novel. They finish reading it, and they'll read it over again and over again. It’s what's great about a lot of comics and graphic novels is that many of them are serialized. So if they like the first one, there's usually one that follows after that.”
Almost all people who are colorblind are able to see colors but have trouble differentiating between certain ones. Another interesting point is that not all colorblind people have trouble with the same colors. A majority cannot distinguish between greens and reds in hazy light, others cannot separate yellows from blues, and a very small group of people experience a condition called monochromatism, in which they can only see black and white.

What Causes Colorblindness?

Color blindness is a hereditary condition that results from differences in how the light-sensitive cells in the retina react to different colors. These cells, known as cones, sense wavelengths of light, which enables the retina to differentiate between colors. Colorblind people have difficulty seeing certain colors or identifying obvious differences between two shades of a color under regular lighting. Most types of colorblindness are genetic and present at birth, so colorblindness is normally diagnosed during childhood.

Signs of Colorblindness

Signs of colorblindness differ from person to person. While some people have more serious forms of color blindness, others experience the condition only mildly. You may be colorblind and not even know it. To help you determine this, here is a list of signs to look out for:
- Trouble distinguishing between shades of green and red, blue and green, or other colors
- Trouble identifying colors in dim light
- Sensitivity to bright colors and lights
- Struggling to read from colored pages
- Frequent complaints of headaches or eye aches when looking at something green on a red background, or vice versa
- Avoiding colored objects, colored games or brightly colored pictures

There are a number of tests available to identify complications associated with color blindness. The most common is the American Optical/Hardy, Rand, and Ritter Pseudoisochromatic Test, which comprises a number of discs covered in color dots of various colors and sizes. An individual with normal color vision looking at a test item will see a number clearly, while a colorblind person will not be able to identify it. Another test is the Ishihara Test, which consists of eight plates. The patient is asked to look for numbers among the different colored dots on each test plate. Some of these plates differentiate between blue and green/red color blindness: individuals with normal color vision see one number, while those with green/red color deficiency perceive a different number.

There is no absolute cure for color blindness as of yet. However, most color deficiencies are mild, and people adapt to them through simple methods. Treatment is similarly limited when color blindness has been caused by conditions such as diabetes mellitus, Alzheimer's disease, leukemia, macular degeneration, Parkinson's disease, retinitis pigmentosa, multiple sclerosis, or liver disease. However, some forms of acquired color blindness can be prevented by limiting the use of drugs and alcohol and of certain high blood pressure medications.
Diffusion in a Baggie

Introduction: In this lab you will observe the diffusion of a substance across a semipermeable membrane. Iodine is a known indicator for starch: an indicator is a substance that changes color in the presence of the substance it indicates. Watch as your teacher demonstrates how iodine changes in the presence of starch.

Prelab observations: Describe what happened when iodine came into contact with starch.

Procedure: Fill a plastic baggie with a teaspoon of corn starch and half a cup of water, and tie the bag.

What's in the bag? We're going to think about concentrations now: which substance is more or less concentrated depends on which one has the most stuff in it.

Osmosis Demystified (©2003 Darel Rex Finley; the complete article, unmodified, may be freely distributed for educational purposes). "Osmosis" is the process by which small molecules automatically cross a semi-permeable membrane, compensating for a difference in the concentration of those molecules on either side of the membrane. But how do the molecules know to do that? What makes it happen? The standard explanation is a bit fuzzy and mystical; in fact, osmosis can be explained entirely by random movements of molecules, without reference to mysterious "tendencies" to cross gradients. (Update, 2008.10.05: a few readers have suggested that this explanation may not apply to the case of a rigid membrane, and the author agrees that it doesn't.)

Bird Respiratory System

The avian respiratory system delivers oxygen from the air to the tissues and also removes carbon dioxide. In addition, the respiratory system plays an important role in thermoregulation (maintaining normal body temperature). The avian respiratory system is different from that of other vertebrates: birds have relatively small lungs plus nine air sacs that play an important role in respiration (but are not directly involved in the exchange of gases). The air sacs permit a unidirectional flow of air through the lungs, meaning that air moving through bird lungs is largely 'fresh' air and has a higher oxygen content.

Figure: Avian respiratory system (hd = humeral diverticulum of the clavicular air sac; adapted from Sereno et al. 2008). Pulmonary air-sac system of a Common Teal (Anas crecca).

Bioinspired Water Filtration Makes Nearly Anything Drinkable

In the break room of a South Boston research center, a gleaming fish tank stands next to shelves of Cheez-It crackers. Inside, angelfish, catfish, and minnows swim around brightly colored plastic seaweed.
The only sign of the water's polluted past is a small jar of tar-black liquid, wastewater from hydraulic fracturing, sitting on top of the tank. "When oil and gas water comes to the surface, they take the hydrocarbons off and we bring it all the way to potable water," says Jim Matheson, CEO of Oasys Water. Oasys Water is one of a number of companies that are taking the decades-old water purification technology known as reverse osmosis and turning it on its head. The new approach, called forward osmosis, can treat far dirtier water, often using significantly less energy than existing purification methods. Where traditional water filtration may choke on polluted waters like this, forward osmosis can make them potable.
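The claim in the osmosis piece above, that osmosis and diffusion follow from nothing more than random molecular motion, can be illustrated with a toy simulation. In this sketch (all parameters are arbitrary choices for the example), every particle starts in the left half of a box, and unbiased random steps alone even out the concentration:

```python
import random

# Minimal 1-D random-walk sketch of diffusion: particles start in the
# left half of a box; with no force or "tendency" pushing them, random
# steps still equalize the concentration between the two halves.
N_PARTICLES, N_STEPS, BOX = 1000, 5000, 100

positions = [random.randint(0, BOX // 2) for _ in range(N_PARTICLES)]
for _ in range(N_STEPS):
    for i, x in enumerate(positions):
        step = random.choice((-1, 1))              # no preferred direction
        positions[i] = min(max(x + step, 0), BOX)  # reflecting walls

left = sum(1 for x in positions if x <= BOX // 2)
print(f"left half: {left}, right half: {N_PARTICLES - left}")  # ~500 / ~500
```

Net flow from the concentrated side to the dilute side emerges purely from statistics: more particles happen to be on the left, so more random steps cross the midline left-to-right than right-to-left until the two sides balance.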
The ice edge – a vulnerable place with lots of life

As the ice melts and moves northwards in the Arctic spring, a productive band teeming with life is created in an otherwise nutrient-poor sea. "Generally, the waters in the Arctic are less productive than Atlantic waters south of the Polar front", says Haakon Hop, Senior Researcher and head of ice ecosystems at the Centre for Ice, Climate and Ecosystems (ICE) at the Norwegian Polar Institute.

Algal bloom in the melt water

Phytoplankton is the foundation for animal life in the sea. In the spring, an algal bloom takes place. In order for that to happen, there must be stratification of the water column. Such layers at depths of 20-30 metres are created by differences in salinity or temperature, and they prevent the algae from sinking to depths where there is too little light to support plant growth. The stratification is created either by the surface water warming up or by melt water being added at the surface. The solar heating in the Arctic is too weak to create stratification, but when the sea ice melts, the surface water near the ice edge becomes fresher than the salt water deeper down. Nutrients are another important condition for an algal bloom. There are generally few nutrients in the Arctic sea, but autumn and winter storms churn the waters so that nutrient salts in the depths become available to algae further up. In the shallow Barents Sea, this churning reaches all the way to the seabed.

The ice edge

"When the ice edge melts northwards in the spring and summer, new areas are continuously becoming available for algal bloom. It lasts until all the nutrient salts have been consumed. This algal bloom along the ice edge thus becomes a productive band that moves further and further north over the course of the summer", says Hop. Zooplankton feed on the algae at the ice edge, and zooplankton are in turn food for fish, sea birds and marine mammals. The ice edge is also where many seals rest and give birth, and where polar bears hunt seals. Despite the teeming animal life, there is little commercial fishing along the ice edge.

[Illustration: When the sea ice melts in the spring, the ice edge becomes a rich area teeming with life. Audun Igesund/Norwegian Polar Institute]

The ice retreats

"The extent of the sea ice in the Arctic varies significantly through the year and from year to year", says Sebastian Gerland, the section leader for Oceans and Sea Ice at the Norwegian Polar Institute. Since 1979, their measurements have shown a trend towards less and less sea ice in the Barents Sea. Not all the answers have been found for why the sea ice is retreating. A warmer climate, with warmer air and water, naturally leads to shorter frost seasons and an earlier melting of the ice. However, several processes complicate the picture. "As a starting point, the air is cold enough, and the water is cold enough, to create sea ice. However, many factors such as sunlight, cloud cover, wind and ocean currents, water column stratification and ice dynamics affect this. Precipitation changes are also significant. For example, a thicker snow cover means that the sea ice grows more slowly", says Gerland. That the ice edge moves northwards affects animal life in the Arctic. The journey to feed at the ice edge can become too long for polar bears and sea birds. "For birds nesting in Svalbard, it can become too far to fly from the nesting locations to the ice edge to find food.
We are already seeing birds such as ivory gulls, kittiwakes and little auks preferring to graze in front of glaciers in Svalbard. Polar bears that remain on the mainland risk becoming very thin", says Haakon Hop. According to Hop, it is not just in the Barents Sea that the ice edge has retreated northwards. In the East Siberian Sea, the sea north of central and eastern Siberia, the ice edge has retreated so far back that it has moved from the shallow continental shelf out over the deep Arctic Ocean. "When the ice edge is over the several-thousand-metre-deep Arctic Ocean, the algae and nutrients disappear down to the bottom and out of the ecosystem. They are too far down to be reused the following year when the autumn and winter storms have churned the water masses, as they are in the shallow continental seas. This has consequences for the biological production, which doesn't get very high", says Hop.

The polar front, which is the border between warm Atlantic water and cold Arctic water, generally follows the ice edge at its maximum range in the eastern Barents Sea. "Today, the ice-free part of the Barents Sea is important for the fisheries, with rich populations of cod, haddock and pollock. These are also important grazing areas for white-beaked dolphins and several species of baleen whales", says Jan Helge Fosså, Senior Researcher in the research group for benthic resources and processes at the Institute of Marine Research.

Parts of the Barents Sea have been opened for petroleum activity. If this is expanded to include areas with sea ice, it can cause environmental challenges. "As a starting point, the problems related to oil are the same in the Arctic as they are for petroleum activities further south", says Harald Loeng, the Research Director at the Institute of Marine Research. Though these are still hypothetical problems, he considers acute discharges, such as a blowout or a shipping accident, to be the greatest danger. If such a discharge occurs in the summer, it can affect the algal bloom and fish larvae. On the other hand, in the winter there is little biological production in the ice-covered areas. "The question is which activity we want to permit in areas with a risk of sea ice", says Loeng.

[Illustration: The ice edge as of 13 February 2013]

According to Ann Mari Green, the chief engineer in the section for petroleum activities at the Norwegian Environment Agency, the ice edge is an ecosystem that is vulnerable to acute oil pollution, contaminants and climate change. "The production of phytoplankton and zooplankton takes place at low temperatures in the upper layers of the ocean in a 20-50 km broad band along the ice edge. This means that the concentration of grazing species in the area can be high. This makes the ice edge vulnerable to acute oil pollution in parts of the year, especially for sea birds." In addition to oil spills, she considers discharges of soot from combustion to be a problem. Soot is considered a short-lived climate driver, and it contributes to faster global warming by increasing the melting of the ice. This affects the basis for the productive ecosystem.

Acute pollution at the ice edge comes with special challenges. This is not just because it takes place far from inhabited areas and infrastructure. Ice, and in part the darkness, makes it difficult to discover the pollution and makes the clean-up challenging. Booms and skimmers work poorly in an ocean full of ice.
Chemical dispersal also has a limited effect because the ice dampens the waves that are necessary for the dispersants to mix with the oil. Incinerating the oil can be problematic because of the negative effects of the soot on the ice, but may nevertheless be the best alternative for removing oil from waters with sea ice.
What is Non-Verbal Reasoning?

Non-Verbal Reasoning tests are commonly used by employers in fields such as medicine, engineering, piloting and air traffic control, among other areas of recruitment. The test is also used in the 11+ Kent Test, which children must take to gain placements at grammar schools.

What does it mean? Non-Verbal Reasoning tests are assessments of a person's logical and technical ability to visualise patterns, shapes and formations. The questions appear in diagrammatic form, and are often referred to as Abstract or Diagrammatic Reasoning. This type of assessment enables employers to determine how well you can understand and visualise information to solve problems. You need to be able to recognise and identify patterns amongst abstract shapes and images. Such tests may include:

- Determining identical shapes
- Rotating shapes
- Reflections of shapes
- Finding the odd shape
- Finding the missing shape
- 3D shapes
- Shading and colours
- Number sequences

All of the above are typical questions that often appear in a Non-Verbal Reasoning test. It is imperative that you fully comprehend the question types and know how to answer them. This comprehensive guide will provide you with lots of test questions in order to better your chances of passing your assessment. When you order, you will receive the following free bonus: 30 DAYS' FREE PSYCHOMETRIC ONLINE TESTING SUITE ACCESS. As an additional bonus, you will receive 30 days' FREE ACCESS to our professional online testing suite, which will equip you with sample tests that will help you prepare fully! After your 30-day free trial ends, the service is automatically charged at a mere £5.95 plus VAT per month with no minimum term. You can cancel at any time. See our terms and conditions for more details.

Non-Verbal Reasoning Tests Workbook
In this article we are learning about "void pointers" in the C language. Before going further, it will be good if you refresh your knowledge of pointers by reading – Introduction to pointers in C.

A pointer variable is usually declared with the data type of the "content" that is to be stored inside the memory location to which the pointer variable points. Ex:-

char *ptr;
int *ptr;
float *ptr;

A pointer variable declared using a particular data type cannot hold the location address of variables of other data types. That is invalid and will result in a compilation error. Ex:-

char *ptr;
int var1;
ptr = &var1; // This is invalid because 'ptr' is a character pointer variable and var1 is an integer.

Here comes the importance of a "void pointer". A void pointer is nothing but a pointer variable declared using the reserved word in C 'void'. Ex:-

void *ptr; // Now ptr is a general-purpose pointer variable

When a pointer variable is declared using the keyword void, it becomes a general-purpose pointer variable. The address of any variable of any data type (char, int, float, etc.) can be assigned to a void pointer variable.

Dereferencing a void pointer

We have seen how to dereference a pointer variable in our article – Introduction to pointers in C. We use the indirection operator * to serve the purpose. But in the case of a void pointer we must typecast the pointer variable before dereferencing it. This is because a void pointer has no data type associated with it. There is no way the compiler can know (or guess?) what type of data is pointed to by the void pointer. So to fetch the data pointed to by a void pointer we typecast it with the correct type of the data held inside the void pointer's location:

int a = 10;
float b = 37.75;
void *ptr; // Declaring a void pointer

ptr = &a; // Assigning the address of an integer to the void pointer.
printf("The value of integer variable is= %d", *((int *)ptr)); // (int*)ptr typecasts; *((int*)ptr) dereferences the typecast void pointer.

ptr = &b; // Assigning the address of a float to the void pointer.
printf("The value of float variable is= %f", *((float *)ptr));

Output:

The value of integer variable is= 10
The value of float variable is= 37.75

A void pointer can be really useful if the programmer is not sure about the data type of the data input by the end user. In such a case the programmer can use a void pointer to point to the location of the unknown data type. The program can be written to ask the user for the type of the data, with the typecasting performed according to the information the user enters. A code snippet is given below:

void funct(void *a, int z)
{
    if (z == 1)
        printf("%d", *(int *)a);   // If the user inputs 1, the data is an integer and typecasting is done accordingly.
    else if (z == 2)
        printf("%c", *(char *)a);  // Typecasting for a character pointer.
    else if (z == 3)
        printf("%f", *(float *)a); // Typecasting for a float pointer.
}

Another important point you should keep in mind about void pointers is that pointer arithmetic cannot be performed on a void pointer:

void *ptr;
ptr++; // This statement is invalid and will result in an error, because 'ptr' is a void pointer variable.
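For completeness, here is the above material assembled into a single compilable program. This is only a sketch: the function name print_value and the type codes 1, 2 and 3 are illustrative choices, not anything mandated by C.

#include <stdio.h>

/* Print the value behind a void pointer; 'type' is a caller-supplied
   code telling us how to typecast: 1 = int, 2 = char, 3 = float. */
void print_value(void *p, int type)
{
    if (type == 1)
        printf("int: %d\n", *(int *)p);
    else if (type == 2)
        printf("char: %c\n", *(char *)p);
    else if (type == 3)
        printf("float: %f\n", *(float *)p);
}

int main(void)
{
    int a = 10;
    char c = 'x';
    float b = 37.75f;

    /* The same void-pointer parameter accepts all three addresses. */
    print_value(&a, 1);
    print_value(&c, 2);
    print_value(&b, 3);
    return 0;
}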
What Is a Demand Shock?

A demand shock is a sudden and surprising event that dramatically increases or decreases demand for particular goods or services, usually on a temporary basis. A positive demand shock is a sudden increase in demand, while a negative demand shock is a decrease in demand. Both will have an effect on the prices of goods and services. Demand shocks may be contrasted with supply shocks, where there is a sudden decrease or increase in the supply of a good or service that causes an observable economic effect; both supply and demand shocks are forms of economic shocks.

- A demand shock is a sharp, sudden change in the demand for particular goods or services.
- A positive demand shock will cause the price of goods to skyrocket and lead to limited supply, while a negative shock will cause prices to collapse and lead to an oversupply.
- Demand shocks are often temporary disruptions that the market will eventually adjust to, by incentivizing more production in a positive shock, or through the bankruptcy of producers in a negative shock.

Understanding Demand Shocks

A demand shock is a large but transitory disruption of market prices caused by an unexpected event that changes the perception and level of demand with regard to a specific good or service, or a group of such goods or services. Earthquakes, terrorist events, technological advances, and government stimulus programs are all examples of events that can cause demand shocks. When the demand for a good or service rapidly increases, its price typically increases because suppliers cannot cope with the increased demand at the current level of supply capacity. In economic terms, this results in a shift of the demand curve to the right. A sudden drop in demand causes the opposite to happen, since supply will remain elevated relative to the decreased demand until capacity can be reduced. A positive demand shock can come from fiscal policy, such as an economic stimulus or tax cuts. Other demand shocks can come from the anticipation of a natural disaster, such as buying bottled water or gasoline before a hurricane. Negative demand shocks can come from contractionary policy, such as tightening the money supply or decreasing government spending.

Example of a Demand Shock

The rise of electric cars over the past few years is a real-world example of a demand shock. It was hard to predict the demand for electric cars and, therefore, for their component parts. Lithium batteries, for example, were in low demand as recently as the mid-2000s. From 2010, however, the rise in demand for electric cars from companies like Tesla Motors increased the overall market share of these cars to 3 percent, equal to roughly 2,100,000 vehicles. This meant that the demand for the lithium batteries that power these cars also increased sharply, and somewhat unexpectedly. Lithium is a limited natural resource that is difficult to extract and found only in limited parts of the world, so production has been unable to keep up with the growing demand, and the supply of newly mined lithium remains lower than it would be otherwise. The result is a demand shock. Over the period from 2004 to 2014, the demand for lithium more than doubled, pushing the price per metric ton from $5,180 in 2011 to $6,600 in 2014.
Because the demand for electric vehicles and for other uses of batteries, such as mobile phones and tablets, has exploded since 2014, the price of lithium has more than doubled again, to $16,500 per metric ton in 2018. The increase in demand for electric cars increased the cost of component parts, and these rising costs are being passed on to the consumer, raising the cost of electric cars in a positive demand shock environment. An example of a negative demand shock would be a product that becomes technologically obsolete, such as the cathode ray tube. The introduction of low-cost flat-screen televisions caused the demand for cathode-ray-tube TVs and computer screens to drop to nearly zero in only a few short years.
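The mechanics of a demand shift can be made concrete with a toy linear model. The sketch below (the curves and all parameter values are invented purely for illustration) solves demand Qd = a - b*P against supply Qs = c + d*P and shows that a positive demand shock, modeled as a jump in the intercept a, raises the equilibrium price:

#include <stdio.h>

/* Linear market model: demand Qd = a - b*P, supply Qs = c + d*P.
   Setting Qd = Qs and solving gives the equilibrium price
   P* = (a - c) / (b + d). */
static double equilibrium_price(double a, double b, double c, double d)
{
    return (a - c) / (b + d);
}

int main(void)
{
    double b = 2.0, c = 10.0, d = 3.0;  /* illustrative slopes/intercept */
    double a_before = 100.0;            /* demand intercept before shock */
    double a_after  = 150.0;            /* positive shock: demand curve shifts right */

    printf("equilibrium price before shock: %.2f\n",
           equilibrium_price(a_before, b, c, d));  /* 18.00 */
    printf("equilibrium price after shock:  %.2f\n",
           equilibrium_price(a_after,  b, c, d));  /* 28.00 */
    return 0;
}

A negative shock is the same computation with a falling instead of rising, which drives the equilibrium price down, matching the oversupply story above.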
Expository Essay Topics for School and College

Starting from high school, students deal with many types of essays. In fact, there are so many types of writing assignments that even an experienced student can get confused trying to tell them apart. Teachers assign different types of writing tasks to introduce their students to the various essay types. However, it's quite difficult to juggle these tasks alongside the regular tests, projects, practical lessons and piles of other work that almost every student has. That's why we decided to create a helpful guide to writing an excellent expository essay – one of the most common types of writing assignment.

Traps and Pitfalls of Writing an Expository Essay

Writing an expository essay is not a very difficult task. You can easily cope with it if you know the main principles and basic rules of essay writing. However, you should start by understanding how this type of essay differs from the others. The hidden danger – and the reason so many students get lower marks for their expository essays – is the similarity between expository and narrative essays. Many students can't tell these two types apart and write both of them in the same way. That is not correct; however, it's easy to improve! There are two key words that clearly show the difference between these essay types:

- TELL – narrative essay. In this case, all you have to do is tell your reader about an idea, object, event or situation. Your task is to inform your target audience about something.
- EXPLAIN – expository essay. Dealing with an expository essay, you have to write an explanation of something. Your task is not only to introduce a topic to your reader but also to explain a statement or your point of view. You can even provide instructions to your target audience or teach your readers something.

Basic Ideas for Expository Essays

An expository essay is aimed at giving your readers a basic understanding of what the discussed object is, how to do something, or how something is done. While there is a great diversity of topics, you should start by deciding on the subtype of your expository essay. The most common variants of expository essays are:

- Definition. An exposition definition essay gives a detailed explanation of something. It reveals the main concepts of the topic, including a description of the most important features of the discussed subject. The main question such an essay answers is "What is it?"
- Process. This type of expository essay explains the process of creating something. There are two possible variants: it can be a kind of instruction that gives your reader an idea of how to do something, or it can be an explanation of how certain things are produced. For example, a detailed explanation of how an LED lamp is made, without suggesting you make one on your own.
- Classification. This type of essay describes certain features of a subject or phenomenon that allow us to determine the class or type it belongs to. The explanation may also include some general information about the possible classes and types.
- Compare and contrast. This type of expository essay includes a comparison between two objects that helps draw a reader's attention to certain features of one or both of them. Often, contrast is the best way to discuss small but important features of a thing.
- Cause and effect. Often we accept a phenomenon or an object as a whole, without understanding its causes and the effects it has on other things. A cause and effect expository essay throws light on complex concepts, letting your readers see the whole process and the chain that leads to the final result.

How to Recognize an Expository Essay by Its Topic

Expository essays may refer to many topics and themes. However, there's always a way to tell that an essay is an expository one right after reading its topic. What are the special features that help distinguish expository essays from others?

- Signal words. There are special words that express the main function of the essay and help you understand which type of paper it is: describe, define, explain, etc.
- Guidelines or instructions. If the topic sounds like the name of a tutorial or instruction, the essay is probably an expository one.

Important Features of Expository Essay Topics

Often teachers give you an expository essay topic that you have to work with. However, sometimes students are allowed to choose a topic on their own. This is a great chance to express your knowledge and creative thinking. To choose a topic that allows you to do that, you should understand which features a good topic has:

- The theme should be interesting to you and to your target audience. If you choose a popular topic that doesn't attract you and doesn't reflect your interests, you may lack the motivation to work on it. Conversely, if you rely on your personal interests only, your essay may not sound interesting to others. That's why it's very important to find a good compromise and settle on a topic that will be interesting both to you and to your readers.
- The topic should be complicated enough to require explanation. If you choose an easy topic and start to explain concepts that everyone knows, it may sound silly. That's why it's important to look for a complex theme that genuinely needs explanation and description.
- The topic should reflect your academic knowledge. If you study at school, your topic may sound less complicated than if you study at college or university. It's important to choose a theme that is attractive and understandable to your classmates or groupmates and, at the same time, allows you to express the knowledge you have.
- The topic should correspond to your academic field and subject. If you study literature, your essay should be connected to that field. If you are a medical student, it makes sense to write about health issues. Your teacher may also ask you to write on a different theme; in that case, the essay topic can differ from the main subject and field of your study.
- There should be enough information about the topic. Before making a final decision about a topic, check whether there are enough information sources you can work with. If you are not prepared for extra expenses, it's better to make sure the sources are free.

It's very important to start writing your essay only after developing a clear understanding of your topic and what you will write about. When you pick a topic, don't forget to check that it has all the features mentioned above.

Great Collection of 50 Best Topics for Your Expository Essays

Even if you have a clear understanding of how an expository essay topic should sound, it may be difficult to come up with an idea for your own. In that case, some good examples can be especially useful.
We’ve chosen 50 good topics that can be used directly or can serve as examples and inspiration for creating your own. All topics are separated into groups according to the field and theme they belong to.

Personal Experience

Writing about your personal experience is a great choice. First, essays of this type are easier to write. Second, you have more opportunities to make your essay sound interesting and easy to understand. Third, you will probably spend less time searching for additional information, as the topic is already familiar to you. The task of writing an expository essay about some personal experience is often assigned in school, but sometimes it's given to college students too.

- Describe your first day of living in a new flat/house/apartment.
- Explain how visiting your grandparents influenced your character.
- Describe how your life changed when you got your pet.
- Describe your first week of living outside your parents' home.
- Describe your best traveling experience.
- Explain how your first job helped you to become more independent.
- Describe a book that influenced your life philosophy.
- Explain the choice of your future profession.
- Explain how your favorite teacher changed your attitude to studying.
- Describe the situation that embarrassed you most of all.

Literature

Literature is a very interesting subject. Reading a book or a poem, different people may feel different emotions or take in the information in different ways. That's why it's always interesting to write your own explanation of something and then compare it to how your groupmates see it. This is the reason why teachers ask their students to write an expository essay about a poem or a book that was studied recently.

- Define which writing methods are often used in your favorite book.
- Explain the role of the monologues in a poem that you read recently.
- Explain the factors that could influence the behavior of the main character.
- Explain the reasons why poetry is less popular than it was 200 years ago.
- Define the main features that allow readers to recognize the style of the author.
- Explain the criteria that are used for evaluating books, poems, and novels.
- Describe how a novel is created.
- Define the common features of the popular literature of the 18th century.
- Describe how dialogues help to attract the attention of a reader.
- Describe the subtext of a story.

History

Almost every student faces the task of writing an expository essay on a historical topic. Knowledge of history is required in almost every scientific sphere. Moreover, historical topics are often interesting to both the writer and the target audience.

- Define the most significant changes that have happened in medicine since the beginning of the 20th century.
- Explain how the legal system of the USA was formed.
- Describe the influence of the Second World War on the world's economy.
- Define the key historical figures in the development of the car industry.
- Explain the reasons for the First World War.
- Define the most important reasons for the development of the civilization of Ancient Egypt.
- Define the key differences between the legal system of Ancient Rome and those of other countries of the same period.
- Explain which factors influenced the art of the 17th century.
- Explain the reasons for the popularity of communism.
- Define the most important causes of the Civil War in the USA.

Social Issues

Writing an essay about social issues is a great chance to get the attention of your target audience and to earn an excellent mark. Why?
The reason is the popularity of social issues and the interest they always generate. Moreover, the theme is attractive for its diversity and the variety of topics connected to social issues. You can easily find something interesting and trendy to talk about.

- Describe the main negative changes that have happened in society during the last decade.
- Define the key reasons for the growing rate of suicide among teenagers and young people.
- Explain how lessons in sexual education in schools can reduce the number of divorces in the future.
- Define which changes should be made to help elderly people feel more involved in social life.
- Describe how the relationship between parents and children has changed in the last 50 years.
- Explain the main factors behind unemployment growth.
- Describe possible means that can help young mothers overcome post-natal depression.
- Explain the importance of tolerance between professional workers.
- Define the main social causes of bullying among school children.
- Explain why wearing a uniform may be important to college students.

Science and Technology

Writing about science or technology can be an exciting task. Topics connected to new inventions and recent changes in an industry always attract a lot of interest and attention. If you like to write on unique topics, you can choose from many themes that are fresh and trending in the sphere of science and technology.

- Explain why it's important to have some IT classes in every school.
- Describe the role of science development in reducing the level of pollution globally.
- Define the industry that causes the most negative effects on the environment.
- Define ways of developing genetic engineering without breaking moral norms and values.
- Describe how solar energy can be used.
- Describe the way IT technologies may change in the next decade.
- Explain why it's important to study black holes.
- Describe the newest medical inventions aimed at treating cerebral diseases.
- Define the most important factors for slowing down global warming.
- Explain the opportunities space exploration offers for solving current environmental problems.
M.Ed., Stanford University
Winner of multiple teaching awards
Patrick has been teaching AP Biology for 14 years and is the winner of multiple teaching awards.

Eukaryotic cells are more complex than prokaryotic cells. Eukaryotic cells are found in most of the more complex kingdoms of life, including fungi, animals, plants and protists. Eukaryotic cells have internal membranes around their nuclei and organelles, and they have cytoskeletons. Plant cells also have cell walls, but these are not present in other types of eukaryotic cells.

Eukaryotic cells are one of the two major kinds of cells in the world of Biology. Now their name actually gives you their key characteristic, because "eu" means true or good, "kary" means nucleus, and that ties into the basic characteristic of all eukaryotic cells: they have an organelle within the cell that has its own membrane, and that is the nucleus. Now most eukaryotic cells also have mitochondria, and those that can do photosynthesis will have the membrane-bound organelle called the chloroplast. Another thing that generally all the eukaryotes have is many DNA molecules, or chromosomes, and because they have so much DNA, in order to fit it into the small space inside of the nucleus they have to wrap those DNA molecules around specialized proteins called histones. That means that a DNA molecule, which in a eukaryote is linear, has to be wrapped or bound around these histone proteins. Lastly, they have their own special kind of ribosomes that differ from the prokaryotes', and generally you could just call them eukaryotic-style ribosomes.

Now the two most common kinds of eukaryotic cells are the animal cells and the plant cells. I know in this diagram the labels are kind of small and hard to see, but let's focus in on some of the key things that you'll see in an animal cell. You'll see it has the membrane around it, with the nucleus that, as you can see, has its own membrane wrapped around it. The other key eukaryotic organelle here is the mitochondria. Now animal cells differ from plant cells in that they also have these things here, these two barrel-shaped things at 90 degrees to each other; those are called centrioles, and a common question on a Biology test is to identify organelles that are unique to animal cells, and the answer would be centrioles. Now I'm going to let you know one of the trick questions. The trick is that they'll also include an option where you can choose centrioles and mitochondria, because most kids know that plants do photosynthesis, so of course they must be using chloroplasts, and animal cells break down the sugar from photosynthesis in order to get the energy, which is done by the mitochondria. So a lot of kids will think plants have chloroplasts and only animal cells have mitochondria, and that's not true. If we take a look at this plant cell you can see it has mitochondria. Why? Because just like you, it wants to get the energy that's in the sugar that came from photosynthesis. Now photosynthesis is done by that cell there... oh! Sorry, that organelle there, called the chloroplast, and you can see it has its own double membrane around it, just like the mitochondria has two membranes, inner and outer.
Now something else that makes a plant cell different from an animal cell is that plant cells have this large, thick cell wall wrapped around them to give them the structure and stability that animal cells don't need. Remember, animal cells are all about moving, so they don't need a plant-style cell wall around them, because that would keep them boxed in. And what do we use for structure and stability? Our own skeleton. So whether you're an insect and you wear your skeleton on the outside, or you're a jellyfish and you use the water that's inside of you as a hydrostatic skeleton, we animals have our structural support built into the body as a whole; we don't need every cell to have its own skeleton. Another characteristic you'll commonly see in plant cells is this large vacuole, which is usually filled with water. That's how they're able to get all filled out, which is called being turgid, and if the water in that vacuole starts to leak out, that's when a plant cell will start to wilt, and the entire plant will start to deflate and wilt as well. That's eukaryotic cells.
Confused about fertilizer numbers? What value do they have in organic gardening? A plant needs nutrients to survive. Most of these are provided by the soil, but soil varies tremendously in nutrient amounts, soil type, pH, and nutrient availability. The three main nutrients that have been identified as absolutely necessary for plants are nitrogen (N), phosphorus (P) and potassium (K). These three are also known as macronutrients, and they are the source of the three numbers commonly found on organic fertilizer labels. The numbers found on our All-Purpose Fertilizer, for example, are 5-5-5. This is the percentage by weight of the N, P, and K found in the fertilizer. So what's so important about nitrogen, phosphorus and potassium?

Nitrogen (N) is probably the most widely recognized nutrient, known primarily for its ability to "green up" lawns. Nitrogen mainly affects vegetative growth and general health. Chlorophyll, the green substance in plants responsible for photosynthesis, is largely composed of nitrogen. Nitrogen is also used heavily in new shoots, buds and leaves. Air contains about 78% nitrogen, but atmospheric nitrogen is not readily available to plants; they must absorb it through the soil. Ammonium and nitrate are both readily available forms of nitrogen, but they are common in chemical fertilizers and leach heavily and quickly out of the soil. Nitrogen can be applied organically in many ways, including blood meal, feather meal and various liquid fertilizers such as Alaska Fish Fertilizer. Keep in mind that many organic dry fertilizers are slow-release, helping the long-term nitrogen content and building up organic matter in the soil. Nitrogen deficiency is recognized by the yellowing of older leaves and by the slowing or stopping of growth; leaves may also drop sooner than expected. Excess nitrogen is recognized by extremely fast growth, resulting in long, spindly, weak shoots with dark green leaves.

Phosphorus (P) is important for healthy roots and is used more heavily during blooming and seed set. Phosphorus is easily rendered unavailable to plants when the pH is slightly unbalanced. It is released in soil through decomposing organic matter. Phosphorus deficiency is recognized by dull green leaves and purplish stems; the plant is generally unhealthy, sometimes yellowing. A lack of blooming with lush green foliage may also indicate a lack of phosphorus. Organic phosphorus can be found in rock phosphate, bone meal and various liquid organic fertilizers such as Neptune's Harvest Fish & Seaweed.

Potassium (K), sometimes known as potash, is important for the general health of plants. It is key in the formation of chlorophyll and other plant compounds. Potassium is also known to help with disease resistance. Potassium deficiency is hard to spot from symptoms alone, but affected plants are generally sickly, with small fruit, yellowing from the older leaves upwards, and sickly blooms. Sources of organic potassium include sul-po-mag (sulfate of potash magnesia), palm bunch ash, and liquid fertilizers such as Earth Juice Meta-K.
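Since the three numbers are simply percentages by weight, working out how much actual nutrient a bag delivers is one multiplication. Here is a small sketch; the bag size is just an example:

#include <stdio.h>

/* N-P-K label numbers are percent by weight, so a 10 lb bag of 5-5-5
   contains 10 * 0.05 = 0.5 lb each of N, P, and K. */
static double nutrient_lbs(double bag_lbs, double percent)
{
    return bag_lbs * percent / 100.0;
}

int main(void)
{
    double bag = 10.0; /* a 10 lb bag of 5-5-5 all-purpose fertilizer */
    printf("N: %.2f lb\n", nutrient_lbs(bag, 5.0));
    printf("P: %.2f lb\n", nutrient_lbs(bag, 5.0));
    printf("K: %.2f lb\n", nutrient_lbs(bag, 5.0));
    return 0;
}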
Why Doesn’t the United States Use the Metric System?

In 1793, noted French scientist Joseph Dombey departed Le Havre, France, bound for Philadelphia. His mission was to meet with Thomas Jefferson and give him two of the rarest items on Earth. Unfortunately for Dombey, fate had other intentions, and storms pushed the ship he was aboard well off course. And so it was that around the time he was supposed to deliver his precious cargo to Jefferson, he found himself instead at the mercy of British pirates. Being French in this situation wasn't exactly ideal, so at first he attempted to pass himself off as Spanish, but his accent gave him away. Dombey was eventually taken to the small Caribbean island of Montserrat, where he ultimately died before he could be ransomed. So what was the precious cargo he was to have delivered as a gift to the United States? Two small copper items (of which only six sets existed on Earth at the time): standards representing a meter and a grave, the latter better known today as a kilogram.

At the time, the United States, having already become one of the first nations in the world to adopt a decimal, base-ten system for currency, was strongly considering doing the same with its system of weights and measures, to get rid of the hodgepodge of British weights and measures mixed with others also commonly used throughout the young nation. Thus, with the initial strong support of then Secretary of State Thomas Jefferson, and thanks to a desire to continue to strengthen ties between France and the United States, adoption of the new French metric system seemed close at hand. Along with a trade agreement concerning grain export to France, Dombey was to deliver the meter and grave standards and argue the system's merits to Congress, who, at the time, were quite open to adopting these units of measure. Of course, we all know how this turned out. Dombey never got a chance to make his arguments, and thanks to concerns about whether the metric system would even stick around at all in France, combined with the fact that trade between Britain and the U.S. would be hindered by such a change, the U.S. eventually abandoned efforts to adopt the metric system and mostly stuck with the British system, though U.S. Customary Units and what would become the Imperial System would soon diverge in the following decades.

But as more and more nations came to adopt this new system of weights and measures, the U.S. slowly began to follow suit. Fast-forwarding to 1866: with the Metric Act, the U.S. officially sanctioned the use of the metric system "in all contracts, dealings or court proceedings" and provided each state with standard metric weights and measures. In 1875, the United States was one of just 17 nations to sign the "Treaty of the Metre", establishing, among other things, the International Bureau of Weights and Measures to govern this system. Fast forward a little under a century later, and the full switch seemed inevitable in the United States after the Metric Study Act of 1968, a three-year study looking at the feasibility of switching the United States to the metric system. The result? A report titled "A Metric America: A Decision Whose Time Has Come", recommending the change and concluding that it could reasonably be done in as little as 10 years. Unfortunately, the public was largely either apathetic or strongly opposed to making the switch. (According to a Gallup poll at the time, 45% were against it.) This was nothing new, however.
Most of the time that a nation's people have been asked by their government to switch to the International System of Units, the general public has been largely against it, even in France itself, which went back and forth for decades on the issue, contributing to the United States' hesitation to adopt it in the early going. Brazil actually experienced a genuine uprising when the government forced the change in the late 19th century. Over a half century later, British citizens still stubbornly cling to many of the old measurements in their day-to-day lives, though they have otherwise adopted SI units. So why did all these governments frequently go against the will of their people? Arguments for the economic benefits simply won out; as in so many matters of government, what businesses want, businesses often get. So the governments ignored the will of the general public and did it anyway.

But in the U.S. the situation was different. Not having the pressure of being bordered by, and economically as bound to, one's neighbors as in Europe, and being one of the world's foremost economic powerhouses itself, the immediate economic benefit didn't seem so clear. For example, California alone, one of 50 states, would have the 5th largest economy in the world if it were its own nation. Texas and New York state aren't far behind when compared to the world's national economies, at 10th and 13th respectively, to say nothing of the other 47 states. Seeing less readily apparent economic benefit, and not having the same geographic pressures as in Europe, many big businesses and unions in the 1970s were in strong opposition to the change, citing the cost of making the switch; on the union side there was also the worry that such a change would make it easier to move jobs that formerly used customary units overseas, given that products could then more easily be purchased from abroad. Swayed, when the 1975 Metric Conversion Act was signed by President Gerald Ford, it had largely lost its teeth. While it did establish a board whose job it was to facilitate the nation's conversion and put forth various recommendations, the act did not have an official timeline and made the switch voluntary.

Nevertheless, contrary to popular belief, in the decades since, the United States has actually largely switched to the metric system; the general public (both domestic and international) just seems largely ignorant of this. The U.S. military almost exclusively uses the metric system. Since the early 1990s, the Federal government has largely been converted, and the majority of big businesses have made the switch in one form or another wherever possible. In fact, with the 1988 amendments to the Metric Conversion Act, the metric system became the "preferred system of weights and measures for United States trade and commerce". In the medical field and pharmaceuticals, the metric system is used almost exclusively. In fact, since the Mendenhall Order of 1893, even the units of measure used by the layperson in the U.S., the yard, foot, inch, and pound, have all been officially defined in terms of the meter and kilogram. Speaking of the general public side, nobody in the U.S. blinks an eye at food labels containing both metric and customary units (required thanks to the Fair Packaging and Labeling Act, with the majority of states since also allowing metric-only labels).
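Those definitions make conversion exact rather than approximate: since the international yard and pound agreement of 1959, the inch has been defined as exactly 2.54 cm and the pound as exactly 0.45359237 kg. A quick sketch using only those defined constants:

#include <stdio.h>

/* U.S. customary units are legally defined in terms of SI units:
   1 inch = 0.0254 m exactly, 1 pound = 0.45359237 kg exactly. */
#define METERS_PER_INCH 0.0254
#define KG_PER_POUND    0.45359237

int main(void)
{
    /* 1 mile = 5280 feet = 63,360 inches */
    double mile_in_km = 63360.0 * METERS_PER_INCH / 1000.0;

    printf("1 mile = %.6f km\n", mile_in_km);            /* 1.609344 km */
    printf("150 lb = %.2f kg\n", 150.0 * KG_PER_POUND);  /* 68.04 kg */
    return 0;
}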
The gram is commonly used to measure everything from the amount of flour to add to a recipe to how much marijuana one buys from a shop or, where it's still illegal, from their local dealer. And if you were to ask someone to pick up a two-liter of Dr. Pepper, or how a person did running a 10K, most everyone in the United States would know exactly what you are talking about. Beyond this, you'd be hard-pressed to find a ruler in the United States that doesn't include both inches and centimeters and their common divisors. Further, in school, both customary units and the metric system are taught. Yes, while Americans may generally have little practical need to learn a second language, most are, at least for a time, reasonably fluent in two very different systems of measurement. As with languages unpracticed, however, once out of school many lose their sense of the latter from lack of use and concrete perspective. It's one thing to know what 100 and 0 degrees Celsius refer to with respect to water; it's a whole different matter to "get" what temperature you might want to put on a jacket for. However, students who go on to more advanced science classes quickly pick up this perspective as they become more familiar with the system, and thus the scientists of America aren't at the slightest disadvantage here, also contrary to what is often stated in arguments as to why the U.S. should make the switch a bit more official than it already is. All students who go along that path become just as familiar as their European brethren, if a little later in life.

This all brings us around to why the United States hasn't made the switch to the metric system more official than it already is. There are primarily three reasons: cost, human psychology, and, at least on the general public side, little readily apparent practical reason to do so. As to cost, while there has never been a definitive study showing how much it would cost the United States to make the switch official and universal, general estimates range even upwards of a trillion dollars, all things considered. Why so high? To begin with, we'll discuss a relatively small example: road signs. Installing street signs is an incredibly expensive affair in many places for a variety of reasons. For instance, in 2011 the Washington State Department of Transportation claimed it costs anywhere from $30,000 to $75,000 PER SIGN, though they later clarified those were worst-case, most expensive scenarios, and that sometimes the signs and installation can ring in at ONLY around $10,000. Bronlea Mishler of the DOT explains:

"Installing a sign along a highway isn't quite as simple as pounding some posts into the ground and bolting on a sign — that's why the cost is so variable. There are two ways to replace a sign. One way allows us to install it under old rules; the second way requires us to follow new federal standards... The old rules apply if we are just fixing something, not building something new. Installing a sign alongside the road counts as fixing something — basically, just giving drivers more information. If we install a sign on the side of the road, it would cost: $2,000 to make the sign and buy the beams and rivets; $8,000 for two steel posts and concrete; $5,000 to clear brush and do other landscape work before and after installation; $15,000 for maintenance crews to set up traffic cones, work vehicles, program highway signs and spend the evening doing the work. Total: $30,000... The new rules apply if we're doing a new construction project.
Costs would be higher because we would have to bring everything up to the current highway code. These often involve putting up a sign bridge, a steel structure that spans the entire freeway to hold up multiple signs. Typical costs include: $2,600 to make the sign and buy the beams and rivets, because the sign must be bigger; $75,000 for the sign bridge. Total: $77,600."

WSDOT Deputy Regional Administrator Bill Vleck also noted that, beyond many of these signs needing to be special-ordered as one-offs (think of a highway sign with a city name and distance marker) and often being much larger than most sign makers produce, drastically increasing cost, some of the seemingly exorbitant costs are due to special features of the signs few know about. For instance, Vleck states, "If there's an auto accident, if a car hits that sign post and there's any kind of injury involved, the state is going to be liable, so we're looking potentially at a multi-million dollar settlement in those kind of situations... [So] it would have to be a breakaway type sign post, and it has to be specially fabricated so that if a car hits that sign, it reacts appropriately and doesn't come down and basically take out the occupants."

For reference, in 1995 it was estimated that approximately 6 million signs would need to be changed on federal and state roads. On top of that, it was noted that just shy of 3 million of the nation's roughly 4.2 million miles (6.8 million km) of public roads are actually local, with an uncertain number of signs in those regions that would also need to be changed. That said, the rather obscene costs quoted by the aforementioned Washington State DOT would likely grossly overestimate a project such as this, with prices massively reduced if special laws were passed to remove much of the red tape, and given the extreme bulk orders that would be called for here, including for the signs themselves and contracts with dedicated crews to make this happen as fast as possible. For example, in 1995, Alabama estimated it could swap out all the signs on federal highways for a mere $70 per sign ($120 today) on average. Perhaps a better rubric is Canada's switch, which swapped out around a quarter of a million signs on its then 300,000 miles (482,000 km) or so of road. The total reported cost? Only a little over $13 million (about $61 million today), or around $244 per sign in today's dollars. Extrapolating that out to the minimum 6 million signs would then run approximately $1.5 billion, plus whatever additional signs need to be swapped out on the roughly three-quarters of roads not accounted for in that 6-million-sign estimate. Not an insignificant sum, but also relatively trivial for the U.S. taxpayer to cover, at about $5 per person, plus some uncertain amount for the local road signs that need changing.

Moving on to far greater expenses: industry and wider infrastructure. While it's impossible to accurately estimate the cost of such a change to American businesses as a whole, we do get a small glimpse of the issue in a NASA report studying the feasibility of swapping the shuttle program to full metric. They determined the price tag would be a whopping $370 million for that project alone at the time, so they decided it wasn't worth the cost for little practical benefit... Now extrapolate that out to the approximately 28 million businesses in the United States, with their software, their records, their labels, machinery, employee training, etc. needing to be switched, like some sort of Y2K event on steroids.
Thus, while it's impossible to know for sure, many posit the cost could swell into the hundreds of billions of dollars, if not creep into trillion territory, in theory at least. At this point, even the most ardent supporter of the metric system in the United States may be rethinking whether it would be worth it to make the switch more official than it already is. But don't fret, metric supporters the world over! To begin with, the raw cost of making the switch doesn't actually tell the whole story here. In fact, it tells a false story: while the gross total of making the change would be astronomical, it turns out the net cost likely wouldn't be much, or anything at all. You see, beyond Australian businesses seeing, on average, a 9-14% boost directly attributed to the switch when they made it, back in the United States, when companies like IBM, GM, and Ford spent the money to make the change, they universally found that they made a profit from doing so. This was largely from being able to reduce warehouse space and equipment needs, streamline production, and lower necessary inventories, as well as from taking the opportunity to remove, at the same time, inefficiencies that had crept into their respective businesses with regard to these systems. They were also able to manage their businesses abroad and domestically to the same standards and systems more uniformly. As a very small example, GM reported it was able to reduce the number of fan belts it had to manufacture and stock from about 900 sizes to 100, thanks to everything that went into the switch. In some cases the businesses also saw new international markets opening up, both in sales and in the ability to more easily, and often more cheaply, acquire products abroad. All of this resulted in a net profit extremely quickly after investing the money in making the switch. As you might expect from these types of benefits, an estimated 30% of businesses in the United States have already largely switched to metric. Granted, these are generally larger companies, and various small businesses dealing mostly locally might not see such a benefit. However, with the increasing globalization of supply chains, many small businesses would likely still see some benefit. Unfortunately, particularly when it comes to construction, that general industry has lagged well behind others in switching, and, as you might imagine, the existing infrastructure of the nation, from roads to bridges to homes to drill bits to screws to the architectural plans for all of it, being based on customary units, would not be cheap to change, and it isn't clear what the net cost would be there. However, as in all of this, the cost could potentially be mitigated via a slow phaseout approach with grandfathering allowed, similar to what other nations did, though in most cases those switches were on a vastly smaller scale than would be seen in the United States.

All this said, we here at TodayIFoundOut would like to posit that what the international community actually finds irksome about the United States not using the metric system is not United States businesses that deal abroad, or United States scientists, or even the government, all of which largely use the metric system and all of which have little bearing on what Pierre, sitting in his mother's basement in France, is doing at a given moment. No, what upsets Pierre is that the U.S. general populace does not use the metric system in their day-to-day lives. Why is this irksome?
Beyond just the human drive for uniformity amongst one's community, in this case of the global variety, it is because English websites the world over, keen to get some of those sweet, sweet U.S. advertising dollars, cater to the U.S. audience and use the units that said audience is more familiar with, leaving those not familiar with them to Google a conversion to the units they know. The alternative is for said websites to include both, but that often makes for a break in the flow of the content, something we here at TodayIFoundOut regularly wrestle to find a proper balance with.

This brings us around to the human side of the argument. To begin with, while the United States would unequivocally see many benefits from joining the rest of the world in some good old-fashioned metric lovin', as you might expect given the lack of immediately obvious benefit to the layperson, few among the American public see much point. After all, what does it really matter if a road sign is in kilometers or miles, or if one's house is measured in square feet or square meters? While some cite the benefits of ease of conversion between units within a given system, in day-to-day life this is almost never cumbersome in the slightest. If it were, Americans would be clamoring to make the change. The argument that ease of conversion between units should be a primary driver for the public to want the change simply doesn't hold water in an era where, on the extremely rare occasion people actually need to make such a precise conversion in day-to-day life, they need do little more than say "Hey Google". And in most cases, even that isn't necessary when you're reasonably familiar with a given system.

Perhaps a poignant example of how, when you're familiar with it, a non-base-10 system of measure really isn't that complicated to deal with in day-to-day matters: consider that the world still uses 1000 milliseconds in a second, 60 seconds in a minute, 60 minutes in an hour, and 24 hours in a day. What few realize about this is that the original metric system actually attempted to simplify this as well, dividing the day into 10 hours, with 100 minutes in each hour, etc. Unfortunately, most people didn't see the benefit in switching when also factoring in having to swap out their existing clocks. Nobody has much seen a need to fix the issue since, not even the most ardent champion of the metric system for its ease of conversions compared with imperial or customary units. And while you might still be lamenting the stubbornness of Americans for not seeing the genuine benefits that would likely be realized here, we should point out that virtually every nation in the world that uses the metric system has non-metric holdover units still in relatively common use among laypeople, for the simple reason that nobody sees a reason to stop, from calories to horsepower to knots to light years and many more. Or how about this: have you ever flown on a plane almost anywhere in the world? Congratulations, you've in all likelihood unwittingly been supporting the use of something other than the metric system. You see, the pilots aboard, from French to American, use a feet-based Flight Level system for their altitude, and knots to measure their speed. These are just two standards that, much like the American public and their road signs, nobody has seen much practical reason to change.
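As an aside, the French decimal day mentioned above maps onto the standard day with simple arithmetic: 10 hours x 100 minutes x 100 seconds gives 100,000 decimal seconds spanning the same day as our 86,400 standard seconds. A small sketch of the conversion (the example clock time is arbitrary):

#include <stdio.h>

/* Convert a standard clock time to French Revolutionary decimal time:
   10 decimal hours x 100 minutes x 100 seconds = 100,000 per day. */
int main(void)
{
    int h = 15, m = 30, s = 0; /* 3:30:00 PM, standard time */

    double day_fraction = (h * 3600 + m * 60 + s) / 86400.0;
    int decimal_seconds = (int)(day_fraction * 100000.0 + 0.5);

    printf("%02d:%02d:%02d standard = %d:%02d:%02d decimal\n",
           h, m, s,
           decimal_seconds / 10000,        /* decimal hours   */
           (decimal_seconds / 100) % 100,  /* decimal minutes */
           decimal_seconds % 100);         /* decimal seconds */
    return 0;
}

So 3:30 PM works out to roughly 6:45:83 on a decimal clock, which is exactly the kind of re-learning the public declined to do.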
Now to more concrete human psychology for not making the switch, which has gradually been converting more and more Americans from general apathy to the anti-switch crowd as the decades pass- when one group of humans tells another group what to do, occasionally using terms like “idiot units” and starting flame wars in the comments of every website or video posted on the web that uses or discusses said units- you will universally get resistance if not outright hostility in response. This is not an American thing, as so often is purported- this is a human thing. Try forcing the French government to mandate by law that French is dead and English is now to be universally spoken for the sake of better international trade, economics, and relations. You might argue that in a not insignificant percentage of the world English is already the standard in such international business dealings, but that is really little different than the current situation in business in the U.S. concerning the metric system. What we’re talking about is how the general populace of France would react if the government mandated such a change, and even more so if outside nations were pressuring it. Again, it’s not an American thing- it’s a human thing. Beyond that, as anyone who’s ever done anything online is well aware- humans hate change. Loathe it. Make any change to, say, a format or style of video, no matter how small, and rest assured that even if the change is unequivocally vastly superior and the audience universally comes to agree with that, a not insignificant number of one’s audience will complain, sometimes vehemently, at first. More directly, we see this again and again throughout the history of various nations making the change to SI. Again, resistance to change is not an American thing- it’s a human thing. But fret not, world. You see, slowly but surely the United States has been converting to metric and, for most practical purposes for those outside of the United States, other than having to see it on websites (which, again, we posit is the real driver of people’s ire the world over), the switch has already been made. So much so that at this stage, while the cars made in America may say miles per hour on the speedometer, the makers of those cars are using metric to measure and build the things. The very military that defends Americans’ right to use “Freedom Units” has long since largely converted to the un-free variety. In the end, money talks, and, for much the same reason other big holdouts like the UK ultimately gave in, as American businesses who have an interest in dealing internationally continue to make the switch, they are seeing to it that the metric system creeps more and more into the daily lives of Americans. This will only continue until the inevitable complete adoption. Slowly but surely America is inching towards metric, largely without anyone domestic or abroad noticing. Want to make the switch take longer? Continue calling them “idiot units”, a mildly humorous statement from a certain point of view given that it takes more brainpower to use customary units than metric, making the latter far more tailored to idiots. And continue to start flame wars in comments consisting mostly of personal attacks rather than using the many very legitimate and rational arguments that exist as to why it would be of benefit for the people of the United States to make the switch.
In the end, we all know there is no better way to convince someone to do something than making the whole thing a religious war, with you on one side and them on the other…
If you like this article, you might also enjoy:
- Who Invented the Fahrenheit and Celsius Temperature Scales and What Zero Degrees Fahrenheit Signifies
- Why Do Screws Tighten Clockwise?
- Why Is Comfortable Air Temperature So Much Lower Than Body Temperature?
- The Evolution of the Metre
- How the 20/20 Vision Scale Works
- The United States and the Metric System
- Highway Signs- Conversion to Metric Units Could Be Costly
- CIA World Factbook Appendix G
- History of the metric system
- Mars Climate Orbiter
- Mendenhall Order
- Metrification in the U.S.
- US Constitution Art. I sec. 8
- Weights and measures standards of the United States (Judson)
- Who’s Afraid of the Metric System?
- Why Hasn’t the U.S. Gone Metric?
- Why the U.S. Hasn’t Fully Adopted the Metric System
- Why Won’t America Go Metric?
- Why America Won’t Go Metric
- America Has Been Struggling with the Metric System for 230 Years
- Fair Packaging and Labeling Act
- HR 596
- No, America Shouldn’t Go Metric
- Why the U.S. Hasn’t Adopted the Metric System
- What is the Cost of Not Going Metric
- The Metric System is Anti-Human Central Planning
- It’s Time for the U.S. To Get on Board with the Metric System
- Why Doesn’t the U.S. Use the Metric System
- Should America Adopt the Metric System?
- Metrication in the United States
- Pirates of the Caribbean- Metric Edition
- Metric Act of 1866
- Metre Convention
- United States Customary Units
- When Does the Speed Limit Come Into Effect
- New York State
- California Economy Now Ranks 5th
- Economy of Texas
HIV-Positive Does NOT Always Mean It’s AIDS
AIDS means Acquired Immune Deficiency Syndrome. This medical condition develops when there is severe damage to the body’s immune system, resulting in various serious and life-threatening infections and illnesses. HIV, or Human Immunodeficiency Virus, is the virus that harms and damages the cells of the body’s immune system. This consequently brings down the body’s ability to fight even common infections and everyday diseases. The virus is communicable and can be transmitted from person to person, whereas AIDS is a medical condition and hence can only develop in an individual, not be transmitted. Even in present times there is no cure for HIV. Although there is no cure, there are drugs that are extremely effective in treatment and can allow most people who have the virus to lead a long and healthy life. In a high number of cases, people with HIV will not even develop any AIDS-related illnesses; these people can live close to a normal lifespan. There are two types of HIV, Type I and Type II. Type I is more common in India.
AIDS is characterised by 3 main stages:
- Acute symptoms
- Clinical latency
- Severe symptoms
Most people who get infected by HIV will, within a month or two of the virus entering the body, develop an influenza- or flu-like illness. This illness generally lasts for a few weeks and is known as Primary or Acute HIV Infection. The symptoms are:
- Sore throat
- Muscle soreness
- Mouth or genital ulcers
- Swollen lymph glands, typically on the neck
- Joint pain
- Night sweats
It is during the clinical latent phase that the lymph nodes are persistently swollen. Even though the body is still infected with the virus, there are no specific signs and symptoms that surface to make its presence known. Severe symptoms include:
- Blurred and distorted vision
- Cough and shortness of breath
- White spots that stay for longer than the usual or normal time period
- Soaking night sweats
- Persistent chills, or high fever (typically more than 100°F or 38°C) lasting for weeks
- Chronic diarrhoea
- Persistent, unexplained fatigue
- Weight loss
- Skin rashes
There are several ways a person can become infected with HIV, which may then develop into AIDS. The causes are:
1. Blood transfusions: In some cases, transfusion of infected blood can transmit the virus from an infected person to a healthy one.
2. Sharing infected needles: Needles and syringes contaminated with infected blood can be a source of HIV transmission.
3. Sexual contact: This is the most common and in fact the most frequent cause of HIV transmission from an infected person to a healthy person.
4. From mother to child: The foetus of a pregnant woman can become infected with HIV because of their shared blood circulation. It may also happen that a newborn infant is infected by an already infected nursing mother while breastfeeding.
HIV is detected in saliva, serum or urine by an HIV test performed on the person. This test, as per UNAIDS/WHO policy, should be conducted with a human-rights approach and in a manner which gives due respect to ethical principles. Chief among these principles is confidentiality: the entire process of carrying out diagnostic tests and delivering the subsequent results should be kept private and confidential.
Home-Based HIV Testing and Counselling (HBHTC) is also available; it uses rapid HIV tests, with results available within 15 to 30 minutes for the person requesting the test. Those who test positive are provided counselling sessions by experts. This home-based testing and counselling is always carried out with the informed consent of the person on whom the test is conducted. Doctors perform certain tests during the diagnosis of HIV or AIDS to determine the stage of the disease or infection. The various tests are:
Antibody test: HIV antibodies appear only after a certain period of time in the body of a person infected by HIV. This time period is called the ‘window period’ and is typically between 3 weeks and 6 months; during it, the antibody test may show a false negative, which means that the results will not show the presence of any HIV antibodies even though the virus is present.
CD4 count: There are CD4 cells in our body that are specifically targeted and destroyed by HIV. For a healthy person, the CD4 count varies from 500 to more than 1000. It may so happen that the HIV infection has progressed to AIDS, even though the infected person has no symptoms, once the CD4 count in the body falls below 200.
Rapid or point-of-care tests: This way of diagnosis produces results in 20 minutes or less. Blood or oral fluid such as saliva is used to detect the presence of HIV antibodies. This test is an immunoassay and requires a follow-up if it shows a positive result. It should also be kept in mind that if this test is conducted during the window period, it may give a false negative result.
ELISA (enzyme-linked immunosorbent assay): This is a set of blood tests performed by drawing blood with a needle from the person being tested and checking it to diagnose HIV infection. A false positive result may come up, which means that the person does not necessarily have an HIV infection even if the test result is positive. This can occur in cases such as Lyme disease, syphilis, and lupus.
Viral load (PCR) test: In this test, instead of detecting the antibodies, the virus itself is directly detected. It can detect the presence of the infection as early as about 10 days after the person gets infected.
The Western Blot test is always done after a positive ELISA test to confirm the HIV infection.
There is no definitive cure for AIDS. However, certain medications, prescribed and taken correctly at the right stage of the disease, can help with a healthy and near-normal prolonged life, although this also depends on the CD4 count in the infected person’s bloodstream.
- Reverse transcriptase (RT) inhibitors
- Protease inhibitors
- Fusion inhibitors
- Integrase inhibitors
- Multidrug combinations
Avoiding AIDS is as easy as ABC: A = Abstinence; B = Be faithful; C = Condom use.
- Prevention protects your own health and that of other people through certain wise precautionary efforts.
- Spreading awareness, especially among sex workers and drug users. People from very low income groups and people without formal education should also be brought into this awareness.
- Safe and protected sex by using condoms.
- Using auto-disable syringes and never reusing needles.
- Blood transfusions only in authorised centres and blood banks.
- Providing counselling to an HIV-positive pregnant mother and making her aware of how to Prevent Parent To Child Transmission (PPTCT).
The content made available at this site is for informational purposes only and is not intended to diagnose, treat, cure or prevent any disease. BreathAndBeats.com, its team and its content partners strongly recommend that you consult a licensed medical practitioner for any medical or health condition.
Reflective practices for learning
Researchers have noticed that some learners think a great deal about their experiences but others do not. This means that by analysing their successes and mistakes, some learners learn rapidly while others miss valuable opportunities to learn and improve. Reflective thinking enables you to review and learn from previous experiences, and can also help you maintain perspective and check in with your own ambitions and values. Through reflection, you can:
- Assess your previous academic performance and identify ways to improve
- Identify effective study and learning habits to practice further
- Identify less effective habits to abandon or improve
- Evaluate situations in the moment and respond appropriately
Argyris and Schon (1977) proposed a theory to guide reflective practice. Their theory suggests two modes of reflection:
Reflection in action
When you reflect on what is happening in the present moment. This type of reflection prompts you to pause and think before you act. It is very useful in learning situations where you are under pressure (in tests and exams, for example) or in conflict situations (in group work, for example). You are likely to respond better if you delay your response and think things through first. To reflect in action:
- Consider the situation
- Take time to think about how you will respond
- Take action
Reflection on action
When you reflect on an experience after the fact. This type of reflection encourages you to think about an experience that has occurred, analyse what worked well and what didn’t, and identify ways that you might improve next time. In learning situations, it can be useful to reflect on how you performed in your assignments and courses to identify areas that require improvement. Reflection on action encourages you to learn from experiences and failures so that you can do better next time. To reflect on action:
- Consider the situation
- Think about what worked well and what didn’t
- Think about how you will respond next time to achieve a better outcome
Pause to reflect
- Think about the results of your last assignment. Were you pleased with the result? If so, what learnings can you use for your next assignment? If not, what will you do differently?
- Reflection in action requires a degree of calmness to think clearly before you respond. What are some things you might do to help you stay calm in certain situations?
Featured Animal: June 2015
By Dr. Nicki Frey
As the evenings grow longer, and we spend more time outside during the summer, we often see bats flitting around at dusk. Those that have swimming pools, or frequent their favorite swimming hole as the sun goes down, will no doubt have a story about bats swooping down to take a drink, inevitably scaring the human swimmers out of the water. We are afraid of bats because they look strange, because we don’t want to get bitten, or because of pop culture. But is there any real reason for us to have a healthy fear of bats?
There are 18 species of bats in Utah; all 18 can be found in southern Utah, while 6 species are found mostly in northern Utah, predominantly in mountainous habitats (UDWR 2015). The largest bat in Utah, the big free-tailed bat (Nyctinomops macrotis), weighs less than 1 oz (28 g), about the weight of a small handful of crackers (DePaepe, Messmer & Conover, 2010). Most of the time, bats and humans exist together without much interaction. Potential conflict occurs when bats roost inside attics, crawl spaces, or other structures that place them close to human contact. Sometimes, bats fly into homes at night through open windows, creating a potentially hazardous situation for both the bat and the humans.
Animal bites are nothing to joke about; one can get any number of diseases and infections as a result of being bitten by a wild animal (or a domestic one, for that matter; Conover & Vail, 2015). However, one of the things that makes bat bites potentially dangerous is that many people who have been bitten or scratched by a bat never know it, and therefore don’t seek proper medical attention when they need to. The danger in being bitten or scratched by a bat is the possibility of contracting rabies. Rabies cannot be transmitted through intact skin, but does enter the body through a bite or scratch. Additionally, mucus from bats can become airborne while they hibernate and be inhaled by people exploring caves, thereby transmitting the rabies virus (Conover and Vail, 2015). At first, the symptoms of rabies are similar to influenza. The symptoms then follow a progression of hypersensitivity (extreme sensitivity to light, air, or touch) and hyperactivity that eventually leads to paralysis and finally death. The symptoms of the virus do not present themselves for at least 10 days, and up to several weeks, possibly even several months. By the time a person is exhibiting the symptoms of rabies, death is likely. Of the 33 cases of rabies in the United States from 2002-2011, the fatality rate was 91% (Conover and Vail, 2015). While possible, the risk of becoming infected by rabies is low in the western United States, because most people receive vaccinations in time. Of the known rabies cases in the United States from 2002-2011, nearly half were known to be transmitted by a bat bite or scratch (Conover and Vail, 2015). Many bats have been previously exposed to the rabies virus, which indicates that the virus continues to circulate through bat populations. However, only about 6% of bats that have been submitted for testing actually had rabies. The problem is that one cannot tell by looking at a bat whether it has rabies or not. The Center for Disease Control recommends seeking professional medical treatment for anyone scratched or bitten by a bat, regardless of its apparent health (http://www.cdc.gov/rabies/exposure/animals/bats.html).
If you are bitten or scratched by a bat (or any wild mammal), follow these steps:
- Wash the wound immediately with soap, water, and antiviral antiseptic for at least 15 minutes.
- Seek medical attention. Your physician will consult with local public health authorities to determine the appropriate steps to take.
- Often, rabies vaccination may be required. This is a series of 4 doses of rabies vaccine. Since 1980, there have been no documented cases of rabies in the US among patients who have completed this series of vaccinations.
- If you have been bitten or scratched by a bat, the Center for Disease Control recommends the vaccination series.
- Avoid any bat that is active during the day, attacking other animals, unable to fly, or resting on the ground.
- Never handle a bat with your bare hands, even if it is behaving normally.
- If you think you might have been scratched or bitten, seek prompt medical attention.
- If a bat is found around small children or those unable to express themselves (i.e., you don’t know whether they’ve been bitten or not), especially a bat showing signs of illness, seek prompt medical attention.
Center for Disease Control [CDC]. (2015). Bats. Retrieved from http://www.cdc.gov/rabies/bats/index.html
Conover, M. R., and R. M. Vail. (2015). Human Diseases from Wildlife. CRC Press: Boca Raton, Florida.
Utah Division of Wildlife Resources [UDWR]. (2015). Vertebrate Animals. Retrieved from http://dwrcdc.nr.utah.gov/rsgis2/Search/SearchVerts.asp.
DePaepe, V., T. A. Messmer, and M. R. Conover. (2010). Bats. Retrieved from http://digitalcommons.usu.edu/cgi/viewcontent.cgi?article=2009&context=extension_histall.
Metric System -- One of the reasons measurement can be complicated is that there is more than one system in use. The metric system is based on powers of 10, an approach called decimalization. The metric system has been the preferred European and scientific method of measuring since the 18th century, and its modern standardized form is the International System of Units (SI). Because the metric system is based on powers of 10, units are easier to align. Scientists use the metric system as a way to have a common measurement between countries and over time. Scientists also use notation that makes it easier to conceptualize distances, particularly when these distances are large. Mathematical examples include:
1. If Mike needed a desk that was 5 feet long by 4 feet wide, how many inches of trim would he need for the whole desk? If trim is measured in metric units, not inches or feet, additional calculations would need to be made. The math would be 2 × (5 + 4) = 18-foot perimeter for the trim, and there are 12 inches per foot, so 18 × 12 = 216 inches; converted to metric, this results in about 5.49 meters, or roughly 5.5 m of trim. Since metric is more international in scope, Mike's chances of pricing and finding appropriate materials are greater.
2. In scientific notation, it is easier to use powers of 10 for very large or very small numbers. For instance, in scientific notation, 420,000 becomes 4.2 × 10^5, which is much easier to notate when dealing with materials that require numerical notation.
Part 1B - Distances -- Particularly in the sciences that deal with very large or very small distances, some units are measured differently in order to make their meaning much clearer. For example, if we measure the distance from the Earth to the Moon as 240,000 miles, and the distance from the Earth to the Sun as 94,000,000 miles, we can conceptually understand these terms and compare the differences. Even if we make it easier to read and write in scientific notation, we can still see an easy relationship: the distance from the Earth to the Moon is 2.4 × 10^5 miles, while the distance from the Earth to the Sun is 9.4 × 10^7 miles. When dealing with astronomical concepts, though, distances increase to trillions of miles -- too many zeros. In this case, we use light years and astronomical units (AU). AU sets the Earth-to-Sun distance as 1, so Earth to Venus is 0.72 AU. When distances become even larger, it is more understandable to measure in light years, the distance a particle of light travels in a year: roughly 6 trillion miles or 10 trillion kilometers. A better example of the need for alternative notation comes when we think about the nearest star system to Sol, Alpha Centauri. We can express this distance as 4.3 light years, or about 25 trillion miles (2.5 × 10^13 miles). The terms defined are: 1) Light year = an astronomical unit of measurement that is equal to the distance light travels in a vacuum in one year, about 6 trillion miles or 10 trillion kilometers; 2) AU = astronomical unit, based on the distance from the Earth to the Sun, about 92,955,807 miles (9.3 × 10^7 miles).
Part 2A - Apparent magnitude (or stellar magnitude) is a measure of brightness as seen by someone observing an object from Earth, adjusted to the value it would have if there were no atmosphere…
Sources Used in Document: Seeds, M., Backman, D. (2012). Horizons: Exploring the Universe, 12th ed. Boston,
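The corrected arithmetic above is easy to verify mechanically. Here is a minimal Python sketch (our own illustration, using commonly cited approximate constants rather than figures from the essay's sources) covering both the trim conversion and the scientific-notation comparisons:

```python
# Commonly cited approximate constants.
INCHES_PER_FOOT = 12
METERS_PER_INCH = 0.0254
MILES_PER_LIGHT_YEAR = 5.88e12  # roughly 6 trillion miles

# Desk trim: a 5 ft x 4 ft top has perimeter 2 * (5 + 4) = 18 ft.
perimeter_in = 2 * (5 + 4) * INCHES_PER_FOOT      # 216 inches
perimeter_m = perimeter_in * METERS_PER_INCH      # ~5.49 meters
print(f"{perimeter_in} in = {perimeter_m:.2f} m of trim")

# Scientific notation makes the astronomical comparison readable.
distances_mi = {
    "Earth-Moon": 240_000,                             # 2.4e5 miles
    "Earth-Sun": 94_000_000,                           # 9.4e7 miles
    "Sun-Alpha Centauri": 4.3 * MILES_PER_LIGHT_YEAR,  # ~2.5e13 miles
}
for name, miles in distances_mi.items():
    print(f"{name}: {miles:.1e} miles")
```

Running it prints 216 in = 5.49 m of trim, and the three distances as 2.4e+05, 9.4e+07 and 2.5e+13 miles, which is exactly the readability gain scientific notation is meant to provide.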
UA engineer studying Mars terrain simulants for potential mission to the red planet10/21/2019 While the blockbuster movie “The Martian” showed us how challenging the planet might be to navigate in our imagination, a real exploration of Mars involves thinking and planning beyond our wildest dreams. Would you believe that one of the smallest details to make a mission to Mars succeed is knowing if tires on a rover will work properly? Learning the details about the surface of Mars can take NASA years of time and energy to determine. Heather Oravec, Ph.D., a mechanical engineering research associate professor at The University of Akron (UA) who works at NASA’s Glenn Research Center in Cleveland on the Surface Mobility Team, has been tasked with identifying the appropriate terrain requirements of Mars. Her research will help design the relevant terrain conditions for tire performance testing for a potential mission to retrieve soil samples that were collected and left for pickup by NASA’s Mars 2020 Rover. “I will be doing an in-depth review of what we currently know about the Martian surface conditions and a review of the Martian simulants currently in existence,” said Oravec. “Specifically, I need to look at the geotechnical or mechanical properties of these simulants and see how closely they mimic that of the Martian terrain, especially in the areas we know will be difficult to traverse. Such things as wind-blown areas and ripples with loose sandy soil might cause the rover to get stuck.” It’s a major piece in NASA’s plan to return to the fourth planet from the sun, some 34 to 250 million miles away from Earth, depending on orbit. The Surface Mobility Team that Oravec is a part of is developing specialized tires for rovers that will traverse across Mars. The goal is to have a working test facility complete with a terrain simulant by the end of next year. How do you even begin to create a terrain simulant of a planet no human has ever stepped foot on? “Developing soil for a planet that humans have never stepped foot on is always a challenge,” Oravec said. “We have a limited amount of information from previous missions to Mars that we can go off of. So even though we don't have any actual Martian soil samples to learn from, there is plenty of other information that we have collected over the years. We take what we have learned from these past experiences and start our design from there.” When studying mobility, for example, researchers will analyze the size range and shape of soil particles, which will give a good indication of how the particles will interact and behave in bulk. They’ll take Earth soils, such as sand, clay, pebbles, and rocks to formulate a “recipe” of those soils that will aim to match the geotechnical properties of the Martian soils. Tires from previous rovers aren’t being used again because past missions have indicated older rover tire designs aren’t the most durable. For the Mars 2020 rover, NASA modified the design to make the tires’ skin thicker and the body narrower to reduce mass. Future rovers will have airless tires, which feature many load-bearing springs. This will help reduce mass and increase tractive performance. Literature has typically described Martian soil similar to both dry deserts and volcanic lava fields with loose sands, rocky outcrops and plateaus, respectively. Based on past missions, areas of aeolian (wind-blown) deposits consist of atmospheric dust. 
Rovers Spirit and Opportunity, however, observed dunes, ripples, and bedforms composed of sand with minimal dust. If the simulant turns out to be no good, the ultimate consequence could be mission failure. Though the Spirit rover survived well past its design life, it ultimately got stuck in loose sandy soil on Mars. “There is so much we can learn from having actual Martian soil samples here on Earth that it is extremely important for the future of space exploration that this mission succeed,” said Oravec.
Media contact: Alex Knisely, 330-972-6477 or firstname.lastname@example.org.
A geometric sequence has a first term $a$ and a common ratio $r$, so its first seven terms are $a, ar, ar^2, ar^3, ar^4, ar^5, ar^6$. Because there are only seven terms, it isn’t unreasonable to go ahead and add them together. But let’s do it in a way that can be generalised into a formula.
First, let’s give the sum a name. As we are adding seven terms, we call the sum $S_7$:
$$S_7 = a + ar + ar^2 + ar^3 + ar^4 + ar^5 + ar^6.$$
Notice that the first term is not multiplied by $r$ and that the last term is the $ar^6$ term. Next, we multiply both sides by the common ratio $r$:
$$rS_7 = ar + ar^2 + ar^3 + ar^4 + ar^5 + ar^6 + ar^7.$$
Then we subtract $rS_7$ from $S_7$:
$$S_7 - rS_7 = a - ar^7.$$
Notice that on the left hand side we just have $S_7(1 - r)$, and on the right hand side all but two of the terms cancel out. This is a tidy calculation to perform. Dividing through by $1 - r$ and taking out the common factor $a$ in the numerator, we find
$$S_7 = \frac{a(1 - r^7)}{1 - r}.$$
Finally, we are using 7 terms here, but we could be adding any number of terms, say $n$ terms. Let’s replace the 7 with $n$:
$$S_n = \frac{a(1 - r^n)}{1 - r}.$$
And there is the formula. Notice that to use this formula, it is not necessary to write out the terms of the sequence. The information required is $n$, the number of terms; the first term $a$; and the common ratio $r$.
Find eleven questions to practice this math at the end of this mathisfun page.
Next: Sigma Notation
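As a quick numerical check of the formula (our own illustration, with an arbitrarily chosen sequence), here is a small Python sketch that sums a geometric sequence by brute force and compares the result with $S_n = a(1-r^n)/(1-r)$:

```python
def geometric_sum_direct(a, r, n):
    """Sum the first n terms a, ar, ar^2, ... term by term."""
    return sum(a * r**k for k in range(n))

def geometric_sum_formula(a, r, n):
    """Closed form S_n = a(1 - r^n) / (1 - r), valid for r != 1."""
    return a * (1 - r**n) / (1 - r)

# Example: first term a = 3, common ratio r = 2, seven terms.
print(geometric_sum_direct(3, 2, 7))   # 381
print(geometric_sum_formula(3, 2, 7))  # 381.0
```

Both calls agree, as the derivation above says they must whenever $r \neq 1$.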
BOULDER -- A new Earth-orbiting monitor is providing the most complete view assembled to date of the world's air pollution as it churns through the atmosphere, crossing continents and oceans. Policy makers and scientists now have, for the first time, a way to identify the major sources of air pollution and to closely track where pollution travels year round and anywhere on Earth. The first observations are being released Wednesday at the American Geophysical Union's spring meeting in Boston, Massachusetts. Launched in December 1999, MOPITT (Measurements of Pollution in the Troposphere) tracks the air pollutant carbon monoxide from aboard NASA's Terra spacecraft as it circles the Earth from pole to pole 16 times daily. Scientists at the National Center for Atmospheric Research (NCAR) in Boulder, Colorado, are blending the new data with output from a computer model of Earth's atmosphere to develop the world's first global maps of long-term lower-atmosphere pollution. MOPITT demonstrates a new capability to make global observations of carbon monoxide, which is both a toxin and a representative tracer of other types of pollution, says NCAR's John Gille, lead U.S. investigator. "With these new observations, we clearly see that air pollution is much more than a local problem. It's a global issue." Much human-generated air pollution is produced from large fires and then travels great distances, affecting areas far from the source, according to Gille. "MOPITT information will help us improve our understanding of the linkages between air pollution and global environmental change, and it will likely play a pivotal role in the development of international environmental policy," says atmospheric chemist Daniel Jacob of Harvard University, who used MOPITT data this spring in a major field campaign to study air pollution from Asia. The first set of MOPITT global observations, from March to December 2000, has captured extensive air pollution generated by forest fires in the western United States last summer. Emissions from the burning of fossil fuels for home heating and transportation, a major source of air pollution during the wintertime in the Northern Hemisphere, can be seen wafting across much of the hemisphere. The most dramatic features, however, are the immense clouds of carbon monoxide from forest and grassland fires in Africa and South America. The plumes travel rapidly across the Southern Hemisphere as far as Australia during the dry season. Gille was surprised to find a strong source of carbon monoxide in Southeast Asia during April and May 2000. The new maps show air- pollution plumes from this region traveling over the Pacific Ocean to North America, often at fairly high concentrations. While fires are the major contributor, Gille suspects that at times industrial sources may also contribute to these events. Although MOPITT cannot distinguish between individual industrial sources in the same city, it can map different sources that cover a few hundred square miles. The results are accurate enough to differentiate air pollution from a large metropolitan area, for example, from a major fire in a national forest. NCAR scientist Jean-Francois Lamarque helped create MOPITT's fully global maps of carbon monoxide by blending information from the satellite measurements with output from an atmospheric chemistry model developed at NCAR. "Most of the information contained in the maps comes from the data, not the model," Lamarque explains, "but the model fills in the blanks in a very smart way." 
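As an illustration only (this is not the MOPITT/NCAR code, whose details are not given here), the kind of blending Lamarque describes can be sketched in its simplest one-point form: weight the model forecast and the observation by their error variances, leaning on the observation where the model is uncertain and on the model where data are missing.

```python
def assimilate(model_value, obs_value, model_var, obs_var):
    """One-point optimal-interpolation update.

    The gain K weights the observation more heavily where the model
    is uncertain; with no observation, the model value stands.
    """
    K = model_var / (model_var + obs_var)
    return model_value + K * (obs_value - model_value)

# Hypothetical CO mixing ratios (ppbv): the model predicts 90,
# the satellite retrieval says 120 and is the more trusted of the two.
print(assimilate(90.0, 120.0, model_var=400.0, obs_var=100.0))  # 114.0
```

Real systems do this for millions of grid points at once, with spatial error correlations, but the principle of filling the blanks with the model while staying close to the data is the same.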
The blending technique, called data assimilation, also enables scientists to work backwards from the observations to pinpoint pollution sources, a major goal of the experiment. In the United States carbon monoxide is regulated at ground level by the Environmental Protection Agency. MOPITT observes carbon monoxide in the atmosphere two miles above the surface, where it interacts with other gases to form ozone, another human health hazard and a greenhouse gas. Carbon monoxide can rise to higher altitudes, where it is blown rapidly for great distances, or it can sink to the surface, where it may become a health hazard. Carbon monoxide is produced through the incomplete burning of fossil fuels and combustion of natural organic matter, such as wood. By tracking carbon monoxide plumes, scientists are able to follow other pollutants, such as nitrogen oxides, that are produced by the same combustion processes but cannot be directly detected from space. Gille and his team at NCAR developed the software to retrieve and analyze the data. James Drummond and colleagues at the University of Toronto developed the instrument. NCAR is a national facility managed by the University Corporation for Atmospheric Research (UCAR) under primary sponsorship by the National Science Foundation. Terra is part of NASA's Earth Observing System (EOS). Materials provided by National Center For Atmospheric Research/University Corporation For Atmospheric Research. Note: Content may be edited for style and length.
Anaximenes of Miletus (Greek: Ἀναξιμένης ὁ Μιλήσιος; c. 585 – c. 528 BC) was an Ancient Greek Pre-Socratic philosopher active in the latter half of the 6th century BC. One of the three Milesian philosophers, he is identified as a younger friend or student of Anaximander. Anaximenes, like others in his school of thought, practiced material monism. This tendency to identify one specific underlying reality made up of a material thing is what Anaximenes is principally known for today. Anaximenes and the Arche While his predecessors Thales and Anaximander proposed that the archai (singular: arche, meaning the underlying material of the world) were water and the ambiguous substance apeiron, respectively, Anaximenes asserted that air was this primary substance of which all other things are made. The choice of air may seem arbitrary, but Anaximenes based his conclusion on naturally observable phenomena in the processes of rarefaction and condensation. When air condenses it becomes visible, as mist and then rain and other forms of precipitation. As the condensed air cools, Anaximenes supposed that it went on to form earth and ultimately stones. In contrast, water evaporates into air, which ignites and produces flame when further rarefied. While other philosophers also recognized such transitions in states of matter, Anaximenes was the first to associate the quality pairs hot/dry and cold/wet with the density of a single material and add a quantitative dimension to the Milesian monistic system. The origin of the Cosmos Having concluded that everything in the world is composed of air, Anaximenes used his theory to devise a scheme that explains the origins and nature of the earth and the surrounding celestial bodies. Air felted to create the flat disk of the earth, which he said was table-like and behaved like a leaf floating on air. In keeping with the prevailing view of celestial bodies as balls of fire in the sky, Anaximenes proposed that the earth let out an exhalation of air that rarefied, ignited and became the stars. While the sun is similarly described as being aflame, it is not composed of rarefied air like the stars, but rather of earth like the moon; its burning comes not from its composition but rather from its rapid motion. Similarly, he considered the moon and sun to be flat and floating on streams of air. In his theory, when the sun sets it does not pass under the earth, but is merely obscured by higher parts of the earth as it circles around and becomes more distant. Anaximenes likens the motion of the sun and the other celestial bodies around the earth to the way that a cap may be turned around the head. Anaximenes used his observations and reasoning to provide causes for other natural phenomena on the earth as well. Earthquakes, he asserted, were the result either of lack of moisture, which causes the earth to break apart because of how parched it is, or of superabundance of water, which also causes cracks in the earth. In either case the earth becomes weakened by its cracks, so that hills collapse and cause earthquakes. Lightning is similarly caused by the violent separation of clouds by the wind, creating a bright, fire-like flash. Rainbows, on the other hand, are formed when densely compressed air is touched by the rays of the sun. These examples show how Anaximenes, like the other Milesian philosophers, looked for the broader picture in nature.
They sought unifying causes for diversely occurring events, rather than treating each one on a case-by-case basis, or attributing them to gods or to a personified nature. The Anaximenes crater on the Moon is named in his honor.
In 1909 Rutherford's gold foil experiment changed the way people viewed the atom forever. The experiment was conducted at the University of Manchester by Hans Geiger and Ernest Marsden under the direction of Ernest Rutherford; fittingly, it is also known as the Geiger-Marsden experiment. It was revolutionary in that it proved, for the first time, the existence of the atomic nucleus, thus killing the idea of the plum pudding model. Geiger, Marsden, and Rutherford's work will be studied forever. The plum pudding model proposed by J.J. Thomson said that negative electrons were placed throughout the atom and everything else (the pink in the diagram) was positively charged "pudding" to balance the negative electrons. The experiment was set up by placing an emitter that shot particles, produced by the radioactive decay of radium, directly at a thin sheet of gold foil. Gold foil was used because gold is inert and malleable: in order for the particles to go through the material, it had to be really thin, and gold was the perfect metal. Surrounding the foil was a circle of zinc sulfide, which would detect when and where the alpha particles were deflected. Rutherford hypothesized that the particles would pass straight through the foil or at most deflect by a couple of degrees. If this had occurred, it would have measured the distribution of charge through the "plum pudding" atom. This assumption was based on the theory that positive and negative charges were spread evenly through the atom, making their forces weak and allowing for very little, if any, deflection. What ended up happening was that when the slit through which the particles passed was made larger (greater than two nanometers), more particles were able to get through, and the majority of these particles passed straight through the foil. Only one out of 8,000 was deflected at a very dramatic angle. These angles were even greater than 90 degrees, and in some cases the particles completely backfired. In conclusion, this allowed Rutherford to prove J.J. Thomson's plum pudding model false. Rutherford found that an atom is mostly made up of empty space with a concentrated charge in the middle. He was able to conclude this because the majority of the particles passed through the empty space of the atoms and were not noticeably deflected, while the positive charge of the alpha particles was repelled by the one concentrated region of positive charge in the atom. This focused and concentrated area was named the nucleus. Rutherford, along with Geiger and Marsden, came up with a new model of the atom. It is because of these scientists' curiosity that the model of the atom kept evolving and became more accurate.
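To get a feel for why only about one alpha particle in several thousand bounced back, one can plug numbers into the classical Rutherford scattering relation. The sketch below is our own back-of-the-envelope illustration; the foil thickness and alpha energy are assumed round values, not figures from the original experiment, so it reproduces the order of magnitude rather than the exact 1-in-8,000 count:

```python
import math

# Assumed round values (not Geiger and Marsden's exact setup).
E_MeV = 5.0            # alpha particle kinetic energy
t_m = 4e-7             # gold foil thickness, ~0.4 micrometres
Z_alpha, Z_gold = 2, 79
K_MeV_fm = 1.44        # e^2 / (4*pi*eps0) in MeV*fm

# Distance of closest approach in a head-on collision.
d_fm = Z_alpha * Z_gold * K_MeV_fm / E_MeV        # ~45.5 fm

# Impact parameter for a 90-degree deflection: b = (d/2) / tan(theta/2).
b_m = (d_fm / 2) * 1e-15 / math.tan(math.radians(45))

# Number density of gold atoms: density * Avogadro / molar mass.
n_per_m3 = 19300 * 6.022e23 / 0.197

# Fraction of alphas deflected by more than 90 degrees: n * t * pi * b^2.
fraction = n_per_m3 * t_m * math.pi * b_m**2
print(f"about 1 alpha in {1 / fraction:,.0f} is deflected past 90 degrees")
```

With these assumptions the answer comes out in the tens of thousands, the same order as the famous observation, and the prediction collapses entirely if the positive charge is smeared out rather than concentrated in a nucleus.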
Woodrow Wilson was president of America when the Versailles Treaty was signed. Unlike Georges Clemenceau, he believed that a more moderate approach was needed to Germany after her defeat in World War One. In this sense, he was similar to Lloyd George of Britain, who privately wanted Germany to remain relatively strong so that the country could act as a bulwark against the communism that he believed would spread from Russia. Woodrow Wilson was born in 1856. He became America’s 28th president. His father was a strict Christian minister and Woodrow Wilson was brought up in a household associated with such beliefs. He was educated at Princeton and then at the University of Virginia and Johns Hopkins University. In 1890, he was appointed a professor at Princeton, a position he held until 1902. From 1902 to 1910, Woodrow Wilson was president of Princeton. In 1910, Woodrow Wilson was elected governor of New Jersey for the Democrats. He swiftly gained national fame for his social reforms in New Jersey and in 1912 won the presidential election. As president, Woodrow Wilson concentrated on issues that mattered to him – such as anti-trust legislation to ensure that the people of America got a system that was fair to them. Woodrow Wilson also embarked on reorganising the federal banking system. From 1914 to 1917, he observed a strict neutrality in the Great War, but the activities of German U-boats forced his hand, especially the sinking of the ‘Lusitania’ in 1915, which killed 128 American citizens. On April 6th 1917, America entered the war as an “associated power” rather than as an ally of France and Britain. Ironically, Woodrow Wilson had won the 1916 national election on the slogan “He kept us out of war”. During the peace talks at Versailles, Woodrow Wilson presented a moderate voice. He had no doubts that Germany should be punished, but he wanted those in power punished – not the people. In January 1918, Woodrow Wilson had issued his ‘Fourteen Points’ as a basis for peace. He also had an idea for a League of Nations to maintain world peace. In international affairs, Woodrow Wilson proved somewhat naïve. He wanted to place the trust for future world peace in the hands of the League of Nations, yet America refused to join it. By refusing to join the League, the American political structure seriously weakened the forerunner of the United Nations. Woodrow Wilson spent time after 1919 criss-crossing America trying to ‘sell’ the idea of the League. On September 26th 1919, he collapsed and his political career ended suddenly. He was an invalid for the rest of his life and died in 1924. Woodrow Wilson was an idealist whose plan for a League was permanently weakened by America’s refusal to join it. His Fourteen Points were fine on paper but no nation was willing to substantially support them. As a Democrat, he had to deal with a Senate that had a Republican majority after the end of the war – and party loyalty meant that his ideas for a world that would be peaceful were killed off at a political level.
A mountain is a landform that extends above the surrounding terrain in a limited area, with a peak. A mountain is generally steeper than a hill, but there is no universally accepted standard definition for the height of a mountain or a hill, although a mountain usually has an identifiable summit. Mountains cover 64% of Asia, 36% of North America, 25% of Europe, 22% of South America, 17% of Australia, and 3% of Africa. As a whole, 24% of the Earth's land mass is mountainous, and 10% of people live in mountainous regions. Most of the world's rivers are fed from mountain sources, and more than half of humanity depends on mountains for water. The adjective montane is used to describe mountainous areas and things associated with them. Orology is the specialized field that studies mountains, though the term has mostly been replaced by "mountain studies". (Not to be confused with horology.) Some authorities define a mountain as a peak with a topographic prominence over a defined value: for example, according to the Britannica Student Encyclopedia, the term "generally refers to rises over 2,000 feet (610 m)". The Encyclopædia Britannica, on the other hand, does not prescribe any height, merely stating that "the term has no standardized geological meaning". The height of a mountain is measured as the elevation of its summit above mean sea level. The Himalayas average 5 km above sea level, while the Andes average 4 km. The highest mountain on land is Everest, in the Himalayas. Other definitions of height are possible. The peak that is farthest from the center of the Earth is Chimborazo in Ecuador. At about 6,300 m above sea level it is not even the tallest peak in the Andes, but because Chimborazo is very close to the equator and the Earth bulges at the equator, it is farther away from the Earth's center than Everest. The peak that rises farthest from its base is Mauna Kea on Hawaii, whose summit stands roughly 10,000 m above its base on the floor of the Pacific Ocean. Mount Lamlam on Guam also lays claim to being the tallest mountain as measured from its base. Although its peak is only about 400 m above sea level, it measures roughly 11,500 m to its base at the bottom of the Marianas Trench. Even though Everest is the highest mountain on Earth today, there have been much taller mountains in the past. During the Precambrian era, the Canadian Shield once had mountains some 12,000 m in height that are now eroded down into rolling hills. These formed by the collision of tectonic plates much like the Himalaya and the Rocky Mountains. At roughly 22 km high (Fraknoi et al., 2004), the tallest known mountain in the solar system is Olympus Mons, located on Mars; it is an ancient volcano. Volcanoes have been known to erupt on other planets and moons in our solar system, and some of them erupt ice instead of lava (see Cryovolcano). Several years ago, the Hale telescope recorded the first known images of a volcano erupting on a moon in our solar system. High mountains, and mountains located closer to the Earth's poles, have elevations that exist in colder layers of the atmosphere. They are consequently often subject to glaciation and erosion through frost action. Such processes produce the popularly recognizable mountain peak shape. Some of these mountains have glacial lakes, created by melting glaciers; for example, there are an estimated 3,000 glacial lakes in Bhutan. Sufficiently tall mountains have very different climatic conditions at the top than at the base, and will thus have different life zones at different altitudes.
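The Chimborazo-versus-Everest comparison above can be checked with the standard formula for the geocentric radius of the WGS-84 ellipsoid. The summit latitudes and elevations in the Python sketch below are commonly cited approximations, so treat the output as illustrative:

```python
import math

def earth_radius_km(lat_deg):
    """Geocentric radius of the WGS-84 ellipsoid at a given latitude."""
    a, b = 6378.137, 6356.752  # equatorial and polar radii, km
    phi = math.radians(lat_deg)
    num = (a**2 * math.cos(phi))**2 + (b**2 * math.sin(phi))**2
    den = (a * math.cos(phi))**2 + (b * math.sin(phi))**2
    return math.sqrt(num / den)

# (summit latitude, summit elevation in km) -- approximate values.
peaks = {"Everest": (27.99, 8.849), "Chimborazo": (-1.47, 6.263)}
for name, (lat, elev_km) in peaks.items():
    dist = earth_radius_km(lat) + elev_km
    print(f"{name}: ~{dist:,.1f} km from Earth's centre")
```

Chimborazo comes out roughly 2 km farther from the Earth's centre than Everest, despite being about 2.6 km lower above sea level: the equatorial bulge does all the work.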
The flora and fauna found in these zones tend to become isolated, since the conditions above and below a particular zone will be inhospitable to those organisms. These isolated ecological systems are known as sky islands and/or microclimates. Cloud forests are forests on mountainsides that attract moisture from the air, creating a unique ecosystem. Very tall mountains may be covered in ice or snow. Mountains are colder than lower ground because the Sun heats Earth from the ground up. The Sun's radiation travels through the atmosphere to the ground, where Earth absorbs the heat. Air closest to the Earth's surface is, in general, warmest (see lapse rate for details). Air as high as a mountain is poorly warmed and, therefore, cold. Air temperature normally drops 1 to 2 degrees Celsius (1.8 to 3.6 degrees Fahrenheit) for each 300 meters (1000 feet) of altitude (a rule of thumb worked through in the short sketch below). Mountains are generally less preferable for human habitation than lowlands; the weather is often harsher, and there is little level ground suitable for agriculture. At very high altitudes, there is less oxygen in the air and less protection against solar radiation (UV). Acute mountain sickness (caused by hypoxia - a lack of oxygen in the blood) affects over half of lowlanders who spend more than a few hours above 3,500 meters (11,483 feet). A number of mountains and mountain ranges of the world have been left in their natural state, and are today primarily used for recreation, while others are used for logging, mining, grazing, or see little use of any sort at all. Some mountains offer spectacular views from their summits, while others are densely wooded. Summit accessibility varies from mountain to mountain; height, steepness, latitude, terrain, weather, and the presence or absence of roads, lifts, or tramways are all factors that affect accessibility. Hiking, backpacking, mountaineering, rock climbing, ice climbing, downhill skiing, and snowboarding are recreational activities typically enjoyed on mountains. Mountains that support heavy recreational use (especially downhill skiing) are often the locations of mountain resorts. Mountains can be characterized in several ways. Some mountains are volcanoes and can be characterized by the type of lava and eruptive history. Other mountains are shaped by glacial processes and can be characterized by their glaciated features. Still others are typified by the faulting and folding of the Earth's crust, or by the collision of continental plates via plate tectonics (the Himalayas, for instance). Shape and placement within the overall landscape also define mountains and mountainous structures (such as butte and monadnock). Finally, many mountains can be characterized by the type of rock that makes up their composition. More information on mountain types can be found in List of mountain types. A mountain is usually produced by the movement of lithospheric plates, either orogenic movement or epeirogenic movement. The compressional forces, isostatic uplift and intrusion of igneous matter force surface rock upward, creating a landform higher than the surrounding features. The height of the feature makes it either a hill or, if higher and steeper, a mountain. The absolute heights of features termed mountains and hills vary greatly according to an area's terrain. The major mountains tend to occur in long linear arcs, indicating tectonic plate boundaries and activity. Two types of mountain are formed depending on how the rock reacts to the tectonic forces – block mountains or fold mountains.
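Returning to the lapse-rate rule of thumb quoted above (1 to 2 °C lost per 300 m of altitude gained), here is a minimal sketch using the mid-range value of 1.5 °C per 300 m; the function name and example numbers are our own:

```python
def temp_at_altitude(base_temp_c, altitude_gain_m, lapse_c_per_300m=1.5):
    """Estimate air temperature after a climb, using the rule of thumb
    of 1 to 2 degrees C lost per 300 m (mid-range 1.5 by default)."""
    return base_temp_c - (altitude_gain_m / 300) * lapse_c_per_300m

# Climbing 3,000 m above a 20 C valley floor leaves about 5 C:
print(temp_at_altitude(20.0, 3000))                         # 5.0
# With the steeper 2 C / 300 m rate, the same climb ends at 0 C:
print(temp_at_altitude(20.0, 3000, lapse_c_per_300m=2.0))   # 0.0
```

This is why a summit can hold snow while the valley below is in shirtsleeve weather.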
The compressional forces in continental collisions may cause the compressed region to thicken, so the upper surface is forced upward. In order to balance the weight of the earth's surface, much of the compressed rock is forced downward, producing deep "mountain roots" (see Earth, Press and Siever, p. 413). These roots are deeply embedded in the ground, so that a mountain has a peg-like shape (see Anatomy of the Earth, Cailleux, p. 220). Mountains therefore form downward as well as upward (see isostasy). However, in some continental collisions part of one continent may simply override part of the other, crumpling in the process. Block mountains are created when large areas are widely broken up by faults, creating large vertical displacements. This occurrence is fairly common. The uplifted blocks are block mountains or horsts. The intervening dropped blocks are termed graben: these can be small or form extensive rift valley systems. This form of landscape can be seen in East Africa, the Vosges, the Basin and Range province of Western North America and the Rhine valley. These areas often occur when the regional stress is extensional and the crust is thinned. The mid-ocean ridges are often referred to as undersea mountain ranges due to their bathymetric prominence. Where rock does not fault it folds, either symmetrically or asymmetrically. The upfolds are anticlines and the downfolds are synclines; in asymmetric folding there may also be recumbent and overturned folds. The Jura Mountains are an example of folding. Over time, erosion can bring about an inversion of relief: the soft upthrust rock is worn away, so the anticlines are actually lower than the tougher, more compressed rock of the synclines.
Theoretical linguistics is the branch of linguistics that is most concerned with developing models of linguistic knowledge. The fields that are generally considered the core of theoretical linguistics are syntax, phonology, morphology, and semantics. Although phonetics often informs phonology, it is often excluded from the purview of theoretical linguistics, along with psycholinguistics and sociolinguistics. Theoretical linguistics also involves the search for an explanation of linguistic universals, that is, properties all languages have in common.
Phonetics is the study of speech sounds, with concentration on three main points:
- Articulation: the production of speech sounds by the human speech organs.
- Perception: the way human ears respond to speech signals, and how the human brain analyses them.
- Acoustic features: physical characteristics of speech sounds such as color, loudness, amplitude, frequency, etc.
According to this definition, phonetics can also be called the linguistic analysis of human speech at the surface level. That is one obvious difference from phonology, which concerns the structure and organisation of speech sounds in natural languages, and furthermore has a theoretical and abstract nature. One example can be made to illustrate this distinction: in English, the suffix -s can represent either /s/, /z/, or can be silent (written Ø), depending on context.
Orthographic representation: S, s
Phonetic representations: [s], [z], Ø
Perception through the ear: high-frequency sounds accompanied by a hissing noise.
Acoustic features: frequency of roughly 8000–11000 Hz; a color similar to the hissing noise made by snakes.
Phonological characteristics: occurs at the beginning, middle or end of words, accompanied by vowels or consonants; distinguishes the meanings of words depending on context (slow ≠ glow).
The field of articulatory phonetics is a subfield of phonetics. In studying articulation, phoneticians attempt to document how humans produce speech sounds (vowels and consonants). That is, articulatory phoneticians are interested in how the different structures of the vocal tract, called the articulators (tongue, lips, jaw, palate, teeth etc.), interact to create the specific sounds. Auditory phonetics is a branch of phonetics concerned with the hearing, acquisition and comprehension of the phonetic sounds of the words of a language. As articulatory phonetics explores the methods of sound production, auditory phonetics explores the methods of reception: the ear to the brain, and those processes. Acoustic phonetics is a subfield of phonetics which deals with acoustic aspects of speech sounds. Acoustic phonetics investigates properties like the mean squared amplitude of a waveform, its duration, its fundamental frequency, or other properties of its frequency spectrum, and the relationship of these properties to other branches of phonetics (e.g. articulatory or auditory phonetics), and to abstract linguistic concepts like phones, phrases, or utterances.
The basic unit of analysis for phonology is the phoneme. A phoneme is a group of sounds which are not distinguished by the language's rules in determining meaning. In English, for example, [t] and [tʰ] are different allophones that represent a single phoneme /t/. Morphology is the study of word structure.
For example, in the sentences The dog runs and The dogs run, the word forms runs and dogs have an affix -s added, distinguishing them from the base forms dog and run. Adding this suffix to a nominal stem gives plural forms; adding it to verbal stems restricts the subject to third person singular. Some morphological theories operate with two distinct suffixes -s, called allomorphs of the morphemes Plural and Third person singular, respectively. Languages differ with respect to their morphological structure. Along one axis, we may distinguish analytic languages, with few or no affixes or other morphological processes, from synthetic languages with many affixes. Along another axis, we may distinguish agglutinative languages, where affixes express one grammatical property each and are added neatly one after another, from fusional languages, with non-concatenative morphological processes (infixation, umlaut, ablaut, etc.) and/or with less clear-cut affix boundaries.
Syntax is the study of language structure and phrasal hierarchies, depicted in parse tree format. It is concerned with the relationship between units at the level of words or morphology. Syntax seeks to delineate exactly all and only those sentences which make up a given language, using native speaker intuition. Syntax seeks to describe formally exactly how structural relations between elements (lexical items/words and operators) in a sentence contribute to its interpretation. Syntax uses principles of formal logic and set theory to formalize and represent accurately the hierarchical relationship between elements in a sentence. Abstract syntax trees are often used to illustrate the hierarchical structures that are posited. Thus, in active declarative sentences in English, the subject is followed by the main verb, which in turn is followed by the object (SVO). This order of elements is crucial to correct interpretation, and it is exactly this which syntacticians try to capture. They argue that there must be a formal computational component contained within the language faculty of normal speakers of a language and seek to describe it.
Semantics is the study of intension, that is, the intrinsic meanings of words and phrases. Much of the work in the field of philosophy of language is concerned with the relation between meanings and words, and this concern cross-cuts formal semantics in several ways. For example, both philosophers of language and semanticists make use of propositional, predicate and modal logics to express their ideas about word meaning.
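As a toy illustration of the parse trees mentioned above (our own example, not drawn from the article's sources), an SVO-style sentence such as "The dog runs" can be represented as a nested labelled tree, and the sentence read back off its leaves:

```python
# A tiny labelled-tree representation: (label, children...) tuples,
# with bare strings as the leaf words.
tree = ("S",
        ("NP", ("Det", "The"), ("N", "dog")),
        ("VP", ("V", "runs")))

def leaf_words(node):
    """Collect the words at the leaves of the tree, left to right."""
    if isinstance(node, str):
        return [node]
    _label, *children = node
    words = []
    for child in children:
        words.extend(leaf_words(child))
    return words

print(" ".join(leaf_words(tree)))  # The dog runs
```

The hierarchy, not the flat word string, carries the structural claim: the verb sits inside the VP with any object it takes, grouped apart from the subject NP.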
An ecosystem is composed of, and affected by, two components. The biotic factors are the living things that are in and influence an ecosystem, such as plants, animals, humans, bacteria and fungi. The other component is the abiotic factors: all the other elements of the ecosystem which, despite not being alive, nonetheless affect it. The categories of abiotic factors are water availability and quality, sunlight, meteorology, soil conditions, air quality and topography.

Water availability is an abiotic factor of ecosystems. Living things need water to survive, and how plentiful or scarce water is affects the water cycle of evaporation, condensation and precipitation. Oceans, rivers and streams are key components of an ecosystem and of the many forms of life that live there. A freshwater ecosystem is itself made up of biotic and abiotic elements and depends on both equally. Water quality is another factor: important metabolic functions depend on constituents of water, such as zinc and iron, which can become poisonous in low-quality water.

Sunlight is a major part of the abiotic conditions in an ecosystem. The sun is the primary source of energy on our planet. It lights the surface, provides higher-energy waves, affects the earth's temperature and circulates the earth's atmosphere.

The meteorological (weather) conditions considered abiotic are temperature, wind velocity, solar insolation, humidity and precipitation. The statistical and seasonal variation of these factors influences habitats.

The soil conditions that affect ecosystems are granularity, chemistry, and nutrient content and availability. These soil conditions interact with precipitation to cause change. Dead organic material, such as animal remains, is also scientifically considered abiotic.

Air quality plays an important part because pollutants such as carbon monoxide and sulphur dioxide can degrade circulatory or pulmonary function. Air pollution can also disrupt the process of photosynthesis.

Micro-topographic elements combine with meteorological barriers to affect plant growth and selection in a given area. Topography, soil type and precipitation shape surface run-off, limit the ability of animals to build burrows and nests, and affect the way predators and prey are able to hunt and hide from each other.

Abiotic factors are particularly important to new, barren or unpopulated ecosystems. This is because the abiotic factors of an unpopulated system set the stage for how well a given species will be able to live, thrive and reproduce there. Each organism's ability to survive in a set of abiotic conditions is known as its tolerance range.

The abiotic aspects of an ecosystem can also be affected by the biotic aspects. For example, animals digging in the ground add to soil erosion, and plants take carbon dioxide from the air and contribute oxygen.
Stonehenge is a massive stone monument located on a chalky plain north of the city of Salisbury, England. Research shows that the site has continuously evolved over a period of about 10,000 years. The structure that we call 'Stonehenge' was built between approximately 4,000 and 5,000 years ago and forms just one part of a larger and highly complex sacred landscape. The biggest of Stonehenge's stones, known as sarsens, are up to 30 feet tall and weigh 25 tons on average. It is widely believed that they were brought from Marlborough Downs, a distance of 20 miles to the north. The smaller stones, referred to as 'bluestones', weigh up to 4 tons and come from several different sites in western Wales, having been transported as far as 140 miles. It's unknown how people in antiquity moved them that far. Scientists have raised the possibility that during the last ice age glaciers carried these bluestones closer to the Stonehenge area, so the monument's makers didn't have to move them all the way from Wales. Water transport by raft is another idea that has been proposed, but researchers now question whether this method was viable.
Presentation on theme: "Constitutionalism Parliament Limits the English Monarchy."— Presentation transcript: Constitutionalism Parliament Limits the English Monarchy

Some Vocab
Constitutionalism: Laws limit the ruler's power
Parliament: The legislative body of government in England and other parts of the world. They check the power of the monarch and make laws. Similar to Congress in the USA.

Monarchs Clash with Parliament
* James I—King of England (Remember Elizabeth died w/o an heir. He was already James VI of Scotland and her cousin.)
-believed in divine right of kings
-struggled with Parliament over money
-he was a Calvinist, yet refused to make Puritan reforms like getting rid of bishops
-He seemed to favor the Catholics because he didn't cater to the Puritans (who were extreme Protestants).

Charles I
-Began taxing w/o Par. consent
-When Par. objected, he dissolved (dismissed) Parliament
-Petition of Right 1628 (he ignores it in the end)
-Charles agreed to: 1) not imprison subjects w/o due cause 2) not levy high taxes w/o Par. consent 3) not house soldiers in private homes -quartering 4) not impose martial law in peacetime
-yet it was important, indicating the law was higher than the king
-"Long Parliament" forces king to sign Triennial Act (Par. must be called every 3 years)

English Civil War (1642–1651)
-Charles I wanted both his kingdoms (England and Scotland, which he inherited through his grandmother, Mary Queen of Scots) to follow one religion.
-the Scots rebelled (Presbyterian)
-Charles I needed $ to fight the Scots
-called on Parliament
-Par. used this opportunity to pass laws limiting royal power and wouldn't give him an army
-angered Charles I
-raised his own army

Execution of the King
More Religious Issues
–Charles had married a French Catholic
–Seemed too sympathetic to the Cath. Church
–Tried to bring back more ritual to the Anglican Church of England
–Angered the Puritans
Execution (regicide), 1649
–The Puritans demanded that Charles be tried for treason after the Civil War
Parliament: London, England

Oliver Cromwell (r. 1653–1658)
*English Civil War: Royalists/Cavaliers = supported Charles I
-Roundheads = Puritan supporters of Par.
*Oliver Cromwell
-Led the Roundheads
-Defeated Cavaliers
-Put Charles on trial for treason; beheaded 1649
-Created a republican form of govt.
-Promoted religious tolerance
-Ruled until 1658 as a dictator
Cromwell Statue in Front of Westminster Abbey

Restoration and Revolution
-Charles II succeeded Cromwell (elected by Par.) and restored the monarchy
-habeas corpus gave every prisoner the right to a trial
-James II and the Glorious Revolution
-James II came to power -Catholic bro of Charles II
-Parliament members helped overthrow James II
-put Mary (daughter of James) and William of Orange on the throne
-called the Glorious Revolution, a bloodless revolution in 1688
William and Mary

Political Changes Due to the Glorious Revolution
-Constitutional Monarchy -Laws limit the ruler's power
-English Bill of Rights -Listed what a ruler could not do
-Cabinet system develops -Group of gov't ministers representing Par.
-Model for U.S. gov't

Quickwrite
In what ways did the English monarchs of the 17th and 18th centuries challenge the concept of Constitutionalism? (In other words, what did they do to oppose the English Parliament?)
Are the stars spinning, or is it just you? Discover how the spinning Earth changes the way we see stars. If you're a stargazer (or an aspiring one), this science fair project will help you create a star clock that uses the stars to determine what time it is on Earth!
- Two 8 ½ x 11" sheets of cardstock paper that can go through a printer
- Paper fastener
- Print this two-page worksheet on your cardstock paper. Cut out the white disk and the black disk. Attach them in the center with a paper fastener or brad. This central point represents the North Star, Polaris.
- Choose a clear night, and find the North Star in the sky. Face the North Star. Move the lighter circle so that you have the current month at the top.
- Now, look for the Big Dipper. It should look like a big ladle in the sky. Move the smaller, darker circle around until it lines up with the positions of the Big Dipper's stars.
- What time do you see highlighted in the gap in the black circle? Check your watch. This should be close to the current time. If you are on daylight saving time, you will need to add one hour to the time.
Did your star clock work? How well did it work? You will be able to tell the time by moving the star clock around to the current month and lining it up with the stars in the sky.
Do the stars themselves move? You probably know that the rotation of the Earth makes it sunny for about half the day and dark for the other half. Of course, the amount of light and darkness depends on the time of the year. Day and night happen because the Earth rotates on an axis that runs through the north and south poles. Sometimes the part of the earth you're standing on faces the sun, and sometimes it doesn't. This is why the sun appears to rise and set—but the sun isn't actually moving.
Now, let's talk about what happens in the night sky! The same apparent motion happens to the stars at night. The earth spins, and this makes the stars appear to move from east to west in what is called a diurnal circle—the apparent (not real) movement of the stars around the earth. For example, the Big Dipper constellation appears to move around the poles in what's called circumpolar motion. If you watch long enough, the constellation will seem to travel around Polaris, the North Star. Exactly where the constellations appear to be in the sky depends on your latitude, or how far north or south you are on the earth.
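For readers who want to check the geometry behind the two disks, here is a rough back-of-envelope Python sketch of the two motions the star clock combines: the daily spin (about 15 degrees per hour) and the seasonal drift (about 30 degrees per month). The reference alignment (angle zero at midnight on 1 January) is a hypothetical choice for illustration; a printed star clock is calibrated to its own template, so only the rates matter here.

```python
from datetime import datetime

def dipper_angle(when: datetime) -> float:
    """Approximate rotation of the Big Dipper around Polaris, in degrees,
    measured from an assumed zero alignment at midnight on 1 January.

    Two motions are combined, exactly what the two disks encode:
      - diurnal motion: Earth's spin turns the sky ~15 degrees per hour
      - annual motion: Earth's orbit shifts the midnight sky ~0.9856
        degrees per day (roughly 30 degrees per month)
    """
    reference = datetime(when.year, 1, 1)
    days = (when.date() - reference.date()).days     # whole days elapsed
    hours_past_midnight = when.hour + when.minute / 60.0
    return (15.0 * hours_past_midnight + 0.9856 * days) % 360.0

# Example: how far the Dipper has swung around Polaris on 15 June at 22:00
print(round(dipper_angle(datetime(2024, 6, 15, 22, 0)), 1))
```

Reading the clock is just this calculation run in reverse: you observe the angle and the month, and solve for the hour.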
One of our main responsibilities as timber suppliers in Brisbane is to ensure that all of our timber is ethically sourced. In other words, it must be grown and harvested in a way that doesn't have a negative impact on the species of timber we harvest, on the forest or plantation itself, or on the carbon footprint. While most issues concerning responsible forestry are well publicised, there is one that isn't: genetic pollution of timber.

What is Genetic Pollution?
Genetic pollution refers to the genes of one population or species spreading into, and eventually taking over, another. In forestry, it is usually what happens when genes from exotic species invade local forest stands. Species survive thanks to their ability to adapt, and that ability rests on genetic diversity. A balance must be struck that allows populations to adapt within their species, while also preserving a number of species to maintain genetic diversity beyond any particular species.

What it Means to You
Many timber species are found only in particular regions. The differences between timbers grown in different regions demonstrate how they evolve genetically to survive a particular locale or climate. Australia formulated the National Strategy on the Conservation of Australia's Biological Diversity in 1996. In 2005, the Australian Government developed a set of regulations governing the use of Australia's native genetic resources.

Threats to Native Genetic Diversity
Inappropriate clearing or timber harvesting can have a negative effect on genetic diversity. So can growing exotic species on plantations too close to native forests, or planting exotic species in a native forest after clearing.

What We Do About It
Luckily, the Institute of Foresters of Australia has strong policies to prevent genetic pollution of native timber species. There are also government regulations. To make a long story short, we only carry timber that has been grown and harvested according to ethical procedures that prevent genetic pollution and help preserve our native timber while protecting the environment. Call Narangba Timbers to find out more: (07) 3888 1293.
Using the Advanced Light Source (ALS) at Lawrence Berkeley National Laboratory and core samples from an ancient breakwater near Naples, Italy, a University of California, Berkeley, research team has examined the fine-scale structure of Roman seawater concrete and its extraordinarily stable binding compound, calcium-aluminum-silicate hydrate (C-A-S-H). The Berkeley Lab equipment has also enabled the first experimental determination of the mechanical properties of a rare hydrothermal mineral, aluminous tobermorite (Al-tobermorite).

"Roman concrete has remained coherent and well-consolidated for 2,000 years in aggressive maritime environments," says UC Berkeley Civil and Environmental Engineering Research Engineer Marie Jackson, who, as lead author, describes the team's findings in June and October 2013 articles for the Journal of the American Ceramic Society and American Mineralogist, respectively. These are the first publications to describe the use of synchrotron radiation to investigate ancient Roman concrete. "It is one of the most durable construction materials on the planet, and that was no accident. Shipping was the lifeline of political, economic and military stability for the Roman Empire, so constructing harbors that would last was critical."

As the Empire and shipping declined, so did the need for seawater concrete. "You could also argue that the original structures were built so well that, once they were in place, they didn't need to be replaced," concludes Jackson, who, along with the CTG Italcementi Group-sponsored Roman Maritime Concrete Study (ROMACONS), sourced core specimens.

Roman concrete's lime and volcanic ash binder formulation was described around 30 B.C. by Marcus Vitruvius Pollio, an engineer for Octavian, who became Emperor Augustus. Early practitioners packed their lime-ash mortar and rock chunks into wooden molds immersed in seawater, which became an integral part of the mix—with the resulting structures immune to chloride ion exposure.

The UC Berkeley research aims to identify the potential for expanded use of lime and volcanic ash in concrete, potentially offsetting some of the carbon dioxide emissions associated with the high-temperature milling of ASTM C150 portland cement. Lime can be processed at substantially lower temperatures, and therefore with substantially lower CO2 emissions, than portland cement; volcanic ash is being considered as an alternative to fly ash (see ASTM C618) as market supply conditions allow or dictate.

"The computed bulk modulus of Al-tobermorite based on high-pressure experiments at beamline 12.2.2 of the Advanced Light Source is 55±5 GPa," Jackson explains. "This measured bulk modulus is far higher than experimental measurements of C-A-S-H in alkaline-activated slag concrete, 35±3 GPa, by the same UC Berkeley research group. Until now, researchers have relied on theoretical models to estimate the mechanical properties of tobermorite, so this adds an important constraint on 'real' behavior.

"The wide interlayer, 11.49 Å, of the ancient Roman Al-tobermorite double-layer silicate structure likely provides cavities for Na+ and K+ cations derived from the alkali-rich volcanic ash and seawater-saturated lime. This contributes to charge balancing and stability in the maritime environment, which is important for long-term durability. However, the large interlayer spacing also increases compressibility relative to ideal tobermorite with 11.3 Å spacing.

"Even so, this study shows that Al-tobermorite has increased mechanical performance relative to poorly-crystalline C-A-S-H.
Romans were able to produce massive seawater concrete structures with Al-tobermorite as the principal crystalline cementitious product. If we could translate Roman expertise to modern concrete structures, then we could conceivably improve the mechanical and material properties of pozzolanic concretes."

The research began with funding from King Abdullah University of Science and Technology (KAUST) in Saudi Arabia, which has an abundance of potentially concrete-grade volcanic ash. In addition to the Berkeley Lab's ALS, researchers used the Berlin Electron Storage Ring Society for Synchrotron Radiation (BESSY) in their analyses.

Jackson adds that the release of the Roman concrete investigation findings has spurred other proposals for using ancient Roman principles in innovative cement or concrete research. "We have received numerous messages from folks actively involved in developing products, especially block concrete, in aluminous pozzolanic systems, some of which are autoclaved to produce Al-tobermorite," she says. "We are in the first stages of discussing our research results regarding C-A-S-H and Al-tobermorite with some of these people, to explore how we can apply Roman principles of concrete construction with volcanic pozzolans to specialty concrete products."

Other possible avenues opened up by these findings include the study of natural pozzolans from different parts of the world and the utilization of waste products to produce green concrete.
Here you'll meet independent and dependent clauses, including adverb, adjective, and noun clauses. Along the way, you learn how to use clauses to add description, show relationships between ideas, and eliminate unnecessary words.

Clauses: Phrases on Steroids
You've got words, you've got phrases, and now you've got clauses. The progression suggests that clauses are pumped-up phrases. Indeed, clauses tend to be beefier than phrases. That's because a clause is a group of words with its own subject and verb.

You Could Look It Up
A clause is a group of words with its own subject and verb. An independent (main) clause is a complete sentence; a dependent (subordinate) clause is part of a sentence. A dependent clause cannot stand alone.

Like phrases, clauses enrich your written and oral expression by adding details and making your meaning more exact. Clauses also allow you to combine ideas to show their relationship. This adds logic and cohesion, very good things when you're trying to communicate.

There are two types of clauses: independent clauses (main clauses) and dependent clauses (subordinate clauses and relative clauses). Here are some examples of each type of clause. Why is there a period at the end of each independent clause? Because they are complete sentences. Note that there's no period at the end of each dependent clause. That's because they're not complete sentences.

Independent Clauses: Top Dogs
An independent clause contains a subject and a predicate. It can stand alone as a sentence because it expresses a complete thought. The three independent clauses shown on the previous chart all contain a subject and a verb and express a complete idea. The following table shows some independent clauses divided into their subjects and predicates.

Dependent Clauses: I Get by with a Little Help from My Friends
Dependent clauses add additional information to the main clauses, but they are not necessary to form a complete thought. They do not form a complete thought by themselves. Although each of the dependent clauses shown on the first chart in this section has a subject and a verb, it does not express a complete thought. As a result, it cannot stand alone. A dependent clause is like a child; it's unable to support itself but able to cause a lot of problems if crossed.

Quoth the Maven
See Sentences for additional information on subjects and predicates.

A dependent clause often starts with a word that makes the clause unable to stand alone. Look back at the three dependent clauses on the first chart. The words used here are until, although, and because, respectively. These words are subordinating conjunctions, as you learned in Parts of Speech. We'll review subordinating conjunctions in a few minutes.

I Know 'Em When I See 'Em
Before we go on, make sure you can identify independent and dependent clauses. In the space provided, write I for independent clauses and D for dependent clauses.

Excerpted from The Complete Idiot's Guide to Grammar and Style © 2003 by Laurie E. Rozakis, Ph.D. All rights reserved including the right of reproduction in whole or in part in any form. Used by arrangement with Alpha Books, a member of Penguin Group (USA) Inc.
CLIMATE science is famously complicated, but one useful number to keep in mind is “climate sensitivity”. This measures the amount of warming that can eventually be expected to follow a doubling in the atmospheric concentration of carbon dioxide. The Intergovernmental Panel on Climate Change, in its most recent summary of the science behind its predictions, published in 2007, estimated that, in present conditions, a doubling of CO2 would cause warming of about 3°C, with uncertainty of about a degree and a half in either direction. But it also says there is a small probability that the true number is much higher. Some recent studies have suggested that it could be as high as 10°C. If that were true, disaster beckons. But a paper published in this week's Science, by Andreas Schmittner of Oregon State University, suggests it is not. In Dr Schmittner's analysis, the climate is less sensitive to carbon dioxide than was feared. Existing studies of climate sensitivity mostly rely on data gathered from weather stations, which go back to roughly 1850. Dr Schmittner takes a different approach. His data come from the peak of the most recent ice age, between 19,000 and 23,000 years ago. His group is not the first to use such data (ice cores, fossils, marine sediments and the like) to probe the climate's sensitivity to carbon dioxide. But their paper is the most thorough. Previous attempts had considered only small regions of the globe. He has compiled enough information to make a credible stab at recreating the climate of the entire planet. The result offers that rarest of things in climate science—a bit of good news. The group's most likely figure for climate sensitivity is 2.3°C, which is more than half a degree lower than the consensus figure, with a 66% probability that it lies between 1.7° and 2.6°C. More importantly, these results suggest an upper limit for climate sensitivity of around 3.2°C. Before you take the SUV out for a celebratory spin, though, it is worth bearing in mind that this is only one study, and, like all such, it has its flaws. The computer model used is of only middling sophistication, Dr Schmittner admits. That may be one reason for the narrow range of his team's results. And although the study's geographical coverage is the most comprehensive so far for work of this type, there are still blank areas—notably in Australia, Central Asia, South America and the northern Pacific Ocean. Moreover, some sceptics complain about the way ancient data of this type were used to construct a different but related piece of climate science: the so-called hockey-stick model, which suggests that temperatures have risen suddenly since the beginning of the industrial revolution. It will be interesting to see if such sceptics are willing to be equally sceptical about ancient data when they support their point of view.
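The arithmetic connecting sensitivity to warming is worth making explicit. Under the standard approximation, radiative forcing scales with the logarithm of CO2 concentration, so each doubling adds the same temperature increment. The Python sketch below applies that textbook relation to the two central estimates in the article; the 280 ppm pre-industrial baseline is a conventional figure, not one taken from the article.

```python
import math

def equilibrium_warming(sensitivity_c: float, co2_ppm: float,
                        baseline_ppm: float = 280.0) -> float:
    """Warming implied by a climate-sensitivity figure, using the standard
    logarithmic forcing approximation: each doubling of CO2 adds
    `sensitivity_c` degrees. The 280 ppm pre-industrial baseline is a
    conventional choice, not a number from the article."""
    return sensitivity_c * math.log2(co2_ppm / baseline_ppm)

# A doubling (560 ppm) under the IPCC central estimate vs. Schmittner's
for s in (3.0, 2.3):
    print(f"sensitivity {s} C -> {equilibrium_warming(s, 560):.1f} C at 560 ppm")
```

The same function shows why the difference matters well short of a doubling: at 450 ppm, the two sensitivities imply roughly 2.1°C versus 1.6°C of eventual warming.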
Case Study: Implementation of 10 wedges
The scenario chosen for this case study used 10 wedges to reduce carbon emissions. The value "10" was entered into the spreadsheet (see the (1) in the figure below) and the required emissions for 2061 entered as an equation (2).

25.85 Gt C - 10 Gt C = 15.85 Gt C

As shown in the table below, with only 10 wedges, carbon dioxide emissions are expected to double and the CO2 concentration (534 ppm) is well above the target of 350 ppm. The following graphs confirm that the climate mitigation goals (green lines) are not met with this mitigation strategy (red lines). The projected global average temperature anomaly was evaluated by reviewing results compiled from the EdGCM global climate model. Based on these results, the temperature will rise by nearly 4°C over the 21st century (relative to the 1980-1999 baseline). This would be in addition to the ~0.5°C change between the late 1800s and the late 20th century, for a total temperature increase of ~4.5°C. This is clearly much greater than the suggested goal of 2°C. The choice of mitigation wedges can be based on your expectations for feasible solutions, the distribution of strategies between electricity, fuel, forests, etc., and total cost. The worksheet below shows an example of the information that should be entered into the worksheet for each wedge.

Example Wedge Worksheet listing
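The worksheet entry above is linear bookkeeping and can be sanity-checked in a few lines. The sketch below assumes the usual wedge convention that each fully grown wedge displaces 1 Gt C per year in the target year; the function name is ours, not from the spreadsheet.

```python
def target_year_emissions(bau_gtc: float, n_wedges: int,
                          gtc_per_wedge: float = 1.0) -> float:
    """Emissions left in the target year after applying stabilization
    wedges. Each fully grown wedge is assumed to displace 1 Gt C/yr
    in the target year (the usual wedge convention)."""
    return bau_gtc - n_wedges * gtc_per_wedge

# The case-study scenario: 25.85 Gt C/yr business-as-usual, 10 wedges
print(f"{target_year_emissions(25.85, 10):.2f} Gt C")  # -> 15.85 Gt C
```

Running the same function with larger wedge counts shows how many would be needed to bring 2061 emissions below any chosen target, which is exactly the experiment the spreadsheet supports.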
Rickets is a disorder of infancy and early childhood with multiple etiologies. Rickets, which causes soft bones, may occur if not enough vitamin D is present to assist in calcium absorption. When the bone does not absorb enough calcium, it does not harden properly and is too soft to support the weight of the growing body. The disease of rickets takes its name from the Greek word for spine, rhakhis.

Vitamin D is made by the body when it is exposed to ultraviolet light (which is found in sunlight). Vitamin D is also added to milk, milk products, and multi-vitamin pills through a process originally patented by Harry Steenbock. Some people who do not get enough sun exposure, milk products, or green vegetables may also develop the disease. Deficiency of calcium can also cause rickets, particularly in some developing countries where the intake of calcium-rich products such as leafy greens, nuts, and seeds is low. A similar disorder can occur in adults, called osteomalacia; in adults it is caused by the inability of bone cells to calcify, or harden. Less frequently, a nutritional shortage of calcium or phosphorus may produce rickets.

Manifestations of disease
- Vitamin D deficiency,
- Skeletal deformity,
- Growth disturbance,
- Hypocalcemia (low level of calcium in the blood),
- Tetany (uncontrolled muscle spasms).

The X-ray, or radiograph, in the article is the classic image of advanced rickets: bow legs (outward curve of the long bones of the legs) and a deformed chest. Changes in the skull also occur, causing a distinctive "square-headed" appearance. These deformities persist into adult life.

Treatment and prevention
A sufficient amount of sunlight each day and adequate supplies of calcium and phosphorus in the diet can prevent rickets. Darker-skinned babies need to be exposed longer to ultraviolet rays. The replacement of vitamin D may correct rickets using these methods of ultraviolet light and medicine.
Food and feeding
As in all molluscan groups except the bivalves, gastropods have a firm odontophore at the anterior end of the digestive tract. Generally, this organ supports a broad ribbon (radula) covered with a few to many thousand "teeth" (denticles). The radula is used in feeding: muscles extrude the radula from the mouth, spread it out, and then slide it over the supporting odontophore, carrying particles or pieces of food and debris into the esophagus. Although attached at both ends, the radula grows continuously during the gastropod's life, with new rows of denticles being formed posteriorly to replace the worn denticles cast off at the anterior end. Both form and number of denticles vary greatly among species—the differences correlating with food and habitat changes. Radular morphology is an important tool for species identification.

Evidently, the most primitive type of gastropod feeding involved browsing and grazing of algae from rocks. Some species of the order Archaeogastropoda still retain the basic rhipidoglossan radula, in which many slender marginal teeth are arranged in transverse rows. During use, the outer, or marginal, denticles swing outward, and the radula is curled under the anterior end of the odontophore. The latter is pressed against the feeding surface, and, one row at a time, the denticles are erected and scrape across the surface, removing fine particles as the odontophore is withdrawn into the mouth. As the marginals swing inward, food particles are carried toward the midline of the radula and collected into a mucous mass. Folding the teeth inward avoids damage to the mouth lining and concentrates the food particles. Mucus-bound food particles are then passed through the esophagus and into the gut for sorting and digestion.

From this basic pattern, numerous specializations have developed, involving changes in the numbers, sizes, and shapes of radular teeth that correspond to dietary specializations. Prosobranch gastropods include herbivores, omnivores, parasites, and carnivores, some of which drill through the shells of bivalves, gastropods, or echinoderms to feed. Some gastropods, for example, possess a "toxoglossate" radula that has only two teeth, which are formed and used alternately. Most toxoglossate gastropods inject a poison via the functional tooth. Prey selection usually is highly specific. Although many cones hunt polychaete worms, others prey on gastropods or fishes, using the radular tooth as a harpoon, with poison being injected into the prey through the hollow shaft of the tooth. Several of the large fish-eating cones, which produce a variety of potent nerve poisons, have been known to kill humans. Some other gastropods, such as the opisthobranch Dolabella, have as many as 460 teeth per row with a total of 25,000 denticles.

In terms of feeding, opisthobranchs are extremely varied. Besides the algae-sucking sacoglossans, Aplysia cuts up strips of seaweed for swallowing, and a number of the more primitive species feed on algae encrusted on rocks. Perhaps the majority of opisthobranchs, including the sea slugs, are predators on sessile animals, ascidians and coelenterates being especially favoured. Pyramidellids are ectoparasites on a variety of organisms. Some of the pteropods are ciliary feeders on microorganisms. Pulmonate gastropods are predominantly herbivores, with only a few scavenging and predatory species.
Primitively, the pulmonate radular tooth has three raised points, or cusps (i.e., is tricuspid), but modifications involving splitting of cusps or reductions to one cusp are numerous. The modification of the radular tooth reflects dietary differences between species. In particular, with each successive appearance of a carnivorous type during evolution, the teeth have been reduced in number, each tooth usually having one long, sickle-shaped cusp. Much of the diversity achieved by the gastropods relates to the evolutionary shifts in radular structure, which have led to exploitation of a variety of food sources. Predators capable of swimming, surface crawling, and burrowing to capture prey have evolved among the prosobranchs and opisthobranchs; predators that produce chemical substances for entering the shells of their prey have evolved among the mesogastropods (family Naticidae and superfamily Tonnacea), the neogastropods (family Muricidae), and a nudibranch opisthobranch (Okadaia); and, in the pulmonates, predation and thus a carnivorous diet have evolved at least 12 times.

Form and function
Gastropods present such a variety of structures and adaptations that few all-encompassing characteristics can be presented.
Many of the medical and scientific terms used in this summary are found in the NCI Dictionary of Genetics Terms. When a linked term is clicked, the definition will appear in a separate window. Many of the genes described in this summary are found in the Online Mendelian Inheritance in Man (OMIM) database. When OMIM appears after a gene name or the name of a condition, click on OMIM for a link to more information. Structure of the Skin The genetics of skin cancer is an extremely broad topic. There are more than 100 types of tumors that are clinically apparent on the skin; many of these are known to have familial components, either in isolation or as part of a syndrome with other features. This is, in part, because the skin itself is a complex organ made up of multiple cell types. Furthermore, many of these cell types can undergo malignant transformation at various points in their differentiation, leading to tumors with distinct histology and dramatically different biological behaviors, such as squamous cell cancer (SCC) and basal cell cancer (BCC). These have been called nonmelanoma skin cancers or keratinocytic cancers. Figure 1 is a simple diagram of normal skin structure. It also indicates the major cell types that are normally found in each compartment. Broadly speaking, there are two large compartments—the avascular cellular epidermis and the vascular dermis—with many cell types distributed in a largely acellular matrix. Figure 1. Schematic representation of normal skin. The relatively avascular epidermis houses basal cell keratinocytes and squamous epithelial keratinocytes, the source cells for BCC and SCC, respectively. Melanocytes are also present in normal skin and serve as the source cell for melanoma. The separation between epidermis and dermis occurs at the basement membrane zone, located just inferior to the basal cell keratinocytes. The outer layer or epidermis is made primarily of keratinocytes but has several other minor cell populations. The bottom layer is formed of basal keratinocytes abutting the basement membrane. The basement membrane is formed from products of keratinocytes and dermal fibroblasts, such as collagen and laminin, and is an important anatomical and functional structure. As the basal keratinocytes divide and differentiate, they lose contact with the basement membrane and form the spinous cell layer, the granular cell layer, and the keratinized outer layer or stratum corneum. The true cytologic origin of BCC remains in question. BCC and basal cell keratinocytes share many histologic similarities, as is reflected in the name. Alternatively, the outer root sheath cells of the hair follicle have also been proposed as the cell of origin for BCC. This is suggested by the fact that BCCs occur predominantly on hair-bearing skin. BCCs rarely metastasize but can invade tissue locally or regionally, sometimes following along nerves. A tendency for superficial necrosis has resulted in the name "rodent ulcer."
The most important events in the life of Mahatma Gandhi centered around his fight for India's independence. In 1930, in perhaps his most important show of disobedience, he walked 200 miles to the sea to get salt as a symbolic act of rebellion against Great Britain's monopoly on salt.

After his salt walk he spent time in prison until 1931. That same year he was part of the London Round Table Conference that discussed constitutional reform in India. He was instrumental in working with the Cabinet Mission on constitutional changes in 1946, and India gained its independence the following year, in 1947. His work as a peacekeeper between the Hindu and Muslim communities in the country led to his assassination at the hands of the Hindu fanatic Nathuram Godse.

Born Mohandas Karamchand Gandhi in 1869, he first took his fight for India's sovereignty and respect to South Africa in 1893. There he spent two decades combating anti-Indian policies and practices. He came back to India in 1914 and immediately used his activist method of Satyagraha, the use of non-violent means to show civil disobedience. By using this form of resistance, he became one of the most popular political activists of the century.
Tanzania - FAO wheat database

Wheat production potential in Tanzania

Tanzania's population is about 20 m and nine tenths of the people depend on agriculture, directly or indirectly, for their livelihood. The inhospitably long dry season, and the infestation of large areas with tsetse fly, restrict two thirds of the population to one tenth of the area of the country. Tanzania's economy is, and will continue to be, agricultural. Since 1970, however, food production has expanded at a rate of only 2.9 percent, while the population has grown at a rate of 3.3 percent annually. During this period, Tanzania has changed from a net exporter of food to a net importer on a large scale.

Tanzania is situated between latitudes 1°S and 11°S, and between longitudes 30°E and 43°E. Since it is so close to the equator, Tanzania has a typically tropical climate in all lowland areas, with warm temperatures, slow wind velocities, humid air in most months, and no winter. Altitude modifies the climate of the highlands to a temperature regime suitable for wheat. In the south-east the climate is warmed by the Indian Ocean. In the east and south-east annual rainfall generally ranges between 750 mm and 1,250 mm. Some parts of the Southern Highlands receive more than 1,250 mm. Large areas in central and southern Tanzania receive much less than 750 mm of rainfall. Rainfall increases towards the west and north, and parts of the Northern Highlands receive more than 1,250 mm. Mean monthly temperatures for January range between 20°C and 26°C, coolest in the Northern and Southern Highlands and warmest along the coast of the Indian Ocean. The range of mean monthly temperatures for July is between 16°C and 22°C. In much of Tanzania, lack of water limits the growing of wheat more than temperature. Under the prevailing conditions of temperature, sunshine and wind, the minimum annual rainfall required for wheat is 750 mm. The following tables give the proportions of the total area of the country receiving various amounts of rainfall.

The total land area of Tanzania is estimated at 88.6 m ha. The area with sufficient rainfall (more than 750 mm per year) for reliable crop production is thus about 18.6 m ha. The total area harvested during 1979/80 was 6.3 m ha, while potential arable land (regardless of rainfall) has been estimated at 28 m ha. Thus the potential for the development of rainfed crop production, including wheat, to sustain the growing rural population, is satisfactory.

Soils with medium to high potential are found mostly in the highland and plateau regions of Mbeya, Njombe, Iringa, Rukwa, Kigoma, Tabora, Kagera, Shinyanga, Mwanza, Singida, Dodoma, Kilimanjaro, Meru and the Usambaras. The level to gently undulating terrain of these areas favours cultivation, but this very feature is often associated with flooding, inadequate drainage and saline soils. These soils cover an area of about 8 m ha. Soils of slight to medium fertility, with moderate potential, cover considerable areas in most regions. Another group of soils, which are infertile but have moderate potential when fertilized, is fairly extensive in central and south-eastern Tanzania. Together these two groups of soils occupy about 20 m ha.

Wheat production in Tanzania depends almost entirely on rainfall. At present, only 144,000 ha are under partial or full-scale irrigation. Potential irrigable land is estimated at 933,000 ha.

Present wheat production

Wheat is grown entirely under rainfed conditions.
Production falls short of requirements and the country relies heavily on wheat imports, which ranged between 47,000 and 76,000 tons between 1982 and 1984. Wheat is the preferred food grain in towns, while the rural population lives mainly on other cereals. As people move into the towns, consumption of wheat in the coming years is likely to grow faster than that of all other cereals. Evidently, unless a determined effort is undertaken in the research and development of wheat, Tanzania's wheat production will remain inadequate for a long time.

There are three modes of wheat production in Tanzania, each with a different level of technical management:

(1) Large-scale mechanized production is being carried out by the National Food and Agriculture Corporation (NAFCO) with the assistance of the Canadian International Development Agency in the Arusha and Kilimanjaro regions. Since 1970 these agencies have established six farms totalling 20,757 ha in the Hanang Wheat Complex in the Arusha region. In addition, NAFCO has 6,000 ha under wheat in the Kilimanjaro region. Wheat cultivation on these farms is fully mechanized and average yield is 1.6 tons/ha without the use of fertilizer. Erosion problems at Hanang are discussed below in Chapter 7, item 15. To conserve water in the soil and to prevent erosion, the technique of minimum tillage is now being practised, using chisel and sweep ploughs. About three quarters of Tanzania's commercial wheat production comes from these farms.

(2) Small- to medium-scale mechanized wheat cultivation has been practised in parts of the Arusha and Kilimanjaro regions, and in the Iringa region, since 1945. Medium-sized wheat estates, established by expatriate (British) farmers, still continue under state management. Alongside these estates, small-scale farmers grow wheat, hiring tractors and combine harvesters from specialist local contractors.

(3) Hand-tool cultivated wheat is traditional in parts of the Southern Highlands in the Iringa and Mbeya regions. The average plot is a quarter of a hectare. Growers use no input other than seeds and family labour, and they consume practically all that they harvest. Tanzania's evolving rural development programme may improve the output of small-scale wheat farming.

All areas below an altitude of 1,500 m experience temperatures above 20°C for most of the possible growing period and are, therefore, not suitable for wheat. Annual precipitation in these areas may appear to be adequate, but the amounts received during the season when wheat would be grown (April to October) are small and erratic. Areas above 1,500 m are relatively cool, have more reliable precipitation and offer good prospects for rainfed wheat production.

Land with potential for wheat is delineated in seven mapping units:

Land with medium potential (P2)

P2c. Potential limited by climate. This land has no limitation except that temperatures are too warm to achieve large wheat yields. Three fifths of this mapping unit consist of nearly level to gently undulating land with deep, brown, medium-textured soils (chromic, eutric and calcic cambisols). A quarter of the mapping unit consists of deep, dark-coloured, clayey soils with a fair amount of organic matter and a good surface structure (pellic vertisols) and brownish-coloured, clayey soils, containing little organic matter but with a medium-textured surface (chromic vertisols). These soils, together constituting some 85 percent of this mapping unit, are moderately suitable (P2c) for wheat.
The remaining 15 percent of the unit has very shallow, gravelly soils (lithosols) which have no potential (N) for wheat production.

P2h. Potential limited by hard pans. This mapping unit occupies nearly level to undulating plateaux, about four fifths of which have deep, medium-textured soils with hard lime layers at shallow depths (petrocalcic phases of calcic cambisols, eutric cambisols and eutric nitosols). The hard layers occurring in these soils restrict aeration and the penetration of plant roots. In about a third of these soils, the hard pans are fairly deep and the soils are moderately suitable (P2h) for wheat. In the remaining part, mostly where the surface is undulating, the pans occur near the surface, so that any attempt to improve the land, such as by levelling or terracing, would be likely to expose the pans; this difficulty renders the land unsuitable (N) for wheat. The remaining fifth of the mapping unit comprises deep, dark-coloured, clayey soils containing a moderate amount of organic matter (pellic vertisols); shallow, medium-textured soils with hard pans just below the plough layer (eutric planosols); shallow, dark-coloured, steep soils containing much organic matter and lime (rendzinas); and very shallow soils (lithosols). The vertisols are moderately suitable (P2c) and the others unsuitable (N) for wheat production, even with a high level of inputs.

P2e. Potential limited by erosion risks. Half this mapping unit (not counting the enclaves near Mbeya in south-west Tanzania, and Arusha in the north-east) consists of gently sloping to steeply dissected, reddish, medium- to fine-textured soils (haplic, eutric and dystric nitosols). A fifth of the unit consists, in almost equal proportions, of brownish, medium-textured soils (eutric and calcic cambisols) and black to greyish-brown volcanic ash soils (humic and ochric andosols). Two fifths of all these have a small risk of soil erosion and have moderate potential (P2e) for wheat. Three fifths have a medium to severe erosion risk, and small to no potential (P3/N). Some 15 percent of this mapping unit is covered by strongly leached, reddish-brown to dark red acidic soils (orthic and ferric acrisols) which have a low potential (P3s) for wheat production. The remaining 15 percent comprises dark-coloured, clayey soils (pellic vertisols), imperfectly-drained soils (gleysols), organic soils (histosols) and very shallow soils (lithosols), of which the vertisols have a moderate potential (P2c) while the others have no potential (N) for wheat.

About three fifths of the area of the enclaves near Mbeya and Arusha have steeply dissected topography and medium- to fine-textured, black volcanic ash soils containing much organic matter (mollic andosols). These soils are very fertile, but are mostly unsuitable (N) for wheat because of their steep slopes, except for limited areas where potential would be large (P1) after terracing. A quarter of the land in the enclaves has gentle slopes and deep, reddish-coloured soils (eutric nitosols). These soils have a medium risk of erosion and a medium potential (P2e) for wheat. The remaining 15 percent of the area has very shallow soils (lithosols) with no potential (N) for wheat production.

P2s. Potential limited by infertile soils. Half this mapping unit has nearly level to undulating terrain with leached, reddish-brown to dark red soils (ferric acrisols, rhodic ferralsols and ferric luvisols), which are poor in nutrients and have medium to low potential (P2s, P3s and P2s, respectively).
About 30 percent of the mapping unit has reddish-brown soils with an iron pan at shallow depths (plinthic acrisols). These soils have a limited rooting depth and are infertile, and thus either marginally suitable or unsuitable (P3h/N) for wheat. The remaining fifth of the mapping unit has deep, reddish-brown soils (eutric nitosols), imperfectly-drained soils with an iron pan at shallow depths (plinthic gleysols) and dark-coloured, clayey soils (pellic vertisols), which have medium (P2c), no (N) and moderate (P2c) potential, respectively.

Land with low potential (P3)

P3e. Potential limited by erosion risks. This mapping unit occurs in an area with plentiful rainfall and an undulating to steeply dissected landform. Erosion risks are severe. Three fifths of it have medium- to fine-textured, leached, reddish-coloured soils that contain much iron (ferric acrisols). One third of these soils has an iron pan at shallow depth (plinthic acrisols). None of the acrisols are very fertile, and the erosion risks are moderate to severe. One third of them occupy relatively gentle slopes and have low potential (P3e) for wheat, while the rest are unsuitable (N). About 15 percent of the unit consists of imperfectly-drained soils with an iron pan at shallow depths (plinthic gleysols), while the remaining quarter of the unit has very sandy soils containing much iron (ferralic arenosols). These soils are not suitable (N) for wheat production.

P3t. Potential limited by stony soils. Seven tenths of this mapping unit consist chiefly of undulating to steeply dissected land with brown, medium-textured, gravelly and stony soils (chromic cambisols/dystric regosols). Their main constraint is that stones in the surface soil hinder cultural operations. They are also subject to medium to severe risks of erosion. One tenth of their extent, however, is not too stony or gravelly, and has a low potential (P3t) for wheat. The remaining part has no potential. One tenth of the mapping unit has deeply leached, reddish-coloured, fine-textured soils (orthic acrisols) with iron pans at shallow depths in places (plinthic acrisols); they have small to no potential (P3s and P3h/N, respectively) for wheat. The remaining fifth of this mapping unit has very shallow soils (lithosols) that have no potential (N) for wheat production.

P3h. Potential limited by hard pans. Three quarters of this mapping unit have nearly level to undulating terrain with strongly leached, reddish-coloured, medium-textured soils and very hard pans at shallow depths (duripan phases of orthic, ferric and plinthic acrisols). These soils are not very fertile, and the hard pans restrict water movement and the penetration of plant roots. In a third of this area the pans are not too close to the surface and the soils are marginally suitable (P3h) for wheat; the rest have no potential (N) because the pans are close to the surface. The remaining quarter of the mapping unit is occupied by different soils, which include highly weathered, sandy soils containing much iron (ferralic arenosols), river-deposited, stratified soils (fluvisols) and imperfectly-drained soils (pellic vertisols). These soils have no (N) and moderate (P2c) potential, respectively, for wheat production.

Land with no potential (N)

This class of land comprises all parts of Tanzania below an altitude of 1,500 m, which are too warm for wheat, and all areas with soils that are not suitable for rainfed wheat production.

In general, the regions of Tanzania in which wheat could be produced are known.
However, detailed soil investigations are urgently needed within these regions, to select suitable areas for extending wheat cultivation. Research on wheat in Tanzania is at present concerned mainly with large-scale, mechanized production. This research is necessary, but an over-dependence on machinery also limits the expansion of the crop. Intermediate methods of production, which might be useful to small- and medium-scale producers, should also be investigated. For this purpose, research at Uyole Agricultural Centre, near Mbeya, should be strengthened.

There has in the past been a long period of small and unchanging producer prices for wheat. Although this is over and producer prices have increased in recent years, they should continue to be adjusted regularly, taking into account the cost of production and the prices of alternative crops. Finally, the Government should recognize that such inputs as electric power, seed, fertilizer and pesticides, and access to agricultural credit, marketing and transport, are vitally important components of any policy for crop improvement.
Thought to be encased in a frozen, static crust, the Martian north pole is actually a dynamic place, with sand dunes skidding and sliding in spring.

The dunes were first observed in the 1970s, spotted at the edge of Mars's north polar cap. They appeared to be frozen in place. Scientists figured they formed at least 30,000 years ago, when Mars's climate was more extreme. But new images from the sharp-eyed HiRISE camera on NASA's Mars Reconnaissance Orbiter tell a different story.

"In one Mars year, we see really fairly substantial changes on the dunes," said planetary scientist Candice Hansen of the Planetary Science Institute in Tucson, Arizona. Hansen is lead author on a paper in the Feb. 4 Science reporting the new observations. "That was the surprise."

HiRISE has been snapping high-resolution photos of the Martian surface since March 2006, or about two and a half Martian years. Hansen and colleagues examined images of the same location at different times of year, and found that dark sand streaks and new ravines appeared as the seasons changed.

"Because we had all these years of data where no one saw any changes, people developed theories — the dunes are cemented by ice, maybe they're crusted over — theories for why they were not changing," Hansen said. "In fact, they were probably changing all along, and we just didn't have instruments that were good enough to see it."

The changes could be forged by a layer of frozen carbon dioxide — dry ice — changing directly from solid to vapor. "This is a very un-Earthly process," Hansen said. Every winter, Mars's polar cap is sheathed in a thin blanket of carbon dioxide. In the spring, the warming ice layer sublimates, or shifts directly to gaseous form without bothering to melt first. This sudden shift destabilizes the dunes and triggers avalanches. In the center panel of the images above, the green or blue material is bright fresh frost. The dark streaks are escaping sand.

In another surprise, ravines and gullies seemed to disappear from the dunes from one spring to the next. Models of Martian climate predict that the winds should not be strong enough to shift sand grains, and measurements from the Phoenix lander and the Mars rovers Spirit and Opportunity support that idea.

"Everybody may have to sharpen up their pencils and go back to their climate models," Hansen said, though she points out that they only have two Martian summers to compare. "Is this just an oddball year, or is this something that happens regularly? We'll need more Mars years to be able to say."

Image: 1) Science/AAAS. 2) NASA/JPL/University of Arizona.
What does science look like at Kennedy School? Science is relevant to all transdisciplinary themes of the programme of inquiry and is characterized by concepts and skills with the knowledge component of science arranged into four strands: living things, Earth and space, materials and matter, and forces and energy. The IB provide a framework for science throughout the programme of inquiry and the ESF have developed comprehensive Scope and Sequence guidance documents based on this framework for use throughout the foundation. Science is explored through the central ideas of units of inquiry and includes a range of external resources and settings as well as classroom-based investigations. Science is viewed as a way of thinking and a process that strives for balance between the construction of meaning and the acquisition of knowledge and skills. Through scientific inquiry, students are invited to investigate science by formulating questions and proceeding with research, experimentation and observations. Scientific inquiry encourages curiosity, develops an understanding of the world, and enables the individual to develop a sense of responsibility regarding the impact of their actions on themselves, others and their world. Learners develop an appreciation and awareness of the world as it is viewed from a scientific perspective. Our understanding of science is constantly changing and evolving. As students conduct their inquiries, they should be able to provide accurate information and valid explanations. They should be able to identify possible causes of an issue, choose a solution and determine appropriate action to be taken. A willingness and ability to take action demonstrates evidence of learning. Through these processes, students should develop the habits and attitudes of successful lifelong learners.
We all know that space can be a dangerous place. Many safety measures are put in place by space agency scientists so that astronauts' lives are protected and mission success can be assured. Generally, some degree of certainty can be ensured in near-Earth orbit, protecting astronauts onboard the International Space Station and Shuttle missions, as most activities go on within the Earth's protective magnetosphere. But in the future, when we establish a colony on the Moon and Mars, how will human life be protected from the ravages of solar radiation? In the case of Mars, this will be of special interest as, should something go wrong, colonists will be by themselves…

Solar energy is essential to life on Earth. Without it, we wouldn't be here. In space, this friendly source of energy suddenly becomes our enemy. Highly energetic particles in the form of ions (i.e. atoms of solar elements stripped of most of their electrons) are generated by the Sun and ejected into space during periods of intense solar activity. These intense periods of solar activity are known as "solar maxima", occurring approximately every 11 years as part of the solar cycle. Although we can predict the periods of the solar cycle, we cannot predict when the Sun might launch a devastating solar flare or increase its solar wind output. Astronauts caught in a solar "ion storm" will receive high doses of radiation, putting them at risk of short-term radiation poisoning and long-term health problems.

Astronauts in Earth orbit are comparatively protected from the worst of the solar radiation, as the energetic ions will be deflected by the Earth's strong magnetic field. But future manned missions to Mars are at an obvious risk, as Mars does not have a significant magnetic field and has a very tenuous atmosphere. So what can be done for our future colonists?

Research is afoot to protect long-haul travel through space (i.e. the six-month transit between the Earth and Mars), but colonies will need to be warned about the onset of a solar storm should a long-term Mars base be established. Taking the lead from the recent real-time early warning system established with the Solar and Heliospheric Observatory (SOHO), sitting at the Earth-Sun First Lagrangian Point, 1.5 million km away from the Earth in direct line of sight of the Sun, an early warning system for Mars could be (inexpensively) set up. Like Earth, Mars has its own Lagrangian points with the Sun. Currently there are no man-made satellites at the Mars-Sun L1 or L2 points, but it is conceivable that these islands of gravitational stability may be used to greatly benefit future Mars colonies.

The SOHO mission receives the signal that solar ions are approaching Earth an hour before atmospheric impact. This not only provides excellent diagnostic data, but also gives advance warning to companies and organizations that the Earth is 60 minutes away from experiencing an increase in solar radiation. Emergency procedures can be enacted accordingly, possibly saving delicate satellites and astronauts.

A simple, cost-effective probe could be inserted into the Mars-Sun L1 point. This probe needn't be as sophisticated as SOHO; it just needs to monitor the flux of energetic particles travelling toward Mars. Akin to a flag system on a patrolled beach (red for "dangerous, no swimming"; green for "safe to swim"), Mars settlers could have advance warning of an incoming flood of ions from the Sun.
If the particle flux is constantly measured by a detector on the probe at the L1 point, various danger levels could be used to indicate to unprotected settlers on the surface what severity of risk they are under. Surface "walkabouts" may be tightly restricted by such a system.
The Mars L1 time-lag problem
The distance between Earth's L1 point and the planet is approximately 1.5 million km. This provides information on the solar wind particles approximately 1 hour before they are received on Earth. Mars is a less massive planet than the Earth; therefore, Mars' L1 point will be closer to the planet than the Earth's. Reaching a logical conclusion, assuming solar particles are travelling at the same velocity in near-Mars space as in near-Earth space, a Mars early warning system of the design outlined above will be less effective than the terrestrial version. So, how much time will the Mars early warning system provide to colonists from detection (at L1) to impact (at Mars' surface)? Using the standard approximation for the first Lagrangian point, r = R × (M_M / (3 M_S))^(1/3), where r is the distance of L1 from Mars, R is the distance between the bodies and M_M and M_S are the masses of Mars and the Sun respectively. Using R = 2.28 × 10^11 meters, M_M = 6.4191 × 10^23 kg and M_S = 1.98892 × 10^30 kg, we arrive at a value of 1.08 million km, 72% of Earth's 1.5 million km. Now, keeping the assumption that solar ions take approximately 60 minutes to travel 1.5 million km (from Earth's L1 point to Earth), the time from L1 to Mars' surface = 60 × 72% ≈ 43.2 minutes. Although 43 minutes is less than the warning time Earth-based solar wind probes are able to provide, this is not a great reduction in lag time, and would still greatly benefit the humans unprotected from solar radiation on the surface of Mars.
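For readers who want to check these numbers, here is a minimal Python sketch of the same back-of-the-envelope calculation. It assumes only what the text states (the cube-root L1 approximation and the roughly 60-minute travel time across Earth's 1.5 million km gap); the function name and layout are illustrative.

```python
def l1_distance(r_orbit_m, m_planet_kg, m_sun_kg):
    """Approximate planet-to-L1 distance: r = R * (M_planet / (3 * M_sun))**(1/3)."""
    return r_orbit_m * (m_planet_kg / (3.0 * m_sun_kg)) ** (1.0 / 3.0)

M_SUN, M_EARTH, M_MARS = 1.98892e30, 5.9722e24, 6.4191e23  # kg
R_EARTH, R_MARS = 1.496e11, 2.28e11                        # m, orbital radii

r_earth = l1_distance(R_EARTH, M_EARTH, M_SUN)  # ~1.50e9 m (1.5 million km)
r_mars = l1_distance(R_MARS, M_MARS, M_SUN)     # ~1.08e9 m (1.08 million km)

# Scale the ~60-minute Earth warning time by the ratio of the two L1
# distances, assuming the same particle speed in both cases.
warning_min = 60.0 * r_mars / r_earth
print(f"Mars L1 distance: {r_mars / 1e9:.2f} million km")
print(f"Estimated warning time: {warning_min:.1f} minutes")  # ~43 min
```

Running this reproduces the 72% distance ratio and the roughly 43-minute warning quoted above.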
- HIV is spread through contact between the bodily fluids of two people, when one of them is HIV positive.
- HIV is spread through:
- Unprotected sex with an infected person
- Blood-to-blood contact with an infected person
- Mother-to-child transmission at birth and during breastfeeding
- You can avoid getting HIV
- You can protect yourself from HIV by following the ABC guide:
- Abstain from sex as long as possible
- Be faithful to one sexual partner at a time
- Use a Condom correctly every time you have sex
- HIV testing is free at local VCT centers and public clinics.
Make Your Move: Find a partner and run with him or her to the closest site for HIV testing. Make sure to look at your resource sheet for the exact location. If you have a soccer ball, try dribbling the ball all the way there! Don't be afraid to use your head in football. Use your head also to make smart decisions to avoid HIV, such as abstaining from sex, being faithful to one partner and using a condom every time you have sex.
Health Fact: About 19 per cent of South Africans who are 15 years or older are living with HIV or AIDS. That means one out of every five adults in South Africa is living with HIV.
Classical Conditioning - Watson and Rayner (1920)
Aim: To show that an emotional response of fear could be conditioned in a human being.
Method:
-Albert was 11 months old.
-He liked a white laboratory rat and had no fear of any white furry objects.
-In the conditioning trials the rat was shown to Albert; as he reached for it, a metal bar was hit very hard with a hammer behind Albert's back. This was done several times.
Results:
-After seven pairings, when the rat was presented again, Albert screamed and tried to get away.
-He did this even though the bar was not hit by the hammer and there was no loud noise.
-Albert also screamed when he was shown a Santa Claus mask and a fur coat.
Conclusion: Showed that a fear response could be learnt and that even very young children could learn in the way suggested by classical conditioning.
Evaluation of Watson and Rayner's study
-It was not a very ethical thing for the researchers to do to a small child.
-The study involved only one child, and the researchers arguably needed more evidence that fear can be learnt in this way. However, the study certainly seems to fit with what you might already know about any phobia that you might have.
Operant conditioning - Law of effect
Learning that takes place because of the consequences of behaviour. This type of learning was investigated by Thorndike (1911) during his studies of the problem-solving abilities of animals. He designed a puzzle box into which he would place a cat. The task for the cat inside the box was to escape. Inside the box there was a loop of string attached to a latch. When the string was pulled, the latch would lift and the door would open. Thorndike showed that a cat placed in a puzzle box would learn to pull the string to escape from the box. When it was first placed in the puzzle box, the cat moved around the box and by accident the string would be pulled and the latch lifted. This would happen each time the cat was placed in the box. However, after about 20 trials, he noticed that the cat began to escape very quickly. This suggested that the cat had learnt to escape from the box by trial-and-error learning. It was the pleasant consequence that encouraged the cat to pull the string rather than produce any other behaviours. He proposed a hypothesis: 'If a certain response has pleasant consequences, it is more likely than other responses to occur in the same circumstances.' This is known as the Law of effect.
B F Skinner
Introduced the idea of reinforcement to the Law of effect. He said all behaviour is learnt from the consequences of that behaviour (he called this operant conditioning because the animal or human produces a behaviour that is voluntary, so it operates on the environment). The consequence of the particular behaviour produced by the animal or human will either increase or decrease the likelihood of the behaviour being repeated. He would place a hungry rat in the box and the rat would produce a variety of actions such as sniffing, exploring and grooming. By accident it would press the lever and a pellet of food would immediately drop into the food tray. Every time the lever was pressed, the behaviour of 'lever pressing' was positively reinforced by a food pellet. There are 2 kinds of reinforcement, positive reinforcement and negative reinforcement, but they have the same effect: to increase the likelihood that a particular behaviour will be repeated. Sometimes there would be an electric shock through the floor of the Skinner box. When the rat pressed the lever the shock would switch off = negative reinforcement.
Punishment is different from reinforcement because it does not encourage the desired behaviour; it just stops one unwanted behaviour. A child who is punished by having colouring pens taken away for writing on the wall is very likely to find another object to scratch the wall instead.
Reinforcement can be used to teach complex behaviours in animals and humans (behaviour shaping) - the behaviour is broken down into small steps. Eg: a bird playing ping pong involves moving towards the ball, touching the ball with its beak, hitting the ball, then hitting the ball towards another bird - one reward at the end.
Classical conditioning is concerned with the process of associating a new stimulus, like a bell, with a reflex response, like salivation. Ring a bell and the dog salivates: we have built up a new learning and established a conditional stimulus - conditional response (CS-CR) bond or connection.
In order to understand how we treat phobias, it is important to recognise that a phobia is a fear response that has gone wrong. The normal reflex is:
DANGER - FEAR
UCS - UCR
When someone has a phobia, their fear response is to something that causes little or no danger, such as:
KNEES - FEAR
CS - CR
For a person with a phobia, their fear response is no longer the automatic response to danger or threat. It is to something that has little or no danger. Eg: spiders (arachnophobia)
SPIDER - FEAR
CS - CR
In order for this fear to be made, a spider must have been present when something scary happened.
Flooding:
-The person is exposed repeatedly and rapidly to the thing they fear; they are flooded with thoughts and actual experiences. Eg: someone with a fear of spiders would have to imagine a spider and maybe visualise one running across the floor (thoughts) and then would have to hold an actual spider in their hand (actual experience).
-Quite simple. The person has to unlearn the connection between the stimulus and the fear response: the CS - CR bond has to be broken. Most people with a phobia avoid or run away from the feared object. However, flooding prevents escape. People learn that their anxiety levels start to drop the more times they are exposed to their fears. Flooding removes the phobia when a person realises they are not in danger, and this happens quite quickly.
Ethical implications of flooding:
-The person loses their right to withdraw; for the treatment to work they have to stay.
-It is a stressful procedure: the psychologist has to judge how much distress the person should undergo before stopping the treatment. It is difficult to protect and avoid harming someone who is being flooded.
Systematic desensitisation:
-This treatment of phobias is based on the idea that people cannot be anxious and relaxed at the same time. As a person with a phobia cannot be afraid and relaxed at the same time, the fear response is replaced by feeling relaxed instead. The treatment works in the following way.
-The person with the phobia is taught how to relax themselves (this may involve listening to music and relaxing their muscles).
-They construct a hierarchy of fears that contains the things that they are afraid of, in order from least frightening (the word 'spider') to most frightening (having a spider in my hand).
-The person relaxes and then gradually works through the hierarchy of fears, relaxing after each feared event is presented.
-The person only moves up the hierarchy if they have been relaxed at the previous stage.
-The final stage is to be relaxed at the 'most frightening' event.
Practical applications of systematic desensitisation
Treating a fear of balloons involves the following:
-The person is taught to relax, breathing deeply and calmly.
-They construct the hierarchy of fears in five stages: 1) The word 'balloon'. 2) The squeaky sound of balloons being touched. 3) A picture of a balloon. 4) A real balloon. 5) Holding a balloon.
-The person is exposed to stage 1 and must be completely relaxed while the word 'balloon' is repeated.
-The therapist then 'squeaks' a balloon out of sight of the person while the person relaxes.
-The therapist moves gradually through all the next stages until stage 5 is achieved.
-No more fear of balloons, just a relaxed person.
Ethical implications - systematic desensitisation
-The treatment is used when the therapist believes that flooding would be too stressful for the person with a phobia. Children are treated with this method.
-The therapist works with the person and together they decide on how quickly the person should move through the hierarchy.
-The person takes an active role in the therapy and can always withdraw from a stage if they feel uncomfortable; they can then practise relaxing again.
-There is no deception because the person knows exactly what is happening.
-It is an ethical treatment for phobias.
-It takes longer than flooding to remove a phobia, but it is a very effective treatment.
-It costs more as there are often more sessions of therapy. However, most therapists and their clients prefer this method of treatment. It is much less anxiety-arousing and much less stressful for the person undergoing treatment.
Classical conditioning can be useful in the treatment of behaviour problems. Some therapists think that behaviour problems result from faulty learning and therefore that 'bad' behaviour can be unlearnt. A technique that has been used to help people who suffer from addictions like drug and alcohol dependency is called aversion therapy. The aim of the therapy is to get the patient to develop an extremely negative reaction to the drug or alcohol using the vomiting reflex.
Emetic (UCS) --- Vomiting (UCR)
Alcohol (CS) + Emetic (UCS) --- Vomiting (UCR)
Alcohol (CS) --- Vomiting (CR)
The emetic is specially designed so that it only produces the vomiting reflex when the patient drinks alcohol (drink a lot of that and you will be sick, and you will not be able to stop the vomiting). The patient's desire for the alcohol decreases and the addiction can be overcome. You might think that people who drink a lot are sick anyway, so this treatment would not work; however, the emetic makes people sick immediately when they swallow the alcohol. The therapy can be more effective when it is used along with other support. It is an unpleasant experience for the person, and there are many ethical issues raised by this kind of treatment.
Evaluation of aversion therapy
-Aversion therapy is used for some individuals who have serious behavioural problems.
-It can be extremely unpleasant for the person who has the treatment.
-Ethical issues: these must be balanced against the possible benefits to the person.
-It is not always successful over time. People find that it reduces the addiction for a period, but unless they have additional support, they are likely to go back to their addictive behaviour once the treatment stops.
Token Economy programmes
There are many things that can act as rewards or positive reinforcers.
Primary Reinforcer: a reward, such as food and water - ESSENTIAL.
Secondary Reinforcer: a reward such as money or a token - something that someone can exchange for a primary reinforcer.
Evaluation of token economy programmes
-There have been improvements in the behaviour and self-care of patients who have been in hospital for a long time.
-Criticised: patients focus on the rewards rather than wanting their own behaviour to improve, and the change may not last in the outside world.
-If the reward is not immediate then the association between the reward and the action is lost, which means that the behaviour is not being reinforced.
When you think of calcium, you most probably think of bones, and while it's true that this mineral is vital for keeping our bones strong, it is also vital to the health of our teeth. In fact, 99 percent of the body's calcium reserves are stored in the bones and teeth, where the mineral provides structural support. Let's take a look at some calcium facts and why your teeth need it so much. Did you know that aside from strengthening bones and teeth, calcium also helps muscles, blood vessels, and nerves work properly? Calcium is found in blood, muscle and in the fluid between your cells. It helps to keep the muscles and blood vessels functioning normally. It regulates hormones and enzymes and helps to transmit nerve impulses. It's a very busy mineral indeed! You might not think that osteoporosis has anything to do with your teeth, but research has shown that osteoporosis can cause the jaw to weaken. As the jaw bone anchors your teeth in place, if it becomes damaged then teeth can loosen or even fall out. This is why calcium is directly important for your oral health too, as women with osteoporosis are three times more likely to lose teeth than women with healthy bones. Calcium is essential for people in every life stage, from infants to the elderly. Babies, children, and teenagers need calcium in order to develop strong bones and teeth; adults need it to maintain a strong skeleton and healthy teeth. A calcium-deficient diet increases your risk of developing osteoporosis, a serious condition in which the bones weaken and are more likely to fracture. At different ages, we require different levels of calcium in our diets. Children aged 1-3 need 500mg a day; from ages 4-8 that rises to 800mg a day. Older children and teens need 1300mg a day and adults need 1000mg a day. Over the age of 51, people need 1200mg a day, and pregnant and nursing mothers require 1000mg a day. To better absorb all of this wonderful calcium it is important to have sufficient amounts of Vitamin D in our diet too. Another reason calcium is essential for oral health is that not getting enough can raise your risk for periodontal (gum) disease. In studies comparing calcium intake and gum disease, the healthiest teeth were seen in people who consumed more than 800mg a day. Those who consumed less than 500mg were 54 percent more likely to develop gum disease. Sources of calcium can be found in leafy green vegetables and of course in dairy products. Foods such as milk, cheese and yoghurt are full of calcium. Other sources include calcium-fortified juices, breakfast cereal and canned sardines. Maintaining your calcium intake is important as we get older, as bone mass and skeletons become more fragile as we age. Getting plenty of calcium, incorporating weight-bearing exercise into your routine and cutting back on alcohol will also help to keep your bones and teeth in tip-top condition.
On a steam locomotive, a wheel which is driven by the locomotive's pistons (or turbine, in the case of a steam turbine locomotive). On a conventional, non-articulated locomotive, the driving wheels are all coupled together with side rods (also known as coupling rods); normally one pair is directly driven by the main rod (or connecting rod), which is connected to the end of the piston rod; power is transmitted to the others through the side rods. On an articulated locomotive or a rigid-framed locomotive with divided drive, such as a Duplex locomotive, driving wheels are grouped into sets which are linked together within the set. Driving wheels are generally larger than leading or trailing wheels. Since a conventional steam locomotive is directly driven, one of the few ways to 'gear' a locomotive for a particular performance goal is to size the driving wheels appropriately. Freight locomotives generally had driving wheels between 40" and 60" in diameter; dual-purpose locomotives generally between 60" and 70"; and passenger locomotives between 70" and 100" or so.
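Since a directly driven wheel covers one circumference of ground per revolution, the "gearing" effect of driver diameter is easy to quantify. The sketch below is illustrative only: the 300 rpm rotational limit is an assumed figure chosen for the example, not a historical specification.

```python
import math

def speed_mph(driver_diameter_in, rpm):
    """Ground speed of a directly driven wheel: circumference (in inches)
    times revolutions per minute, converted to mph (63,360 in per mile)."""
    return math.pi * driver_diameter_in * rpm * 60.0 / 63360.0

# At the same assumed 300 rpm, larger drivers give a higher top speed:
for diameter in (56, 63, 80):  # freight, dual-purpose, passenger sizes
    print(f'{diameter}" drivers: {speed_mph(diameter, 300):.0f} mph')
```

At a fixed rotation rate, the 80-inch passenger driver covers over 40% more ground per minute than the 56-inch freight driver, which is exactly the trade-off the sizing convention reflects.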
The Human Rights of Women Numerous international and regional instruments have drawn attention to gender-related dimensions of human rights issues, the most important being the UN Convention on the Elimination of All Forms of Discrimination against Women (CEDAW), adopted in 1979 (see box). In 1993, 45 years after the Universal Declaration of Human Rights was adopted, and eight years after CEDAW entered into force, the UN World Conference on Human Rights in Vienna confirmed that women’s rights were human rights. That this statement was even necessary is striking – women’s status as human beings entitled to rights should have never been in doubt. And yet this was a step forward in recognizing the rightful claims of one half of humanity, in identifying neglect of women’s rights as a human rights violation and in drawing attention to the relationship between gender and human rights violations. CEDAW: The International Bill of Rights for Women The Convention on the Elimination of All Forms of Discrimination Against Women defines the right of women to be free from discrimination and sets the core principles to protect this right. It establishes an agenda for national action to end discrimination, and provides the basis for achieving equality between men and women through ensuring women’s equal access to, and equal opportunities in, political and public life as well as education, health and employment. CEDAW is the only human rights treaty that affirms the reproductive rights of women. The Convention has been ratified by 180 states, making it one of the most ratified international treaties. State parties to the Convention must submit periodic reports on women’s status in their respective countries. CEDAW’s Optional Protocol establishes procedures for individual complaints on alleged violations of the Convention by State parties, as well as an inquiry procedure that allows the Committee to conduct inquiries into serious and systematic abuses of women’s human rights in countries. So far the Protocol has been ratified by 71 States. In 1994, the International Conference on Population and Development in Cairo (ICPD) articulated and affirmed the relationship between advancement and fulfilment of rights and gender equality and equity. It also clarified the concepts of women’s empowerment, gender equity, and reproductive health and rights. The Programme of Action of ICPD asserted that the empowerment and autonomy of women and the improvement of their political, social, economic and health status was a highly important end in itself as well as essential for the achievement of sustainable development. In 1995, the Fourth World Conference on Women in Beijing generated global commitments to advance a wider range of women’s rights. The inclusion of gender equality and women’s empowerment as one of the eight Millennium Development Goals was a reminder that many of those promises have yet to be kept. It also represents a critical opportunity to implement those promises. In spite of these international agreements, the denial of women’s basic human rights is persistent and widespread. For instance: - Over half a million women continue to die each year from pregnancy and childbirth-related causes. - Rates of HIV infection among women are rapidly increasing. Among those 15-24 years of age, young women now constitute the majority of those newly infected, in part because of their economic and social vulnerability. - Gender-based violence kills and disables as many women between the ages of 15 and 44 as cancer. 
More often than not, perpetrators go unpunished.
- Worldwide, women are twice as likely as men to be illiterate.
- As a consequence of their working conditions and characteristics, a disproportionate number of women are impoverished in both developing and developed countries. Despite some progress in women's wages in the 1990s, women still earn less than men, even for similar kinds of work.
- Many of the countries that have ratified CEDAW still have discriminatory laws governing marriage, land, property and inheritance.
While progress has been made in some areas, many of the challenges and obstacles identified in 1995 still remain. In addition, the new challenges for women's empowerment and gender equality that have emerged over the past decade, such as the feminization of the AIDS epidemic, the feminization of migration, and increasing trafficking in women, need to be more effectively addressed.
Anyone Can Stand Up for the Rights of Women
Any individual, non-governmental organization, group or network may submit communications (complaints/appeals/petitions) to the Commission on the Status of Women containing information relating to alleged violations of human rights that affect the status of women in any country in the world. The Commission on the Status of Women considers such communications as part of its annual programme of work in order to identify emerging trends and patterns of injustice and discriminatory practices against women for purposes of policy formulation and development of strategies for the promotion of gender equality.
UNFPA at work
In every region of the world, UNFPA is working to promote women's rights and end discrimination against them. The Fund is increasingly involved in protecting the rights of women affected by conflict, and in ensuring that women can have an active role in peacebuilding and reconstruction efforts. The Fund's programming also addresses all 12 of the critical areas of concern identified at Beijing. In many cases, UNFPA is able to multiply its effectiveness by supporting legislation that protects the rights of women, such as groundbreaking laws in Ecuador and Guatemala granting women the right to reproductive health care. In some cases, the Fund gets results by partnering with men, as in Uganda. The Fund also supports services for women who are victimized by various forms of gender-based violence. For instance, it supports help for women who are abused by their husbands in the Gaza Strip. It has helped establish a shelter for women who have been trafficked in Moldova and funds a safe haven for girls running away from female genital mutilation or forced marriage in Kenya.
- Beijing at Ten: UNFPA's Commitment to the Platform for Action
- Promoting Gender Equality
- Women's Rights are Human Rights
- State of World Population 2005: The Promise of Equality
In 2010, The Dawes Arboretum, along with state and federal agencies, private and public organizations, and respected field professionals, formed the Ohio Native Plant Network (ONPN).
Ohio Native Plant Defined
An Ohio native plant is one that was part of the Ohio landscape in the late 1700s, before European settlers arrived, and when nearly 95 percent of Ohio was forested. The rich woodlands with towering trees, some standing 100-150 feet tall, were some of the most impressive of all temperate zone hardwood forests. The rapid European settlement of Ohio resulted in a steady decline of forest cover and wetlands, as they were cleared and drained to make way for agriculture. The native plant species that the ONPN focuses on are those that survived the vast changes to the ecosystems during times of settlement. The ONPN establishes guidelines for the collection, propagation and distribution of trees, shrubs and herbaceous perennials of known wild origin. The overarching goal is to create a vision to enhance native plant biodiversity, conserve local genotypes and restore native plant communities in Ohio.
Enhance native plant biodiversity
To enhance native plant biodiversity, the group considers and addresses the following issues:
- Habitat loss and degradation – This is mostly due to competition from non-native, invasive plant species.
- The threat of climate change to present Ohio native plant species – As species decline and disappear, species more tolerant of a warmer, wetter climate will move north, and many Ohio native plant species that coexisted and co-evolved with these ecosystems may disappear.
Conserve local genotypes
To attain the goal of conserving local genotypes, the ONPN encourages planters to buy plant material that originates from Ohio and the local eco-region. The more closely the environmental conditions of the plant material source (the seeds) match those of the planting site, the better the plant grows. This occurs because species have become genetically adapted to their local conditions. Therefore, buying locally will preserve not only the diversity of Ohio native plant species but also genetic diversity within each species.
Restore native plant communities
Another goal of the ONPN is to restore native plant communities in Ohio by working collaboratively with Ohio nursery and landscape industries to ensure the availability and use of common Ohio native plants of known local genotypes. To accomplish this, seeds from common Ohio native plants of known wild origins are collected and then dispersed to local nursery and landscape industries for their use and for eventual distribution to anyone responsible for creating backyard landscapes as well as restoring natural ecosystems. Promoting public awareness of Ohio native plant conservation and the value of choosing Ohio native plants of local genotypes for home landscapes will be a key component of the success of the project as it moves forward.
All subspecies of gorilla are endangered. Western lowland gorillas are the most numerous, with an estimated population of about 175,000 individuals. The populations in Equatorial Guinea and Nigeria are critically endangered. The eastern lowland gorilla population is estimated at 5,000 to 10,000 individuals. Population estimates for both western and eastern lowland gorillas are based on habitat availability, and actual populations are probably lower. There are approximately 700 mountain gorillas in total (both subspecies) based on population censuses, some groups of which are critically endangered. The Cross River gorillas are found in five small pockets of habitat along the border of Nigeria and Cameroon and number approximately 150 animals. Some conservation organizations estimate a reduction of 80 percent in the next ten years, based on the decline in quality of habitat. As of November 2009, there were 337 gorillas in North American facilities, and the world zoo population was about 750 gorillas. Almost all zoo gorillas are western lowland. The only eastern lowland gorillas in captivity live in Europe. There are no mountain gorillas in zoos. Threats to gorillas come from humans. The political instability in central western Africa has led to a decline in the number of gorillas. Humans kill individuals in order to capture young, to get trophy body parts (less so now), and for bush meat. The greatest threat to gorillas, and all apes, is habitat destruction caused by logging and agricultural expansion. The bushmeat trade, facilitated by logging, has become an immediate threat to the western lowland gorilla population. In one year alone, approximately 2,000 gorillas were killed for bushmeat. Great pressure is also being put on mountain and eastern lowland gorillas by war and refugee movements. There are international guidelines and laws to protect gorillas. Notably, the World Conservation Union has developed criteria to identify threatened species and drafted the Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES), under which trade in gorillas or gorilla parts is illegal. There are also national laws and programs to protect gorillas. See the Conservation section for more details. Unfortunately, compliance and enforcement remain problematic.
Gorillas at the National Zoo
The National Zoo exhibits six western lowland gorillas in the Great Ape House. They live in one family group.
Species Survival Plan
The gorillas at the National Zoo are managed under a Species Survival Plan. The Zoo has experienced successful breeding of gorillas.
E = I x R = 4 x 3 = 12 volts
A series circuit is one in which the resistances or other electrical devices are connected end to end so that the same current flows in each part of the circuit. Each part of the circuit adds its opposition to the flow of current to the opposition offered by every other part. An example of a series circuit is shown in figure 4. Each resistor opposes the flow of current from the power source. The total opposition is therefore the sum of the resistances of all five resistors. The resistor having the highest resistance value will develop the largest value of voltage across it. Since all components of the circuit under discussion are in series, it is evident that the same current that flows through any one component flows through all components included within the circuit. The total value of current is therefore inversely proportional to the total opposition to the flow of current. It is also evident that the current leaving the circuit must be of the same value as that entering the circuit.
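As a quick worked example of these rules, the following sketch computes the behaviour of a hypothetical five-resistor series circuit; the resistor values and supply voltage are invented for the illustration.

```python
resistors = [3.0, 5.0, 10.0, 2.0, 4.0]  # ohms; five resistors in series
supply_voltage = 48.0                   # volts

total_resistance = sum(resistors)            # series oppositions add
current = supply_voltage / total_resistance  # I = E / R, same everywhere

for r in resistors:
    print(f"{r:4.1f} ohm drops {current * r:5.2f} V")  # E = I x R
print(f"Total: {total_resistance:.1f} ohm, current: {current:.2f} A")
# The 10-ohm resistor develops the largest drop, and the five drops
# sum back to the 48 V supply.
```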
Infancy is a time of intense development. Babies start out with little more than instinctual reflexes and an innate ability to learn. Over the course of two years, they progress to the point where they have recognizable personalities; are able to move themselves from place to place and manipulate things; and understand how certain important aspects of the world operate (such as object permanence: the understanding that objects continue to exist even when you are not looking at them). They understand the basics of how to make their wishes known, have formed attachments and relationships, and have learned basic ways of managing their emotions and impulses. While these achievements are tremendous and set the stage for later learning, they are also commonplace. So long as children are born without significant illness, and so long as they are properly nurtured and cared for, their development towards these achievements will likely progress uneventfully. The key phrase is, of course, "properly nurtured." As Bronfenbrenner stressed, child development is influenced by the environment at every level. Children progress toward milestones through interaction with their physical environments, with loving parents, and with the larger world. Problematic nurturing, or a lack of it, has a negative impact on their ability to progress smoothly. Children who are not exposed to language and communication stimulation, either because of hearing problems or caregivers' neglect to speak with and around them, can have difficulty learning more complex language skills in later years. Similarly, children who are deprived of consistent nurturing care can grow to mistrust others and have problems bonding with caregivers or other people in later years. Good parenting skills can help smooth out some of the inevitable bumps and bruises that might threaten to derail more sensitive or temperamental children. Though all parents will make mistakes in the 22 years it takes to raise a child, love, attention, and care provide a strong bedrock for healthy child development. Development doesn't stop here, of course. The next center in this series discusses how children progress into the next stage of development, the preoperational stage, which lasts from ages 2 through 7.
Predictive Modeling is a process through which a future outcome or behavior is predicted based on the past and current data at hand. It is a statistical analysis technique that enables the evaluation and calculation of the probability of certain results. Predictive modeling works by collecting data, creating a statistical model and applying probabilistic techniques to predict the likely outcome.
Precision looks at the ratio of correct positive observations. The formula is True Positives / (True Positives + False Positives). Note that the denominator is the count of all positive predictions, including positive predictions of events which were, in fact, negative.
Power Analysis is an important aspect of experimental design. It allows us to determine the sample size required to detect an effect of a given size with a given degree of confidence. There are four parameters involved in a power analysis; the researcher must 'know' 3 and solve for the 4th:
1. Alpha: the probability of finding significance where there is none (the probability of a Type I error). Usually set to 0.05.
2. Power: the probability of finding true significance, equal to 1 - beta, where beta is the probability of not finding significance when it is there (the probability of a Type II error). Usually set to 0.80.
3. Sample size: usually the parameter you are solving for, though it may be known and fixed due to study constraints.
4. Effect size: usually the 'expected effect' is ascertained from pilot study results; from published findings of a similar study or studies (it may need to be calculated from results if not reported, or translated as design-specific using rules of thumb); from a field-defined 'meaningful effect'; or from an educated guess (based on informal observations and knowledge of the field).
The purpose of the Paired t-Test is to determine whether there is statistical evidence that the mean difference between paired observations on a particular outcome is significantly different from zero. The Paired-Samples t Test is a parametric test. This test is also known as the Dependent t-Test.
Out-Of-Sample Evaluation means to withhold some of the sample data from the model identification and estimation process, then use the model to make predictions for the hold-out data in order to see how accurate they are and to determine whether the statistics of their errors are similar to those that the model made within the sample of data that was fitted.
Multinomial Logistic Regression is the regression analysis to conduct when the dependent variable is nominal with more than two levels. Thus it is an extension of logistic regression, which analyzes dichotomous (binary) dependents. Since the output of the analysis is somewhat different from the logistic regression's output, multinomial regression is sometimes used instead. Like all linear regressions, the multinomial regression is a predictive analysis. Multinomial regression is used to describe data and to explain the relationship between one dependent nominal variable and one or more continuous-level (interval or ratio scale) independent variables.
Model Fitting is running an algorithm to learn the relationship between predictors and outcome so that you can predict the future values of the outcome. It proceeds in three steps: First, you need a function that takes in a set of parameters and returns a predicted data set. Second, you need an 'error function' that provides a number representing the difference between your data and the model's prediction for any given set of model parameters. Third, you need to find the parameters that minimize this difference.
Once you set things up properly, this third step is easy.
A Markov Model, in probability theory, is a stochastic model used to model randomly changing systems where it is assumed that future states depend only on the current state, not on the events that occurred before it (defined as the Markov property). Generally, this assumption enables reasoning and computation with the model that would otherwise be intractable. For this reason, in the fields of predictive modeling and probabilistic forecasting, it is desirable for a given model to exhibit the Markov property. There are four common Markov models used in different situations, depending on whether every sequential state is observable or not, and whether the system is to be adjusted on the basis of observations made. These are the Markov chain (the simplest model), the Hidden Markov Model (a Markov chain with only part of the states observable), the Markov decision process (a chain with an applied action vector) and the Hidden Markov decision process. There is also the Markov random field (or Markov network), which may be considered a generalization of a Markov chain in multiple dimensions, and Hierarchical Markov Models, which can be applied to categorize human behavior at various levels of abstraction.
Manhattan Distance is the distance between two points measured along axes at right angles. The name alludes to the grid layout of the streets of Manhattan, which determines the shortest path a car could take between two points in the city. A limitation of the Manhattan Distance heuristic is that it considers each tile independently, while in fact tiles interfere with each other.
MAE - Mean Absolute Error, in statistics, is a quantity used to measure how close forecasts or predictions are to the eventual outcomes. The mean absolute error is an average of the absolute errors, MAE = (1/n) Σ |y_i - x_i|, where y_i is the prediction and x_i the true value. Note that alternative formulations may include relative frequencies as weight factors. The mean absolute error uses the same scale as the data being measured. This is known as a scale-dependent accuracy measure and therefore cannot be used to make comparisons between series using different scales. The mean absolute error is a common measure of forecast error in time series analysis, where the term "mean absolute deviation" is sometimes used in confusion with the more standard definition of mean absolute deviation. The same confusion exists more generally.
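To make a few of these definitions concrete, here is a small Python sketch. All inputs are invented, and the final power-analysis call uses the statsmodels library's TTestIndPower as one possible tool; the plain functions simply follow the formulas given above.

```python
from statsmodels.stats.power import TTestIndPower

def precision(tp, fp):
    """True positives over all positive predictions: TP / (TP + FP)."""
    return tp / (tp + fp)

def mean_absolute_error(y_true, y_pred):
    """Average of |prediction - truth|; same scale as the data."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def manhattan_distance(a, b):
    """Distance measured along axes at right angles."""
    return sum(abs(x - y) for x, y in zip(a, b))

print(precision(tp=90, fp=10))                           # 0.9
print(mean_absolute_error([3, -0.5, 2], [2.5, 0.0, 2]))  # ~0.33
print(manhattan_distance((1, 2), (4, 6)))                # 3 + 4 = 7

# Power analysis: fix alpha, power and effect size, solve for sample size.
n = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05, power=0.80)
print(round(n))  # ~64 participants per group for a two-sample t-test
```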
Featured College Readiness Standard
Cross-Disciplinary Standard I - Key Cognitive Skills
I. Key Cognitive Skills
A. Intellectual curiosity
- Engage in scholarly inquiry and dialogue.
- Accept constructive criticism and revise personal views when valid evidence warrants.
B. Reasoning
- Consider arguments and conclusions of self and others.
- Construct well-reasoned arguments to explain phenomena, validate conjectures, or support positions.
- Gather evidence to support arguments, findings or lines of reasoning.
- Support or modify claims based on the results of an inquiry.
C. Problem solving
- Analyze a situation to identify a problem to be solved.
- Develop and apply multiple strategies to solve a problem.
- Collect evidence and data systematically and relate them directly to solving a problem.
D. Academic behaviors
- Self-monitor learning needs and seek assistance when needed.
- Use study habits necessary to manage academic pursuits and requirements.
- Strive for accuracy and precision.
- Persevere to complete and master tasks.
E. Work habits
- Work independently.
- Work collaboratively.
F. Academic integrity
- Attribute ideas and information to source materials and people.
- Evaluate sources for quality of content, validity, credibility, and relevance.
- Include the ideas of others and the complexities of the debate, issue, or problem.
- Understand and adhere to ethical codes of conduct.
Cheap, compact chip could expand T-ray scanning potential By Ben Coxworth December 12, 2012 Terahertz technology (or T-Ray, for short), sounds like something out of a science fiction movie. It utilizes high-frequency terahertz waves – which are located between microwaves and far-infrared radiation on the electromagnetic spectrum – to see through solid matter without the harmful ionizing radiation of X-rays. Although T-Ray devices have yet to become compact and affordable, that could soon change thanks to new silicon microchips developed at the California Institute of Technology. Prof. Ali Hajimiri and postdoctoral scholar Kaushik Sengupta developed the tiny T-Ray chips using industry-standard complementary metal-oxide semiconductor (CMOS) manufacturing methods. One challenge that they faced, however, was the fact that the transistors on regular silicon chips simply cannot operate at high enough frequencies to amplify a T-Ray signal. The scientists got around this limitation by tuning and coordinating the frequencies of multiple transistors on one chip, using their combined power to achieve terahertz frequencies. “Traditionally, people have tried to make these technologies work at very high frequencies, with large elements producing the power. Think of these as elephants,” Hajimiri explained. “Nowadays we can make a very large number of transistors that individually are not very powerful, but when combined and working in unison, can do a lot more. If these elements are synchronized – like an army of ants – they can do everything that the elephant does and then some.” Even once the T-Ray signal could be amplified, there was still another problem – its frequency was too high to allow it to be transmitted via a normal wire antenna. Once again, the answer lay in spreading the task around. Numerous small pieces of metal were incorporated into the chip, which all worked together to turn the chip itself into the antenna. The end result: inexpensive chips that are small enough to fit on a fingertip, operate at a speed almost 300 times faster than traditional silicon chips, and that have a directable T-Ray signal over 1,000 times stronger than that of previous technologies. According to Caltech, the chips constitute “the world's first integrated terahertz scanning arrays.” Using the chips in a scanning device, Hajimiri and Sengupta were able to do things such as imaging a razor blade hidden inside a piece of plastic, and analyzing the fat content of a piece of chicken. Ultimately, it is hoped that such chip-equipped scanners could be used for applications such as luggage inspection, bomb or drug detection, and Star Trek tricorder-like medical imaging. “We are not just talking about a potential,” said Hajimiri. “We have actually demonstrated that this works.” A paper on the research was recently published in the IEEE Journal of Solid-State Circuits. Update: This story was amended on Dec. 
13, 2012, to state that X-rays don't generate harmful ionizing radiation, but are ionizing radiation.
Denoting a system of algebraic notation used to represent logical propositions, especially in computing and electronics.
- ‘As an example, consider the AND function from Boolean algebra.’
- ‘Previously, users could construct limited Boolean queries using drop-down menus.’
- ‘Indeed, today, all computers are based on Boolean algebra, another example that mathematics is often a hundred years ahead of its time.’
- ‘Devoting resources to Boolean training for their employees and to an assisted search interface for their website could be a valuable investment.’
- ‘You can use Boolean functions to create custom objects by adding, subtracting, or intersecting two objects together.’
- ‘We want to get rid of all this Boolean query stuff.’
- ‘The program gives the user plenty of 3D tools including true Boolean operations.’
- ‘He did this in particular with submarine attack and navigation, employing Boolean algebra.’
- ‘He continued his work showing how Boolean algebra could be used to synthesise and simplify relay switching circuits.’
- ‘It began the algebra of logic called Boolean algebra which now finds application in computer construction, switching circuits etc.’
- ‘One of the more interesting things that you can do with Boolean gates is to create memory with them.’
- ‘It was based on using Boolean algebra with computer circuitry.’
- ‘In addition to his work on semigroups, number theory and finite fields, Schwarz contributed to the theory of non-negative and Boolean matrices.’
- ‘Then in 1934 he published two papers on Boolean algebras.’
- ‘Fundamental to these operations are electronic gates for handling Boolean logic.’
- ‘Search can now handle full-nested Boolean queries.’
- ‘Table 6 shows actors' strategies transformed into Boolean equations.’
- ‘During this time he published three papers on Boolean logic and one on probability.’
- ‘Developing a search strategy (choosing relevant terms and applying Boolean logic) before beginning a search is the single most important step in searching.’
- ‘Only one participant used explicit Boolean operators (in this case ‘AND’).’
A binary variable, having two possible values called "true" and "false".
- ‘Its search defaults to a Boolean AND and supports phrase searching with quotation marks.’
- ‘Another general algebraic notion which applies to Boolean algebras is the notion of a free algebra.’
- ‘There are two types of options, Boolean (true/false) options and those that take a value.’
- ‘The full Boolean is available from the main page search box and supports nesting.’
To end HIV transmission, stigma and discrimination in Queensland by 2020 we must raise awareness about how HIV is different now, and dispel the myths and misconceptions that still exist around HIV. In this section you will find information on:
It may surprise you to learn that HIV (human immunodeficiency virus) and AIDS (acquired immune deficiency syndrome) are still shrouded in myth and misconception, despite decades of awareness campaigns and improved access to information through the internet. For instance, did you know that 47% of people still think HIV and AIDS are the same thing? Or that 21% of people think you can get HIV from kissing?
HIV is different now. Over the last few years, HIV treatment and prevention have changed significantly in Queensland, Australia, and the world. With major advances in testing and treatment, people living with HIV are now being diagnosed earlier and living long, healthy and fulfilling lives. Knowing the facts about HIV is an important part of preventing it. Many people fear HIV because they do not know the facts, such as how HIV is transmitted or what it is like to live with HIV.
HIV stands for human immunodeficiency virus. If HIV is undiscovered or untreated it can affect a person’s immune system—the body’s defence against disease. When HIV attacks the immune system, it can cause many different infections and illnesses known as "opportunistic infections". Today, there are several effective treatments available to treat HIV and stop the virus from damaging the immune system. However, if left untreated, HIV can progress and cause Acquired Immunodeficiency Syndrome (AIDS), the most advanced stage of infection. It’s important to note that HIV is not AIDS. Due to advancements in treatment and testing in Australia, HIV rarely progresses to AIDS.
AIDS stands for acquired immune deficiency syndrome. It’s a term which only applies to the most advanced stages of HIV infection. AIDS cannot be transmitted between people; a person with HIV is considered to have developed AIDS when the immune system is so severely damaged by the virus that it can no longer fight off diseases and infections that the body would normally be able to cope with. These “opportunistic infections” can now be prevented with treatment.
HIV transmission can be simplified to a three-part equation: It is important to understand that HIV cannot be passed on through saliva, vomit, urine or faeces. You cannot contract HIV from kissing, hugging, sharing eating utensils, shaking hands or any other everyday social contact. HIV can only be transmitted by the following fluids from a person with HIV coming into direct contact with another person and entering their bloodstream:
- Blood.
- Semen.
- Vaginal fluid.
- Rectal fluid.
- Breast milk.
HIV is commonly transmitted in the following ways:
- Having sex without condoms or a condom breaking (vaginal, anal, or oral).
- Sharing needles and other drug injecting equipment contaminated with blood.
- Other blood-to-blood contact.
- Mothers who are HIV positive can transmit the virus to their baby (during pregnancy, birth or breastfeeding).
In Australia, there is no longer a risk of contracting HIV through donated blood and blood products (e.g. blood transfusion) as all donated blood, organs, tissues and semen are screened for HIV.
Anybody can get HIV. HIV is a virus; it can enter the body if you are rich or poor; young or old; black or white; gay or straight; married or single. It’s what you do, not who you are, that puts you at risk.
If you or someone you know is at risk of contracting HIV, learn more about HIV prevention and testing. You can get confidential advice any time about HIV by contacting one of Queensland’s HIV organisations. In the video below, you will hear inspiring true stories from Queenslanders living with HIV and the stigma they and their loved ones deal with every day. Learn how, despite the many misconceptions surrounding HIV, they continue to live and love life to the fullest.
Regardless of context, the causes and consequences of stigma and discrimination are the same worldwide. People living with HIV can find it hard to tell others about their condition for fear of rejection or prejudice from friends, family, colleagues, or members of their local community, not to mention prospective sexual/romantic partners. The stigma surrounding HIV may also cause people to feel reluctant or fearful about having an HIV test or accessing treatment and care. HIV prejudice is often the result of a lack of knowledge about how HIV is passed on and an unfounded fear of becoming infected. By encouraging people to talk about HIV, learning the truth and reducing your own stigmatising behaviours, the stigma surrounding HIV can be overcome. Here are some examples of everyday stigma to avoid:
- Referring to HIV as AIDS.
- Presuming that because someone is living with HIV, they’re sick, contagious, or dying.
- Believing HIV can be contracted by casual contact or kissing.
- Using the word “clean” when referring to a negative HIV status, or combining drug use with HIV status, often referred to in online personal ads as the acronym “DDF”. A better acronym to use is “SSO”—safe sex only.
- Dismissing, judging, or rejecting someone when they disclose their HIV positive status.
- Perceiving people living with HIV as promiscuous, or “deserving” of becoming HIV positive.
- Discussing someone’s HIV status, whether it is rumour or fact, without their consent or knowledge.
- Avoiding getting tested for HIV for fear of a positive result.
- Laws criminalising people living with HIV.
To find more HIV stigma infographics and learn more about HIV stigma visit www.thestigmaproject.org.
In 2018-2019, the National Association of People with HIV Australia (NAPWHA) conducted the Stigma Audit to try to understand stigma in an Australian context. The results showed that participants who completed the survey experienced a moderate level of stigma:
- 34% of respondents agreed with ‘I feel guilty because I have HIV’.
- 77% agreed with ‘telling someone I have HIV is risky’.
- 35% disagreed with ‘I never feel ashamed of having HIV’.
- 42% agreed with ‘I work hard to keep my HIV a secret’.
- 40% agreed with ‘most people think that a person with HIV is disgusting’.
- 40% agreed with ‘I have been hurt by how people reacted to learning I have HIV’.
Notably, the media was singled out as an ongoing source of stigma for people living with HIV. For more information on how to ensure that media reports on HIV in Australia are accurate and sensitive, visit the HIV Media Guide Toolkit.
'St Luke's Hospital', print, London, England, 1785
This print shows St Luke’s Hospital for Lunatics shortly after a new building was built in 1782 in Old Street, London. St Luke’s was a hospital for the mentally ill founded in 1751 to relieve the pressure on London’s other asylum, Bethlem Hospital. St Luke’s was one of the first teaching hospitals to study mental illness. Unlike other asylums, visitors were not permitted purely to be amused by the plight of the patients. The illustration was engraved by an artist named Deeble for The European Magazine and London Review, which was launched in 1782.
Glossary: print
Pictorial works produced by transferring images by means of a matrix such as a plate, block, or screen, using any of various printing processes. When emphasizing the individual printed image, use "impressions." Avoid the controversial expression "original prints," except in reference to discussions of the expression's use. If prints are neither "reproductive prints" nor "popular prints," use just "prints."
Glossary: engraving
A technique to obtain prints from an engraved surface. Engraving is the practice of cutting into a hard, usually flat surface.
Glossary: psychiatric hospital
Psychiatric hospital specialising in the treatment of serious mental illness, usually for relatively long-term patients.
Glossary: asylum
A historic term for a psychiatric hospital. The term in this context was common in the 1700s and 1800s, but is no longer in use.
Glossary: mental illness
Who were the 'mentally ill'? We use this phrase to reflect the historical descriptions of individuals with a variety of behaviours, mental health problems and pathologies. Historically, the concept of 'madness' or 'insanity' was used to describe people who may have had what we would now consider psychiatric disorders. It often also included those showing symptoms of syphilis, epilepsy, depression, or in some cases merely behaviour considered to be eccentric or outside commonly accepted norms.
Westphalia, which means western plain, is the contemporary Bundesland, or state, of Nordrhein-Westfalen. After the fall of the Roman Empire in the 5th century, the Saxons inhabited the territories in north central Germany. Westphalia was a part of the old duchy of Saxony, which included most of the land between the Rhine and the Elbe between the 9th and 12th centuries. In the 9th century, the Frankish Emperor Charlemagne incorporated Saxony and the other German duchies into the Carolingian Empire. Charlemagne's conquest brought temporary unity to the duchies, but the collapse of the Carolingian Empire loosened these bonds of common order. Tribal consciousness and local particularism fought all centralizing influences until the late 19th century. Under powerful dukes, the duchies of Saxony, Franconia, Swabia, Lorraine and Bavaria, which were originally districts of the Carolingian Empire, became independent political entities in the 10th and 11th centuries. However, in 911, the German dukes recognized the need for a common leader and they elected Henry of Saxony as their king. The most powerful of this line of Saxon kings was Otto I, who became king of Germany and persuaded the Dukes of Lorraine, Franconia, Swabia and Bavaria to act as his attendants in the coronation ceremony at Aachen. The King subordinated the dukes, made the German Church a national institution, and fused the German tribes into a powerful state. Most importantly, Otto was crowned emperor by Pope John XII in 962, which marked the genesis of the medieval Holy Roman Empire. In the 12th century, Frederick Barbarossa of the House of Hohenstaufen attempted to build a lasting foundation for the German Empire. In 1180, Frederick placed Henry the Lion of the Welfen family, who was the Duke of both Bavaria and Saxony, under ban and divided up his former duchy. Eventually the duchy of Westphalia came under the control of the Archbishopric of Cologne, a powerful church and state government. During the Reformation, Westphalia remained Catholic and Saxony converted to the Protestant faith. Westphalia has often been associated with the important Treaty of Westphalia, which ended the Thirty Years War but divided Germany's kingdoms and principalities into Protestant and Catholic regions. Following the French invasion during the Napoleonic Wars, Westphalia was declared a Kingdom, but it soon fell under Prussian dominance. In the 19th century, the course of Westphalian history was drastically altered. After the Congress of Vienna in 1815, the various German states began to move toward the creation of a modern and united German nation. After the Revolutions of 1848, and the rise of Otto von Bismarck, Germany expanded territorially, developed its economy, and emerged as a great world power. German Unification was proclaimed in 1871, by which time Germany had attained roughly the size and boundaries it would have in the 20th century. Nordrhein-Westfalen is the most industrialized and populous state in the western part of Germany, and it is situated between the Weser and the Low Countries. This German state consists of the lower Rhineland, which includes the Ruhr region, the most industrialized area in the world, named the Kohlenpott or the coal pot. The state of Nordrhein-Westfalen also encompasses the northern edge of the Rhenish mountains and the basin around Munster. The eastern part of the state is a vast forest region.
The chief industries in Nordrhein-Westfalen are mining, mechanical engineering, textiles, glass, chemicals and tourism. The city of Duesseldorf is the present capital of Nordrhein-Westfalen, a state that combines the northern part of the former "Rheinprovinz" with Westphalia. The city of Bochum has coal mines, heavy and chemical industries, and a space exploration institute. Dortmund is Westphalia's largest city, a former Imperial City or Reichsstadt, and was a member of the Hanseatic League, a trading and commercial organization of the Middle Ages designed to foster trade among the European states. Nordrhein-Westfalen is also the site of Bonn and Cologne, the latter famous for its museum of the original Roman settlement and its beautiful historic cathedral.
Applying Educational Pedagogies

Just as theory underpins nursing practice, so too does theory support and inspire educational practice. Educational theories provide nurse educators with a unique lens for the development and evaluation of learning experiences. As a nurse educator, it is imperative that you become familiar with the theories that will help to drive your curriculum. This week, you will explore three pedagogies that are well known in the field of education: andragogy, Gagne's Nine Events of Instruction, and constructivism. Though andragogy and constructivism are essentially theories of learning and Gagne's Nine Events of Instruction is thought of as a model of learning, each is also referred to as a pedagogy, or the practice and science of teaching.

- Review Chapter 2, "Theoretical Basis of E-Learning," and Chapter 3, "Instructional Design for E-Learning in Nurse Education," of the Bristol and Zerwekh course text. Reflect on the ways nurse educators use pedagogies to guide the development of curricula.
- Search the Internet to locate a learning experience that adheres to the following guidelines:
  o The learning experience is described in detail through a step-by-step lesson plan, lesson summary, or an online video.
  o The learning experience is representative of nursing curriculum (i.e., the lesson teaches a technical skill or concept that could be beneficial to nursing students, staff, or patients).
  o You are able to share the learning experience with colleagues via a Word document, PDF, hyperlink, or another method.
- Search the Walden Library as well as other reputable sites to locate one scholarly article related to the use of your assigned pedagogy. Consider how your article and your assigned pedagogy are applicable to the education of nursing students, staff, and/or patients. Conduct further research as needed.
- Examine your selected learning experience through the lens of your assigned pedagogy.
  o Is this pedagogy implicitly or explicitly used in this lesson?
  o If you were the instructor, how might you integrate (or further include) this pedagogy to enhance the learning experience?
As the major absorbing aerosol, black carbon exerts positive radiative forcing on the atmosphere and affects the radiation balance of the earth-atmosphere system, and hence aerosol-boundary layer interactions. Although air quality in Beijing has improved significantly since the implementation of the Clean Air Action Plan in 2013, the long-term changes in black carbon and the optical properties of aerosols are poorly known, let alone their impacts. Prof. Yele Sun and his team from the Institute of Atmospheric Physics (IAP) of the Chinese Academy of Sciences conducted nine years of measurements of black carbon and the light extinction coefficient in Beijing from 2012 to 2020. By analysing the long-term changes in black carbon, single-scattering albedo, mass extinction efficiency, brown carbon and the absorption Angstrom exponent, they demonstrated the response of black carbon and optical properties to the Clean Air Action Plan, and evaluated the impacts of black carbon and brown carbon on direct radiative forcing. This study was published in Atmospheric Chemistry and Physics. The researchers found that black carbon decreased by approximately 71% during the last decade in Beijing, and that the decreases differed by season. Black carbon showed the most significant reductions in autumn and at night, due to decreased emissions from biomass burning and heavy-duty vehicles. "The particle extinction coefficient, which is directly related to visibility, also decreased by 47%. Our results support that the Clean Air Action Plan works in mitigating air pollution in Beijing," said Prof. Sun. However, the differing changes in black carbon and the extinction coefficient have caused increases in single-scattering albedo and mass extinction efficiency. That is, scattering aerosols are becoming more important in affecting aerosol radiative forcing. "The increased mass extinction efficiency might not be a good sign because it will bring new challenges for improving atmospheric visibility in Beijing in the future," said Prof. Sun. According to the study, the changes in black carbon, brown carbon and aerosol optical properties have significant implications for their radiative effects and for aerosol-boundary layer interactions. Future mitigation of air pollution in megacities needs to take these changes into account.
Nearly 30 years after the nuclear disaster at Chernobyl, Ukraine, scientists made an interesting discovery regarding the birds that fly in the area: they have managed to adapt their bodies to the catastrophic environmental consequences. The study, led by Dr Ismael Galván of the Spanish National Research Council (CSIC), took place at the border between Ukraine and Belarus, where the risk of irradiation exceeds normal levels. The team took 152 birds of 16 different species from the "forbidden zone" and performed advanced laboratory experiments and tests that showed incredible results: based on previous tests, the scientists expected to see depleted levels of antioxidants and increased oxidative damage. Instead, they found the exact opposite, meaning that the birds have somehow managed to adapt to the background radiation in the area. This is another confirmation that, over time, animals and humans are able to adapt to radiation if they are exposed to low doses of it.
This 25-minute lesson is intended to give teachers a sense of what equitable representation for the GLBTQ community might feel like through text selection. It's based on "Reading LGBT-Themed Literature with Young People: What's Possible?" by Caroline T. Clarke and Mollie V. Blackburn. (PDF) (Estimated Running Time: 23-25 Minutes) Teachers will be able to identify how text selection can represent their classroom by building ideal decks representing their students. Required materials are in normal text; recommended but not required materials are italicized.
- A deck of standard cards for every 4 students. Not all cards will be used. The distribution of cards should be 1/2 spades (16), 1/3 clubs (10) and 1/6 hearts (6). Of the four decks, one will have 3 hearts, one will have 2 hearts, one will have 1 heart, and one will have no hearts. Clubs and spades will be evenly distributed.
How can we engage students in GLBTQ literature without actively or tacitly supporting a homophobic, heterosexist environment?
- Using GLBTQ literature in a classroom only once or twice makes it stand out as different from the norm.
- Deliberately citing the GLBTQ nature as a purpose for reading makes it stand out as different from the norm.
- By making it regular and representative and queering other texts, GLBTQ lit can be normalized.
- Some GLBTQ literature presents a heterosexist view of the GLBTQ community.
- Older GLBTQ lit is often not representative of the current GLBTQ culture.
I'm assuming you're all card players: that all of you have played a type of card game or have friends who do. What card games do you play at home, with your family, with friends? What makes a card game fun and engaging? Lead students toward recognizing participation, cooperation, equal competition, and possibly the ability to play it their way. Introduction to New Material: Before we begin, I want you to know two terms: heteronormative and heterosexist. Heteronormative means relating to a world that promotes heterosexuality or straightness. Heterosexist refers to active bias against same-sex relationships. Clarke and Blackburn's article states that because LGBTQ literature and issues are so new to the classroom, teachers do not have adequate means of introducing them that don't tacitly support the homophobic social norms currently in place. There are, however, four things you can do to make it easier. The first of these I've already done by assuming you're card players. Clarke and Blackburn state that if you enter a classroom assuming kids are LGBTQ allies already, rather than treating them as homophobic, it sets a precedent where the class can engage in the conversation from a positive standpoint. The rest will be shown in our activity for the night, and I'll explain them after. The decks of cards you have represent the books we might teach over the course of the year. Don't think about this too hard. For right now, they're just cards. In these decks are hearts, spades and clubs, no diamonds. The cards go from five to ace. Each deck is a unique set of eight cards taken from this distribution. Choose one suit, number or face card (King, Jack, Queen, etc.) to represent yourself. Don't show or tell anyone right now. Take a minute to write down what represents you, why you chose it, and three adjectives you would use to describe yourself/what represents you.
For example, I'm represented by 8s because it's my favorite number, it becomes the infinity sign when turned on its side, and it can symbolize glasses and wisdom; my three adjectives would be wise, infinite, and observant. Look through your deck of eight cards and analyze its makeup. Take a minute and write down what you think sets your deck apart. What sort of cards comprise it? What stands out? Group up (groups of 3 or 4). Compare your decks. Do not reveal what represents you. Discuss what's different. Compare what stood out to each of you. Now, without revealing what represents you, let's build a communal deck. Your job is to make sure you're represented in the deck. The communal deck will also be exactly eight cards. Decide together how to build this deck, drawing from each of your decks. Sorry to burst your bubble, but we're not actually going to play a card game. Instead, I want you to write about your experience today. How did you feel you were represented in your original deck? How about in the communal deck? What problems did you have making sure you were represented in the final deck? How do you think the others felt? If spades represented the traditional heteronormative canon, clubs represented minority-focused heteronormative texts, and hearts represented LGBTQ literature, how does that affect the way you look at your communal deck? Collect writings as the exit ticket. Clarke and Blackburn's essay points out that treating LGBTQ literature as special rather than normal already sets a classroom up for failure. If you only read one LGBTQ book or article, it will stand out, much like a single heart will stand out in a sea of black-suited cards. Making LGBTQ-friendly literature and concepts a regularity normalizes students to their existence. In addition, you want to make sure that the books you read reflect your class and allow them to explore their own experiences. Just as you were represented in the decks, your students are represented in the books you read. Clarke and Blackburn divide LGBTQ literature into three categories: "homosexual visibility," in which the story revolves around the LGBTQ characters' sexuality and the response to it, often battling homophobia; "gay assimilation," in which LGBTQ characters appear, but their sexuality isn't key to the plot; and "queer consciousness or community," where LGBTQ characters are shown to be in supportive communities and families regardless of the plot. Just because a book has an LGBTQ character doesn't mean it's representative of the LGBTQ members of your classroom, so presenting multiple and positive LGBTQ experiences is important for varied representation. Lastly, books, like card games, are supposed to be pleasurable. Choosing a book simply because it deals with a difficult topic is like playing solitaire: it'll take up some time, and you might feel accomplished at the end, but who's going to call it fun? Really. If you choose LGBTQ books that are enjoyable, your students will be more engaged and more willing to explore other books that are queer-friendly.
Racism is a belief that humans can be divided into a hierarchy of power on the basis of their differences in race and ethnicity, with some groups seen as superior to others on the sole basis of their racial or ethnic characteristics. Racism is frequently expressed through prejudice and discrimination. The belief can manifest itself through individuals, but also through societies and institutions. While xenophobia — or fear of those unlike us — has long been a part of human cultures, the concept of race first appeared in the English language around the 17th century. North Americans began to use the term in their scientific writings by the late 18th century. Racism began to be studied by scientists in the 19th century. It was used in an attempt to explain political and economic conflicts, as well as to justify European colonialism and imperialism across the world. By the mid-19th century, many racists believed the world's population could be divided into a variety of races: groups of people who shared similar physical attributes, such as skin colour and hair texture. This process of race categorization is referred to as racialization and is necessary for the emergence of racism as an ideology. Racism claims the human species can be divided into different biological groups that determine the behaviour and the economic and political success of individuals within each group. This belief views races as natural and fixed subdivisions of the human species, each with its distinct and variable cultural characteristics and capacity for the development of civilizations. Thus, racists believe that biological factors can be used to explain the social and cultural variations of humans. Racism also includes the belief that there is a natural hierarchical ordering of groups of people so that superior races can dominate inferior ones. Racist thinking presumes that differences among groups are innate and not subject to change. Thus, intelligence, attitudes and beliefs are viewed as unaffected by one's environment or history. The existence of groups at the bottom or top of the social hierarchy is interpreted as the natural outcome of an inferior or superior biological makeup and not the result of social influences. Racists reject social integration because they believe the mixing of groups would result in the degeneration of the superior group. If biological differences are not easily discernible, racists invent biological differences (for example, size of nose or colour of eyes). Racism does not exist because of the presence of objective, physical differences among humans, but rather because of the social recognition of and the importance attached to such differences. Racist ideology is based upon three false assumptions: that biological differences are equal to cultural differences; that biological makeup determines the cultural achievements of a group; and that biological makeup limits the type of culture a group can develop. Research shows these assumptions are wrong and largely based on the untenable position that biology is the single cause of everything. Evidence showing that differences within groups are greater than differences between groups, and that social factors have an impact on behaviour, argues strongly against racist beliefs. In the 1960s, the concept of racism was usually applied to the treatment of individuals and the belief that one individual was racially inferior.
The term has since broadened to include institutional racism — describing political, economic and social institutions that operate to the detriment of a specific individual or group. Cultural racism is based on the supposed incompatibility of cultural traditions rather than ideas of innate biological superiority. Racism can also be reflected in the ways that social institutions operate, by denying groups of people fair and equitable treatment. In this case we talk about structure and power, otherwise known as institutional racism. This includes the power to establish what is normal, necessary and desirable, and it reinforces superiority or preferences for one group over another. Examples of institutional racism in Canada's history are evident in its restrictive immigration policies and in policies against Indigenous peoples and non-white immigrants, particularly Asians, Black people, and Jews. Institutional racism also exists when policies or programs seem racially neutral but either intentionally or unintentionally put minority group members at a disadvantage. As an example, in certain provinces the process for selecting citizens for jury duty results in Indigenous people rarely being selected. These dimensions of racism reveal that power, and individuals in positions of power, can create or perpetuate racial policies or practises. Some of the most intense racist policies have been directed at Indigenous peoples. Until 1960, Indigenous adults could not vote in federal elections unless they first renounced their Indian Act status and gave up treaty rights. In 1880, the Canadian government began sponsoring residential schools designed to assimilate Indigenous children into Euro-Canadian culture. From the 1930s to the 1990s, institutions across the country were dedicated to assimilating Indigenous children into the dominant culture. Many children were taken from their parents and subjected to humiliating abuse, scientific experiments and poor living conditions. Indigenous culture was routinely insulted, and many students were beaten if they spoke their own language. Indigenous women are murdered or go missing in Canada at much higher rates than non-Indigenous women — a fact that many believe stems from the impact of racism. Indigenous women account for only 4.3 per cent of Canadian women, but 16 per cent of all homicide victims. Black Canadians have faced higher scrutiny from police, and reports show evidence of racial profiling. One study found that 33.6 per cent of drivers stopped by police in Toronto were described as Black, even though Black people make up only about 8.1 per cent of the total population. Over the past quarter-century, the provincial and federal governments have implemented laws to combat racism. While blatant racist ideology is uncommon (see Ku Klux Klan), examples of racist beliefs are still evident. Today, racism and discrimination are more commonly experienced by visible minorities. Canada has federal and provincial laws to protect individuals, groups, and cultural expressions. However, forms of racism and discrimination persist. The Canadian Human Rights Act makes it discriminatory to communicate hatred. The Act protects Canadians from public statements that promote hatred, or incite hate against an identifiable group based on their ethnicity and/or skin colour. The Canadian Charter of Rights and Freedoms specifically addresses the constitutional rights that are necessary in a democratic society, and all Canadian law must be consistent with the Charter.
The equality of all Canadians is protected under the Charter. The Charter also protects certain rights guaranteed to Indigenous peoples, and it affirms Canada's multicultural heritage. The Canadian Human Rights Act prohibits discrimination based on individual characteristics including race, national or ethnic origin, colour, religion, age, gender, sexual orientation or marital status. The Canadian Multiculturalism Act protects groups from cultural discrimination and is a commitment to new Canadians that they may retain aspects of their culture in Canada. Other key pieces of legislation include the Criminal Code of Canada (which prohibits the promotion of hatred and hate propaganda), and the Employment Equity Act, which protects against discriminatory hiring practices that disadvantage women, Indigenous peoples, individuals with disabilities, and members of visible minorities.
Artificial intelligence takes to the skies with Project Loon

Google's rather wacky-sounding Project Loon continues to evolve, and has recently received a tune-up to its navigation technology. The X Lab team behind the project has revealed that it is making use of artificial intelligence technology, or machine learning to be more specific, to help keep the balloons in the air for longer. Originally, Project Loon used basic pre-crafted algorithms to change the altitude of the balloons and keep them in roughly the same position. While this worked fairly well under static conditions, the balloons couldn't cope very well with unexpected weather. These algorithms have since been replaced with adaptive, machine-learning algorithms that can adjust to wind and other weather conditions that threaten to blow the balloons off course. Project Loon's navigation system does not use deep neural networks, though; instead it uses a simpler form of machine learning called Gaussian processes. The algorithms comb through huge amounts of previous flight data and learn from it to make future predictions and adjust behaviour, in what is basically an ongoing feedback loop. However, using past data is no guarantee that the future will pan out in the same way. So the team has also implemented something called "reinforcement learning," whereby the Project Loon software alters and corrects its behaviour even after making predictions. As an example, a balloon headed out to catch winds over the Pacific Ocean changed plans after determining that there wouldn't be enough wind for it to stay over land. This isn't AI in the usual sci-fi human-brain-simulation meaning of the term, but these self-updating algorithms account for a lot of machine learning innovations right now. The new AI-based upgrade is designed to keep the balloons in the air longer, leaving them able to provide internet access to users below more consistently. Using the new technology, one test balloon stayed in the Peruvian stratosphere for 98 days and made just under 20,000 tweaks to its flight plan over those 14 weeks, which works out to roughly 200 adjustments each day. This all sounds pretty great for those who may soon rely on Project Loon for their internet access.
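As a rough sketch of the predict-and-correct feedback loop described above (illustrative only, not Loon's actual system; the class and method names are invented, and a simple running mean stands in for the Gaussian-process regression the article mentions):

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative only: a controller that predicts wind from past
// observations, acts on the prediction, then folds the observed outcome
// back into its history so the next prediction is corrected.
public class AltitudeController {
    private final List<Double> observedWinds = new ArrayList<>();

    // Predict wind speed as the mean of past observations -- a crude
    // stand-in for Gaussian-process regression over historical data.
    double predictWind() {
        if (observedWinds.isEmpty()) return 0.0;
        double sum = 0.0;
        for (double w : observedWinds) sum += w;
        return sum / observedWinds.size();
    }

    // After acting on a prediction, record what actually happened; this
    // is the "correct its behaviour even after predicting" step.
    void observe(double actualWind) {
        observedWinds.add(actualWind);
    }
}
```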
What Is a Raster Scan?

A raster scan is a method of constructing an image through the use of horizontal lines. The lines can be analog representations of the image, or they can be a sequence of pixels in which each dot represents a tiny rectangular area of the image. One of the primary applications of raster scanning technology has been in traditional display devices such as televisions or computer monitors. Some computerized printers also use similar methods to construct images on paper. Most digital image files are also stored and reconstructed using raster scan techniques.

In a television or computer monitor, an image is constructed using raster scan technology by starting in the upper left-hand corner of the screen and drawing a horizontal line that ends on the right edge of the screen. The line returns to the left side, dropping a tiny amount downward, and draws the next line of the image. When the beam that is drawing the image reaches the bottom right corner of the screen, indicating the whole image has been rendered, it moves back to the upper left corner to start again, an action known as a vertical retrace. This process occurs dozens of times every second to create a smooth moving image.

Even though a raster scan is capable of producing a very realistic image, the actual process creates a minute amount of nearly unnoticeable distortion in the image. At the end of each horizontal line being drawn, the beam must return to the left-hand side of the screen, which is called blanking or horizontal retrace. This is accomplished most efficiently by actually drawing each visible line with a slight downward slope toward the bottom right of the screen. In this way, during the horizontal retrace, the beam moves back in an almost straight horizontal line. Although it is the fastest way to draw the image, it means a monitor using raster scan technology is drawing the image skewed at a very minuscule angle.

Computer software that saves images digitally uses a similar technique to encode information and subsequently to decompress it. A raster scan of the image starts in the upper left corner of the image and progresses in the same way toward the bottom right. Instead of saving an entire line of analog information, however, the image is converted into small rectangles called pixels, each of which can be set to a single color. The collection of the pixels in horizontal lines forms the image not only in the file encoding, but also in computer memory when the image is displayed.

@Terrificli -- the solution is to use vector graphics, which can be scaled up or down as much as needed. However, creating vector images can be a fussy process and the "industry standard" is still raster. Think of all those digital cameras out there pumping out JPG and PNG images -- converting those over to vector is hit and miss.

You have hit on the flaw in raster images -- pixels. For example, let's say you publish a magazine and are sent a picture stored in a "raster format" (JPG, PNG, BMP -- the most common image formats, by the way) that was reduced in size so it would load quickly on an Internet site and still look good to visitors. That image might look great on a screen, but the resolution isn't high enough to reproduce well on paper. If you try to enlarge the photo so that it will fit well in your publication, you also blow up those pixels, so the photo looks blocky.
The point here is that small raster images don't play nice with the publishing industry, but people prefer to use them so they will load quickly on Internet sites. That is why the megapixel count is important when you are shopping for cameras -- more pixels means a larger image is created by the camera, and one of those will look great in print. Still, people tend to scale photos down to save space, and that can lead to trouble.
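To make the scan order concrete, here is a minimal sketch (mine, not from the article) that walks a tiny in-memory image in raster order using Java's standard BufferedImage class: left to right along each horizontal line, top line first.

```java
import java.awt.image.BufferedImage;

public class RasterScan {
    public static void main(String[] args) {
        BufferedImage image = new BufferedImage(4, 3, BufferedImage.TYPE_INT_RGB);
        for (int y = 0; y < image.getHeight(); y++) {      // one horizontal line at a time
            for (int x = 0; x < image.getWidth(); x++) {   // left edge to right edge
                int rgb = image.getRGB(x, y) & 0xFFFFFF;   // read this pixel's color
                System.out.printf("pixel (%d, %d) = #%06X%n", x, y, rgb);
            }
            // the end of the inner loop is the software analogue of the
            // horizontal retrace; finishing the outer loop, the vertical retrace
        }
    }
}
```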
What exactly is a filling? And how do they work? Fillings are a treatment for cavities offered by your local dentist in Stevenage, but to really understand what fillings do and how they resolve cavities, you need to know how cavities come about in the first place and what dental decay is. Decay starts with a small number of bacteria establishing themselves in a sheltered spot on the tooth where they are unlikely to be disturbed. It could be between teeth or in the fissures of the molars. The bacteria then multiply by consuming sugars found in saliva, but when they digest anaerobically, they produce lactic acid as a by-product. This acid attacks the tooth on which they live and creates small crevices in it. This provides additional shelter for the bacteria and helps them grow in number. Eventually, this process repeats itself enough for them to bore completely through the enamel layer and into the dentine underneath. So, cavities are likely to occur in places that are difficult to spot, and they produce few symptoms until they reach the nerve. It's usually only through a dental examination that you find out whether you have cavities and whether treatment is necessary. Why are fillings done? When decay has set in, there is a brief window before the colony has breached the enamel layer. During this period, the holes can be remineralised: the lost calcium phosphate is replaced either with dietary calcium or with fluoride absorbed directly from toothpaste. These fluorides form a less chemically vulnerable component of the enamel, which restores the volume of the enamel layer. If the decay is significant enough to reach through the enamel layer and has attacked the dentine, a filling is necessary, as remineralisation is no longer possible and the active bacteria are too deep into the tooth to be removed by brushing. In addition to this, dentine is far softer and can be rapidly broken down, so the rate of decay will increase dramatically. How are fillings carried out? The drilling stage of filling a tooth is the longest and most important. The bacteria which have been causing the decay will have found their way deep into the tooth, and unless they are totally removed, they will continue to burrow into it. After the drilling has begun, often resulting in a loss of structural stability, your dental team will establish which type of filling is suitable and will continue to clean the cavity using the drill. After completely cleaning out the cavity, the filling can begin. Traditional metal amalgam fillings start with the production of a metal putty; the ingredients are mixed and the putty is applied, usually through an application gun, deep into the base of the drilled cavity and gradually built up until the surface is reached. In a few minutes, the amalgam will set hard. Composite fillings use ceramic glass sealed into place with UV-cured resin. They are applied in layers; the resin is injected through a syringe and then has to be cured with a brief exposure to a UV lamp, after which a second and third layer can be built up until the tooth has been filled solidly and the material has completely set.
Merriam Webster’s definition of a giant is “a legendary creature usually thought of as being an extremely large or powerful person.” The person in the Bible that is most often associated with being a giant is Goliath, the Philistine David slew with a sling and a stone (1 Samuel 17:50). Goliath was actually never described as a giant. It says about him in 1 Samuel 17:4, “And there went out a champion out of the camp of the Philistines, named Goliath, of Gath, whose height was six cubits and a span.” A cubit equals 18 inches, so Goliath was over nine and a half feet tall, but that didn’t qualify him to be a giant. When the Israelites first went in to spy out the Promised Land, it was reported back to Moses, “And there we saw the giants, the sons of Anak, which came of the giants: and we were in our own sight as grasshoppers, and so we were in their sight” (Numbers 13:33). The word that is translated giants in the phrase, we saw the giants, n‘phîl (nef – eel´) is derived from the word nâphal (naw – fal´) which means to fall (5307). N‘phil is properly translated as a feller (5303), meaning someone that causes another to fall. The word translated giants in the phrase, which came from giants, gibbôr (ghib – bore´) means powerful and is usually used to describe a valiant man or warrior (1368). What the Israelite spies saw in the Promised Land were bullies or tyrants that had killed all their enemies, squashing them like grasshoppers under their feet. Joshua and Caleb, two of the men in the group of spies that went out, believed that God was more powerful than the giants, but because all the people were frightened by the report, the Israelites spent 40 years wandering in the wilderness. There is only one giant specifically mentioned in the Bible. He is described as living in Gath and having four sons (2 Samuel 21:22). After David’s army destroyed the children of Ammon, there were a series of wars with the Philistines. During the final conflict, it says in 1 Chronicles 20:6-8: And yet again there was war at Gath, where was a man of great stature, whose fingers and toes were four and twenty, six on each hand, and six on each foot: and he also was the son of the giant. But when he defied Israel, Jonathon the son of Shimea David’s brother slew him. These were born unto the giant in Gath; and they fell by the hand of David, and by the hand of his servants. The word translated giants in this passage, râphâh (raw – faw´) is derived from a primary root word that is properly translated as “to mend (by stitching)” and is figuratively meant to cure. “Rapah means to heal, a restoring to normal, an act which God typically performs” (7495). At the point in Israel’s history and David’s life, when the sons of the giant in Gath were killed, a healing occurred that could be thought of as a healing of the land. The violence and killing that had been going on for centuries was finally over and peace came to the land.
The prime purpose of this lecture is to present Metals and Acids. The products of acid/metal reactions are a salt and hydrogen gas. Some metals are so unreactive that they do not react with dilute acids at all, e.g. copper, silver and gold. Examples of metals: magnesium, iron, sodium, calcium, etc. Examples of acids: hydrochloric acid, sulphuric acid, nitric acid, ethanoic acid, etc. The general chemical reaction of metals and acids: Metal + Acid → Salt + Hydrogen.
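Two standard worked examples of this general equation (textbook chemistry, not taken from the original lecture): magnesium with dilute hydrochloric acid gives the salt magnesium chloride plus hydrogen, and iron with dilute sulphuric acid gives iron(II) sulphate plus hydrogen:

    Mg + 2HCl → MgCl₂ + H₂
    Fe + H₂SO₄ → FeSO₄ + H₂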
1. Agricultural improvements

From the 11th century, the invasions in Europe stopped, and that safer time made it easier for improvements in agricultural production, which was the basis of the medieval economy, to extend from the Netherlands.
- Forests were cut and swamps were drained.
- Horseshoes and the collar allowed horses to be used as draught animals for ploughing.
- The mouldboard, or Norman, plough replaced the Roman one.
- In central Europe the three-year rotation system spread: each village grouped its farm plots into three areas, devoted to wheat, oats and fallow, and rotated them each year. Each farmer had to have a plot in each area, and all had to respect the rotation.
- Water power began to be used to drive mills through waterwheels.
All this meant people ate better, and the population increased.

2. The development of trade

The increasing population, agricultural surpluses and increased security encouraged the development of trade from the 12th century. There were two major routes:
- The Mediterranean route connected the cities of the Crown of Aragon and of Italy with Muslim and Byzantine ports. Europeans exported weapons and fabrics, and imported luxury products.
- The Baltic and Atlantic route was controlled by a league of merchants, the Hansa, and joined Portuguese and Cantabrian ports with Flanders, England and the German cities. They traded in Castilian wool, French wines, English tin, and amber, furs and wood from the Baltic countries.
Merchants gathered at fairs. Over time, trade developed banking, payment on credit and bills of exchange (14th century), so that merchants did not have to carry too much money.
The human eye grows very little from birth to adulthood, but even small errors in its proportion can cause vision problems. Vision involves both the ability of the eye to capture images and the ability of the brain to process the signals that the eye sends. Vision deficiencies may result from problems with either function. Children born prematurely or with a family history of eye problems are at greater risk of developing eye health issues.

A newborn's eye measures about 0.7 inches from front to back, approximately 70 percent of the size of an adult's eye. This is why babies' eyes seem large in proportion to their heads. During infancy, the eyeball grows just 1 millimeter, to a length of about 0.74 inches. The eye continues to grow gradually throughout childhood until it reaches a length of about 1 inch in adulthood. The protective skull cavity where the eyeball rests, sometimes called the eye socket, grows along with the eyeball.

As the eyeball grows, changes in its shape may cause errors in the focal point inside the eye. If the eye is too short in length, the focal point for images will fall behind the retina and your child will experience hyperopia, or far-sightedness. The resulting blurry image may cause headaches, eye strain or fatigue. If you notice your child squinting or rubbing his eyes, or if he complains of difficulty in reading, he may have hyperopia. Conversely, if the eyeball grows disproportionately long, the focal point will fall short of the retina and your child will not be able to see distant objects. This condition is myopia, or near-sightedness. It too may cause headaches, eye strain and squinting.

If your child's eyes are not properly positioned, or if the length of the eye is not proportional, one or both of her eyes may become misaligned and she may be unable to track movement. This condition, called strabismus, may interfere with depth perception and can lead to reading disabilities. It also can lead to amblyopia, a condition in which the brain does not receive matching images from the two eyes that it can fuse into a single stereo image. This condition may cause vision loss.

The formation of the eye's blood vessels takes a full 40 weeks from conception, so a baby who is born prematurely does not yet have the full network of blood vessels necessary to support the eyes' needs. In some cases, the formation of blood vessels becomes erratic and excessive after birth. This condition is called retinopathy of prematurity. Blood vessels may proliferate in the retina and may even form in the vitreous space, which should be devoid of veins and arteries. These abnormal vessels may leak and then contract, which can detach or distort the retina, and vision loss or blindness may result. In severe cases, membranes may form behind the lens and block the passage of light to the retina.

Eye examinations help to detect and correct problems that may arise as your child's eyes develop. Children's Vision Information Network recommends that your child have his first eye exam by age 3, or sooner if vision problems run in your family or your spouse's. He should have another complete eye exam before entering school to ensure that his eyes are ready to handle the stress of close work, such as reading and mathematics. Preschool children use their eyes mainly for distance, and problems with close-range vision may not be obvious without a thorough eye exam.
Although schools conduct testing using eye charts, this perfunctory type of exam does not reveal other issues, such as difficulty using the eyes together for prolonged periods of time, or tracking a line of print, or the overall health of your child's eyes.
If your student started lessons at 3 or 4 years old (and this is true of some 5-year-olds as well), they may be ready to move on to standard music at the five-finger level or a little above, depending on the child. It is wise to use "Moving On-One" with them before making the transition to only standard music. It will make it easier for them. If they forget a standard note, simply remind them by asking what the note is doing. You might have to give them a little more of a hint at first, but in no time at all, they will be playing standard music with as much ease and enjoyment as they did the Animal Notes. You have given your student a wonderful gift: a strong foundation in music without frustration or their wanting to quit lessons. If your child is in this age group and wants to start lessons, the Animal Note method is the best out there for them. Trying to distinguish one spot (note) on a music staff from another is very confusing to a child. They have nothing in their world to associate with this new information. They are starting from ground zero, and it is not easy for them. Once the Animal Note method was developed to the point that it was in book form, I never lost a young student who started in our studio, and I have started many young children. Almost all of our students have continued through their senior year in high school unless they moved away. The Animal Notes and their word clues take the frustration out of learning the two important basics of music: note reading and timing. "I have started with my six year old and so far so good. He is very interested and proud he can follow the music. I am by no means a music teacher, but this makes it clear how I can get them started." I love your idea of teaching notes with animals and am so happy that you have published something like this. Slogans are just too hard for young students to remember and figure out. That's why I have actually used animals and other similar concepts in the past for the young ones. - Diana Farias
Let's talk about habitat. The limits of tolerance regulate the distribution of organisms: animals, plants, micro-organisms, and humans are concentrated in regions where conditions are suitable. Habitats are classified as terrestrial or aquatic. Aquatic habitats include freshwater habitats (swamps, lakes, rivers, ponds, irrigation canals, and ditches). Terrestrial habitats include woods, orchards, and grasslands. We will discover lowland grasslands, pine groves, citrus plantations, bamboo thickets, cogon areas on mountain slopes, thick virgin forests and other areas. In these habitats of animals and plants, many physicochemical variables are found. These factors, which interact with living things, include soil and temperature. One very important factor is the soil. It lies as a layer over the bedrock of the earth. It may be shaped by the erosion of water and wind, and other forces of nature affect it as well. Soil may vary according to the sort of plant and animal life it supports. The tilting of the axis of the earth leads to the uneven distribution of temperature over the surface of the earth. The poles tilt alternately toward sunlight and away from it. That is why some places have four seasons, namely fall, winter, spring and summer. When the ground is tilted toward the sun, the days get warmer and longer. Temperature varies from region to region and over time because sunlight strikes the earth at various angles. That is why it is essential for organisms on the planet to manage changes in temperature in order to survive.
Data is a collection of numbers gathered to give some information. To extract particular information from given data quickly, the data can be arranged in a tabular form using tally marks. The following points relate to tally marks:
- Data can be organized using tally marks (I) in groups of five: the first four counts in a group are represented as four strokes (IIII), while for each 5th count a slash (/ or \) is drawn across those four strokes.
Hence, a crossed group of strokes represents '5'.
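A small illustrative helper (mine, not from the source) makes the grouping rule explicit in code, rendering a count as tally marks with a slash closing each group of five:

```java
public class TallyMarks {
    // Render a count as tally marks: "IIII/" per completed group of five,
    // then the leftover single strokes.
    static String tally(int count) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < count / 5; i++) sb.append("IIII/ ");
        sb.append("I".repeat(count % 5));
        return sb.toString().trim();
    }

    public static void main(String[] args) {
        System.out.println(tally(13)); // prints: IIII/ IIII/ III
    }
}
```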
Since we were slaves in Egypt, persecution and oppression have been a major part of Jewish history. The founding of the United States of America in 1776 ushered in a new era, introducing us to a land of religious tolerance and freedom, for which we are forever grateful. According to historian John Buescher, the first two dozen Jewish immigrants to America from the Netherlands landed in New York in 1654. Early American Jews formed communities in Manhattan; Newport, RI; Charleston, SC; Savannah, GA; and Philadelphia. By 1776, the Jewish population of the Colonies neared 2,000 (out of 3 million total). The Founding Fathers incorporated religious Jewish symbolism and verses into the icons of freedom they created. Philadelphia's Liberty Bell was cast in 1752 and proudly displays a quote from the Torah (Vayikra/Leviticus), "Proclaim liberty throughout the land unto all inhabitants thereof." Benjamin Franklin's design for the Great Seal of the United States depicts the splitting of the sea with the quote "Rebellion to Tyrants is Obedience to God." Buescher writes that Jews were sympathetic to the revolutionary cause because of their experience in Europe, and loved Thomas Jefferson's idea of a separation between church and state. Many religious Jews fought in Washington's army in the war for independence. On this July 4th, here are seven Jewish-American freedoms we should never take for granted: 1-Freedom to Perform Bris Milah – Circumcising a Jewish male at eight days old is a protected right in the U.S.A., but that was not always the case throughout Jewish history. While the best-known ban on circumcision was imposed by the Hellenist Greeks during the times of the Maccabees, in 135 C.E. the Romans forbade Jews from practicing circumcision, reading the Torah, and eating matza on Passover. 2-Freedom to Observe Shabbos – Not only is Sabbath observance protected by the First Amendment, numerous companies and organizations make great accommodations for shomer shabbos Jews. Throughout Jewish history, though, there were many occasions when it was illegal to observe shabbos. Many people are aware that the Greeks banned Sabbath observance in 167 B.C.E., but fewer realize that in 325 C.E. the first edict in favor of the "Venerable Day of the Sun" (Sunday) was made at the Roman Council. Sabbath worship and other Jewish observances became heretical to the Christian faith. 3-Freedom to build Synagogues – There is a federal statute which protects houses of worship in the U.S., including synagogues (RLUIPA), but Jews were not always extended this privilege. While the burning of shuls in Germany before World War II is taught every year on the anniversary of Kristallnacht, far fewer people are aware that in 379 C.E. the Roman Emperor Theodosius the Great permitted the destruction of synagogues if they served a religious purpose. 4-Freedom to not convert – The forced conversions of the Spanish Inquisition are widely discussed, but far fewer people realize that 900 years earlier in Spain (in 589 C.E.), the Third Council of Toledo ordered that children who were a product of intermarriage between a Jew and a Christian be baptized by force. Forced conversion of all Jews was initiated. Thousands of Jews fled. Thousands of other Jews converted. 5-Freedom to educate Jewish Children Jewishly – In the U.S. today, there are a record number of Jewish children receiving a Jewish education, but in Spain in 613 C.E., Jewish children who were older than seven were taken from their parents and given a Christian education.
6-Fair Legal System – While the U.S. legal system is not perfect, in general it is not discriminatory or biased. This, unfortunately, was not the case throughout much of Jewish history. One example is in 1130, when the Jews of London had to pay one million marks as punishment for allegedly killing a sick man. Another is "host abuse" – the charge that Jews would steal the Catholic communion wafer in order to torture it. According to William Nichol in Christian Antisemitism, "over 100 instances of the charge have been recorded, in many cases leading to massacres." 7-Freedom to live anywhere – There is no part of the United States of America where a Jew cannot live, but throughout Jewish history things were different. Many people are familiar with the ghettos of pre-WWII Eastern Europe in which Jews were forced to reside, but fewer are aware that in 1516, the Governor of the Republic of Venice decided that Jews would be allowed to live only in one area of the city, called the "Ghetto Novo." God bless America for being such a hospitable home to the Jews for the last 241 years. And may our long history of persecution speedily come to an end with the coming of Moshiach!
Graphical Periodic Table with the 118 elements currently known and their properties. Includes the following atomic properties: name, symbol, mass, atomic number, group, period, physical state, type, 1st ionization energy, electronegativity, electron affinity, valence electrons, electron configuration, atomic and covalent radius, density, melting and boiling points, heat of fusion and of vaporization, thermal and electrical conductivities and specific heat. - Plots of atomic properties versus atomic number; - Finds an element's name from its atomic number or symbol; - Finds an element's symbol from its atomic number or name; - Finds an element's mass from its atomic number, name or symbol. Download the Periodic Table in:
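As a sketch of the kind of lookup such a program performs (illustrative only; the class name and the tiny element sample below are assumptions, not the application's actual code):

```java
import java.util.Map;

public class ElementLookup {
    // A tiny sample; a full table would carry all 118 elements and the
    // other properties listed above.
    static final Map<Integer, String[]> ELEMENTS = Map.of(
        1,  new String[]{"H", "Hydrogen"},
        2,  new String[]{"He", "Helium"},
        26, new String[]{"Fe", "Iron"},
        79, new String[]{"Au", "Gold"}
    );

    public static void main(String[] args) {
        int atomicNumber = 26;
        String[] element = ELEMENTS.get(atomicNumber);
        // Finds an element's symbol and name from its atomic number:
        System.out.println("Z=" + atomicNumber + " -> " + element[0] + ", " + element[1]);
    }
}
```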
Android is an open source, Linux-based software stack created for a wide array of devices and form factors. The following diagram shows the major components of the Android platform. The Linux Kernel The foundation of the Android platform is the Linux kernel. For example, the Android Runtime (ART) relies on the Linux kernel for underlying functionalities such as threading and low-level memory management. Using a Linux kernel allows Android to take advantage of key security features and allows device manufacturers to develop hardware drivers for a well-known kernel. Hardware Abstraction Layer (HAL) The hardware abstraction layer (HAL) provides standard interfaces that expose device hardware capabilities to the higher-level Java API framework. The HAL consists of multiple library modules, each of which implements an interface for a specific type of hardware component, such as the camera or bluetooth module. When a framework API makes a call to access device hardware, the Android system loads the library module for that hardware component. For devices running Android version 5.0 (API level 21) or higher, each app runs in its own process and with its own instance of the Android Runtime (ART). ART is written to run multiple virtual machines on low-memory devices by executing DEX files, a bytecode format designed specially for Android that's optimized for minimal memory footprint. Build toolchains, such as Jack, compile Java sources into DEX bytecode, which can run on the Android platform. Some of the major features of ART include the following: - Ahead-of-time (AOT) and just-in-time (JIT) compilation - Optimized garbage collection (GC) - On Android 9 (API level 28) and higher, conversion of an app package's Dalvik Executable format (DEX) files to more compact machine code. - Better debugging support, including a dedicated sampling profiler, detailed diagnostic exceptions and crash reporting, and the ability to set watchpoints to monitor specific fields Prior to Android version 5.0 (API level 21), Dalvik was the Android runtime. If your app runs well on ART, then it should work on Dalvik as well, but the reverse may not be true. Android also includes a set of core runtime libraries that provide most of the functionality of the Java programming language, including some Java 8 language features, that the Java API framework uses. Native C/C++ Libraries Many core Android system components and services, such as ART and HAL, are built from native code that require native libraries written in C and C++. The Android platform provides Java framework APIs to expose the functionality of some of these native libraries to apps. For example, you can access OpenGL ES through the Android framework’s Java OpenGL API to add support for drawing and manipulating 2D and 3D graphics in your app. Java API Framework The entire feature-set of the Android OS is available to you through APIs written in the Java language. 
These APIs form the building blocks you need to create Android apps by simplifying the reuse of core, modular system components and services, which include the following: - A rich and extensible View System you can use to build an app’s UI, including lists, grids, text boxes, buttons, and even an embeddable web browser - A Resource Manager, providing access to non-code resources such as localized strings, graphics, and layout files - A Notification Manager that enables all apps to display custom alerts in the status bar - An Activity Manager that manages the lifecycle of apps and provides a common navigation back stack - Content Providers that enable apps to access data from other apps, such as the Contacts app, or to share their own data Developers have full access to the same framework APIs that Android system apps use. Android comes with a set of core apps for email, SMS messaging, calendars, internet browsing, contacts, and more. Apps included with the platform have no special status among the apps the user chooses to install. So a third-party app can become the user's default web browser, SMS messenger, or even the default keyboard (some exceptions apply, such as the system's Settings app). The system apps function both as apps for users and to provide key capabilities that developers can access from their own app. For example, if your app would like to deliver an SMS message, you don't need to build that functionality yourself—you can instead invoke whichever SMS app is already installed to deliver a message to the recipient you specify.
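For example, here is a minimal sketch of that SMS hand-off using the standard Intent API (the phone number and message body are placeholders; in a real app this method would live inside one of your activities):

```java
import android.app.Activity;
import android.content.Intent;
import android.net.Uri;

public class SmsExample extends Activity {
    // Delegate SMS delivery to whichever messaging app the user has
    // installed, rather than building SMS functionality ourselves.
    void sendSms() {
        Intent intent = new Intent(Intent.ACTION_SENDTO, Uri.parse("smsto:5551234567"));
        intent.putExtra("sms_body", "Hello from my app!");
        if (intent.resolveActivity(getPackageManager()) != null) {
            startActivity(intent);  // opens the user's default SMS app
        }
    }
}
```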
To understand the effects of a recent Supreme Court ruling, consider some history and policy on tribal lands and sovereignty. Last Thursday, the Supreme Court ruling for the Jimcy McGirt v. Oklahoma case concluded that nearly half of the state was in fact an Indian reservation, despite portions of it having been sold off to private citizens for years. There are a lot of takes on what this means for tribal members and non-tribal members living in that part of Oklahoma and elsewhere. It helps to know some of the basics about tribal land and sovereignty before exploring the case's implications. Note: I use "U.S." and "federal government" interchangeably; "American Indian" and "Native American" interchangeably; "tribes," "tribal nations," and "nations" interchangeably. A very brief overview of U.S. policy regarding Indian country To grasp the complicated policies surrounding Indian country, start with their history. There are great resources out there to learn more (I've included some under related reading), but here's the gist:
- From colonial times to the post-Revolutionary War era, European-American relations with tribes involved treaties, recognizing each as individual sovereign nations. These established "government-to-government" relationships between nations and states or the U.S. Some treaties are still valid to this day, though the U.S. has all but reneged on many of them.
- In 1830, Congress passed the Indian Removal Act, which allowed the U.S. to strike treaties with tribes that exchanged their current land for lands out west. In reality, this formalized the forced relocation of Native Americans from their homes; tribal nations were pressured by the growing U.S. presence around their communities to agree to the terms. The process took place over several years, during which thousands died along the way to designated Indian territory—we know this as the Trail of Tears.
- In 1851, the U.S. set up reservations in Oklahoma for Indian use. That designated area would shrink over time as more settlers made their way out west. Other reservations would be established elsewhere in the following years.
- In addition to having shrinking boundaries, reservation lands were also carved up following the Dawes Act of 1887. Whereas the lands were communally used by inhabiting tribes, this act promoted allotment, which converted parts of reservations into parcels for individual tribal members. A person could then sell this land, but they would forfeit their status as a tribal citizen in the process. Some "surplus" allotments were sold to non-tribal members. In other words, chunks of land within reservation boundaries were now owned by private, non-tribal-member U.S. citizens.
- The U.S. stopped the allotment program with the Indian Reorganization Act (1934) in order to re-establish tribal and reservation legitimacy. Existing allotments held by the U.S. in trust were not to be sold to private citizens, but instead held "to benefit" tribal nations and individuals. While this did prevent some further fracturing of reservations, it didn't restore reservations to their original state.
There have been other developments since 1934 (some of them harmful), and the summary above doesn't account for much of the racism and mistreatment perpetrated by the U.S. government and individuals. In any case, here's the key takeaway: There's a long history of systemic and societal racism toward American Indians conducted by the federal and state governments.
Defining 'Indian country'

Indian country encompasses all land reserved for nations and individuals, per U.S. statute. There are a few types of land included in this definition, with key differences involving ownership and governmental jurisdiction. The most commonly known type of Indian country is the reservation. In a nutshell, Indian reservations are lands managed by federally recognized tribes. Because of past policies like allotment, there are many "checkerboard" examples of reservations where pieces of private land sit next to tribal ones. Some, like the Navajo Nation, have large gaps within their boundaries because of this. While being managed by tribal governments, reservations are legally owned by the federal government. Some other types of Indian country differ in this regard:
- Trust lands are held by the U.S. for the benefit of a tribe. They're often land within a reservation that was previously allotted and then reacquired by the federal government for tribal use. Legally, these are frequently treated the same way as reservation land; depending on the source of information, they're even categorized as the same.
- Allotments are trust lands, except they're held for the benefit of an individual or family.
- Fee lands are legally owned and held by tribes or individuals. The transfer of the title from the U.S. to a person or tribe requires approval from the federal government.

Collectively, reservations and trust land areas account for 56 million acres in the contiguous states. When including the 44 million acres of Alaska Native land, Indian country would be the 4th-largest state in the U.S. in terms of land area. Right now, the single largest reservation is the Navajo Nation, which holds over 17.4 million acres (27,000 square miles) in Arizona, New Mexico, and Utah and is home to about 173,000 people.

Tribal governments and their relationship with outside entities

Tribes can form their own governments to manage the Indian country they hold or use. To do so with federally held land, a tribal nation has to be formally recognized by the U.S. At this time, there are 345 federally recognized American Indian tribes and 229 Alaska Native tribes, a total of 574 recognized tribal nations. Not all nations have a reservation, some have more than one, and some share Indian country land with other tribes. (This does not account for the hundreds of state-recognized tribes, by the way.) In some but not all ways, a tribal government acts like a state. It needs to abide by many federal laws, but it otherwise "possess[es] a nationhood status and retain[s] inherent powers of self-government," as SCOTUS Chief Justice John Marshall found in Worcester v. Georgia. Those powers include the ability to create its structure of government, determine and enforce civil and criminal laws, and offer human services. How these powers are used varies among the tribes; there's no "right way" for nations to govern themselves. These powers are not absolute. Some are restricted by Congress or relinquished in treaties, while others are powers even the states lack. Tribal governments also face unique hurdles in maintaining their structure and well-being, like onerous federal regulations, groups interested in acquiring tribal lands or places of great significance, and a lack of enfranchised representation in Congress. The relationships between tribal, state, and federal civil and criminal jurisdictions are sometimes (but not always) straightforward.
There are broad rules over who has jurisdiction, yet there are also frequent exceptions. Generally, a tribal government's jurisdiction applies within the boundaries of its reservation and to its tribal members, while the state in which the reservation sits does not have jurisdiction there (though there are some exceptions). Federal criminal jurisdiction on tribal lands applies to particular crimes. In some cases, tribal criminal jurisdiction is shared with the federal government, though there are few instances like this involving civil cases. In any case, there usually needs to be some coordination among the different governments, as many reservations remain "checkerboarded" thanks to allotment. Centuries of the U.S. taking advantage of and mistreating American Indians have led to an oftentimes confusing system of land rights and civil and criminal law. Properly understanding McGirt v. Oklahoma's impact requires an acknowledgment of this history and tangled web.

Here are some sources on the history of reservations and Indian country. I read maybe a dozen different explanations of the different types of Indian country; these two were the most helpful:
- The U.S. Dept. of the Interior's page on Native American Ownership; and
- This page on the different types from 1st Tribal Lending.

Other useful resources:
- The National Congress of American Indians has a comprehensive report on the state of tribes and tribal lands and gives detail on the American Indian and Alaska Native populations in the U.S.
- The Department of the Interior's Bureau of Indian Affairs answers FAQs on tribes, tribal citizenship, etc.
- The National Conference of State Legislatures lists federally and state-recognized tribes.
VISIBLE THINKING IN MATH: USING REPRESENTATIONAL MODELS FOR PROPORTIONAL REASONING

The CCSSM (NGA Center and CCSSO, 2010) recommend the use of representations to illustrate the concept of ratio and rate reasoning. CCSS.Math.Content.6.RP.A.3: Use ratio and rate reasoning to solve real-world and mathematical problems, for example, by reasoning about tables of equivalent ratios, tape diagrams, double number line diagrams, or equations. Here are some examples of the use of tables of equivalent ratios, tape or bar diagrams, double number line diagrams, or equations.

Back-to-School Shopping

The cost of 3 notebooks is $2.40. At the same price, how much will 10 notebooks cost? Students might reason that each notebook costs $0.80, since 2.40/3 = 0.80, and 0.80 × 10 = $8.00. Others might build up: 3 × $2.40 = $7.20 pays for 9 notebooks, and one more notebook adds $0.80, for a total of $8.00.

Mary's best time for running 100 yards is 15 seconds. How long will it take Mary to run 500 yards? One approach is to reason up directly from 100 yards to 500 yards and notice that the answer is five times 15 seconds, or 75 seconds. It may also be easier to reason up to 1,000 yards, since students are comfortable multiplying by 10 (which yields 150 seconds), and then notice that 500 yards is half of 1,000 yards, which suggests reasoning down: halving 150 seconds gives the same 75 seconds.

Sue and Julie were running at the same speed around a track. Sue started first. When she had run 9 laps, Julie had run 3 laps. When Julie completed 15 laps, how many laps had Sue run? (See Table 9.1; Cramer, Post, & Currier, 1993)

Notice the similarity to the last problem. Most students attempt this by writing an equation obtained by "cross-multiplication," which yields 45 laps. That answer is wrong: because the two runners move at the same speed, the relationship is additive, not multiplicative. Sue stays a constant 9 − 3 = 6 laps ahead, so when Julie completes 15 laps, Sue has run 15 + 6 = 21 laps. The condition implicit in the problem, that the two runners move at uniform rates, is exactly what students must examine to decide whether proportional reasoning applies at all; problems like this one are classic probes of that discrimination.

In grades 6-8, proportional reasoning problems may be broadly classified into three modeling approaches: (a) Quantitative Proportional Reasoning (QPR); (b) Algebraic Proportional Reasoning (APR); and (c) Spatial Proportional Reasoning (SPR). Next, we describe the specific mathematics content through benchmark examples developed as part of each of the three modules (QPR, APR, SPR) mentioned earlier.

QPR Module: This module refers to the content knowledge needed to compare and order rational numbers presented in multiple representations, including integers, percentages, positive and negative fractions, and decimals. Topics in QPR often focus on fractions and division; addition and subtraction of like and unlike fractions; addition and subtraction of mixed numbers; multiplying fractions by whole numbers; fraction of a set; products of fractions; dividing fractions by a whole number; and dividing by a fraction. This module also helps students choose and employ appropriate operations to solve real-world applications involving rational numbers. The content developed through QPR can then be applied to concepts in probability to make predictions and decisions. Consider the following benchmark example of Sam and his wife. The pictorial technique presented in this example is one of the many ways to become comfortable reasoning and talking about parts of discrete quantities.
Note that such problems not only bring out the importance of the concept of "a unit" but also help guide proportional reasoning. Once this concept is mastered, such pictorial techniques provide an opportunity to apply it to a variety of related questions, which teachers can later supplement in their traditional classrooms.

Table 9.1 Number of laps around a track

For example, consider the following multiple-choice question from a grade 8 classroom: Apple juice concentrate is mixed with water to make apple juice. Which final mixture has the highest percentage of apple juice concentrate?

F. 400 mL apple juice concentrate mixed with 600 mL water
G. 400 mL apple juice concentrate mixed with 400 mL water
H. 300 mL apple juice concentrate mixed with 600 mL water
I. 300 mL apple juice concentrate mixed with 400 mL water

Approach: To answer this, one may illustrate the pictorial approach using "1 unit = 100 mL" in each case, which clearly shows the answer: F is 4 concentrate units out of 10 (40%), G is 4 out of 8 (50%), H is 3 out of 9 (about 33%), and I is 3 out of 7 (about 43%), so mixture G has the highest percentage of concentrate.

APR Module: This module involves employing strategies to compare and contrast proportional and nonproportional linear relationships. Topics in APR also include estimating and determining solutions to application problems involving percents, decimals, and other proportional relationships such as similarity, ratios, and rates. One important focus is making connections among various representations of a numerical relationship: tabular, graphical, pictorial, verbal, and algebraic. In particular, the APR module gives an opportunity to predict and justify solutions to application problems through a variety of strategies, including proportional reasoning. As algebraic habits of mind evolve, students must be constantly taught to effectively communicate mathematical ideas using language, efficient tools, appropriate units, and graphical, numerical, physical, or algebraic mathematical models.

John bought a piece of land next to the land he owns. Now John has 25% more land than he did originally. John plans to give 20% of his new, larger amount of land to his daughter. Once John does this, how much land will John have in comparison to the amount he had originally?

This benchmark problem gives an opportunity to help students determine the percent increase or decrease for a given situation. It is also an example of a common misconception that leads to an incorrect solution: most students believe that if a percent increase is followed by a percent decrease of the same amount (or vice versa), the answer returns to the original amount. To help them confront this misconception, one strategy is to start with 100 units of land as the original piece. A 25% increase gives 1.25(100) = 125 units of land. John then gives away 20% of his new piece, which leaves him with 80% of it: 0.8(125) = 100 units of land. Here a 25% increase is exactly undone by a 20% decrease, not by a 25% one. One can also see this using the pictorial approach.

SPR Module: Along with QPR and APR, a good proportional reasoning curriculum must also develop spatial sense through transformational geometry exercises. Proportional reasoning can be built through exercises that develop students' skills in generating similar figures using dilations, including enlargements and reductions on a coordinate plane. Students will also be trained to use proportional relationships in similar two-dimensional or three-dimensional figures to determine missing measurements.
This module will also provide an opportunity to use proportional reasoning to describe and verbalize how changes in dimensions affect linear, area, and volume measures. Consider the following problem, which allows students to describe how changing one measured attribute of a figure affects its volume and surface area.

Example: Given a rectangular prism with a length of 2, a width of 2, and a height of 1:
- 1. Without changing the length and width, change the height by a factor of n and create a table showing volume for increasing values of n.
- 2. Write in words the pattern you observe in the volumes.
- 3. Write an algebraic function for the pattern.
- 4. Without changing length and height, change the width by a factor of n and create a table showing volume for increasing values of n.
- 5. Write in words the pattern you observe in the volumes.
- 6. Write an algebraic function for the pattern.
- 7. Is the pattern the same as in step 3? Why or why not?
- 8. Without changing the length, change the height and the width by a factor of n and create a table showing volume for increasing values of n.
- 9. Write in words the pattern you observe in the volumes.
- 10. Write an algebraic function for the pattern.
- 11. Write a function to predict the volume if all three dimensions change by a factor of n.
- 12. Repeat steps 1-11 for "surface area" instead of "volume."

(The volume functions this exercise is aiming at are summarized symbolically after this section.) This exercise can help build spatial reasoning related to change in one dimension versus two dimensions, and the effect of varying dimensions on volume and surface area. The activity also helps students explore patterns and discover the relationship between linear ratios, area ratios, and volume ratios.

At a summer institute focused on proportional reasoning, teachers made connections to solving real-world problems (6.RP.A.3). They were shown packages of 100-calorie snacks, given cereal, and asked to show 100 calories of their favorite cereal. Figure 9.3 shows how one teacher illustrated the solution in multiple ways. Teachers working together helped each other appreciate multiple strategies. One teacher commented, "I am so comfortable with mental math and using numbers. I find it hard to think in terms of manipulatives and pictures, but seeing how other teachers solved it using these tools really helped me see how my students might approach it. I can truly see the value of hands-on manipulatives for my math students." Another shared, "Today using a ratio table, Karen showed me how to 'pull apart' a ratio so that I could manipulate it more easily." Through the experience of relearning mathematics through multiple models, teachers felt more confident and more "strategically competent" in using multiple models and posing rich proportional reasoning problems in class with their students.

The task of teachers is therefore to help students connect their constructed knowledge to the powerful new ideas that they want to teach them. This, combined with the Common Core standards, creates a great need to enhance teachers' mathematics content and pedagogical knowledge with a special focus on modeling proportional reasoning. It is also essential to understand how these standards translate into classroom practices and assessment strategies. With a growing population of students identified as economically disadvantaged, limited English proficient (LEP), or having special needs in many school divisions across the nation, teachers need to be proficient in presenting mathematical ideas visually and through multiple representations.
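As referenced above, the patterns the prism exercise is driving at can be summarized compactly (assuming the stated 2 × 2 × 1 starting prism):

$$V_{\text{height}}(n) = 2 \cdot 2 \cdot n = 4n, \qquad V_{\text{height, width}}(n) = 2 \cdot 2n \cdot n = 4n^{2}, \qquad V_{\text{all three}}(n) = 2n \cdot 2n \cdot n = 4n^{3}$$

Scaling the width alone gives 2 · 2n · 1 = 4n again, which is why step 7 asks students to compare it with step 3. In general, scaling k of the three dimensions by a factor of n multiplies the volume by n^k, precisely the linear-area-volume relationship the activity is designed to surface.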
Figure 9.3 100-calorie cereal portions using a pictorial approach for the unitizing method. Source: Authors. Think about it! How do these visual representations (ratio tables, double number lines, bar models) help develop a deeper understanding of proportional reasoning?
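One way to see it, sketched here with the notebook problem from earlier in the chapter (the table entries are mine, derived from that problem): a ratio table makes the scaling moves explicit, letting students "pull apart" a ratio into a friendly unit rate and rebuild it.

    Notebooks:   3       1       10
    Cost:        $2.40   $0.80   $8.00

Reading left to right, both rows are divided by 3 (exposing the unit rate) and then multiplied by 10. A double number line arranges the same pairs along two parallel scales, and a bar model shows the $2.40 bar cut into three $0.80 pieces; all three representations externalize the multiplicative structure that a memorized cross-multiplication hides.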
Chapter 9 – Electrical Instrumentation Signals The use of variable voltage for instrumentation signals seems a rather obvious option to explore. Let’s see how a voltage signal instrument might be used to measure and relay information about the water tank level: The “transmitter” in this diagram contains its own precision regulated source of voltage, and the potentiometer setting is varied by the motion of a float inside the water tank following the water level. The “indicator” is nothing more than a voltmeter with a scale calibrated to read in some unit height of water (inches, feet, meters) instead of volts. As the water tank level changes, the float will move. As the float moves, the potentiometer wiper will correspondingly be moved, dividing a different proportion of the battery voltage to go across the two-conductor cable and on to the level indicator. As a result, the voltage received by the indicator will be representative of the level of water in the storage tank. This elementary transmitter/indicator system is reliable and easy to understand, but it has its limitations. Perhaps greatest is the fact that the system accuracy can be influenced by excessive cable resistance. Remember that real voltmeters draw small amounts of current, even though it is ideal for a voltmeter not to draw any current at all. This being the case, especially for the kind of heavy, rugged analog meter movement likely used for an industrial-quality system, there will be a small amount of current through the 2-conductor cable wires. The cable, having a small amount of resistance along its length, will consequently drop a small amount of voltage, leaving less voltage across the indicator’s leads than what is across the leads of the transmitter. This loss of voltage, however small, constitutes an error in measurement: Resistor symbols have been added to the wires of the cable to show what is happening in a real system. Bear in mind that these resistances can be minimized with heavy-gauge wire (at additional expense) and/or their effects mitigated through the use of a high-resistance (null-balance?) voltmeter for an indicator (at additional complexity). Despite this inherent disadvantage, voltage signals are still used in many applications because of their extreme design simplicity. One common signal standard is 0-10 volts, meaning that a signal of 0 volts represents 0 percent of measurement, 10 volts represents 100 percent of measurement, 5 volts represents 50 percent of measurement, and so on. Instruments designed to output and/or accept this standard signal range are available for purchase from major manufacturers. A more common voltage range is 1-5 volts, which makes use of the “live zero” concept for circuit fault indication. - DC voltage can be used as an analog signal to relay information from one location to another. - A major disadvantage of voltage signaling is the possibility that the voltage at the indicator (voltmeter) will be less than the voltage at the signal source, due to line resistance and indicator current draw. This drop in voltage along the conductor length constitutes a measurement error from the transmitter to the indicator.
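To make the loading error concrete, here is a worked example with assumed values (they are not from the original text): a 5 volt transmitter signal, 10 Ω of total cable resistance, and a voltmeter movement with 10 kΩ of internal resistance. The meter resistance and the cable resistance form a voltage divider:

$$V_{\text{indicator}} = V_{\text{transmitter}} \cdot \frac{R_{\text{meter}}}{R_{\text{meter}} + R_{\text{cable}}} = 5\ \text{V} \times \frac{10000\ \Omega}{10010\ \Omega} \approx 4.995\ \text{V}$$

That is an error of about 0.1% of the signal. Re-running the same division with a 1 MΩ electronic voltmeter shrinks the error to roughly 0.001%, which is why a high-resistance indicator is one of the mitigations mentioned above.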
Building Memory Skills: Tips and Techniques for Instant Recall

Gain a better understanding of how memory works, how to manage everyday memory lapses and how to improve your memory. Are you one of those people who forgets someone's name as soon as they are introduced to you? Well, you might be surprised to know your brain is wired to forget. How your memory works has a direct impact on work quality, productivity, day-to-day interactions and functioning. This topic will provide you with practical strategies that will improve your capacity to recall information.
• You will be able to describe key components of how memory systems work.
• You will be able to explain how stress impacts memory.
• You will be able to discuss how managing sleep, nutrition, exercise, and aspects of daily life can impact memory.
• You will be able to identify practical strategies to improve memory and recall of information.
A capsule network is a kind of shorthand term for a specific kind of neural network pioneered by Geoffrey Hinton. In the capsule network, specific methodology is applied to image processing to try to effect an understanding of objects across three-dimensional viewpoints. To understand capsule networks, or what Hinton has called the "dynamic routing between capsules" algorithm, it is important to understand convolutional neural networks (CNNs). Convolutional neural networks have done an amazing job of helping computers to assemble features in image processing and to understand pictures in some of the same ways that humans do. Complex sets of filtering, pooling and scaling layers help to achieve detailed results. But CNNs are not good at understanding an image from various three-dimensional views. Hinton's concept is that algorithms such as dynamic routing between capsules can use inverse rendering to break down objects and understand the relationships of their views from various three-dimensional angles. Experts point out that progress in computing power and data storage has made items like capsule networks possible. These interesting ideas form the basis for some current groundbreaking research into more powerful AI.
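One concrete ingredient that can be pinned down: the 2017 paper "Dynamic Routing Between Capsules" (Sabour, Frosst, and Hinton) squashes each capsule's raw output vector so that its length lands in [0, 1) and can be read as the probability that the feature the capsule represents is present. Here is a minimal, illustrative Kotlin sketch of just that nonlinearity; it is not a full routing implementation:

```kotlin
import kotlin.math.sqrt

// "Squash" nonlinearity from Dynamic Routing Between Capsules (2017):
//   v = (|s|^2 / (1 + |s|^2)) * (s / |s|)
// Short vectors shrink toward zero; long vectors approach unit length,
// so the output length behaves like a presence probability.
fun squash(s: DoubleArray): DoubleArray {
    var sqNorm = 0.0
    for (x in s) sqNorm += x * x               // |s|^2
    if (sqNorm == 0.0) return DoubleArray(s.size)  // avoid division by zero
    val scale = (sqNorm / (1.0 + sqNorm)) / sqrt(sqNorm)
    return DoubleArray(s.size) { i -> s[i] * scale }
}

fun main() {
    // A long vector keeps its direction but is squashed below unit length.
    println(squash(doubleArrayOf(3.0, 4.0)).joinToString())  // ~[0.577, 0.769]
}
```

Direction is preserved, so a capsule can encode pose parameters (orientation, scale, position) in the vector's components while the length carries the presence probability.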
Move to resolve 'vocal learning' puzzle. New Zealand's tiniest native birds could be the missing piece of a puzzle facing scientists around the world

By Jamie Morton, originally published in the New Zealand Herald, 29 December 2017

A new study aims to answer whether New Zealand wrens — the petite group of birds now represented only by our smallest bird, the rifleman, and the mountain-dwelling New Zealand rock wren — possess a special trait that allows them to learn new sounds. But the implications of the research, supported with a $300,000 Marsden Fund grant, could reach beyond our shores. Although most animals communicate with innate sounds, a select group — among them whales, dolphins, bats, elephants, some birds, and us — can develop their own. What's called "vocal learning" is often associated with songbirds and parrots, whose famed vocal abilities and learning process are so similar to ours that they've become important research subjects for understanding how speech evolved in humans. Much of this research is based on assumptions about when, where and why vocal learning evolved — but recent research has drastically reordered the bird family tree. "For decades, we thought that parrots and songbirds were only distantly related, so we assumed that each group had evolved vocal learning independently," said Dr Kristal Cain, an integrative biologist at the University of Auckland. "However, new research has shown that they are actually very close to each other on the bird family tree, and scientists now think their distant ancestor may have evolved vocal learning. "If so, vocal learning is much, much older than previously thought, and probably evolved for very different reasons than we previously thought." One special group of birds was on the branch between parrots and songbirds — the New Zealand wrens — but no one yet knew if they were vocal learners. "If they are, this would really change how we view vocal learning, and alter the direction of future research." The New Zealand wrens are considered unusual and ancient, having evolved before all of the songbirds found in our wilderness today. "They are very important for understanding the evolution of vocal learning and how the brain evolved to allow vocal learning, but we know almost nothing about their vocal patterns," Cain said. "Because they make very simple noises, it has long been assumed that they are not learners, but recent research suggests it is possible they actually learn those simple vocalisations." Her team will focus on rifleman birds living in the wild, tracking them from birth to adulthood and recording their sounds. They'll then compare those sounds with those of the adults around them, and with other populations, to find out whether they have been learned. They are working with people who did earlier research on the species and will draw on new technology that allows large amounts of data to be recorded without disturbing the birds. "This will solve one problem, but it also creates a new one: how to efficiently deal with stacks of recordings. "To deal with that challenge we're using special software that will allow us to train the computer to find the noises we're interested in.
"The best part of this project is that whatever the answer — whether they are learners or not — the results will be very exciting and important for future research." New Zealand is prized by biologists as a natural laboratory for studying the evolutionary history of birds, as it was once part of the supercontinent Gondwana yet still boasts very old species of songbirds and parrots. "Songbirds are the most widespread and diverse group of birds, and parrots are widely regarded as some of the most intelligent and vocally flexible. Consequently, New Zealand could become one of the world leaders driving our understanding of how and why these bird groups evolved the way they did." Understanding whether the New Zealand wrens are vocal learners would also be important for conservation: "Finally, to understand when, why and how vocal learning evolved in birds we must have a better idea of vocalisations in the New Zealand wrens. They really are a missing piece to this puzzle." Dr Kristal Cain, The University of Auckland
On Christmas Day in Hopewell Township, New Jersey, approximately 100 patriots will re-enact George Washington's crossing of the Delaware River. This annual memorial to the Christmas Night crossing of 1776 historically draws large crowds, and takes place regardless of inclement weather. In a war known for its many heroic deeds and acts of bravery, why does this specific moment merit such attention? After the publication of his book 1776, historian David McCullough gave a speech at The Heritage Foundation in which he masterfully explained how Washington's courageous actions that Christmas Night determined the fate of America. In it, "America's historian" speaks about the lasting significance of Washington's crossing of the Delaware 234 years ago.
Doran's line drawing of an Australian canoe that was built by bunching and tying the ends of a half-tube of bark was obviously based on this fine early 19th-century engraving, from Lesueur and Petit's Atlas to Peron's "Voyage of Discovery." (I've transcribed the name of the source verbatim from Johnstone's caption, but unfortunately, there is no bibliography or footnote citation for this source.) In the engraving, one can see thwarts, and ties (probably bark) running across the boat from sheer to sheer just above the thwarts, but no gunwale members, ribs, or other structure. The Aboriginal people paddling in the background provide scale, showing that this was a small two-man canoe indeed. They are paddling with single-handed paddles, and they have a small fire going amidships. In the background, a man is carrying a similar boat above his head. From the abundance of ducks and other waterfowl in the image, one can safely speculate that the canoes were used for waterfowling, egg collecting, or both.

My earlier blog post included Doran's description of the bark rafts of Tasmania, which stated that they were extremely temporary craft, becoming waterlogged in about six hours. Here's an image, from the same source as the above image, by way of Johnstone. As Johnstone notes, these are similar in design to the bundled-reed floats that were used widely in many cultures around the world, the Peruvian caballito pictured just a couple of posts ago being an example. But given their short working life, Johnstone seems justified in calling them among the most primitive of craft. I have no information on what material was used to bind them together, nor on the type of bark used in the bundles. Again, the two paddlers in the background show that this is a very small tandem craft, and it doesn't look particularly stable either. They carry poles, but apparently no paddles, and a pair of poles appears in the foreground as well. As any modern canoe poler can attest, however, poles can serve entirely adequately to propel a boat in deep water, as their projected area under water is often not much less than that of a paddle blade.
1. What is an ideology? Why are ideologies important?
An ideology is a set of beliefs that reflects a person's outlook on the world. Ideologies are important because they shape how we perceive and interact with the world. In politics, they affect the voting choices we make and the policies we support.

2. Define fundamentalism.
Fundamentalism is the belief that a religious text is absolutely, literally true. This ideology also states that anything that opposes the text must be wrong. All behavior and belief must be guided by this central text, and anything else is sinful.

3. Describe the core elements of classical liberalism. Name at least one key figure in the founding of liberalism.
The core elements of classical liberalism include the importance of freedom, political equality, limited government, the free market, and a faith in reason and progress. John Locke is a key figure in the founding of liberalism.

4. American conservatism differs in many ways from traditional conservatism. Describe those differences.
Traditional conservatism was hostile to the spread of democracy and the free market because they undermined tradition. American conservatism embraces both of these ideas.

5. Socialism can be characterized as an attempt to make good on the failed promises of liberalism. Why is this?
Socialists argue that although liberalism promises freedom and equality, it does not deliver them because of the inequalities of the market. Therefore, socialists want the government to play a very strong role in the economy—perhaps even controlling it entirely—in order to rectify the failings of liberalism.
One Survivor Remembers is a free film, available for streaming exclusively on Teaching Tolerance. Please note that the film contains graphic footage of atrocities committed during the Holocaust. We recommend this content for sixth grade and higher.

In cooperation with the United States Holocaust Memorial Museum and HBO, Teaching Tolerance is pleased to present the Oscar-winning documentary One Survivor Remembers online. Accompanying online teacher resources encourage thoughtful classroom discussion about a historical topic that is particularly difficult to teach and comprehend. Gerda Weissmann Klein's account of surviving the Holocaust reduces the distance of both time and geography, making the topic more accessible to students. It also places the responsibility of remembering not solely on the shoulders of one woman, but on us all. This educational film deepens students' understanding of the Holocaust and draws connections to students' lives by asking enduring questions such as:
- How can individuals and societies remember and commemorate difficult histories?
- What is the purpose of remembering? What are the consequences of forgetting?
- During the Holocaust, what strategies were used to create distinctions between "us" and "them"? What were the consequences of these distinctions?
- What are the costs of injustice, hatred and bigotry?
- What choices do people make in the face of injustice? What obstacles keep individuals from getting involved in their communities and larger world? What factors encourage participation?

"Deeply affecting in its dignity... A testament to courage and hope."

One Survivor Remembers was produced in 1995 by HBO and the United States Holocaust Memorial Museum and directed by Kary Antholis. Since 2005, more than 130,000 copies of the film have been distributed to schools and youth organizations around the country by the Southern Poverty Law Center's Teaching Tolerance project. In 2012, the film was selected for inclusion in the National Film Registry of the Library of Congress, a registry reserved for films that are "culturally, historically or aesthetically significant."

"Here to Tell My Story" (Interview with Gerda Weissmann by Jeff Sapp, Teaching Tolerance, Fall 2005)

Each chapter of the Teacher's Guide for One Survivor Remembers is linked below.
- One Survivor Remembers: Bullies & Bystanders
- One Survivor Remembers: A Call to Action
- One Survivor Remembers: Intolerance Today
- One Survivor Remembers: Antisemitism
- One Survivor Remembers: Twenty Pounds
History of Belarus

After an initial period of independent feudal consolidation, Belarusian lands were incorporated into the Kingdom of Lithuania, the Grand Duchy of Lithuania, and later into the Polish–Lithuanian Commonwealth and the Russian Empire, and eventually the Soviet Union. Belarus became an independent country in 1991 after declaring itself free from the Soviet Union.

Early history

The history of Belarus, or more precisely of the Belarusian ethnicity, begins with the migration and expansion of the Slavic peoples throughout Eastern Europe between the 6th and 8th centuries. East Slavs settled on the territory of present-day Belarus, Russia and Ukraine, assimilating the local Baltic (Yotvingian, Dniepr Balt), Ugro-Finnic (in Russia) and steppe nomad (in Ukraine) peoples already living there; these early ethnic integrations contributed to the gradual differentiation of the three East Slavic nations. These East Slavs, a pagan, animistic, agrarian people, had an economy which included trade in agricultural produce, game, furs, honey, beeswax and amber.

During the 9th and 10th centuries, Scandinavian Vikings established trade posts on the way from Scandinavia to the Byzantine Empire. The network of lakes and rivers crossing East Slav territory provided a lucrative trade route between the two civilizations. In the course of trade, they gradually took sovereignty over the tribes of East Slavs, at least to the point required by improvements in trade. The Rus' rulers invaded the Byzantine Empire on a few occasions, but eventually they allied against the Bulgars. The condition underlying this alliance was to open the country to Christianization and acculturation from the Byzantine Empire. The common cultural bond of Eastern Orthodox Christianity and written Church Slavonic (a literary and liturgical Slavic language developed by the 8th-century missionaries Saints Cyril and Methodius) fostered the emergence of a new geopolitical entity, Kievan Rus' — a loose-knit network of principalities, established along preexisting trade routes, with major centers in Novgorod (currently in Russia), Polatsk (in Belarus) and Kiev (currently in Ukraine) — which claimed a sometimes precarious preeminence among them.

First Belarusian states

Between the 9th and 12th centuries, the Principality of Polotsk (northern Belarus) emerged as the dominant center of power on Belarusian territory, with a lesser role played by the Principality of Turaŭ in the south. It repeatedly asserted its sovereignty in relation to other centers of Rus', becoming a political capital, the episcopal see of a bishopric and the controller of vassal territories among Balts in the west. The city's Cathedral of the Holy Wisdom (1044–66), though completely rebuilt over the years, remains a symbol of this independent-mindedness, rivaling churches of the same name in Novgorod and Kiev and referring to the original Hagia Sophia in Constantinople (and hence to claims of imperial prestige, authority and sovereignty).
Cultural achievements of the Polatsk period include the work of the nun Euphrosyne of Polatsk (1120–73), who built monasteries, transcribed books, promoted literacy and sponsored art (including local artisan Lazarus Bohsha's famous "Cross of Euphrosyne", a national symbol and treasure stolen during World War II), and the prolific, original Church Slavonic sermons and writings of Bishop Cyril of Turau (1130–82).

Grand Duchy of Lithuania

In the 13th century, the fragile unity of Kievan Rus' disintegrated due to nomadic incursions from Asia, which climaxed with the Mongol sacking of Kiev (1240), leaving a geopolitical vacuum in the region. The East Slavs splintered into a number of independent and competing principalities. Through military conquest and dynastic marriages, the West Ruthenian (Belarusian) principalities were acquired by the expanding Lithuania, beginning with the rule of the Lithuanian King Mindaugas (1240–63). From the 13th to the 15th century, Baltic and Ruthenian lands were consolidated into the Grand Duchy of Lithuania, whose initial capital is unknown but was presumably either Navahrudak, Voruta, Trakai, Kernavė or Vilnius. From the 14th century, Vilnius was the sole official capital of the state.

The Lithuanians' smaller numbers in this medieval state gave the Ruthenians (present-day Belarusians and Ukrainians) an important role in its everyday cultural life. Owing to the prevalence of East Slavs and the Eastern Orthodox faith among the population of the eastern and southern regions of the state, the Ruthenian language was a widely used colloquial language. An East Slavic variety (rus'ka mova, Old Belarusian or the West Russian Chancellery language), gradually influenced by Polish, was the language of administration in the Grand Duchy of Lithuania at least from Vytautas' reign until the late 17th century, when it was eventually replaced by Polish. This period of political breakdown and reorganization also saw the rise of written local vernaculars in place of the literary and liturgical Church Slavonic language, a further stage in the evolving differentiation between the Belarusian, Russian and Ukrainian languages.

Several Lithuanian monarchs — the last being Švitrigaila in 1432–36 — relied on the Eastern Orthodox Ruthenian majority, while most monarchs and magnates increasingly came to reflect the opinions of the Roman Catholics. Construction of Orthodox churches in some parts of present-day Belarus was initially prohibited, as was the case in Vitebsk in 1480. On the other hand, further unification of the mostly Orthodox Grand Duchy with mostly Catholic Poland led to liberalization and a partial resolution of the religious problem. In 1511, King and Grand Duke Sigismund I the Old granted the Orthodox clergy an autonomy previously enjoyed only by Catholic clergy. The privilege was enhanced in 1531, when the Orthodox church ceased to be answerable to the Catholic bishop; instead the Metropolitan was responsible only to the sobor of eight Orthodox bishops, the Grand Duke and the Patriarch of Constantinople. The privilege also extended the jurisdiction of the Orthodox hierarchy over all Orthodox people. In such circumstances, a vibrant Ruthenian culture flourished, mostly in major present-day Belarusian cities.
Despite the legal use of the Old Ruthenian language (the predecessor of both modern Belarusian and Ukrainian) as a chancellery language in the territory of the Grand Duchy of Lithuania, literature in it was mostly non-existent, apart from several chronicles. The first printed Belarusian book, set in the Cyrillic alphabet, was published in Prague in 1517 by Francysk Skaryna, a leading representative of renaissance Belarusian culture. Soon afterwards he founded a similar printing press in Vilnius and began an extensive undertaking of publishing the Bible and other religious works there. Apart from the Bible itself, before his death in 1551 he published 22 other books, thus laying the foundations for the evolution of the Ruthenian language into the modern Belarusian language.

The Lublin Union of 1569 constituted the Polish–Lithuanian Commonwealth as an influential player in European politics and the largest multinational state in Europe. While Ukraine and Podlaskie became subject to the Polish Crown, present-day Belarusian territory was still regarded as part of the Grand Duchy of Lithuania. The new polity was dominated by the much more densely populated Poland, which had 134 representatives in the Sejm as compared to the Grand Duchy of Lithuania's 46. However, the Grand Duchy retained much autonomy and was governed by a separate code of laws, the Lithuanian Statutes, which codified both civil and property rights. Mogilyov was the largest urban centre of the territory of present-day Belarus, followed by Vitebsk, Polotsk, Pinsk, Slutsk, and Brest, whose populations exceeded 10,000. In addition, Vilna (Vilnius), the capital of the Grand Duchy of Lithuania, also had a significant Ruthenian population.

Over time, the ethnic pattern did not change much. Throughout their existence as a separate culture, Ruthenians mostly formed the rural population, with power held by the local szlachta and boyars, often of Lithuanian, Polish or Russian descent. As in the rest of Central and Eastern Europe, trade and commerce were mostly monopolized by Jews, who formed a significant part of the urban population. Since the Union of Horodlo of 1413, the local nobility had been assimilated into the traditional clan system by means of the formal procedure of adoption by the szlachta (Polish gentry). Eventually it formed a significant part of the szlachta. Initially mostly Ruthenian and Orthodox, most of them became polonized with time. This was especially true of the major magnate families (the Sapieha and Radziwiłł clans being the most notable), whose personal fortunes and properties often surpassed those of the royal families and were huge enough for them to be called a state within a state. Many of them founded their own cities and settled them with settlers from other parts of Europe. Indeed, there were Scots, Germans and Dutch people inhabiting major towns of the area, as well as several Italian artists who had been "imported" to the lands of modern Belarus by the magnates.

In contrast to Poland, peasants in the lands of the Grand Duchy had little personal freedom in the Middle Ages. With time, the magnates and the gentry gradually limited what few liberties the serfs had, while increasing their burdens, often in the form of labour for the local gentry. This made many Ruthenians flee to the sparsely populated lands, the Dzikie Pola (Wild Fields), the Polish name for the Zaporizhian Sich area, where they formed a large part of the Cossacks.
Others sought refuge in the lands of other magnates or in Russia. With time, religious conflicts also started to arise. The gentry gradually began to adopt Catholicism, while the common people by and large remained faithful to Eastern Orthodoxy. Initially, the Warsaw Compact of 1573 codified the preexisting freedom of worship. However, the rule of the ultra-Catholic King Sigismund III Vasa was marked by numerous attempts to spread Catholicism, mostly through his support for the Counter-Reformation and the Jesuits. Possibly to avoid such conflicts, in 1595 the Orthodox hierarchs of Kiev signed the Union of Brest, breaking their links with the Patriarch of Constantinople and placing themselves under the Pope. Although the union was generally supported by most local Orthodox bishops and the king himself, it was opposed by some prominent nobles and, more importantly, by the nascent Cossack movement. This led to a series of conflicts and rebellions against the local authorities. The first of these happened in 1595, when Cossack insurgents under Severyn Nalivaiko took the towns of Slutsk and Mogilyov and executed the Polish magistrates there. Other such clashes took place in Mogilyov (1606–10), Vitebsk (1623), and Polotsk (1623, 1633). This left the population of the Grand Duchy divided between Greek Catholic and Greek Orthodox parts. At the same time, after the schism in the Orthodox Church (the Raskol), some Old Believers migrated west, seeking refuge in the Rzeczpospolita, which allowed them to practice their faith freely.

From 1569, the Polish–Lithuanian Commonwealth suffered a series of Tatar invasions, the goal of which was to loot, pillage and capture slaves into jasyr. The borderland area to the south-east was in a state of semi-permanent warfare until the 18th century. Some researchers estimate that altogether more than 3 million people, predominantly Ukrainians but also Russians, Belarusians and Poles, were captured and enslaved during the time of the Crimean Khanate.

Despite the abovementioned conflicts, the literary tradition of Belarus evolved. Until the 17th century, the Ruthenian language, the predecessor of modern Belarusian, was used in the Grand Duchy as a chancery language, that is, the language used for official documents. Afterwards, it was replaced with Polish, commonly spoken by the upper classes of Belarusian society. Both Polish and Ruthenian cultures gained a major cultural centre with the foundation of the Academy of Vilna. At the same time the Belarusian lands entered a path of economic growth, with the formation of numerous towns that served as centres of trade on the east-west routes. However, both economic and cultural growth came to an end in the mid-17th century with a series of violent wars against the Tsardom of Russia, Sweden, Brandenburg and Transylvania, as well as internal conflicts, known collectively as the Deluge.

The misfortunes began in 1648 with Bohdan Khmelnytsky, who launched a large-scale Cossack uprising in Ukraine. Although the Cossacks were defeated in 1651 in the Battle of Beresteczko, Khmelnytsky sought help from the Russian tsar, and by the Treaty of Pereyaslav Russia dominated and partially occupied the eastern lands of the Commonwealth from 1655. The Swedes invaded and occupied the rest in the same year. The wars exposed the internal problems of the state, with some people of the Grand Duchy supporting Russia while others (most notably Janusz Radziwiłł) supported the Swedes.
Although the Swedes were finally driven back in 1657 and the Russians were defeated in 1662, most of the country was ruined. It is estimated that the Commonwealth lost a third of its population, with some regions of Belarus losing as much as 50%. This broke the power of the once-mighty Commonwealth, and the country gradually became vulnerable to foreign influence. Subsequent wars in the area (the Great Northern War and the War of the Polish Succession) damaged its economy even further. In addition, Russian armies raided the Commonwealth under the pretext of recovering fugitive peasants. By the mid-18th century their presence in the lands of modern Belarus had become almost permanent. Eventually, by 1795, Poland had been partitioned by its neighbors. Thus a new period in Belarusian history began, with all its lands annexed by the Russian Empire, in the Russian tsars' continuing endeavor of "gathering the Rus lands", begun after the liberation from the Tatar yoke under Grand Duke Ivan III of Russia.

Russian Empire

Under Russian administration, the territory of Belarus was divided into the guberniyas of Minsk, Vitebsk, Mogilyov, and Hrodno. Belarusians were active in the guerrilla movement against Napoleon's occupation. With Napoleon's defeat, Belarus again became a part of Imperial Russia and its guberniyas constituted part of the Northwestern Krai. The anti-Russian uprisings of the gentry in 1830 and 1863 were subdued by government forces.

Although under Nicholas I and Alexander III the national cultures were repressed through policies of de-Polonization and Russification, which included a return to Orthodoxy, the 19th century was also marked by the rise of the modern Belarusian nation and self-confidence. A number of authors started publishing in the Belarusian language, including Jan Czeczot, Władysław Syrokomla and Konstanty Kalinowski. In a Russification drive in the 1840s, Nicholas I forbade use of the term Belarusia and renamed the region the "North-Western Territory". He also prohibited the use of the Belarusian language in public schools, campaigned against Belarusian publications and tried to pressure those who had converted to Catholicism under the Poles to reconvert to the Orthodox faith. In 1863, economic and cultural pressure exploded into a revolt, led by Kalinowski. After the failed revolt, the Russian government reintroduced the use of Cyrillic for Belarusian in 1864 and banned the use of the Latin alphabet.

In the second half of the 19th century, the Belarusian economy, like that of the rest of Europe, experienced significant growth due to the spread of the Industrial Revolution to Eastern Europe, particularly after the emancipation of the serfs in 1861. Peasants sought a better lot in foreign industrial centres, with some 1.5 million people leaving Belarus in the half-century preceding the Russian Revolution of 1917.

BNR and LBSSR

Minsk was captured by German troops on 21 February 1918. World War I was a short period in which Belarusian culture started to flourish. The German administration allowed schools with the Belarusian language, previously banned in Russia; a number of Belarusian schools were created, until 1919 when they were banned again by the Polish military administration. At the end of World War I, when Belarus was still occupied by the Germans, the short-lived Belarus National Republic was proclaimed on 25 March 1918, in accordance with the Treaty of Brest-Litovsk, as part of the German Mitteleuropa plan.
In December 1918, Mitteleuropa became obsolete as the Germans withdrew from the Ober-Ost territory, and for the next few years, in the newly created political vacuum, the territories of Belarus would witness a struggle among various national and foreign factions. On 3 December 1918 the Germans withdrew from Minsk. On 10 December 1918 Soviet troops occupied Minsk. The Rada (Council) of the Belarus National Republic went into exile, first to Kaunas, then to Berlin and finally to Prague. On 2 January 1919, the Soviet Socialist Republic of Byelorussia was declared. On 17 February 1919 it was disbanded: part of it was included in the RSFSR, and part was joined to the Lithuanian SSR to form the Lithuanian–Byelorussian Soviet Socialist Republic, informally known as Litbel, whose capital was Vilnius.

While the Belarus National Republic faced off with Litbel, foreign powers were preparing to reclaim what they saw as their territories: Polish forces were moving from the west, and Russians from the east. When Vilnius was captured by Polish forces on 17 April 1919, the capital of the Soviet puppet state Litbel was moved to Minsk. On 17 July 1919 Lenin dissolved Litbel under the pressure of Polish forces advancing from the west. Polish troops captured Minsk on 8 August 1919.

Belarusian Soviet Republic and West Belarus

Some time in 1918 or 1919, Sergiusz Piasecki returned to Belarus, joining the Belarusian anti-Soviet units of the "Green Oak" (in Polish, Zielony Dąb), led by Ataman Wiaczesław Adamowicz (pseudonym: J. Dziergacz). When the Polish Army captured Minsk on 8 August 1919, Adamowicz decided to work with the Poles. Belarusian units were thus created, and Piasecki was transferred to a Warsaw school of infantry cadets. In the summer of 1920, during the Polish–Soviet War, Piasecki fought in the Battle of Radzymin.

The frontiers between Poland, which had established an independent government after World War I, and the former Russian Empire were not recognized by the League of Nations. Poland's Józef Piłsudski, who envisioned the formation of an Intermarium federation as a Central and East European bloc that would be a bulwark against Germany to the west and Russia to the east, carried out the Kiev Offensive into Ukraine in 1920. This met with a Red Army counter-offensive that drove into Polish territory almost to Warsaw. Minsk itself was re-captured by the Soviet Red Army on 11 July 1920, and a new Byelorussian Soviet Socialist Republic was declared on 31 July 1920. Piłsudski, however, halted the Soviet advance at the Battle of Warsaw and resumed his eastward offensive. Finally the Treaty of Riga, ending the Polish–Soviet War, divided Belarus between Poland and Soviet Russia. Over the next two years, the Belarus National Republic prepared a national uprising, ceasing the preparations only when the League of Nations recognized the Soviet Union's western borders on 15 March 1923.

The Soviets terrorised West Belarus, the most radical case being the Soviet raid on Stołpce; Poland created the Border Protection Corps in 1924. The Polish part of Belarus was subjected to Polonization policies (especially in the 1930s), while Soviet Belarus was one of the original republics that formed the USSR. For several years, the national culture and language enjoyed a significant revival in Soviet Belarus, and a Polish Autonomous District was also formed. This, however, came to an end during the Great Purge, when almost all of the prominent Belarusian national intelligentsia were executed, many of them buried in Kurapaty.
Thousands were deported to Asia. As a result of the Polish Operation of the NKVD, tens of thousands of people of many nationalities were killed. Belarusian orthography was Russified in 1933, and use of the Belarusian language was discouraged as exhibiting an anti-Soviet attitude. In West Belarus, up to 30,000 families of Polish veterans (osadniks) were settled on lands formerly belonging to the Russian tsar's family and the Russian aristocracy. Belarusian representation in the Polish parliament was reduced as a result of the 1930 elections. From the early 1930s, the Polish government introduced a set of policies designed to Polonize all minorities (Belarusians, Ukrainians, Jews, etc.). The use of the Belarusian language was discouraged and Belarusian schools faced severe financial problems. By the spring of 1939, there was no longer a single official Belarusian organisation in Poland, nor a single exclusively Belarusian school (only 44 schools still taught the Belarusian language).

Belarus in World War II

When the Soviet Union invaded Poland on 17 September 1939, following the terms of the Molotov–Ribbentrop Pact's secret protocol, much of what had been eastern Poland was annexed to the BSSR. As in the times of the German occupation during World War I, the Belarusian language and Soviet culture enjoyed relative prosperity in this short period. Already in October 1940, over 75% of schools used the Belarusian language, including in regions where no Belarusians lived, e.g. around Łomża, a form of Ruthenization. Western Belarus was sovietised; tens of thousands were imprisoned, deported or murdered, the victims being mostly Poles and Jews.

After twenty months of Soviet rule, Germany and its Axis allies invaded the Soviet Union on 22 June 1941. Soviet authorities immediately evacuated about 20% of the population of Belarus, killed thousands of prisoners and destroyed all the food supplies. The country suffered particularly heavily during the fighting and the German occupation. Minsk was captured by the Germans on 28 June 1941. Following bloody encirclement battles, all of the present-day Belarus territory was occupied by the Germans by the end of August 1941. During World War II, the Nazis attempted to establish a puppet Belarusian government, the Belarusian Central Rada, with symbols similar to those of the BNR. In reality, however, the Germans imposed a brutal racist regime, burning down some 9,000 Belarusian villages, deporting some 380,000 people for slave labour, and killing hundreds of thousands more civilians. Local police took part in many of those crimes. Almost the whole of the previously very numerous Jewish population of Belarus that did not evacuate was killed. One of the first uprisings of a Jewish ghetto against the Nazis occurred in 1942 in Belarus, in the small town of Lakhva.

From the early days of the occupation, a powerful and increasingly well-coordinated Belarusian resistance movement emerged. Hiding in the woods and swamps, the partisans inflicted heavy damage on German supply lines and communications, disrupting railway tracks and bridges, cutting telegraph wires, attacking supply depots, fuel dumps and transports, and ambushing German soldiers. Not all anti-German partisans were pro-Soviet. In the largest partisan sabotage action of the entire Second World War, the so-called Asipovichy diversion of 30 July 1943, four German trains with supplies and Tiger tanks were destroyed. To fight partisan activity, the Germans had to withdraw considerable forces behind their front line.
On 22 June 1944 the huge Soviet offensive Operation Bagration was launched; Minsk was re-captured on 3 July 1944, and all of Belarus was regained by the end of August. Hundreds of thousands of Poles were expelled after 1944.

As part of the Nazis' effort to combat the enormous Belarusian resistance during World War II, special units of local collaborators were trained by the SS's Otto Skorzeny to infiltrate the Soviet rear. In 1944 thirty Belarusians (known as Čorny Kot (Black Cat) and personally led by Michał Vituška) were airdropped by the Luftwaffe behind the lines of the Red Army, which had already liberated Belarus during Operation Bagration. They experienced some initial success due to disorganization in the rear of the Red Army, and some other German-trained Belarusian nationalist units also slipped through the Białowieża Forest in 1945. The NKVD, however, had already infiltrated these units. Vituška managed to escape to the West after the war, along with several other Belarusian Central Rada leaders.

In total, Belarus lost a quarter of its pre-war population in World War II, including practically all of its intellectual elite. About 9,200 villages and 1.2 million houses were destroyed. The major cities of Minsk and Vitsebsk lost over 80% of their buildings and city infrastructure. For its defence against the Germans and its tenacity during the German occupation, the capital Minsk was awarded the title Hero City after the war; the fortress of Brest was awarded the title Hero-Fortress.

BSSR from 1945 to 1990

After the end of the war in 1945, Belarus became one of the founding members of the United Nations, joining alongside the Soviet Union itself and another republic, Ukraine. In exchange for Belarus and Ukraine joining the UN, the United States was given the right to seek two more votes, a right that has never been exercised. More than 200,000 Poles fled or were expelled to Poland, and some were killed by the NKVD or deported to Siberia. The Armia Krajowa and post-AK resistance was strongest in the Hrodna, Vaŭkavysk, Lida and Ščučyn regions.

The Belarusian economy was completely devastated by the events of the war. Most of the industry, including whole production plants, was removed either to Russia or to Germany, and industrial production in Belarus in 1945 amounted to less than 20% of its pre-war level. Most of the factories evacuated to Russia, with several spectacular exceptions, were not returned to Belarus after 1945. During the immediate postwar period, the Soviet Union first rebuilt and then expanded the BSSR's economy, with control always exerted exclusively from Moscow. During this time, Belarus became a major centre of manufacturing in the western region of the USSR. Huge industrial complexes such as BelAZ, MAZ, and the Minsk Tractor Plant were built in the country. The increase in jobs brought a huge immigrant population of Russians into Belarus. Russian became the official language of administration, and the peasant class, which had traditionally been the base of the Belarusian nation, ceased to exist.

On 26 April 1986, the Chernobyl disaster occurred at the Chernobyl nuclear power plant in Ukraine, situated close to the border with Belarus. It is regarded as the worst accident in the history of nuclear power. It produced a plume of radioactive debris that drifted over parts of the western Soviet Union, Eastern Europe, and Scandinavia. Large areas of Belarus, Ukraine and Russia were contaminated, resulting in the evacuation and resettlement of roughly 200,000 people.
About 60% of the radioactive fallout landed in Belarus. The effects of the Chernobyl accident in Belarus were dramatic: about 50,000 km² (roughly a quarter of the territory of Belarus), formerly populated by 2.2 million people (a fifth of the Belarusian population), now require permanent radioactive monitoring (after receiving doses over 37 kBq/m² of caesium-137). Some 135,000 people were permanently resettled, and many more were resettled temporarily. Within ten years of the accident, the incidence of thyroid cancer among children had increased fifteenfold (the sharp rise began about four years after the accident).

Republic of Belarus

On 27 July 1990, Belarus declared its national sovereignty, a key step toward independence from the Soviet Union. The BSSR was formally renamed the Republic of Belarus on 25 August 1991. Around that time, Stanislav Shushkevich became chairman of the Supreme Soviet of Belarus, the top leadership position in the country. On 8 December 1991, Shushkevich met with Boris Yeltsin of Russia and Leonid Kravchuk of Ukraine in Belavezhskaya Pushcha to formally declare the dissolution of the Soviet Union and the formation of the Commonwealth of Independent States.

In 1994, the first presidential elections were held, and Alexander Lukashenko was elected president of Belarus. A 1996 referendum resulted in an amendment of the constitution that stripped key powers from the parliament. In 2001, Lukashenko was re-elected president in elections described as undemocratic by Western observers; at the same time, the West began criticising him for authoritarianism. In 2006, Lukashenko was once again re-elected in presidential elections that were again criticised as flawed by most European Union countries. In 2010, Lukashenko was re-elected yet again in presidential elections described as flawed by most EU countries and institutions. A peaceful protest against the electoral fraud was attacked by riot police and by armed men dressed in black; afterwards, up to 700 opposition activists, including 7 presidential candidates, were arrested by the KGB.
October is bullying prevention month. Unfortunately, bullying can happen any time and to anyone, from the youngest kids in day care all the way through high school or college. While nothing will totally stop bullying, here are some tips to help parents detect, prevent and deal with the problem.

1. Be your child's go-to person. Make sure your child always feels safe telling you about incidents at school, in the neighborhood, or even at home. At the dinner table, some families share the best thing that happened during the day, and the worst. This helps everyone learn to appreciate when someone opens a door for them or plays with them. To illustrate that no one is exempt from bullying, other family members should share bad situations (when appropriate). Exploring how to handle the "bad" scenarios can offer teaching moments for children.

2. Parents, don't be an inadvertent bully. If a parent constantly says things that make a child feel bad about themselves, this is a form of bullying. You may hear yourself saying, "I know you can get better grades." But the child may be hearing only, "I'm stupid and won't ever be able to please anyone." Listen to what you say to your child, and make sure you aren't behaving in a way you would not accept from others.

3. Discuss what actions can be considered bullying. Help your child see that bullying can be words, actions, ignoring someone, or giggling and pointing. Discuss ways to respond positively to each instance.

4. Welcome your child's friends into your home. If any friend seems to have an unusual amount of power over your child, you may need to help your child see that this person is not a true friend.

5. Stop sibling bullying. If one child seems to have dominance over another, sit down immediately and let them know that this behavior will NOT be tolerated. Make sure to follow through and discipline the bully. Also make sure the child being bullied feels safe in coming to you.

6. Discipline your children appropriately if you see them doing or saying (or texting) something that you don't consider kind. That way others (teachers, other parents or day care workers) don't have to become the disciplinarian.

7. Help your child think of ways to react to bullying. For instance, if they are being teased about wearing glasses, perhaps there is a phrase they can use to make the other person think twice about making such comments again. Taking steps to change things, or practicing ways to react to mean comments, will make a child feel ready to stand up for themselves.

8. Understand cyber-bullying. The internet is one of the newest arenas where a child can feel helpless against what is being said, or shown in pictures, about them. Make sure to monitor screen time in a way that feels protective but not intrusive. The more conversations you have with your kids about what occurs online, the more likely they will be able to talk to you about what's going on.

9. Learn the latest lingo. This includes verbal, texting and online slang. Did you know that CD9 means parents are around, and that 99 means parents have left? Your child may be hiding something.

10. Remember the Golden Rule. "Do unto others as you would have them do unto you" is great advice. A friend's child was having trouble on the school bus with one boy. The mom suggested that this child might not know the right way to be a friend. So the child being bullied went out of his way to be extra nice to the bully.
Once the bully realized there was a different way to act, the two children became real friends. –Courtesy of Thomas Weck, a national award-winning author of children’s books, including the popular Lima Bear Stories Series (limabearpress.com).
The purpose of the sediment filtration method is to remove suspended particulate matter or colloidal material from the water source. If this material is not removed, it can damage the precision filtration membranes used downstream for dialysis water, or even block the waterway. This is the oldest and simplest method of water purification, so it is commonly used in the preliminary treatment stage; if necessary, several filters are added along the pipeline to remove large impurities.

There are many types of filters used to trap suspended particulate matter, such as mesh filters, sand filters (e.g., quartz sand) or membrane filters. Any particle larger than the filter's pores will be blocked, but ions dissolved in the water cannot be stopped. If the filter is not replaced or cleaned for too long, the particulate matter accumulated on it increases, and the water flow and water pressure gradually decrease. Operators therefore use the difference between the inlet water pressure and the outlet water pressure to judge the degree of filter blockage. The filter should be periodically backwashed to remove the accumulated impurities, and it should be replaced at fixed intervals.

The sediment filtration method has another problem worth noting: because particulate matter is continuously trapped and accumulates, bacteria can breed in the deposits and release toxic substances through the filter, causing a pyrogen reaction. This is another reason the filter must be replaced frequently.

Principle: when the pressure drop between the inlet water and the outlet water increases to five times its initial value, the filter needs to be replaced. A minimal sketch of this rule appears below.
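To make the replacement rule concrete, here is a minimal sketch in Python. It is illustrative only: the baseline pressure drop, the sensor readings and the function name are hypothetical assumptions; only the five-times threshold comes from the rule above.

```python
# Illustrative sketch of the differential-pressure replacement rule.
# BASELINE_DROP_KPA is an assumed clean-filter pressure drop; a real system
# would measure this value when the filter is first installed.

BASELINE_DROP_KPA = 20.0   # assumed pressure drop across a clean filter (kPa)
REPLACE_RATIO = 5.0        # replace once the drop reaches 5x the baseline

def needs_replacement(inlet_kpa: float, outlet_kpa: float) -> bool:
    """Return True when the inlet/outlet differential indicates a clogged filter."""
    pressure_drop = inlet_kpa - outlet_kpa
    return pressure_drop >= REPLACE_RATIO * BASELINE_DROP_KPA

# Example readings (hypothetical): a clean filter drops about 20 kPa;
# once the drop reaches 100 kPa (5 x 20), the filter is due for replacement.
print(needs_replacement(320.0, 300.0))  # False: drop is 20 kPa
print(needs_replacement(350.0, 248.0))  # True: drop is 102 kPa
```

In practice the baseline would be recorded per filter rather than hard-coded, since clean-filter pressure drop varies with filter type and flow rate.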
Ginkgo biloba, the ginkgo tree, can be one of our most spectacular trees for golden autumn color. The long-stalked, fan-shaped, simple leaves are 2-3 inches long and wide, and occur in clusters of 3-5 on short spurs along the tree's branches. There is sometimes a notch along the broad summit of some of its leaves, creating a butterfly shape, hence the epithet "biloba." In late autumn there is a tendency for most of a tree's leaves to curiously all fall off during a one-to-five-day period of time, hence the Nemerov poem.

The Ginkgo has been termed a "living fossil" and an "emblem of changelessness". It is one of the oldest surviving tree taxa on earth, with fossils of related species found that date back at least 270 million years. This heritage reaches back to a time before dinosaurs thrived and roamed the earth. Today, Ginkgo biloba is termed monotypic, meaning it is the only species in its genus; Ginkgo, in turn, is the only genus in the family GINKGOACEAE.

Ginkgo biloba is native to eastern China, and during the Han Dynasty (206 BC – AD 220) it was known as a holy tree. In China it has been the subject of poems and paintings since the 11th century. The tree was introduced into Japan from the Yangtze River delta region, and it was in Japan, in 1690, that the German naturalist-explorer Engelbert Kaempfer (1651-1716) first observed it in a Japanese temple garden and gave the plant the name Ginkgo. The first live trees arrived in Europe at the Utrecht Botanic Garden, Netherlands, about 1730. The species first came to the United States from London in 1785, to the Philadelphia area. Today, at Bartram's Garden in Philadelphia, one surviving tree planted in 1785 is commonly recognized as the oldest Ginkgo in the U.S. In China there are individual trees known to be more than 1,000 years old, and some have speculated that Ginkgo may live as long as 2,000 years.

Ginkgo biloba is dioecious, meaning there are separate male and female trees; its inconspicuous flowers occur in the spring. The fruit, found only on female trees, is not a true "fruit" but a seed with a fleshy outer layer. This fleshy layer is orange-colored when ripe and is the source of the infamous foul odor. Inside, a woody, nut-like structure contains a soft, kernel-like seed. For centuries these seeds have been considered to have medicinal value; they are still marketed on a large scale and are an important Chinese export crop. Extracts from Ginkgo leaves also have several medicinal uses: they are used to increase vasodilation and peripheral blood-flow rates, and are considered effective in the treatment of arthritis, tinnitus, and some eye conditions.

The reputation of Ginkgo biloba for long survivability is enhanced by a famous specimen still growing at the site of the 1945 atomic bombing of Hiroshima, Japan. One tree located 800 yards from the epicenter had its trunk destroyed, but it sprouted from its base and still grows there today.

We have more than two dozen Ginkgo biloba growing at Mount Auburn! On your next autumn visit look for these living fossils on Garden Avenue, Halcyon Avenue, Magnolia Avenue, Spruce Avenue, Walnut Avenue, Pearl Avenue, Cherry Avenue, Western Avenue, Bradlee Road, Field Road, Narcissus Path, Indian Ridge Path, Robin Path, Mist Path, Arethusa Path, Sparrow Path, Aralia Path, and Ilex Path.

*This Horticulture Highlight was originally published in the November 2011 issue of the Friends of Mount Auburn electronic newsletter.
Plasma is the fourth state of matter, after solids, liquids, and gases, and has been used for disinfection and sterilization in wound care and for skin diseases, such as MRSA (methicillin-resistant Staphylococcus aureus) infections. A new study, "Effects of Cold Atmospheric Plasmas on Adenoviruses in Solution," published November 30, 2011 in the Journal of Physics D: Applied Physics, focuses on the effects of cold atmospheric plasma (CAP) on the adenovirus.

What is the Adenovirus?

Adenoviruses are a group of viruses that cause respiratory and intestinal illnesses. These illnesses are generally mild, but are highly contagious. Adenovirus illnesses include the common cold, croup, bronchitis, pneumonia, conjunctivitis, and intestinal tract illnesses. You can become infected with the adenovirus when a sick person coughs or sneezes and the germs land on you, or on a surface that you touch.

One of the challenges that hospitals face is the inactivation of viruses. The adenovirus is one of the most difficult viruses to kill because it is physically stable, can tolerate moderate increases in temperature, and is relatively resistant to changes in pH. To disinfect, hospitals use either autoclaving or chlorine bleach. Scientists at the Max-Planck-Institut für extraterrestrische Physik and the Technische Universität München in Germany chose the adenovirus to find out whether cold atmospheric plasmas could inactivate this difficult virus. When the adenovirus was exposed to the CAP for 240 seconds, only one in a million viruses survived (a six-log reduction, i.e. 99.9999% of the viruses were inactivated).

Decoded Science asked Dr. Julia Zimmerman, one of the authors of the study, how CAP worked to inactivate the adenovirus. "Unfortunately the exact mechanisms are not known yet and still researched. Looking at a few single components produced by the plasma (Ozone, UV, etc.), it is clear that the produced amounts of these components alone would not be sufficient to achieve the inactivation rates we achieved. The production of an air plasma leads to approximately 600 chemical reactions. It seems to be the mix – "the plasma cocktail" – which inactivates the viruses so efficiently."

Further research is needed on the specific mechanism by which the adenovirus is inactivated; however, it appears to be similar to the way the human immune system reacts to a viral attack.

How Does the CAP Device Work?

The plasma cocktail is apparently very effective at disinfecting against adenoviruses, but how easy is it to implement? Dr. Zimmerman explained how the CAP generating device works: "The plasmas we create are cold plasmas (at approximately a few degrees above room temperature) under atmospheric pressure. For the plasma device used in the study with adenoviruses we used the surrounding air as the gas which we partly ionize. The plasma is created by many microdischarges. For this we use a flat sheet (made out of copper for example), an insulating sheet (Teflon for example) and a mesh grid. These three parts are sandwiched together. By applying high voltage to the flat copper sheet, microdischarges are produced between the mesh grid, which partly ionize the surrounding air. Our plasmas therefore consist of electrons, ions, atoms, radicals, reactive species (mainly reactive oxygen and nitrogen species as we use air), a little bit of UV light (far below the ICNIRP limits) and a little bit of heat.
All in all, our aim is to generate "safe plasmas" with regards to current through skin, UV production, toxic gas emission, etc."

Adenovirus Prevention With Plasma

The CAP generating device could be used in hospitals to disinfect hands and equipment. In the future, patients may even be able to inhale the plasma to treat lung infections. Plasma may also be applied to blood before a transfusion to kill any infections in the blood. To make this new technology available in hospitals, a few more steps will be needed, explains Dr. Zimmerman. "In research: The next steps are to further analyze the effect of CAPs on microorganisms and to analyze the killing mechanisms in detail. Furthermore we will improve the plasma diagnosis and modeling. Concerning applications: Prof. Morfill (Director of the Max-Planck Institute for extraterrestrial physics) and his co-workers founded the company terraplasma GmbH. At the moment we are in negotiations with different industry companies and already signed two contracts for proof of principle phases. The aim is that terraplasma GmbH develops CAPs devices for specific purposes together with the interested company."

Dr. Zimmerman also explained how CAP-based treatment will further the advancement of hygiene practices and treatment options for patients: "As the produced plasma is cold and works against bacteria, viruses, fungi and spores, it could be used for several applications in hygiene (professional and personal) and medicine.
- Sterilization of heat-sensitive materials (medical equipment and even decontamination of satellites, space vehicles, etc.)
- Sterilization of surfaces: we developed a device which could serve as a self-sterilizing surface and therefore sterilize itself.
- Hand disinfection
- Disinfection of food
- Reduction of bacteria in chronic wounds: we are running a phase II study in two clinics at the moment where cold plasma is applied to chronic wounds in patients to improve wound healing.
- Disinfection of operational wounds
- Treatment of all kinds of skin diseases with bacteria, viruses or fungi as an origin."

Benefits of Cold Atmospheric Plasma for Disinfection

This new research has found a way to kill one of the most difficult viruses out there. The advance provides hope for treating other diseases, reducing infection, and possibly even preventing epidemics. CAP opens a new world for scientists, doctors, medical staff, and patient treatment protocols.

Centers for Disease Control and Prevention. Adenoviruses. Accessed December 7, 2011.
The Institute of Physics. Plasma-based treatment goes viral. (2011). Accessed December 7, 2011.
Zimmermann, J., et al. Effects of cold atmospheric plasmas on adenovirus in solution. (2011). Journal of Physics D: Applied Physics. 44, 505201. Accessed December 7, 2011.

© Copyright 2011 Janelle Vaesa, MPH, All Rights Reserved. Written for Decoded Science.