HIST 101 - US History: Colonial America through the Revolutionary Era
Credit Hours: 3

Colonial America through the Revolutionary Era surveys the political, social, economic, cultural and ideological characteristics of the 17th and 18th centuries, from the earliest settlements through the establishment of the early American republic. Students are introduced to the techniques and strategies of historians through the use of historical texts, both primary and secondary, as well as the procedures of historical writing. Attention is given to multiple American cultures and their prevalent values and institutions; the explanations for change in such values and institutions; and relationships within the American colonies and the early United States, both among the cultural groups comprising the national population and with those of Europe and Africa.
For children, playing is one of the most important aspects of their lives. Playing is central to children's physical, mental, social and emotional health and wellbeing. Through play, children develop resilience and flexibility, contributing to physical and emotional wellbeing. Playing is important to all children, no matter what their impairments or behaviour. Children also have a right to play under the United Nations Convention on the Rights of the Child (UNCRC). The Convention is a list of all the rights that children and young people everywhere in the world have. As adults, we have a responsibility to provide time, space and freedom for children to play. The summer holiday is a great opportunity to provide outdoor play opportunities for our children.

Helpful tips for parents to support children's play
This resource includes information about the physical and mental benefits of playing, tips for playful parenting, and ways of addressing parental concerns about playing outdoors.

Top tips – make time for play
We are advocating a low-cost approach to making the most of children's free time – give them time to play. Playing with friends brings a whole host of positive benefits to children – so do we really need to break the bank to fill their lives with other activities? Children say they want more time and good places to play outside with their friends.

Tips for supporting children to play out confidently
We all have a responsibility to support and prepare our children to play out confidently in their community. These top tips may help parents, carers and local communities to encourage children to play out confidently.

Top tips – screen time and digital play
Many of us struggle to find a solution to the challenges around screen time and how to support children and young people to access and interact with it in a way that is beneficial and balanced. As adults, we have an important role to play in supporting children in a digitalised world. We developed these top tips to support a balanced approach to screen time and digital play.

Why playing matters and what we can all do about it – explores the importance of playing outside and contact with nature for all children and their families, and provides tips for supporting children to play out confidently.

Play and early years: birth to seven years – explores what play is and its importance to and for children's development in the early years (birth to seven years old). It also explores the importance of adult roles, advocacy and the child's right to play.

Promoting physical activity through outdoor play in early years settings – explores how playing contributes to children's physical activity levels and how early years practitioners can provide permission, time and space, as well as making materials available, for children to play outdoors. It also provides practical advice on thinking sensibly about health and safety.

Play: health and wellbeing – provides information on why playing is crucial to children's health and wellbeing and explores ways to respond to children's need for more time and space for free play.

Play: mental health and wellbeing – briefly explains the importance of playing for brain development and mental health, as well as exploring how playing contributes to children's emotional wellbeing.

What's happening in your area
There are places and opportunities to play across north Wales over the summer holidays – find out what's happening in your area.

Playday is the annual UK celebration of children's right to play.
We are calling on you to join us to celebrate that:
- Playing is free
- Playing is a child's right
- Playing is inclusive
- Playing supports physical and emotional health and wellbeing
- Playing supports respect and appreciation of the natural environment
- Playing promotes development and learning.

Throughout Wales and the rest of the UK, thousands of children and their families will go out to play at locally organised events – from small-scale garden parties to large-scale events in parks and town centres.

Right to play
These resources support children to learn about their rights as part of the United Nations Convention on the Rights of the Child (UNCRC). Article 31 of the Convention says that all children have the right to play. The resources can be printed and shared with children and their families.
- Right to play postcard
- Right to play – for colouring in
- Right to play A4 poster
- Right to play A3 poster
Santoor Music Instrument

History and facts about the santoor music instrument. The santoor is a very old melodic string instrument that originates from Jammu and Kashmir, India. A prehistoric ancestor of this kind of instrument was discovered in Mesopotamia, dating to the period 1600 to 911 BC. The santoor is a hammered dulcimer, designed in the shape of a trapezium, often made from walnut wood, with 72 strings. The uniquely shaped mallets are lightweight and are held between the index finger and the middle finger. A typical santoor has two sets of bridges, offering a range of three octaves. There are santoors designed in the shape of a rectangle that can have more strings than the Persian equivalent, which usually has 72 strings.

History of the Santoor Music Instrument
The santoor is an extremely ancient musical instrument of India. Nowadays, when people say Veena, it denotes a particular instrument, but during ancient times Veena was a general word for different types of string instruments. The original string instrument was known as the Pinaki Veena. The idea for this instrument came from the bow and arrow: when an arrow was released, it made a sound. From that idea, someone shaped a musical instrument and called it the Pinaki Veena. Pinak means a bow in the Sanskrit language. In Western nations this instrument is known as the harp, and in India a small form of the equivalent instrument, called the "Swarmandal", is used by several current vocalists while singing. After the Pinaki Veena, different types of Veenas emerged in ancient India, such as the Katyayani Veena, Baan Veena, Rudra Veena, Tumbru Veena, Saraswati Veena, and Shata-tantri Veena. In ancient Sanskrit manuscripts, the santoor is mentioned as the Shatatantri Veena, which means a Veena with 100 strings. The santoor was employed as an accompanying instrument in the folk music of Jammu and Kashmir in India. The instrument is played in a unique style called Sufiana Mausiqi. The Sufi mystics used it as an accompaniment to their songs.

Construction of the Santoor instrument
The trapezoid frame of the santoor is usually made from either maple or walnut wood. Sometimes, the bottom and top boards of the instrument can be either veneer or plywood. On the top board, wooden bridges are positioned so as to sit extended across the metal strings. The strings, grouped in units of three or four, are attached to pins or nails on the left-hand side of the santoor and are stretched over the sound board, across the bridges, to the instrument's right side. On this side there are steel tuning pins or pegs, which allow each string unit to be adjusted to a preferred musical note, pitch or frequency.

Method of playing the instrument
The santoor is played while sitting in an asana known as the Ardha-Padmasana pose, keeping the instrument on the lap. When playing, the wider side is placed nearer to the waist of the performer and the smaller side away from the performer. The instrument is played by means of a pair of light wooden mallets or hammers, one held in each hand. The santoor is an extremely delicate instrument and is extremely sensitive to light glides and strokes. The strokes are always played on the strings either nearer to the bridges or a bit away from them; the two styles produce different tones.
Occasionally, strokes played with one hand can be muted with the palm of the other hand, just to produce variety.
A tropical cyclone is a circular air movement that starts over the warm ocean waters in the warm part of Earth near the Equator. Most tropical cyclones create fast winds and great rains. While some tropical cyclones stay out in the sea, others pass over land. They can be dangerous because of flooding and because the winds pick up objects, including things as big as small boats. Tropical cyclones can throw these things at high speeds.

Tropical cyclones, hurricanes or typhoons form when convection causes warm, moist air above the ocean to rise. They begin as a group of storms when the water gets as hot as 80 °F (27 °C) or hotter. The Coriolis effect caused by the Earth's rotation makes the winds rotate. Warm air rises quickly. Tropical cyclones usually move westward in the tropics, and can later move north or south into the temperate zone. The "eye of the storm" is the center. It has little rain or wind. The eye wall has the heaviest rain and the fastest winds. It is surrounded by rain bands which also have fast winds. Tropical cyclones are powered by warm, humid ocean air. When they go onto land, they weaken. They die when they spend a long time over land or cool ocean water.

Tropical cyclone, typhoon or hurricane
The term "tropical cyclone" is a summary term. In various places tropical cyclones have other local names such as "hurricane" and "typhoon". A tropical cyclone that forms in the Atlantic Ocean is called a hurricane. The word hurricane is also used for those that form in the eastern, central and northern Pacific. In the western Pacific a tropical cyclone is called a typhoon. In the Indian Ocean it is called a "cyclone".

Naming
Tropical cyclones are usually given names because this helps in forecasting, locating, and reporting. They are named once they have steady winds of 62 km/h. Committees of the World Meteorological Organization pick the names. Once named, a cyclone is usually not renamed. For several hundred years hurricanes were named after saints. In 1887, Australian meteorologist Clement Wragge began giving women's names to tropical cyclones, drawing on history and mythology. When he used men's names, they were usually those of politicians he hated. By World War II cyclone names were based on the phonetic alphabet (Able, Baker, Charlie). In 1953 the United States stopped using phonetic names and began using female names for these storms. This ended in 1978, when both male and female names began to be used for Pacific storms. In 1979 this practice was extended to hurricanes in the Gulf of Mexico and the Atlantic.

Impact
In the past these storms sank many ships. Better weather forecasting in the 20th century helped most ships avoid them. When tropical cyclones reach land, they may break things. Sometimes they kill people and destroy cities. In the last 200 years, about 1.5 million people have been killed by tropical cyclones. Wind can cause up to 83% of the total damage of a storm. Broken wreckage from destroyed objects can become deadly flying pieces. Flooding can also occur when a lot of rain falls and/or when storm surges push water onto the land.
Classifications

| Category | Wind speed (m/s) | Wind speed (knots) | Wind speed (mph) | Wind speed (km/h) |
|---|---|---|---|---|
| Five | ≥70 | ≥137 | ≥157 | ≥252 |
| Four | 58–70 | 113–136 | 130–156 | 209–251 |
| Three | 50–58 | 96–112 | 111–129 | 178–208 |
| Two | 43–49 | 83–95 | 96–110 | 154–177 |
| One | 33–42 | 64–82 | 74–95 | 119–153 |
| Tropical storm | 18–32 | 34–63 | 39–73 | 63–118 |
| Tropical depression | ≤17 | ≤33 | ≤38 | ≤62 |

Tropical cyclones are classified into different categories by their strength and location. The National Hurricane Center, which observes hurricanes in the Atlantic Ocean and Eastern and Central Pacific Ocean, classifies them using the Saffir-Simpson Hurricane Scale. Tropical cyclones in other places, such as the Western Pacific Ocean or the Southern Hemisphere, are classified on scales that are very similar to the Saffir-Simpson Scale. For example, if a tropical storm in the western Pacific reaches hurricane-strength winds, it is then officially called a typhoon.

A tropical depression is an organized group of clouds and thunderstorms with a clear circulation in air near the ocean and maximum continuing winds of less than 17 m/s (33 kt, 38 mph, or 62 km/h). It has no eye and does not usually have the spiral shape that more powerful storms have. Only the Philippines are known to name tropical depressions.

A tropical storm is an organized system of strong thunderstorms with a very clear surface circulation and continuing winds between 17 and 32 m/s (34–63 kt, 39–73 mph, or 63–118 km/h). At this point, the cyclonic shape starts to form, although an eye does not usually appear in tropical storms. Most tropical cyclone agencies start naming cyclonic storms at this level, except for the Philippines, which have their own way of naming cyclones.

A hurricane, typhoon or cyclone is a large cyclonic weather system with continuing winds of at least 33 m/s (64 kt, 74 mph, or 119 km/h). A tropical cyclone with this wind speed usually develops an eye, an area of calm conditions at the center of its circulation. The eye is often seen from space as a small, round, cloud-free spot. Around the eye is the eye wall, an area where the strongest thunderstorms and winds spin around the storm's center. The fastest possible continuing wind speed found in tropical cyclones is thought to be around 85 m/s (165 kt, 190 mph, 305 km/h).
"Epidemiology of Tropical Cyclones: The Dynamics of Disaster, Disease, and Development". Oxford Journal. Retrieved 2007-02-24. - Staff Writer (2005-08-30). "Hurricane Katrina Situation Report #11" (PDF). Office of Electricity Delivery and Energy Reliability (OE) United States Department of Energy. Retrieved 2007-02-24. |Cyclones and Tropical cyclones of the World| |Cyclone - Tropical - Extratropical - Subtropical - Mesocyclone - Polar cyclone - Polar low|
Peregrinus (Latin: [pærɛˈɡriːnʊs]) was the term used during the early Roman empire, from 30 BC to AD 212, to denote a free provincial subject of the Empire who was not a Roman citizen. Peregrini constituted the vast majority of the Empire's inhabitants in the 1st and 2nd centuries AD. In AD 212, all free inhabitants of the Empire were granted citizenship by the constitutio Antoniniana, with the exception of the dediticii, people who had become subject to Rome through surrender in war, and freed slaves.

The Latin peregrinus "foreigner, one from abroad" is related to the Latin adverb peregre "abroad", composed of per- "through" and an assimilated form of ager "field, country", i.e. "over the lands"; the -e ([eː]) is an adverbial suffix.

During the Roman Republic, the term peregrinus simply denoted any person who did not hold Roman citizenship, full or partial, whether that person was under Roman rule or not. Technically, this remained the case during the Imperial era. But in practice the term became limited to subjects of the Empire, with inhabitants of regions outside the Empire's borders denoted barbari (barbarians).

In the 1st and 2nd centuries, the vast majority (80–90%) of the empire's inhabitants were peregrini. By 49 BC, all Italians were Roman citizens.[Note 1] Outside Italy, those provinces with the most intensive Roman colonisation over the approximately two centuries of Roman rule probably had a Roman citizen majority by the end of Augustus' reign: Gallia Narbonensis (southern France), Hispania Baetica (Andalusia, Spain) and Africa proconsularis (Tunisia). This could explain the closer similarity of the lexicon of the Iberian, Italian and Occitan languages as compared to French and other oïl languages. In frontier provinces, the proportion of citizens would have been far smaller. For example, one estimate puts Roman citizens in Britain c. AD 100 at about 50,000, less than 3% of the total provincial population of c. 1.7 million. In the empire as a whole, we know there were just over 6 million Roman citizens in AD 47, the last quinquennial Roman census return extant. This was just 9% of a total imperial population generally estimated at c. 70 million at that time.[Note 2]

Peregrini were accorded only the basic rights of the ius gentium ("law of peoples"), a sort of international law derived from the commercial law developed by Greek city-states, that was used by the Romans to regulate relations between citizens and non-citizens. But the ius gentium did not confer many of the rights and protections of the ius civile ("law of citizens", i.e. what we call Roman law).

In the sphere of criminal law, there was no law to prevent the torture of peregrini during official interrogations. Peregrini were subject to de plano (summary) justice, including execution, at the discretion of the legatus Augusti (provincial governor). In theory at least, Roman citizens could not be tortured and could insist on being tried by a full hearing of the governor's assize court, i.e. a court held in rotation at different locations. This would involve the governor acting as judge, advised by a consilium ("council") of senior officials, as well as the right of the defendant to employ legal counsel.
Roman citizens also enjoyed the important safeguard, against possible malpractice by the governor, of the right to appeal a criminal sentence, especially a death sentence, directly to the emperor himself.[Note 3]

As regards civil law, with the exception of capital crimes, peregrini were subject to the customary laws and courts of their civitas (an administrative circumscription, similar to a county, based on the pre-Roman tribal territories). Cases involving Roman citizens, on the other hand, were adjudicated by the governor's assize court, according to the elaborate rules of Roman civil law. This gave citizens a substantial advantage in disputes with peregrini, especially over land, as Roman law would always prevail over local customary law if there was a conflict. Furthermore, the governor's verdicts were often swayed by the social status of the parties (and often by bribery) rather than by jurisprudence.

In the fiscal sphere, peregrini were subject to direct taxes (tributum): they were obliged to pay an annual poll tax (tributum capitis), an important source of imperial revenue. Roman citizens were exempt from the poll tax. As would be expected in an agricultural economy, by far the most important revenue source was the tax on land (tributum soli), payable on most provincial land. Again, land in Italy was exempt, as was, probably, land owned by Roman colonies (coloniae) outside Italy.

In the military sphere, peregrini were excluded from service in the legions, and could only enlist in the less prestigious auxiliary regiments; at the end of an auxiliary's service (a 25-year term), he and his children were granted citizenship. In the social sphere, peregrini did not possess the right of connubium ("inter-marriage"): i.e. they could not legally marry a Roman citizen; thus any children from a mixed union were illegitimate and could not inherit citizenship (or property). In addition, peregrini could not, unless they were auxiliary servicemen, designate heirs under Roman law. On their death, therefore, they were legally intestate and their assets became the property of the state.

Each province of the empire was divided into three types of local authority: coloniae (Roman colonies, founded by retired legionary veterans), municipia (cities with "Latin Rights", a sort of half-citizenship) and civitates peregrinae, the local authorities of the peregrini. Civitates peregrinae were based on the territories of pre-Roman city-states (in the Mediterranean) or indigenous tribes (in the northwestern European and Danubian provinces), minus lands confiscated by the Romans after the conquest of the province to provide land for legionary veterans or to become imperial estates. These civitates were grouped into three categories, according to their status: civitates foederatae, civitates liberae, and civitates stipendiariae. Although the provincial governor had absolute power to intervene in civitas affairs, in practice civitates were largely autonomous, in part because the governor operated with a minimal bureaucracy and simply did not have the resources for detailed micro-management of the civitates. Provided that the civitates collected and delivered their assessed annual tributum (poll and land taxes) and carried out required services, such as maintaining the trunk Roman roads that crossed their territory, they were largely left to run their own affairs by the central provincial administration.
The civitates peregrinae were often ruled by the descendants of the aristocracies that had dominated them when they were independent entities in the pre-conquest era, although many of these may have suffered a severe diminution of their lands during the invasion period. These elites would dominate the civitas council and executive magistracies, which would be based on traditional institutions. They would decide disputes according to tribal customary law. If the chief town of a civitas was granted municipium status, the elected leaders of the civitas, and, later, the entire council (as many as 100 men), were automatically granted citizenship. The Romans counted on the native elites to keep their civitates orderly and submissive. They ensured the loyalty of those elites by substantial favours: grants of land, citizenship and even enrollment in the highest class in Roman society, the senatorial order, for those who met the property threshold. These privileges would further entrench the wealth and power of native aristocracies, at the expense of the mass of their fellow peregrini.

The Roman Empire was overwhelmingly an agricultural economy: over 80% of the population lived and worked on the land. Therefore, rights over land use and produce were the most important determinant of wealth. Roman conquest and rule probably led to a major downgrading of the economic position of the average peregrinus peasant, to the advantage of the Roman state, Roman landowners and loyal native elites. The Roman Empire was a society with enormous disparities in wealth, with the senatorial order owning a significant proportion of all land in the empire in the form of vast latifundia ("large estates"), often in several provinces; for example, Pliny the Younger states in one of his letters that at the time of Nero (r. 54–68), half of all land in Africa proconsularis (Tunisia) was owned by just six private landlords. Indeed, the senatorial order, which was hereditary, was itself partly defined by wealth, as any outsider wishing to join it had to meet a very high property qualification (250,000 denarii).

Under Roman law, lands formerly belonging to an unconditionally surrendering people (dediticii) became the property of the Roman state. A proportion of such land would be assigned to Roman colonists. Some would be sold off to big Roman landowners in order to raise money for the imperial treasury. Some would be retained as ager publicus (state-owned land), which in practice was managed as imperial estates. The rest would be returned to the civitas that originally owned it, but not necessarily returned to its previous ownership structure. Much land may have been confiscated from members of those native elites who opposed the Roman invaders, and, conversely, granted to those who supported them. The latter may also have been granted land that may once have been communal.

The proportion of land in each province confiscated by the Romans after conquest is unknown. But there are a few clues. Egypt is by far the best-documented province, due to the survival of papyri in the dry conditions. There, it appears that probably a third of the land was ager publicus. From the evidence available one can conclude that, between imperial estates, land assigned to coloniae, and land sold to Roman private landowners, a province's peregrini may have lost ownership of over half their land as a result of the Roman conquest. Roman colonists would routinely help themselves to the best land.
Little is known about the pattern of land ownership before the Roman conquest, but there is no doubt that it changed radically afterwards. In particular, many free peasants who had farmed the same plots for generations (i.e. were owners under tribal customary law) would have found themselves reduced to tenants, obliged to pay rent to absentee Roman landlords or to the agents of the procurator, the chief financial officer of the province, if their land was now part of an imperial estate. Even where their new landlord was a local tribal aristocrat, the free peasant may have been worse off, obliged to pay rent for land which he might previously have farmed for free, or to pay fees to graze his herds on pastures which might previously have been communal.

The proportion of Roman citizens would have grown steadily over time. Emperors occasionally granted citizenship en bloc to entire cities, tribes or provinces (e.g. emperor Otho's grant to the Lingones civitas in Gaul in AD 69) or to whole auxiliary regiments for exceptional service. Peregrini could also acquire citizenship individually, either through service in the auxilia for the minimum 25-year term, or by special grant of the emperor for merit or status. The key person in the grant of citizenship to individuals was the provincial governor: although citizenship awards could only be made by the emperor, the latter would generally act on the recommendation of his governors, as is clear from the letters of Pliny the Younger. As governor of Bithynia, Pliny successfully lobbied his boss, the emperor Trajan (r. 98–117), to grant citizenship to a number of provincials who were Pliny's friends or assistants. In addition, bribery of governors, or other high officials, was undoubtedly a much-used route for wealthy peregrini to gain citizenship. This was the case with the commander of the Roman auxiliaries who arrested St Paul the Apostle in AD 60. He confessed to Paul: "I became a Roman citizen by paying a large amount of money." Inhabitants of cities that were granted municipium status (as were many capital cities of civitates peregrinae) acquired Latin rights, which included connubium, the right to marry a Roman citizen. The children of such a union would inherit citizenship, provided it was the father who held citizenship.

Constitutio Antoniniana (AD 212)
In AD 212, the constitutio Antoniniana (Antonine decree) issued by Emperor Caracalla (ruled 211–217) granted Roman citizenship to all free subjects of the Empire, with the exception of the dediticii, people who had become subject to Rome through surrender in war, and freed slaves. The contemporary historian Dio Cassius ascribes a financial motive to Caracalla's decision. He suggests that Caracalla wanted to make the peregrini subject to two indirect taxes that applied only to Roman citizens: the 5% levies on inheritances and on the manumission of slaves (both of which Caracalla increased to 10% for good measure). But these taxes would probably have been outweighed by the loss of the annual poll tax previously paid by the peregrini, from which as Roman citizens they would now be exempt. It seems unlikely that the imperial government could have foregone this revenue: it is therefore almost certain that the Antonine decree was accompanied by a further decree ending Roman citizens' exemption from direct taxes. In any case, citizens were certainly paying the poll tax in the time of Emperor Diocletian (r. 284–305).
In this way the Antonine decree would indeed have greatly increased the imperial tax base, primarily by obliging Roman citizens (by then perhaps 20–30% of the population) to pay direct taxes: the poll tax and, in the case of owners of Italian land and Roman coloniae, the land tax.

- Note 1: The inhabitants of Italy south of the Arno-Rubicon line (i.e. excluding modern northern Italy, then known as Cisalpine Gaul and not considered part of Italy proper) were granted Roman citizenship after the Social War of 91–88 BC. The inhabitants of Cisalpine Gaul were granted Roman citizenship by a decree of the Roman dictator-for-life Julius Caesar in 49 BC and incorporated into Italy under the Second Triumvirate in 43/42 BC.
- Note 2: This percentage calculation is based on the assumption that the 6 million figure includes the women and children of Roman citizens. Unfortunately, this is not certain: technically, only adult (i.e. over 14 years of age) males were citizens. However, there is a tenfold increase in citizens between the censuses of 114 BC and of 28 BC: this is regarded by demographers as an implausible progression, leading to the suggestion that the basis of recording was changed in the interval, with registration of the women and children of Roman citizens in the later census. If, on the other hand, the 6 million refers only to adult males, then the total citizen community including women and children would have been 15–20 million, i.e. 20–30% of the total population. However, this is highly unlikely, as it would imply that the population density of Italy was far higher than in other provinces. This may have been true if Italy is compared with the Rhine/Danube frontier provinces, but was almost certainly not true compared with the Eastern provinces. Then again, some estimates put the total imperial population as high as 100 million at this time, in which case the "high count" of citizens would represent 15–20% of the total population. In conclusion, it is unlikely that citizens exceeded 20% of the total in AD 47, and probable that they were less than 10%.
- Note 3: In theory, every Roman citizen had a right to be tried in Rome by a iudicium publicum, a criminal court with a jury. This was obviously impractical for citizens residing in distant provinces and was replaced by appeal to the emperor. Two examples illustrate the preferential treatment accorded to Roman citizens in criminal matters: (1) St Paul the Apostle, who although Jewish was a Roman citizen by birth. In AD 60 he was rescued by Roman soldiers (clearly auxiliaries) from a Jewish mob at the Temple of Jerusalem that accused him of blasphemy and was on the point of lynching him. Taken to the Roman fort, the commander of the unit ordered him to be interrogated under the lash until he confessed what he had done to upset the Jews. But when Paul declared himself a Roman citizen, the flogging was aborted and his chains removed. What is very revealing is the evident fear of the peregrini soldiers when they realised that they had roughly handled a Roman citizen. He was then sent under escort to the governor of Judaea in Caesarea. Eventually, he was sent to Rome for his case to be heard by the emperor. (2) An incident in c. AD 110 mentioned in a letter to the emperor Trajan by Pliny the Younger, who was governor of Bithynia at the time. A number of provincials, some of whom were Roman citizens, were accused of being Christians. The peregrini accused who refused to recant (by paying homage to the emperor's image) were summarily executed.
Those who were Roman citizens, on the other hand, were sent to Rome for judgement. N.B. As Pliny implies in his letter, until the rule of emperor Septimius Severus (197–211), there was no formal statute making it a crime to belong to the Christian church per se. But it was a capital offence of treason (maiestas) for a peregrinus to refuse to worship the emperor's image, which Christians invariably did because of their belief in one god. Therefore, the Roman authorities regarded membership of the Christian church as treasonous by extension.

- Giessen Papyrus, 40,7-9: "I grant to all the inhabitants of the Empire the Roman citizenship and no one remains outside a civitas, with the exception of the dediticii."
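For convenience, here is the simple arithmetic behind the percentages used in Note 2 above; this check is added here and is not part of the original notes.

```latex
% "Low count": 6 million citizens out of c. 70 million inhabitants.
\[
\frac{6\ \text{million}}{70\ \text{million}} \approx 0.086 \approx 9\%
\]
% "High count": 15--20 million citizens out of 70 million, or out of
% the alternative estimate of 100 million inhabitants.
\[
\frac{15\text{--}20\ \text{million}}{70\ \text{million}} \approx 21\text{--}29\%
\quad\text{(roughly 20--30\%)},
\qquad
\frac{15\text{--}20\ \text{million}}{100\ \text{million}} = 15\text{--}20\% .
\]
```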
A well-designed curriculum provides the foundations for student mental wellbeing. Designing an academic curriculum involves deciding what to teach (and assess) and how best to teach (and assess) your students. In a well-structured curriculum, these core curriculum decisions are made to ensure that:
- There is ‘alignment’ between the curriculum elements – within and across year levels
- Curriculum materials and learning experiences are optimally organised and sequenced
- Planned learning activities promote deep learning and student engagement
- Planned assessment encourages desired behaviours and informs learning

When curriculum elements are aligned, learning is optimally sequenced, and student engagement and progress are fostered, students have a sound foundation for both learning and mental wellbeing. How might the elements of good curriculum design support student mental wellbeing? (You can review these elements in 1.3 Wellbeing essentials)
The issue of slavery divided the nation, leading to the Civil War. The northern half of present-day West Virginia spearheaded the decision to secede from Virginia in 1863 and created the state border based on the geography of the Allegheny Mountains. The southern counties of the newly formed state, including Greenbrier County, wanted to stay with Virginia and the Confederacy. Most white men from Greenbrier County fought for the Confederacy and impressed or hired out their slaves to work in various departments of the Confederate Army. Many enslaved people took advantage of the war and fled to freedom or to support the Union. In 1863, Mary M. Lewis claimed in a letter that almost all of the enslaved African Americans in Greenbrier County had fled.

Myths of the Civil War

Myth: The Civil War was fought over states' rights, not slavery.
FALSE: According to the Vice President of the Confederacy, Alexander H. Stephens, in a speech in 1861, "[the Confederate government] is founded upon… the great truth that the negro is not equal to the white man; that slavery, subordination to the superior race, is his natural and normal condition."

Myth: African American men served as soldiers in the Confederate Army.
FALSE: While African Americans did work in the Confederate Army, they were enslaved and forced into service. They were not soldiers, but rather manservants, cooks, and laborers.

Myth: Enslaved people were happy and benefited from slavery.
FALSE: White southerners created this myth to rationalize and romanticize the pre-Civil War era. It was based on the idea that African Americans were childlike and helpless without white direction. It was widely believed that Africans were savages, but that slaves became more civilized through their exposure to Christianity.

Myth: White southerners who fought for the Confederacy but didn't own slaves were not fighting to protect slavery.
FALSE: White southerners who did not own slaves still participated in the institution of slavery. They acted as slave catchers and slave traders, or rented enslaved people to work on their farms. Slavery was a symbol of status and wealth, so non-slaveholders aspired to own slaves.

Myth: The Union Army welcomed African American soldiers from the start of the Civil War.
FALSE: African American men were not allowed to enlist in the Union Army until 1863. They were paid $3 less than their white counterparts until Congress passed an equal pay bill in 1864. African American soldiers faced much racism and discrimination from their white counterparts. They served in segregated units commanded by white officers and black noncommissioned officers.
What Is a Circle Spoke Diagram

What is a Circle Spoke Diagram? It is a diagram that has a central item surrounded by other items arranged in a circle. Circle-Spoke Diagrams are often used to show the features or components of the central item in marketing and management documents and presentations. ConceptDraw DIAGRAM extended with the Business Diagrams Solution from the Management Area is the best software for creating Circle-Spoke Diagrams.

Example 1. What Is a Circle Spoke Diagram?

What is a Circle Spoke Diagram, and what tools does the ConceptDraw DIAGRAM software offer for drawing one? The Business Diagrams Solution provides extensive drawing tools, a set of predesigned templates and samples, and a Circle-Spoke Diagrams library with a wide variety of ready-to-use shapes for drawing Circle-Spoke Diagrams in seconds.

Example 2. Circle-Spoke Diagrams Library Design Elements

You can design your diagram in a new document using the ready shapes, or choose any Circle-Spoke Diagram sample offered in the ConceptDraw STORE that meets your requirements and change it for your needs.

Example 3. Circle-Spoke Diagram - CASA Exposition

The Circle-Spoke Diagrams you see on this page were created in ConceptDraw DIAGRAM using the Business Diagrams Solution. They illustrate what a Circle Spoke Diagram is and how to create one in ConceptDraw DIAGRAM. An experienced user spent 5 minutes creating each of them. Use the Business Diagrams Solution to design your own professional-looking Circle-Spoke Diagrams of any complexity quickly and easily. All source documents are vector graphic documents. They are available for reviewing, modifying, or converting to a variety of formats (PDF file, MS PowerPoint, MS Visio, and many other graphic formats) from the ConceptDraw STORE. The Business Diagrams Solution is available to all ConceptDraw DIAGRAM users.
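ConceptDraw DIAGRAM is a proprietary tool, but the layout geometry behind a circle-spoke diagram is simple: one node at the centre and N satellite nodes placed at equal angles around it. The following Python/matplotlib sketch is purely illustrative and is unrelated to ConceptDraw's own implementation; the function name, labels and styling are all assumptions made for the example.

```python
# Illustrative sketch (not ConceptDraw code): lay out a circle-spoke
# diagram with matplotlib. Satellite items sit at equal angles on a
# circle around the central item, each joined to it by a spoke line.
import math
import matplotlib.pyplot as plt

def draw_circle_spoke(center_label, spoke_labels, radius=3.0):
    fig, ax = plt.subplots(figsize=(6, 6))
    n = len(spoke_labels)
    for k, label in enumerate(spoke_labels):
        angle = 2 * math.pi * k / n                    # equal angular spacing
        x, y = radius * math.cos(angle), radius * math.sin(angle)
        ax.plot([0, x], [0, y], color="gray", zorder=1)        # spoke line
        ax.add_patch(plt.Circle((x, y), 0.8, color="lightblue", zorder=2))
        ax.text(x, y, label, ha="center", va="center", zorder=3)
    ax.add_patch(plt.Circle((0, 0), 1.0, color="orange", zorder=2))
    ax.text(0, 0, center_label, ha="center", va="center", zorder=3)
    ax.set_xlim(-radius - 1.5, radius + 1.5)
    ax.set_ylim(-radius - 1.5, radius + 1.5)
    ax.set_aspect("equal")
    ax.axis("off")
    plt.show()

draw_circle_spoke("Product", ["Price", "Quality", "Support", "Design", "Brand"])
```

The radius, colors and labels here are arbitrary choices; any drawing library with circles, lines and text would do.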
219. Prepositions were not originally distinguished from adverbs in form or meaning, but have become specialized in use. They developed comparatively late in the history of language. In the early stages of language development the cases alone were sufficient to indicate the sense, but, as the force of the case endings weakened, adverbs were used for greater precision (cf. § 338). These adverbs, from their habitual association with particular cases, became prepositions; but many also retained their independent function as adverbs.

Most prepositions are true case forms, such as the comparative ablatives extrā, īnfrā, suprā (for †exterā, †īnferā, †superā), and the accusatives circum, cōram, cum (cf. § 215). Circiter is an adverbial formation from circum (cf. § 214.b, Note); praeter is the comparative of prae, propter of prope. Of the remainder, versus is a petrified nominative (participle of vertō); adversus is a compound of versus; trāns is probably an old present participle (cf. in-trā-re); while the origin of the brief forms ab, ad, dē, ex, ob, is obscure and doubtful.
Education and outreach

Playing a Pokémon-like card game about ecology and biodiversity can result in broader knowledge of species and a better understanding of ecosystems than traditional teaching methods, like slideshows, according to new research from the University of British Columbia. An open-source project launched in 2010 by UBC biologist David Ng and collaborators, the Phylo Trading Card Game works similarly to Pokémon trading cards, but uses real organisms and natural events instead of imaginary characters. While the Phylo project has proven immensely popular around the world, this is the first study to have tested its efficacy as a teaching and learning tool.

Researchers examined how people who played the game retained information about species and ecosystems, and how it affected their conservation behavior. They compared the results to people who watched an educational slideshow, and to people who played a different game that did not focus on ecosystems.

"Participants who played the Phylo game weren't just remembering iconic species like the blue whale and sea otter, but things like phytoplankton, zooplankton and mycorrhizal fungi," said lead author Megan Callahan, a PhD candidate in the Institute for Resources, Environment and Sustainability. "They would say things like, 'I really needed this card because it was the base of my ecosystem,' or, 'When my partner destroyed my phytoplankton it killed all of my chain of species.' Obviously, the game is sending a strong message that is sticking with them."

Participants in both the Phylo Game group and the slideshow group improved their understanding of ecosystems and species knowledge, but those who played the Phylo Game were able to recall a greater number of species. They were also more motivated to donate the money they received to preventing negative environmental events, such as climate change and oil spills. (Study participants were rewarded with a toonie [$2] or a loonie [$1], and were given options to donate the money toward different causes.)

"The message for teachers is that we need to use all possible ways to engage the public and get them interested in and caring about the issues of species extinctions and ecosystem destructions," said Callahan. "Something as simple as a card game can be adapted to any environment, from classrooms to field-based workshops, in any location. Our study shows that this can be a really beneficial way of learning about species, and their ecosystems and environments."

Researchers used a deck created for the Beaty Biodiversity Museum that focused on British Columbia's ecosystems, but there are many other versions of the Phylo cards circulating around the world. A global community of artists, institutions, scientists and game enthusiasts has created numerous iterations of the game, including decks featuring west coast marine life, dinosaurs, microbes, and even a Women in Science version created by Westcoast Women in Engineering, Science and Technology. "We have 20 to 30 decks and more coming every year," said Ng. "Games have a way of enticing anybody."

All Phylo decks are open-source and can be downloaded for free from the Phylo website. The Beaty deck, used in the study, is also available at the Beaty Biodiversity Museum gift shop. The study, "Using the Phylo Card Game to advance biodiversity conservation in an era of Pokémon", appears in Palgrave Communications.
In this lesson students are introduced to equivalent percents and fractions. This is also the first time they are expected to make their own box diagrams to model fraction and percent problems. The central theme of this lesson is having students share their ideas about how they make sense of the problem and how the math relates to the diagram. Students are asked to explain the thinking of others as well as their own thinking, to encourage them to make connections between the multiple methods and representations.

In the warm-up, Box Diagram to Find a Fraction, students are asked to find 2/5 of 35, 40% of 35, 3/4 of 80, and 7/10 of 60. I provide a partial box diagram for them, which they may or may not choose to use. The first one is separated into parts for them, but the rest are left whole. Even if they choose not to use the box diagram, it still serves as a visual representation, which is helpful in the concept development for finding a fraction or percent of a number. The visual is also really helpful for the ELL students, not just for concept development, but for vocabulary reinforcement and for sharing their ideas visually. It is really important to spend time making connections between the math and the visual model. There are several questions I ask students in order to get them to make these connections (see the warm-up notes). After asking the questions I want the class to listen to the different ways in which they respond, so I have them come up and explain. This is a great way to help them make sense of the problem and improve their fraction sense.

Students do their work on individual white boards, but have access to their math family group. A lot of peer instruction and support takes place during white board practice. For the first one or two I let them solve it however they choose. Before I ask them to raise their boards and show me, I make sure they check in with their math family and have them explain how they solved it. On the count of three everyone holds up their boards at the same time, so that I can give corrective feedback if necessary and no one can opt out.

- 20% of 45
- 25% of 40

After each one I ask who used Angelina's method, with a box diagram, and who used Mariella's method, simplifying and then scaling up to the new total. For the next one (60% of 25) I might ask them all to try Angelina's method with a box diagram. I might ask Angelina to join me as I circulate to help out. Then I would ask students to try Mariella's method on the last one (75% of 28). This really encourages students to see each other as resources and to feel that their input in math class is valuable. They also try harder when they are working on ideas that came from their own peers.
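To make the two student methods concrete, here is a short worked example using one of the white-board problems above (the arithmetic is standard; the step-by-step framing of each method is my reading of the lesson description).

```latex
% "Mariella's method" for 20% of 45: simplify the percent to a
% fraction, then scale up to the new total.
\[
20\% = \frac{20}{100} = \frac{1}{5},
\qquad
\frac{1}{5} \times 45 = 9 .
\]
% "Angelina's method": a box diagram splits the whole, 45, into five
% equal parts of 9 each; one shaded part (20%) is 9, and two parts
% (40%) would be 18.
```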
In this article, we are going to discuss one of the most debated topics: the difference between equality and equity. The difference between equality and equity can be seen in different areas such as health, education, opportunities, sports, etc. Many people mistake equality and equity for one and the same thing, but in reality there is a huge difference between them. Equality means the equal distribution of resources to everyone; equity, on the other hand, means the fair and just distribution of resources to all people, which can look like partiality.

Let us take the example of a classroom full of students. The students are heading to a practical science class. Suddenly, one student exclaims, "That is not fair! Why do they get to go first! I want to go first, too!" You must have noticed this, or felt it yourself, many times. That doesn't mean the teacher is unfair; it means students have been taught that equality is an equal distribution of resources among everyone. Since childhood, people are raised on a mentality of fairness, such as "All of you will get two candies each" or "I will ride the bike for twenty minutes, and then you will ride the bike for twenty minutes." Some people question this idea of "sharing is caring" and its over-simplistic expectations of fairness. Treating everyone in exactly the same way is not fairness: equal treatment can erase differences among people and can entrench privilege. Let us talk about this concept in detail.

1) Everyone is not the same; therefore, fairness and success differ for everyone
Equality and equity are two strategies that can be used in an effort to produce fairness. The meaning of equity is to provide everyone with what they need to become successful. The meaning of equality, in contrast, is to treat everyone in the same way regardless of their differences. The aim behind equality is to promote fairness, but it is effective only when everyone starts at the same level and from the same place and needs the same kind of help. Equity, on the other hand, can look unfair, as it moves everyone towards success by "leveling the playing field". But not everyone begins at the same place, and not everyone has the same requirements.

Take the example of a classroom. It is made up of different learners. There are students in the classroom with different learning styles: auditory, visual, tactile, and so on. It is obvious that auditory and visual learners will process information differently. If all the information is taught by lecturing, then obviously students with auditory strengths will have the advantage. As everyone is different, we should accept each difference as unique, and there is a dire need to redesign basic expectations of success and fairness.

There is always a debate on equality between genders. That does not necessarily mean that everyone should become the same. The aim of equality is not to reach a genderless state. The meaning of equality is that men and women should be presented with the same opportunities despite the differences between them. It is important to recognize all differences as unique, rather than establishing one definition of success. By having one definition of success, we eliminate the differences that we have. But we should not forget that our differences are not difficulties.

2) There is a need to engage in equitable practices
Working to fix systemic obstacles will require more effort and more hard work,
but hard work will always pay off. Differences cannot be blamed; rather, we should look at the systemic obstacles. You can't assess a fish's success by assessing her ability to climb a tree; therefore, it is unfair to use the same question paper to assess everyone's abilities. A teacher should re-examine the classroom system to attain success. The system in the classroom must change: teachers should care about every student, assist them to learn better, recognize their differences, and use those differences to help them become successful in life. "Differentiated instruction" is not only required in the classroom; it also needs to be implemented in real life. It is important to recognize the needs of the individual and to modify our actions to meet those needs.

In the next section, you will learn the definitions of equity and equality and the key differences between them.

Definition of Equality
The term equality can be defined as treating every person in the same way without regard to their requirements and needs. In other words, it is a state of getting similar value, quantity or status. According to equality, each and every person is given the same responsibilities and rights regardless of their individual differences. Equality is preached quite often in democratic societies. Its purpose is to give equal opportunities to everyone and avoid discrimination. Equality can be promoted between rich and poor, men and women, people of different races and colors, and so on. The main idea behind equality is that all people in society are presented with the same opportunities and given equal treatment, and that no discrimination is made between them on the basis of sex, race, caste, creed, disability, nationality, religion, age, etc.

Definition of Equity
The term equity refers to a system of fairness and justice where people are given even-handed treatment. The needs and requirements of all members of this system are regarded and taken into account, and treatment is given accordingly. Fairness is desired for everyone in every situation, whether it is the allocation of benefits or of burdens. People should be treated fairly yet differently on the basis of their different circumstances. Equity seeks to provide equal opportunities to everyone to help them attain their higher self. In this way, equity ensures that all individuals are given access to the resources they need to reach similar opportunities.

Key differences between Equality and Equity

| Equality | Equity |
|---|---|
| Equality means treating everyone equally, regardless of their differences and circumstances. | Equity means being impartial and even-handed with everyone while respecting individual differences. |
| It refers to even distribution. | It refers to fair distribution. |
| Equality means the "end". | Equity means the "means". |
| Equality treats everyone in the same way. | Equity recognizes individual differences and treats people with those differences in mind. |
| Equality provides the same things to everyone. | Equity provides for people according to their needs and requirements. |

Equity can only be achieved by treating people with their differences and circumstances in mind. On the other hand, equality works well in those circumstances where everyone's starting point is the same.
What is the Formula of Nitrogen Gas

Here is the formula of nitrogen gas and its uses. Nitrogen is a colorless, odorless, and tasteless non-metallic element of the fifteenth group of the periodic table, denoted N, with atomic number 7. It is the most abundant element in the Earth's atmosphere, making up about four-fifths of it, and the sixth element in global abundance. Atmospheric nitrogen is the main source of nitrogen for business and industry; nitrogen is also found in the atmosphere in small amounts as ammonium salts, ammonia, nitric acid, and nitrogen oxides. Free nitrogen is found in a number of meteorites; in gases from mines, volcanoes, and some mineral springs; in some stars and nebulae; and in the Sun as well.

The formula of nitrogen gas

Because two nitrogen atoms bond together to form a diatomic molecule, the formula of nitrogen gas is N2.

Atomic mass: 14.0067 u
Atomic number: 7
Electron configuration: [He] 2s2 2p3

Information on nitrogen gas

- Nitrogen gas is tasteless, colorless, and odorless. The atomic number of nitrogen is 7, and nitrogen is a relatively inactive gas whose atoms have five electrons in their outer shell.
- Nitrogen gas dissolves only slightly in water. It does not ignite and, under ordinary conditions, it hardly reacts with other gases or elements. It belongs to the non-metallic elements.
- Nitrogen gas occupies the largest proportion of the gases in the atmosphere, at about 78%. Daniel Rutherford, a Scottish physician, discovered nitrogen gas in air.

The importance of nitrogen gas

The biological role of nitrogen

Nitrogen plays a vital role in biology. It is found in all living organisms: it is a component of amino acids, and so is involved in the formation of proteins and nucleic acids; it is a component of almost all neurotransmitters; and it is a main component of alkaloids. Nitrogen is recycled through living things in what is known as the nitrogen cycle. Plants and algae take it up in the form of nitrate, while animals obtain it by consuming other organisms, whose proteins and nucleic acids contain it. Microbes convert nitrogen compounds back into nitrate for reuse, and nitrogen-fixing bacteria renew the supply of usable nitrogen directly from the atmosphere.

Nitrogen in industry

Nitrogen is used in many industries, including the following:

- Nitrogen gas is used in food packaging as an alternative to oxygen, making the food last longer. It can also be used as a cushioning layer around food to protect it from damage in transport.
- Nitrogen gas is used in the soldering of electronics.
- Nitrogen enters the stainless steel industry, where nitrogen treatment produces stronger, more corrosion-resistant steels.
- Nitrogen gas can be used to strip volatile organic compounds from liquids before disposal, reducing pollution.
- Nitrogen gas is used to make light bulbs, as it is a less expensive alternative to argon in incandescent lamps.
- Nitrogen is used to extinguish potential fires in mining by displacing the oxygen in the air.
- It is also used to keep enclosed areas inert so they cannot explode, and to prevent explosions in high-risk locations such as manufacturing facilities and chemical facilities.
- Nitrogen gas is useful for inflating tires, giving them a longer life by reducing oxidation and keeping the pressure of the trapped gas more stable.
- Nitrogen gas is used in the petroleum industry.
- It is used to purge gases from large storage tanks to prepare them for storing oil.
- Being inert, it does not react with the oil already present.
- Nitrogen is used in the manufacture of various materials such as rubber and plastics.
- Pharmaceutical companies need high-purity nitrogen gas for storing and preparing medical compounds.
- Dried, pressurized nitrogen gas is an insulator for high-voltage equipment and is used to drive liquids through pipes.
- Nitrogen is one of the most critical components of fertilizers used to increase soil fertility, and it is used to manufacture various fertilizer materials such as ammonia and urea.
- Nitrogen is used in the manufacture of explosive components such as dynamite, and it forms a wide range of highly reactive, unstable compounds, including ammonium nitrate, nitrogen triiodide, nitric acid, and nitroglycerin.

Facts about N2 gas

There are several facts related to nitrogen, including the following (worked equations for some of the preparation routes mentioned here are sketched after this section):

- Nitrogen has an atomic weight of 14.0067. At room temperature, nitrogen is a gas.
- Nitrogen was discovered by the chemist and physician Daniel Rutherford in 1772. The boiling point of nitrogen is -195.79 degrees Celsius, while the melting point is -210 degrees Celsius.
- Nitrogen has sixteen known isotopes, two of which are stable.
- Nitrogen gas plays a role in the formation of the aurora borealis, according to NASA. The aurora occurs mostly near the north and south poles as a natural display of light in the sky, produced when fast-moving electrons coming from space collide with nitrogen and oxygen in the atmosphere.
- Liquid nitrogen is widely used as a coolant, for example for storing the sperm, eggs, and other cells used in fertility clinics and medical research.
- Nitrogen gas can be produced by heating an aqueous solution of ammonium nitrate (NH4NO3), a compound often used in fertilizers.
- Nitrogen makes up 95% of the atmosphere of Titan, Saturn's largest moon.
- According to the Royal Society of Chemistry, nitrogen was produced in the form of ammonium chloride (NH4Cl) in ancient Egypt by heating a mixture of urine, salt, and animal waste.
- Fractional distillation of liquefied air is the basis of commercial nitrogen production. Nitrogen can also be produced in bulk by burning carbon or hydrocarbons and separating the water and carbon dioxide products from the residual nitrogen, and very pure nitrogen can be made by heating barium azide.
- Nitrogen oxides are a group of seven gases and compounds composed of oxygen and nitrogen.
- Nitric oxide and nitrogen dioxide are the most dangerous and the most common.
- Nitrogen oxides are emitted from many sources, such as the burning of coal, car exhaust, diesel fuel, natural gas, and so on.
- Exposure to very high levels of these oxides can pose a significant risk to human health, causing convulsions, genetic mutations, and an increased pulse rate, and may even lead to death.

Is nitrogen gas flammable?

Many scientists have conducted experiments on nitrogen gas, and these experiments consistently show that nitrogen gas does not ignite. Nitrogen is an inert gas: it neither burns nor supports combustion, and it is in fact used to suppress ignition caused by many other gases.
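To make the inertness and the preparation routes above concrete, here is a short sketch in equation form. These are standard textbook equations rather than claims from the article above; note in particular that the classic laboratory preparation of N2 uses ammonium nitrite (NH4NO2), whereas gentle heating of ammonium nitrate (NH4NO3) tends to give nitrous oxide instead.

```latex
% Why N2 is so inert: the molecule is held together by a very strong
% triple bond (dissociation energy roughly 945 kJ/mol).
N \equiv N, \qquad \Delta H_{\mathrm{diss}} \approx 945\ \mathrm{kJ\,mol^{-1}}

% Classic laboratory route (via ammonium nitrite, often generated in situ):
\mathrm{NH_4NO_2 \xrightarrow{\ \Delta\ } N_2 + 2\,H_2O}

% Gentle heating of ammonium nitrate gives nitrous oxide, not N2:
\mathrm{NH_4NO_3 \xrightarrow{\ \Delta\ } N_2O + 2\,H_2O}

% Very pure nitrogen from the thermal decomposition of barium azide:
\mathrm{Ba(N_3)_2 \xrightarrow{\ \Delta\ } Ba + 3\,N_2}
```

Each equation balances atom-for-atom, which is easy to verify by counting nitrogen, hydrogen, and oxygen atoms on each side.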
Harmful effects of N2 gas

We all know the importance of nitrogen gas to humans; it is one of the gases necessary for life. However, many scientists stress that nitrogen gas can also cause considerable harm to humans. The most important of these harms are:

- When nitrogen gas is released very quickly into an enclosed space, suffocation can occur, because the nitrogen displaces the oxygen that was present.
- When nitrogen gas is inhaled at elevated partial pressure, it has a narcotic effect (nitrogen narcosis), which can cause a person to faint.
- When divers breathe compressed air, especially on deep dives, nitrogen dissolves in the blood. If the diver ascends to the surface too quickly, the dissolved nitrogen forms bubbles in the bloodstream, causing decompression sickness ("the bends").
Dizziness is commonly associated with falls; in fact, it usually signifies an underlying medical problem. In this animation we will see three of the most common forms of dizziness: vertigo, lightheadedness and disequilibrium. Having an understanding of dizziness can help us to uncover the underlying medical causes. Dizziness is nearly always treatable, does not occur because of ageing and is an important cause of falls. It is what we call a 'red flag' - a sign that we need to seek a professional opinion. While watching the animation, see if you have ever experienced any of these feelings. Most of us will have at some point.

Strictly speaking, vertigo is a sensation of spinning, as if the world is spinning around us. Vertigo nearly always arises from problems with the inner ear. The semicircular canals of the inner ear are filled with fluid. When our head turns, tiny hairs within the fluid detect the speed and direction of the movement of this fluid. One of the commonest causes of vertigo arises when tiny crystals enter these fluid-filled canals. As we change position, the fluid moves, and these free-floating crystals knock into the tiny hairs, which confuses our brain into thinking that we are spinning around, resulting in transient vertigo. This condition is called benign paroxysmal positional vertigo.

Lightheadedness is our next type of dizziness. This is what people tend to experience before they faint, or some people describe it as a head rush. It might also result in blurred vision, fatigue, headaches, and a pale complexion. It occurs when gravity pulls blood down into our legs, away from our brain. If our body cannot react to this drop in blood pressure quickly enough, it can cause lightheadedness, loss of balance, blackouts, or falls.

Our final type of dizziness is disequilibrium. This is a feeling of unsteadiness, as if the ground is moving beneath our feet. It tends to be caused by problems with the eyes, inner ear, or nerve signals from the feet and joints.

© Newcastle University
For further information, including the full final version of the list, read the Wikipedia article: Swadesh list. American linguist Morris Swadesh believed that languages changed at measurable rates and that these could be determined even for languages without written precursors. Using vocabulary lists, he sought to understand not only change over time but also the relationships of extant languages. To be able to compare languages from different cultures, he based his lists on meanings he presumed would be available in as many cultures as possible. He then used the fraction of agreeing cognates between any two related languages to compute their divergence time by some (still debated) algorithms. Starting in 1950 with 165 meanings, his list grew to 215 in 1952, which was so expansive that many languages lacked native vocabulary for some terms. Subsequently, it was reduced to 207, and reduced much further to 100 meanings in 1955. A reformulated list was published posthumously in 1971.
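One of the best known of those (still debated) algorithms is the classic glottochronological formula, which is worth sketching here. This is a standard textbook formulation of lexicostatistical dating, not something given in the passage above: if c is the fraction of shared cognates on the list and r is the assumed retention rate per millennium (commonly taken to be about 0.85 for the 100-item list), the estimated divergence time t in millennia is:

```latex
% Glottochronological divergence-time estimate (standard formulation):
%   c = observed fraction of shared cognates between the two languages
%   r = assumed per-millennium retention rate (about 0.85 for the 100-word list)
t = \frac{\ln c}{2 \ln r}

% Worked example: two languages sharing 74% cognates, with r = 0.85,
% give t = ln(0.74) / (2 ln 0.85), approximately 0.93 millennia,
% i.e. a split roughly 930 years ago.
```

The factor of two reflects the assumption that both daughter languages have been losing vocabulary independently since the split; much of the ongoing debate concerns whether r is really constant across languages and epochs.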
Teen Stress Management

Teenagers, like adults, may experience stress every day and can benefit from learning stress management skills. Most teens experience more stress when they perceive a situation as dangerous, difficult, or painful and they do not have the resources to cope. Some sources of stress for teens might include:
- school demands and frustrations
- negative thoughts and feelings about themselves
- changes in their bodies
- problems with friends and/or peers at school
- unsafe living environment/neighborhood
- separation or divorce of parents
- chronic illness or severe problems in the family
- death of a loved one
- moving or changing schools
- taking on too many activities or having too high expectations
- family financial problems

Some teens become overloaded with stress. When this happens, inadequately managed stress can lead to anxiety, withdrawal, aggression, physical illness, or poor coping behaviors such as drug and/or alcohol use.

When we perceive a situation as difficult or painful, changes occur in our minds and bodies to prepare us to respond to danger. This "fight, flight, or freeze" response includes a faster heart and breathing rate, increased blood flow to the muscles of the arms and legs, cold or clammy hands and feet, an upset stomach, and/or a sense of dread.

The same mechanism that turns on the stress response can turn it off. As soon as we decide that a situation is no longer dangerous, changes can occur in our minds and bodies to help us relax and calm down. This "relaxation response" includes a decreased heart and breathing rate and a sense of wellbeing. Teens who develop a relaxation response and other stress management skills feel less helpless and have more choices when responding to stress.

Here are some other resources for teens and parents to download:
Following Nazi Germany's surrender on May 8, 1945, an estimated 11 million Europeans--specifically non-German and non-Austrian nationals--remained uprooted from their home countries. They were classified as "displaced persons" (DPs) by the Allies and the United Nations Relief and Rehabilitation Administration (UNRRA), which had been founded on November 9, 1943, to deal with anticipated DP issues. Seven million of the DPs were in Germany. During the war, the majority of these people had been brought to Germany to work for the Third Reich. About 800,000 Poles alone had been conscripted for labor by the Nazis. Still others, including approximately 200,000 Jews, were recently liberated inmates who had survived Nazi camps and death marches. By the end of 1945, more than six million DPs had gone back to their native lands, but between 1.5 and two million of them refused repatriation. The non-Jews who did not want to return home were mostly Poles, Estonians, Latvians, Lithuanians, Ukrainians, and Yugoslavians. In some cases they feared political reprisals for their Nazi collaboration; in other cases they dreaded persecution by Eastern Europe's Communist regimes. For Jews, returning home was scarcely an option. Their families had been annihilated, their communities destroyed, their property confiscated. If these Jews did try to go home again, their arrival was often greeted with hostility and physical violence from former neighbors. For the most part, the concept of "home" no longer existed for Jewish DPs. Instead, they found themselves in grim DP camps on German soil (such as the one pictured at Zeilsheim). Most of these places were enclosed by barbed wire, overcrowded, and situated in former labor or concentration camps. Many Jews were harassed or assaulted by former Nazi collaborators. Jews hoped for immigration opportunities that would take them to destinations such as Palestine or the United States, but until then they endured daily drudgery and tension. Jewish chaplains in the U.S. Army, such as Rabbi Judah Nadich and especially Rabbi Abraham Klausner, worked tirelessly on behalf of Jewish DPs. They successfully encouraged the Allied authorities to establish all-Jewish DP camps, where conditions for the Jewish DPs improved. Feldafing, which housed about 3700 people, was the first of these places. Jewish DP camps at Landsberg and Föhrenwald sheltered another 5000 Jews each. In the American zone of occupation, a dozen DP camps were maintained exclusively for Jews by the end of 1945. By 1952 most of the Jewish DP camps had closed, although the one at Föhrenwald operated under the supervision of the democratic Federal Republic of Germany until early 1957. Before the Jewish DP camps finally were emptied, nearly 250,000 Jews had lived in them. Photo: Alice Lev Collection / United States Holocaust Memorial Museum Photo Archive
Presentation on theme: "GASES. General Properties of Gases There is a lot of “free” space in a gas. Gases can be expanded infinitely. Gases fill containers uniformly and completely."— Presentation transcript: General Properties of Gases There is a lot of “free” space in a gas. Gases can be expanded infinitely. Gases fill containers uniformly and completely. Gases diffuse and mix rapidly. Properties of Gases Gas properties can be modeled using math. Model depends on— V = volume of the gas (L, mL) T = temperature (K) ALL temperatures in the entire chapter MUST be in Kelvin!!! No Exceptions! n = amount (moles) P = pressure (atmospheres, mmHg, torr, kPa) Pressure Column height measures Pressure of atmosphere 1 standard atmosphere (atm) * = 760 mm Hg (or torr) * = 101.3 kPa (SI unit is PASCAL) Pressure conversions A.) What is 475 mm Hg expressed in atm? 475 mmHg 1 atm = 0.625 atm 760 mm Hg B.) The pressure of a tire is measured as 29.4 psi. What is this pressure in mm Hg? 29.4 psi 760 mmHg = 1.52 x 10 3 mmHg 14.7 psi Your Turn: Learning Check for Pressure Conversions A.) What is 2 atm expressed in torr? B.) The pressure of a tire is measured as 32.0 psi. What is this pressure in kPa? Boyle’s Law This means Pressure and Volume are INVERSELY PROPORTIONAL if moles and temperature are constant (do not change). For example, P goes up as V goes down. P1V1 = P2 V2 V1 is the original volume V2 is the new volume P1 is original pressure P2 is the new pressure Sample Problem Suppose you have a gas with 45.0 ml of volume and has a pressure of 760.mmHg. If the pressure is increased to 800mmHg and the temperature remains constant then according to Boyle's Law the new volume is 42.8 ml. (760mmHg)(45.0ml) = (800mmHg)(V2) V2 = 42.8ml Robert Boyle Charles’s Law V and T are directly proportional. If one temperature goes up, the volume goes up! V1 V2 T1 = T2 V1 is the initial volume T1 is the initial temperature V2 is the final volume T2 is the final temperature Sample Problem You have a gas that has a volume of 2.5 liters and a temperature of 250 K. What would be the final temperature if the gas has a volume of 4.5 liters? V1 / T1 = V2 / T2 V1 = 2.5 liters T1 = 250 K V2 = 4.5 liters T2 = ? Solving for T2, the final temperature equals 450 K. Important: Charles's Law only works when the pressure is constant. Note: Charles's Law is fairly accurate but gases tend to deviate from it at very high and low pressures. Jacques Charles Gay-Lussac’s Law If n and V are constant, then P α T P and T are directly proportional. P1 P2 T1 T2 If one temperature goes up, the pressure goes up! Sample problem The pressure inside a container is 770 mmHg at a temperature of 57 C. What would the pressure be at 75 C? P1= 770 mmHg T1 = 57°C T2= 75°C P2 = ? = Combined Gas Law Since they are all related to each other, we can combine them into a single equation. BE SURE YOU KNOW THIS EQUATION! P1 V1 P2 V2 T1 T2 = Combined Gas Law Problem A sample of helium gas has a volume of 0.180 L, a pressure of 0.800 atm and a temperature of 29°C. What is the new temperature(°C) of the gas at a volume of 90.0 mL and a pressure of 3.20 atm? Set up Data Table P1 = 0.800 atm V1 = 180 mL T1 = 302 K P2 = 3.20 atm V2= 90 mL T2 = ?? Calculations P1 = 0.800 atm V1 = 180 mL T1 = 302 K P2 = 3.20 atm V2= 90 mL T2 = ?? P1 V1 P2 V2 T1 = T2P1 V1 T2 = P2 V2 T1 T2 = P2 V2 T1 P1 V1 T2 = 3.20 atm x 90.0 mL x 302 K 0.800 atm x 180.0 mL T2 = 604 K - 273 = 331 °C
Monday, January 24, 2011
Our brains have evolved to be good at certain things: seeing, hearing, learning language, and interacting with other similar brains, to name a few examples. But say you want your brain to do something new – look at symbols on a page and map them to language. In other words, you want to teach your brain to read. How would you go about doing this? What parts of the brain would you use? Unless you plan on developing a completely new region, it makes sense to repurpose the brain regions you already have -- a process that neuroscientist Stanislas Dehaene refers to as "neuronal recycling." This raises the question -- which regions are recycled? And do the regions that get co-opted become worse at their original function?

Dehaene and colleagues explored this question by scanning adults at different levels of literacy: literates, ex-illiterates (adults who used to be illiterate but learned to read in adulthood), and illiterate adults. They had several interesting findings:

1. They first looked at whether learning to read changes brain activation when looking at words. Not surprisingly, it does. Reading performance was correlated with increased brain activation in much of the left-hemisphere language network, including the visual word form area. And this increased activation appeared to be specific to word-like stimuli.

2. During reading, ex-illiterates have more bilateral activation and also recruit more posterior brain regions. This is similar to what we find in children, who also show more spread-out activation while reading. This suggests that unskilled readers recruit a wider set of brain regions as they are learning to read. As readers become more skilled, their brains become more efficient and recruit fewer regions.

3. In literate adults, the response to checkerboards and faces in the visual word form area was lower than in non-readers. This suggests that learning to process words may actually take resources away from processing other stimuli.

4. The researchers looked more closely at responses to faces and houses to see how exactly learning to read competes with other visual functions. They found that activation in the peak voxels for faces and houses did not change with literacy. However, activation in the surrounding voxels did decrease.

5. And here's an interesting result. Since reading is a horizontal process (at least in the languages they were testing), the researchers checked to see if the visual system became more attuned to horizontal stimuli. They found that literacy enhanced the response to horizontal but not vertical checkerboards in some primary visual areas.

Dehaene S, Pegado F, Braga LW, Ventura P, Nunes Filho G, Jobert A, Dehaene-Lambertz G, Kolinsky R, Morais J, & Cohen L (2010). How learning to read changes the cortical networks for vision and language. Science (New York, N.Y.), 330 (6009), 1359-64. PMID: 21071632
Problem-based learning (PBL) gives students opportunities for collaborative as well as self-directed learning. The school of applied science at the Republic Polytechnic in Singapore uses one problem a day to teach general chemistry to a wide-ability cohort of post-18-year-old students.

Problem-based learning encourages students to take responsibility for their own learning

In a knowledge-based economy, it is vital for our graduates to be self-regulated learners, able to learn on their own and to evaluate vast amounts of information to solve problems and make decisions. They also need to develop interpersonal skills and professional attitudes, such as being able to work well in teams, communicate effectively, criticise constructively and uphold ethical behaviour.1-3 Such education outcomes require an active learning approach that gives students the opportunity to work on problems similar to what they are likely to face in their future careers. We believe that problem-based learning (PBL) can provide a structured framework of active and collaborative learning that is in line with these outcomes.4

PBL in chemistry

Problem-based learning in chemistry is not new and has been used in many universities around the world. For example, Simon Belt and his colleagues at the University of Plymouth, UK,5,6 have used PBL with a group of students with a wide range of abilities and backgrounds in analytical and applied chemistry. Their findings demonstrated that this method enabled students to develop subject knowledge as well as other scientific and transferable skills such as critical thinking and collaboration. Others have found the approach to be an effective way of motivating students in their learning generally.7 One feature of these examples is that they were all developed as a separate unit within a more traditional learning context. At the Republic Polytechnic in Singapore, PBL is implemented across all subject disciplines.8 Here we will focus on how we in the school of applied science use PBL in an introductory chemistry module.

Chemistry is a compulsory general core module for all students in the school of applied science. These students are enrolled in one of five diploma programmes: biomedical sciences; biotechnology; materials science; pharmaceutical sciences; and environmental science. Their prior knowledge in chemistry varies, ranging from those who passed chemistry at the Singapore-Cambridge general certificate of education (ordinary level) to others who have only a very basic knowledge from their lower secondary school science programme. Thus the general chemistry module aims to provide students with a basic understanding and application of the foundational chemistry principles required for their more specific discipline of study.

One day, one problem

One unique feature of our PBL approach is that the problems have been designed so that students spend only one day working on them, individually as well as in teams, to propose a solution by the end of the day. This way they are constantly revisiting and refining their skills, and they also receive daily feedback from us ('facilitators') which helps them improve their learning strategies. On average, we will have 25 students working in teams of five, and one facilitator. Each student has his or her own personal laptop computer, which they use to access an online learning environment where they will find daily problem statements, worksheets and resources. Students can also make use of the Internet as part of their individual study.
Each day is divided into three sessions ('meetings') run by a facilitator. There are also two study periods in between for students to work individually or in their teams. An example of one of the problems we use is Fizzing bubbles, Box 1.

First meeting - problem analysis

The problem is introduced to students during the 'first meeting', which we refer to as the problem analysis phase. Students are then given about 10 minutes to discuss their initial response to the problem by working together to fill in a 'problem definition template' (PDT). In this process they identify what they know, do not know and need to find out to address the problem. Our problems are designed in such a way that students will recognise familiar elements in their daily life and therefore be able to get involved in discussions immediately. We also purposefully avoid complicated technical words in the problem statement. The opportunity for students to share ideas in a small group is believed to activate their prior knowledge and allows them to relate new information in the problem to their existing knowledge. Hearing what other students have to say can also uncover prior knowledge which they didn't realise they had.9

After the team discussion, the facilitator encourages all teams to share their ideas at the class level.10 At this point the facilitator suggests questions that the students should be asking, thus providing them with a good model for developing their thinking strategies.11 The discussions help students to realise the gaps in their existing knowledge and what they are required to know to deal with the problem. Thus by the end of the first meeting, students would have identified these gaps as areas for further study. In addition, for the problem in Box 1, the facilitators would demonstrate the reaction between citric acid and sodium hydrogencarbonate, allowing students to observe the effervescent effect. Students would then experiment to see what happens if more citric acid was added after the initial reaction, and vice versa. Through this, the facilitator prompts the students to think about issues such as the amounts of materials required for the reaction (mole concept), and the amounts of materials available for reaction (limiting reagent).

The learning objectives of the problem in Box 1 include: understanding what chemical equations are and the importance of balancing them, as well as identifying limiting and excess reagents in a chemical reaction. Students are also expected to appreciate the significance of the mole concept and stoichiometry in the process of working out the chemical calculations. They should also learn about the reactions of acids, and the concept of pH. Note that the learning issues identified by students match to some extent the intended learning objectives of the problem (Table 1). The PDT is considered a work in progress, and students would continue to refine and add to it as they progressed in their learning throughout the day.

Study period 1: self-directed learning phase

After the first meeting, students are given about an hour to work on their own and in their teams. Besides the problem statement, we usually also include a worksheet, which students are encouraged to answer during the first study period. The worksheet introduces various key scientific concepts and vocabularies that they may need to understand while working on the problem.
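To make concrete the kind of chemistry the worksheet is driving at, here is a small illustrative sketch. It is my own illustration rather than material from the module: it assumes the usual neutralisation of citric acid by sodium hydrogencarbonate and standard molar masses, and computes which ingredient runs out first for a given recipe.

```python
# Illustrative limiting-reagent calculation for the Fizzing bubbles problem.
# Assumed reaction (1 mol citric acid neutralises 3 mol sodium hydrogencarbonate):
#   C6H8O7 + 3 NaHCO3 -> Na3C6H5O7 + 3 H2O + 3 CO2
M_CITRIC = 192.12   # g/mol, citric acid
M_BICARB = 84.01    # g/mol, sodium hydrogencarbonate

def limiting_reagent(grams_citric, grams_bicarb):
    """Return which ingredient runs out first for a given recipe."""
    mol_citric = grams_citric / M_CITRIC
    mol_bicarb = grams_bicarb / M_BICARB
    # Citric acid consumes bicarbonate in a 1:3 mole ratio.
    if mol_bicarb < 3 * mol_citric:
        return "sodium hydrogencarbonate is limiting"
    if mol_bicarb > 3 * mol_citric:
        return "citric acid is limiting"
    return "stoichiometric mixture: both are fully consumed"

# Example recipe: 2.0 g citric acid with 2.0 g bicarbonate.
print(limiting_reagent(2.0, 2.0))  # sodium hydrogencarbonate is limiting
```

A recipe that leaves citric acid in excess tastes sour, while excess bicarbonate tastes soapy, which is why the proportions in Box 1 have to be optimised.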
In Fizzing bubbles, the worksheet guides the students to understand the need for balancing chemical equations, the mole concept and mole calculations. We do this because a significant proportion of our students do not have prior knowledge in chemistry. The worksheet breaks the problem down into smaller tasks or steps, thus helping students to think through the problem systematically. It usually ends by leading the students back to the problem statement, where they are challenged to make use of their newly acquired knowledge and understanding to respond to the problem.

By the end of the first study period, each team would be expected to have answered, at least in part, some of the questions they had raised in their PDT earlier. They may also refine and update their original ideas and questions as a result of their study. During the second meeting the various teams of students share their progress and understanding of the problem. The facilitator can help them at this stage with any learning difficulties and conceptual understanding. For example, students could have come across the mole concept and stoichiometric calculations from the worksheet and other resources gathered in the first study period, and would like to use mole calculations to respond to the problem. However, they could be facing some problems in the calculations. By being aware of the students' approach to handling the problem at this point, the facilitator can then work together with them to overcome their learning obstacles.

Study period 2: self-directed learning phase

After discussion with the facilitator, the teams continue their self-directed study as well as team discussion to consolidate their findings and formulate a response to the problem. Since the third meeting will require a team presentation of their response to the problem, some time is also spent preparing a PowerPoint presentation and rehearsing what will be presented.

Third meeting - reporting back

During the third meeting (reporting phase), each team presents its consolidated findings and response to the problem, defending and answering questions raised by peers and the facilitator. This process of critical questioning and explaining one's ideas is an essential component of a student's learning. At this point the facilitator would also clarify key ideas, if necessary. After all five teams have presented their solution to the problem, the facilitator also goes through a brief presentation that provides a possible response to the problem. At the end of each day, students are asked to reflect on their learning process in a journal. This helps them develop an awareness of their learning process, identify their learning difficulties as well as consider ways in which they can improve.

We assess our students on a daily basis. The students receive a grade each day, which is based on their class participation, teamwork, presentation skills, understanding of concepts (evaluated by their ability to identify relevant questions and issues and to defend their presentations), as well as what they write in their journals. This is in line with our educational philosophy, which places more emphasis on the process of learning than on content knowledge. In addition, we run summative assessments administered over each semester, which are designed to evaluate students' understanding and application of concepts, rather than their recall of content knowledge.
The daily grades and the results from these tests carry similar weightings to the student's final grade for the module, because we consider the learning process to be equally important as the understanding and application of concepts in assessing the student's performance. From their reflections (Box 2) we find that while students do face learning challenges initially, they do learn to overcome them in the course of the module. We also note that, in general, the problem statements do trigger their interest to find out more about the various concepts. They also appreciate the importance of teamwork and collaboration in their learning process.

Through the process of teamwork and individual study triggered by a problem, students develop an understanding of chemistry concepts as well as learn valuable life skills required in their future work and for further education. While the scope of this article did not allow us to share more about the challenges we face in implementing the PBL approach, some of these include the design of good problem triggers as well as the training and development of teachers in their roles as facilitators.

Elaine H. J. Yew is senior manager (faculty development) in the school of applied science at the Republic Polytechnic, Singapore. Eric K. H. Kwek is head of curriculum services in the same department.

Box 1 - Fizzing bubbles

A fruit salt is being developed for use to relieve stomach upsets and feelings of bloatedness caused by too much food. When a spoonful of the fruit salt is added to a glass of water, it produces a fizzing, bubbling effect, similar to what is observed when pouring out a can of soft drink into an empty glass. The active ingredients used in this product are citric acid and sodium hydrogencarbonate (sodium bicarbonate). Citric acid is found in citrus fruits, such as oranges and lemons, while sodium hydrogencarbonate, also known as 'baking powder', is what is used to make dough 'rise' when cakes are baked. The amounts and proportion of citric acid and sodium hydrogencarbonate used in such products have to be optimised to ensure a pleasant taste and gentle bubbling effects. The Table indicates the bubbling effect and taste for different compositions of the two ingredients.

Box 2 - Examples of student feedback

Nur, age 19, wrote: At the beginning of the lesson, I felt quite lost. This is because I work at a slower pace and need more time to understand things. I did not learn chemistry at secondary school and felt quite confused by some of the terms mentioned by my classmates. I had to do my own research to find out what the terms meant. Nevertheless, I am glad that I managed to understand at least some of the concepts from today's lesson. I believe it is alright to learn at a slower pace as long as I understand at the end of the day.

Wong, age 18, wrote: I shared with my teammates what I knew about the topic and I volunteered to write the chemical equation and to explain it to them later. When I did this, I found out that the concepts on limiting reagents and reagents in excess were needed to answer the problem today. However, I was not really sure whether the mole calculations were relevant to our team presentation because there wasn't much about this in the problem. However, my teammates explained that they are required for the later parts of the problem to ensure the appropriate amounts of substances are used and produced. All in all, I think with my own ideas alone I would not have been able to complete what my team has completed today.
However, I think I did put my ideas across to the team for the presentation, but there is always room for improvement.

References
- J. S. Brown, A. Collins and P. Duguid, Educ. Researcher, 1989, 18, 32.
- E. L. Pizzini, D. P. Shepardson and S. K. Abell, J. Res. Sci. Teach., 1991, 28, 111.
- S. Papert in Constructionism: research reports and essays, 1985-1990, I. Harel and S. Papert (eds), pp 1-11. Norwood, NJ: Ablex, 1991.
- H. S. Barrows, The tutorial process. Springfield, Illinois: Southern Illinois University School of Medicine, 1988.
- S. T. Belt et al, Uni. Chem. Educ., 2002, 6, 65.
- S. T. Belt et al, Chem. Educ. Res. Pract., 2005, 6, 166.
- P. Ram, J. Chem. Educ., 1999, 76, 1122.
- W. A. M. Alwis in a keynote paper, International Symposium on PBL: Reinventing PBL, Singapore, 2007.
- W. S. De Grave, H. P. A. Boshuizen and H. G. Schmidt, Instruct. Sci., 1996, 24, 321.
- C. E. Hmelo-Silver and H. S. Barrows, The Interdisc. J. Problem-based Learn., 2006, 1, 21.
- A. Collins, J. S. Brown and S. E. Newman in Knowing, learning and instruction: essays in honour of Robert Glaser, L. B. Resnick (ed). Hillsdale, NJ: Lawrence Erlbaum, 1989.
Red blood cells are called erythrocytes, and they are the most common type of blood cell in human blood. They give blood its red color. In terms of structure, they are disc shaped with dimples in the center. They are very simple cells with no nucleus or major organelles as found in most other types of cells; this is so that they can hold as much hemoglobin as possible. The hemoglobin is what allows the red blood cell to perform its primary function: to transport oxygen from the lungs to various parts of the body. Hemoglobin is an organic molecule with an iron atom at the center of each of its heme groups. The iron atoms bind O2 molecules, allowing oxygen to be carried through the bloodstream. The hemoglobin also carries some of the carbon dioxide waste from the body back to the lungs to be expelled.
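As a supplementary note (standard biochemistry, not part of the answer above): each hemoglobin molecule carries four heme groups, so the overall oxygen-loading equilibrium in the lungs can be written as:

```latex
% Cooperative oxygen binding: one hemoglobin molecule (Hb) with four heme
% irons can carry up to four O2 molecules. High O2 pressure in the lungs
% pushes the equilibrium right; low O2 pressure in the tissues pulls it
% left, releasing the oxygen where it is needed.
\mathrm{Hb + 4\,O_2 \rightleftharpoons Hb(O_2)_4}
```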
Scientists and engineers have long envied nature’s ability to design crystalline structures whose properties are often superior to those of similar synthetic materials. Through a process called biomineralization, proteins orchestrate the growth processes of many natural minerals into designs that confer exceptional properties. Scientists are eager to understand nature’s biomineralization processes. The biological controls that determine the size, shape, and properties of crystals are key to addressing challenges as diverse as synthesizing nanostructures, characterizing climate change, treating disease, and designing new materials for national security applications.

Researchers have been studying biominerals for decades and have known for some time that organic molecules can influence the shape and properties of a growing crystal. However, researchers have been limited in their ability to mimic biomineral growth for specific applications, because they lack a thorough understanding of how biominerals form. The limitations are evident by comparing a single calcite crystal (the most stable form of calcium carbonate) synthesized in the laboratory to a coccolith formed in nature. (See the figures below.) The synthetic form has a simple rhombohedral shape (a prism with six faces, each a rhombus) and tends to grow randomly with no preferred orientation. In contrast, the coccolith exhibits a very organized, repeating crystal pattern.

In an effort funded originally by the Laboratory Directed Research and Development (LDRD) Program and now by the Department of Energy (DOE), a Livermore team is using advanced microscopy techniques and molecular modeling to investigate the effects of interactions between biomolecules and calcium carbonate surfaces. The Livermore team, which includes physicists Jim De Yoreo, Roger Qiu, and Chris Orme, is collaborating with geochemist Patricia Dove of Virginia Polytechnic Institute, molecular biologist Daniel Morse of the University of California at Santa Barbara, biochemist John Evans of New York University, and theoretical physicist Andrzej Wierzbicki of the University of South Alabama.

The multidisciplinary collaboration is studying calcium carbonate because calcium-bearing minerals comprise about 50 percent of all known biominerals. The vast deposits of calcium carbonate laid down by marine organisms are the largest terrestrial reservoir of carbon and hold a historical record of the interplay between Earth systems and biological organisms stretching back to the Cambrian period. Moreover, the richness of calcium carbonate architectures suggests that the mineral can serve as an excellent model for determining the physical mechanisms that underlie biomineralization.

(a) The simple shape of a synthetic calcite crystal is contrasted with (b) the complex and organized shape of a coccolith formed in nature. (This coccolith image is courtesy of Markus Greisen and Jeremy Young, copyright Natural History Museum, London.)

The Many Faces of Crystals

More than 300 identified crystal forms of calcite can combine to produce at least a thousand different crystal variations. Calcite, in its simplest form, grows as a rhombohedron. Similar to other solution-grown crystals, calcite grows when molecules land on the surface of a crystal seed and attach to the edge of a one-molecule-thick layer, called a step.
The addition of molecules causes the layer to spread outward until it covers the face of the crystal. The layers form because the crystal contains defects, called dislocations. A dislocation will naturally generate a step at the surface of the crystal lattice. Because this step never goes away, as the layer grows outward, a spiral ramp of new layers, called a dislocation hillock, forms. Molecules continually adsorb and desorb from the step edges of these hillocks. If the concentration of molecules in the solution is high, the rate of adsorption will be greater than the rate of desorption, and the crystal will grow. If the reverse is true, the crystal will dissolve.

In the past, theories of solution crystal growth were based on the limited capabilities of imaging equipment such as the scanning electron microscope (SEM). The SEM allowed researchers to observe only one or two faces of a crystal and did not allow them to observe the growth mechanisms in real time and at the necessary size scales. More recently, the Livermore team was among the pioneers to use atomic force microscopy (AFM) for investigating the growth of crystals from solutions. AFM enables researchers to image and measure subnanometer changes on the surfaces of every face of a crystal in real time. In 2001, the team received a Laboratory Science and Technology Award for its use of this technology in investigating the physical controls on biomineral growth.

To understand the growth mechanisms involved in biomineralization, researchers must know the orientation of a crystal’s molecules and how they bond to each other. The molecules in a crystalline structure are arranged in a framework called a crystal lattice. The geometric and chemical relationships between the crystal lattice and organic modifiers, such as proteins, determine the interaction energy between modifier and crystal. Because crystals found in biomineral structures often exhibit unusual faces not found in synthetic crystals, researchers originally thought that the addition of a modifier must lower the energy of those new faces, allowing the crystal to express a shape that wouldn’t have otherwise been stable. “We discovered this theory was incorrect,” says De Yoreo, who initiated the Livermore effort. “To understand how a particular additive affects crystal shape, we need to look at how it interacts at the growing steps of the crystal.”

A modifier alters the shape and growth rate of a crystal by blocking the growth of certain steps or by increasing the rate at which new molecules attach to steps. Because the geometric and chemical relationships between modifier and crystal are different for each type of step, new crystal shapes can be generated. To understand the change in crystal shape, the team performed calculations that predicted which steps the modifier would bind to most strongly.

(a) A crystal grows when molecules land on the surface of a crystal seed and attach to the edge of a layer, called a step. (b) An atomic force microscope image of a pure calcite sample shows that as the molecules attach to a step, the layer spreads, forming a dislocation spiral, or hillock.

Proteins Modify Growth

In 2005, the team studied how peptide sequences from proteins of abalone and oyster shells (AP7-N, AP24-N, and n16-N) affect calcite growth. Using AFM, the team explored the adsorption of these proteins. Results showed that AP7-N and AP24-N inhibit growth at some of the calcite crystal’s steps and accelerate growth in others, resulting in round dislocation hillocks.
In contrast, n16-N interacts with corner sites and promotes the emergence of a new set of steps. To test the importance of protein sequence on the structure of the three proteins, the team scrambled the sequence of peptides in the chain. The scrambled proteins no longer accelerated the step kinetics, illustrating directly that the structure, not simply the chemistry, is important for the effect on crystallization. This study is one of the first to systematically change protein sequence and use AFM to cross-correlate sequence and structure. The process provides a mechanistic understanding of the effects on crystal growth.

In another study funded by DOE, the team investigated whether the most acidic abalone nacre protein, AP8, alters calcite growth and whether differences in control mechanisms existed. The change in growth shape was similar to that seen with small molecules, but surprisingly, the team found that the molecular-scale kinetics are significantly accelerated by AP8. Moreover, although AP8 proteins are much larger than atomic steps, they modify the growth by step-specific interactions. The observed rounding of the step edges and accelerated kinetics indicate that these proteins act as surface-active agents to promote ion attachment at the calcite step. Qiu says, “The significant and exciting outcome of this work is that we can define a new role for biomolecules in controlling crystal growth. Specifically, acidic proteins promote crystallization, which is in stark contrast to an inhibitor of crystallization—a role that is generally postulated and observed for peptides and other small molecules.” The results of the study also create a coherent picture of how protein-induced morphological changes at the molecular scale can guide the growth of a crystal to its macroscopic form.

Molecular models confirm that the step edges are the most favorable binding environment for the modifiers. The steps provide the greatest opportunity for the three-dimensional modifiers to make the largest number of bonds. De Yoreo says, “Although the mechanisms of growth modification are diverse, crystal shape is most controlled by the interactions that occur at the step edges of a crystal’s faces. By knowing which steps the modifiers bind to most strongly, we can predict how they change the crystal shape.” These findings are significant for applications in materials science, bioengineering, geosciences, and medical sciences.

(a) An atomic force microscope image shows that the AP8 protein alters the step morphology of a growing calcite crystal by accelerating the growth kinetics. (b) A scanning electron microscope image shows the shape of a calcite crystal grown in the presence of AP8 protein. The morphology in the upper portion remains the same, but the morphology in the lower portion is modified.

Inhibiting Kidney Stones

Not all biomineralization processes are aimed at growing crystalline structures. Many organisms rely on arresting crystal growth in order to survive. For example, arctic fish must live in subfreezing environments without developing ice crystals in their blood. If it were not for the inhibitory effect of certain proteins, the supersaturation of calcium phosphates and oxalates in human blood and urine would be enough to turn a human into a proverbial pillar of salt. Understanding how proteins and other biomolecules—including therapeutic agents—inhibit mineralization is crucial for controlling a variety of disorders.
An example of such a disorder is the formation of stones in the human urinary tract. Through funding from the National Institutes of Health, Livermore’s biomineralization team is collaborating with Wierzbicki, physical chemist George Nancollas from State University of New York at Buffalo, and nephrologist John Hoyer from Children’s Hospital in Philadelphia to investigate the influence of modifiers in inhibiting the growth of kidney stones.

Kidney stones are composed of up to 95-weight-percent calcium oxalate. Calcium oxalate exists in two forms—calcium oxalate monohydrate (COM) and dihydrate (COD). Although normal human urine contains COM and COD, stone formation is suppressed by a number of protein inhibitors and small molecules such as citrate and magnesium. The Livermore team combined molecular modeling with AFM to provide the first molecular-scale views of COM modification by two urinary constituents, citrate and the protein osteopontin. The team found that while both molecules inhibit the growth kinetics and modify growth shape, they do so by attacking different faces on the COM crystals. Citrate has a stronger binding energy on the steps of one face than on the steps of the other faces. The results also suggest that when citrate and osteopontin are added simultaneously, a greater inhibitory effect on COM growth exists than when either is applied alone. The results have significant implications for kidney stone disease therapy.

Qiu says, “We don’t completely understand kidney stones. For example, we would like to know how proteins control the nucleation of stones, that is, the growth of new crystals.” Qiu wants to create a synthetic cell surface for realistically studying nucleation kinetics and how additives may affect nucleation. “In the case of kidney stones, the problem can be addressed at a few different points along the pathway,” he says. “We could address it at the nucleation point or at the transformation of COD to COM. Using a synthetic cell would provide us with the most realistic model to understand the starting point of kidney stone formation.”

Atomic force microscope images show at the molecular scale that citrate and a naturally occurring protein, osteopontin, inhibit the growth and change the shape of calcium oxalate monohydrate by attacking the steps on different faces. While citrate strongly affects the steps on the top face (a–b), osteopontin tackles the steps on the side face of the crystal (c–d).

Applications to Energy and Security

Much of Livermore’s early AFM work on crystal growth focused on the need to better understand potassium dihydrogen phosphate (KDP) crystal growth for use on the Laboratory’s National Ignition Facility (NIF), which when completed will be the largest laser and optical instrument ever built. (See S&TR, September 2002, Empowering Light: Historic Accomplishments in Laser Research.) In 1983, Novette, one of NIF’s predecessors, was the first laser to be engineered with optical frequency converters made of KDP crystals. The crystals convert infrared light at a wavelength of 1,053 nanometers to a shorter wavelength of 351 nanometers. For NIF’s optics, about 600 large slices of KDP are needed. Using traditional crystal growing methods, it would have taken 15 months to grow that number at the large sizes needed. In the early 1990s, a fast-growing method, pioneered in Russia and perfected at Livermore, produced crystals at the required size in just days.
The development team won an R&D 100 Award in 1994 for developing the process that produced high-quality KDP crystals for inertial confinement fusion lasers. Their process had, in only 27 days, produced a KDP crystal measuring 44 centimeters across. The results of the biomineralization studies conducted by the Livermore team may help NIF researchers in another effort. In inertial confinement fusion, a large amount of laser energy is delivered onto a target containing fusion fuel. The type of target envisioned for NIF ignition experiments (those designed to liberate more energy than the fusion fuel absorbs) is a 2-millimeter-diameter capsule containing deuterium–tritium (D–T) gas surrounded by a solid D–T layer. The specifications for the solid D–T layer are extremely demanding. Surface roughness and defects in the layer can promote hydrodynamic instabilities. Physicist Bernie Kozioziemski, who is on the team developing D–T layers for the ignition capsules, says, “The capsule needs to have a perfect seam between the D–T solid and the D–T gas inside. Any deviation from a perfect sphere indicates roughness either because the crystals are not smooth or because they are not joining well.” Solid D–T layers are formed by slowly cooling liquid D–T to its freezing temperature. Even the smallest crack may spread as the liquid is cooled to the required temperature. Growing perfectly uniform D–T layers that are free of defects has been a challenge because of the random orientations in the crystals’ initial growth patterns. The team plans to determine the optimal growth orientation for crystals and then find a way to grow this pattern repeatedly for each NIF ignition experiment. One possible approach is to mimic biological organisms by creating a nanoscale template to form a single seed crystal with a preset orientation in an ignition capsule, and then allow the seed to propagate. Comparing the smoothness of D–T layers grown from different seed crystal orientations would quickly indicate the ideal orientation. Understanding crystal formation is also important in maintaining high explosives (HE) for stockpile stewardship. Researchers need to know how HE evolves over time. “Many of the powders used in HE may form crystals after a length of time,” says Qiu. “As crystals form, the surface-area-to-volume ratio is decreased, which may affect safety and performance.” Researchers can solve problems with crystal formation in HE by using AFM to examine how HE crystals evolve under different environmental conditions. Uncovering the formation mechanism will allow researchers to design an effective means to suppress HE crystallization. In addition, researchers can explore the use of compounds that, like modifiers of biominerals, will control the crystallization process. (a) Interferometric images of a growing deuterium–tritium (D–T) crystal show a layer of the crystal that is growing more rapidly than those in the center, leading to a rough surface. (b) Visible light illuminates a transparent plastic shell in which D–T crystals have fused together to form a perfect circle, or interface, between a solid layer of D–T and the shell’s center of D–T gas. Liquid D–T is poured into the fill tube at the top, and the liquid is slowly cooled to form the solid layer. Click here for a high resolution photograph. 
Mapping Dissolution Mechanisms

Just as molecules most readily attach to the crystal at steps during growth, when molecules leave the crystal during dissolution, they do so more easily by detaching from the steps, the sites where they have fewer bonds. Understanding dissolution mechanisms has important implications for studies of metal corrosion, a critical issue for DOE’s Yucca Mountain Project. Yucca Mountain, Nevada, is the proposed location to store about 70,000 tons of waste from civilian nuclear power plants and highly radioactive waste from defense-related activities at DOE facilities. Livermore scientists have contributed to the project by characterizing the proposed underground site, determining the effects on the site from storing high-temperature radioactive wastes, and selecting and characterizing corrosion-resistant materials for the waste packages. (See S&TR, April 2004, Defending against Corrosion.)

The current repository design calls for waste to be stored in a package consisting of two nested canisters—an outer canister made of a highly corrosion-resistant metal (Alloy 22) and an inner canister made of a tough, nuclear-grade stainless steel (316NG). The proposed outer canister made of Alloy 22 consists of about 60-percent nickel, 22-percent chromium, 13-percent molybdenum, and 3-percent tungsten. Alloy 22 is extremely corrosion resistant at the high temperature and low humidity expected to prevail for hundreds to thousands of years in a repository. However, because it corrodes very slowly, Alloy 22 is difficult to study on laboratory time scales. Therefore, Orme’s team conducted studies using Inconel 600, an alloy of nickel–chromium–iron.

To determine how the chemical bonding at the metal surface dictates the initiation of corrosion, Orme and metallurgist Chris Schuh (a former Lawrence Fellow now at the Massachusetts Institute of Technology) combined electron backscatter diffraction (EBSD) with AFM. EBSD can identify the orientation of each crystal grain at the surface of a sample and create a spatial map of the orientations. After exposing the mapped sample to a corrosive environment, such as diluted hydrochloric acid, the team can then use AFM to determine which orientations dissolve faster than others by measuring the relative heights of different crystallographic orientations. Orme says, “One of the promising aspects of this work is that all facet orientations can be tested simultaneously, making it much easier to identify trends in how materials interact with their environment.”

In the studies with Inconel 600, the team expected the results to show that the metal grains with the fewest number of neighbor bonds at the surface corrode the fastest for the same reason that steps dissolve the fastest. Instead, the low-bonded facets were somehow better protected from dissolution than other facet orientations. To sort out this puzzle, the team enlisted postdoctoral researchers Jeremy Gray and Bassem El-dasher to carry out a series of experiments, placing Alloy 22 in increasingly corrosive environments. The researchers found that in the most aggressive environments, dissolution occurred as expected, according to the bonding of the metal. However, in less corrosive environments where oxides can form, the order changed. In highly corrosion-resistant materials, the metal is protected by a very thin layer of oxide, which like paint, provides a barrier between the environment and the metal, preventing it from dissolving.
However, unlike paint, the oxide continually re-forms if oxygen is present. Orme says, “The interesting twist here is the competition between metal dissolution and oxide growth. Both processes occur at the most reactive locations on the crystal, and it is a delicate balance to determine which will prevail.” The facets most susceptible to corrosion are also those most able to form a protective coating.

Materials scientists will continue to glean information about the crystal-growth process from the fascinating shapes and hierarchical designs of nature’s biomineral structures. “We’ve come a long way in our understanding of crystal growth,” says De Yoreo. “We understand many of the basic mechanisms by which modifiers control growth rate and shape. We also know more about surface science because of these studies. However, everything we’ve done so far has been in the laboratory. We, admittedly, still understand little about how organisms in nature control nucleation or how they form such remarkable crystalline architectures.” As Livermore researchers continue to unlock nature’s secrets of crystallization, they can look forward to applying this knowledge to many applications in materials science, geochemistry, and medicine and toward the fabrication of new classes of materials.

The Department of Energy's Office of Science is the single largest supporter of basic research in the physical sciences in the United States and is working to address some of the most pressing challenges of our time.
Rubella, sometimes called German measles, is an acute viral infection that causes a mild illness in children and a slightly more severe illness in adults. The disease is spread person to person through airborne particles and takes two to three weeks to incubate.

The following are the most common symptoms of rubella; however, each individual may experience symptoms differently. Symptoms may include a low-grade fever, swollen glands (especially behind the ears and at the back of the neck), and a rash that begins on the face and spreads downward. Rubella in pregnant women may cause serious complications in the fetus, including a range of severe birth defects. The symptoms of rubella may resemble other medical conditions. Always consult your physician for a diagnosis.

In addition to a complete medical history and medical examination, diagnosis is often confirmed with a throat culture and blood testing. Specific treatment for rubella will be determined by your physician based on factors such as your age, overall health, and medical history, and the extent of the disease. Treatment for rubella is usually limited to acetaminophen for fever.

Measles, mumps, and rubella (MMR) is a combination childhood vaccination that protects against these three viruses. MMR provides immunity for most people, and people who have had rubella are immune for life. Usually, the first dose of the MMR vaccine is administered when a child is 12 months old, and a second dose is given at 4 to 6 years of age. However, the second dose may be given before age 4, as long as at least 28 days have passed since the first dose was administered.
A Walk In The Clouds

Given that the Sun powers Earth's climate system and provides the energy for all life on our planet, it should come as no surprise that changes in solar activity can affect climate conditions. Because total solar irradiance varies only slightly, climate scientists have discounted our variable star as a driver of climate change. At the end of the 20th century, Henrik Svensmark, of the Danish Space Research Institute, and Eigil Friis-Christensen proposed that solar activity may be a controlling factor for climate by changing low-level cloud cover. Not surprisingly, this idea was disparaged by mainstream climate science, since it would diminish the importance of greenhouse gases, CO2, the IPCC's favorite daemon, in particular. Now, after several years of experimentation at CERN, the preliminary results are in, and it looks like Svensmark and Friis-Christensen were right after all.

The idea that low solar activity might cause Earth's climate to cool may not sound far-fetched, but the mechanism responsible for that cooling may seem counterintuitive—an increase in the number of cosmic rays striking Earth's atmosphere. For a century, scientists have known that charged particles from space constantly bombard Earth. Originating in distant stars and galaxies, these cosmic rays strike our planet's atmosphere, where they can ionize volatile compounds. This causes airborne droplets, or aerosols, to condense, providing the nuclei around which clouds can form. It is the formation of low-level clouds that cools Earth, and that formation is controlled by cosmic rays. Ultimately, the cosmic rays are controlled by the Sun. Here is how we described this revelation in The Resilient Earth, chapter 11:

The primary proponent of cosmic-ray-induced low-level cloud formation is Danish physicist Henrik Svensmark, of the Danish Space Research Institute. Svensmark and Eigil Friis-Christensen reported their discovery in a cogent paper in 1997: “Variation of Cosmic Ray Flux and Global Cloud Coverage — a Missing Link in Solar-Climate Relationships.” In it, they describe how ions created in the troposphere by cosmic rays could provide a mechanism for cloud formation. And, since the level of cosmic rays is controlled by the solar cycle, they suggested that the Sun is controlling Earth's climate variation by changing low-level cloud cover.

Svensmark and Nigel Calder wrote an excellent book, The Chilling Stars, describing the theory and the discoveries that led to its formulation. According to Svensmark: “Instead of thinking of clouds as a result of the climate, it’s actually showing that the climate is a result of the clouds, because the clouds take their orders from the stars.”

To help prove their hypothesis, an experiment was set up in a basement at the Danish National Space Center to verify that cosmic rays could cause low-level clouds to form under controlled conditions.

The heliosphere deflects cosmic rays. (Svensmark)

The SKY Experiment used a cloud chamber to mimic conditions in the atmosphere, including varying levels of background ionization and aerosol levels, particularly sulphuric acid (H2SO4). The SKY Experiment demonstrated that more ionization implies more particle nucleation. For more details, including video of Svensmark explaining his theory, see “Chilling Stars Author Henrik Svensmark On Video.” Still, it took years to convince European scientific funding agencies that this cosmic-ray/cloud-formation link was worth investigating.
Despite efforts to disprove the Sun/cosmic-ray/cloud link, eventually the CLOUD experiment, Cosmics Leaving Outdoor Droplets, was established. As I reported in 2009, the first experimental results were expected in 2011—and the preliminary results are in. In an article published online on the Nature website, the first report of experimental results is supportive of a cosmic-ray cloud-formation link. The work involved over 60 scientists from 17 countries. In “Cloud formation may be linked to cosmic rays,” the mainstream science journal has grudgingly admitted that Svensmark may, in fact, be correct. “The findings, published today in Nature, are preliminary, but they are stoking a long-running argument over the role of radiation from distant stars in altering the climate,” the news article states.

The best exhibit of the experiment's success is shown in the graph below, taken from the report's supplementary material. As Nigel Calder reported on GWPF, “Tucked away near the end of online supplementary material, and omitted from the printed CLOUD paper in Nature, it clearly shows how cosmic rays promote the formation of clusters of molecules (“particles”) that in the real atmosphere can grow and seed clouds.”

The significance of these results is underscored by the tepid, even hostile reception that the mainstream climate science community is giving them. From the equivocal title of the Nature news announcement to the several warmists quoted in the article, there was a definite chill in the air. The CLOUD experiment is “not firming up the connection,” contends Mike Lockwood, a space and environmental physicist at the University of Reading, UK, who is skeptical of the cosmic-ray connection. In an oddly titled article, “Cloud-making: Another human effect on the climate,” New Scientist, a vocal global-warming booster, quotes Jasper Kirkby, a physicist and lead investigator on the project, as saying “[t]his was a big surprise.” When Dr. Kirkby first described the theory in 1998, he suggested cosmic rays “will probably be able to account for somewhere between a half and the whole of the increase in the Earth's temperature that we have seen in the last century.”

The New Scientist article misleadingly spends its first paragraph talking about organic particles and trying to link the CLOUD results to agriculture and other human activities. “If it is significant on a global scale, it might mean that the natural emissions of organics is also important in cloud formation,” said Bart Verheggen of the Energy Research Centre of the Netherlands (see my earlier article on the same topic, “Airborne Bacteria Discredit Climate Modeling Dogma”).

Other responses were more on target, if more guarded. “I think it's an incredibly worthwhile and overdue experiment,” says Piers Forster, a climatologist at the University of Leeds, UK, who studied the link between cosmic rays and climate for the latest scientific assessment by the Intergovernmental Panel on Climate Change. But for now at least, he says that the experiment “probably raises more questions than it answers.” Even the more restrained Nature quotes Kirkby as saying, “[a]t the moment, it actually says nothing about a possible cosmic-ray effect on clouds and climate, but it's a very important first step.” In a more thoughtful moment, Kirkby added, “[p]eople are far too polarized, and in my opinion there are huge, important areas where our understanding is poor at the moment.” None of this can detract from the experimental findings, however.
Quoting from the actual paper's abstract: “We find that atmospherically relevant ammonia mixing ratios of 100 parts per trillion by volume, or less, increase the nucleation rate of sulphuric acid particles more than 100–1,000-fold.” A 1,000-fold increase in nucleation rate seems well out of the statistical noise as experimental results go. “Of course there are many things to explore, but I think the cosmic-ray/cloud-seeding hypothesis is converging with reality,” says Henrik Svensmark modestly of the report.

For his part, Kirkby hopes to eventually answer the cosmic-ray question. In the coming years, he says, his group is planning experiments with larger particles in the chamber, and they hope eventually to generate artificial clouds for study. “There is a series of measurements that we will have to do that will take at least five years,” he says. “But at the end of it, we want to settle it one way or the other.”

How cosmic rays help form nuclei that promote cloud formation. (CERN)

Undoubtedly, science will slowly move forward and eventually affirm or reject Svensmark's theory—that is how science works. For those who refuse to think of science as a struggle, with proponents of competing theories attacking one another, let this be an example. Warm-mongering CO2 demonizers tried to kill this theory in its cradle, claiming that performing experiments like SKY and CLOUD was just a waste of time and money. They would rather stay in their comfort zone, their delusions reinforced by computer models of their own devising, without need for all that tedious experimentation. Fortunately, more inquisitive minds prevailed.

Many scientists think that attacks on climate science dogma are an attack on all science, but that is not true. Such skepticism is what makes science work, and blind belief in current theories is antithetical to the advancement of human understanding. As more and more flaws have been found in the theory of anthropogenic global warming, real scientists have begun to look elsewhere for the real drivers of climate change—to the Sun, the stars, and the clouds.

Be safe, enjoy the interglacial and stay skeptical.
Presenting science to the public: The ethics of outreach
Joy Branlund, Southwestern Illinois College

Summary
This case study involves class discussion about (a) the environmental concerns of developing a new industrial project (in this case, a new mine in Minnesota), and (b) the ethics of communicating those impacts, both between industry and the public and between a scientist and the public. The first part of the discussion ties into previous class instruction on sulfide mining and its impacts. This activity was designed for use in an undergraduate introductory geoscience or environmental science class. The activity took place in a relatively small class (24 students) but could be scaled up or down.

Class size: 15 to 30 students

Skills and concepts that students must have mastered
The activity as presented requires basic knowledge of metal mining/processing techniques and related environmental concerns (the environmental impacts of surface mining, flotation, and leaching in order to extract sulfide minerals). However, the case can be modified to stress only ethical communication (and not mining knowledge).

How the activity is situated in the course
The activity was part of a mineral resource unit, specifically used to apply knowledge of mining (extraction and processing).

Goals
Content/concepts goals for this activity
By completing the activity, students should be able to:
- give examples of how mining, beneficiation, etc. affect society and how mining processes/extent are influenced by societal factors (i.e., economics)
- summarize factors that define ethical behavior in communicating science.

Higher order thinking skills goals for this activity
By completing the activity, students will:
- apply knowledge of mining processes and consequences to a novel (and foreign) potential mine site and population, and
- evaluate and critique communication given by a fictional geologist.

Other skills goals for this activity
Students will work in groups and present their group answers to the class.

Ethical Principles Addressed in this Exercise
The activity addresses both:
- the ethical considerations involving scientists' communication with the public, and
- ethical behavior of the public in responding to possible environmental threats.

Description and Teaching Materials
Several projects (new mines, new power plants, new levee construction, etc.) cannot open until they receive a permit from the state. To receive the permit, the company completes an environmental impact statement. During a public comment period, the public can read the environmental impact statement and then direct their opinions to their lawmakers. This activity addresses two components of the process: first, the public's role, and second, a geoscientist's role as educator of the public. The activity presented here specifically addresses a proposed sulfide mine in northern Minnesota (which was still in the public comment phase in spring 2013, when the case was taught). The activity was designed to incorporate small-group discussion, with groups reporting results to the entire class. The second portion of the activity would work well as a gallery walk.

Case Study Scenario
This information is what would be given to students:

PolyMet proposes to open a mine and processing plant in northern Minnesota (the location of which is labeled on the U.S. map). The company is planning to recover sulfide minerals (mostly copper) using surface-mining techniques. They have completed their environmental impact statement, and the public comment period has started.
Imagine that you are a resident of Minnesota.

Part 1. In a group, brainstorm answers to this question: What questions do you want answered before you decide to tell your lawmaker that you approve (or reject) PolyMet's plan?

Part 2. A geologist visits her local library to give a talk to Minnesotans who are concerned about mining in the north. In response to the question, "I'm very worried that the streams up there, and even Lake Superior, might become polluted. Will that happen?", the geologist says:

That's a good question. The Partridge and Embarrass rivers flow through the mine and processing sites, and these streams flow into the St. Louis River, which flows into Lake Superior. Good news: the Boundary Waters Canoe Area Wilderness will not be impacted. The mine will operate a wastewater treatment plant, and this will reduce sulfate levels in the water. (This is also good news, because the sulfate would hurt the wild rice harvests.) Liners will capture water seeping through waste rock piles, and the captured water will be treated. The company will also monitor water quality at places near the mine and downstream. Much of the mine and processing site was previously mined, and so this new wastewater treatment plant will actually make the water cleaner. However, modeling shows that there may be elevated levels of aluminum and lead downstream as a side effect of the project (not because of direct discharges from the mine site).

Answer the following questions about the geologist's presentation:
- The ethical requirement of the geologist is that she clearly present the scientific evidence people need to make a decision. Critique the geologist's answer. Did the geologist act ethically? How could her answer have been better?
- In her presentation, the geologist didn't state what she thinks ought to be done (whether she thinks the project should be approved or rejected). Again, considering ethical behavior, do you think she should have? Why or why not?
- What is the public's responsibility in this permitting process? If you knew someone who lives in Minnesota, what reasons would you give in order to encourage their involvement?
- The mining company might find that disclosing all relevant information could prevent it from reaching its goals, whether those be acquiring a permit, recruiting investors, or attaining needed land. Is it ethical to withhold information? Explain your answer.

Teaching Notes and Tips
Part 1: Developing questions requires students to apply knowledge of mining processes (specifically the challenges of sulfide mining) to this different example. The amount of information given ahead of time was limited purposefully, in order to expand the range of questions. Student questions will include those with a social and economic bent, but should also include mining-specific questions, such as, "How much waste rock will be created?" "How will waste rock and tailings be stored and disposed of?" "What will the company do to limit the effects of acid mine drainage?"

I had students brainstorm in small groups until they all had a long list of questions (and conversation died down). The small-group work can be shortened depending on time constraints. Then I went around the room and had representatives from each group share one question, which I listed in PowerPoint. We went around the room until all questions were listed. The length of this question list should give students a good idea of the breadth of information that should be given by the company to the public.
In fact, this environmental impact statement is 2,169 pages long! The instructor can choose to answer some of these questions (if he/she wishes to skim through the environmental impact statement, or the fact sheets), and/or to clarify why each question is worth asking. Questions students have raised include:
- How much area will be disturbed by mining?
- Who will inspect the mine operations, and how often?
- How will the company ensure employee safety?
- How many jobs will this bring to this community? What will these jobs pay? And will there be advancement opportunities?
- Will local people be employed? Will local people be trained to work at the mine? Or will there be a huge influx of strangers?
- Will these be full-time and permanent jobs, or temporary? If temporary, will the company help find more jobs for its employees?
- For how many years will the mine be open?
- Who owns the land? Does the company own the land on which it will mine?
- How much profit will the mine make? Would it be more profitable to open a mine elsewhere?
- How close is the nearest community?
- Will the mine bring tax revenue to the region, or will the company receive tax breaks/incentives to be there?
- How will waste rock be managed?
- What is the reclamation plan?
- How will plants and animals be affected?
- What sorts of air pollution will be created by mining equipment? And by the on-site power plant?
- How will they manage water flowing on/through the site? Will there be a water treatment plant on site?
- What kind of on-site monitoring will take place to check water and air quality?
- What plans are in place to deal with emergencies (chemical spills, worker safety issues, etc.)?
- How close is the mine site to streams and lakes? To where do nearby streams flow? Do people use this water?
- What are the start-up costs, and do we care?
- Is concentration [of the resource] going to happen on-site? How will material be transported from the mine to the concentration site to the market?

Part 2 stresses ethical considerations. Again, students can answer the posed questions in smaller groups and then report their suggestions to the larger class. Alternatively, Part 2 can be done as a gallery walk (with each of the four questions written on flipchart paper, and groups rotating to each question, where they add to the answers).

The case covers ethical questions in three areas: science communication, the role of a scientist in society at large, and the role of nonscientists when dealing with science topics (and science ethics).

Communications with the public are only successful if the public trusts the scientist. Therefore, the scientist must:
- Tell the truth without omission. Provide a clear, truthful description of (ideally) peer-reviewed* results, including: methods, uncertainty, participating scientists, whether results differ from other studies, whether other scientists disagree and why, possible negative implications/consequences of the results, other possible explanations for the results, and possible conflicts of interest. The results (or the importance of results, or uncertainties in results) should not be over- or under-emphasized.
- Respect the audience. Never try to manipulate or use the public (even for a good cause). Listen to (and value) non-scientific arguments and points of view. Communication should not happen for personal or institutional benefit. Think well of the audience; the public is capable of judging scientific evidence and making sound decisions, as long as the communicated science is clear, jargon free, and thorough.
- Explain how science works.
People (especially non-scientists) need to be reminded (a) that uncertainty in science doesn't mean disagreement, (b) that theories are simplified explanations of nature, not truth (but also not guesses), (c) that science is based on observation, and our scientific explanations might change when new observations are made, and (d) that predictions carry with them some (or a lot of) uncertainty.

*Science ethics dictates that a scientist should not discuss research until the results have been peer-reviewed. In the case presented here, however, company results do not go through the peer-review process. The guest scientist should therefore mention any peer-reviewed data that agree or disagree with the company's findings.

The role of the scientist in society at large
There have been debates about how science fits into society, ranging from complete independence (science is separate from, and special in relation to, society) to integration (science should be completely integrated with society) (Briggle and Mitcham, 2012). Those who favor independence would argue that values must be kept separate from facts, and thus scientists should present the facts and let other members of society determine the values. However, those who favor more integration acknowledge that scientists are humans, and thus have values and societal responsibilities. Carrada (2006) states, "Scientists should declare the values at the root of their work, but also be ready to divulge the social implications of their work as well as the work of others, and their own opinion, positive or negative." The fact that the NSF requires a Broader Impacts statement suggests that the scientific enterprise expects scientists to be citizens and to positively impact their societal realm.

Nonscientists making science-related decisions
Nonscientists are constantly faced with using science and making science-related decisions. They will use (and dispose of) technology developed using science, they will rely on (and have their behaviors and beliefs changed by) the scientific body of knowledge, and they will participate in larger public policy debates because of how science affects them. Educators must empower students to make these judgments; scientists must ethically provide the scientific information that the public needs, trust nonscientists to make science-related decisions, and respect the non-science-related concerns that arise in the debate. Everyone must learn to be involved in the decision-making process, as decisions made only by scientists, politicians, and/or industry will not involve all of society's concerns.

The case can easily be modified to incorporate a local industrial project or a more timely example (the information in Part 1 and the geologist's speech in Part 2 would change, but otherwise the questions could remain the same).

Assessment
Assessment is embedded in the activity: the instructor gauges understanding by paying attention to, and adding to, the discussion points.

References and Resources
Briggle, Adam, and Mitcham, Carl (2012) Ethics and Science: An Introduction. Cambridge Applied Ethics Series. Cambridge University Press. ISBN: 978-0-521-87841-8.
Carrada, Giovanni (2006) Communicating Science. European Commission, Brussels. 76 pp. Available online at: http://ec.europa.eu/research/science-society/pdf/communicating-science_en.pdf
Johnson, Branden B. (1999) Ethical Issues in Risk Communication: Continuing the Discussion. Risk Analysis 19(3): 335-348.
Some 50 million hectares, about 17 percent of India's land area, were regarded as forestland in the early 1990s. In FY 1987, however, actual forest cover was 64 million hectares. Because more than 50 percent of that land was barren or brushland, the area under productive forest was actually less than 35 million hectares, or approximately 10 percent of the country's land area. The growing population's high demand for forest resources continued the destruction and degradation of forests through the 1980s, taking a heavy toll on the soil. An estimated 6 billion tons of topsoil were lost annually. Nevertheless, India's 0.6 percent average annual rate of deforestation for agricultural and nonlumbering land uses in the decade beginning in 1981 was one of the lowest in the world and on a par with Brazil's.

Many forests in the mid-1990s are found in high-rainfall, high-altitude regions, areas to which access is difficult. About 20 percent of total forestland is in Madhya Pradesh; other states with significant forests are Orissa, Maharashtra, and Andhra Pradesh (each with about 9 percent of the national total); Arunachal Pradesh (7 percent); and Uttar Pradesh (6 percent). The variety of forest vegetation is large: there are 600 species of hardwoods, sal (Shorea robusta) and teak being the principal economic species.

Conservation has been an avowed goal of government policy since independence. Afforestation increased from a negligible amount in the first plan to nearly 8.9 million hectares in the seventh plan. The cumulative area afforested during the 1951-91 period was nearly 17.9 million hectares. However, despite large-scale tree-planting programs, forestry is one arena in which India has actually regressed since independence. Annual fellings at about four times the growth rate are a major cause. Widespread pilfering by villagers for firewood and fodder also represents a major decrement. In addition, the forested area has been shrinking as a result of land cleared for farming, inundations for irrigation and hydroelectric power projects, and construction of new urban areas, industrial plants, roads, power lines, and schools.

India's long-term strategy for forestry development reflects three major objectives: to reduce soil erosion and flooding; to supply the growing needs of the domestic wood-products industries; and to supply the needs of the rural population for fuelwood, fodder, small timber, and miscellaneous forest produce. To achieve these objectives, the National Commission on Agriculture in 1976 recommended the reorganization of state forestry departments and advocated the concept of social forestry. The commission itself worked on the first two objectives, emphasizing traditional forestry and wildlife activities; in pursuit of the third objective, the commission recommended the establishment of a new kind of unit to develop community forests. Following the leads of Gujarat and Uttar Pradesh, a number of other states also established community-based forestry agencies that emphasized programs on farm forestry, timber management, extension forestry, reforestation of degraded forests, and use of forests for recreational purposes. Such socially responsible forestry was encouraged by state community forestry agencies.
They emphasized such projects as planting wood lots on denuded communal cattle-grazing grounds to make villages self-sufficient in fuelwood, to supply timber needed for the construction of village houses, and to provide the wood needed for the repair of farm implements. Both individual farmers and tribal communities were also encouraged to grow trees for profit. For example, in Gujarat, one of the more aggressive states in developing programs of socioeconomic importance, the forestry department distributed 200 million tree seedlings in 1983. The fast-growing eucalyptus is the main species being planted nationwide, followed by pine and poplar.

The role of forests in the national economy and in ecology was further emphasized in the 1988 National Forest Policy, which focused on ensuring environmental stability, restoring the ecological balance, and preserving the remaining forests. Other objectives of the policy were meeting the need for fuelwood, fodder, and small timber for rural and tribal people while recognizing the need to actively involve local people in the management of forest resources. Also in 1988, the Forest Conservation Act of 1980 was amended to facilitate stricter conservation measures. A new target was to increase the forest cover to 33 percent of India's land area from the then-official estimate of 23 percent. In June 1990, the central government adopted resolutions that combined forest science with social forestry, that is, taking the sociocultural traditions of the local people into consideration.

Since the early 1970s, as they realized that deforestation threatened not only the ecology but their livelihood in a variety of ways, people have become more interested and involved in conservation. The best-known popular activist movement is the Chipko Movement, in which local women decided to fight the government and vested interests to save trees. The women of Chamoli District, Uttar Pradesh, declared that they would embrace--literally "stick to" (chipkna in Hindi)--trees if a sporting-goods manufacturer attempted to cut down ash trees in their district. Since the initial activism in 1973, the movement has spread and become an ecological movement, leading to similar actions in other forest areas. The movement has slowed the process of deforestation, exposed vested interests, increased ecological awareness, and demonstrated the viability of people power.

Source: U.S. Library of Congress
Lab # 4 - SYSTEMATICS

We could use any means of classification to organize the world's fossil and living species, and answer the above question. Organisms could be grouped on the basis of size, whether they lived on land or in the sea, or even by color. However, one of the main tenets of comparative biology is that there is order in nature: an order which manifests itself in patterns of similarity of appearance among all the organisms on the Earth. There are two fundamentally different ways of explaining this striking similarity, which are often dialectically opposed in western thought. One relies on the belief that all the different types of organisms are the result of order imposed by a divine omnipotence. The other approach is to look at the similarities between groups of organisms and to see them as a manifestation of the degree of relatedness existing between these organisms due to descent with modification (i.e., evolution). In other words, all life is descended from a common ancestor (or at least a limited number of common ancestors), and there is a process called evolution which is responsible for the splitting of lineages and the divergence of form that results in the diversity of life. [There is also the occasional combining of lineages, as in the symbiotic organelle theory.]

Since life has been evolving for 4 billion years without human observers, we cannot possibly know the exact evolutionary history of life. However, we can make inferences about the evolutionary relationships of organisms on the basis of their shared similarities, because the traits present in an ancestor tend to be passed on to its descendants. As you remember from class, a character is a feature or thing which we can examine or label. A character which is an innovation developed in an ancestor of a group is called a derived character relative to the characters seen in the ancestors of the founder of the new group. The ancestors of the founder of the new group are said to have at least one primitive character relative to the derived character. The character which is derived because it is an innovation in the ancestor of a group is, at the same time, a primitive character with respect to the members of that group. A character shared by all members of the group is, as you might expect, a shared character. If the character is both in the new state and shared by the members of the group in question, it is a shared derived character.

If a group of organisms is believed to have shared a common ancestor, the group containing that ancestor and all of its descendants is called a monophyletic group. A monophyletic group must be recognized by the presence of at least one shared derived character. A character found only in one of the groups being studied is called a unique derived character, and does not help us relate this group to any other group. An example of a unique derived character for humans is frontal sex.

The fact that organisms from different species resemble each other does not necessarily mean that they are closely related. They might resemble each other because they share a large number of primitive characters. On the other hand, they might share a character that evolved independently in the groups as a result of convergent evolution. This resemblance may be due to different lineages of organisms adapting to very similar environments. Similarities which evolve through means other than descent are called analogous characters. The wings of a bat, a bird, and a butterfly all perform the same function, and have similar form.
However, on the basis of many other dramatically different characters, we can conclude that this aerodynamic limb evolved independently in these three organisms from ancestors who did not have such a structure. Analogous characters such as those just described are not used to group organisms in an evolutionary classification.

When we have an array of organisms and begin our search for shared derived characters, we need to know which characters are primitive for all of the organisms we are examining. We can do this by looking for the characters every member, or almost every member, seems to have, and then looking at a group of organisms outside the group in question (an outgroup) to see what characters are shared with the group in question. Outgroups allow the polarity of characters (i.e., primitive versus derived) to be established.

Similarities between organisms which do share a common ancestor are called homologous characters. Relative to groups not possessing these characters, they are also shared derived characters uniting the group which has them. The front limbs of a dog, a bird, a whale, and a human perform very different functions, yet they share a common anatomical structure: all have a single large bone, the humerus, which is attached at one end to the shoulder and at the other end to two smaller bones, the radius and the ulna. The same bones are present in the wings of bats and birds. If the front limbs of each of these organisms had evolved independently from different ancestors without front limbs, it would be hard to imagine that such striking similarities would have arisen. It is far more reasonable to conclude that these similarities are present because the common ancestor of all these organisms had the same type of front limb with the same bones. On the basis of this assumption, all of these organisms are classified together as tetrapods, and their front limbs are called homologous structures. Thus, this type of front limb unites these different animals as a shared derived character.

Two other commonly recognized schools of systematics, besides cladistics (which groups organisms only by shared derived characters), are evolutionary systematics, which groups by shared derived and shared primitive characters, and phenetics, which groups by convergent characters, shared derived characters, and shared primitive characters.

The mechanics of the process consist of a set of hypotheses and tests. First, we construct a hypothesis of relationship for the organisms in question. This hypothesis can come from anywhere: general resemblance, a whim, or an authoritative text. Second, we look for characters which allow us to define groups. Third, we look for the distribution of primitive characters which stand in contrast to the derived characters. Fourth, we look for unique derived characters which define each of the organisms. Fifth, we construct a cladogram and hang the distribution of the characters on it.

OK - now we have groups defined by shared derived characters, and we have our cladogram with our characters. It is now time to test the hypothesis by looking at the characters which could define groups other than those in the hypothesis in question. These characters are in conflict and must be explained by some ad hoc argument other than simple descent from a common ancestor. If you need more ad hoc arguments to justify your cladogram than you have shared derived characters supporting your cladogram, your cladogram must be discarded.
If your cladogram survives this test, the next step is to look for more characters, hang them on your cladogram, and see how they fit. If they do not fit, and there are many of them, your hypothesis again fails and you must look for a new and better one. This criterion, which selects the hypothesis requiring the fewest ad hoc hypotheses, is called the principle of parsimony, and it is a hallmark of science in general. Ultimately, we think something is true, whether in general life or in systematics, when it has survived a very large number of tests. (A small worked sketch of this parsimony counting follows the instructions below.)

For this lab:
- Choose your outgroup, justifying your choice (remember, the outgroup is used to determine which characters are primitive and which are derived; only the derived characters will help you make monophyletic groups).
- Show all the synapomorphic and autapomorphic characters on your cladogram.
- List the plesiomorphic conditions (remember to state which group they are for).
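To make the parsimony bookkeeping concrete, here is a minimal sketch of how the support-versus-conflict tally might be automated for a single hypothesized group. The taxa, the hypothesis, and the 0/1 character scores below are invented for illustration; a real lab would score characters from actual specimens, and this sketch evaluates only one grouping at a time.

```objc
// parsimony.m -- a minimal sketch of the counting rule described above.
// All taxa and character scores are hypothetical.
// Compile: clang -framework Foundation parsimony.m -o parsimony
#import <Foundation/Foundation.h>

#define NTAXA  4   // row 0 is the outgroup
#define NCHARS 5

int main(void) {
    @autoreleasepool {
        // Character states: 0 = primitive (matches the outgroup), 1 = derived.
        int state[NTAXA][NCHARS] = {
            { 0, 0, 0, 0, 0 },   // outgroup: defines the primitive condition
            { 1, 1, 0, 1, 0 },   // taxonA
            { 1, 1, 0, 0, 1 },   // taxonB
            { 0, 0, 1, 1, 0 },   // taxonC
        };
        // Hypothesis to test: (taxonA, taxonB) form a monophyletic group.
        BOOL inGroup[NTAXA] = { NO, YES, YES, NO };
        int groupSize = 0;
        for (int t = 1; t < NTAXA; t++) {
            if (inGroup[t]) groupSize++;
        }

        int shared = 0, adHoc = 0;
        for (int c = 0; c < NCHARS; c++) {
            int inside = 0, outside = 0;
            for (int t = 1; t < NTAXA; t++) {      // skip the outgroup
                if (state[t][c] == 1) {
                    if (inGroup[t]) inside++; else outside++;
                }
            }
            if (inside == groupSize && outside == 0) {
                shared++;   // a shared derived character supporting the group
            } else if (inside > 0 && outside > 0) {
                adHoc++;    // the derived state crosses the group boundary and
                            // needs an ad hoc argument (convergence or reversal)
            }               // other patterns (e.g. autapomorphies) neither help nor hurt
        }
        NSLog(@"Supporting shared derived characters: %d", shared);
        NSLog(@"Characters needing ad hoc arguments:  %d", adHoc);
        NSLog(@"%s", adHoc > shared ? "Discard this cladogram."
                                    : "Hypothesis survives; keep testing.");
    }
    return 0;
}
```

With the hypothetical scores above, two characters support (taxonA, taxonB), one character conflicts, and two are autapomorphies, so the grouping survives and we go look for more characters, exactly as the test procedure prescribes.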
A crossword is a word puzzle that normally takes the form of a square grid of black and white squares. The goal is to fill the white squares with letters, forming words or phrases, by solving clues which lead to the answers. In languages which are written left-to-right, the answer words and phrases are placed in the grid from left to right and from top to bottom. The black squares are used to separate the words or phrases. Squares in which answers begin are usually numbered; the clues are then referred to by these numbers and a direction, for example, "4-Across" or "20-Down".
- Even though bioenergy is a renewable resource, its use is not always sustainable
- Bioenergy is not carbon neutral, and its use doesn't always cut greenhouse gas emissions
- The Intergovernmental Panel on Climate Change (IPCC) does not consider biomass used for energy to be automatically "carbon neutral"
- Europe's demand for biomass may outstrip sustainable supply
- Bioenergy is different from other renewable energy sources
- Bioenergy industries use waste and residue biomass that can have other uses
- Bioenergy has a limited role to play in a renewable energy mix
- It's not always better for the climate to use bioenergy than fossil fuels

Even though bioenergy is a renewable resource, its use is not always sustainable

It is true that biomass from plants, trees and other organic matter has the ability to regrow after being cut and harvested. However, whereas solar, wind or wave power can't be depleted or over-exploited by human actions, biomass resources can. For example, over-exploitation could mean that forests are cut to the extent that it harms their capability to produce other ecosystem services and to maintain biodiversity, or that soil is cultivated so intensively that its capacity to grow plants declines. The sustainability of biomass use for energy therefore requires much more careful consideration than the use of non-depletable sources. In Europe, as well as globally, our ecological footprint is already bigger than the global biocapacity, which refers to the capacity of ecosystems to produce useful biological materials (vital for humankind) and to absorb waste and emissions generated by humans. This means that already today our use of biomass resources is not on a sustainable basis, and there's too much pressure on land and forests from different human needs.

Bioenergy is not carbon neutral, and its use doesn't always cut greenhouse gas emissions

It is widely assumed that biomass combustion is "carbon neutral" and produces no greenhouse gas emissions. Yet it is obvious that when biomass, in other words organic matter, is burned, carbon is released from exhaust pipes and chimneys. So why is bioenergy supposedly free of carbon emissions? The first basic error in the carbon accounting of bioenergy is the failure to account for what the biomass and land would have produced had they not been used for bioenergy. Greenhouse gas reductions are usually measured against a baseline level of emissions (in international policies, the 1990 emission level is an often-used baseline). With plants and trees, the baseline situation is that they keep on growing and absorbing carbon. When the biomass is harvested and burned for energy instead, the carbon benefit of that continued growth is lost. Alternatively, the baseline situation can be that the biomass or land is used for other human needs, e.g. wood for construction and land for food production. If the biomass or land is used for energy instead, construction material and food will need to be produced elsewhere. To evaluate the carbon balance of bioenergy use correctly, the baseline, or so-called counterfactual scenario, needs to be taken into account. For example, if land is used to produce crops for energy rather than food, the food typically needs to be grown somewhere else, as food demand remains constant or even rises with a growing world population.
If this leads to additional land clearing for agriculture, carbon will be released from the cleared ecosystems, such as forests (a phenomenon called indirect land use change, or ILUC). In the case of forests and other slow-growing biomass, the carbon-neutrality assumption also falsely suggests that all the biomass harvested will grow back with time. In the case of an old-growth forest replaced by a commercial forest, this is not usually what happens. There is also a time lag in the re-absorption of carbon released in combustion, since the regrowth of trees can take several decades (a phenomenon known as carbon debt). All of these impacts and their emissions are ignored in current energy policies, which mostly, and wrongly, assign bioenergy a zero-carbon factor. If the use of bioenergy replaces the use of fossil fuels, more carbon will be left stored underground in the form of fossil fuels. However, this benefit comes at the expense of less carbon stored by plants and soils. Bioenergy reduces CO2 emissions only to the extent that the first effect is larger than the second.

The Intergovernmental Panel on Climate Change (IPCC) does not consider biomass used for energy to be automatically "carbon neutral"

International standards for the accounting of greenhouse gas emissions have been developed for the purposes of the international climate convention (the United Nations Framework Convention on Climate Change, UNFCCC). The UNFCCC is supported by the Intergovernmental Panel on Climate Change, a scientific intergovernmental body which also develops guidance on greenhouse gas accounting. Under accounting for the international climate convention, countries report their emissions from energy use and from land use separately. For example, if a hectare of forest is cleared and the wood is used for bioenergy, the carbon lost from the forest is counted as a land-use emission. To avoid double-counting, the rules therefore allow countries to ignore the same carbon when it is released from a chimney. This accounting principle does not assume that biomass is carbon neutral, but rather that the emissions can be reported in the land-use sector. As the IPCC has clearly stated, it "does not automatically consider biomass used for energy as 'carbon neutral', even if the biomass is thought to be produced sustainably".

Europe's demand for biomass may outstrip sustainable supply

Europe is already using significant amounts of biomass for energy. It's estimated that roughly one half of the wood harvested in Europe is actually used for energy, either directly or as part of an industrial process whose main output is material such as paper or pulp (Mantau, 2010). Wood from forests and land to grow crops are the most crucial resources when it comes to producing biomass. Several studies (examples here and here) already indicate that the potential in Europe to increase forest loggings or to cultivate more land is very limited. The EU's projected demand for wood by 2030, assuming that the current growth in the use of wood for energy continues, will probably outstrip the amount that can be safely and sustainably extracted from European forests. This means that Europe will rely more on imported wood or see degradation of its own forests. The amount of land that can be used for energy crops without displacing food or damaging valuable habitats has been estimated at a maximum of 1.3 million hectares. In 2010, roughly three times that much land was already used for biofuels production in the EU.
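To pull the carbon-accounting argument above together in one place, the net climate effect of a bioenergy project over a time horizon T can be sketched as a simple balance. The notation below is our own illustrative shorthand, not an official UNFCCC or EU methodology:

\[
\Delta E_{\mathrm{net}}(T) \;=\; E_{\mathrm{stack}} \;+\; E_{\mathrm{ILUC}} \;-\; C_{\mathrm{regrowth}}(T) \;-\; E_{\mathrm{fossil\,avoided}}
\]

Here \(E_{\mathrm{stack}}\) is the CO2 released at the chimney, \(E_{\mathrm{ILUC}}\) covers emissions from indirect land-use change, \(C_{\mathrm{regrowth}}(T)\) is the carbon re-absorbed by regrowth (relative to the counterfactual baseline) up to time \(T\), and \(E_{\mathrm{fossil\,avoided}}\) is the fossil emission displaced. Bioenergy benefits the climate only when \(\Delta E_{\mathrm{net}}(T) < 0\) within a policy-relevant horizon. Declaring biomass "carbon neutral" amounts to setting \(E_{\mathrm{stack}}\) and \(E_{\mathrm{ILUC}}\) to zero and treating regrowth as instantaneous; the carbon debt described above is the early period during which \(\Delta E_{\mathrm{net}}(T)\) stays positive because regrowth has not yet caught up.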
Bioenergy is different from other renewable energy sources

There are several ways in which bioenergy differs from other renewable energy sources such as wind, solar or wave power. Firstly, even though renewable, biomass resources can be depleted and over-exploited by human actions, and their capacity to renew can be hampered; other renewable energy sources are hardly affected by humans. Secondly, energy production from biomass is based on combustion, as is the case with fossil fuels such as coal or oil. This means that biomass burning directly creates heat, unlike other renewables. It also means that much of the energy infrastructure needed for bioenergy is similar to that of fossil fuels. With a few modifications, biomass can often be burned in the same power plants as coal or processed into fuels that can be used in the same tanks as gasoline in transportation. Reliance on biomass is therefore not a strong driver for the changes needed for an 'energy transition', or for changes in infrastructure such as decentralised energy production, electrification of transport or closure of inefficient old power plants. In other words, biomass co-firing with coal allows the fossil-fuel-based business model to continue. For example, as a result of the EU's efforts to increase the use of bioenergy, more power plants have started to co-fire biomass with coal. This results in very inefficient energy production and can prolong the life of old coal power plants that would otherwise have reached the end of their life. Finally, since bioenergy always requires combustion, it produces several emissions apart from CO2, just like coal or other fossil fuels. Biomass combustion typically produces a lot of small particulate matter (PM) emissions, which can affect the heart and lungs and cause serious health effects.

Bioenergy industries use waste and residue biomass that can have other uses

Energy companies often declare that they only use biomass resources not needed by other industries, particularly in the case of wood: they claim that they only use the leftover residues. However, the paper and pulp and wood-working sectors have already clearly identified the energy sector as a competitor for the same wood resources, which indicates that the energy sector isn't only using others' leftovers. The competition has grown due to renewable energy policies that have resulted in subsidies for the use of wood for energy, without any limitations or constraints. There is also direct evidence, particularly from the southern United States, which is currently the biggest source of the wood imported for energy use in Europe, that whole trees are harvested for energy and that harvesting for energy has been carried out in biodiversity-rich forests.

Bioenergy has a limited role to play in a renewable energy mix

In an effort to move away from fossil fuel use towards more sustainable, renewable energy sources and a low-carbon future, bioenergy can have a role to play. Practically all scenarios and models of energy use in the next decades – assuming we take the fight against climate change seriously – assume that a certain amount of energy will be produced with bioenergy. This applies both to scenarios of international institutions such as the International Energy Agency (IEA) and to those of environmental NGOs such as Greenpeace or WWF. The share of bioenergy and its role in the energy sector nevertheless varies significantly between different scenarios, meaning there are many alternatives to choose from.
Scenarios advocating high levels of bioenergy use tend to focus on the energy sector only, with less consideration for impacts on ecosystems and raw-material markets or for the availability of land, and with less precaution in general. Other scenarios show that moving to a renewable energy future is also possible if we limit bioenergy use to the sustainable availability of waste- and residue-based resources and to a very limited area of land for energy crops, with bigger overall environmental benefits from biomass use. Such scenarios usually highlight the use of biomass in the heating sector and for specific uses in transportation where alternatives to combustion engines are harder to find. They also assume increased efforts in the efficiency of biomass burning.

It's not always better for the climate to use bioenergy than fossil fuels

Increased use of bioenergy today is mostly driven by policies that aim to tackle climate change and reduce greenhouse gas emissions. Given these policy aims, comparing the emission levels of bioenergy and fossil energy should be a priority. Several studies have already shown that if the full carbon impacts are considered (including those resulting from indirect effects and from carbon-stock changes in ecosystems due to bioenergy use), bioenergy does not always reduce emissions compared to fossil fuels. This is particularly the case with biodiesel made from soy, palm oil or rapeseed, which requires land to be grown and can lead to indirect land use change (e.g. the clearing of forests for agricultural land elsewhere), and with wood pellets if they have, for example, led to increased harvests in forests and are used to produce electricity only.
Objective-C is the primary programming language you use when writing software for OS X and iOS. It’s a superset of the C programming language and provides object-oriented capabilities and a dynamic runtime. Objective-C inherits the syntax, primitive types, and flow control statements of C and adds syntax for defining classes and methods. It also adds language-level support for object graph management and object literals while providing dynamic typing and binding, deferring many responsibilities until runtime. The most important thing to do when learning Objective-C is to focus on concepts and not get lost in the technical details of the language. The purpose of learning a programming language is to become a better programmer; that is, to become more effective at designing and implementing new systems and at maintaining old ones.

Xcode is Apple’s integrated development environment (IDE) for Mac, iPhone, and iPad app development. It includes not only a source code editor, but also an interface builder, a device simulator, a comprehensive testing and debugging suite, the frameworks discussed in the previous section, and everything else you need to make apps. While there are other ways to compile Objective-C code, Xcode is definitely the easiest. We strongly recommend that you install Xcode now so you can follow along with the examples in this tutorial. It is freely available through the Mac App Store.

C was conceived and created as a procedural programming language, whereas Objective-C was to be object-oriented, hence the name. In a procedural language, the code is focused around variables, data, and functions — how to store data and what to do with the data. In contrast, an object-oriented language focuses on creating objects, which are then used to do certain things, just like objects, or “things”, do in real life. Object-oriented code seems to involve more work initially—there is a lot of “boilerplate” code for even the simplest objects. Fortunately, most of this code is already provided in Xcode’s templates, and the objects will quickly become more useful.

So what is an object? Put simply, it is a “thing.” Throughout this book, one of the objects that we will be creating will be a Die — the kind you might find in a board game. From the program’s perspective, the die is a “black box” — it hides its inner workings; the object performs any task that is asked of it (assuming that the object has been programmed to actually perform the task), and when it finishes, the object is no longer used. How the object performs its task is irrelevant, as far as the program itself is concerned. Once you create an object, you can then tell your program to produce as many of them as you need. Therefore, your die object can create a pair of itself — a pair of dice. These dice have traits, such as color, size, or the number of faces. You can also perform actions with these dice — you can roll one, or you can roll both of them. After rolling both of them, you would then add, or perhaps multiply, the resulting numbers. From a higher-level viewpoint, all the program has to do is ask the dice to roll themselves and report a total. The program does not have to know how the dice do that. In fact, if you were not the original creator of the die object, you wouldn’t either — and that’s perfectly fine. Object-oriented programs allow developers to hide the inner workings of their program, while also making the program more efficient to run and to maintain.
Object-oriented programming has become the de facto convention for most large programs, and it will likely remain so for years to come.
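To make the Die example concrete, here is a minimal sketch of how such an object might look. The class name, the sides property, and the roll method are illustrative choices of ours, not the exact code developed later in this book:

```objc
// Die.m -- a minimal, illustrative Die object (names are our own choices).
// Compile: clang -framework Foundation Die.m -o die && ./die
#import <Foundation/Foundation.h>
#include <stdlib.h>

// The interface declares what a Die can do; its inner workings stay hidden.
@interface Die : NSObject
@property (nonatomic) NSUInteger sides;   // a trait, e.g. 6 for a board-game die
- (instancetype)initWithSides:(NSUInteger)sides;
- (NSUInteger)roll;                       // an action the Die performs for you
@end

@implementation Die
- (instancetype)initWithSides:(NSUInteger)sides {
    self = [super init];
    if (self) {
        _sides = sides;
    }
    return self;
}
- (NSUInteger)roll {
    // How the roll happens is the Die's business; callers never need to know.
    return (NSUInteger)arc4random_uniform((uint32_t)self.sides) + 1;
}
@end

int main(void) {
    @autoreleasepool {
        // A pair of dice: create as many objects as you need.
        Die *first  = [[Die alloc] initWithSides:6];
        Die *second = [[Die alloc] initWithSides:6];
        NSUInteger total = [first roll] + [second roll];
        NSLog(@"The pair of dice rolled a total of %lu", (unsigned long)total);
    }
    return 0;
}
```

Note how main only asks each Die to roll and sums the results; the random-number call inside roll is exactly the kind of inner working the rest of the program never needs to see.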
About this Interactive | How to Use This Site

Dynamic Earth is an interactive Web site where students can learn about the structure of the earth, the movements of its tectonic plates, and the forces that create mountains, valleys, volcanoes, and earthquakes. The first section focuses on the layers that make up the earth — from the thin crust on the surface all the way down to the metallic core at the very center. Next, the interactive explores the concept of plate tectonics — the well-accepted theory that states the earth is broken up into about a dozen separate plates that are in constant motion. Students will learn the names of the tectonic plates and will be able to identify whether certain plates are moving toward, spreading apart from, or sliding past each other. Finally, students will learn how mountains and other structures, and earthquakes and other major geological events, are caused by the slipping, sliding, and colliding of tectonic plates.

Learning about the earth is an important topic for students of all ages. Global warming is a pressing social and environmental concern; to address this problem, students must become informed citizens. The Dynamic Earth interactive presents science concepts that every student needs to learn to better understand forces within our planet.

According to the National Science Education Standards (1996), all students in grades 5-8 should develop an understanding of the fundamental concepts and principles that underlie Content Standard D: Earth and Space Science, including the structure of the earth's system and Earth's history:

STRUCTURE OF THE EARTH SYSTEM
- The solid earth is layered with a lithosphere; hot, convecting mantle; and dense, metallic core.
- Lithospheric plates on the scale of continents and oceans constantly move at rates of centimeters per year in response to movements in the mantle. Major geological events, such as earthquakes, volcanic eruptions, and mountain building, result from these plate motions.
- Landforms are the result of a combination of constructive and destructive forces. Constructive forces include crustal deformation, volcanic eruption, and deposition of sediment, while destructive forces include weathering and erosion.

EARTH'S HISTORY
- The earth processes we see today, including erosion, movement of lithospheric plates, and changes in atmospheric composition, are similar to those that occurred in the past. Earth's history is also influenced by occasional catastrophes, such as the impact of an asteroid or comet.

With these expectations in mind, the specific goals of Dynamic Earth are for students to be able to:
- Identify the different components of earth's structure. Visual diagrams and interactive presentations introduce students to the main components of the earth — the crust, mantle, and core — and the lithosphere that anchors the continents and oceans.
- Understand the concepts of plate tectonics, recognizing that the earth's tectonic plates are in constant motion. Students will discover how scientists figured out that the current arrangement of continents on the globe is the result of a specific history of movements in the lithosphere, which is broken into sections called tectonic plates. Students will observe images of the earth at different points in history and see that the continents were not always located where they are today. Students also will predict how the continents might look in the future.
- Describe the results of interactions between tectonic plates.
Various landforms (such as mountains and valleys) and specific geologic events (such as earthquakes and volcanoes) are the result of movements along the boundaries between tectonic plates. After learning about the different ways that tectonic plates can meet and interact, students will discover that many common landforms and phenomena are caused by their meeting, spreading, and shifting.

How to Use This Site

Dynamic Earth consists of four sections and an assessment. Each section explores one aspect of the earth's structure and the movement of its tectonic plates. Simply follow the instructions on the screen to learn about the layers that make up the earth; how the continents arrived at their current locations; the constant movement of the tectonic plates; and the volcanoes, earthquakes, and other events that result from the movements of the plates. Students will view animations, read explanations, and use their mouse to drag and drop the earth's continents into their correct places, highlight features on a map, and cause earth's tectonic plates to move. At various points, students will check their knowledge by taking a quick quiz or playing a game to see how much they have learned about the Dynamic Earth.

Students should read section introductions carefully, as they give a basic overview of concepts, and use the Glossary to look up definitions of unfamiliar terms. Using models of the earth's crust and making comparisons to familiar objects will help students retain the Dynamic Earth information. For example, students can bring in materials that resemble the earth's layers and build a class model of the earth as a way to make the information more concrete.

Dynamic Earth includes an extensive assessment section designed to evaluate how well students have learned the interactive's content and skills. Multiple-choice, fill-in-the-blank, and problem-solving questions are used to measure students' subject knowledge, and printable scorecards track progress.

System requirements:
- Internet Explorer 5 (or higher) or Mozilla 5 (or higher); best results with the latest browser versions
- Flash Player 7 (minimum requirement)

Dynamic Earth is a production of Thirteen/WNET New York. Copyright 2007, Annenberg Media. All rights reserved.

Ashlinn Quinn, Writer

Ashlinn Quinn is an Outreach Producer in Thirteen/WNET New York's Educational and Community Outreach department, the LAB@Thirteen. She develops and manages educational outreach projects associated with PBS broadcasts. Recent projects have included producing a media-rich Web site for high school Global History teachers, WIDE ANGLE: "Window into Global History"; creating educational materials for the 2006 PBS broadcast series African American Lives; coordinating outreach events associated with the PBS news magazine program Religion & Ethics Newsweekly; and generating interactive online content for projects including the teen-oriented broadcast program What's Up in Finance and the animated kids' news Web site News Flash Five. Before joining Thirteen's Education Department, she worked first as a music teacher and then at the Peggy Notebaert Nature Museum in Chicago, where she wrote curriculum and conducted teacher professional development programs focusing on hands-on science. She holds a B.A. degree with dual concentrations in Music and Psychology from the University of California, Berkeley, and an M.A. degree in Sociocultural Anthropology from the University of Chicago.
Interactive and Broadband Unit:
Anthony Chapman, Director of Interactive and Broadband
Anu Krishnan, Producer
Shannon Palmer, Flash Programmer
Lenny Drozner, Designer and Flash Animator
Ying Zhou-Hudson, Graphics Production
Brian Santalone, HTML Implementation
Leslie Kriesel, Copy Editor
Alcohol proof is a measure of the content of ethanol (alcohol) in an alcoholic beverage. The term was originally used in the United Kingdom, where proof was equal to about 1.75 times the alcohol by volume (ABV). The term proof dates back to 16th-century England, when spirits were taxed at different rates depending on their alcohol content. Spirits were tested by soaking a pellet of gunpowder in them. If the gunpowder could still burn, the spirits were rated above proof and taxed at a higher rate. Gunpowder would not burn in rum that contained less than 57.15% ABV. Therefore, rum that contained this percentage of alcohol was defined to have 100 degrees proof. The gunpowder test was officially replaced by a specific-gravity test in 1816.

Since 1 January 1980, the United Kingdom has used the ABV standard to measure alcohol content, as prescribed by the European Union. "In common with other EC countries, on 1st January, 1980, Britain adopted the system of measurement recommended by the International Organisation of Legal Metrology, a body with most major nations among its members. The OIML system measures alcohol strength as a percentage of alcohol by volume at a temperature of 20 °C. It replaced the Sikes system of measuring the proof strength of spirits, which had been used in Britain for over 160 years."

Britain, which used to use the Sikes scale to display proof, now uses the European scale set down by the International Organization of Legal Metrology (OIML). This scale, for all intents and purposes the same as the Gay-Lussac scale previously used by much of mainland Europe, was adopted by all the countries in the European Community in 1980.
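As a quick illustration of the arithmetic above, here is a minimal Python sketch (not from the original article) converting between ABV and UK degrees proof under the historical definition, where 57.15% ABV corresponds to 100 degrees proof (hence the factor of roughly 1.75):

```python
# UK (pre-1980) proof: 57.15% ABV is defined as 100 degrees proof,
# giving a conversion factor of 100 / 57.15 (approximately 1.75).
UK_PROOF_FACTOR = 100.0 / 57.15

def abv_to_uk_proof(abv_percent: float) -> float:
    """Convert alcohol by volume (%) to UK degrees proof."""
    return abv_percent * UK_PROOF_FACTOR

def uk_proof_to_abv(proof_degrees: float) -> float:
    """Convert UK degrees proof back to alcohol by volume (%)."""
    return proof_degrees / UK_PROOF_FACTOR

# The gunpowder threshold: 57.15% ABV is exactly 100 degrees proof.
assert round(abv_to_uk_proof(57.15), 6) == 100.0

# A 40% ABV spirit is roughly 70 degrees proof on the old UK scale.
print(f"40% ABV = {abv_to_uk_proof(40):.1f} degrees proof")
```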
5 October 2006

A brain region that curbs our natural self-interest has been identified. The studies could explain how we enforce fairness in our society, researchers say.

Humans are the only animals to act spitefully or to mete out “justice”, dishing out punishment to people seen to be behaving unfairly – even if it is not in the punisher’s own best interests. This tendency has been hard to explain in evolutionary terms, because it has no obvious reproductive advantage and punishing unfairness can actually lead to the punisher being harmed.

Now, using a tool called the “ultimatum game”, researchers have identified the part of the brain responsible for punishing unfairness. Subjects were put into anonymous pairs, and one person in each pair was given $20 and asked to share it with the other. They could choose to offer any amount – if the second partner accepted it, they both got to keep their share. In purely economic terms, the second partner should never reject an offer, even a really low one such as $1, as they are still $1 better off than if they reject it and get nothing.

Most people offered half of the money. But in cases where only a very small share was offered, the vast majority of “receivers” spitefully rejected the offer, ensuring that neither partner got paid.

Previous brain imaging studies have revealed that part of the frontal lobes known as the dorsolateral prefrontal cortex, or DLPFC, becomes active when people face an unfair offer and have to decide what to do. Researchers had suggested this was because the region somehow suppresses our judgement of fairness. But now Ernst Fehr, an economist at the University of Zurich, and colleagues have come to the opposite conclusion – that the region suppresses our natural tendency to act in our own self-interest.

They used a burst of magnetic pulses called transcranial magnetic stimulation (TMS) – produced by coils held over the scalp – to temporarily shut off activity in the DLPFC. Now, when faced with the opportunity to spitefully reject a cheeky low cash offer, subjects were actually more likely to take the money. The researchers found that DLPFC activity on the right side of the brain, but not the left, is vital for people to be able to dish out such punishment.

“The DLPFC is really causal in this decision. Its activity is crucial for overriding self interest,” says Fehr. When the region is not working, people still know the offer is unfair, he says, but they do not act to punish the unfairness. “Self interest is one important motive in every human,” says Fehr, “but there are also fairness concerns in most people.”

“In other words, this is the part of the brain dealing with morality,” says Herb Gintis, an economist at the University of Massachusetts in Amherst, US. “[It] is involved in comparing the costs and benefits of the material in terms of its fairness. It represses the basic instincts.”

Psychologist Laurie Santos, at Yale University in Connecticut, US, comments: “This form of spite is a bit of an evolutionary puzzle. There are few examples in the animal kingdom.” The new finding is really exciting, Santos says, as the DLPFC brain area is expanded only in humans, and it could explain why this type of behaviour exists only in humans.

Fehr says the research has interesting implications for how we treat young offenders. “This region of the brain matures last, so if it is truly overriding our own self interest then adolescents are less endowed to comply with social norms than adults,” he suggests.
The criminal justice system takes into account differences for under-16s or under-18s, but this area fully matures around the age of 20 or 22, he says. Journal reference: Science (DOI: 10.1126/science.1129156)
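For readers who like to see the incentive structure spelled out, here is a minimal Python sketch of the ultimatum game's payoffs as described in the article. The fixed rejection threshold is a hypothetical illustration of the spiteful behaviour observed, not the researchers' actual model:

```python
def ultimatum_round(offer: float, pot: float = 20.0,
                    rejection_threshold: float = 0.25) -> tuple[float, float]:
    """Payoffs (proposer, responder) for one ultimatum-game round.

    The proposer splits `pot`, offering `offer` to the responder.
    Here the responder spitefully rejects any offer below a fixed
    fraction of the pot (a hypothetical rule, for illustration only);
    a purely self-interested responder would accept any offer above zero.
    """
    if offer < rejection_threshold * pot:
        return 0.0, 0.0          # rejection: neither player is paid
    return pot - offer, offer    # acceptance: both keep their shares

print(ultimatum_round(10.0))  # fair split: (10.0, 10.0)
print(ultimatum_round(1.0))   # insulting offer: (0.0, 0.0) - spitefully rejected
```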
Dear Builder’s Engineer,

I’m a framer and I hear all the time, “shear off a wall.” What’s that really mean?

Jerome O., Branchville, South Carolina

If you were in the giant scissor business, I’d have a different answer than the one that follows. In the world of construction, shear can refer to several things. To engineers it’s a certain type of stress inside a structural member due to some applied load. Shear can also mean a lateral load from earthquake or wind. And shear can refer to a construction method of resisting wind and earthquake loads. To shear off a wall is in reference to this third definition.

Shear is one of several stresses, with bending, tension, and compression being the other main ones. To understand what shear really is, let’s look at how things fail.
- If a member fails in tension, it is pulled apart. For example, a cable being pulled beyond its tensile capacity snaps in two.
- If a member fails in compression, it crushes. For example, a short post supporting too heavy a load mashes.
- If a member fails in bending, it breaks due to too large a bending moment. Breaking a pencil in half with your hands is a good example. (Bending moment is another topic altogether.)
- If a member fails in shear, it rips. A rip is caused by one side of a member going one way and another side going the other way.

The most common examples of shear failures are walls that have been through an earthquake. In these you’ll see lots of diagonal cracking, especially at doors and windows. That’s from in-plane (in the plane of the wall) lateral (sideways) forces racking the wall. Racking is the top of the wall being forced in one direction while the bottom is held stationary or is forced in the other. Door and window corners are particularly vulnerable to these racking shear forces.

Here’s another seat-of-the-pants example of shear. Say you have a tall stack of long 1x4s. You and a buddy lift the stack, one guy on each end. The boards sag and bounce as you walk. They do this because the boards can slide on top of each other. Now envision that same stack of 1x4s with a layer of stout glue between each board. After the glue has cured, you and your buddy carry them and are astonished that there is zero sag or bounce. The glue is resisting shear forces between the boards. You have just turned a bunch of puny 1x4s into a mighty glu-lam beam. Wood glue, in fact, is mainly intended to resist shear, which in this sense means pieces of wood sliding on each other. The glue I spec in my structural designs, Liquid Nails LN-940, is good for 450 pounds per square inch, wood-on-wood shear.

In most stick-framed construction, walls provide resistance to the racking, lateral forces brought on by wind, storms, and earthquakes. The amount of racking resistance a wall provides has everything to do with how it is constructed. A wall of just 2x4s, no plywood or drywall, can support a lot of gravity (downward) load but will provide almost no racking resistance. As a framer, you know this because the only way you can true up a wall is to rack it plumb before any sheathing has been nailed off.

A “shear wall” generally means a wall intended to resist the racking loads applied by wind or seismic events. The word “intended” is key because while any wall can resist some lateral load, not all of them are designed to do so. If a non-shear wall resists some anyway, that’s great and adds to the redundancy of the lateral resisting system.
When engineers design buildings, they determine how lateral loads will be distributed and ensure that shear walls are located strategically to resist those loads. A framed wall with drywall on one or both sides can resist a fair amount of racking force. The building code recognizes this and allows drywall-sheathed walls as shear walls. More typical are plywood or OSB-sheathed shear walls. As you might suspect, wood sheathing has greater shear capacity than drywall. The capacity of a shear wall also depends a lot on the nailing pattern of the sheathing to studs; whether or not the edges of sheathing have blocking behind them; and other factors. So when someone says that a wall is to be sheared off, they’re really saying, “Make sure that sheathing is applied to the studs with a certain nail pattern so that the wall can be depended upon to take wind and earthquake loads.” We all know that a chain is only as good as its weakest link. A shear wall is but a link in the chain of a building’s lateral resisting system. In a future column we’ll tear into that topic, also known as load path. Tim Garrison is an author, public speaker, and professional engineer. He welcomes correspondence via his blog at ConstructionCalc.com.
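As a back-of-the-envelope follow-up to the glue example above, here is a short Python sketch computing the shear capacity of a glued wood-on-wood joint from the 450 psi figure quoted in the column. The joint dimensions are hypothetical, chosen only for illustration:

```python
# Rated wood-on-wood shear strength quoted in the column (psi).
GLUE_SHEAR_PSI = 450.0

def glued_joint_shear_capacity(length_in: float, width_in: float,
                               shear_psi: float = GLUE_SHEAR_PSI) -> float:
    """Shear capacity (lb) of a glued joint = glued area x rated shear strength."""
    return length_in * width_in * shear_psi

# Hypothetical example: two 1x4s glued face-to-face over a 24-inch run.
# A 1x4 has an actual face width of 3.5 inches.
capacity = glued_joint_shear_capacity(length_in=24.0, width_in=3.5)
print(f"Glued joint capacity: {capacity:,.0f} lb")  # 37,800 lb
```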
The core-mantle boundary is the yellowish curved floor of the box, which represents the study area. The slab is shown as the blue area, folding like honey poured slowly. Credit: ASU, UCSC and Steve Grand, UT Austin

A huge slab of folded Earth that scientists think used to be part of the ocean floor has been detected near the planet's core. The discovery supports the theory that Earth's crust is constantly recycled deep into the planet as molten material from below simultaneously pushes up to refresh the surface. The structure is about 125 miles deep and at least 125 miles wide and 370 miles in the north-south direction. In consistency, it is more like a giant, folding mush of taffy, researchers said today.

"If you imagine cold honey pouring onto a plate, you would see ripples and folds as it piles up and spreads out, and that's what we think we are seeing at the base of the mantle," said Alex Hutko, a graduate student at the University of California, Santa Cruz and lead author of a paper describing the discovery in the May 18 issue of the journal Nature.

Giant recycling machine

The slab began its plunge toward the center of the Earth about 50 million years ago. It is denser than surrounding material, which is why it sinks. Its lower reaches are near the core, about 1,740 miles down. Yet it is still attached to the surface, much like a conveyor belt. "It's like a carpet sliding off the dining room table," said study team member Edward Garnero of Arizona State University. "If it is more than half way off, it just goes, taking everything with it."

Earth is divided into three main layers: the core, mantle and crust. The crust, a thin surface layer, is divided into more than a dozen major plates. In the middle of the Pacific Ocean, plates spread apart and fresh material from the mantle wells up. Along the west coast of North America, crust beneath the ocean dives under a continental plate, creating earthquakes and volcanoes. Geologists have long speculated that when crust is folded into the planet, it sinks to the bottom of the mantle, where it displaces the material down there and forces some of it up. "Since there is a conservation of mass in the mantle, something must return as the slab sinks into the Earth," Garnero explained. "This return flow can include plumes of hot material that gives rise to volcanism."

If the scientists have correctly interpreted their data, the folding slab is the first hard evidence that sinking crust drives the upwelling of material so deep inside the planet. "It's the first evidence from direct imaging to support the idea that ancient seafloor makes its way down to the bottom of the mantle," Hutko said.

The slab was found by monitoring seismic waves—generated by earthquakes in South America—reflecting from deep inside the mantle and recorded in the United States. The diving crust is made of essentially the same material as the lower mantle, the researchers said, but it is much cooler, by about 1,260 degrees Fahrenheit. The lower mantle is roughly 4,500 degrees. Seismic waves are altered as they move through the hot and cooler regions, which allowed computer programs to generate the picture of the slab. It is possible, Garnero told LiveScience, that they are just seeing a formation of rock from the mantle that has different chemical components, but the temperature difference is best explained by crustal material that has been compressed, he said. The sound-imaging technique also revealed plumes of hot material at the lower edges of the slab.
"We think there is a kind of pushing and bulldozing away of a hot basal layer of the mantle, giving rise to small plumes at the edges," Hutko said.
The mouth and esophagus of the leatherback turtle are a perfect example of how an animal can become adapted to its diet and habitat. When the turtle consumes jellyfish (and it must eat many, as jellyfish have low nutritional value), the esophagus stores both the jellyfish and the seawater that have been swallowed. However, to prevent the stomach filling with water, the seawater must be expelled. So how does this happen? The answer lies in the backwards-pointing spikes you see in the mouth of the turtle, which continue down the esophagus and grow progressively larger. As the muscles of the esophagus squeeze the seawater out, the spines keep the jellyfish in place. Once all the water has been expelled, the jellyfish are then passed into the stomach. This strange adaptation is one of many that have kept this magnificent species in existence for 90 million years.

Leatherback turtles are regularly seen at Shell Beach in Guyana, Galibi in French Guiana, Wanshishia (Marijkedorp) in Suriname, and also along the coast of Dominica. We offer tours to all these remote and stunning destinations – use the chat tab at the bottom of this page to find out more or send us an email with your questions!
From Wikipedia, the free encyclopedia

Laminins are major proteins in the basal lamina (one of the layers of the basement membrane), a protein network foundation for most cells and organs. The laminins are an important and biologically active part of the basal lamina, influencing cell differentiation, migration, and adhesion, as well as phenotype and survival.

Laminins are trimeric proteins that contain an α-chain, a β-chain, and a γ-chain, found in five, four, and three genetic variants, respectively. The laminin molecules are named according to their chain composition. Thus, laminin-511 contains α5, β1, and γ1 chains. Fourteen other chain combinations have been identified in vivo. The trimeric proteins intersect to form a cross-like structure that can bind to other cell membrane and extracellular matrix molecules. The three shorter arms are particularly good at binding to other laminin molecules, which allows them to form sheets. The long arm is capable of binding to cells, which helps anchor organized tissue cells to the membrane.

The laminins are a family of glycoproteins that are an integral part of the structural scaffolding in almost every tissue of an organism. They are secreted and incorporated into cell-associated extracellular matrices. Laminin is vital for the maintenance and survival of tissues. Defective laminins can cause muscles to form improperly, leading to a form of muscular dystrophy, a lethal skin blistering disease (junctional epidermolysis bullosa) and defects of the kidney filter (nephrotic syndrome).

Fifteen laminin trimers have been identified. The laminins are combinations of different alpha-, beta-, and gamma-chains.
- There are five forms of alpha-chains: LAMA1, LAMA2, LAMA3, LAMA4, LAMA5
- There are four forms of beta-chains: LAMB1, LAMB2, LAMB3, LAMB4
- There are three forms of gamma-chains: LAMC1, LAMC2, LAMC3

Laminins were previously numbered - e.g. laminin-1, laminin-2, laminin-3 - but the nomenclature was later changed to describe which chains are present in each isoform. For example, laminin-511 contains an α5-chain, a β1-chain and a γ1-chain.

Laminins form independent networks and are associated with type IV collagen networks via entactin, fibronectin, and perlecan. They also bind to cell membranes through integrin receptors and other plasma membrane molecules, such as the dystroglycan glycoprotein complex and the Lutheran blood group glycoprotein. Through these interactions, laminins critically contribute to cell attachment and differentiation, cell shape and movement, maintenance of tissue phenotype, and promotion of tissue survival. Some of these biological functions of laminin have been associated with specific amino-acid sequences or fragments of laminin. For example, the peptide sequence [GTFALRGDNGDNGQ], which is located on the alpha-chain of laminin, promotes adhesion of endothelial cells.

Dysfunctional structure of one particular laminin, laminin-211, is the cause of one form of congenital muscular dystrophy. Laminin-211 is composed of an α2, a β1 and a γ1 chain. This laminin's distribution includes the brain and muscle fibers. In muscle, it binds to alpha-dystroglycan and integrin alpha7-beta1 via the G domain, and via the other end binds to the extracellular matrix.

Abnormal laminin-332, which is essential for epithelial cell adhesion to the basement membrane, leads to a condition called junctional epidermolysis bullosa, characterized by generalized blisters, exuberant granulation tissue of skin and mucosa, and pitted teeth.
Malfunctional laminin-521 in the kidney filter causes leakage of protein into the urine and nephrotic syndrome.

Laminins in cell culture

Recently, several publications have demonstrated that laminins can be used to culture cells, such as pluripotent stem cells, that are difficult to culture on other substrates. Mostly two types of laminins have been used: laminin-111 extracted from mouse sarcomas is one popular type, as well as a mixture of laminins 511 and 521 from human placenta. Various laminin isoforms are practically impossible to isolate from tissues in pure form due to extensive cross-linking and the need for harsh extraction conditions, such as proteolytic enzymes or low pH, that cause degradation. However, Professor Tryggvason's group at the Karolinska Institute in Sweden showed how to produce recombinant laminins using HEK293 cells in 2000 (Kortesmaa et al., 2000). This made it possible to test whether laminins could have as significant a role in vitro as they have in the human body. In 2008, two groups independently showed that mouse embryonic stem cells can be grown for months on top of recombinant laminin-511. Later on, Rodin et al. showed that recombinant laminin-511 can be used to create a totally xeno-free and defined cell culture environment for culturing human pluripotent ES cells and human iPS cells.

Role in neural development

Laminin-111 is a major substrate along which nerve axons will grow, both in vivo and in vitro. For example, it lays down a path that developing retinal ganglion cells follow on their way from the retina to the tectum. It is also often used as a substrate in cell culture experiments. Interestingly, the presence of laminin-111 can influence how the growth cone responds to other cues. For example, growth cones are repelled by netrin when grown on laminin-111, but are attracted to netrin when grown on fibronectin. This effect of laminin-111 probably occurs through a lowering of intracellular cyclic AMP.

Role in cancer

The majority of transcripts that harbor an internal ribosome entry site (IRES) are involved in cancer development via their corresponding proteins. A crucial event in tumor progression referred to as epithelial to mesenchymal transition (EMT) allows carcinoma cells to acquire invasive properties. The translational activation of the extracellular matrix component laminin B1 (LamB1) during EMT has recently been reported, suggesting an IRES-mediated mechanism. In this study, the IRES activity of LamB1 was determined by independent bicistronic reporter assays. Strong evidence excludes an impact of cryptic promoter or splice sites on IRES-driven translation of LamB1. Furthermore, no other LamB1 mRNA species arising from alternative transcription start sites or polyadenylation signals were detected that account for its translational control. Mapping of the LamB1 5'-untranslated region (UTR) revealed the minimal LamB1 IRES motif between -293 and -1 upstream of the start codon. Notably, RNA affinity purification showed that the La protein interacts with the LamB1 IRES. This interaction and its regulation during EMT were confirmed by ribonucleoprotein immunoprecipitation. In addition, La was able to positively modulate LamB1 IRES translation. In summary, these data indicate that the LamB1 IRES is activated by binding to La, which leads to translational upregulation during hepatocellular EMT.
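As an aside, the chain-based naming convention described above (laminin-511 = α5 + β1 + γ1) is mechanical enough to express in a few lines of code. The following Python sketch is a hypothetical helper written for illustration, not a standard bioinformatics API:

```python
# Known chain variants per the article: five alpha, four beta, three gamma.
VALID_CHAINS = {"alpha": {1, 2, 3, 4, 5}, "beta": {1, 2, 3, 4}, "gamma": {1, 2, 3}}

def laminin_name(alpha: int, beta: int, gamma: int) -> str:
    """Compose a laminin name from its chain variants, e.g. (5, 1, 1) -> 'laminin-511'."""
    for kind, variant in (("alpha", alpha), ("beta", beta), ("gamma", gamma)):
        if variant not in VALID_CHAINS[kind]:
            raise ValueError(f"No known {kind}-chain variant {variant}")
    return f"laminin-{alpha}{beta}{gamma}"

print(laminin_name(5, 1, 1))  # laminin-511: alpha5 + beta1 + gamma1
print(laminin_name(2, 1, 1))  # laminin-211: the isoform linked to muscular dystrophy
```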
Early Years Foundation Stage

The Early Years Foundation Stage (EYFS) is the curriculum that the Government sets for all early years providers (0-5 years) to make sure that 'all children learn and develop well and are kept healthy and safe' (Department for Education). There are 17 early learning goals to be aimed for by the end of the Reception year in school.

The Framework is divided into three sections:
- Characteristics of learning
- Three prime areas of learning
- Four specific areas of learning

Characteristics of learning
- Playing and exploring: which is about finding out and exploring, playing with what they know and being willing to 'have a go'.
- Active learning: which is about being involved and concentrating, persevering and enjoying achieving what they set out to do.
- Creating and thinking critically: which is about having their own ideas, making links and choosing ways to do things.

Personal, Social and Emotional Development, which is about making relationships and getting along with other children and adults, having confidence and self-awareness, and being able to manage their feelings and behaviour:
- forming meaningful relationships with other children and adults
- having respect for other people
- being an individual and also belonging to a community
- being able to express and cope with your feelings and emotions
- becoming independent and helping others
- being able to make choices and take responsibility
- developing a sense of fairness, of what is right and wrong
- understanding appropriate behaviour
- respecting and being able to empathise with others
- having feelings of wonder and joy
- sharing and celebrating festivals, traditions and special occasions.

Communication and Language, which is about developing good listening and attention skills, having good understanding, and also speaking and expressing themselves clearly:
- developing the confidence to be able to express your opinions and make your own choices
- talking, listening, discussing and recalling experiences in a range of different situations
- being able to describe and explain things in your own words, using your own ideas
- listening to stories, anticipating what might happen and responding appropriately to the story
- listening to and following instructions, and being able to answer questions appropriately.

Physical Development, which is about large and small movements in a variety of ways, having good control and co-ordination, and handling different tools and equipment well. It also covers health and self-care, looking at ways to keep healthy and safe:
- developing confidence and independence through achievement
- learning to use tools competently
- learning co-ordination and control
- building confidence, stamina, energy and strength
- learning to move in a variety of ways
- expressing yourself through movement
- understanding the importance of exercise
- learning to make healthy choices about food, and taking care of ourselves and our healthy body.

Literacy, which is about stories, rhymes, books and reading, and also mark making/writing:
- believing in yourself as a reader and writer and developing the skills to become one
- enjoying stories and a wide range of reading materials e.g. books, poems, print in the environment
- learning to recognise letters and the sounds they each make
- learning to make marks and give meanings to those marks.
Mathematics, which looks at numbers, counting, shape, space and measure:
- appreciating pattern and relationships in mathematics
- logical thinking
- exploring, comparing and describing shapes, quantities, height, etc.
- finding ways to solve mathematical problems e.g. estimating, measuring
- learning to use and understand mathematical language
- counting, understanding and using numbers
- calculating simple addition and subtraction problems.

Understanding the World, which is about people and communities and helps children understand the world they live in, including ICT:
- exploring the local environment
- finding out about the past
- developing an understanding of travelling to other places, distance and maps
- using technology – making models in a variety of ways
- planning, making and designing things
- exploring and solving problems
- using ICT for a range of purposes
- exploring, experimenting and having ideas
- being curious – wondering why, how, what if?
- understanding why and how things happen
- observing carefully and closely
- experiencing and changing materials
- sharing the joy of finding things out with your friends.

Expressive Arts and Design, which develops different forms of expression, exploring music, dance and song, and encouraging children to be creative in all respects. It also focuses on media and materials and imaginative/pretend play:
- representing and communicating your thoughts, ideas and feelings in a variety of ways e.g. art, music, movement, dance, language, and design and technology
- expressing yourself through a wide range of media e.g. paint, clay, drawing, 3D materials
- experiencing and enjoying beauty
- imagining, expressing and creating
- having original ideas and thoughts.

The staff plan to deliver a broad and balanced curriculum that touches on all aspects across the year, based on observations of children's play and what their interests are. This appears in the weekly enhancements to the continuous provision, as well as in the adult-led focuses and group-time work.

Special Educational Needs and Disabilities (SEND)

We have systems in place for each SEND child to have their own key person who will know them best and be proactive in planning for their needs, working closely with parents when doing so. SEND children are observed closely and their achievements are celebrated in their Learning Journal and planning documents. This information is then used to tailor the curriculum to meet the interests and enthusiasms of each child, using methods of delivery that are appropriate to their needs. If there is ever evidence that this is not occurring, the SENCO will initiate training to challenge staff and enable them to provide a curriculum that ensures equality and diversity for all.

At Haxby Road Nursery School, we have a highly trained staff who are dedicated to working with SEND children. They have a wealth of experience and have supported children with a wide range of needs. Their training includes Makaton, PECS communication, Speech and Language, Autism Spectrum Disorder, early movement, attachment and physical development. At Haxby Road, we are fortunate to have a Deputy Headteacher with over 12 years' experience of working with children with additional needs aged 2-11. We also have a specialist Educational Resource Provision, with staff highly trained in speech and language development.
The Head and SENCO of the Educational Resource Provision continuously review the qualification needs of this team to ensure they match the needs of the children we have in nursery, and additional training is arranged if required. This ensures that teaching styles and methods are appropriate and up to date, giving SEND children maximum access to an Early Years curriculum. In addition, all school staff receive specialist support and training when there is a need, for example strategies to use when supporting a child with a hearing impairment or visual impairment. As a team we continually support each other and share expertise to ensure our teaching styles can be adapted appropriately so that all children reach their full potential.

Children with SEND are supported in a variety of ways – through one-to-one support, group activities or whole key worker tasks. The key worker for each SEND child will decide how everyday activities and experiences within the curriculum can be adjusted to ensure their child is fully involved at the appropriate level. If you would like to discuss your SEND requirements, please contact the school and we will try our best to help you.
Reading Group Guide

Questions and Topics for Discussion

1. Harry S. Truman was born on May 8, 1884. Nearly twenty years earlier, Anderson Truman had freed his five slaves, Hannah, Marge, and their three daughters, in Leavenworth, Kansas. Later on, a keeper of the family history would conclude that the Trumans never owned slaves. Since owning slaves was a relatively accepted practice in the Confederacy, why would someone think to rewrite history? How would you describe the turning point in the American social consciousness over slavery? Why do you think it took so long for someone to stand up to Jim Crow, even after the senseless killing of nine African-Americans? How does history influence which lives are valued within the consciousness of a society? What other factors are at play?

2. Truman's boyhood was shaped by deeply instilled values. Often eager to please and a "bookworm," Truman was the perfect child. Even at such an early age, Truman displayed a love for politics. What values did Truman hold that would later make him an outstanding politician? A significant part of Truman's moral character was reinforced by his education. Do you think that a similar education should be taught in today's public schools? If so, how?

3. Truman was a farmer, and even though farmers were discouraged from fighting, he felt it was his duty to serve in the war in Europe. The president at the time, Woodrow Wilson, said, "upon the farmers rested the fate of the country and thus the fate of the world." Why were farmers so highly regarded at the time? What professions or occupations are held in the same regard today? What professions or occupations should be the last to fight a war? Explain.

4. In Captain Truman's first confrontation with the Germans, he proved brave and stood his ground when many retreated. Despite the inexperience of his infantry, not a single soldier was killed in the melee. In your opinion, what were the critical points in Truman's life that led him to becoming a great leader? What led him toward an interest in artillery and a fascination with power?

5. December 1933 marked the end of Prohibition. Having been repeatedly passed over for a position in Congress, Truman became a bit disgruntled with politics. What is the correlation between the end of Prohibition and the political climate of the time? Why do you think Truman was consistently overlooked in the political arena?

6. How would you describe Truman's reluctance to run for Vice President with President Franklin Roosevelt? What factors made Truman the prime Vice Presidential candidate for the election? Compare and contrast Roosevelt and Truman: what made them the ideal pair?

7. After the election in 1944, Truman had very little contact with President Roosevelt. In fact, when Roosevelt was meeting with Churchill and Stalin for his second Big Three Conference, Truman was attending parties and receptions. Do you think Truman was intentionally left out of the loop of the strategy overseas? Considering Roosevelt's health at the time, why do you think Truman was not briefed on international affairs?

8. On April 12, 1945, Roosevelt died of a cerebral hemorrhage. In the events immediately following Truman's presidential oath, there seemed to be a lot of uncertainty about whether he could handle the job. What other events leading up to this moment give you the impression that he lacked the confidence in himself that is required of a president?

9. What were the strongest factors contributing to Truman's victory in the election of 1948?
Compare and contrast Truman's and Dewey's campaign strategies.

10. When the steel industry was brought to a standstill by labor strikes, Truman decided to take government control of the industry. As a sincere advocate for labor unions, why did he feel that was the best decision? Why did his decision deal a devastating blow to Truman's standing in popular opinion?

11. In your opinion, what were the greatest highlights of Truman's presidency? What progress did he make in settling the Cold War? What deeply held values carried him through seven years and nine months in office?

12. Senator Adlai E. Stevenson III of Illinois remarked that Truman's life was "an example of the ability of this society to yield up, from the most unremarkable origins, the most remarkable men." What do you have to learn from Truman's life? Did Truman epitomize the American dream? Explain.
COMMON NAME: Ferruginous hawk
SCIENTIFIC NAME: Buteo regalis

DESCRIPTION: The ferruginous hawk is the largest buteo in North America, with a length of 20 to 25 inches and a wingspan of 53 to 56 inches. These hawks have short, dark, hooked beaks and extremely long, yellow gapes that extend to below the eye. The adult is brown above with rusty streaks and white below. Its legs are feathered to the toes. The sexes are similar.

RANGE: Mostly the western half of North America, in the Great Basin and Great Plains. They breed from eastern Washington to southern Alberta and southern Saskatchewan, Canada, south to eastern Oregon, Nevada, northern and southeastern Arizona, northern New Mexico, northwest Texas, western Nebraska, western Kansas, and western Oklahoma. Winters across the southwest to Baja California and central Mexico.

HABITAT: Open country in semiarid grasslands with scattered trees, rocky mounds, or outcrops and shallow canyons that overlook open valleys. During migration, they may be seen along streams or in agricultural areas.

NESTING: Ferruginous hawks may nest in close proximity to each other, less than half a mile away. They select rocky outcrops, hillsides, rock pinnacles, or trees for nest sites. Nests may be built right on the ground. Nests are built of large twigs or roots, grasses, old bones, or cow or horse dung. Both the male and female participate in nest building, followed by the laying and incubation of three or four eggs that are laid at two-day intervals. The young hatch between February and July after about 28 days of incubation, and leave the nest 38 to 50 days later. The adults continue to feed the fledged young as well as the nestlings. The young remain with their parents for several weeks after fledging before dispersing on their own.

DIET: Ferruginous hawks rely primarily on ground squirrels, jackrabbits, pocket gophers, prairie dogs, and kangaroo rats. Other prey includes snakes, lizards, grasshoppers, and crickets. The birds tend to hunt in early morning or late afternoon.

STATUS: Populations of ferruginous hawks seem to have declined in most areas of their range—except in California, where they are thought to have increased in the past decade.
Teach your students the Alphabet of the Universe!

Take a look around you. Have a look at the objects in your room, outside at the trees, birds and sky, and then think about everything else in the world. I'm sure you'll find the variety and complexity of the world's different materials amazing. But what is truly astounding is that everything you see is merely the result of a combination of elements, of which (naturally occurring) there are only 92 to choose from. What is more incredible is that it is highly likely (virtually certain) that the whole Universe is the same: a combination of a selection of these 92 elements. Fascinating!

What is an element?

An element is a substance that is made up of only one type of atom. Oxygen gas is made up of only Oxygen atoms, just as a strip of Magnesium metal is made up of only Magnesium atoms. Basically, it is something that cannot be broken down into anything simpler. If you look around you, you will struggle to find examples of elements. The 'lead' in your pencil is made of only carbon, as is diamond, as is the black charred substance on your burnt toast! As you may have gathered, carbon is a little complex and immensely important. You may also have silver, gold or platinum as jewellery. It is possible that you have aluminium foil in your drawer, and you may have seen the copper in your electrical wires or water pipes, and your car may be rusting because it is iron, but apart from that, elements are hard to find. This is because they mainly exist as compounds.

A Compounding Problem

The English language has 26 letters in its alphabet. Some of these letters can exist as words, such as 'a' and 'I', but most are placed in combinations that allow us to make millions upon millions of different words. Some letters are also more common, such as 'e', which is the most commonly used letter in the English language. It is the exact same thing with elements. Elements bond with different elements to make compounds. When this happens, the properties of the compound are usually vastly different from those of the elements that make them up. A classic example is when the element sodium, a highly reactive metal that can catch fire in water, reacts with the green, poisonous gas chlorine to produce the compound sodium chloride. As you may know, this is the chemical name for table salt; a relatively harmless substance that makes your chips taste great! Clearly table salt is different from the two elements that make it, and it is this varying combination of different elements that accounts for the incredibly diverse materials that make up our universe. A massive dictionary from just 92 letters.

The Periodic Table

All elements are arranged on the Periodic Table. They are arranged in such a specific order that you can tell a great deal about an element simply by its position on the table. Elements are arranged in order of increasing atomic number, from the lightest element, Hydrogen, to the heaviest natural element, Uranium. I say natural, as there are now over 115 elements, but everything after Uranium has been artificially made in laboratories by humans. Elements are also arranged in groups, with the first column being group one, the alkali metals, and the second column being group two, the alkaline earth metals. Then there is the middle section known as the transition metals, before we get to group three, which is the column starting with Boron and containing Aluminium.
The next column is group 4, then 5, 6 and 7 (the Halogens), finishing with Helium's column, group 8: the noble gases. What these groups tell us is how many electrons (sub-atomic particles) are contained in the outer shells of the elements. Electrons orbit the nucleus of an atom, which is made up of the sub-atomic particles called protons and neutrons. They orbit in clouds but also in shells. How many electrons are in the outer shell determines the element's behaviour: how it reacts and how it bonds with other elements. As Lithium, Sodium and Potassium are all in group one, they all have one electron in their outer shell. This means that they are extremely reactive, as their outer shell is not full, so they do what they can to bond to non-metal elements (at the top right-hand side of the table). They need to get rid of their one electron. Chlorine, on the other hand, is in group 7 and is desperate to gain an electron to complete its outer shell and make it stable. It therefore happily reacts and bonds with sodium, resulting in sodium chloride, where the sodium has 'given' its electron to chlorine. The group 8 elements already have full outer shells, which means they are already in a stable state; they don't have to lose or gain electrons, resulting in them being inert. They are unreactive and stable, and because of this exist as elements and not compounds.

The rows of the periodic table are known as periods and dictate how many shells the elements have. So if you look at Magnesium, you can tell it has two electrons in its outer shell, as it is in group 2, and that it has three electron shells, as it is in period 3. Clever! The elements in their respective groups also react in similar ways. You can see trends in their behaviour, which is another reason for their grouping. The periodic table really is a work of genius and a work of art!

Mendeleev was a Russian scientist whose work on the periodic table defined its modern form. You will be hard-pressed to find a chemistry classroom anywhere in the world that does not have a periodic table on the wall. His work on the periodic table and the properties of elements has shaped our understanding of not only the world, but of the universe. His importance as a scientist cannot be overstated. When he was alive (1834-1907) there were only 63 known elements. He managed to arrange these elements into groups depending on their atomic weight and their similar properties. He then left gaps in the table where unknown and undiscovered elements were to go, and he accurately described their appearance, reactivity and properties based on the properties of the elements around them. When these elements were discovered, his predictions and descriptions were incredibly accurate. This is truly staggering!

What's in a Name?

All of the elements on the periodic table have chemical symbols which are recognised internationally. Some symbols, such as Na for Sodium, appear not to make sense, but many of them are symbols from their Latin or Greek names. Fe comes from the Latin Ferrum, which we know as Iron. Au, also Latin, means Aurum, which we know as Gold, but my favourite is Pb, which stands for Plumbum! We know this as Lead, which is why plumbers are known as such: when the profession originated, water pipes were made from lead. Clever!

The Usual Suspects

Hydrogen is by far the most common element in the universe, as it makes up the majority of a star's mass.
On Earth, Oxygen, Iron, Aluminium, Silicon, Sodium, Potassium, Carbon and Magnesium are among the most common elements within the Earth's crust, and their importance relates directly to us. Our planet dictates which elements we use, and our bodies contain all of these elements in various forms. The most important element to life, though, is unquestionably Carbon. Carbon is essential to all known life, which could not exist without it. You are a combination of mainly Carbon, Oxygen, Hydrogen, Iron, Calcium, Phosphorus and Nitrogen, but there's even a trace of Arsenic in you, amongst other elements. It is the combination of so few elements that creates an incredibly diverse universe. "Elementary, my dear Watson!"
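As a quick aside for teachers who also cover computing: the group/period reading rule above (group number gives the outer-shell electrons, period gives the number of shells, for main-group elements) is mechanical enough to sketch in a few lines of code. This is a minimal Python sketch of the simplified teaching model described in this article, not a chemistry library; it ignores the transition metals and real-world exceptions such as Helium (whose full outer shell holds only two electrons):

```python
# Simplified main-group model from the article:
# group number = electrons in the outer shell (group 8 = full shell),
# period number = how many electron shells the atom has.
ELEMENTS = {  # symbol: (group, period) -- a few examples from the article
    "Li": (1, 2), "Na": (1, 3), "K": (1, 4),
    "Mg": (2, 3), "Cl": (7, 3), "Ne": (8, 2), "Ar": (8, 3),
}

def describe(symbol: str) -> str:
    group, period = ELEMENTS[symbol]
    note = " (full outer shell, so inert)" if group == 8 else ""
    return f"{symbol}: {group} outer-shell electron(s), {period} shell(s){note}"

print(describe("Mg"))  # Mg: 2 outer-shell electron(s), 3 shell(s)
print(describe("Cl"))  # Cl: 7 outer-shell electron(s), 3 shell(s)
print(describe("Ne"))  # Ne: 8 outer-shell electron(s), 2 shell(s) (full outer shell, so inert)
```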
Cross-cultural transmission is the exchange of ideas between communities and empires over time: the transfer of ideas, culture, religion, political identities and medical knowledge into nations and communities of differing ideas along the Silk Road. The Ottoman Empire is one example of an empire that drew on the knowledge of medicine and empire-building already gained by the Arabs and Greeks, which operated as the superstructure for Ottoman society. Travellers from Western Europe would also often visit Istanbul, writing and exchanging ideas concerning knowledge and medicine. Of particular importance, Evliya Celebi travelled to both West and East to witness and write detailed accounts of important surgeries and baths, paying particular attention to their medical properties and the medical knowledge behind them. "The second tradition is the mathematical-geographical lore of Greek antiquity which is connected to the name of Ptolemy." "Greeks from the Phanariot aristocracy went to study in Italy, and later found employment as interpreters and as physicians in the Ottoman palace." This exemplifies the important role that the Greek and Jewish communities had: going to other schools of thought in order to learn Western practices of medicine, herbal treatment and surgery, which could be brought back to the Ottoman Empire and its capital to be practised in the madrasas and hospitals. What, then, was the importance of understanding cross-cultural transmission between Greek antiquity and Arab knowledge of medicine, science and the superstructure that had been incorporated into the Islamic Ottoman element during the scientific revolution and the transmission of medical knowledge? Firstly, it should be noted that the Ottoman Empire had always been a site of cross-cultural transmission amongst communities and other empires, evidenced by its very location and geography. "To be sure, among the ancients there did appear Plato and Hippocrates and Socrates, during the reign of Sultan Selim there appeared the physician Kaysuni, but the Khan is superior to them all—still, because Bitlis is an ancient metropolis," Here Evliya Celebi speaks of the important texts of antiquity and how, through transmission over time from the 'ancient metropolis', these texts of medical knowledge were admitted into Ottoman elite society, and medicine came to be provided and used by scholars and physicians. The Age of Enlightenment, the age of reasoning and of understanding the different sciences, also played an important part in cross-cultural transmission for the Ottoman Empire. It was important because there was an increase in interaction with Europe, even though this interaction had always existed; it was now the case that more ideas concerning science, reason and medicine were being exchanged. Despite these exchanges of ideas, the religious identity of the Ottoman Empire was never questioned because the 'faith of Islam was strong'. An important point to recognise is that the Ottoman Empire was an ever-expanding military empire, which continued to expand its borders from 1300 to 1700 across Europe, Africa and Asia. In the process of venturing on conquests and taking in other knowledge, cultural elites were negotiating amongst the powers.
Over time this gradually became very important for the transmission of knowledge and ideas regarding medicine, and for how it could be facilitated in both economic and political terms. "finally, because of the conquest of Constantinople on 29 May of 1453 by the Janissaries of Sultan Mehmet II (1434-1481 A.D., Sultan: 1444-46 and 1451-82)." This quote exemplifies the importance of the conquest of Constantinople in 1453 and how it played a major role in cross-cultural transmission amongst the Greek population, as well as in the practice of scientific medicine which was being used and transmitted amongst the population. "In this perspective, it is often affirmed that Greek Culture and Science were transferred to the West. Where they contributed to the Renaissance. Albeit fundamentally correct this view of History is, however, highly fragmentary, as the Greek culture and scientific lore continued to be transmitted among Greek-speaking people within the Ottoman Empire." This shows that the Greek communities in the Balkans had extensive knowledge and culture, which formed one of the core practices in the Ottoman Empire and was important in helping to advance Ottoman medicine. Transmissions of knowledge from the Greeks that were seen to be important include Galen and Dioscorides. Such transmissions of knowledge had also taken place in Jewish practice in Spain, where Jewish physicians had developed their own unique practices of medicine. Famous Jewish physicians were employed in the Ottoman palaces to heal the Ottoman sultans: "For example, Jewish physicians brought with them the much higher level of medical knowledge that characterized Europe in contrast to the Near East." "The most prominent of these was Joseph's son, Moshe Hammon (c.1490-c.1554), who served as the personal physician to Suleiman the Magnificent." This quote shows the importance of skills and of the cross-cultural transmission of other cultures' practices. During the period of Sultan Suleiman the Magnificent, the sultan had a Jewish head physician who treated the pains in his knee in various ways, such as with ointment. When the sultan suffered heart problems, they used various methods, such as ensuring his breathing was constant and that he was properly cooled using ice. The Jewish tradition of medicine was very important for the operation of the Ottoman system, as Jewish physicians often brought rare commodities and knowledge, as well as special herbs that could not be found in the Ottoman region. They also possessed political leverage over the West, making them an important incorporation into the Ottoman Empire. It was also very common for Jewish physicians to be present in the Ottoman courts, because there was a large Jewish population in Istanbul of around 30,000; they were highly skilled and carried cross-cultural transmissions into the Ottoman system.

Further expansion of Empire

"In the Ottoman Empire Avicenna became a legendary hero; he was accepted or named as 'Lokman Hekim' who knew the secret of eternal life." This is another important quote because it shows the legendary status of Avicenna, how his works were vital in the development of the Ottoman Empire, and how they played a part in cross-cultural transmissions. He conducted important studies concerning plants, and various collections of treatments relating to the diseases present during the 15th century drew on his work.
This idea of transformation between cultures and transmission between empires through the ages was a common tradition regarding medicine, and such knowledge was commonly shared. According to Prof. Esin Kahya's work, "Ancient Turks also had an idea about contagion and contagious diseases. For instances they used the crust of smallpox to prevent it." This showed how common practices were utilised in attempting to prevent certain illnesses. Another example is how ideas about an illness, and how to cure or deal with it, were transmitted: in the case of leprosy, ideas transmitted from Byzantium carried on as a tradition in the Ottoman Empire, such as keeping the healthy and the leprosaria separate: "For leprosy they built places of treatments where were named nosocomonium in Byzantium. Ottoman Turks also built leprosarium named Miskinler Tekkesi in Edirne where patients were not treated but simply isolated from healthy people as other leprosarium founded in different parts of the Ottoman Empire." There were many kinds of medicine located along the Silk Road, such as divination and medical methods, shamanism, the Buddhist medical system, Indian tantra, and the Persian and Arabic traditions, as well as the Greek tradition. It could honestly be said that there was a rich wealth of medical knowledge along the Silk Road. There was also the tradition of Avicenna's medical books, which was very important because of all the recorded documentation it possessed. Music therapy was also very important within medical practice in Ottoman society and had been practised for a long time. The aim of this organic music therapy was to help the mentally disturbed by attempting to balance the harmony of the body as well as the health of the 'body, mind and emotion'. Music therapy was part of the Ottoman (Turkish) tradition reaching back to the Turkic tribes, who had always played music and sung, believing these to be therapeutic. The tradition of music had progressed along the Silk Road with the migration of the Turks who formed the Ottoman Empire. Sufis and physicians practised music in a mystical, shamanistic way because of its mental healing properties; it was used for psychological effects such as relaxation. Hot water springs were also very important in the Ottoman Empire; Westerners thought the 'baths' and bathing itself would make the Ottomans immortal. The majority of the works that originate from and form part of the Ottoman Empire's heritage came as a result of conquest over the lands, dating from 1300 AD until the capture of the Middle East and Africa. Examples include the translated records of the ancient Greeks, which were written in Latin and, when the Arabs conquered the lands, were subsequently converted into Arabic through the transmission of knowledge. Alain Touwaide emphasised the importance of the Greek inheritance of knowledge, as it was seen to be central to their very thinking, such as Dioscorides: "It presents traces of use by Post-Byzantine or Ottoman people the Greek plant names of Dioscorides' text have been transliterated into Arabic alphabet and/or translated into Arabic". This quote shows that translations of medical knowledge and plant names occurred prior to the Ottoman Empire's expansion, when the empire took on new 'ideologies'.
This demonstrates the cross-cultural transmission of knowledge and medicine, which occurred both naturally and through the agency of Ottoman society, its minorities and the various community hubs within the empire. Therapeutic and surgical medical literature was an important area of medicine written by both Greek and Roman scholars. This was translated, copied and then organized by medieval Muslim societies, which led to transmissions within the Ottoman Empire that in turn led to further discoveries in the fields of anatomy and pathology.

Inoculation was an important method in Ottoman medicine, performed with a needle. As shown in the following extract, this was something that Westerners saw as highly valuable: "The Ottoman method of inoculation so astonished Lady Montagu, herself a survivor of smallpox, that she ordered the embassy surgeon, Charles Maitland, to inoculate her son in March 1718 with the help of the old woman. Her daughter was later inoculated in April 1721 on her return to London." This is a good quote because it demonstrates how important and valuable inoculation appeared to the Western world; for the purpose of this essay it also exemplifies the transmission of Ottoman medicine to the West, which would in turn lead to the exchange of ideas between the British and the Ottoman Empire. "She immediately rips open that you offer to her with a large needle (which gives you no more pain than a common scratch), and puts into the vein as much venom as can lie upon the head of her needle…the children and young patients play together all the rest of the day, and are in perfect health"

There were important medical healers throughout the 18th century in Egypt as a result of the transmissions of other communities and of improvements following centuries of development in medicine and knowledge along the Silk Road. The 18th century was very important for the Ottoman Empire, as this was the period in which it witnessed major reforms driven by improvements in medicine. The reason the Ottoman Empire wanted to reform was the fear of falling behind other nations, notably those of the West. One could argue that this is itself an example of the cross-cultural transmission of ideas from the West into the Ottoman Empire, ideas which would go on to be integrated into its system. The Ottomans had a post known as Hekimbasi, meaning 'head doctor'. It was a very important office: its holder was the sultan's personal doctor during the reign of Sultan Suleiman. The office was designed to serve the elites, above all when something was wrong with the Ottoman sultan or vizier.

"He writes that he attended patient examination and treatment sessions and describes some surgical operations in detail. He himself goes to a dentist to fix some broken teeth; the treatment is so successful that, as he records, he can now use his teeth to crack walnuts (VII.63a–b)." This shows Evliya Çelebi as a recorder and travel writer when he travelled to Vienna with his master Köprülüzade Fazıl Ahmed Pasha; the eyewitness narrative he wrote helped the Ottoman Empire learn how the Western world used medicine and surgical instruments, giving a detailed account of operations. Çelebi even had treatment on his teeth, for which he highly praised the Viennese physicians as being highly effective.
In spite of this, he had a very important role in cross-cultural transmission and its influence over Ottoman research in the later parts of the 18th century, because of its profound effect on society and hospitals.

Notes

Dankoff, Robert, An Ottoman Mentality: The World of Evliya Çelebi (Leiden: Koninklijke Brill NV, 2004), p. 219.
Dankoff, Robert, An Ottoman Mentality: The World of Evliya Çelebi, p. 250.
Dankoff, Robert, Evliya Çelebi in Bitlis: The Relevant Section of the Seyahatname, Edited with Translation, Commentary and Introduction (Leiden: E. J. Brill, 1990), p. 75.
Goffman, Daniel, The Ottoman Empire and Early Modern Europe (Cambridge: Cambridge University Press, 2002), p. 33.
Touwaide, Alain, The Permanence of Classical Greek Medicine in the Ottoman Empire: The Case of Dioscorides' De Materia Medica (Istanbul: ISIS, 1999), p. 178.
Touwaide, Alain, The Permanence of Classical Greek Medicine in the Ottoman Empire, p. 178.
Fine, Lawrence, Physician of the Soul, Healer of the Cosmos: Isaac Luria and His Kabbalistic Fellowship (Stanford, CA: Stanford University Press, 2003), p. 22.
Kahya, Esin and A. Demirhan Erdemir, Medicine in the Ottoman Empire (and Other Scientific Developments) (Istanbul: Nobel Medical Publication, 1997), p. 12.
Kahya, Esin and A. Demirhan Erdemir, Medicine in the Ottoman Empire, p. 23.
Kahya, Esin and A. Demirhan Erdemir, Medicine in the Ottoman Empire, p. 58.
Kahya, Esin and A. Demirhan Erdemir, Medicine in the Ottoman Empire, p. 58.
Shefer-Mossensohn, Miri, Ottoman Medicine: Healing and Medical Institutions 1500-1700 (Albany, NY: State University of New York Press, 2009), p. 189.
Shefer-Mossensohn, Miri, Ottoman Medicine, p. 86.
Touwaide, Alain, The Permanence of Classical Greek Medicine in the Ottoman Empire, p. 184.
Ellenbogen, Richard G., Saleem I. Abdulrauf and Laligam N. Sekhar, Principles of Neurological Surgery (Philadelphia: Elsevier Saunders, 2012), p. 8.
Aboul-Enein, Basil H. and Faisal H. Aboul-Enein, "Smallpox Inoculation and the Ottoman Contribution: A Brief Historiography", TPHA Journal 64:1 (2012), p. 18.
Aboul-Enein, Basil H. and Faisal H. Aboul-Enein, "Smallpox Inoculation and the Ottoman Contribution", p. 18.
Kasaba, Reşat, The Ottoman Empire and the World Economy: The Nineteenth Century (Albany, NY: State University of New York Press, 1988), p. 35.
Procházka-Eisl, Gisela, "Evliyâ Çelebi's Journey to Vienna", p. 112, <https://www.academia.edu/4296152/Evliy%C3%A2_%C3%87elebi_s_Journey_to_Vienna> [accessed 15 April 2015].
Procházka-Eisl, Gisela, "Evliyâ Çelebi's Journey to Vienna", pp. 111-112, <https://www.academia.edu/4296152/Evliy%C3%A2_%C3%87elebi_s_Journey_to_Vienna> [accessed 16 April 2015].
Anatomy and Physiology
December 14, 2011

Skeletal System and Muscular System

In anatomy and physiology we study the structure of living things and the function of living systems. In physiology, the scientific method is applied to determine how different organisms, organ systems, organs, cells, and biomolecules carry out the chemical or physical functions they perform in the living system. Both anatomy and physiology are subcategories of biology. Throughout our class we have discussed many different systems of the body. One system that is very important to the human body is the muscular system, which consists of three different types of muscle tissue. Alongside the muscular system is the skeletal system, which consists of the bones and the different types of tissue they contain. Both of these systems have many different functions and are interconnected; together they form the musculoskeletal system.

The muscular system has four different functions and consists of three different types of muscle tissue: skeletal muscle, cardiac muscle, and smooth muscle. The muscular system also encompasses many properties. The skeletal muscle's main function is to move the bones of the skeleton. This muscle is voluntary and is also striated, meaning that it is striped, an appearance produced by muscle fibers combined into parallel bundles. Some of these muscles can also work without conscious control. For example, the diaphragm of the human body continues to alternately contract and relax while we are asleep, allowing our lungs to expand so we can breathe. Cardiac muscle tissue is found only in the heart. It is similar to skeletal muscle in that it is striated, but different in that it is involuntary. Smooth muscle tissue can be found within the walls of the digestive tract, blood vessels, and airways of the respiratory system. This...
The planet Saturn captures the imagination with its visually stunning rings. Close-up views from our robotic emissaries have revealed braided ring structures, dynamic weather systems that include a gigantic polar hexagon, and a diverse family of moons — each with a distinctive appearance. One of the moons, Titan, features landscapes reminiscent of Earth — but with a twist. This summer, Saturn sits above Scorpius in the southern evening sky, where it is very conveniently positioned for observing. In this edition of Mobile Astronomy, we'll explore Saturn — as a target for your telescope, as a planetary science laboratory and as inspiration for your own journey into astronomy. We'll also highlight some interesting aspects of Saturn that you can demonstrate with mobile astronomy apps.

Some Saturn science

Saturn is very similar to Jupiter. Both are gas giants that are chiefly composed of hydrogen and helium — although only in the thin outer shell are those elements in their gaseous state. Descending into the planet's interior, immense pressure compresses the substances first into liquids, then into metallic solids that conduct currents, which generate large magnetic fields. [10 Best Space Apps in the Universe]

Saturn is best known for its glorious ring system. While all the large planets have rings, Saturn's are chiefly composed of water-ice fragments in sizes ranging from fine particles to house-size chunks. These reflect sunlight efficiently and make the rings shine brightly. The other planets' ring systems are mostly dust and rock, which renders them poorly reflective. Saturn's axis of rotation is tilted 26.73 degrees (a few degrees more than Earth's 23.5-degree tilt) from the plane of its orbit. If this were not the case, we would not be able to see Saturn's rings from here. Amazingly, the rings are only about 66 feet (20 meters) thick, but they span a distance of 4,100 to 75,000 miles (6,598 to 120,700 kilometers) from the planet — an area so large that a number of small moons orbit within them, carving out gaps in the rings.

Saturn has a large retinue of moons. Its biggest moon, Titan, is larger than the planet Mercury (but much lighter, due to its high ice content). Saturn also hosts six more good-size moons and dozens of house-size moonlets. You can easily see Titan through a backyard telescope, plus three or four of the next largest, depending on the telescope's aperture. Because the moons orbit in Saturn's tilted ring plane, you'll find them above, below or to either side of the planet.

To identify the moons using a sky-charting app like SkySafari 5, Star Walk or Stellarium Mobile, center Saturn and zoom in until you see the moons displayed. If the app time is set to Now, it will match what you see in your telescope, except for any image inverting or mirroring your telescope's optics might introduce. Tap the upper right corner of the SkySafari app's display to bring up a dialogue that allows you to flip the view horizontally, vertically or both. (Don't forget to switch back to "none" when you're finished.) The Saturn Moons and Gas Giants apps for iOS are designed to provide realistic views of the planet and moons at any time you choose. They incorporate buttons to flip the view to match your telescope's optics. The Solar Walk app displays the correct positions of the planet and moons using attractive photorealistic surfaces and a 3D interface you can rotate and zoom, but you can't flip the view.
The Pocket Universe app allows you to select the planet, tap once for additional information and again on Extras to bring up a moon-position interface, complete with view-flipping options. Saturn's moons are worlds unto themselves — massive enough to have geologic activity, interesting terrain, subsurface liquid saltwater oceans and even atmospheres. The Saturn Atlas app for iOS provides labeled globes for each moon, complete with high-resolution imagery and coordinate grids. [Ocean on Saturn Moon Enceladus Suspected Beneath Ice (Video)]

Finding and observing Saturn

Ancient Greek astronomers coined the term "planētēs," meaning "wanderer," because when they looked at the planets that were visible to the naked eye, they observed that those planets were moving among the fixed stars. They also realized that the planets, sun and moon traveled within a narrow strip of the sky that was populated by the constellations of the zodiac. We now know that the planets do this because their orbits, defining the plane of the solar system, follow that great ecliptic circle through those constellations. The entire solar system revolves counter-clockwise when viewed from above. From our vantage point on Earth, the outer planets shift eastward, or prograde, as they orbit the sun. The farther a planet is from the sun, the longer it takes to complete one orbit (its year). Saturn's year encompasses 29.5 of our Earth years, so every year, when Saturn returns to our night sky, it has shifted eastward by about 12 degrees, or one zodiac constellation every 2.5 years.

Due to the Earth's faster orbital velocity, it passes the outer planets on the inside track every year. While the Earth is overtaking them, the planets appear to reverse course and move westward in what astronomers call a retrograde loop. This year, Saturn is retrograde from March through August. You can demonstrate the yearly path of the planet, complete with retrograde loops, in SkySafari. In the Coordinates menu, change the default Horizon to Ecliptic. (This will make the planet's orbit horizontal.) Enable the Selected Object Path option and exit the Settings menu. Select and center Saturn. Its two-year path through the sky will appear, labeled with dates at intervals. If the path is obscured, switch off the ground. The center of a retrograde loop coincides with the day when the Earth is closest to the planet, also known as opposition. This year, it was June 3. Try setting this as the date. (Don't forget to revert to the Horizon coordinate setting later.)

For a more dynamic demonstration, switch off the object path, select Ecliptic coordinates, disable the Show Daylight option and hide the ground. Select and center Saturn, then open the time-flow controls and set the increment to Day. Stepping or flowing time forward and backward will reveal the planet's motions. Better yet, select and center a fixed star, such as Antares, and watch Saturn and the other planets drift through the stars. It's fun! [First Mars, Then Saturn — It's an Opposition Party! (Video)]

The first telescopes that people used to observe Saturn were extremely limited, with optics hardly better than those in today's smallest binoculars. When Galileo pointed his modest telescope at Saturn in 1610, he prepared a sketch showing the main globe of the planet bracketed by a matching pair of small moons. (After his experience with Jupiter the year before, he was used to thinking of planets having moons.)
When he subsequently viewed Saturn with better telescopes, he got the impression that Saturn had a pair of handles. Like any good astronomer, Galileo looked again from time to time. Imagine his surprise when, in the summer of 1612, the "handles" had disappeared! When the planet returned to his skies the following summer, they had returned. What was happening?

Decades later, telescope technology had improved. In 1659, the Dutch astronomer Christiaan Huygens worked out what was going on. As Saturn orbits the sun, its tilted axis of rotation points in the same direction at all times. (For Earth, this spot is near Polaris.) At Saturn's summer solstice, that spot in the sky sits beyond the sun, so Saturn is tilted in the direction of the sun. From our vantage point, which is relatively near the sun, Saturn is tipped toward the Earth, too — so we see the rings, from above, at their widest. At Saturn's winter solstice, 14.73 years later, it's tipped directly away from Earth, and we see the rings from below, again at their widest. Midway between the solstices, at Saturn's equinoxes, it tilts to the left or the right, and the thin rings vanish for us for a few weeks or more. The next time this will occur is around March 23, 2025, but you can see it for yourself right now using your favorite astronomy app. Find Saturn and center it, then set the date to March 23, 2025. (If Saturn is below the horizon, adjust the hour until it rises.) The rings will shrink to a thin line. You can also try June 1612, or another instance using the 14.73-year interval — a short script at the end of this article shows the arithmetic. This demonstration works in the Gas Giants and Saturn Moons apps, too. [Photos: Saturn's Glorious Rings Up Close]

The Cassini-Huygens mission

In 2004, the Cassini-Huygens spacecraft entered orbit around Saturn on a four-year mission to study the planet and moons in detail. The highly successful mission has been extended several times, with the spacecraft orbiting in the ring plane and making close flybys to image the main moons, swooping above the rings and planet to capture details in the rings and the embedded tiny moons that sweep out gaps in them, and imaging the polar hexagon and other weather patterns. Early in the mission, a probe named Huygens detached from Cassini and descended by parachute through Titan's thick, opaque atmosphere, landing a small science laboratory on the surface. On the way down, it imaged incredibly Earth-like landscapes complete with mountains, rivers, lakes and seas. That far from the sun's heat, all surface water is frozen solid. Instead, Titan's hydrological cycle uses liquid natural gas and other hydrocarbons: those substances rain from the clouds, carving drainage channels and flowing into seas. People have assigned creative names to the new geography. For example, the mountains are named for those in "The Lord of the Rings" books, and the lakes are named after famous Earth lakes. Select Titan in your astronomy app to call up even more information about the solar system's second-largest moon.

On Friday, July 19, 2013, the Cassini spacecraft was located on the far side of Saturn, opposite the sun and the rest of the inner planets. At 5:30 p.m. EDT, NASA encouraged everyone on Earth to turn toward Saturn and wave a greeting from our Pale Blue Dot — while it captured the most distant selfie ever taken! In the resulting true-color image, Saturn's atmosphere and rings are dramatically backlit in superb detail, while seven of Saturn's moons sit nearby.
In the background are Mars, Venus and, to the lower right, a tiny blue pixel representing the Earth and moon. [Earth From Saturn: Cassini Takes Our Picture (Video)] You can partially re-create the event using the SkySafari app. Select Saturn, open the time controls, set the date and time to match the photo (don't worry if Saturn disappears), then tap the Orbit icon. Rotate the 3D-rendered planet until the sun and the inner planets are in the background distance. If there are orbit lines cluttering the view, you can switch them off in the Settings/Solar System/Orbits menu. You won't be able to zoom in too far before Saturn fills the screen, but you'll get the idea.

The inexpensive Cassini HD app for iOS features a collection of facts about the Saturn system and an extensive gallery of images from the mission. All of the content is in the public domain, but Saturn enthusiasts might still enjoy having their favorite planet in their pocket. Saturn for iOS is another comprehensive and stylish app that presents Saturn system imagery and data in the "Star Trek" style.

The Cassini mission is due to end in 2017, when it will be de-orbited into Saturn to prevent any possibility of contaminating Saturn's moons with terrestrial microbes. We believe that little Enceladus has a global ocean of liquid saltwater under its icy crust — an environment in which life could have evolved. We have actually seen the water erupting into space through hundreds of surface fissures, and we expect that it is augmenting Saturn's rings. Before it takes its final bow, Cassini will perform one last dramatic experiment, altering its orbit to dive between the planet's cloud tops and the rings. By doing this, we'll get unmatched close-up images of both, and use gravitational perturbations on the spacecraft to measure the mass of the rings for the first time.

Saturn has been known to trigger a lifelong passion for astronomy in people who have viewed it for the first time, even through a backyard telescope. In 2010, an artist friend of mine was so moved by seeing Saturn through a telescope, 400 years after Galileo did the same thing, that she became an amateur astronomer and embarked on a 30-year project to follow and photograph it through one full orbit around the sun — merging art and science. Her initial idea has spawned related projects, including a large, multicomponent art installation entitled Imaging Saturn and a blog covering astronomy, art and more. Perhaps it will inspire you, too.

In our next edition of Mobile Astronomy, we'll look at how to operate your telescope remotely with your smartphone or tablet, and highlight some telescopes with built-in Wi-Fi connections. Until then, keep looking up!

Editor's note: Chris Vaughan is an astronomy public outreach and education specialist, and operator of the historic 1.88-meter David Dunlap Observatory telescope. This article was provided by Simulation Curriculum, the leader in space science curriculum solutions and the makers of the SkySafari app for Android and iOS. Original article on Space.com.
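As a footnote to the ring-tilt and retrograde discussions above, here is a minimal Python sketch of the arithmetic (not part of the original article). The 29.5-year orbit and the 14.73-year crossing interval come from the text; the printed crossing dates are rough, since Saturn's elliptical orbit makes the true dates wander by a year or so.

```python
# Back-of-the-envelope arithmetic for two figures quoted in the article:
# Saturn's yearly eastward drift and the ring-plane crossing interval.

SATURN_ORBIT_YEARS = 29.5          # Saturn's year, in Earth years
RING_CROSSING_INTERVAL = 14.73     # years between ring-plane crossings

# Mean eastward drift per Earth year, in degrees of ecliptic longitude
drift_per_year = 360.0 / SATURN_ORBIT_YEARS
print(f"Annual drift: {drift_per_year:.1f} degrees")               # ~12.2

# Time to cross one 30-degree-wide zodiac constellation
print(f"Years per constellation: {30.0 / drift_per_year:.1f}")     # ~2.5

# Approximate ring-plane crossing years, stepping back from the
# March 2025 crossing mentioned in the article
crossing = 2025.22  # late March 2025, as a decimal year
for _ in range(5):
    print(f"Ring-plane crossing near {crossing:.1f}")
    crossing -= RING_CROSSING_INTERVAL
```

Stepping the same subtraction back 28 intervals lands near the summer of 1612, which is why Galileo's "handles" vanished that year.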
Thomas Hobbes Facts

The English philosopher and political theorist Thomas Hobbes (1588-1679) was one of the central figures of British empiricism. His major work, "Leviathan," published in 1651, expressed his principle of materialism and his concept of a social contract forming the basis of society.

Born prematurely on April 5, 1588, when his mother heard of the impending invasion of the Spanish Armada, Thomas Hobbes later reported that "my mother gave birth to twins, myself and fear." His father was the vicar of Westport near Malmesbury in Gloucestershire. He abandoned his family to escape punishment for fighting with another clergyman "at the church door." Thereafter Thomas was raised and educated by an uncle. At local schools he became a proficient classicist, translating a Greek tragedy into Latin iambics by the time he was 14. From 1603 to 1608 he studied at Magdalen College, Oxford, where he was bored by the prevailing philosophy of Aristotelianism.

The 20-year-old future philosopher became a tutor to the Cavendish family. This virtually lifelong association with the successive earls of Devonshire provided him with an extensive private library, foreign travel, and introductions to influential people. Hobbes, however, was slow in developing his thought; his first work, a translation of Thucydides's History of the Peloponnesian Wars, did not appear until 1629. Thucydides held that knowledge of the past was useful for determining correct action, and Hobbes said that he offered the translation during a period of civil unrest as a reminder that the ancients believed democracy to be the least effective form of government.

According to his own estimate, the crucial intellectual event of Hobbes's life occurred when he was 40. While waiting for a friend he wandered into a library and chanced to find a copy of Euclid's geometry. Opening the book, he read a random proposition and exclaimed, "By God that is impossible!" Fascinated by the interconnections between axioms, postulates, and premises, he adopted the ideal of demonstrating certainty by way of deductive reasoning. His interest in mathematics is reflected in his second work, A Short Treatise on First Principles, which presents a mechanical interpretation of sensation, as well as in his brief stint as mathematics tutor to Charles II.

His generally royalist sympathy, as expressed in The Elements of Law (1640), caused Hobbes to leave England during the "Long Parliament." This was the first of many trips back and forth between England and the Continent during periods of civil strife, since he was, in his own words, "the first of all that fled." For the rest of his long life Hobbes traveled extensively and published prolifically. In France he met René Descartes and the anti-Cartesian Pierre Gassendi. In 1640 he wrote one of the sets of objections to Descartes's Meditations.

Although born into the Elizabethan Age, Hobbes outlived all of the major 17th-century thinkers. He became a sort of English institution and continued writing, offering new translations of Homer in his 80s because he had "nothing else to do." When he was past 90, he became embroiled in controversies with the Royal Society. He invited friends to suggest appropriate epitaphs and favored one that read "this is the true philosopher's stone." He died on Dec. 4, 1679, at the age of 91.
The diverse intellectual currents of the 17th century, which are generically called modern classical philosophy, began with a unanimous repudiation of the authorities of the past, especially Aristotle and the scholastic tradition. Descartes, who founded the rationalist tradition, and his contemporary Sir Francis Bacon, who is considered the originator of modern empiricism, both sought new methodologies for achieving scientific knowledge and a systematic conception of reality. Hobbes knew both of these thinkers, and his system encompassed the advantages of both rationalism and empiricism. As a logician, he believed too strongly in the power of deductive reasoning from definitions to share Bacon's exclusive enthusiasm for inductive generalizations from experience. Yet Hobbes was a more consistent empiricist and nominalist, and his attacks on the misuse of language exceed even those of Bacon. And unlike Descartes, Hobbes viewed reason as a summation of consequences rather than an innate, originative source of new knowledge.

Psychology, as the mechanics of knowing, rather than epistemology is the source of Hobbes's singularity. He was fascinated by the problem of sense perception, and he extended Galileo's mechanical physics into an explanation of human cognition. The origin of all thought is sensation, which consists of mental images produced by the pressure of motion of external objects. Thus Hobbes anticipates later thought by distinguishing between the external object and the internal image. These sense images are extended by the power of memory and imagination. Understanding and reason, which distinguish men from other animals, consist entirely in the ability to use speech. Speech is the power to transform images into words or names. Words serve as the marks of remembrance, signification, conception, or self-expression. For example, to speak of a cause-and-effect relation is merely to impose names and define their connection. When two names are so joined that the definition of one contains the other, then the proposition is true.

The implications of Hobbes's analysis are quite modern. First, there is an implicit distinction between objects and their appearance to man's senses. Consequently knowledge is discourse about appearances. Universals are merely names understood as class concepts, and they have no real status, for everything which appears "is individual and singular." Since "true and false are attributes of speech and not of things," scientific and philosophic thinking consists in using names correctly. Reason is calculation or "reckoning the consequences of general laws agreed upon for either marking or signifying." The power of the mind is the capacity to reduce consequences to general laws or theorems, either by deducing consequences from principles or by inductively reasoning from particular perceptions to general principles. The privilege of mind is subject to unfortunate abuse because, in Hobbes's pithy phrase, men turn from summarizing the consequences of things "into a reckoning of the consequences of appellations," that is, using faulty definitions, inventing terms which stand for nothing, and assuming that universals are real.

The material and mechanical model of nature offered Hobbes a consistent analogy. Man is a conditioned part of nature, and reason is neither an innate faculty nor the summation of random experience but is acquired through slow cultivation and industry.
Science is the cumulative knowledge of syllogistic reasoning which gradually reveals the dependence of one fact upon another. Such knowledge is conditionally valid and enables the mind to move progressively from abstract and simple to more particular and complex sciences: geometry, mechanics, physics, morals (the nature of mind and desire), politics.

Hobbes explains the connection between nature, man, and society through the law of inertia. A moving object continues to move until impeded by another force, and "trains of imagination" or speculation are abated only by logical demonstrations. So also man's liberty or desire to do what he wants is checked only by an equal and opposite need for security. A society or commonwealth "is but an artificial man" invented by man, and to understand polity one should merely read himself as part of nature. Such a reading is cold comfort because presocial life is characterized by Hobbes, in a famous quotation, as "solitary, poor, nasty, brutish and short." The equality of human desire is matched by an economy of natural satisfactions. Men are addicted to power because its acquisition is the only guarantee of living well. Such men live in "a state of perpetual war" driven by competition and desire for the same goods. The important consequence of this view is man's natural right and liberty to seek self-preservation by any means. In this state of nature there is no value above self-interest, because where there is no common, coercive power there is no law and no justice. But there is a second and derivative law of nature: that men may surrender or transfer their individual will to the state. This "social contract" binds the individual to treat others as he expects to be treated by them. Only a constituted civil power commands sufficient force to compel everyone to fulfill this original compact by which men exchange liberty for security.

In Hobbes's view the sovereign power of a commonwealth is absolute and not subject to the laws and obligations of citizens. Obedience remains as long as the sovereign fulfills the social compact by protecting the rights of the individual. Consequently rebellion is unjust, by definition, but should the cause of revolution prevail, a new absolute sovereignty is created.

Further Reading on Thomas Hobbes

The standard edition is The English Works of Thomas Hobbes, edited by Sir William Molesworth (11 vols., 1839-1845). In addition see The Elements of Law, Natural and Politic, edited by Ferdinand Tönnies (1928); Body, Mind and Citizen, edited by Richard S. Peters (1962); and Leviathan, edited by Michael Oakeshott (1962). There is a wealth of good secondary literature available. John Aubrey included a biography of his friend Hobbes in Brief Lives, edited by Oliver Lawson Dick (1950). Leo Strauss, The Political Philosophy of Hobbes: Its Basis and Genesis (trans. 1936); Leslie Stephen, Hobbes (1904); and Richard Peters, Hobbes (1956), are excellent studies. Consult also John Laird, Hobbes (1934); Clarence DeWitt Thorpe, The Aesthetic Theory of Thomas Hobbes (1940); John Bowle, Hobbes and His Critics: A Study in Seventeenth Century Constitutionalism (1952); Samuel I. Mintz, The Hunting of Leviathan: Seventeenth-century Reactions to the Materialism and Moral Philosophy of Thomas Hobbes (1962); C. B. Macpherson, The Political Theory of Possessive Individualism: Hobbes to Locke (1962); J. W. N. Watkins, Hobbes's System of Ideas: A Study in the Political Significance of Philosophical Theories (1965); and F. S.
McNeilly, The Anatomy of Leviathan (1968).
In knitting, the word gauge is used both in hand knitting and machine knitting; in the latter, the technical abbreviation GG refers to the fineness of knitting machines. In both cases, the term refers to the number of stitches per inch, not the size of the finished garment. In both cases, the gauge is measured by counting the number of stitches (in hand knitting) or the number of needles (on a knitting machine bed) over several inches, then dividing by the number of inches in the width of the sample.

Gauge on knitting machines

There are two classifications of knitting gauge, or units of measure:
- A – Used for cotton fully-fashioned flat machines (Bentley-Monk, Textima, Sheller, etc.), where gauge is measured over 1.5 inches (2.54 cm × 1.5) and the machine's gauge is expressed by the number of needles needed to achieve that gauge.
- B – Used for hand, mechanical or modern electronic flat machines (Stoll, Shima, Protti, etc.), where gauge is measured in 1-inch (2.54 cm) increments and the machine's gauge is similarly measured by the number of needles required to achieve that number.

Comparing the graduation scales of the A and B gauge (GG) systems: a 30 GG (A) cotton fully-fashioned flat machine (30 needles in 1.5 inches) is comparable to a 20 GG (B) electronic flat machine, a 27 GG (A) is an 18 GG (B), an 18 GG (A) is a 12 GG (B), a 12 GG (A) is an 8 GG (B), a 7.5 GG (A) is a 5 GG (B), and a 4.5 GG (A) is a 3 GG (B).

Factors that affect knitting gauge

The gauge of a knitted fabric depends on the pattern of stitches in the fabric, the kind of yarn, the size of knitting needles, and the tension of the individual knitter (i.e., how much yarn he or she allows between stitches).
- For example, ribbing and cable patterns tend to "pull in," giving more stitches over an identical width than stockinette, garter, or seed stitch. Even the same stitch produced in two different ways may produce a different gauge.
- Thicker yarns with less loft generally produce larger stitches than thinner yarns (reducing the number of stitches per width and length).
- Larger knitting needles also produce larger stitches, giving fewer stitches and rows per inch; changing needle size is the best way to control one's own gauge for a given pattern and yarn.
- Finally, the knitter's tension, or how tightly one knits, can affect the gauge significantly. The gauge can even vary within a single garment, typically with beginning knitters; as knitters become more familiar with a stitch pattern, they become more relaxed and make the stitch differently, producing a different gauge.

Sometimes the gauge is deliberately altered within a garment, usually by changing needle size; for example, smaller stitches are often made at the collar, sleeve cuffs, hemline ribbing or pocket edges. Uneven knitting is a knitting technique in which two knitting needles of different sizes are used. The method is sometimes used when the knitter has a significantly different gauge on knit and purl stitches. It is also useful for producing elongated stitches and certain specialty patterns.

Knitting gauge in patterns

To produce a knitted garment of given dimensions, whether from one's own design or from a published pattern, the gauge should match as closely as possible; significant differences in gauge will lead to a deformed garment. Patterns for knitting projects almost always include a suggested gauge for the project. For illustration, suppose that a sweater is designed to measure 40" around the bustline with a gauge of 5 st/inch in the chosen stitch.
Therefore, the pattern should call for 200 stitches (5 st/inch × 40") at the bustline. If the knitter follows the pattern with a gauge of 4 st/inch, the sweater will measure 50" around the bustline (200 st ÷ 4 st/inch) -- too baggy! Conversely, if the knitter follows the pattern with a gauge of 6 st/inch, the sweater will measure ~33" around the bustline (200 st ÷ 6 st/inch) -- too tight! (The short script at the end of this article illustrates the arithmetic.) Generally, the gauge should match to better than 5%, corresponding to 1" of ease in a 20" width. Similar concerns apply to the number of rows per inch. Luckily, the gauge can be adjusted by changing needle size, without changing the pattern, stitch, yarn, or habits of the knitter. Larger needles produce a smaller gauge (fewer stitches per inch) and smaller needles produce a larger gauge (more stitches per inch). If necessary, further adjustments can be made by subtly altering the pattern dimensions, e.g., shortening a vertically aligned pattern. Ribbing can also be used to "draw in" the fabric to the proper gauge.

Measuring knitting gauge

To check one's gauge before starting a project, a sample of knitting (a swatch) is made, ideally in the stitch pattern used in the garment. The swatch edges affect the reading of the gauge, so it's best that the swatch be at least 4" square, and more safely 6-8" square. Dividing the number of stitches used by the actual size of the sample gives the stitch gauge of that sample. Similarly, the row gauge is calculated by dividing the number of rows knitted by the length of the sample. Making a swatch also helps familiarize the knitter with the stitch pattern and yarn, which will lead to a more uniform gauge in the final garment.
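Since the gauge rules above are simple arithmetic, they are easy to sanity-check in code. The following short Python sketch is not part of the original article, and the function names are illustrative rather than taken from any knitting software; it simply restates the sweater example and the A-versus-B machine gauge comparison given above:

```python
# Gauge arithmetic from the article, as a small self-contained script.

def stitches_for_width(width_inches: float, gauge_st_per_inch: float) -> int:
    """Stitches to cast on for a target width at a given stitch gauge."""
    return round(width_inches * gauge_st_per_inch)

def finished_width(stitch_count: int, gauge_st_per_inch: float) -> float:
    """Width a fixed stitch count produces at the knitter's actual gauge."""
    return stitch_count / gauge_st_per_inch

def machine_gauge_a_to_b(gauge_a: float) -> float:
    """Convert machine gauge from the A system (needles per 1.5 in)
    to the B system (needles per 1 in): B = A / 1.5."""
    return gauge_a / 1.5

# The sweater example: 40" bust at 5 st/inch -> 200 stitches
cast_on = stitches_for_width(40, 5)
print(cast_on)                                # 200
print(finished_width(cast_on, 4))             # 50.0 -> too baggy
print(round(finished_width(cast_on, 6), 1))   # 33.3 -> too tight

# The A-vs-B comparison: 30->20, 27->18, 18->12, 12->8, 7.5->5, 4.5->3
for a in (30, 27, 18, 12, 7.5, 4.5):
    print(a, "GG (A) ->", machine_gauge_a_to_b(a), "GG (B)")
```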
The study of bacterial virulence often requires a suitable animal model. Mammalian models of infection are costly and may raise ethical issues. The use of insects as infection models provides a valuable alternative. Compared to other non-vertebrate model hosts such as nematodes, insects have a relatively advanced system of antimicrobial defenses and are thus more likely to produce information relevant to the mammalian infection process. Like mammals, insects possess a complex innate immune system1. Cells in the hemolymph are capable of phagocytosing or encapsulating microbial invaders, and humoral responses include the inducible production of lysozyme and small antibacterial peptides2,3. In addition, analogies are found between the epithelial cells of insect larval midguts and intestinal cells of mammalian digestive systems. Finally, several basic components essential for the bacterial infection process, such as cell adhesion, resistance to antimicrobial peptides, tissue degradation and adaptation to oxidative stress, are likely to be important in both insects and mammals1. Thus, insects are polyvalent tools for the identification and characterization of microbial virulence factors involved in mammalian infections.

Larvae of the greater wax moth Galleria mellonella have been shown to provide a useful insight into the pathogenesis of a wide range of microbial infections, including mammalian fungal pathogens (Fusarium oxysporum, Aspergillus fumigatus, Candida albicans) and bacterial pathogens such as Staphylococcus aureus, Proteus vulgaris, Serratia marcescens, Pseudomonas aeruginosa, Listeria monocytogenes or Enterococcus faecalis4-7. Regardless of the bacterial species, results obtained with Galleria larvae infected by direct injection through the cuticle consistently correlate with those of similar mammalian studies: bacterial strains that are attenuated in mammalian models demonstrate lower virulence in Galleria, and strains causing severe human infections are also highly virulent in the Galleria model8-11. Oral infection of Galleria is much less used, and additional compounds, like specific toxins, are needed to reach mortality.

G. mellonella larvae present several technical advantages: they are relatively large (last-instar larvae before pupation are about 2 cm long and weigh 250 mg), thus enabling the injection of defined doses of bacteria; they can be reared at various temperatures (20 °C to 30 °C); and infection studies can be conducted from 15 °C to above 37 °C12,13, allowing experiments that mimic a mammalian environment. In addition, insect rearing is easy and relatively cheap. Infection of the larvae allows monitoring of bacterial virulence by several means, including calculation of the LD50 (ref. 14), measurement of bacterial survival15,16 and examination of the infection process17. Here, we describe the rearing of the insects, covering all life stages of G. mellonella. We provide a detailed protocol of infection by two routes of inoculation: oral and intrahaemocoelic. The bacterial model used in this protocol is Bacillus cereus, a Gram-positive pathogen implicated in gastrointestinal as well as other severe local or systemic opportunistic infections18,19.

Related JoVE Articles

Determination of the Gas-phase Acidities of Oligopeptides
Institutions: University of the Pacific.

Amino acid residues located at different positions in folded proteins often exhibit different degrees of acidity.
For example, a cysteine residue located at or near the N-terminus of a helix is often more acidic than one at or near the C-terminus1-6. Although extensive experimental studies on the acid-base properties of peptides have been carried out in the condensed phase, in particular in aqueous solutions6-8, the results are often complicated by solvent effects7. In fact, most of the active sites in proteins are located near the interior region, where solvent effects have been minimized9,10. In order to understand the intrinsic acid-base properties of peptides and proteins, it is important to perform the studies in a solvent-free environment. We present a method to measure the acidities of oligopeptides in the gas phase. We use a cysteine-containing oligopeptide, Ala3CH, as the model compound. The measurements are based on the well-established extended Cooks kinetic method (Figure 1). The experiments are carried out using a triple-quadrupole mass spectrometer interfaced with an electrospray ionization (ESI) ion source (Figure 2). For each peptide sample, several reference acids are selected. The reference acids are structurally similar organic compounds with known gas-phase acidities. A solution of the mixture of the peptide and a reference acid is introduced into the mass spectrometer, and a gas-phase proton-bound anionic cluster of peptide-reference acid is formed. The proton-bound cluster is mass-isolated and subsequently fragmented via collision-induced dissociation (CID) experiments. The resulting fragment ion abundances are analyzed using a relationship between the acidities and the cluster ion dissociation kinetics. The gas-phase acidity of the peptide is then obtained by linear regression of the thermo-kinetic plots17,18. The method can be applied to a variety of molecular systems, including organic compounds, amino acids and their derivatives, oligonucleotides, and oligopeptides. By comparing the gas-phase acidities measured experimentally with those values calculated for different conformers, conformational effects on the acidities can be evaluated.

Chemistry, Issue 76, Biochemistry, Molecular Biology, Oligopeptide, gas-phase acidity, kinetic method, collision-induced dissociation, triple-quadrupole mass spectrometry, oligopeptides, peptides, mass spectrometry, MS

Design and Use of Multiplexed Chemostat Arrays
Institutions: University of Washington.

Chemostats are continuous culture systems in which cells are grown in a tightly controlled, chemically constant environment where culture density is constrained by limiting specific nutrients1,2. Data from chemostats are highly reproducible for the measurement of quantitative phenotypes, as they provide a constant growth rate and environment at steady state. For these reasons, chemostats have become useful tools for fine-scale characterization of physiology through analysis of gene expression3-6 and other characteristics of cultures at steady-state equilibrium7. Long-term experiments in chemostats can highlight specific trajectories that microbial populations adopt during adaptive evolution in a controlled environment. In fact, chemostats have been used for experimental evolution since their invention8. A common result in evolution experiments is for each biological replicate to acquire a unique repertoire of mutations9-13. This diversity suggests that there is much left to be discovered by performing evolution experiments with far greater throughput.
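As a brief aside that is not part of the original abstract, the steady-state property that makes chemostat data so reproducible can be stated in one line. With the culture volume V held constant and fresh medium supplied at flow rate F, the dilution rate D equals F/V, and at steady state the population's specific growth rate μ is pinned to it:

```latex
% Standard chemostat steady-state relation (textbook result, not from the abstract)
D = \frac{F}{V}, \qquad \mu = D \quad \text{at steady state}
```

For example, a 20 ml vessel fed at a hypothetical 4 ml/hr gives D = 0.2 hr⁻¹ and hence a doubling time of ln(2)/0.2 ≈ 3.5 hr; the experimenter sets the growth rate simply by setting the pump.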
We present here the design and operation of a relatively simple, low-cost array of miniature chemostats—or ministats—and validate their use in determination of physiology and in evolution experiments with yeast. This approach entails growth of tens of chemostats run off a single multiplexed peristaltic pump. The cultures are maintained at a 20 ml working volume, which is practical for a variety of applications. It is our hope that increasing throughput, decreasing expense, and providing detailed building and operation instructions may also motivate research and industrial application of this design as a general platform for functionally characterizing large numbers of strains, species, and growth parameters, as well as genetic or drug libraries.

Genetics, Issue 72, Molecular Biology, Microbiology, Biochemistry, Cellular Biology, Basic Protocols, Genomics, Eukaryota, Bacteria, Biological Phenomena, Metabolic Phenomena, Genetic Phenomena, Microbiological Phenomena, Life sciences, chemostat, evolution, experimental evolution, Ministat, yeast, E. coli, Physiology, Continuous culture, high throughput, arrays, cell culture

Measuring Fluxes of Mineral Nutrients and Toxicants in Plants with Radioactive Tracers
Institutions: University of Toronto.

Unidirectional influx and efflux of nutrients and toxicants, and their resultant net fluxes, are central to the nutrition and toxicology of plants. Radioisotope tracing is a major technique used to measure such fluxes, both within plants, and between plants and their environments. Flux data obtained with radiotracer protocols can help elucidate the capacity, mechanism, regulation, and energetics of transport systems for specific mineral nutrients or toxicants, and can provide insight into compartmentation and turnover rates of subcellular mineral and metabolite pools. Here, we describe two major radioisotope protocols used in plant biology: direct influx (DI) and compartmental analysis by tracer efflux (CATE). We focus on flux measurement of potassium (K+) as a nutrient, and ammonia/ammonium (NH3/NH4+) as a toxicant, in intact seedlings of the model species barley (Hordeum vulgare L.). These protocols can be readily adapted to other experimental systems (e.g., different species, excised plant material, and other nutrients/toxicants). Advantages and limitations of these protocols are discussed.

Environmental Sciences, Issue 90, influx, efflux, net flux, compartmental analysis, radiotracers, potassium, ammonia, ammonium

A High-throughput Automated Platform for the Development of Manufacturing Cell Lines for Protein Therapeutics
Institutions: Merck & Co., Inc.

The fast-growing biopharmaceutical industry demands speedy development of highly efficient and reliable production systems to meet the increasing requirement for drug supplies. The generation of production cell lines has traditionally involved manual operations that are labor-intensive, low-throughput and vulnerable to human errors. We report here an integrated high-throughput and automated platform for development of manufacturing cell lines for the production of protein therapeutics. The combination of a BD FACSAria cell sorter, CloneSelect Imager and TECAN Freedom EVO liquid handling system has enabled a high-throughput and more efficient cell line development process. In this operation, production host cells are first transfected with an expression vector carrying the gene of interest1, followed by treatment with a selection agent.
The stably transfected cells are then stained with a fluorescence-labeled anti-human IgG antibody and subsequently subjected to flow cytometry analysis2-4. Highly productive cells are selected based on fluorescence intensity and are isolated by single-cell sorting on a BD FACSAria. Colony formation from the single-cell stage is detected microscopically, and a series of time-lapse digital images is taken by the CloneSelect Imager to document cell line history. After single clones have formed, they are screened for productivity by ELISA performed on a TECAN Freedom EVO liquid handling system. Approximately 2,000 - 10,000 clones can be screened per operation cycle with the current system setup. This integrated approach has been used to generate high-producing Chinese hamster ovary (CHO) cell lines for the production of therapeutic monoclonal antibodies (mAbs) as well as their fusion proteins. With the aid of different types of detecting probes, the method can be used for developing other protein therapeutics or be applied to other production host systems. Compared to the traditional manual procedure, this automated platform demonstrates the advantages of significantly increased capacity, ensured clonality, traceability of cell line history with electronic documentation, and a much reduced opportunity for operator error.

Medicine, Issue 55, Manufacturing cell line, protein therapeutics, automation, high-throughput, FACS, FACS Aria, CloneSelect Imager, TECAN Freedom EVO liquid handling system

The Portable Chemical Sterilizer (PCS), D-FENS, and D-FEND ALL: Novel Chlorine Dioxide Decontamination Technologies for the Military
Institutions: United States Army-Natick Soldier RD&E Center, Warfighter Directorate, University of Connecticut Health Center, Lawrence Livermore National Laboratory, Children's Hospital Oakland Research Institute.

There is a stated Army need for a field-portable, non-steam sterilizer technology that can be used by Forward Surgical Teams, Dental Companies, Veterinary Service Support Detachments, Combat Support Hospitals, and Area Medical Laboratories to sterilize surgical instruments and to sterilize pathological specimens prior to disposal in operating rooms, emergency treatment areas, and intensive care units. The following ensemble of novel, 'clean and green' chlorine dioxide technologies is versatile and flexible enough to adapt to meet a number of critical military needs for decontamination6,15. Specifically, the Portable Chemical Sterilizer (PCS) was invented to meet urgent battlefield needs and close critical capability gaps for energy-independence, lightweight portability, rapid mobility, and rugged durability in high-intensity forward deployments3. As a revolutionary technological breakthrough in surgical sterilization technology, the PCS is a Modern Field Autoclave that relies on on-site, point-of-use, at-will generation of chlorine dioxide instead of steam. Two (2) PCS units sterilize 4 surgical trays in 1 hr, which is the equivalent throughput of one large steam autoclave (nicknamed "Bertha" in deployments because of its cumbersome size, bulky dimensions, and weight). However, the PCS operates using 100% less electricity (0 vs. 9 kW) and 98% less water (10 vs. 640 oz.), significantly reduces weight by 95% (20 vs. 450 lbs, a 4-man lift) and cube by 96% (2.1 vs.
60.2 ft3), and virtually eliminates the challenges that steam autoclaves pose in forward deployments: repairs and maintenance, lifting and transport, and the electrical power they require.

Bioengineering, Issue 88, chlorine dioxide, novel technologies, D-FENS, PCS, and D-FEND ALL, sterilization, decontamination, fresh produce safety

A New Approach for the Comparative Analysis of Multiprotein Complexes Based on 15N Metabolic Labeling and Quantitative Mass Spectrometry
Institutions: University of Münster, Carnegie Institution for Science.

The introduced protocol provides a tool for the analysis of multiprotein complexes in the thylakoid membrane, revealing insights into complex composition under different conditions. In this protocol the approach is demonstrated by comparing the composition of the protein complex responsible for cyclic electron flow (CEF) in Chlamydomonas reinhardtii, isolated from genetically different strains. The procedure comprises the isolation of thylakoid membranes, followed by their separation into multiprotein complexes by sucrose density gradient centrifugation, SDS-PAGE, immunodetection and comparative, quantitative mass spectrometry (MS) based on differential metabolic labeling (14N/15N) of the analyzed strains. Detergent-solubilized thylakoid membranes are loaded on sucrose density gradients at equal chlorophyll concentration. After ultracentrifugation, the gradients are separated into fractions, which are analyzed by mass spectrometry based on equal volume. This approach allows the investigation of the composition within the gradient fractions and, moreover, analysis of the migration behavior of different proteins, especially focusing on ANR1, CAS, and PGRL1. Furthermore, this method is demonstrated by confirming the results with immunoblotting and additionally by supporting the findings from previous studies (the identification and PSI-dependent migration of proteins that were previously described to be part of the CEF supercomplex, such as PGRL1, FNR, and cyt f). Notably, this approach is applicable to address a broad range of questions for which this protocol can be adopted, e.g., for comparative analyses of multiprotein complex composition isolated from distinct environmental conditions.

Microbiology, Issue 85, Sucrose density gradients, Chlamydomonas, multiprotein complexes, 15N metabolic labeling, thylakoids

Metabolic Labeling and Membrane Fractionation for Comparative Proteomic Analysis of Arabidopsis thaliana Suspension Cell Cultures
Institutions: Max Planck Institute of Molecular Plant Physiology, University of Hohenheim.

Plasma membrane microdomains are features based on the physical properties of the lipid and sterol environment and have particular roles in signaling processes. Extracting sterol-enriched membrane microdomains from plant cells for proteomic analysis is a difficult task, mainly due to multiple preparation steps and sources of contamination from other cellular compartments. The plasma membrane constitutes only about 5-20% of all the membranes in a plant cell, and therefore isolation of a highly purified plasma membrane fraction is challenging. A frequently used method involves aqueous two-phase partitioning in polyethylene glycol and dextran, which yields plasma membrane vesicles with a purity of 95%1. Sterol-rich membrane microdomains within the plasma membrane are insoluble upon treatment with cold nonionic detergents at alkaline pH.
This detergent-resistant membrane fraction can be separated from the bulk plasma membrane by ultracentrifugation in a sucrose gradient2. Subsequently, proteins can be extracted from the low-density band of the sucrose gradient by methanol/chloroform precipitation. The extracted protein is then trypsin-digested, desalted and finally analyzed by LC-MS/MS. Our extraction protocol for sterol-rich microdomains is optimized for the preparation of clean detergent-resistant membrane fractions from Arabidopsis thaliana. We use full metabolic labeling of Arabidopsis thaliana suspension cell cultures with K15NO3 as the only nitrogen source for quantitative comparative proteomic studies following a biological treatment of interest3. By mixing equal ratios of labeled and unlabeled cell cultures for joint protein extraction, the influence of preparation steps on the final quantitative result is kept to a minimum. Loss of material during extraction will likewise affect both control and treatment samples in the same way, and therefore the ratio of light and heavy peptides will remain constant. In the proposed method, either the labeled or the unlabeled cell culture undergoes a biological treatment, while the other serves as control4.

Empty Value, Issue 79, Cellular Structures, Plants, Genetically Modified, Arabidopsis, Membrane Lipids, Intracellular Signaling Peptides and Proteins, Membrane Proteins, Isotope Labeling, Proteomics, plants, Arabidopsis thaliana, metabolic labeling, stable isotope labeling, suspension cell cultures, plasma membrane fractionation, two phase system, detergent resistant membranes (DRM), mass spectrometry, membrane microdomains, quantitative proteomics

Live Imaging Assay for Assessing the Roles of Ca2+ and Sphingomyelinase in the Repair of Pore-forming Toxin Wounds
Institutions: University of Maryland.

Plasma membrane injury is a frequent event, and wounds have to be rapidly repaired to ensure cellular survival. Influx of Ca2+ is a key signaling event that triggers the repair of mechanical wounds on the plasma membrane within ~30 sec. Recent studies revealed that mammalian cells also reseal their plasma membrane after permeabilization with pore-forming toxins in a Ca2+-dependent process that involves exocytosis of the lysosomal enzyme acid sphingomyelinase followed by pore endocytosis. Here, we describe the methodology used to demonstrate that the resealing of cells permeabilized by the toxin streptolysin O is also rapid and dependent on Ca2+ influx. The assay design allows synchronization of the injury event and a precise kinetic measurement of the ability of cells to restore plasma membrane integrity by imaging and quantifying the extent to which the lipophilic dye FM1-43 reaches intracellular membranes. This live assay also allows a sensitive assessment of the ability of exogenously added soluble factors such as sphingomyelinase to inhibit FM1-43 influx, reflecting the ability of cells to repair their plasma membrane.
This assay allowed us to show for the first time that sphingomyelinase acts downstream of Ca2+-dependent exocytosis, since extracellular addition of the enzyme promotes resealing of cells permeabilized in the absence of Ca2+.

Cellular Biology, Issue 78, Molecular Biology, Infection, Medicine, Immunology, Biomedical Engineering, Anatomy, Physiology, Biophysics, Genetics, Bacterial Toxins, Microscopy, Video, Endocytosis, Biology, Cell Biology, streptolysin O, plasma membrane repair, ceramide, endocytosis, Ca2+, wounds

Isolation of Cellular Lipid Droplets: Two Purification Techniques Starting from Yeast Cells and Human Placentas

Institutions: University of Tennessee, University of Tennessee.

Lipid droplets are dynamic organelles that can be found in most eukaryotic and certain prokaryotic cells. Structurally, the droplets consist of a core of neutral lipids surrounded by a phospholipid monolayer. One of the most useful techniques in determining the cellular roles of droplets has been proteomic identification of bound proteins, which can be isolated along with the droplets. Here, two methods are described to isolate lipid droplets and their bound proteins from two wide-ranging eukaryotes: fission yeast and human placental villous cells. Although the two techniques differ, the main method - density gradient centrifugation - is shared by both preparations. This shows the wide applicability of the presented droplet isolation techniques.

In the first protocol, yeast cells are converted into spheroplasts by enzymatic digestion of their cell walls. The resulting spheroplasts are then gently lysed in a loose-fitting homogenizer. Ficoll is added to the lysate to provide a density gradient, and the mixture is centrifuged three times. After the first spin, the lipid droplets are localized to the white-colored floating layer of the centrifuge tubes along with the endoplasmic reticulum (ER), the plasma membrane, and vacuoles. Two subsequent spins are used to remove these other three organelles. The result is a layer that contains only droplets and bound proteins.

In the second protocol, placental villous cells are isolated from human term placentas by enzymatic digestion with trypsin and DNase I. The cells are homogenized in a loose-fitting homogenizer. Low-speed and medium-speed centrifugation steps are used to remove unbroken cells, cellular debris, nuclei, and mitochondria. Sucrose is added to the homogenate to provide a density gradient and the mixture is centrifuged to separate the lipid droplets from the other cellular fractions. The purity of the lipid droplets in both protocols is confirmed by western blot analysis. The droplet fractions from both preps are suitable for subsequent proteomic and lipidomic analysis.

Bioengineering, Issue 86, Lipid droplet, lipid body, fat body, oil body, Yeast, placenta, placental villous cells, isolation, purification, density gradient centrifugation

From a 2DE-Gel Spot to Protein Function: Lesson Learned From HS1 in Chronic Lymphocytic Leukemia

Institutions: IRCCS, San Raffaele Scientific Institute, King's College London, IFOM, FIRC Institute of Molecular Oncology, Università Vita-Salute San Raffaele.

The identification of molecules involved in tumor initiation and progression is fundamental for understanding a disease's biology and, as a consequence, for the clinical management of patients. In the present work we will describe an optimized proteomic approach for the identification of molecules involved in the progression of Chronic Lymphocytic Leukemia (CLL).
In detail, leukemic cell lysates are resolved by 2-dimensional electrophoresis (2DE) and visualized as "spots" on the 2DE gels. Comparative analysis of proteomic maps allows the identification of differentially expressed proteins (in terms of abundance and post-translational modifications) that are picked, isolated and identified by mass spectrometry (MS). The biological function of the identified candidates can be tested by different assays (i.e. migration, adhesion and F-actin polymerization) that we have optimized for primary leukemic cells.

Medicine, Issue 92, Lymphocytes, Chronic Lymphocytic Leukemia, 2D Electrophoresis, Mass Spectrometry, Cytoskeleton, Migration

A New Screening Method for the Directed Evolution of Thermostable Bacteriolytic Enzymes

Institutions: University of Maryland.

Directed evolution is defined as a method to harness natural selection in order to engineer proteins to acquire particular properties that are not associated with the protein in nature. The literature has provided numerous examples regarding the implementation of directed evolution to successfully alter molecular specificity and catalysis1. The primary advantage of utilizing directed evolution instead of more rational-based approaches for molecular engineering relates to the volume and diversity of variants that can be screened2. One possible application of directed evolution involves improving the structural stability of bacteriolytic enzymes, such as endolysins. Bacteriophage encode and express endolysins to hydrolyze a critical covalent bond in the peptidoglycan (i.e. cell wall) of bacteria, resulting in host cell lysis and liberation of progeny virions. Notably, these enzymes possess the ability to extrinsically induce lysis of susceptible bacteria in the absence of phage, and furthermore have been validated both in vitro and in vivo for their therapeutic potential3-5. The subject of our directed evolution study is the PlyC endolysin, which is composed of PlyCA and PlyCB subunits6. When purified and added extrinsically, the PlyC holoenzyme lyses group A streptococci (GAS) as well as other streptococcal groups in a matter of seconds, and has likewise been validated in vivo. Significantly, monitoring residual enzyme kinetics after elevated-temperature incubation provides distinct evidence that PlyC loses lytic activity abruptly at 45 °C, suggesting a short therapeutic shelf life, which may limit further development of this enzyme. Further studies reveal that the lack of thermal stability is only observed for the PlyCA subunit, whereas the PlyCB subunit is stable up to ~90 °C (unpublished observation). In addition to PlyC, there are several examples in the literature that describe the thermolabile nature of endolysins. For example, the Staphylococcus aureus endolysin LysK and the Streptococcus pneumoniae endolysins Cpl-1 and Pal lose activity spontaneously at 42 °C, 43.5 °C and 50.2 °C, respectively8-10. According to the Arrhenius equation, which relates the rate of a chemical reaction to the temperature of the system, an increase in thermostability will correlate with an increase in shelf life expectancy11. Toward this end, directed evolution has been shown to be a useful tool for altering the thermal activity of various molecules in nature, but never has this particular technology been exploited successfully for the study of bacteriolytic enzymes. Likewise, successful accounts of improving the structural stability of this particular class of antimicrobials are altogether nonexistent.
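The Arrhenius argument can be made concrete with a small illustrative calculation; the activation energy below is an assumed placeholder, not a measured value for PlyC or any other endolysin.

```python
# Illustrative only: the Arrhenius equation k = A*exp(-Ea/(R*T)) implies that
# the ratio of rates at two temperatures depends only on Ea and the two T's.
# Ea here is an assumed round number for demonstration.
import math

R = 8.314    # gas constant, J/(mol*K)
Ea = 100e3   # assumed activation energy, J/mol

def rate_ratio(T1_C, T2_C):
    """k(T2)/k(T1) for a process obeying Arrhenius kinetics."""
    T1, T2 = T1_C + 273.15, T2_C + 273.15
    return math.exp(-Ea / R * (1.0 / T2 - 1.0 / T1))

print(rate_ratio(25.0, 45.0))   # ~12.6: inactivation roughly 13x faster at 45 degC
```

Under these assumptions, a variant whose inactivation rate is halved at a given temperature roughly doubles its shelf life there.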
In this video, we employ a novel methodology that uses an error-prone DNA polymerase followed by an optimized screening process in a 96-well microtiter plate format to identify mutations in the PlyCA subunit of the PlyC streptococcal endolysin that correlate with an increase in enzyme kinetic stability (Figure 1). Results after just one round of random mutagenesis suggest the methodology is generating PlyC variants that retain more than twice the residual activity of wild-type (WT) PlyC after elevated-temperature treatment.

Immunology, Issue 69, Molecular Biology, Genetics, Microbiology, directed evolution, thermal behavior, thermostability, endolysin, enzybiotic, bacteriolytic, antimicrobial, therapeutic, PlyC

Preparation of Primary Myogenic Precursor Cell/Myoblast Cultures from Basal Vertebrate Lineages

Institutions: University of Alabama at Birmingham, INRA UR1067, INRA UR1037.

Due to the inherent difficulty and time involved with studying the myogenic program in vivo, primary culture systems derived from the resident adult stem cells of skeletal muscle, the myogenic precursor cells (MPCs), have proven indispensable to our understanding of mammalian skeletal muscle development and growth. Particularly among the basal taxa of Vertebrata, however, data are limited describing the molecular mechanisms controlling the self-renewal, proliferation, and differentiation of MPCs. Of particular interest are potential mechanisms that underlie the ability of basal vertebrates to undergo considerable postlarval skeletal myofiber hyperplasia (i.e. teleost fish) and full regeneration following appendage loss (i.e. urodele amphibians). Additionally, the use of cultured myoblasts could aid in understanding regeneration, the recapitulation of the myogenic program, and the differences between them. To this end, we describe in detail a robust and efficient protocol (and variations therein) for isolating and maintaining MPCs and their progeny, myoblasts and immature myotubes, in cell culture as a platform for understanding the evolution of the myogenic program, beginning with the more basal vertebrates. Capitalizing on the model organism status of the zebrafish (Danio rerio), we report on the application of this protocol to small fishes of the cyprinid clade Danioninae. In tandem, this protocol can be utilized to realize a broader comparative approach by isolating MPCs from the Mexican axolotl (Ambystoma mexicanum) and even laboratory rodents. This protocol is now widely used in studying myogenesis in several fish species, including rainbow trout, salmon, and sea bream1-4.

Basic Protocol, Issue 86, myogenesis, zebrafish, myoblast, cell culture, giant danio, moustached danio, myotubes, proliferation, differentiation, Danioninae, axolotl

Using Coculture to Detect Chemically Mediated Interspecies Interactions

Institutions: University of North Carolina at Chapel Hill.

In nature, bacteria rarely exist in isolation; they are instead surrounded by a diverse array of other microorganisms that alter the local environment by secreting metabolites. These metabolites have the potential to modulate the physiology and differentiation of their microbial neighbors and are likely important factors in the establishment and maintenance of complex microbial communities.
We have developed a fluorescence-based coculture screen to identify such chemically mediated microbial interactions. The screen involves combining a fluorescent transcriptional reporter strain with environmental microbes on solid media and allowing the colonies to grow in coculture. The fluorescent transcriptional reporter is designed so that the chosen bacterial strain fluoresces when it is expressing a particular phenotype of interest (i.e. biofilm formation, sporulation, virulence factor production, etc.). Screening is performed under growth conditions where this phenotype is not expressed (and therefore the reporter strain is typically nonfluorescent). When an environmental microbe secretes a metabolite that activates this phenotype, it diffuses through the agar and activates the fluorescent reporter construct. This allows the inducing-metabolite-producing microbes to be detected: they are the nonfluorescent colonies most proximal to the fluorescent colonies. Thus, this screen allows the identification of environmental microbes that produce diffusible metabolites that activate a particular physiological response in a reporter strain. This publication discusses how to: a) select appropriate coculture screening conditions, b) prepare the reporter and environmental microbes for screening, c) perform the coculture screen, d) isolate putative inducing organisms, and e) confirm their activity in a secondary screen. We developed this method to screen for soil organisms that activate biofilm matrix production in Bacillus subtilis; however, we also discuss considerations for applying this approach to other genetically tractable bacteria.

Microbiology, Issue 80, High-Throughput Screening Assays, Genes, Reporter, Microbial Interactions, Soil Microbiology, Coculture, microbial interactions, screen, fluorescent transcriptional reporters, Bacillus subtilis

Monitoring Intraspecies Competition in a Bacterial Cell Population by Cocultivation of Fluorescently Labelled Strains

Institutions: Georg-August University.

Many microorganisms such as bacteria proliferate extremely fast and their populations may reach high cell densities. Small fractions of cells in a population always have accumulated mutations that are either detrimental or beneficial for the cell. If the fitness effect of a mutation provides the subpopulation with a strong selective growth advantage, the individuals of this subpopulation may rapidly outcompete and even completely eliminate their immediate fellows. Thus, small genetic changes and selection-driven accumulation of cells that have acquired beneficial mutations may lead to a complete shift of the genotype of a cell population. Here we present a procedure to monitor the rapid clonal expansion and elimination of beneficial and detrimental mutations, respectively, in a bacterial cell population over time by cocultivation of fluorescently labeled individuals of the Gram-positive model bacterium Bacillus subtilis. The method is easy to perform and very illustrative for displaying intraspecies competition among the individuals in a bacterial cell population.

Cellular Biology, Issue 83, Bacillus subtilis, evolution, adaptation, selective pressure, beneficial mutation, intraspecies competition, fluorophore-labelling, Fluorescence Microscopy

Setting-up an In Vitro Model of Rat Blood-brain Barrier (BBB): A Focus on BBB Impermeability and Receptor-mediated Transport

Institutions: VECT-HORUS SAS, CNRS, NICN UMR 7259.
The blood-brain barrier (BBB) specifically regulates molecular and cellular flux between the blood and the nervous tissue. Our aim was to develop and characterize a highly reproducible rat syngeneic in vitro model of the BBB using co-cultures of primary rat brain endothelial cells (RBEC) and astrocytes to study receptors involved in transcytosis across the endothelial cell monolayer. Astrocytes were isolated by mechanical dissection following trypsin digestion and were frozen for later co-culture. RBEC were isolated from 5-week-old rat cortices. The brains were cleaned of meninges and white matter, and mechanically dissociated following enzymatic digestion. Thereafter, the tissue homogenate was centrifuged in bovine serum albumin to separate vessel fragments from nervous tissue. The vessel fragments underwent a second enzymatic digestion to free endothelial cells from their extracellular matrix. The remaining contaminating cells such as pericytes were further eliminated by plating the microvessel fragments in puromycin-containing medium. They were then passaged onto filters for co-culture with astrocytes grown on the bottom of the wells. RBEC expressed high levels of tight junction (TJ) proteins such as occludin, claudin-5 and ZO-1, with a typical localization at the cell borders. The transendothelial electrical resistance (TEER) of the brain endothelial monolayers, indicating the tightness of the TJs, reached 300 ohm·cm² on average. The endothelial permeability coefficient (Pe) for lucifer yellow (LY) was highly reproducible, with an average of 0.26 ± 0.11 × 10⁻³ cm/min. Brain endothelial cells organized in monolayers expressed the efflux transporter P-glycoprotein (P-gp), showed a polarized transport of rhodamine 123, a ligand for P-gp, and showed specific transport of transferrin-Cy3 and DiILDL across the endothelial cell monolayer. In conclusion, we provide a protocol for setting up an in vitro BBB model that is highly reproducible due to the quality assurance methods, and that is suitable for research on BBB transporters and receptors.

Medicine, Issue 88, rat brain endothelial cells (RBEC), mouse, spinal cord, tight junction (TJ), receptor-mediated transport (RMT), low density lipoprotein (LDL), LDLR, transferrin, TfR, P-glycoprotein (P-gp), transendothelial electrical resistance (TEER)
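One common way to arrive at a Pe value of this kind is to regress the cleared volume of tracer against time and correct for the contribution of the empty filter. The sketch below is written under assumptions (variable names, units, and the cleared-volume approach itself); the protocol's own volumes, areas, and sampling scheme should take precedence.

```python
# Sketch of a standard cleared-volume calculation for an endothelial
# permeability coefficient (Pe), e.g. for lucifer yellow. Illustrative only.
import numpy as np

def permeability_Pe(t_min, cleared_ul, cleared_filter_ul, area_cm2):
    """Pe in cm/min from cleared-volume time courses.

    cleared volume at time t = C_abluminal(t) * V_abluminal / C_luminal(0)
    """
    PS_total, _ = np.polyfit(t_min, cleared_ul, 1)          # ul/min, cells + filter
    PS_filter, _ = np.polyfit(t_min, cleared_filter_ul, 1)  # ul/min, empty filter
    PS_endo = 1.0 / (1.0 / PS_total - 1.0 / PS_filter)      # ul/min, cells only
    return PS_endo * 1e-3 / area_cm2                        # cm/min (1 ul = 1e-3 cm3)
```

The filter correction matters because the coated and empty inserts act as barriers in series, so their permeabilities add reciprocally.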
A Toolkit to Enable Hydrocarbon Conversion in Aqueous Environments

Institutions: Delft University of Technology, Delft University of Technology.

This work puts forward a toolkit that enables the conversion of alkanes by Escherichia coli and presents a proof of principle of its applicability. The toolkit consists of multiple standard interchangeable parts (BioBricks)9 addressing the conversion of alkanes, regulation of gene expression and survival in toxic hydrocarbon-rich environments. A three-step pathway for alkane degradation was implemented in E. coli to enable the conversion of medium- and long-chain alkanes to their respective alkanols, alkanals and ultimately alkanoic acids. The latter were metabolized via the native β-oxidation pathway. To facilitate the oxidation of medium-chain alkanes (C5-C13) and cycloalkanes (C5-C8), four genes (alkB2) of the alkane hydroxylase system from Gordonia were transformed into E. coli. For the conversion of long-chain alkanes (C15-C36), the ladA gene from Geobacillus thermodenitrificans was implemented. For the further steps of the degradation process, ADH and ALDH (originating from G. thermodenitrificans) were introduced10,11. The activity was measured by resting cell assays. For each oxidative step, enzyme activity was observed. To optimize the process efficiency, expression was induced only under low-glucose conditions: a substrate-regulated promoter, pCaiF, was used. pCaiF is present in E. coli K12 and regulates the expression of the genes involved in the degradation of non-glucose carbon sources. The last part of the toolkit - targeting survival - was implemented using the solvent tolerance genes PhPFDα and β, both from Pyrococcus horikoshii OT3. Organic solvents can induce cell stress and decreased survivability by negatively affecting protein folding. As chaperones, PhPFDα and β improve the protein folding process, e.g. in the presence of alkanes. The expression of these genes led to improved hydrocarbon tolerance, shown by an increased growth rate (up to 50%) in the presence of 10% n-hexane in the culture medium. In summary, the results indicate that the toolkit enables E. coli to convert and tolerate hydrocarbons in aqueous environments. As such, it represents an initial step towards a sustainable solution for oil remediation using a synthetic biology approach.

Bioengineering, Issue 68, Microbiology, Biochemistry, Chemistry, Chemical Engineering, Oil remediation, alkane metabolism, alkane hydroxylase system, resting cell assay, prefoldin, Escherichia coli, synthetic biology, homologous interaction mapping, mathematical model, BioBrick, iGEM
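A hypothetical sketch of how a growth-rate increase of this kind could be quantified from OD600 time courses follows; the readings are invented placeholders chosen to reproduce roughly the reported 50% effect, not data from the study.

```python
# Hypothetical sketch: comparing specific growth rates +/- the PhPFD chaperones
# in medium containing 10% n-hexane. All numbers below are made-up placeholders.
import numpy as np

def growth_rate(t_h, od600):
    """Specific growth rate mu (1/h) from the log-linear (exponential) phase."""
    mu, _ = np.polyfit(t_h, np.log(od600), 1)
    return mu

t = np.array([0, 1, 2, 3, 4], dtype=float)                 # hours
od_ctrl = np.array([0.05, 0.071, 0.10, 0.141, 0.20])       # without chaperones
od_phpfd = np.array([0.05, 0.084, 0.14, 0.24, 0.40])       # with PhPFDalpha/beta

mu_c, mu_p = growth_rate(t, od_ctrl), growth_rate(t, od_phpfd)
print(f"increase: {100 * (mu_p / mu_c - 1):.0f}%")         # ~50% for these data
```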
Optimization and Utilization of Agrobacterium-mediated Transient Protein Production in Nicotiana

Institutions: Fraunhofer USA Center for Molecular Biotechnology.

Agrobacterium-mediated transient protein production in plants is a promising approach to produce vaccine antigens and therapeutic proteins within a short period of time. However, this technology is only just beginning to be applied to large-scale production as many technological obstacles to scale-up are now being overcome. Here, we demonstrate a simple and reproducible method for industrial-scale transient protein production based on vacuum infiltration of Nicotiana plants with Agrobacteria carrying launch vectors. Optimization of Agrobacterium cultivation in AB medium allows direct dilution of the bacterial culture in Milli-Q water, simplifying the infiltration process. Among three tested species of Nicotiana, N. excelsiana (N. benthamiana × N. excelsior) was selected as the most promising host due to the ease of infiltration, the high level of reporter protein production, and an about two-fold higher biomass production under controlled environmental conditions. Induction of Agrobacterium harboring pBID4-GFP (Tobacco mosaic virus-based) using chemicals such as acetosyringone and monosaccharides had no effect on the protein production level. Infiltrating plants under a vacuum of 50 to 100 mbar for 30 or 60 sec resulted in infiltration of about 95% of plant leaf tissues. Infiltration with the Agrobacterium laboratory strain GV3101 gave the highest protein production compared to the laboratory strains LBA4404 and C58C1 and the wild-type strains at6, at10, at77 and A4. Co-expression of a viral RNA silencing suppressor, p23 or p19, in N. benthamiana resulted in earlier accumulation and increased production (15-25%) of the target protein (influenza virus hemagglutinin).

Plant Biology, Issue 86, Agroinfiltration, Nicotiana benthamiana, transient protein production, plant-based expression, viral vector, Agrobacteria

Microwave-assisted Functionalization of Poly(ethylene glycol) and On-resin Peptides for Use in Chain Polymerizations and Hydrogel Formation

Institutions: University of Rochester, University of Rochester, University of Rochester Medical Center.

One of the main benefits to using poly(ethylene glycol) (PEG) macromers in hydrogel formation is synthetic versatility. The ability to draw from a large variety of PEG molecular weights and configurations (arm number, arm length, and branching pattern) affords researchers tight control over resulting hydrogel structures and properties, including Young's modulus and mesh size. This video will illustrate a rapid, efficient, solvent-free, microwave-assisted method to methacrylate PEG precursors into poly(ethylene glycol) dimethacrylate (PEGDM). This synthetic method provides much-needed starting materials for applications in drug delivery and regenerative medicine. The demonstrated method is superior to traditional methacrylation methods as it is significantly faster and simpler, as well as more economical and environmentally friendly, using smaller amounts of reagents and solvents. We will also demonstrate an adaptation of this technique for on-resin methacrylamide functionalization of peptides. This on-resin method allows the N-terminus of peptides to be functionalized with methacrylamide groups prior to deprotection and cleavage from the resin. This allows for selective addition of methacrylamide groups to the N-termini of the peptides while amino acids with reactive side groups (e.g. the primary amine of lysine, the primary alcohol of serine, the secondary alcohols of threonine, and the phenol of tyrosine) remain protected, preventing functionalization at multiple sites. This article will detail common analytical methods (proton nuclear magnetic resonance spectroscopy (1H-NMR) and matrix-assisted laser desorption ionization time-of-flight mass spectrometry (MALDI-ToF)) to assess the efficiency of the functionalizations. Common pitfalls and suggested troubleshooting methods will be addressed, as will modifications of the technique which can be used to further tune macromer functionality and the resulting hydrogel's physical and chemical properties. Use of the synthesized products for the formation of hydrogels for drug delivery and cell-material interaction studies will be demonstrated, with particular attention paid to modifying hydrogel composition to affect mesh size, controlling hydrogel stiffness and drug release.

Chemistry, Issue 80, Poly(ethylene glycol), peptides, polymerization, polymers, methacrylation, peptide functionalization, 1H-NMR, MALDI-ToF, hydrogels, macromer synthesis
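As a hedged illustration of how 1H-NMR integrals translate into a degree of methacrylation: the peak assignments below (vinyl protons near 5.7/6.1 ppm, PEG backbone protons near 3.6 ppm) are typical literature values, and every number is a placeholder rather than a value from this article.

```python
# Illustrative sketch: percent end-group methacrylation of PEG from 1H-NMR.
def methacrylation_percent(vinyl_integral, backbone_integral,
                           mw_peg=10000.0, arms=2, mw_repeat=44.0):
    """Percent of PEG chain ends carrying a methacrylate group."""
    repeats = mw_peg / mw_repeat            # ethylene oxide units per chain
    backbone_H = 4.0 * repeats              # 4 protons (O-CH2-CH2) per repeat
    vinyl_H_per_end = 2.0                   # 2 vinyl protons per methacrylate
    # scale vinyl integral as if the backbone integral equaled backbone_H
    ends = (vinyl_integral / vinyl_H_per_end) * (backbone_H / backbone_integral)
    return 100.0 * ends / arms

# e.g. a linear 10 kDa PEGDM with integrals normalized to the backbone:
print(methacrylation_percent(vinyl_integral=3.8, backbone_integral=909.0))  # ~95%
```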
Modeling Neural Immune Signaling of Episodic and Chronic Migraine Using Spreading Depression In Vitro

Institutions: The University of Chicago Medical Center, The University of Chicago Medical Center.

Migraine and its transformation to chronic migraine are healthcare burdens in need of improved treatment options. We seek to define how neural immune signaling modulates the susceptibility to migraine, modeled in vitro using spreading depression (SD), as a means to develop novel therapeutic targets for episodic and chronic migraine. SD is the likely cause of migraine aura and migraine pain. It is a paroxysmal loss of neuronal function triggered by initially increased neuronal activity, which slowly propagates within susceptible brain regions. Normal brain function is exquisitely sensitive to, and relies on, coincident low-level immune signaling. Thus, neural immune signaling likely affects the electrical activity of SD, and therefore migraine. Pain perception studies of SD in whole animals are fraught with difficulties, but whole animals are well suited to examine systems biology aspects of migraine since SD activates trigeminal nociceptive pathways. However, whole animal studies alone cannot be used to decipher the cellular and neural circuit mechanisms of SD. Instead, in vitro preparations where environmental conditions can be controlled are necessary. Here, it is important to recognize the limitations of acute slices and the distinct advantages of hippocampal slice cultures. Acute brain slices cannot reveal subtle changes in immune signaling since preparing the slices alone triggers: pro-inflammatory changes that last days, epileptiform behavior due to the high levels of oxygen tension needed to vitalize the slices, and irreversible cell injury at anoxic slice centers. In contrast, we examine immune signaling in mature hippocampal slice cultures since the cultures closely parallel their in vivo counterpart with mature trisynaptic function; show quiescent astrocytes, microglia, and cytokine levels; and SD is easily induced in an unanesthetized preparation. Furthermore, the slices are long-lived and SD can be induced on consecutive days without injury, making this preparation the sole means to date capable of modeling the neuroimmune consequences of chronic SD, and thus perhaps chronic migraine. We use electrophysiological techniques and non-invasive imaging to measure neuronal cell and circuit functions coincident with SD. Neural immune gene expression variables are measured with qPCR screening, qPCR arrays, and, importantly, use of cDNA preamplification for detection of ultra-low-level targets such as interferon-gamma using whole, regional, or specific cell-enhanced (via laser dissection microscopy) sampling. Cytokine cascade signaling is further assessed with multiplexed phosphoprotein-related targets, with gene expression and phosphoprotein changes confirmed via cell-specific immunostaining. Pharmacological and siRNA strategies are used to mimic SD immune signaling.

Neuroscience, Issue 52, innate immunity, hormesis, microglia, T-cells, hippocampus, slice culture, gene expression, laser dissection microscopy, real-time qPCR, interferon-gamma

High Efficiency Differentiation of Human Pluripotent Stem Cells to Cardiomyocytes and Characterization by Flow Cytometry

Institutions: Medical College of Wisconsin, Stanford University School of Medicine, Medical College of Wisconsin, Hong Kong University, Johns Hopkins University School of Medicine, Medical College of Wisconsin.

There is an urgent need to develop approaches for repairing the damaged heart, discovering new therapeutic drugs that do not have toxic effects on the heart, and improving strategies to accurately model heart disease. The potential of exploiting human induced pluripotent stem cell (hiPSC) technology to generate cardiac muscle "in a dish" for these applications continues to generate high enthusiasm. In recent years, the ability to efficiently generate cardiomyogenic cells from human pluripotent stem cells (hPSCs) has greatly improved, offering us new opportunities to model very early stages of human cardiac development not otherwise accessible.
In contrast to many previous methods, the cardiomyocyte differentiation protocol described here does not require cell aggregation or the addition of Activin A or BMP4, and robustly generates cultures of cells that are highly positive for cardiac troponin I and T (TNNI3, TNNT2), iroquois-class homeodomain protein IRX-4 (IRX4), myosin regulatory light chain 2, ventricular/cardiac muscle isoform (MLC2v) and myosin regulatory light chain 2, atrial isoform (MLC2a) by day 10 across all human embryonic stem cell (hESC) and hiPSC lines tested to date. Cells can be passaged and maintained for more than 90 days in culture. The strategy is technically simple to implement and cost-effective. Characterization of cardiomyocytes derived from pluripotent cells often includes the analysis of reference markers, both at the mRNA and protein level. For protein analysis, flow cytometry is a powerful analytical tool for assessing the quality of cells in culture and determining subpopulation homogeneity. However, technical variation in sample preparation can significantly affect the quality of flow cytometry data. Thus, standardization of staining protocols should facilitate comparisons among various differentiation strategies. Accordingly, optimized staining protocols for the analysis of IRX4, MLC2v, MLC2a, TNNI3, and TNNT2 by flow cytometry are described.

Cellular Biology, Issue 91, human induced pluripotent stem cell, flow cytometry, directed differentiation, cardiomyocyte, IRX4, TNNI3, TNNT2, MLC2v, MLC2a

Protocols for Implementing an Escherichia coli Based TX-TL Cell-Free Expression System for Synthetic Biology

Institutions: California Institute of Technology, California Institute of Technology, Massachusetts Institute of Technology, University of Minnesota.

Ideal cell-free expression systems can theoretically emulate an in vivo cellular environment in a controlled in vitro setting. This is useful for expressing proteins and genetic circuits in a controlled manner as well as for providing a prototyping environment for synthetic biology.2,3 To achieve the latter goal, cell-free expression systems that preserve endogenous Escherichia coli transcription-translation mechanisms are able to more accurately reflect in vivo cellular dynamics than those based on T7 RNA polymerase transcription. We describe the preparation and execution of an efficient endogenous E. coli based transcription-translation (TX-TL) cell-free expression system that can produce equivalent amounts of protein as T7-based systems at a 98% cost reduction relative to similar commercial systems.4,5 The preparation of buffers and crude cell extract is described, as well as the execution of a three-tube TX-TL reaction. The entire protocol takes five days to prepare and yields enough material for up to 3,000 single reactions in one preparation. Once prepared, each reaction takes under 8 hr from setup to data collection and analysis. Mechanisms of regulation and transcription exogenous to E. coli, such as lac/tet repressors and T7 RNA polymerase, can be supplemented.6 Endogenous properties, such as mRNA and DNA degradation rates, can also be adjusted.7 The TX-TL cell-free expression system has been demonstrated for large-scale circuit assembly, exploring biological phenomena, and expression of proteins under both T7 and endogenous promoters.6,8 Accompanying mathematical models are available.9,10 The resulting system has unique applications in synthetic biology as a prototyping environment, or "TX-TL biomolecular breadboard."

Cellular Biology, Issue 79, Bioengineering, Synthetic Biology, Chemistry Techniques, Synthetic, Molecular Biology, control theory, TX-TL, cell-free expression, in vitro, transcription-translation, cell-free protein synthesis, synthetic biology, systems biology, Escherichia coli cell extract, biological circuits, biomolecular breadboard
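The kind of model referenced above can be as simple as a resource-limited transcription-translation ODE. The sketch below is our generic illustration, not the published TX-TL models, and all rate constants are invented.

```python
# A minimal, generic batch cell-free expression model (illustrative only):
# mRNA is produced and degraded; protein is translated while a finite,
# normalized resource pool is consumed, so expression eventually stalls.
import numpy as np

def simulate_txtl(hours=8.0, dt=0.001,
                  k_tx=5.0,     # mRNA production, nM/h (assumed)
                  d_m=1.4,      # mRNA degradation, 1/h (assumed)
                  k_tl=8.0,     # protein production per mRNA, 1/h (assumed)
                  k_res=0.02):  # resource use per translation event (assumed)
    t = np.arange(0.0, hours, dt)
    m = p = 0.0
    res = 1.0                                  # normalized resource pool
    protein = []
    for _ in t:
        tl = k_tl * m * res                    # translation slows as resources fall
        m += (k_tx * res - d_m * m) * dt
        p += tl * dt
        res = max(res - k_res * tl * dt, 0.0)
        protein.append(p)
    return t, np.array(protein)
```

The resource pool is what distinguishes a batch cell-free reaction from steady-state in vivo expression: protein accumulation slows and plateaus as the extract is exhausted.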
Toxin Induction and Protein Extraction from Fusarium spp. Cultures for Proteomic Studies

Institutions: Centre de Recherche Public-Gabriel Lippmann.

Fusarium spp. are filamentous fungi able to produce different toxins. Fusarium mycotoxins such as deoxynivalenol, nivalenol, T2, zearalenone, fusaric acid, moniliformin, etc. have adverse effects on both human and animal health, and some are considered pathogenicity factors. Proteomic studies have been shown to be effective for deciphering toxin production mechanisms (Taylor et al., 2008) as well as for identifying potential pathogenicity factors (Paper et al., 2007; Houterman et al., 2007) in Fusaria. It is therefore fundamental to establish reliable methods for comparison between proteomic studies, in order to rely on true differences found in protein expression among experiments, strains and laboratories. The procedure described here should contribute to an increased level of standardization of proteomic procedures in two ways. First, the filmed protocol increases the level of detail that can be described precisely. Second, the availability of standardized procedures to process biological replicates should guarantee a higher robustness of data, taking into account the human factor within the technical reproducibility of the extraction procedure.

The protocol described requires 16 days for its completion: fourteen days for cultures and two days for protein extraction (Figure 1). Fusarium strains are grown on solid media for 4 days; they are then manually fragmented and transferred into a modified toxin-inducing medium (Jiao et al., 2008) for 10 days. Mycelium is collected by filtration through a Miracloth layer. Grinding is performed in a cold chamber. Different operators performed extraction replicates (n=3) in order to take into account the bias due to technical variation (Figure 2). Extraction was based on an SDS/DTT buffer, as described in Taylor et al. (2008) with slight modifications. Total protein extraction required precipitation of the proteins in acetone/TCA/DTT buffer overnight and acetone/DTT washing (Figures 3a, 3b). Proteins were finally resolubilized in the protein-labelling buffer and quantified. Results of the extraction were visualized on a 1D gel (Figure 4, SDS-PAGE) before proceeding to 2D gels (IEF/SDS-PAGE). The same procedure can be applied to proteomic analyses on other growth media and other filamentous fungi (Miles et al.).

Microbiology, Issue 36, MIAPE, Fusarium graminearum, toxin induction, fungal cultures, proteomics, sample processing, protein extraction

Interview: HIV-1 Proviral DNA Excision Using an Evolved Recombinase

Institutions: Heinrich-Pette-Institute for Experimental Virology and Immunology, University of Hamburg.

HIV-1 integrates into the host chromosome of infected cells and persists as a provirus flanked by long terminal repeats. Current treatment strategies primarily target virus enzymes or virus-cell fusion, suppressing the viral life cycle without eradicating the infection. Since the integrated provirus is not targeted by these approaches, new resistant strains of HIV-1 may emerge.
Here, we report that the engineered recombinase Tre (see "Molecular Evolution of the Tre Recombinase", Buchholz, F., Max Planck Institute of Molecular Cell Biology and Genetics, Dresden) efficiently excises integrated HIV-1 proviral DNA from the genome of infected cells. We produced loxLTR-containing viral pseudotypes and infected HeLa cells to examine whether Tre recombinase can excise the provirus from the genome of HIV-1 infected human cells. A virus particle-releasing cell line was cloned and transfected with a plasmid expressing Tre or with a parental control vector. Recombinase activity and virus production were monitored. All assays demonstrated the efficient deletion of the provirus from infected cells without visible cytotoxic effects. These results serve as proof of principle that it is possible to evolve a recombinase to specifically target an HIV-1 LTR and that this recombinase is capable of excising the HIV-1 provirus from the genome of HIV-1-infected human cells. Before an engineered recombinase could enter the therapeutic arena, however, significant obstacles need to be overcome. Among the most critical issues that we face are efficient and safe delivery to targeted cells and the absence of side effects.

Medicine, Issue 16, HIV, Cell Biology, Recombinase, provirus, HeLa Cells

Molecular Evolution of the Tre Recombinase

Institutions: Max Planck Institute of Molecular Cell Biology and Genetics, Dresden.

Here we report the generation of Tre recombinase through directed, molecular evolution. Tre recombinase recognizes a pre-defined target sequence within the LTR sequences of the HIV-1 provirus, resulting in the excision and eradication of the provirus from infected human cells. We started with Cre, a 38-kDa recombinase that recognizes a 34-bp double-stranded DNA sequence known as loxP. Because Cre can effectively eliminate genomic sequences, we set out to tailor a recombinase that could remove the sequence between the 5'-LTR and 3'-LTR of an integrated HIV-1 provirus. As a first step we identified sequences within the LTR sites that were similar to loxP and tested them for recombination activity. Initially, Cre and mutagenized Cre libraries failed to recombine the chosen loxLTR sites of the HIV-1 provirus. As the start of any directed molecular evolution process requires at least residual activity, the original asymmetric loxLTR sequences were split into subsets and tested again for recombination activity. Acting as intermediates, these subsets did show recombination activity. Next, recombinase libraries were enriched through reiterative evolution cycles. Subsequently, enriched libraries were shuffled and recombined. The combination of different mutations proved synergistic, and recombinases were created that were able to recombine loxLTR1 and loxLTR2. This was evidence that an evolutionary strategy through intermediates can be successful. After a total of 126 evolution cycles, individual recombinases were functionally and structurally analyzed. The most active recombinase - Tre - had 19 amino acid changes as compared to Cre. Tre recombinase was able to excise the HIV-1 provirus from the genome of HIV-1-infected HeLa cells (see "HIV-1 Proviral DNA Excision Using an Evolved Recombinase", Hauber J., Heinrich-Pette-Institute for Experimental Virology and Immunology, Hamburg, Germany). While still in its infancy, directed molecular evolution will allow the creation of custom enzymes that will serve as tools of "molecular surgery" and molecular medicine.
Cell Biology, Issue 15, HIV-1, Tre recombinase, Site-specific recombination, molecular evolution

Investigating the Microbial Community in the Termite Hindgut - Interview

Institutions: California Institute of Technology - Caltech.

Jared Leadbetter explains why the termite-gut microbial community is an excellent system for studying the complex interactions between microbes. The symbiotic relationship existing between the host insect and lignocellulose-degrading gut microbes is explained, as well as the industrial uses of these microbes for degrading plant biomass and generating biofuels.

Microbiology, Issue 4, microbial community, diversity
WindFuels™ Basic Process Explanations - A Basic Explanation for the Non-scientist

WindFuels™. Now that is a word you have not heard before. So what are WindFuels™? The concept is really not complicated. We will use energy generated by wind to power processes that will convert carbon dioxide into transportation fuels for automobiles, like diesel, ethanol or gasoline. We can also make fuels like jet fuel. (We are talking mostly about fuels, but the FTS process can also produce ethylene and propylene, which are used to make plastics - found in everything from textiles to countless other goods.)

We'll recycle CO2 from power plants or other exhausts (which release CO2 into the air, contributing to global warming). Because we have removed the CO2 from the air to make the fuels, using (burning) WindFuels releases no new carbon, making it a carbon-neutral process. Replacing oil with WindFuels will reduce total CO2 emissions.

No experienced chemist has doubted that it is possible to convert CO2 to fuels. The problem has been that prior proposals for doing this conversion have had efficiencies of only 20% to 30%. The combination of the eight major technical advances we have made over the past five years will now permit this conversion to be done at 60% efficiency. That's high enough for carbon-neutral fuels made from waste CO2 to easily compete with petroleum on a cost basis, especially when the input energy is from excess wind energy in the middle of the night. What we are doing is not magic. It is just good chemistry, physics, and engineering. Because we are using the carbon from waste CO2 rather than coal, we have to add a lot of energy from wind. However, when all the processes are properly optimized, the cost of this energy becomes affordable. It is a small price to pay to dramatically reduce greenhouse gases in the atmosphere and provide a limitless supply of clean transportation fuels.

Fuels like ethanol, gasoline and jet fuel are hydrocarbon and alcohol fuels. Hydrocarbons contain hydrogen (H) and carbon (C); alcohols also contain oxygen (O). We will use water (H2O) and the waste (polluting) carbon dioxide (CO2) from power-plant smokestacks to provide the carbon, oxygen, and hydrogen needed to make fuels like ethanol (C2H5OH) and gasoline (C8H18).

Here's How (later on this page, we'll explain each of these processes - and we will try to explain them in a clear way):

1. Wind farms generate electricity for electrolysis and other processes.
2. Electrolysis is the process in which electric current is passed through water (H2O) to break the bonds between the hydrogen and the oxygen, yielding hydrogen (H2) and oxygen (O2).
3. In the RWGS & FTS plant, the Reverse Water Gas Shift (RWGS) is used to produce carbon monoxide (CO) from carbon dioxide (CO2).
4. In a widely used process called Fischer-Tropsch Synthesis (FTS), liquid fuels are produced from the hydrogen (H2) and carbon monoxide (CO).
5. The resulting products will fuel our cars, trucks or jet planes and are:
- Carbon neutral (WindFuels do not release new CO2 into the air. The carbon was recycled from exhausts.)
- Renewable (Both wind and CO2 are replaced. In contrast, oil and coal, which are used up, are not renewable.)
- Economical (competes on a price basis)
- Contributing to energy independence!
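Before walking through each step, a back-of-the-envelope mass balance (our illustrative addition, using standard molar masses and octane, C8H18, as the gasoline surrogate named above) shows how much CO2 and water the overall conversion consumes per kilogram of fuel. The net reaction 8 CO2 + 9 H2O -> C8H18 + 12.5 O2 is simple stoichiometry:

```python
# Back-of-the-envelope mass balance for making octane from CO2 and water.
M_CO2, M_H2O, M_C8H18 = 44.01, 18.02, 114.23     # molar masses, g/mol

co2_per_kg_fuel = 8 * M_CO2 / M_C8H18            # ~3.1 kg CO2 per kg octane
h2o_per_kg_fuel = 9 * M_H2O / M_C8H18            # ~1.4 kg H2O per kg octane
print(co2_per_kg_fuel, h2o_per_kg_fuel)
```

So roughly three tons of CO2 are consumed per ton of octane-like fuel before any process inefficiencies are counted.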
Overview of FTS (Fischer-Tropsch Synthesis)

The liquid fuel production from hydrogen, carbon, and oxygen occurs in the Fischer-Tropsch (FT) reactor. FTS has been used in commercial production of fuels of all types from coal or natural gas for over 60 years. FTS was used in Germany during WWII to generate fuels when crude oil was scarce, so the process is not new. The process has not been used much in the United States because oil was cheap and plentiful. Utilization of the FTS processes has been increasing, as coal and gas are now much cheaper than oil. (If we use waste CO2 in place of gas or coal in FTS, we can supply fuel and reduce the CO2 in the air.)

The FTS process uses catalysts to efficiently convert a feed mixture of carbon monoxide (CO) and hydrogen (H2) to hydrocarbons of all types. Different catalysts and different operating conditions can help "select" for higher yields of some hydrocarbons than others, but there will always be a mixture of different products created by the reaction (for alkanes: n CO + (2n+1) H2 -> CnH2n+2 + n H2O). In conventional FTS, the syngas is obtained from high-temperature reforming of coal or methane. Usually, the FT catalysts and conditions (pressure, temperature, and mixture) have been chosen to obtain mostly gasoline, diesel, and waxes from the FT reactor. Recent progress in the catalysts and conditions now allows high yields of ethanol, propanol, and butanol also.

Getting clean hydrogen - Electrolysis

Electrolysis is used rather than fossil fuels to generate the hydrogen for our WindFuels. Electrolysis is the process in which electric current is passed through water (H2O) to break the bonds between the hydrogen and the oxygen, yielding hydrogen (H2) and oxygen (O2). Electrolyzers for efficiently splitting water into high-purity hydrogen and oxygen have been in industrial production for decades. A solution of potassium hydroxide (KOH) in water is used because it has low resistivity and thus lower power loss. The addition of electrons at the negative electrode (also called the cathode) produces hydrogen gas (H2) and hydroxyl ions (OH-), which remain in the solution. At the positive electrode (also called the anode), electrons are removed from OH- ions, producing water (H2O) and oxygen (O2). A membrane that is permeable to the OH- ions (and possibly to water too) separates the two electrodes to keep the gases from mixing while allowing the electrical current to flow through it on the charge carriers. In practice, the solutions on both sides are continually flowing to maintain the desired salt concentrations. The two gases produced also contain a lot of water vapor (which is easily separated) but only minute traces (easily under 0.1%, and sometimes under 0.01%) of other impurities (primarily the other major gas, either O2 or H2).

Efficiency of commercially available 2 MW (megawatt) electrolyzers has typically been 73%. Laboratory experiments have exceeded this at elevated pressures and lower current densities, and we have shown that the waste heat (at 160 °C) can be utilized at 30% efficiency. Total system efficiency of a 250 MW electrolysis system may eventually approach still higher values.

A quick note about water: the FTS process will require about 5 gallons of water for every 3-4 gallons of produced fuels, which is at least an order of magnitude less than the water requirements for biofuels. Using reverse osmosis, water sources as impure as seawater can be used, which would add only $0.01/gallon to the cost of the fuel produced. Water will not be a limitation.
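The efficiency bookkeeping in the preceding paragraph can be summarized in a few lines; treating all electrolyzer losses as recoverable heat is our simplifying assumption:

```python
# Effective electrolysis efficiency, using the figures given in the text:
# 73% electrolyzer efficiency, ~30% utilization of the recovered waste heat.
eta_electrolyzer = 0.73          # fraction of input electricity stored in H2
waste_fraction = 1.0 - eta_electrolyzer
heat_recovery = 0.30             # fraction of waste heat recovered as useful work

eta_effective = eta_electrolyzer + waste_fraction * heat_recovery
print(f"{eta_effective:.0%}")    # ~81%, if all losses appear as recoverable heat
```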
Getting the CO: The Reverse Water Gas Shift (RWGS)

The next step is to efficiently get the carbon monoxide (CO) needed in the syngas from CO2. There is a very robust and efficient reaction that has been known for the past century as the water gas shift reaction (WGS). This is used in fossil-fuel FTS to generate hydrogen by combining CO (from fossil fuels) with steam at high temperatures (400-800 °C) to form hydrogen and carbon dioxide. The fossil-fuel FTS systems have no trouble getting carbon monoxide, and use excess carbon monoxide to get the hydrogen they need for proper syngas mixtures. The reverse is true for clean, renewable WindFuels. Through electrolysis, we can efficiently get all the H2 we need, but we need an efficient way of getting the CO. The reverse of the WGS reaction, known as the reverse water gas shift (RWGS: CO2 + H2 -> CO + H2O), provides a robust method of producing CO and water from CO2 and H2. Getting this reaction to achieve a high yield of CO at high efficiency with low production of unwanted methane (CH4) has previously been a challenge, but we have shown elsewhere how this can now be achieved at very high efficiency.

The syngas (remember, the CO plus H2 mixture) then goes to the FT reactor, where it is adsorbed onto the surface of the catalysts (small metal particles) and reformed into hydrocarbons (such as gasoline, propane, and diesel), alcohols (including propanol), water, and waste heat. As these reactions are exothermic (heat is released), they proceed readily. The reaction efficiencies here are in the range of 70-85%, depending on the compound that is formed, with the higher efficiencies being for the light alcohols (methanol and ethanol). We have shown elsewhere in detail how the waste heat from the reactor can be utilized at over 40% efficiency. The output from the FT reactor includes the desired products (alcohols, jet fuel, propane, etc.) along with a lot of unreacted syngas (CO and H2) and some undesired products - water, CO2, and methane. One of the most important keys to achieving high efficiency is devising extremely efficient methods of separating and recycling the unwanted components. We have developed important improvements in separations and recycling. Elsewhere we have shown how this can be done.

Fossil FTS is dirty. WindFuels are a path to a true global warming solution.

The biggest problem with fossil-based FTS is that an enormous amount of polluting CO2 is released - especially if coal is used. For every kg of coal used for coal-to-liquids (CTL) diesel, 2.2 kg of CO2 are emitted and 0.3 kg of fuel is produced. (Even NG-based FTS results in about 25% more total CO2 release than simply using conventional oil.) WindFuels uses similar FTS processes, but begins with carbon-neutral "syngas" (the feed mixture of CO and H2) made from water (H2O) and waste CO2 (from coal plants). This can be done at very high efficiency with zero net carbon release, as we show in detail elsewhere on this website and summarize below. Because the CO2 was removed from the air (or smokestacks) to make the WindFuels, no new CO2 has been released. The carbon was recycled. The net carbon from WindFuels is zero.
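The arithmetic behind the CTL figures quoted above is worth spelling out:

```python
# Per the text: 1 kg of coal yields 0.3 kg of CTL fuel while emitting 2.2 kg of CO2.
co2_per_kg_coal = 2.2
fuel_per_kg_coal = 0.3

co2_per_kg_ctl_fuel = co2_per_kg_coal / fuel_per_kg_coal
print(co2_per_kg_ctl_fuel)   # ~7.3 kg of CO2 emitted per kg of CTL diesel
```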
Some may say that the CO2 from coal is eventually released, and this is therefore not carbon neutral. However, many of those same people have probably either bought carbon offsets or at least looked into the idea. The principle of carbon offsets is to reduce carbon emissions elsewhere to offset the carbon you are generating. Well, the coal burned in coal power plants (which provide electricity to hundreds of millions of homes) emits billions of tons of CO2. If we recycle that CO2 to produce WindFuels, it will still eventually be emitted, but oil and natural gas - as well as much more environmentally destructive fuels such as coal-to-methanol, tar-sands fuels, and oil-shale fuels - are NOT burned and are therefore NOT emitting CO2, which reduces overall greenhouse gas emissions. Eventually, the CO2 can be taken from the atmosphere rather than from smokestacks, but that will be more expensive. Today, using CO2 from the atmosphere might make the WindFuels 40% more expensive. Thirty years from now, we'll probably be able to do it for just an 8% cost penalty. We can't wait 30 years to get started. We'll start with CO2 from smokestacks.

Perfectly Solving the Grid Stability Problem

We've all probably heard it will not be possible to stabilize the power grid if much more wind energy is added, and the result would be frequent regional grid failures and blackouts. Without a solution to the energy storage problem, that would be true. The electric grid stability challenge arises from changes in grid supply (power plants, wind, and solar) not being able to follow the changes in grid demand (from users) quickly enough. Wind power is often greater in the middle of the night when demand is minimal. "Clean coal", nuclear, and many of the older natural gas power plants take many hours to turn down, and there is not a cost-effective method of storing energy other than pumped hydro storage, which is not an option in most areas. (Compressed air energy storage, CAES, will be either very expensive or quite inefficient.) WindFuels will only draw power during off-peak hours when there is excess renewable energy available at very low cost. Off-peak power rates are often 15% of peak rates to encourage more use of off-peak power. The WindFuels electrolyzer can respond within milliseconds to changes in supply and demand. It will completely solve the grid stability problem by storing the excess off-peak grid energy temporarily in compressed hydrogen and then converting it to liquid fuels (which are easily stored and distributed) at a fairly steady rate around the clock. Storing enormous amounts of energy in hydrogen is considerably more expensive than storing energy in liquid fuels, but storing enough hydrogen to keep the FTS going steadily around the clock (to efficiently convert the hydrogen to liquid fuels) will be neither too expensive nor too risky.

Improving the competitiveness of fuels from CO2

The WindFuels process seems unquestionably destined to be the dominant, sustainable solution for transportation fuels in the future, but electrolyzers today are still expensive. Hence, the capital outlay for the electrolyzers (for perhaps the next five years) may be beyond what most investors wish to consider in today's market. That has motivated us to begin developing a less expensive "bridging" approach to synthesizing fuels from a combination of CO2, methane (from shale gas), water, and renewable energy. The outcome of this research is a process we have dubbed CARMA-GTL, for Carbon dioxide Advanced Reforming of Methane Adiabatically, with GTL. As explained in a paper recently presented at the national ACS meeting, as long as low-cost natural gas is available, our CARMA-GTL process reduces the electrolyzer requirements by a factor of three to ten while actually increasing plant efficiency. Since most of the carbon in these fuels comes from shale gas (only a minor fraction comes from CO2), these fuels are only slightly carbon neutral (like most biofuels).
However, the CARMA-GTL plants will be much less expensive than WindFuels plants, and they will be able to steadily transition to using more renewable energy and less shale gas. Developing this technology will begin to drive the cost of electrolyzers down and allow investors to become more comfortable with the coming WindFuels paradigm.

Test Fischer-Tropsch System

Sequestering a ton of CO2 prevents one ton of CO2 from being emitted into the atmosphere, but adds a significant cost burden. Pumping one ton of CO2 into a WindFuels plant would instead profitably create about 170 gallons of liquid fuels (~0.56 tons), which keeps additional fossil fuels from being consumed. Because the CO2 is removed from the air (or smokestacks) to make the WindFuels, no new CO2 has been released. The net carbon from WindFuels is essentially zero. Tons of CO2 added to the atmosphere per ton of liquid fuel were compared for: ethanol from established corn fields, deep-sea oil, oil shale (ICP), and oil shale (ATP). Ethanol from newly plowed grasslands converted to crops releases many tons of CO2 (see Science 319, 1235-1238, Mar 21, 2008).

We are using wind energy because it is the most cost-effective renewable power source in the United States. In some countries, another renewable energy source like solar or geothermal would be more appropriate.
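The 170-gallon figure quoted above can be sanity-checked against the stated ~0.56 tons; the fuel density used below is our assumption (a mid-range value for diesel-like FT products):

```python
# Consistency check: 170 gallons of FT fuel expressed as a mass.
GALLON_L = 3.785                 # liters per US gallon
density_kg_per_L = 0.87          # assumed average density of the fuel mix

mass_kg = 170 * GALLON_L * density_kg_per_L
print(mass_kg / 1000.0)          # ~0.56 metric tons, matching the text
```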
from Nature 520, 411 (23 April 2015) doi:10.1038/520411d Exploding stars grouped in one family because of their similarities actually form two distinct groups. This may have important cosmic implications because the explosions, called supernovae, are the primary evidence that the Universe’s expansion is accelerating. Half of type Ia supernovae seem to have similar intrinsic brightnesses when seen in the visible spectrum. But when Peter Milne of the University of Arizona in Tucson and his team analysed data from the Hubble Space Telescope and NASA’s Swift satellite, they found that the supernovae fell into two subfamilies, each brighter than the other in a different part of the ultraviolet spectrum. The relative abundances of the two subfamilies seem to have changed over the past several billion years, a fact that could complicate their use as markers of cosmic expansion, the authors say.
protostome

Any of a major group of animals defined by its embryonic development, in which the first opening in the embryo becomes the mouth. At this stage of development, the later specialization of any given embryonic cell has already been determined. Protostomes are one of the two groups of animals having a true body cavity (coelom) and are believed to share a common ancestor. They include the mollusks, annelids, and arthropods. Compare deuterostome.

A group of animals, including the arthropods (e.g., insects, crabs), mollusks (clams, snails), and annelid worms, classified together on the basis of embryological development. The mouth of the Protostomia (proto, "first"; stoma, "mouth") develops from the first opening into the embryonic gut (blastopore). The coelom (body cavity) forms from a split in the embryonic mesoderm (middle tissue). Larval (immature) forms, if present, are called trochophores. The Protostomia constitute one of two divisions of the coelomates (animals with a body cavity, or coelom). Compare Deuterostomia.
The greenhouse effect and global warming

From Learn Chemistry Wiki

A garden greenhouse keeps plants warmer than they would be outside. It does this because the glass traps some of the Sun's radiation energy. The atmosphere keeps the Earth warm in a similar way. Without the greenhouse effect the Earth would be about 33 °C cooler than today's pleasant average of 15 °C. Greenhouse gases include carbon dioxide, oxides of nitrogen, methane, chlorofluorocarbons (CFCs) and water.

- If the greenhouse effect did not exist, what would the normal temperature of the Earth be?
- What do you think is meant by global warming?
- List all the things you can think of that give off greenhouse gases - e.g., cars.

Looking at the data - temperature changes over the last century

Study the graphs, published by the Intergovernmental Panel on Climate Change (IPCC), and then answer the questions. These are temperatures above and below the average for the period 1961-1990.

1980-2000 data (graph: global average temperatures for 1980-2000)

- 1. Describe how the temperature of the Earth has changed over the period 1980-2000.
- 2. By how much has the temperature changed in that period?
- 3. Draw a sketch graph to show what you expect the average temperature to change by over the twenty years from 2000-2020.

(Graphs: global average temperatures for 1940-1960 and for 1960-1980)

- 4. Compare how the temperature of the Earth changed between 1960 and 1980 and between 1940 and 1960.
- 5. Based on the new information from question 4, draw a second sketch graph to show what you expect the temperature to change by over the twenty years 2000-2020.

(Graphs: global average temperatures for 1900-1920 and for 1920-1940)

- 6. Compare how the temperature of the Earth changed between 1920 and 1940 and between 1900 and 1920.
- 7. Based on the new information from question 6, draw a third sketch graph to show what you expect the temperature to change by over the twenty years 2000-2020.
- 8. Has the overall temperature of the Earth increased over the last century? (Use the data above to support your answer.)
- 9. Based on the temperature changes over the whole of the 20th century, which of your sketch graphs is most likely to be correct?
- 10. Do you think that you have enough evidence to support a firm conclusion to your answer to question 9?

When looking for answers, scientists usually analyse data from more than one source if they can. The following graph comes from the Central England Temperature (CET) record. This record goes back to 1660, when instruments were first used to record temperatures. Annual temperatures are recorded here.

(Graph: Central England temperatures for 1900-2000)

- 11. Does the Central England Temperature record show the same temperature pattern as the IPCC data above?
- 12. How does the overall temperature change compare to the answer you gave in question 8?
- 13. Suggest a reason why different data collected during different experiments might vary, leading to uncertain conclusions.

For more information see the US EPA website, which (as of September 2010) includes a global temperature graph for 1880-2008.

Looking at the data - temperature changes over several centuries

Study the graphs, from the Central England Temperature record (CET), and then answer the questions.

(Graphs: Central England temperatures for 1800-1900 and for 1900-2000)

- 1. Do you think that the temperature changes seen in the 19th century were any different to those seen in the 20th century?
2. Which century was the coldest? You may have seen pictures of people ice skating on the River Thames. At the same time, other places in Europe were also suffering from long, bitterly cold winters and cold, wet summers. This cooler period is often known as the 'Little Ice Age'.
|Central England temperatures for 1660-2000|
3. When do you think the end of the Little Ice Age was?
4. After the Little Ice Age, how long did it take to warm up?
5. Compare the temperatures of the 1730s and 1740s with the present temperature.
6. How do these values compare to the temperature in the 1930s?
7. How do you think the temperature will vary over the next twenty years?
In groups of 3 or 4
8. Look back at the conclusions you came to about how you think the average temperature will change based on:
- the 1980–2000 data
- the 1940–1980 data
- the 1900–1940 data
- the 1660–2000 data
Try to decide if any of them are correct.
9. What is your overall conclusion about global warming?
10. Have the data changed your views about global warming?
If you have time, go on to the extension sheet.
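For readers who want to go beyond sketch graphs, the short Python sketch below shows one common way to quantify "how much the temperature changed" over a period: fit a least-squares straight line to the anomalies and multiply the slope by the length of the period. The anomaly values in the script are invented for illustration and are not the IPCC or CET data discussed above.

# Illustrative only: the anomalies below are made-up example values,
# not the IPCC or CET records discussed in the worksheet.

def linear_trend(years, anomalies):
    """Least-squares slope of anomaly vs. year, in degrees C per year."""
    n = len(years)
    mean_x = sum(years) / n
    mean_y = sum(anomalies) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(years, anomalies))
    var = sum((x - mean_x) ** 2 for x in years)
    return cov / var

years = list(range(1980, 2001))
anomalies = [0.10, 0.18, 0.05, 0.22, 0.08, 0.04, 0.12, 0.25, 0.28, 0.18,
             0.35, 0.30, 0.12, 0.20, 0.26, 0.38, 0.25, 0.40, 0.52, 0.33, 0.31]

slope = linear_trend(years, anomalies)
print(f"Trend: {slope:.4f} degrees C per year")
print(f"Estimated change over 1980-2000: {slope * 20:.2f} degrees C")

Fitting a line rather than simply subtracting the first year from the last reduces the influence of any single unusually warm or cold year, which is one reason question 10 asks whether a 20-year window is enough evidence for a firm conclusion.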
Greenhouses are a wonderful way to extend the growing season. Depending on where you live and how long you want the season to be, however, you may need to think about how to heat your greenhouse. Maintaining greenhouse temperatures above freezing can be tricky, especially in areas with harsh winters. Keeping your greenhouse as small as possible will help, since smaller spaces are easier to heat than large ones, but there are several things you can do to generate heat if your plans are large and you have a greenhouse to match.
Build the greenhouse over a large natural rock or place a large, dark rock in the greenhouse to absorb heat on sunny days and radiate it to nearby plants.
Place large plastic tubs or containers in the greenhouse where they will be in direct sunlight and fill them with water. Water will hold the heat from the sun much longer than the air in the greenhouse will and will radiate it into the air overnight (a rough calculation of how much heat a few barrels can store appears at the end of this article). Choose dark-colored containers or paint them black to maximize heat absorption. You can use metal drums, if desired, but check them for rust damage frequently.
Build deep, raised beds for your greenhouse plants. Raised beds filled with rich, dark soil warm better than smaller, flat beds and hold the heat longer.
Move an active hot compost bin into the middle of the greenhouse in the winter. Hot composting requires a compost mixture composed of two parts carbon to one part nitrogen. Carbon comes from items such as wood chips, shredded paper and fall leaves, while nitrogen comes from grass clippings, manure and fruit and vegetable scraps.
Hang a plastic or aluminum cover across the top of the greenhouse to separate the peaked portion of the roof from the rest of the greenhouse. Since hot air rises, large amounts of heat can be lost in greenhouses with a peaked roof. Temporarily blocking this off in the winter helps to keep the greenhouse warmer.
Install a heater in the greenhouse. Whether you use a space heater or install something larger and more permanent, always keep heaters well away from flammable materials and make sure that they are properly installed. Some heaters may create emissions that need to be vented or require the installation of 220-volt electric service. Contact an HVAC professional if you are unsure what size or type of heater you will need for your greenhouse.
Plant a hedge or build a fence to protect greenhouses that are in the path of cold winter winds.
Things You Will Need
- Large barrels
- Black paint
- Lumber and nails
- Compost bin
- Plastic or aluminum sheeting
- Hedge plants or fencing
Tips
- If your greenhouse is made from rolled plastic, create a layer of plastic inside the greenhouse and another one outside. Leave a bit of space between the two and pump outside air in between them with a small blower to create a pocket of air insulation.
- If your greenhouse is built against a wall of an existing building, paint that wall white to reflect as much sunlight as possible back into the greenhouse.
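To put rough numbers on the water-barrel advice above, the sketch below applies the standard heat equation Q = m x c x delta-T. The barrel size, number of barrels, and overnight temperature drop are assumed example values, not figures from the article.

# Rough sketch of the thermal-mass idea: water barrels absorb heat by
# day and release it overnight as they cool. Q = m * c * delta_T.
# All quantities below are assumed example values.

SPECIFIC_HEAT_WATER = 4186      # J/(kg*K)
LITERS_PER_BARREL = 200         # assumed barrel size
KG_PER_LITER = 1.0              # density of water

def overnight_heat_release(n_barrels, temp_drop_c):
    """Energy (joules) released as the barrels cool by temp_drop_c."""
    mass_kg = n_barrels * LITERS_PER_BARREL * KG_PER_LITER
    return mass_kg * SPECIFIC_HEAT_WATER * temp_drop_c

joules = overnight_heat_release(n_barrels=4, temp_drop_c=8)
kwh = joules / 3.6e6   # 1 kWh = 3.6 million joules
print(f"{joules:.0f} J released, about {kwh:.1f} kWh")
# With these assumptions: roughly 7.4 kWh, comparable to running a
# 1 kW space heater for several hours.

The same arithmetic explains why more barrels, darker barrels (which reach a higher daytime temperature), and a larger temperature swing all increase the overnight heat release.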
Grains provide many nutrients that are vital for the health and maintenance of your body. But not all grains and grain products are equally nutritious. Whole grains and grain products contain all of the parts of the grain (the germ, the bran, and the endosperm) and are rich in fiber, vitamins, minerals, antioxidants, and phytochemicals (chemicals or nutrients derived from a plant source). Refined grains, such as white flour and white rice, are stripped of the bran and germ parts of the grain, which reduces the amount of fiber, antioxidants, and other nutrients in the grain.
The amount of grains you need for good health depends on your age, sex, and level of physical activity. While most Americans consume enough total grains on a daily basis, most of those grain servings are refined grains, and very few are whole grains. The Center for Nutrition Policy and Promotion, an organization of the U.S. Department of Agriculture established in 1994 to improve the nutrition and well-being of Americans, recommends making at least half of your grain choices whole. For most adults, that means consuming at least three servings of cooked whole grains or whole-grain bread, cereal, crackers, or pasta every day. In general, a serving of whole grains is about ½ cup of cooked cereal or grain, 1 slice of whole-grain bread, or 1 cup of whole-grain cold breakfast cereal. (For more on serving sizes, see "What Counts as an Ounce?")
Whole grains naturally provide a number of nutrients, including dietary fiber, several B vitamins (thiamine, riboflavin, niacin, and folate), and minerals (iron, magnesium, and selenium). Refined grain products are often enriched with B vitamins and iron after processing, but they tend to provide significantly less fiber than whole grains.
Whole grains that contain soluble fiber, such as oats and barley, can help lower blood cholesterol levels when eaten in adequate amounts. They may therefore prevent plaque buildup in the arteries and may lower the risk of heart disease. Insoluble fiber, which is found in large amounts in whole wheat products, can help prevent constipation and diverticulosis, a condition in which small pouches in the colon bulge outward. While diverticulosis itself may cause no symptoms, if the pouches become infected or inflamed (a condition called diverticulitis), a person will feel pain, and medical intervention is necessary. Diverticulosis is thought to be caused by increased pressure within the colon, which is often a result of chronic constipation. Fiber-containing foods such as whole grains also help provide a feeling of fullness when eating, perhaps leading to a decrease in calorie intake.
The B vitamins play a key role in metabolism: They help the body use the energy it gets from protein, fat, and carbohydrate. B vitamins are also essential for a healthy nervous system. Many refined grains are enriched with thiamine, riboflavin, niacin, and folic acid, the synthetic form of folate that is found in supplements and added to foods. In fact, in the United States, manufacturers of enriched breads, flours, corn meals, pastas, rice, and other grain products have been required to add folic acid to their products since 1998. Folate helps the body form red blood cells. Women of childbearing age who may become pregnant and those in the first trimester of pregnancy are advised to consume adequate folate to reduce their chances of having a baby with a type of birth defect known as a neural tube defect, which includes spina bifida and anencephaly.
For many women, consuming folic acid in fortified foods or supplements, in addition to consuming folate-rich foods, is necessary to get enough.
Iron is an essential part of hemoglobin, which carries oxygen in the blood to all of the cells in the body. Iron comes from both animal foods (called heme iron) and plant foods (called non-heme iron). Heme iron is more readily absorbed by the body, but absorption of non-heme iron can be enhanced by eating foods rich in vitamin C along with foods rich in non-heme iron. Whole and enriched refined grain products are major sources of non-heme iron in American diets. Consuming inadequate iron can lead to iron-deficiency anemia, which causes fatigue and weakness. Many teenage girls and women in their childbearing years could benefit from eating more good food sources of iron. Two ways to absorb more iron from whole grains are to eat them with meat and to eat them with foods rich in vitamin C, such as bell peppers, cantaloupe, broccoli, or citrus fruits.
Whole grains are sources of magnesium and selenium. Magnesium is a mineral used in building bones and releasing energy from muscles. Selenium protects cells from oxidative damage and is also important for a healthy immune system.
To be sure the products you are buying contain whole grains, take some time to read their labels carefully. If the package displays the words "whole grain," it must contain at least 8 grams of whole grain per serving, which is considered half a serving of whole grains. If a product label says "100% whole grain," it must contain at least 16 grams of whole grain per serving, which is one serving of whole grains. Foods that are labeled with the words "multi-grain," "stone-ground," "100% wheat," "cracked wheat," "seven-grain," or "bran" are usually not whole-grain products. If a whole-grain ingredient is not listed first on a product's ingredients list, the item may contain only a small amount of whole grains or none at all.
Don't let the color of an item fool you. Just because a grain product is brown doesn't mean it is made from whole grains. Ingredients such as molasses can be added to darken products that are made primarily from refined grains. It is best to check the ingredients list to see if a food item contains whole grains. Also check the % Daily Value (%DV) for fiber on the Nutrition Facts panel of the label. The higher the %DV for fiber, the greater the likelihood that there's whole grain in the product.
By law, ingredients lists must list ingredients in descending order by weight. That is to say, the first ingredient in the list is the one the product contains the most of, and the last ingredient in the list is the one the product contains the least of. As you read the ingredients list, therefore, note where added sugars such as sucrose, high-fructose corn syrup, honey, and molasses fall in the list. The closer they are to the beginning of the list, the more calories from sugar a food contains and the greater the chance the food is made primarily of refined ingredients. For a list of grains and grain products to keep an eye out for, see "Identifying Whole Grains."
Whole-grain "stamps" of approval
One way to confirm that a food item contains whole grain, and to determine how much, is to look for the Whole Grains Council stamp of approval. The Whole Grains Council, a nonprofit industry group, has designed two logos, both of which resemble postage stamps, to identify foods that have a particular amount of whole grains in them.
One of the logos is for foods labeled "whole grain," and it indicates that the food contains at least 8 grams of whole grains per serving. The other is for foods labeled "100% whole grain," and it indicates that the food contains only whole grains, with at least 16 grams per serving. Both logos include the Whole Grains Council message, "Eat 48 grams or more of whole grains daily." This is the amount of whole grains in the three servings generally recommended for the adult population. (A small worked example of this label arithmetic appears at the end of this article.)
For a product to use the stamp of approval, the company must be a member of the Whole Grains Council and must file information about each qualifying product with the council. Companies also sign a legal agreement stating that they will abide by all rules and guidelines of the Stamp program. So the Stamp logo can be a reliable source to help you find legitimate whole-grain products. A product could still contain 50% to 100% whole grains without the Whole Grains Council logo, but it can be difficult to be sure. If you don't see the logo, you will need to rely on the descriptive words on the package as well as the ingredients list and the amount of fiber listed in the Nutrition Facts panel.
Grains and diabetes control
Most of the calories in grains come from carbohydrate, and carbohydrate is the type of nutrient that affects blood glucose level the most after meals. For that reason, people with diabetes are advised to monitor their carbohydrate intake and to match the amount of carbohydrate they eat with the amount of insulin they take before meals or the amount of insulin their pancreas can secrete. For a person with Type 2 diabetes, high blood glucose levels after a meal may indicate that the meal contained more carbohydrate than the pancreas could handle. Changes in either food choices or medication may be in order to maintain blood glucose control.
Most nutrition experts agree that including carbohydrate-containing foods such as fruits, vegetables, whole grains, legumes, and low-fat milk in a diabetes meal plan is good for both overall health and effective diabetes management. Such foods are important sources of energy, fiber, vitamins, and minerals. Choosing whole grains over refined grains may help with blood glucose control, since the soluble fiber found in some grains slows digestion, and any kind of fiber helps to fill you up, so you may consume fewer calories overall. For suggestions on including more whole grains in your meals, see "Adding Whole Grains to Your Menus." In addition, consider meeting with a registered dietitian, ideally one who specializes in diabetes, to help you better understand healthy grain choices, the impact of your choices on blood glucose control, and overall healthy meal planning for the best possible blood glucose control and diabetes health.
People who include whole grains in their daily meals have been found to have a reduced risk of some chronic diseases, including heart disease and Type 2 diabetes. But even if you already have Type 2 diabetes, consuming at least three servings of whole grains daily may help you with weight management and reduce the likelihood of constipation. For women planning a pregnancy, eating grains fortified with folic acid before as well as during pregnancy helps prevent neural tube defects in the fetus during development. Why not go with the whole grain? You and your health will only benefit.
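As the small worked example promised above, the sketch below encodes the 8-gram and 16-gram per-serving stamp thresholds and the 48-gram daily target described in this article; the product and intake numbers are invented for illustration.

# Label arithmetic from the article: 8 g per serving = half a serving of
# whole grains (basic stamp), 16 g = one full serving (100% stamp),
# 48 g = the daily target. The sample data below is hypothetical.

HALF_SERVING_G = 8     # minimum for the basic "whole grain" stamp
FULL_SERVING_G = 16    # minimum for the "100% whole grain" stamp
DAILY_TARGET_G = 48    # Whole Grains Council daily recommendation

def stamp_category(grams_per_serving):
    """Classify a product by its whole-grain grams per serving."""
    if grams_per_serving >= FULL_SERVING_G:
        return "meets the 100% whole grain stamp minimum"
    if grams_per_serving >= HALF_SERVING_G:
        return "meets the basic whole grain stamp minimum"
    return "below stamp thresholds"

# Hypothetical grams of whole grain in the servings eaten in one day
daily_intake = [16, 8, 22]
total = sum(daily_intake)
print(f"Total: {total} g of the {DAILY_TARGET_G} g target "
      f"({100 * total / DAILY_TARGET_G:.0f}%)")
print(stamp_category(12))   # -> meets the basic whole grain stamp minimum

Note that the 100% stamp also requires that all grain in the product be whole, so grams per serving alone cannot distinguish the two stamps; the sketch only checks the minimum-gram side of the rule.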
Culture - Day 2 grades K-3 learning activities
These activities will help youth build awareness and learn to identify city resources. They can be done alone, but work best with friends over a video chat such as Skype, Zoom, Facetime, etc.
Trivia question: In what year did Calgary become a city? A) 2000 B) 1899 C) 1950 D) 1894
Mindfulness activity:
- Take a deep breath in and squeeze your fists as tight as you can.
- Hold it for 5 seconds.
- Blow all the air out through your mouth and shake your hands and arms as fast as you can.
Favourite places activity. Supplies:
- Pencil crayons
Steps:
- Give your child a piece of paper and pencil.
- Have your child divide the sheet into four sections (they can fold the paper or draw lines).
- Help them write 4 of their favourite places in Calgary (one in each section).
- Have them draw each place.
- Share the drawings with family members and see if anyone else has the same favourite places.
Building activity: Using household items, have your child build their very own skyscraper at home! See how tall they can make it and imagine what it would look like in downtown Calgary. If they want to and have enough supplies, build more skyscrapers to make their own downtown.
Dream city activity. Supplies:
- Pencil crayons
- Markers and/or crayons
For this activity, have your child draw a map of their dream city. They can draw buildings, stores and houses like they see in Calgary. They can also make up amazing new attractions, like a chocolate fountain or a sledding race course. Have them name their new city. How many people live there? What do people do for fun? Are there any special events or festivals?
Ask your child:
- What's your favourite thing about Calgary?
- What makes Calgary special?
- How can you make Calgary an even better place?
Trivia answer: D) 1894. Calgary is 126 years old!
Cotton is a soft, fluffy staple fiber that grows in a boll, or protective case, around the seeds of the cotton plants of the genus Gossypium in the mallow family Malvaceae. The fiber is almost pure cellulose, and can contain minor percentages of waxes, fats, pectins, and water. Under natural conditions, the cotton bolls will increase the dispersal of the seeds.
The plant is a shrub native to tropical and subtropical regions around the world, including the Americas, Africa, and India. The greatest diversity of wild cotton species is found in Mexico, followed by Australia and Africa. Cotton was independently domesticated in the Old and New Worlds.
The fiber is most often spun into yarn or thread and used to make a soft, breathable, and durable textile. The use of cotton for fabric is known to date to prehistoric times; fragments of cotton fabric dated to the fifth millennium BC have been found in the Indus Valley civilization, as well as fabric remnants dated back to 6000 BC in Peru. Although cultivated since antiquity, it was the invention of the cotton gin, which lowered the cost of production, that led to its widespread use, and it is the most widely used natural fiber cloth in clothing today.
Current estimates for world production are about 25 million tonnes or 110 million bales annually, grown on about 2.5% of the world's arable land. India is the world's largest producer of cotton. The United States has been the largest exporter for many years.
There are four commercially grown species of cotton, all domesticated in antiquity: Gossypium hirsutum, Gossypium barbadense, Gossypium arboreum, and Gossypium herbaceum. Hybrid varieties are also cultivated. The two New World cotton species account for the vast majority of modern cotton production, but the two Old World species were widely used before the 1900s. While cotton fibers occur naturally in colors of white, brown, pink and green, fears of contaminating the genetics of white cotton have led many cotton-growing locations to ban the growing of colored cotton varieties.
The word "cotton" has Arabic origins, derived from the Arabic word قطن (qutn or qutun). This was the usual word for cotton in medieval Arabic. In chapter 2 of his book, Marco Polo describes a province he calls Khotan in Turkestan, today's Xinjiang, where cotton was grown in abundance. The word entered the Romance languages in the mid-12th century, and English a century later. Cotton fabric was known to the ancient Romans as an import, but cotton was rare in the Romance-speaking lands until imports began arriving from the Arabic-speaking lands in the later medieval era at transformatively lower prices.
The earliest evidence of the use of cotton in the Old World, dated to 5500 BC and preserved in copper beads, has been found at the Neolithic site of Mehrgarh, at the foot of the Bolan Pass in ancient India, today in Balochistan, Pakistan. Fragments of cotton textiles have been found at Mohenjo-daro and other sites of the Bronze Age Indus Valley civilization, and cotton may have been an important export from it.
Cotton bolls discovered in a cave near Tehuacán, Mexico, have been dated to as early as 5500 BC, but this date has been challenged. More securely dated is the domestication of Gossypium hirsutum in Mexico between around 3400 and 2300 BC. During this time, people between the Río Santiago and the Río Balsas grew, spun, wove, dyed, and sewed cotton. What they did not use themselves, they sent to their Aztec rulers as tribute, on the scale of about 116 million pounds annually.
In Peru, cultivation of the indigenous cotton species Gossypium barbadense has been dated, from a find in Ancón, to c. 4200 BC, and was the backbone of the development of coastal cultures such as the Norte Chico, Moche, and Nazca. Cotton was grown upriver, made into nets, and traded with fishing villages along the coast for large supplies of fish. The Spanish who came to Mexico and Peru in the early 16th century found the people growing cotton and wearing clothing made of it.
The Greeks and the Arabs were not familiar with cotton until the Wars of Alexander the Great, as his contemporary Megasthenes told Seleucus I Nicator of "there being trees on which wool grows" in "Indica". This may be a reference to "tree cotton", Gossypium arboreum, which is a native of the Indian subcontinent. According to the Columbia Encyclopedia: Cotton has been spun, woven, and dyed since prehistoric times. It clothed the people of ancient India, Egypt, and China. Hundreds of years before the Christian era, cotton textiles were woven in India with matchless skill, and their use spread to the Mediterranean countries.
In Iran (Persia), the history of cotton dates back to the Achaemenid era (5th century BC); however, there are few sources about the planting of cotton in pre-Islamic Iran. Cotton cultivation was common in Merv, Ray and Pars. In Persian poems, especially Ferdowsi's Shahname, there are references to cotton ("panbe" in Persian). Marco Polo (13th century) refers to the major products of Persia, including cotton. John Chardin, a French traveler of the 17th century who visited Safavid Persia, spoke approvingly of the vast cotton farms of Persia.
Cotton (Gossypium herbaceum Linnaeus) may have been domesticated around 5000 BC in eastern Sudan near the Middle Nile Basin region, where cotton cloth was being produced. Around the 4th century BC, the cultivation of cotton and the knowledge of its spinning and weaving in Meroë reached a high level. The export of textiles was one of the sources of wealth for Meroë. Aksumite King Ezana boasted in his inscription that he destroyed large cotton plantations in Meroë during his conquest of the region.
During the Han dynasty (206 BC - 220 AD), cotton was grown by Chinese peoples in the southern Chinese province of Yunnan. Egyptians grew and spun cotton in the first seven centuries of the Christian era.
Handheld roller cotton gins had been used in India since the 6th century, and were then introduced to other countries from there. Between the 12th and 14th centuries, dual-roller gins appeared in India and China. The Indian version of the dual-roller gin was prevalent throughout the Mediterranean cotton trade by the 16th century. This mechanical device was, in some areas, driven by water power.
The earliest clear illustrations of the spinning wheel come from the Islamic world in the eleventh century. The earliest unambiguous reference to a spinning wheel in India is dated to 1350, suggesting that the spinning wheel was likely introduced from Iran to India during the Delhi Sultanate.
During the late medieval period, cotton became known as an imported fiber in northern Europe, without any knowledge of how it was derived, other than that it was a plant. Because Herodotus had written in his Histories, Book III, 106, that in India trees grew in the wild producing wool, it was assumed that the plant was a tree, rather than a shrub.
This aspect is retained in the name for cotton in several Germanic languages, such as German Baumwolle, which translates as "tree wool" (Baum means "tree"; Wolle means "wool"). Noting its similarities to wool, people in the region could only imagine that cotton must be produced by plant-borne sheep. John Mandeville, writing in 1350, stated as fact that "There grew there [India] a wonderful tree which bore tiny lambs on the endes of its branches. These branches were so pliable that they bent down to allow the lambs to feed when they are hungry." (See Vegetable Lamb of Tartary.)
Cotton manufacture was introduced to Europe during the Muslim conquest of the Iberian Peninsula and Sicily. The knowledge of cotton weaving was spread to northern Italy in the 12th century, when Sicily was conquered by the Normans, and consequently to the rest of Europe. The spinning wheel, introduced to Europe circa 1350, improved the speed of cotton spinning. By the 15th century, Venice, Antwerp, and Haarlem were important ports for cotton trade, and the sale and transportation of cotton fabrics had become very profitable.
Under the Mughal Empire, which ruled in the Indian subcontinent from the early 16th century to the early 18th century, Indian cotton production increased, in terms of both raw cotton and cotton textiles. The Mughals introduced agrarian reforms such as a new revenue system that was biased in favour of higher value cash crops such as cotton and indigo, providing state incentives to grow cash crops, in addition to rising market demand.
The largest manufacturing industry in the Mughal Empire was cotton textile manufacturing, which included the production of piece goods, calicos, and muslins, available unbleached and in a variety of colours. The cotton textile industry was responsible for a large part of the empire's international trade. India had a 25% share of the global textile trade in the early 18th century. Indian cotton textiles were the most important manufactured goods in world trade in the 18th century, consumed across the world from the Americas to Japan. The most important center of cotton production was the Bengal Subah province, particularly around its capital city of Dhaka.
The worm gear roller cotton gin, which was invented in India during the early Delhi Sultanate era of the 13th–14th centuries, came into use in the Mughal Empire some time around the 16th century, and is still used in India through to the present day. Another innovation, the incorporation of the crank handle in the cotton gin, first appeared in India some time during the late Delhi Sultanate or the early Mughal Empire. The production of cotton, which may have largely been spun in the villages and then taken to towns in the form of yarn to be woven into cloth textiles, was advanced by the diffusion of the spinning wheel across India shortly before the Mughal era, lowering the costs of yarn and helping to increase demand for cotton. The diffusion of the spinning wheel, and the incorporation of the worm gear and crank handle into the roller cotton gin, led to greatly expanded Indian cotton textile production during the Mughal era.
It was reported that, with an Indian cotton gin, which is half machine and half tool, one man and one woman could clean 28 pounds of cotton per day. With a modified Forbes version, one man and a boy could produce 250 pounds per day.
If oxen were used to power 16 of these machines, and a few people's labour was used to feed them, they could produce as much work as 750 people did formerly.
In the early 19th century, a Frenchman named M. Jumel proposed to the great ruler of Egypt, Mohamed Ali Pasha, that he could earn a substantial income by growing an extra-long staple Maho (Gossypium barbadense) cotton, in Lower Egypt, for the French market. Mohamed Ali Pasha accepted the proposition and granted himself the monopoly on the sale and export of cotton in Egypt, and later dictated that cotton should be grown in preference to other crops.
Egypt under Muhammad Ali in the early 19th century had the fifth most productive cotton industry in the world, in terms of the number of spindles per capita. The industry was initially driven by machinery that relied on traditional energy sources, such as animal power, water wheels, and windmills, which were also the principal energy sources in Western Europe up until around 1870. It was under Muhammad Ali in the early 19th century that steam engines were introduced to the Egyptian cotton industry.
By the time of the American Civil War, annual exports had reached $16 million (120,000 bales), which rose to $56 million by 1864, primarily due to the loss of the Confederate supply on the world market. Exports continued to grow even after the reintroduction of US cotton, produced now by a paid workforce, and Egyptian exports reached 1.2 million bales a year by 1903.
The English East India Company (EIC) introduced the British to cheap calico and chintz cloth on the restoration of the monarchy in the 1660s. Initially imported as a novelty sideline from its spice trading posts in Asia, the cheap, colourful cloth proved popular and overtook the EIC's spice trade by value in the late 17th century. The EIC embraced the demand, particularly for calico, by expanding its factories in Asia and producing and importing cloth in bulk, creating competition for domestic woollen and linen textile producers. The affected weavers, spinners, dyers, shepherds and farmers objected, and the calico question became one of the major issues of national politics between the 1680s and the 1730s.
Parliament began to see a decline in domestic textile sales and an increase in imported textiles from places like China and India. Seeing the East India Company and its textile importation as a threat to domestic textile businesses, Parliament passed the 1700 Calico Act, blocking the importation of cotton cloth. As there was no punishment for continuing to sell cotton cloth, smuggling of the popular material became commonplace. In 1721, dissatisfied with the results of the first act, Parliament passed a stricter addition, this time prohibiting the sale of most cottons, imported and domestic (exempting only thread Fustian and raw cotton).
The exemption of raw cotton from the prohibition initially saw 2,000 bales of cotton imported annually; this became the basis of a new indigenous industry, initially producing Fustian for the domestic market, and, more importantly, triggered the development of a series of mechanised spinning and weaving technologies to process the material.
This mechanised production was concentrated in new cotton mills, which slowly expanded until, by the beginning of the 1770s, seven thousand bales of cotton were imported annually. The new mill owners put pressure on Parliament to remove the prohibition on the production and sale of pure cotton cloth, as they could easily compete with anything the EIC could import. The acts were repealed in 1774, triggering a wave of investment in mill-based cotton spinning and production, doubling the demand for raw cotton within a couple of years, and doubling it again every decade, into the 1840s.
Indian cotton textiles, particularly those from Bengal, continued to maintain a competitive advantage up until the 19th century. In order to compete with India, Britain invested in labour-saving technical progress, while implementing protectionist policies such as bans and tariffs to restrict Indian imports. At the same time, the East India Company's rule in India contributed to its deindustrialization, opening up a new market for British goods, while the capital amassed from Bengal after its 1757 conquest was used to invest in British industries such as textile manufacturing and greatly increase British wealth. British colonization also forced open the large Indian market to British goods, which could be sold in India without tariffs or duties, compared to local Indian producers who were heavily taxed, while raw cotton was imported from India without tariffs to British factories which manufactured textiles from Indian cotton, giving Britain a monopoly over India's large market and cotton resources. India served as both a significant supplier of raw goods to British manufacturers and a large captive market for British manufactured goods. Britain eventually surpassed India as the world's leading cotton textile manufacturer in the 19th century.
India's cotton-processing sector changed during EIC expansion in India in the late 18th and early 19th centuries, shifting from supplying the British market to supplying East Asia with raw cotton. Artisan-produced textiles were no longer competitive with those produced industrially, and Europe preferred the cheaper slave-produced, long-staple American and Egyptian cottons for its own materials.
The advent of the Industrial Revolution in Britain provided a great boost to cotton manufacture, as textiles emerged as Britain's leading export. In 1738, Lewis Paul and John Wyatt, of Birmingham, England, patented the roller spinning machine, as well as the flyer-and-bobbin system for drawing cotton to a more even thickness using two sets of rollers that traveled at different speeds. Later, the invention of James Hargreaves' spinning jenny in 1764, Richard Arkwright's spinning frame in 1769 and Samuel Crompton's spinning mule in 1775 enabled British spinners to produce cotton yarn at much higher rates. From the late 18th century on, the British city of Manchester acquired the nickname "Cottonopolis" due to the cotton industry's omnipresence within the city, and Manchester's role as the heart of the global cotton trade.
Production capacity in Britain and the United States was improved by the invention of the modern cotton gin by the American Eli Whitney in 1793. Before the development of cotton gins, the cotton fibers had to be pulled from the seeds tediously by hand. By the late 1700s, a number of crude ginning machines had been developed.
However, to produce a bale of cotton required over 600 hours of human labor, making large-scale production uneconomical in the United States, even with the use of humans as slave labor. The gin that Whitney manufactured (the Holmes design) reduced the hours down to just a dozen or so per bale. Although Whitney patented his own design for a cotton gin, he manufactured a prior design from Henry Ogden Holmes, for which Holmes filed a patent in 1796.
Improving technology and increasing control of world markets allowed British traders to develop a commercial chain in which raw cotton fibers were (at first) purchased from colonial plantations, processed into cotton cloth in the mills of Lancashire, and then exported on British ships to captive colonial markets in West Africa, India, and China (via Shanghai and Hong Kong).
By the 1840s, India was no longer capable of supplying the vast quantities of cotton fibers needed by mechanized British factories, while shipping bulky, low-price cotton from India to Britain was time-consuming and expensive. This, coupled with the emergence of American cotton as a superior type (due to the longer, stronger fibers of the two domesticated native American species, Gossypium hirsutum and Gossypium barbadense), encouraged British traders to purchase cotton from plantations in the United States and in the Caribbean. By the mid-19th century, "King Cotton" had become the backbone of the southern American economy. In the United States, cultivating and harvesting cotton became the leading occupation of slaves.
During the American Civil War, American cotton exports slumped due to a Union blockade on Southern ports, and because of a strategic decision by the Confederate government to cut exports, hoping to force Britain to recognize the Confederacy or enter the war. The Lancashire Cotton Famine prompted the main purchasers of cotton, Britain and France, to turn to Egyptian cotton. British and French traders invested heavily in cotton plantations. The Egyptian government of Viceroy Isma'il took out substantial loans from European bankers and stock exchanges. After the American Civil War ended in 1865, British and French traders abandoned Egyptian cotton and returned to cheap American exports, sending Egypt into a deficit spiral that led to the country declaring bankruptcy in 1876, a key factor behind Egypt's occupation by the British Empire in 1882.
During this time, cotton cultivation in the British Empire, especially Australia and India, greatly increased to replace the lost production of the American South. Through tariffs and other restrictions, the British government discouraged the production of cotton cloth in India; rather, the raw fiber was sent to England for processing.
In the United States, growing Southern cotton generated significant wealth and capital for the antebellum South, as well as raw material for Northern textile industries. Before 1865 the cotton was largely produced through the labor of enslaved African Americans. It enriched both the Southern landowners and the new textile industries of the Northeastern United States and northwestern Europe. In 1860, the slogan "Cotton is king" characterized the attitude of Southern leaders toward this monocrop: they believed that Europe would support an independent Confederate States of America in 1861 in order to protect the supply of cotton it needed for its very large textile industry.
Russell Griffin of California farmed one of the biggest cotton operations, producing over sixty thousand bales. Cotton remained a key crop in the Southern economy after slavery ended in 1865. Across the South, sharecropping evolved, in which landless farmers worked land owned by others in return for a share of the profits. Some farmers rented the land and bore the production costs themselves. Until mechanical cotton pickers were developed, cotton farmers needed additional labor to hand-pick cotton. Picking cotton was a source of income for families across the South. Rural and small town school systems had split vacations so children could work in the fields during "cotton-picking."
During the middle 20th century, employment in cotton farming fell, as machines began to replace laborers and the South's rural labor force dwindled during the World Wars. Cotton remains a major export of the United States, with large farms in California, Arizona and the Deep South.
China's Chang'e 4 took cotton seeds to the Moon's far side. On 15 January 2019, China announced that a cotton seed had sprouted, the first "truly otherworldly plant in history". The capsule and seeds sit inside the Chang'e 4 lander in the Von Kármán Crater.
Successful cultivation of cotton requires a long frost-free period, plenty of sunshine, and a moderate rainfall, usually from 60 to 120 cm (24 to 47 in). Soils usually need to be fairly heavy, although the level of nutrients does not need to be exceptional. In general, these conditions are met within the seasonally dry tropics and subtropics in the Northern and Southern hemispheres, but a large proportion of the cotton grown today is cultivated in areas with less rainfall that obtain the water from irrigation. Production of the crop for a given year usually starts soon after harvesting the preceding autumn. Cotton is naturally a perennial but is grown as an annual to help control pests. Planting time in spring in the Northern hemisphere varies from the beginning of February to the beginning of June.
The area of the United States known as the South Plains is the largest contiguous cotton-growing region in the world. While dryland (non-irrigated) cotton is successfully grown in this region, consistent yields are only produced with heavy reliance on irrigation water drawn from the Ogallala Aquifer. Since cotton is somewhat salt and drought tolerant, this makes it an attractive crop for arid and semiarid regions.
As water resources get tighter around the world, economies that rely on irrigated cotton face difficulties and conflict, as well as potential environmental problems. For example, improper cropping and irrigation practices have led to desertification in areas of Uzbekistan, where cotton is a major export. In the days of the Soviet Union, the Aral Sea was tapped for agricultural irrigation, largely of cotton, and now salination is widespread.
Cotton can also be cultivated to have colors other than the yellowish off-white typical of modern commercial cotton fibers. Naturally colored cotton can come in red, green, and several shades of brown.
The water footprint of cotton fibers is substantially larger than for most other plant fibers. Cotton is known as a thirsty crop: on average, globally, it requires 8,000–10,000 liters of water to produce one kilogram of cotton, and in dry areas it may require even more; in some areas of India, it may need 22,500 liters.
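As a back-of-envelope illustration of those water-footprint figures, the sketch below scales the quoted liters-per-kilogram range to a single garment; the 250-gram t-shirt mass is an assumed example, not a figure from the text.

# Scaling the article's water-footprint figures to one garment.
# The shirt mass is an assumed example value.

LOW_L_PER_KG = 8_000        # lower bound of the quoted global range
HIGH_L_PER_KG = 10_000      # upper bound of the quoted global range
DRY_AREA_L_PER_KG = 22_500  # the quoted figure for some dry areas of India

def water_for_garment(mass_kg, liters_per_kg):
    """Liters of water embodied in a cotton garment of the given mass."""
    return mass_kg * liters_per_kg

tshirt_kg = 0.25   # assumed mass of one cotton t-shirt
print(f"Global range: {water_for_garment(tshirt_kg, LOW_L_PER_KG):,.0f}"
      f"-{water_for_garment(tshirt_kg, HIGH_L_PER_KG):,.0f} L per shirt")
print(f"Dry-area estimate: {water_for_garment(tshirt_kg, DRY_AREA_L_PER_KG):,.0f} L per shirt")

Under these assumptions a single shirt embodies roughly 2,000 to 2,500 liters of water globally, and over 5,600 liters at the dry-area rate, which is why water use dominates sustainability discussions of cotton.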
Genetically modified (GM) cotton was developed to reduce the heavy reliance on pesticides. The bacterium Bacillus thuringiensis (Bt) naturally produces a chemical harmful only to a small fraction of insects, most notably the larvae of moths and butterflies, beetles, and flies, and harmless to other forms of life. The gene coding for Bt toxin has been inserted into cotton, causing the resulting plants, called Bt cotton, to produce this natural insecticide in their tissues. In many regions, the main pests in commercial cotton are lepidopteran larvae, which are killed by the Bt protein in the transgenic cotton they eat. This eliminates the need to use large amounts of broad-spectrum insecticides to kill lepidopteran pests (some of which have developed pyrethroid resistance). This spares natural insect predators in the farm ecology and further contributes to noninsecticide pest management.
However, Bt cotton is ineffective against many cotton pests, such as plant bugs, stink bugs, and aphids; depending on circumstances, it may still be desirable to use insecticides against these. A 2006 study by Cornell researchers, the Center for Chinese Agricultural Policy and the Chinese Academy of Sciences on Bt cotton farming in China found that, after seven years, the secondary pests that were normally controlled by pesticide had increased, necessitating the use of pesticides at similar levels to non-Bt cotton and causing less profit for farmers because of the extra expense of GM seeds. However, a 2009 study by the Chinese Academy of Sciences, Stanford University and Rutgers University refuted this. They concluded that the GM cotton effectively controlled bollworm. The secondary pests were mostly mirids (plant bugs), whose increase was related to local temperature and rainfall and only continued to increase in half the villages studied. Moreover, the increase in insecticide use for the control of these secondary insects was far smaller than the reduction in total insecticide use due to Bt cotton adoption. A 2012 Chinese study concluded that Bt cotton halved the use of pesticides and doubled the level of ladybirds, lacewings and spiders.
The International Service for the Acquisition of Agri-biotech Applications (ISAAA) said that, worldwide, GM cotton was planted on an area of 25 million hectares in 2011. This was 69% of the worldwide total area planted in cotton. GM cotton acreage in India grew at a rapid rate, increasing from 50,000 hectares in 2002 to 10.6 million hectares in 2011. The total cotton area in India was 12.1 million hectares in 2011, so GM cotton was grown on 88% of the cotton area. This made India the country with the largest area of GM cotton in the world. A long-term study on the economic impacts of Bt cotton in India, published in the journal PNAS in 2012, showed that Bt cotton has increased yields, profits, and living standards of smallholder farmers.
The U.S. GM cotton crop was 4.0 million hectares in 2011, the second largest area in the world; the Chinese GM cotton crop was third largest, at 3.9 million hectares; and Pakistan had the fourth largest GM cotton crop area, at 2.6 million hectares in 2011. The initial introduction of GM cotton proved to be a success in Australia: the yields were equivalent to the non-transgenic varieties and the crop used much less pesticide to produce (85% reduction).
The subsequent introduction of a second variety of GM cotton led to increases in GM cotton production until 95% of the Australian cotton crop was GM in 2009, making Australia the country with the fifth largest GM cotton crop in the world. Other GM cotton growing countries in 2011 were Argentina, Myanmar, Burkina Faso, Brazil, Mexico, Colombia, South Africa and Costa Rica.
Cotton has been genetically modified for resistance to glyphosate, a broad-spectrum herbicide discovered by Monsanto, which also sells some of the Bt cotton seeds to farmers. There are also a number of other cotton seed companies selling GM cotton around the world. About 62% of the GM cotton grown from 1996 to 2011 was insect resistant, 24% stacked product and 14% herbicide resistant.
Cotton contains gossypol, a toxin that makes it inedible. However, scientists have silenced the gene that produces the toxin, making it a potential food crop. On 17 October 2018, the USDA deregulated GE low-gossypol cotton.
Organic cotton is generally understood as cotton from plants not genetically modified and that is certified to be grown without the use of any synthetic agricultural chemicals, such as fertilizers or pesticides. Its production also promotes and enhances biodiversity and biological cycles. In the United States, organic cotton plantations are required to comply with the National Organic Program (NOP), which determines the allowed practices for pest control, growing, fertilizing, and handling of organic crops. As of 2007, 265,517 bales of organic cotton were produced in 24 countries, and worldwide production was growing at a rate of more than 50% per year. Organic cotton products are now available for purchase at limited locations. These are popular for baby clothes and diapers; natural cotton products are known to be both sustainable and hypoallergenic.
The cotton industry relies heavily on chemicals, such as fertilizers, insecticides and herbicides, although a very small number of farmers are moving toward an organic model of production. Under most definitions, organic products do not use transgenic Bt cotton, which contains a bacterial gene that codes for a plant-produced protein that is toxic to a number of pests, especially the bollworms. For most producers, Bt cotton has allowed a substantial reduction in the use of synthetic insecticides, although in the long term resistance may become problematic.
Significant global pests of cotton include various species of bollworm, such as Pectinophora gossypiella. Sucking pests include cotton stainers; the chili thrips, Scirtothrips dorsalis; and the cotton seed bug, Oxycarenus hyalinipennis. Defoliators include the fall armyworm, Spodoptera frugiperda.
Historically, in North America, one of the most economically destructive pests in cotton production has been the boll weevil, a beetle that feeds on cotton; in the 1950s it drastically slowed the production of the cotton industry. As one account put it, "This bone pile of short budgets, loss of market share, failing prices, abandoned farms, and the new immunity of boll weevils generated a feeling of helplessness." Boll weevils first appeared in Beeville, Texas, wiping out field after field of cotton in south Texas. The infestation swept through east Texas and spread to the eastern seaboard, leaving ruin and devastation in its path and causing many cotton farmers to go out of business.
Due to the US Department of Agriculture's highly successful Boll Weevil Eradication Program (BWEP), this pest has been eliminated from cotton in most of the United States. This program, along with the introduction of genetically engineered Bt cotton, has improved the management of a number of pests such as cotton bollworm and pink bollworm. Sucking pests include the cotton stainer, Dysdercus suturellus, and the tarnished plant bug, Lygus lineolaris. A significant cotton disease is caused by Xanthomonas citri subsp. malvacearum.
Most cotton in the United States, Europe and Australia is harvested mechanically, either by a cotton picker, a machine that removes the cotton from the boll without damaging the cotton plant, or by a cotton stripper, which strips the entire boll off the plant. Cotton strippers are used in regions where it is too windy to grow picker varieties of cotton, and usually after application of a chemical defoliant or the natural defoliation that occurs after a freeze. Cotton is a perennial crop in the tropics, and without defoliation or freezing, the plant will continue to grow. Cotton continues to be picked by hand in developing countries and in Xinjiang, China, by forced labor. Xinjiang produces over 20% of the world's cotton.
The era of manufactured fibers began with the development of rayon in France in the 1890s. Rayon is derived from natural cellulose and cannot be considered synthetic, but it requires extensive processing in a manufacturing process, and it led the way in the less expensive replacement of more naturally derived materials. A succession of new synthetic fibers was introduced by the chemicals industry in the following decades. Acetate in fiber form was developed in 1924. Nylon, the first fiber synthesized entirely from petrochemicals, was introduced as a sewing thread by DuPont in 1936, followed by DuPont's acrylic in 1944. Some garments were created from fabrics based on these fibers, such as women's hosiery from nylon, but it was not until the introduction of polyester into the fiber marketplace in the early 1950s that the market for cotton came under threat. The rapid uptake of polyester garments in the 1960s caused economic hardship in cotton-exporting economies, especially in Central American countries, such as Nicaragua, where cotton production had boomed tenfold between 1950 and 1965 with the advent of cheap chemical pesticides. Cotton production recovered in the 1970s, but crashed to pre-1960 levels in the early 1990s.
High water and pesticide use in cotton cultivation has prompted sustainability concerns and created a market for natural fiber alternatives. Other cellulose fibers, such as hemp, are seen as more sustainable options because of higher yields per acre with less water and pesticide use than cotton. Cellulose fiber alternatives have similar characteristics but are not perfect substitutes for cotton textiles, with differences in properties such as tensile strength and thermal regulation.
Cotton is used to make a number of textile products. These include terrycloth for highly absorbent bath towels and robes; denim for blue jeans; cambric, popularly used in the manufacture of blue work shirts (from which we get the term "blue-collar"); and corduroy, seersucker, and cotton twill. Socks, underwear, and most T-shirts are made from cotton. Bed sheets often are made from cotton. It is a preferred material for sheets as it is hypoallergenic, easy to maintain and non-irritant to the skin. Cotton also is used to make yarn used in crochet and knitting.
Fabric also can be made from recycled or recovered cotton that otherwise would be thrown away during the spinning, weaving, or cutting process. While many fabrics are made completely of cotton, some materials blend cotton with other fibers, including rayon and synthetic fibers such as polyester. Cotton can be used in either knitted or woven fabrics, and it can be blended with elastane to make a stretchier thread for knitted fabrics and apparel such as stretch jeans. Cotton can also be blended with linen, producing fabrics with the benefits of both materials. Linen-cotton blends are wrinkle resistant and retain heat more effectively than linen alone, and are thinner, stronger and lighter than cotton alone.
In addition to the textile industry, cotton is used in fishing nets, coffee filters, tents, explosives manufacture (see nitrocellulose), cotton paper, and in bookbinding. Fire hoses were once made of cotton.
The cottonseed which remains after the cotton is ginned is used to produce cottonseed oil, which, after refining, can be consumed by humans like any other vegetable oil. The cottonseed meal that is left generally is fed to ruminant livestock; the gossypol remaining in the meal is toxic to monogastric animals. Cottonseed hulls can be added to dairy cattle rations for roughage. During the American slavery period, cotton root bark was used in folk remedies as an abortifacient, that is, to induce a miscarriage. Gossypol is one of many substances found in all parts of the cotton plant, and scientists have described it as a 'poisonous pigment'. It appears to inhibit the development of sperm or even to restrict sperm motility, and it is also thought to interfere with the menstrual cycle by restricting the release of certain hormones.
Cotton linters are fine, silky fibers which adhere to the seeds of the cotton plant after ginning. These curly fibers typically are less than 1⁄8 inch (3.2 mm) long. The term also may apply to the longer textile fiber staple lint as well as the shorter fuzzy fibers from some upland species. Linters are traditionally used in the manufacture of paper and as a raw material in the manufacture of cellulose. In the UK, linters are referred to as "cotton wool". A less technical use of the term "cotton wool", in the UK and Ireland, is for the refined product known as "absorbent cotton" (or, often, just "cotton") in U.S. usage: fluffy cotton in sheets or balls used for medical, cosmetic, protective packaging, and many other practical purposes. The first medical use of cotton wool was by Sampson Gamgee at the Queen's Hospital (later the General Hospital) in Birmingham, England.
Long staple (LS) cotton is cotton of a longer fibre length and therefore of higher quality, while extra-long staple (ELS) cotton has a longer fibre length still and is of even higher quality. The name "Egyptian cotton" is broadly associated with high-quality cottons and is often an LS or (less often) an ELS cotton. Nowadays the name "Egyptian cotton" refers more to the way the cotton is treated and the threads are produced than to the location where it is grown. The American cotton variety Pima cotton is often compared to Egyptian cotton, as both are used in high-quality bed sheets and other cotton products. While Pima cotton is often grown in the American southwest, the Pima name is now used by cotton-producing nations such as Peru, Australia and Israel. Not all products bearing the Pima name are made with the finest cotton: American-grown ELS Pima cotton is trademarked as Supima cotton.
"Kasturi" cotton is a brand-building initiative for Indian long staple cotton by the Indian government. The PIB issued a press release announcing the same. Cottons have been grown as ornamentals or novelties due to their showy flowers and snowball-like fruit. For example, Jumel's cotton, once an important source of fiber in Egypt, started as an ornamental. However, agricultural authorities such as the Boll Weevil Eradication Program in the United States discourage using cotton as an ornamental, due to concerns about these plants harboring pests injurious to crops. Cotton lisle, or fil d'Ecosse cotton, is a finely-spun, tightly twisted type of cotton that is noted for being strong and durable. Lisle is composed of two strands that have each been twisted an extra twist per inch than ordinary yarns and combined to create a single thread. The yarn is spun so that it is compact and solid. This cotton is used mainly for underwear, stockings, and gloves. Colors applied to this yarn are noted for being more brilliant than colors applied to softer yarn. This type of thread was first made in the city of Lisle, France (now Lille), hence its name. The largest producers of cotton, as of 2017, are India and China, with annual production of about 18.53 million tonnes and 17.14 million tonnes, respectively; most of this production is consumed by their respective textile industries. The largest exporters of raw cotton are the United States, with sales of $4.9 billion, and Africa, with sales of $2.1 billion. The total international trade is estimated to be $12 billion. Africa's share of the cotton trade has doubled since 1980. Neither area has a significant domestic textile industry, textile manufacturing having moved to developing nations in Eastern and South Asia such as India and China. In Africa, cotton is grown by numerous small holders. Dunavant Enterprises, based in Memphis, Tennessee, is the leading cotton broker in Africa, with hundreds of purchasing agents. It operates cotton gins in Uganda, Mozambique, and Zambia. In Zambia, it often offers loans for seed and expenses to the 180,000 small farmers who grow cotton for it, as well as advice on farming methods. Cargill also purchases cotton in Africa for export. The 25,000 cotton growers in the United States are heavily subsidized at the rate of $2 billion per year although China now provides the highest overall level of cotton sector support. The future of these subsidies is uncertain and has led to anticipatory expansion of cotton brokers' operations in Africa. Dunavant expanded in Africa by buying out local operations. This is only possible in former British colonies and Mozambique; former French colonies continue to maintain tight monopolies, inherited from their former colonialist masters, on cotton purchases at low fixed prices. To encourage trade and organize discussion about cotton, World Cotton Day is celebrated every October 7. |Top 10 cotton-producing countries (in tonnes)| |Source: UN Food & Agriculture Organization| The five leading exporters of cotton in 2019 are (1) India, (2) the United States, (3) China, (4) Brazil, and (5) Pakistan. In India, the states of Maharashtra (26.63%), Gujarat (17.96%) and Andhra Pradesh (13.75%) and also Madhya Pradesh are the leading cotton producing states, these states have a predominantly tropical wet and dry climate. In the United States, the state of Texas led in total production as of 2004, while the state of California had the highest yield per acre. 
Cotton is an enormously important commodity throughout the world. It provides livelihoods for up to 1 billion people, including 100 million smallholder farmers who cultivate it. However, many farmers in developing countries receive a low price for their produce or find it difficult to compete with developed countries. This has led to an international dispute (see Brazil–United States cotton dispute): on 27 September 2002, Brazil requested consultations with the US regarding prohibited and actionable subsidies provided to US producers, users, and/or exporters of upland cotton, as well as legislation, regulations, statutory instruments, and amendments thereto providing such subsidies (including export credits), grants, and any other assistance to the US producers, users, and exporters of upland cotton. On 8 September 2004, the Panel Report recommended that the United States "withdraw" export credit guarantees and payments to domestic users and exporters, and "take appropriate steps to remove the adverse effects or withdraw" the mandatory price-contingent subsidy measures.

While Brazil was fighting the US through the WTO's Dispute Settlement Mechanism against a heavily subsidized cotton industry, a group of four least-developed African countries – Benin, Burkina Faso, Chad, and Mali – known as the "Cotton-4", have been the leading protagonists for the reduction of US cotton subsidies through negotiations. The four introduced a "Sectoral Initiative in Favour of Cotton", presented by Burkina Faso's President Blaise Compaoré during the Trade Negotiations Committee on 10 June 2003.

In addition to concerns over subsidies, the cotton industries of some countries are criticized for employing child labor and for damaging workers' health through exposure to pesticides used in production. The Environmental Justice Foundation has campaigned against the prevalent use of forced child and adult labor in cotton production in Uzbekistan, the world's third-largest cotton exporter. The international production and trade situation has led to "fair trade" cotton clothing and footwear, joining a rapidly growing market for organic clothing, fair fashion, or "ethical fashion". The fair trade system was initiated in 2005 with producers from Cameroon, Mali, and Senegal, with the Association Max Havelaar France playing a lead role in establishing this segment of the fair trade system in conjunction with Fairtrade International and the French organisation Dagris (Développement des Agro-Industries du Sud).

Cotton is bought and sold by investors and price speculators as a tradable commodity on two different commodity exchanges in the United States, with a contract tick value of 5 USD.
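To illustrate what a tick value of 5 USD means for a trader, the sketch below computes the profit or loss on a hypothetical futures position. The contract size (50,000 lb) and minimum price increment (0.01 cents per pound) are assumptions based on the standard US cotton No. 2 futures contract, not figures given in this text; only the $5-per-tick value comes from the passage above.

    # Hypothetical cotton-futures P&L. The contract specs below are
    # assumptions (standard cotton No. 2 contract), not from this text:
    # 50,000 lb per contract, quoted in US cents per pound,
    # minimum tick 0.01 cents/lb, so 50,000 lb * $0.0001/lb = $5 per tick.
    CONTRACT_LBS = 50_000
    TICK_CENTS = 0.01
    TICK_VALUE_USD = CONTRACT_LBS * TICK_CENTS / 100  # = 5.0

    def long_position_pnl(entry_cents, exit_cents, contracts=1):
        """Profit or loss in USD for a long futures position."""
        ticks = round((exit_cents - entry_cents) / TICK_CENTS)  # avoid float drift
        return ticks * TICK_VALUE_USD * contracts

    # Buying one contract at 80.00 cents/lb and selling at 81.25 cents/lb
    # is a move of 125 ticks: 125 * $5 = $625.
    print(long_position_pnl(80.00, 81.25))  # 625.0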
A temperature range of 25 to 35 °C (77 to 95 °F) is the optimal range for mold development; at temperatures below 0 °C (32 °F), the rotting of wet cotton stops, so damaged cotton is sometimes stored at these low temperatures to prevent further deterioration. Egypt has a unique climate in which the soil and temperature provide an exceptional environment for cotton to grow rapidly.

Properties of the cotton fiber include the following:

|Shape|Fairly uniform in width, 12–20 micrometers; length varies from 1 cm to 6 cm (1⁄2 to 2 1⁄2 inches); typical length is 2.2 cm to 3.3 cm (7⁄8 to 1 1⁄4 inches)|
|Acids|Damage, weaken fibers|
|Alkalis|Resistant; no harmful effects|
|Organic solvents|High resistance to most|
|Sunlight|Prolonged exposure weakens fibers|
|Microorganisms|Mildew and rot-producing bacteria damage fibers|
|Insects|Silverfish damage fibers|
|Thermal reactions|Decomposes after prolonged exposure to temperatures of 150 °C or over|
|Burning behaviour|Burns readily with a yellow flame and smells like burning paper; the residual ash is light, fluffy, and greyish in color|

The chemical composition of cotton varies depending upon its origin.

Cotton has a more complex structure than most other crop fibres. A matured cotton fiber is a single, elongated, complete dried multilayer cell that develops in the surface layer of the cottonseed; its wall consists of several distinct layers.

Dead cotton is a term for unripe cotton fibers that do not absorb dye: immature cotton with poor dye affinity that appears as white specks on a dyed fabric. Under a microscope, dead fibers look different from mature ones; dead cotton fibers have thin cell walls, whereas mature fibers have more cellulose and a greater degree of cell-wall thickening.

There is a public effort to sequence the genome of cotton, started in 2007 by a consortium of public researchers. Their aim is to sequence the genome of cultivated tetraploid cotton; "tetraploid" means that its nucleus has two separate genomes, called A and D. The consortium agreed to first sequence the D-genome wild relative of cultivated cotton (G. raimondii, a Central American species) because it is small and has few repetitive elements: it has nearly one-third of the bases of tetraploid cotton, and each chromosome occurs only once. Then the A genome of G. arboreum, a diploid species cultivated in the Old World that was first domesticated near the Indus Valley before 6000 BC (Moulherat et al. 2002), would be sequenced. Its genome is roughly twice that of G. raimondii; part of the difference in size is due to the amplification of retrotransposons (GORGE).

After both diploid genomes are assembled, they would be used as models for sequencing the genomes of the tetraploid cultivated species. Without knowing the diploid genomes, the euchromatic DNA sequences of the AD genomes would co-assemble while their repetitive elements would assemble independently into A and D sequences, and there would be no way to untangle the mess of AD sequences without comparing them to their diploid counterparts.

The public-sector effort continues with the goal of creating a high-quality draft genome sequence from reads generated by all sources. It has generated Sanger reads of BACs, fosmids, and plasmids, as well as 454 reads; the latter type will be instrumental in assembling an initial draft of the D genome. In 2010 the companies Monsanto and Illumina completed enough Illumina sequencing to cover the D genome of G. raimondii about 50 times over and announced that they would donate their raw reads to the public, a public-relations effort that gave them some recognition for sequencing the cotton genome. Once the D genome is assembled from all of this raw material, it will undoubtedly assist in the assembly of the AD genomes of cultivated varieties of cotton, but much work remains. As of 2014, at least one assembled cotton genome had been reported.
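The assembly strategy described above, in which the diploid A and D genomes are used to tell apart the two halves of the tetraploid AD genome, can be illustrated with a toy read classifier. The sketch below is purely illustrative (toy sequences and a naive k-mer comparison; real assemblers are far more sophisticated): each read is assigned to whichever diploid reference shares more of its k-mers.

    # Illustrative sketch: partitioning reads from a tetraploid (AD) cotton
    # genome by comparing their k-mers against two diploid references.
    # The sequences below are toy examples, not real cotton data.

    def kmers(seq, k=8):
        """Return the set of all length-k substrings of seq."""
        return {seq[i:i + k] for i in range(len(seq) - k + 1)}

    def classify(read, a_index, d_index, k=8):
        """Assign a read to the A or D subgenome by shared k-mer count."""
        read_kmers = kmers(read, k)
        a_hits = len(read_kmers & a_index)
        d_hits = len(read_kmers & d_index)
        if a_hits == d_hits:
            return "ambiguous"  # e.g. a region conserved in both subgenomes
        return "A" if a_hits > d_hits else "D"

    # Toy stand-ins for the G. arboreum (A) and G. raimondii (D) assemblies.
    a_index = kmers("ATGGCGTACCTTAGGCATCGATCGGAT")
    d_index = kmers("ATGGCGTTCCTAAGGTATCGAACGGTT")

    for read in ["GCGTACCTTAGG", "GCGTTCCTAAGG"]:
        print(read, "->", classify(read, a_index, d_index))

Reads from regions that are identical in both subgenomes, such as shared repetitive elements, come out ambiguous under this scheme, which is exactly why the diploid reference genomes are needed to untangle the co-assembled sequences.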
Native American (indigenous peoples of Canada and the United States)

Introduction

Also called American Indian, Amerindian, Amerind, Indian, Aboriginal American, or First Nation person: a member of any of the aboriginal peoples of the Western Hemisphere, although the term often connotes only those groups whose original territories were in present-day Canada and the United States.

Pre-Columbian Americans used technology and material culture that included fire and the fire drill; the domesticated dog; stone implements of many kinds; the spear-thrower (atlatl), harpoon, and bow and arrow; and cordage, netting, basketry, and, in some places, pottery. Many indigenous American groups were hunting-and-gathering cultures, while others were agricultural peoples. American Indians domesticated a variety of plants and animals, including corn (maize), beans, squash, potatoes and other tubers, turkeys, llamas, and alpacas, as well as a variety of semidomesticated species of nut- and seed-bearing plants. These and other resources were used to support communities ranging from small hamlets to cities such as Cahokia (Cahokia Mounds), with an estimated population of 10,000 to 20,000 individuals, and Teotihuacán, with some 125,000 to 200,000 residents.

At the dawn of the 16th century AD, as the European conquest of the Americas began, indigenous peoples resided throughout the Western Hemisphere. They were soon decimated by the effects of epidemic disease, military conquest, and enslavement, and, as with other colonized peoples, they were subject to discriminatory political and legal policies well into the 20th, and even the 21st, century. Nonetheless, they have been among the most active and successful native peoples in effecting political change and regaining their autonomy in areas such as education, land ownership, religious freedom, the law, and the revitalization of traditional culture.

Culturally, the indigenous peoples of the Americas are usually recognized as constituting two broad groupings, American Indians and Arctic peoples. American Indians are often further grouped by area of residence: Northern America (present-day United States and Canada), Middle America (present-day Mexico and Central America; sometimes called Mesoamerica), and South America. This article is a survey of the culture areas, prehistories, histories, and recent developments of the indigenous peoples and cultures of the United States and Canada. Some of the terminology used in reference to indigenous Americans is explained in the sidebars "Tribal Nomenclature: American Indian, Native American, and First Nation"; "The Difference Between a Tribe and a Band"; and "Native American Self-Names". An overview of all the indigenous peoples of the Americas is presented in the article American Indian; discussions of various aspects of indigenous American cultures may also be found in the articles pre-Columbian civilizations; Middle American Indian; South American Indian; Arctic: The People; American Indian languages; Native American religions; and Native American arts.

Native American culture areas

Comparative studies are an essential component of all scholarly analyses, whether the topic under study is human society, fine art, paleontology, or chemistry; the similarities and differences found in the entities under consideration help to organize and direct research programs and exegeses.
The comparative study of cultures falls largely in the domain of anthropology, which often uses a typology known as the culture area approach to organize comparisons across cultures. The culture area approach was delineated at the turn of the 20th century and continued to frame discussions of peoples and cultures into the 21st century. A culture area is a geographic region where certain cultural traits have generally co-occurred; for instance, in North America between the 16th and 19th centuries, the Northwest Coast culture area was characterized by traits such as salmon fishing, woodworking, large villages or towns, and hierarchical social organization.

The specific number of culture areas delineated for Native America has been somewhat variable because regions are sometimes subdivided or conjoined. The 10 culture areas discussed below are among the most commonly used: the Arctic, the Subarctic, the Northeast, the Southeast, the Plains, the Southwest, the Great Basin, California, the Northwest Coast, and the Plateau. Notably, some scholars prefer to combine the Northeast and Southeast into one Eastern Woodlands culture area, or the Plateau and Great Basin into a single Intermontane culture area. Each section below considers the location, climate, environment, languages, tribes, and common cultural characteristics of the area before it was heavily colonized. Prehistoric and post-Columbian Native American cultures are discussed in subsequent sections of this article. A discussion of the indigenous peoples of the Americas as a whole is found in the article American Indian.

The Arctic

This region lies near and above the Arctic Circle and includes the northernmost parts of present-day Alaska and Canada. The topography is relatively flat, and the climate is characterized by very cold temperatures for most of the year. The region's extreme northerly location alters the diurnal cycle; on winter days the sun may peek above the horizon for only an hour or two, while the proportion of night to day is reversed during the summer months (see midnight Sun).

The indigenous peoples of the North American Arctic include the Eskimo (Inuit and Yupik/Yupiit) and Aleut; their traditional languages are in the Eskimo-Aleut family. Many Alaskan groups prefer to be called Native Alaskans rather than Native Americans; Canada's Arctic peoples generally prefer the referent Inuit.

The Arctic peoples of North America relied upon hunting and gathering. Winters were harsh, but the long hours of summer sunlight supported an explosion of vegetation that in turn drew large herds of caribou and other animals to the inland North. On the coasts, sea mammals and fish formed the bulk of the diet. Small mobile bands were the predominant form of social organization; band membership was generally based on kinship and marriage (see also the sidebar "The Difference Between a Tribe and a Band"). Dome-shaped houses were common; they were sometimes made of snow and other times of timber covered with earth. Fur clothing, dog sleds, and vivid folklore, mythology, and storytelling traditions were also important aspects of Arctic cultures. See also Arctic: The People.

The Subarctic

This region lies south of the Arctic and encompasses most of present-day Alaska and most of Canada, excluding the Maritime Provinces (New Brunswick, Nova Scotia, and Prince Edward Island), which are part of the Northeast culture area.
The topography is relatively flat, the climate is cool, and the ecosystem is a swampy, coniferous boreal forest (taiga).

Prominent tribes include the Innu (Montagnais and Naskapi), Cree, Ojibwa, Chipewyan, Beaver, Slave, Carrier, Gwich'in, Tanaina, and Deg Xinag (Ingalik). Their traditional languages are in the Athabaskan and Algonquian families.

Small kin-based bands were the predominant form of social organization, although seasonal gatherings of larger groups occurred at favoured fishing locales. Moose, caribou, beavers, waterfowl, and fish were taken, and plant foods such as berries, roots, and sap were gathered. In winter people generally resided in snug semisubterranean houses built to withstand extreme weather; summer allowed for more mobility and the use of tents or lean-tos. Snowshoes, toboggans, and fur clothing were other common forms of material culture. See also American Subarctic peoples.

The Northeast

This culture area reaches from the present-day Canadian provinces of Quebec, Ontario, and the Maritimes (New Brunswick, Nova Scotia, and Prince Edward Island) south to the Ohio River valley (inland) and to North Carolina (on the Atlantic Coast). The topography is generally rolling, although the Appalachian Mountains include some relatively steep slopes. The climate is temperate, precipitation is moderate, and the predominant ecosystem is the deciduous forest. There is also extensive coastline and an abundance of rivers and lakes.

Prominent tribes include the Algonquin, Iroquois, Huron, Wampanoag, Mohican, Mohegan, Ojibwa, Ho-Chunk (Winnebago), Sauk, Fox, and Illinois. The traditional languages of the Northeast are largely of the Iroquoian and Algonquian language families.

Most Northeastern peoples engaged in agriculture, and for them the village of a few dozen to a few hundred persons was the most important social and economic unit in daily life. Groups that had access to reliably plentiful wild foods such as wild rice, salmon, or shellfish generally preferred to live in dispersed hamlets of extended families. Several villages or hamlets formed a tribe, and groups of tribes sometimes organized into powerful confederacies. These alliances were often very complex political organizations and generally took their name from the most powerful member tribe, as with the Iroquois Confederacy.

Cultivated corn (maize), beans, squash, and weedy seed-bearing plants such as Chenopodium formed the economic base for farming groups. All Northeastern peoples took animals, including deer, elk, moose, waterfowl, turkeys, and fish. Houses were wickiups (wigwams) or longhouses; both house types were constructed of a sapling framework that was covered with rush matting or sheets of bark. Other common aspects of culture included dugouts made of the trunks of whole trees, birchbark canoes, clothing made of pelts and deerskins, and a variety of medicine societies. See also Northeast Indian.

The Southeast

This region reaches from the southern edge of the Northeast culture area to the Gulf of Mexico; from east to west it stretches from the Atlantic Ocean to somewhat west of the Mississippi valley. The climate is warm temperate in the north and grades to subtropical in the south.
The topography includes coastal plains, rolling uplands known as the Piedmont, and a portion of the Appalachian Mountains; of these, the Piedmont was most densely populated. The predominant ecosystems were coastal scrub, wetlands, and deciduous forests.

Perhaps the best-known indigenous peoples originally from this region are the Cherokee, Choctaw, Chickasaw, Creek, and Seminole, sometimes referred to as the Five Civilized Tribes. Other prominent tribes included the Natchez, Caddo, Apalachee, Timucua, and Guale. Traditionally, most tribes in the Southeast spoke Muskogean languages; there were also some Siouan-language speakers and one Iroquoian-speaking group, the Cherokee.

The region's economy was primarily agricultural and often supported social stratification; as chiefdoms, most cultures were structured around hereditary classes of elites and commoners, although some groups used hierarchical systems that had additional status levels. Most people were commoners and lived in hamlets located along waterways. Each hamlet was home to an extended family and typically included a few houses and auxiliary structures such as granaries and summer kitchens; these were surrounded by agricultural plots or fields. Hamlets were usually associated with a town that served as the area's ceremonial and market centre. Towns often included large earthen mounds on which religious structures and the homes of the ruling classes or families were placed. Together, each town and its associated hamlets constituted an autonomous political entity. In times of need these could unite into confederacies, such as those of the Creek and Choctaw.

People grew corn, beans, squash, tobacco, and other crops; they also gathered wild plant foods and shellfish, hunted deer and other animals, and fished. House forms varied extensively across the region, including wickiups (wigwams), earth-berm dwellings, and, in the 19th century, chickees (thatched roofs with open walls). The Southeast was also known for its religious iconography, which often included bird themes, and for the use of the "black drink", an emetic used in ritual contexts. See also Southeast Indian.

The Plains

The Plains lie in the centre of the continent, spanning the area between the western mountains and the Mississippi River valley and from the southern edge of the Subarctic to the Rio Grande in present-day Texas. The climate is of the continental type, with warm summers and cold winters. Relatively flat short-grass prairies with little precipitation are found west of the Missouri River, and rolling tallgrass prairies with more moisture are found to its east. Tree-lined river valleys form a series of linear oases throughout the region.

The indigenous peoples of the Plains include speakers of Siouan, Algonquian, Uto-Aztecan, Caddoan, Athabaskan, Kiowa-Tanoan, and Michif languages. Plains peoples also invented a sign language to represent common objects or concepts such as "buffalo" or "exchange".

Earth-lodge villages were the only settlements on the Plains until the late 16th century; they were found along major waterways that provided fertile soil for growing corn, beans, squash, sunflowers, and tobacco. The groups who built these communities divided their time between village-based crop production and hunting expeditions, which often lasted for several weeks and involved travel over a considerable area.
Plains villagers include the Mandan, Hidatsa, Omaha, Pawnee, and Arikara.

By 1750 horses from the Spanish colonies in present-day New Mexico had become common on the Plains and had revolutionized the hunting of bison. This new economic opportunity caused some local villagers to become dedicated nomads, as with the Crow (who retained close ties with their Hidatsa kin), and also drew agricultural tribes from surrounding areas into a nomadic lifestyle, including the Sioux, Blackfoot, Cheyenne, Comanche, Arapaho, and Kiowa.

Groups throughout the region had in common several forms of material culture, including the tepee, tailored leather clothing, a variety of battle regalia (such as feathered headdresses), and large drums used in ritual contexts. The Sun Dance, a ritual that demanded a high degree of piety and self-sacrifice from its participants, was also found throughout most of the Plains.

The Plains is perhaps the culture area in which tribal and band classifications were most conflated. Depictions of indigenous Americans in popular culture have often been loosely based on Plains peoples, encouraging many to view them as the "typical" American Indians. See also Plains Indian.

The Southwest

This culture area lies between the Rocky Mountains and the Mexican Sierra Madre, mostly in present-day Arizona and New Mexico. The topography includes plateaus, basins, and ranges. The climate on the Colorado Plateau is temperate, while it is semitropical in most of the basin and range systems; there is little precipitation, and the major ecosystem is desert. The landscape includes several major river systems, notably those of the Colorado and the Rio Grande, that create linear oases in the region.

The Southwest is home to speakers of Hokan, Uto-Aztecan, Tanoan, Keresan, Kiowa-Tanoan, Penutian, and Athabaskan languages. The region was home to both agricultural and hunting-and-gathering peoples, although the most common lifeway combined these two economic strategies. Best known among the agriculturists are the Pueblo Indians, including the Zuni and Hopi. The Yumans, Pima, and Tohono O'odham (Papago) engaged in both farming and foraging, relying on each to the extent the environment would allow. The Navajo and the many Apache groups usually engaged in some combination of agriculture, foraging, and raiding other groups.

The major agricultural products were corn, beans, squash, and cotton. Wild plant foods, deer, other game, and fish (for those groups living near rivers) were the primary foraged foods. The Pueblo peoples built architecturally remarkable apartment houses of adobe and stone masonry (see pueblo architecture) and were known for their complex kinship structures, kachina (katsina) dances and dolls, and fine pottery, textiles, and kiva and sand paintings. The Navajo built round houses ("hogans") and were known for their complex clan system, healing rituals, and fine textiles and jewelry. The Apaches, Yumans, Pima, and Tohono O'odham generally built thatched houses or brush shelters and focused their expressive culture on oral traditions. Stone channels and check dams (low walls that slowed the runoff from the sporadic but heavy rains) were common throughout the Southwest, as were basketry and digging sticks.
See also Southwest Indian.

The Great Basin

The Great Basin culture area is centred in the intermontane deserts of present-day Nevada and includes adjacent areas in California, Oregon, Idaho, Montana, Wyoming, Colorado, Utah, and Arizona. It is so named because the surrounding mountains create a bowl-like landscape that prevented water from flowing out of the region. The most common topographic features are basin and range systems; these gradually transition to high intermontane plateaus in the north. The climate is temperate in the north and becomes subtropical to the south. Higher elevations tend to receive ample moisture, but other areas average as little as 2 inches (50 mm) per year. Much of the region's surface water, such as the Great Salt Lake, is brackish. The predominant ecosystem is desert.

The Great Basin is home to the Washoe, speakers of a Hokan language, and a number of tribes speaking Numic languages (a division of the Uto-Aztecan language family). These include the Mono, Paiute, Bannock, Shoshone, Ute, and Gosiute.

The peoples of this region were hunters and gatherers and generally organized themselves in mobile, kin-based bands. Seeds, piñon nuts, and small game formed the bulk of the diet for most groups, although those occupying northern and eastern locales readily adopted horses and equestrian bison hunting after Spanish mounts became available. Some of these latter groups also replaced wickiups and brush shelters, the common house forms until that time, with Plains-style tepees; peoples in the west and south, however, continued to use traditional house forms well into the 19th century. Other common forms of material culture included digging sticks, nets, basketry, grinding stones for processing seeds, and rock art. See also Great Basin Indian.

California

This culture area approximates the present-day states of California (U.S.) and northern Baja California (Mexico). Other than the Pacific coast, the region's dominant topographic features are the Coast Range and the Sierra Nevada; these north–south ranges are interspersed with high plateaus and basins. An extraordinary diversity of local conditions created microenvironments such as coasts, tidewaters, coastal redwood forests, grasslands, wetlands, high deserts, and mountains.

California includes representatives of some 20 language families, including Uto-Aztecan, Penutian, Yokutsan, and Athabaskan; the American linguist Edward Sapir described California's languages as being more diverse than those found in all of Europe. Prominent tribes, many with a language named for them, include the Hupa, Yurok, Pomo, Yuki, Wintun, Maidu, and Yana.

Many California peoples eschewed centralized political structures and instead organized themselves into tribelets, groups of a few hundred to a few thousand people that recognized cultural ties with others but maintained their political independence. Some tribelets comprised just one village and others included several villages; in the latter cases, one village was usually recognized as more important than the others. The relatively few groups that lived in areas with sparse natural resources preferred to live in small mobile bands.

Agriculture was practiced only along the Colorado River; elsewhere hunting and gathering provided a relatively easy living.
Acorns were the most important of the wild food sources; California peoples devised a method of leaching the toxins from acorn pulp and converting it into flour, thus ensuring an abundant and constant food supply. Fishing, hunting, and gathering shellfish and other wild foods were also highly productive. Housing varied from wood-framed single-family dwellings to communal apartment-style buildings; ceremonial structures were very important and could often hold several hundred people. The California peoples were also known for their fine basketry, ritualized trade fairs, and the Kuksu and Toloache religions. See also California Indian.

The Northwest Coast

This culture area is bounded on the west by the Pacific Ocean and on the east by the Coast Range, the Sierra Nevada, and the Rocky Mountains; it reaches from the area around Yakutat Bay in the north to the Klamath River area in the south. It includes the coasts of present-day Oregon, Washington, British Columbia, much of southern Alaska, and a small area of northern California. The topography is steep, and in many places the coastal hills or mountains fall abruptly to a beach or riverbank. There is an abundance of precipitation: in many areas more than 160 inches (406 cm) annually, but rarely less than 30 inches (76 cm). The predominant ecosystems are temperate rainforests, intertidal zones, and the ocean.

This culture area is home to peoples speaking Athabaskan, Tsimshianic, Salishan, and other languages. Prominent tribes include the Tlingit, Haida, Tsimshian, Kwakiutl, Bella Coola, Nuu-chah-nulth (Nootka), Coast Salish, and Chinook.

The peoples of the Northwest Coast had abundant and reliable supplies of salmon and other fish, sea mammals, shellfish, birds, and a variety of wild food plants. The resource base was so rich that they are unique among nonagricultural peoples in having created highly stratified societies of hereditary elites, commoners, and slaves. Tribes often organized themselves into corporate "houses": groups of a few dozen to 100 or more related people that held in common the rights to particular resources. As with the house societies of medieval Japan and Europe, social stratification operated at every level of many Northwest Coast societies; villages, houses, and house members each had their designated rank, which was reflected in nearly every social interaction.

Most groups built villages near waterways or the coast; each village also had rights to an upland territory from which the residents could obtain terrestrial foods. Dwellings were rectilinear structures built of timbers or planks and were usually quite large, as the members of a corporate "house" typically lived together in one building. Northwest Coast cultures are known for their fine wood and stone carvings, large and seaworthy watercraft, memorial or totem poles, and basketry. The potlatch, a feast associated with the bestowal of lavish gifts, was also characteristic of this culture area. See also Northwest Coast Indian.

The Plateau

Lying at the crossroads of five culture areas (the Subarctic, Plains, Great Basin, California, and Northwest Coast), the Plateau is surrounded by mountains and drained by two great river systems, the Fraser and the Columbia. It is located in present-day Montana, Idaho, Oregon, Washington, and British Columbia. Topographically, the area is characterized by rolling hills, high flatlands, gorges, and mountain slopes.
The climate is temperate, although milder than that of the adjacent Plains because the surrounding mountain systems provide protection from continental air masses. The mountains also create a substantial rain shadow; most precipitation in this region falls at higher elevations, leaving other areas rather dry. The predominant ecosystems are grassland and high desert, although substantial forested areas are found at altitude.

Most of the languages spoken in this culture area belong to the Salishan, Sahaptin, Kutenai, and Modoc and Klamath families. Tribes include the Salish, Flathead, Nez Percé, Yakima, Kutenai, Modoc and Klamath, Spokan, Kalispel, Pend d'Oreille, Coeur d'Alene, Wallawalla, and Umatilla. "Flathead" is incorrectly used in some early works to denote all Salishan-speaking peoples, only some of whom moulded infants' heads so as to achieve a uniform slope from brow to crown; notably, the people presently referred to as the Flathead did not engage in this practice (see head flattening).

The primary political unit was the village; among some groups a sense of larger tribal and cultural unity led to the creation of representative governments, tribal chieftainships, and confederations of tribes. This was possible in part because the Columbia and Fraser rivers provided enough salmon and other fish to support a relatively dense population; however, this region was never as heavily populated or as rigidly stratified as the Northwest Coast.

Efficient hunters and gatherers, Plateau groups supplemented fish with terrestrial animals and wild plant foods, especially certain varieties of camas (Camassia). Most groups resided in permanent riverside villages and traveled to upland locales during fair-weather foraging excursions; however, horses were readily adopted once available, and some groups subsequently shifted to nomadic buffalo hunting. These groups quickly adopted tepees and many other Plains cultural forms; they became particularly respected for their equine breeding programs and fine herds (see Appaloosa). Plateau fishing villages were characterized by their multifamily A-frame dwellings, while smaller conical structures were used in the uplands; both house forms were covered with grass, although canvas became a popular covering once available. In terms of portable culture, the Plateau peoples were most characterized by the wide variety of substances and technologies they used; continuously exposed to new items and ideas through trade with surrounding culture areas, they excelled at material innovation and at adapting others' technologies to their own purposes.

Prehistory

Indigenous Americans had (and have) rich traditions concerning their origins, but until the late 19th century most outsiders' knowledge about the Native American past was speculative at best. Among the more popular misconceptions were those holding that the first residents of the continent had been members of the Ten Lost Tribes of Israel or refugees from the lost island of Atlantis, that their descendants had developed the so-called Mound Builder culture, and that Native Americans had later overrun and destroyed the Mound Builder civilization. These erroneous and overtly racist beliefs were often used to rationalize the destruction or displacement of indigenous Americans.
Such beliefs were not dispelled until the 1890s, when Cyrus Thomas, a pioneering archaeologist employed by the Smithsonian Institution, demonstrated conclusively that the great effigy mounds, burial mounds, and temple mounds of the Northeast and Southeast culture areas had been built by Native Americans.

It is now known that humans arrived in the Americas at least 13,000 years ago and perhaps much earlier. During the last ice age, a land bridge or isthmus connected northeastern Asia to northwestern North America. The land bridge is known as Beringia because it formed along the present-day Bering Strait.

Beringia began to emerge some 36,000–40,000 years ago, as the ice age began. At that time glaciers began to absorb increasing amounts of water, causing global sea levels to fall by as much as 400 feet (120 metres). A complete connection between Asia and North America existed from about 28,000 to 10,000 BC, and, at its greatest extent, the isthmus may have spanned some 1,000 miles (1,600 km) from north to south.

The people who moved into Beringia from Asia relied on hunting and gathering for subsistence and traveled in bands: small, mobile, kin-based groups of people who lived and foraged together. Three factors suggest that the isthmus was inhabited for some time before people moved into North America itself: the long period during which the land bridge existed, the generally slow advance of hunter-gatherers into new territory, and the presence of unsurpassable glaciers at Beringia's eastern extreme until perhaps 13,000 BC. When calculated from the point at which falling sea levels began to expose the land bridge, Beringia may have been inhabited for as long as 20,000 years.

As the eastern glaciers began to recede, some Beringians probably followed the coast south, perhaps combining walking with boat travel; people had used boats to settle Australia as early as 50,000–60,000 BC, which suggests that such technology was by this time well known. Other Beringians probably traveled via ice-free routes through the interior of North America; geological studies indicate that such passages probably existed in the Mackenzie River basin and along the Yukon, Liard, and Peace river systems. Later migrations may also have occurred by way of the Aleutian Islands.

In studies of North American prehistory, these very early cultures are generally known as Paleo-Indians. By about 6000 BC some groups had begun to experiment with food production as well as foraging; they are known as Archaic cultures. Archaic peoples often returned to the same location on a seasonal basis and, as a result, began to build small settlements. Archaic subsistence techniques were very efficient, and in a number of culture areas people sustained an essentially Archaic way of life until after European colonization.

By about 1000 BC a number of Native American peoples had become fully reliant upon agriculture for subsistence; their cultures were eventually characterized by relatively large, sedentary societies that included social or religious hierarchies.
These groups include the early farmers of the Southwest, known as the Ancestral Pueblo, Mogollon, and Hohokam cultures; those east of the Mississippi valley, known as Woodland cultures and later as Mississippian cultures; and those who settled along the rivers of the Plains, known as members of the Plains Woodland and Plains Village cultures.

Paleo-Indian cultures

Asia and North America remained connected until about 12,000 years ago. Although most of the routes used by the Paleo-Indians are difficult to investigate because they are now under water, deeply buried, or destroyed by erosion and other geological processes, research has divulged a variety of information about their lives and cultures.

Archaeological discoveries in the first half of the 20th century indicated that the migration had occurred by about 9500 BC, and subsequent finds pushed this boundary to even earlier dates. Scholars group Paleo-Indians into two distinct traditions: the Clovis, Folsom, and related cultures of the North American interior, and the pre-Clovis cultures, whose distribution is emerging through current research.

All the Paleo-Indian groups lived in a relatively dynamic landscape that they shared with Pleistocene flora and fauna, most notably with megafauna such as mammoths, mastodons, giant bison, giant ground sloths, sabre-toothed cats, and short-faced bears. Paleo-Indian sites often include the remains of megafauna, sometimes leading to the mistaken impression that these peoples were solely dedicated to the capture of big game. For a time this impression was sustained by a variety of preservation and identification issues, such as the rapid degeneration of small mammal, fish, and vegetal remains in the archaeological record and the use of recovery techniques that neglected or ignored such materials. By the turn of the 21st century, however, excavations at sites such as Gault (Texas) and Jake Bluff (Oklahoma) had clearly demonstrated that at least some Paleo-Indians used a variety of wild animal and plant foods, and so are better characterized as generalized hunter-gatherers than as people who limited themselves to the pursuit of big game.

In 1908 George McJunkin, a ranch foreman and former slave, reported that the bones of an extinct form of giant bison (Bison antiquus) were eroding out of a wash near Folsom, N.M.; an ancient spear point was later found embedded in the animal's skeleton. In 1929 teenager Ridgley Whiteman found a similar site near Clovis, N.M., albeit with mammoth rather than bison remains. The Folsom and Clovis sites yielded the first indisputable evidence that ancient Americans had co-existed with and hunted the megafauna, a possibility that most scholars had previously met with skepticism.

The Clovis culture proved to be the earlier of the two. Clovis projectile points are thin, lanceolate (leaf-shaped), and made of stone; one or more longitudinal flakes, or flutes, were removed from the base of each of the point's two flat faces. Clovis points were affixed to spear handles and are often found on mammoth kill sites, usually accompanied by side scrapers (used to flense the hide) and other artifacts used to process meat.
Clovis culture was long believed to have lasted from approximately 9500 to 9000 BC, although early 21st-century analyses suggest it may have been of shorter duration, from approximately 9050 to 8800 BC.

Folsom culture seems to have developed from Clovis culture. Also lanceolate, Folsom points were more carefully manufactured and include much larger flutes than those made by the Clovis people. The Lindenmeier site, a Folsom campsite in northeastern Colorado, has yielded a wide variety of end and side scrapers, gravers (used to engrave bone or wood), and bone artifacts. The Folsom culture is thought to have lasted from approximately 9000 to 8000 BC. Related Paleo-Indian groups, such as the Plano culture, persisted until sometime between 6000 and 4000 BC.

Pre-Clovis cultures

The long-standing belief that Clovis people were the first Americans was challenged in the late 20th century by the discovery of several sites antedating those of the Clovis culture. Although many scholars were initially skeptical of the evidence from these sites, the late 1990s saw general agreement that humans had arrived in North and South America by at least 11,000 BC, some 1,500 years before the appearance of Clovis culture.

Dating to about 10,500 BC, Monte Verde, a site in Chile's Llanquihue province, is the oldest confirmed human habitation site in the Americas. First excavated in the 1970s, the site did not seem to accord with findings that placed the earliest humans in northeastern Asia no earlier than c. 11,500 BC; it seemed extremely unlikely that people could have meandered from Siberia to Chile in just 1,000 years. However, excavations at the Yana Rhinoceros Horn site in Siberia subsequently determined that humans were present on the western side of the Bering land bridge as early as 25,000 BC, providing ample time for such a migration.

A number of other sites may be as early as or earlier than Monte Verde: excavations of note include those at the Topper site (South Carolina), Cactus Hill (Virginia), and Schaefer and Hebior (Wisconsin), among others. Further investigations will continue to clarify the patterns of Paleo-Indian migration.

Archaic cultures

Beginning about 6000 BC, what had been a relatively cool and moist climate gradually became warmer and drier. A number of cultural changes are associated with this environmental shift; most notably, bands became larger and somewhat more sedentary, tending to forage from seasonal camps rather than roaming across the entire landscape. Fish, fowl, and wild plant foods (especially seeds) also become more apparent in the archaeological record, although this may be a result of differential preservation rather than of changes in ancient subsistence strategies. Finally, various forms of evidence indicate that humans were influencing the growth patterns and reproduction of plants through practices such as the setting of controlled fires to clear forest underbrush, thereby increasing the number and productivity of nut-bearing trees. In aggregate, these changes mark the transition from Paleo-Indian to Archaic cultures.

The duration of the Archaic Period varied considerably in Northern America: in some areas it may have begun as long ago as 8000 BC, in others as recently as 4000 BC. Between 6000 and 4000 BC the wild squash seeds found at archaeological sites slowly increased in size, a sign of incipient domestication.
Similar changes are apparent by about 5000 BC in the seeds of wild sunflowers and certain "weedy" plants (defined as those that prefer disturbed soils and bear plentiful seeds) such as sumpweed (Iva annua) and lamb's-quarters (Chenopodium album). Northern Americans independently domesticated several kinds of flora, including a variety of squash (c. 3000 BC) unrelated to those of Mesoamerica or South America, sunflowers (Helianthus annuus; c. 3000 BC), and goosefoot (Chenopodium berlandieri; c. 2500 BC).

Many prehistoric Native American peoples eventually adopted some degree of agriculture; they are said to have transitioned from the Archaic to subsequent culture periods when evidence indicates that they began to rely substantively upon domesticated foods and, in most cases, to make pottery. Archaeologists typically place the end of the North American Archaic at or near 1000 BC, although there is substantial regional variation from this date. For instance, the Plains Archaic continued until approximately the beginning of the Common Era, and other groups maintained an essentially Archaic lifestyle well into the 19th century, particularly in the diverse microenvironments of the Pacific Coast, the arid Great Basin, and the cold boreal forests, tundras, and coasts of Alaska and Canada.

Pacific Coast Archaic cultures

Archaic peoples living along the Pacific Coast and in neighbouring inland areas found a number of innovative uses for the rich microenvironments of that region. Groups living in arid inland locales made rough flint tools, grinding stones, and, eventually, arrowheads, and subsisted upon plant seeds and small game. Where there was more precipitation, the food supply included elk, deer, acorns, fish, and birds. People on the coast itself depended upon the sea for their food supply, some subsisting mainly on shellfish, some on sea mammals, others on fish, and still others on a mixture of all three.

In contrast to the larger projectile points found elsewhere in North America, many Pacific Coast Archaic groups preferred to use tools made of microblades; sometimes these were set into handles to make knives composed of a series of small, individually set teeth rather than a long, continuous cutting edge. However, in the Northwest Coast culture area, the people of the Old Cordilleran culture (sometimes called the Paleoplateau or Northwest Riverine culture; c. 9000/8500–5000 BC) preferred lanceolate points, long blades, and roughly finished choppers.

During the postglacial warming period that culminated between 3000 and 2000 BC, the inhabitants of the drier areas without permanent streams took on many of the traits of the Desert Archaic cultures (see below), while others turned increasingly toward river and marsh resources. In the 1st millennium BC the Marpole complex, a distinctive toolmaking tradition focusing on ground slate, appeared in the Fraser River area. Marpole people shared a basic resemblance to historic Northwest Coast groups in terms of their maritime emphasis, woodworking, large houses, and substantial villages.

Desert Archaic cultures

Ancient peoples in the present-day Plateau and Great Basin culture areas created distinctive cultural adaptations to the dry, relatively impoverished environments of these regions. The Cochise or Desert Archaic culture began by about 7000 BC and persisted until the beginning of the Common Era.

Desert Archaic people lived in small nomadic bands and followed a seasonal round.
They ate a wide variety of animal and plant foods and developed techniques for small-seed harvesting and processing; an essential component of the Desert Archaic tool kit was the milling stone, used to grind wild seeds into meal or flour. These groups are known for having lived in caves and rock shelters; they also made twined basketry, nets, mats, cordage, fur cloaks, sandals, wooden clubs, digging sticks, spear-throwers, and dart shafts tipped with pointed hardwood, flint, or obsidian. Their chopping and scraping tools often have a rough, relatively unsophisticated appearance, but their projectile points show excellent craftsmanship.

Plains Archaic cultures

The Plains Archaic began by about 6000 BC and persisted until about the beginning of the Common Era. It is marked by a shift from just a few kinds of fluted Paleo-Indian points to a myriad of styles, including stemmed and side-notched points. The primary game animal of the Plains Archaic peoples was the bison, although as savvy foragers they also exploited a variety of other game and many wild plant foods.

As the climate became warmer, some groups followed grazing herds north into present-day Saskatchewan and Alberta; by 3000 BC these people had reached the Arctic tundra zone in the Northwest Territories and shifted their attention from bison to the local caribou. Other groups moved east to the Mississippi valley and western Great Lakes area.

Eastern Archaic cultures

The Eastern Archaic (c. 8000–1500 BC) included much of the Eastern Subarctic, the Northeast, and the Southeast culture areas; because of this very wide distribution, Eastern Archaic cultures show more diversity over time and space than Archaic cultures elsewhere in North America. Nonetheless, these cultures are characterized by a number of material similarities. The typical house was a small circular structure framed with wood; historical analogies suggest that the covering was probably bark. Cooking was accomplished by placing hot rocks into wood, bark, or hide containers of food, which caused the contents to warm or even boil; by baking in pits; or by roasting. Lists of mammal, fish, and bird remains from Eastern Archaic sites read like a catalog of the region's fauna at about the time of European contact. Game-gathering devices such as nets, traps, and pitfalls were used, as were spears, darts, and dart or spear throwers. Fishhooks, gorges, and net sinkers were also important, and in some areas fish weirs (underwater pens or corrals) were built. River, lake, and ocean mollusks were consumed, and a great many roots, berries, fruits, and tubers were part of the diet.

Over time, Eastern Archaic material culture reflects increasing levels of technological and economic sophistication. A large variety of chipped-flint projectiles, knives, scrapers, perforators, drills, and adzes appear. The era is also marked by the gradual development of ground and polished tools such as grooved stone axes, pestles, gouges, adzes, plummets (stones ground into a teardrop shape, used for unknown purposes), and bird stones and other weights that attached to spear throwers.

Eastern Archaic people in what are now the states of Michigan and Wisconsin began to work copper, which can be found in large nodules there. Using cold-hammer techniques, they created a variety of distinctive tools and art forms. Their aptly named Old Copper culture appeared about 3000 BC and lasted approximately 2,000 years.
Its tools and weapons, particularly its adzes, gouges, and axes, clearly indicate an adaptation to the forest environment.

In the area south of James Bay to the upper St. Lawrence River, about 4000 BC, there was a regional variant called the Laurentian Boreal Archaic and, in the extreme east, the Maritime Boreal Archaic (c. 3000 BC). In this eastern area, slate was shaped into points and knives similar to those of the copper implements to the west. Trade between the eastern and western areas has been recognized; in addition, copper implements have been found as far south as Louisiana and Florida, and southeastern marine shells have been found in the upper Mississippi–Great Lakes area. This suggests that transportation by canoe was known to Eastern Archaic peoples.

Along the southern border of the central and eastern boreal forest zone, between 1500 and 500 BC, there developed a distinctive burial complex reflecting an increased attention to mortuary ceremonies. These burials, many including cremations, were often accompanied by red ochre, caches of triangular stone blanks (from which stone tools could be made), fire-making kits of iron pyrites and flint strikers, copper needles and awls, and polished stone forms. The triangular points of this complex may have represented the introduction of the bow and arrow from the prehistoric Arctic peoples east of Hudson Bay.

Prehistoric farmers

In much of North America, the shift from generalized foraging and horticultural experimentation to a way of life dependent on domesticated plants occurred about 1000 BC, although regional variation from this date is common. Corn (maize), early forms of which had been grown in Mexico since at least 5000 BC, appeared among Archaic groups in the Southwest culture area by about 1200 BC and in the Eastern Woodlands by perhaps 100 BC; other Mesoamerican domesticates, such as chile peppers and cotton, did not appear in either region until approximately the beginning of the Common Era. Although the importance of these foreign domesticates increased over time, most Native American groups retained the use of locally domesticated plants for several centuries. For instance, improvements to sumpweed continued until about AD 1500, after which the plants abruptly returned to their wild state. It is unclear why sumpweed fell out of favour, although some have suggested that its tendency to cause hay fever and contact dermatitis may have contributed to the demise of its domesticated forms. Others believe that the timing of the event, coincident with the first wave of European conquest, suggests that cultural disruption initiated this change. Notably, many other indigenous American domesticates, including sunflowers, squashes, beans, and tobacco, have persisted as economically important crops into the 21st century.

Although prehistoric farming communities exhibited regional and temporal variation, they shared certain similarities. For the most part, farming groups were more sedentary than Archaic peoples, although the dearth of domesticated animals in Northern America (turkeys and dogs being the exceptions) meant that most households or communities continued to engage in hunting forays.
Agriculturists' housing and settlements tended to be more substantial than those of Archaic groups, and their communities were often protected by walls or ditches; many also developed hierarchical systems of social organization, wherein a priestly or chiefly class had authority over one or more classes of commoners.

Southwestern cultures: the Ancestral Pueblo, Mogollon, and Hohokam

The first centuries of the Common Era saw the development of three major farming complexes in the Southwest, all of which relied to some extent on irrigation. The Ancestral Pueblo peoples (also known as the Anasazi; c. AD 100–1600) of the Four Corners area built low walls (check dams) to slow and divert the flow of water from seasonal rivulets to cultivated fields. The Mogollon (c. 200–1450) built their communities in the mountainous belt of southwestern New Mexico and southeastern Arizona and depended upon rainfall and stream diversion to water their crops. The Hohokam (c. 200–1400) lived in the desert area of the Gila basin of southern Arizona and built irrigation canals to water their fields.

These three cultures are known for their geographic expansion, population growth, and pueblo architecture, all of which reached their greatest levels of complexity between approximately 700 and 1300, a period that generally coincided with an unusually favourable distribution of rainfall over the entire Southwest (analogous climatic conditions elsewhere in North America supported cultural florescences in the Eastern Woodlands [c. 700–1200] and on the Plains [c. 1000–1250]). During this period the population and cultures of central and western Mexico expanded to the northwest; trade and cultural stimuli were thus moving from Mesoamerica into the Southwest culture area at a time when the climate in both regions was most favourable for population and cultural growth. Materials entering the Southwest from Mexico during this era included cast copper bells, parrots, ball courts, shell trumpets, and pottery with innovative vessel shapes and designs.

Between 750 and 1150 the Ancestral Pueblo expanded into the Virgin River valley of southeastern Nevada, north as far as the Great Salt Lake and northwestern Colorado, and east into southeastern Colorado and the Pecos and upper Canadian River valleys of New Mexico. They also developed priestly offices, rituals, and ceremonialism during this period.

Ancestral Pueblo achievements during 1150–1300, a period known as Pueblo III, included the construction of large cliff dwellings, such as those found at Mesa Verde National Park, and the apartment-like "great houses" of Chaco Canyon and elsewhere (see Chaco Culture National Historical Park). Dressed stones were used in many localities to bear the weight of these massive structures, which had from 20 to as many as 1,000 rooms and from one to four stories. Each of the larger buildings was in effect a single village. Windows and doors were quite small, and usually no openings were made in the lowest rooms, which were entered by ladder through the roof. Buildings had a stepped appearance because each level or floor was set back from the one below it; the resulting terraces were heavily used as outdoor living space.
Roofs were constructed to carry great weights by using heavy beams, covering them with a mat of smaller poles and brush, and then adding a coat of adobe six to eight inches thick.

A number of new kivas (a type of subterranean ceremonial structure found at each settlement) were also built during this period, some as large as 80 feet (25 metres) in diameter. Craftsmanship in pottery reached a high level; innovations included the use of three or more colours, and the techniques used by different communities—Chaco Canyon, Mesa Verde, Kayenta, and a number of others—became so distinct that the vessels from each settlement can be recognized easily. Cotton cloth, blankets, and bags were woven, and yucca fibre also entered into various articles of clothing and such utility objects as mats. Feather-cloth robes were worn in cold weather.

Between about 1300 and 1600, increasing aridity and the arrival of hostile outsiders accelerated the pace of change; armed conflict and drought redirected Ancestral Pueblo efforts from artistic development to survival. Rituals designed to ensure rain increased in importance and elaboration and are portrayed in wall paintings and pottery. This period was also characterized by a general movement southward and eastward, and new villages were built on the Little Colorado, Puerco, Verde, San Francisco, Rio Grande, Pecos, upper Gila, and Salt rivers.

In their early phases, from about 200 to 650, Mogollon settlements consisted of relatively small villages of pit houses grouped near a large ceremonial structure. Villages of this period were laid out rather randomly, and trash disposal was also haphazard. Between about 650 and 850, houses became more substantial and several innovations in pottery design occurred. From about 850 to 1000, Mogollon villages exhibit Ancestral Pueblo influence in such things as construction techniques (shifting from pit houses to pueblos) and pottery design. The Mogollon reached their artistic pinnacle during the Classic Mimbres Period (c. 1000–1150). During the climatic deterioration after 1200, the Mogollon abandoned their territory in southwestern New Mexico.

The Hohokam people of central and southern Arizona built most of their settlements in major river valleys and lived in villages of pit houses that were arrayed along streams and canals. Agriculture was expanded through the use of extensive irrigation canals that may have been built by cooperating villages. Between approximately 775 and 1150, the Hohokam built their largest settlements and experienced a period of cultural innovation. Following this period, and until sometime between 1350 and 1450, Hohokam culture exhibits Ancestral Pueblo and Mexican influences. During this period, people built more compact settlements, often with a few massive multiroom and two-story buildings that were surrounded by compound walls.

The Ancestral Pueblo were the ancestors of contemporary Pueblo Indians such as the Hopi, Zuni, Acoma, and others. The Hohokam are the ancestors of the Pima and Tohono O'odham. After abandoning their villages, the Mogollon dispersed, probably joining other groups.

Eastern Woodland cultures

Outside of the Southwest, Northern America's early agriculturists are typically referred to as Woodland cultures.
This archaeological designation is often mistakenly conflated with the eco-cultural delineation of the continent's eastern culture areas: the term Eastern Woodland cultures refers to the early agriculturists east of the Mississippi valley, while the term Eastern Woodlands refers to the Northeast and Southeast culture areas together.

As in the Southwest, the introduction of corn in the East (c. 100 BC) did not cause immediate changes in local cultures; Eastern Archaic groups had been growing locally domesticated plants for some centuries, and corn was a minor addition to the agricultural repertoire. One of the most spectacular Eastern Woodland cultures preceding the introduction of maize was the Adena culture (c. 500 BC–AD 100, although perhaps as early as 1000 BC in some areas), which occupied the middle Ohio River valley. Adena people were hunters, gatherers, and farmers who buried their dead in large earthen mounds, some of which are hundreds of feet long. They also built effigy mounds, elaborate earthen structures in the shape of animals.

This tradition of reshaping the landscape was continued by the Hopewell culture (c. 200 BC–AD 500) of the Illinois and Ohio river valleys. Hopewell society was hierarchical and village-based; surplus food was controlled by elites who used their wealth to support highly skilled artisans and the construction of elaborate earthworks. An outstanding feature of Hopewell culture was a tradition of placing elaborate burial goods in the tombs of individuals or groups. The interment process involved the construction of a large boxlike log tomb, the placement of the body or bodies and grave offerings inside, the burning of the tomb and its contents, and the construction of an earthen mound over the burned materials. Artifacts found within these burial mounds indicate that the Hopewell obtained large quantities of goods from widespread localities in North America, including obsidian and grizzly bear teeth from as far away as the Rocky Mountains; copper from the northern Great Lakes; and conch shells and other materials from the Southeast and the Gulf of Mexico coast. Sites in Ohio were particularly important distribution centres, controlling ceremonial goods and special products over a wide area. Evidence for this so-called Hopewell Interaction Sphere rapidly faded after about AD 400, although Hopewell traditions continued for another century and Eastern Woodland cultures as a whole persisted for another 300 years.

Mississippian cultures

About AD 700 a new cultural complex arose in the Mississippi valley between the present-day cities of St. Louis and Vicksburg. Known as the Mississippian culture, it spread rapidly throughout the Southeast culture area and into some parts of the Northeast. Its initial growth and expansion took place during approximately the same period (700–1200) as the cultural zenith of the Southwest farmers.
Some scholars believe that Mississippian culture was stimulated by the introduction of new concepts, religious practices, and improved agricultural techniques from northern Mexico, while others believe it developed in place as a result of climatic change and internal innovation.

Whatever the origin of particular aspects of Mississippian life, the culture as such clearly developed from local traditions; between 700 and 1000, many small Eastern Woodland villages grew into large towns with subsidiary villages and farming communities nearby. Regionally delimited styles of pottery, projectile points, house types, and other utilitarian products reflected diverse ethnic identities. Notably, however, Mississippian peoples were also united by two factors that cut across ethnicity: a common economy that emphasized corn production and a common religion focusing on the veneration of the sun and a variety of ancestral figures.

One of the most outstanding features of Mississippian culture was the earthen temple mound. These mounds often rose to a height of several stories and were capped by a flat area, or platform, on which were placed the most important community buildings—council houses and temples. Platform mounds were generally arrayed around a plaza that served as the community's ceremonial and social centre; the plazas were quite large, ranging from 10 to 100 acres (4–40 hectares). The most striking array of mounds occurred at the Mississippian capital city, Cahokia, located near present-day St. Louis; some 120 mounds were built during the city's occupation. Monks Mound, the largest platform mound at Cahokia, rises approximately 100 feet (30 metres) above the surrounding plain and covers some 14 acres (6 hectares).

In some areas, large, circular charnel houses received the remains of the dead, but burial was normally made in large cemeteries or in the floors of dwellings. Important household industries included the production of mats, baskets, clothing, and a variety of vessels for specialized uses, as well as the creation of regalia, ornaments, and surplus food for use in religious ceremonies. In some cases, particular communities seem to have specialized in a certain kind of craft activity, such as the creation of a specific kind of pottery or grave offering. Ritual and religious events were conducted by an organized priesthood that probably also controlled the distribution of surplus food and other goods. Core religious symbols such as the weeping eye, feathered serpent, owl, and spider were found throughout the Mississippian world.

As Mississippian culture developed, people increased the number and complexity of village fortifications and often surrounded their settlements with timber palisades. This was presumably a response to increasing intergroup aggression, the impetus for which seems to have included control of land, labour, food, and prestige goods. The Mississippian peoples had come to dominate the Southeast culture area by about 1200 and were the predominant groups met and described by Spanish and French explorers in that region.
Some Mississippian groups, most notably the Natchez, survived colonization and maintained their ethnic identities into the early 21st century.

Plains Woodland and Plains Village cultures

Archaic peoples dominated the Plains until about the beginning of the Common Era, when ideas and perhaps people from the Eastern Woodland cultures reached the region; some Plains Woodland sites, particularly in eastern Kansas, were clearly part of the Hopewell Interaction Sphere. Beginning between about AD 1 and 250 and persisting until perhaps 1000, Plains Woodland peoples settled in hamlets along rivers and streams, built earth-berm or wattle-and-daub structures, made pottery and other complex items, and raised corn, beans, and eventually sunflowers, gourds, squash, and tobacco.

On the Plains, a regional version of the favourable agricultural conditions that supported cultural florescence elsewhere likewise fostered a marked increase in settlement size and population density; during this period (locally c. 1000–1250) the hospitable areas along most major streams became heavily occupied. These and subsequent village-dwelling groups are known as Plains Village cultures. These cultures were characterized by the building of substantial lodges, the coalescence of hamlets into concentrated villages, and the development of elaborate rituals and religious practices. These peoples had expanded their populations and territories when conditions were favourable, and a period of increasing aridity that began about 1275 brought hardship and in some cases armed conflict; at the early 14th-century Crow Creek site (South Dakota), for instance, nearly 500 people were killed violently and buried in a mass grave.

Some village-dwelling peoples sustained their communities through this difficult period, while others retreated eastward and returned when the climate had improved. The descendants of the early Plains Village cultures, such as the Arikara, Mandan, Hidatsa, Crow, Wichita, Pawnee, and Ponca, greeted European explorers from the 16th century onward and continued to live on the Plains in the early 21st century.

Between 1500 and 1700, the farming peoples of the western and southern Plains, such as the Apache and Comanche, took up a predominantly nomadic, equestrian way of life; most continued to engage in some agriculture, but they did not rely on crops to the same extent as settled village groups. From the early 18th century onward, a number of agricultural groups from the Northeast culture area left their forest homes for the Plains and completely substituted equestrian nomadism for agriculture; perhaps the best known of these were the Sioux and Cheyenne, whose traditional territory had been in present-day Minnesota.

Native American history

The thoughts and perspectives of indigenous individuals, especially those who lived during the 15th through 19th centuries, have survived in written form less often than is optimal for the historian. Because such documents are extremely rare, those interested in the Native American past also draw information from traditional arts, folk literature, folklore, archaeology, and other sources.

Native American history is made additionally complex by the diverse geographic and cultural backgrounds of the peoples involved. As one would expect, indigenous American farmers living in stratified societies, such as the Natchez, engaged with Europeans differently than did those who relied on hunting and gathering, such as the Apache.
Likewise, Spanish conquistadors were engaged in a fundamentally different kind of colonial enterprise than were their counterparts from France or England.

The sections below consider broad trends in Native American history from the late 15th century to the late 20th century. More-recent events are considered in the final part of this article, Developments in the late 20th and early 21st centuries.

North America and Europe circa 1492

The population of Native America

Scholarly estimates of the pre-Columbian population of Northern America have differed by millions of individuals: the lowest credible approximations propose that some 900,000 people lived north of the Rio Grande in 1492, and the highest posit some 18,000,000. In 1910 anthropologist James Mooney undertook the first thorough investigation of the problem. He estimated the precontact population density of each culture area based on historical accounts and carrying capacity, an estimate of the number of people who could be supported by a given form of subsistence. Mooney concluded that approximately 1,115,000 individuals lived in Northern America at the time of Columbian landfall. In 1934 A.L. Kroeber reanalyzed Mooney's work and estimated 900,000 individuals for the same region and period. In 1966 ethnohistorian Henry Dobyns estimated that there were between 9,800,000 and 12,200,000 people north of the Rio Grande before contact; in 1983 he revised that number upward to 18,000,000 people.

Dobyns was among the first scholars to seriously consider the effects of epidemic diseases on indigenous demographic change. He noted that, during the reliably recorded epidemics of the 19th century, introduced diseases such as smallpox had combined with various secondary effects (such as pneumonia and famine) to create mortality rates as high as 95 percent, and he suggested that earlier epidemics were similarly devastating. He then used this and other information to calculate backward from early census data to probable founding populations.

Dobyns's figures are among the highest proposed in the scholarly literature. Some of his critics fault Dobyns for the disjunctions between physical evidence and his results, as when the number of houses archaeologists find at a site suggests a smaller population than do his models of demographic recovery. Others, including the historian David Henige, criticize some of the assumptions Dobyns made in his analyses. For instance, many early fur traders noted the approximate number of warriors fielded by a tribe but neglected to mention the size of the general population. In such cases small changes in one's initial presumptions—in this example, the number of women, children, and elders represented by each warrior—can, when multiplied over several generations or centuries, create enormous differences in estimates of population; the brief sketch at the end of this section illustrates how quickly such assumptions compound.

A third group suggests that Dobyns's estimates may be too low because they do not account for pre-Columbian contact between Native Americans and Europeans. This group notes that severe epidemics of European diseases may have begun in North America in the late 10th or early 11th century, when the Norse briefly settled a region they called Vinland. The L'Anse aux Meadows site (on the island of Newfoundland), the archaeological remains of a small settlement, confirms the Norse presence in North America about AD 1000.
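To make the sensitivity of such back-calculations concrete, here is a minimal sketch in Python. Every number in it is hypothetical, chosen purely for illustration; these are not Dobyns's actual figures, and his models were considerably more elaborate.

    # Hypothetical illustration of how initial assumptions compound in
    # back-calculated population estimates. All numbers are invented
    # for demonstration purposes only.

    def founding_population(observed_census, mortality_per_epidemic, n_epidemics):
        """Project a post-epidemic census backward to a founding population,
        assuming each epidemic killed a fixed fraction of the people."""
        survival = 1.0 - mortality_per_epidemic
        return observed_census / (survival ** n_epidemics)

    # A recorded count of 1,000 warriors, converted to a total population
    # with two plausible multipliers (people per warrior), then projected
    # back through three assumed epidemics at 90% vs. 95% mortality.
    for people_per_warrior in (4, 5):
        for mortality in (0.90, 0.95):
            census = 1_000 * people_per_warrior
            estimate = founding_population(census, mortality, n_epidemics=3)
            print(f"{people_per_warrior} per warrior, {mortality:.0%} mortality: {estimate:,.0f}")

Even in this toy model, the founding-population estimates range from 4,000,000 to 40,000,000, a tenfold spread produced by modest changes in just two assumptions. This is precisely the kind of sensitivity that Henige and other critics emphasize.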
Given that Norse sagas attest to an epidemic that struck Erik the Red's colony in Greenland at about the same time as the Vinland settlement, the possibility that native peoples suffered from introduced diseases well before Columbian landfall must be considered.

Yet another group of demographers protest that an emphasis on population loss obscures the resilience shown by indigenous peoples in the face of conquest. Most common, however, is a middle position that acknowledges that demographic models of 15th-century Native America must be treated with caution, while also accepting that the direct and indirect effects of the European conquest included extraordinary levels of indigenous mortality not only from introduced diseases but also from battles, slave raids, and—for those displaced by these events—starvation and exposure. This perspective acknowledges both the resiliency of Native American peoples and cultures and the suffering they bore.

Native American ethnic and political diversity

Determining the number of ethnic and political groups in pre-Columbian Northern America is also problematic, not least because definitions of what constitutes an ethnic group or a polity vary with the questions one seeks to answer. Ethnicity is most frequently equated with some aspect of language, while social or political organization can occur on a number of scales simultaneously. Thus, a given set of people might be defined as an ethnic group through their use of a common dialect or language even as they are recognized as members of nested polities such as a clan, a village, and a confederation. Other factors, including geographic boundaries, a subsistence base that emphasized either foraging or farming, the presence or absence of a social or religious hierarchy, and the inclinations of colonial bureaucrats, among others, also affected ethnic and political classification; see Sidebar: The Difference Between a Tribe and a Band.

The cross-cutting relationships between ethnicity and political organization are complex today and were equally so in the past. Just as a contemporary speaker of a Germanic language—perhaps German or English—might self-identify as German, Austrian, English, Scottish, Irish, Australian, Canadian, American, South African, Jamaican, Indian, or any of a number of other nationalities, so might a pre-Columbian Iroquoian speaker have been a member of the Cayuga, Cherokee, Huron, Mohawk, Oneida, Onondaga, Seneca, or Tuscarora nation. And both the hypothetical Germanic speaker and the hypothetical Iroquoian speaker live or lived in nested polities or quasi-polities: families, neighbourhoods, towns, regions, and so forth, each of which has or had some level of autonomy in its dealings with the outside world. Recognizing that it is difficult to determine precisely how many ethnic or political groups or polities were present in 15th-century Northern America, most researchers favour relative rather than specific quantification of these entities.

The outstanding characteristic of North American Indian languages is their diversity—at contact Northern America was home to more than 50 language families comprising between 300 and 500 languages. At the same moment in history, western Europe had only two language families (Indo-European and Uralic) and between 40 and 70 languages.
In other words, if one follows scholarly conventions and defines ethnicity through language, Native America was vastly more diverse than Europe.

Politically, most indigenous American groups used consensus-based forms of organization. In such systems, leaders rose in response to a particular need rather than gaining some fixed degree of power. The Southeast Indians and the Northwest Coast Indians were exceptions to this general rule, as they most frequently lived in hierarchical societies with a clear chiefly class. Regardless of the form of organization, however, indigenous American polities were quite independent when compared with European communities of similar size.

European populations and polities

Just as Native American experiences during the early colonial period must be framed by an understanding of indigenous demography, ethnic diversity, and political organization, so must they be contextualized by the social, economic, political, and religious changes that were taking place in Europe at the time. These changes drove European expansionism and are often discussed as part of the centuries-long transition from feudalism to industrial capitalism (see Western colonialism).

Many scholars hold that the events of the early colonial period are inextricably linked to the epidemics of the Black Death, or bubonic plague, that struck Europe between 1347 and 1400. Perhaps 25 million people, about one-third of the population, died during this epidemic. The population did not return to preplague levels until the early 1500s. The intervening period was a time of severe labour shortages that enabled commoners to demand wages for their work. Standards of living increased dramatically for a few generations, and some peasants were even able to buy small farms. These were radical changes from the previous era, during which most people had been tied to the land and a lord through serfdom.

Even as the general standard of living was improving, a series of military conflicts raged, including the Hundred Years' War, between France and England (1337–1453); the Wars of the Roses, between two English dynasties (1455–85); and the Reconquista, in which Roman Catholics fought to remove Muslims from the Iberian Peninsula (c. 718–1492). These conflicts created intense local and regional hardship, as the roving brigands that constituted the military typically commandeered whatever they wanted from the civilian population. In the theatres of war, troops were more or less free to take over private homes and to impress people into labour; famine, rape, and murder were all too prevalent in these areas. Further, tax revenues could not easily be levied on devastated regions, even though continued military expenditures had begun to drain the treasuries of western Europe.

As treasuries were depleted, overseas trade beckoned. The Ottoman Empire controlled the overland routes from Europe to South Asia, with its markets of spices and other commercially lucrative goods. Seeking to establish a sea route to the region, the Portuguese prince Henry the Navigator sponsored expeditions down the Atlantic coast of Africa. Later expeditions attempted to reach the Indian Ocean, but they were severely tested by the rough seas at the Cape of Good Hope.
Christopher Columbus had been a member of several such voyages and proposed an alternative, transatlantic route; in 1484 he requested the sponsorship of John II, the king of Portugal, who refused to support an exploratory journey.

Iberia was a hotbed of activity at the time. Ferdinand II of Aragon and Isabella I of Castile had begun to unify their kingdoms through their 1469 marriage, but they were soon forced to resolve bitter challenges to their individual ascensions. Eventually quelling civil war, the devout Roman Catholic sovereigns initiated the final phase of the Reconquista, pitting their forces against the last Moorish stronghold, Granada. The city fell in January 1492, an event Columbus reportedly witnessed.

The seemingly endless military and police actions to which Ferdinand and Isabella had been party had severely depleted their financial reserves. This situation was exacerbated by the chief inquisitor of the Spanish Inquisition, Tomás de Torquemada, who persuaded the monarchs to expel any Jews who refused to be baptized. Under his authority some 160,000—and by some accounts as many as 200,000—Jews were ultimately expelled or executed for heresy, including many of Spain's leading entrepreneurs, businessmen, and scientists. Having lost so many of its best minds, Spain faced a very slow economic recovery, if it was to recover at all. Seeking new sources of income, the royal treasurer, Luis de Santángel, urged the monarchs to accept Columbus's proposal to explore a western route to the East. Although Columbus did not find a route with which to sidestep Ottoman trade hegemony, his journey nonetheless opened the way to overseas wealth. Spain used American resources to restore its imperiled economy, a strategy that was soon adopted by the other maritime nations of Europe as well.

Colonial goals and geographic claims: the 16th and 17th centuries

Although the situation in 15th-century Iberia framed Columbus's expedition to the Americas, the problems of warfare, financial naïveté, and religious intolerance were endemic throughout Europe. This situation continued into the 16th century, when at least four factors contributed to unprecedented levels of inflation: the rise of Protestantism inflamed religious differences and fostered new military conflicts, which in turn hindered free trade; the plague-depleted population recovered, creating an excess of labour and depressing wages; mass expulsions of Jews and Protestants undermined local and regional economies; and an influx of American gold and silver, with additional silver from new mines in Germany, devalued most currencies.

European colonialism was thus begotten in a social climate fraught with war, religious intolerance, a dispossessed peasantry, and inflation. Despite these commonalities, however, each of the countries that attempted to colonize North America in the 16th and 17th centuries—Spain, France, England, the Netherlands, and Sweden—had particular goals, methods, and geographic interests that played an important role in shaping Native American history.

Spain

Spain's overseas agenda emphasized the extraction of wealth, with secondary goals that included the relocation of armies, the conversion of indigenous peoples to Roman Catholicism, and the re-creation of the feudal social order to which the Spanish were accustomed.
The first country to send large expeditions to the Americas, Spain focused its initial efforts on the conquest of the wealthy Aztec and Inca empires, which fell in 1521 and 1532, respectively. Immense quantities of precious metals were seized from these peoples and shipped to Spain; the initial influx of hard currency provided a period of fiscal relief, but the country suffered bankruptcy in the later 16th century and never fully recovered.

The conquest of the Americas also provided overseas work for the men who had fought in the Reconquista, thus limiting the damage they might have inflicted if left unemployed in Iberia. In lieu of pay or a pension, many conquistadors were provided with encomiendas, a form of vassal slavery in which a particular Indian population was granted to a Spaniard. The system alleviated demands on the treasury and also transplanted the Spanish social hierarchy to the colonies. Encomiendas were gradually supplanted by haciendas—landed estates or plantations. However, this legal nicety did little to change conditions for the Indians living under Spanish rule.

Having vanquished the indigenous nations of Mexico and Peru, the conquistadors turned their attention to Northern America. In 1540 Francisco Vázquez de Coronado, the governor of Nueva Galicia (northwestern Mexico and the southwestern United States), began the exploration and conquest of the Southwest Indians, taking with him 300 troops. In the same year, Hernando de Soto was authorized to establish Spanish control of La Florida (the southeastern United States) and its residents; he rode out with more than 600 conquistadors. Both expeditions relied upon large complements of native labourers, who were forcibly impressed into service. Coronado, de Soto, and their troops destroyed communities that resisted their demands for tribute, women, supplies, and obeisance. Concerted efforts at settlement north of Mexico began in 1565 in La Florida, with the founding of St. Augustine; similar efforts in the Southwest did not begin until 1598, when Juan de Oñate led 400 settlers to a location near what is now El Paso, Texas. Although its explorers sighted the coast of California in 1542, Spain did not colonize that area until the second part of the 18th century.

Marriage between Spanish men and native women was acceptable, although concubinage was more common; intermarriage was effectively forbidden to the few Spanish women who lived in the colonies. After a few generations, a complex social order based on ancestry, land ownership, wealth, and noble titles had become entrenched in the Spanish colonies.

The Roman Catholic missionaries who accompanied Coronado and de Soto worked assiduously to Christianize the native population. Many of the priests were ardent supporters of the Inquisition, and their pastoral forays were often violent; beatings, dismemberment, and execution were all common punishments for the supposed heresies committed by Native Americans.

France

France was almost constantly at war during the 15th and 16th centuries, a situation that spurred an overseas agenda focused on income generation, although territorial expansion and religious conversion were important secondary goals.
France expressed an interest in the Americas as early as 1524, when the Italian explorer Giovanni da Verrazzano was commissioned to explore the Atlantic coast; in 1534 the French seaman Jacques Cartier entered the Gulf of St. Lawrence and claimed for King Francis I the region that became known as New France. The French eventually claimed dominion over most of the Northeast, Southeast, and American Subarctic peoples. France's North American empire was, however, contested: its warm southern reaches were claimed by both France and Spain, while parts of the northern territory were claimed by both France and England. Native nations, of course, had their own claims to these territories.

Concerned about Spanish claims to the Americas, the French made a number of unsuccessful attempts at settlement in the 16th century. They built (and subsequently abandoned) a fort near present-day Quebec in 1541; they also built a fort near present-day St. Augustine, Fla., in 1564, but the Spanish soon forced them to abandon that facility as well. In 1604 the French successfully established a more permanent presence on the continent, founding Acadia in present-day Nova Scotia. They did not succeed in establishing a major settlement in the south until 1718, when they founded New Orleans.

French colonial settlements were built on major waterways in order to expedite trade and shipping; the city of Quebec was founded in 1608 at the confluence of the St. Lawrence and St. Charles rivers, and Montreal was founded in 1642 at the confluence of the St. Lawrence and Ottawa rivers. Although these trading centres were lively, the settlement of northern New France was slowed by several factors. Among these were the lucrative nature of the fur trade, which required a highly mobile and enterprising workforce—quite a different set of habits and skills than those required of farmers—and a cool climate, which produced thick furs but unpredictable harvests. In 1627 a group of investors formed the Company of New France, but governance of the colony reverted to the king in 1663, after the company repeatedly failed to meet the obligations of its charter.

Most of the northern locales where the French founded settlements were already occupied by various Algonquin groups or members of the Iroquoian-speaking Huron (Wendat) confederacy, all of whom had long used the inland waterways of the heavily forested region as trade and transportation routes. These peoples quickly partnered with the French—first as fur trappers, later as middlemen in the trade, and always as a source of staples such as corn (maize). Because the Algonquin, Huron, and French were all accustomed to using marriage as a means of joining extended families, because indigenous warfare caused a demographic imbalance that favoured women, and because few women were eager to leave France for the rough life of the colonies, unions between native women and French men quickly became common. The attitudes of missionaries in New France varied: some simply promoted the adoption of Roman Catholic beliefs and practices, while others actively discouraged and even used force to end the practice of indigenous religions.

England

England focused its conquest of North America primarily on territorial expansion, particularly along the Atlantic coast from New England to Virginia.
The first explorer to reach the continent under the English flag was John Cabot, an Italian who explored the North Atlantic coast in 1497. However, England did little to follow up on Cabot's exploits until the early 17th century. By that time, the wool trade had become the driving force in the English economy; as a source of foreign exchange, wool sales softened inflation somewhat but did not render the English immune to its effects.

England responded to the pressure of inflation in several ways that influenced Native American history. One response, the intensification of wool production, ensured that the wealthy would remain secure but greatly disrupted the domestic economy. To produce more wool, the landed nobility began to practice enclosure, merging the many small fields that dotted the English countryside into larger pastures. This allowed more sheep to be raised but came at a harsh cost to the burgeoning population of commoners. The landless majority were evicted from their farms, and many had to choose between starvation and illicit activities such as theft, poaching, and prostitution. By the mid-1600s a new option arose for the dispossessed: indentured servitude, a form of contract labour in which transport to a colony and several years' room and board were exchanged for work; petty criminals were soon disposed of through this method as well.

The English elite chartered a variety of commercial entities, such as the Virginia Company, to which King James I granted control of large swaths of American territory. These business ventures focused especially on the extraction of resources such as tobacco, a new commodity that had proved extremely popular throughout Europe. The monarch also made land grants to religious dissidents, most notably to the Puritan shareholders of the Massachusetts Bay Company, to the Roman Catholic leader Cecilius Calvert, who established the colony of Maryland, and to the Quaker leader William Penn, who established the Pennsylvania colony. English settlements eventually stretched from the Chesapeake Bay north to present-day Massachusetts and included Jamestown (founded in 1607), Plymouth (1620), Boston (1630), St. Mary's City (1634), New York City (formerly New Amsterdam, which England had seized from the Dutch in 1664), and Philadelphia (1681).

England was the only imperial nation in which colonial companies were successful over the long term, in large part because ordinary citizens were eventually granted clear (and thus heritable) title to land. In contrast, other countries generally reserved legal title to overseas real estate to the monarch, a situation that encouraged entrepreneurs to limit their capital investments in the colonies. In such cases it made much more financial sense to build ships than to improve settler housing or colonial infrastructure; a company could own a ship outright but was at constant risk of losing new construction to the sovereign.
Because English real estate practices more or less assured entrepreneurs and colonizers that they would retain any infrastructure they built, they set about the construction of substantial settlements, farms, and transportation systems.

A tradition of enduring title also caused the English to conclude formal compacts with Native Americans, as some of the former believed (and the English courts could potentially have ruled) that indigenous groups held common-law title to the various Northern American territories. As a result, tribes from Newfoundland (Canada) to Virginia (U.S.) engaged in early agreements with the English. However, a fundamental philosophical difference undermined many such agreements: the English held that it was possible to own land outright, while the indigenous American peoples believed that only usufruct, or use rights, to land could be granted. The situation was further complicated by the French custom, soon adopted by the English, of providing native communities with gifts on a seasonal or annual basis. What the colonizers intended as a relatively inexpensive method for currying goodwill, the indigenous peoples interpreted as something akin to rent.

Although mortality was high in the malarial lowlands that the English initially settled, a seemingly endless stream of indentured labourers—and, from 1619 onward, enslaved Africans—poured into the new communities throughout the 17th century. Colonial laws meant to discourage intermarriage generally prevented the children of indigenous-English marriages from inheriting their father's wealth. This effectively forestalled the formation of multiethnic households in areas that were under close colonial control. However, such households were considered unremarkable in indigenous towns.

In contrast to their Spanish and French counterparts, who were invariably Roman Catholic, most English colonizers were members of the Church of England or of various Protestant sects. Evangelization was not particularly important to most of the English elite, who traveled to the Americas for commercial, territorial, or political gain, nor to most indentured servants or criminal transportees. Among those who had left in pursuit of religious freedom, however, some proselytized with zeal. Like the clergy from France, their emphases and methods ranged from the fairly benign to the overtly oppressive.

The Netherlands and Sweden

The colonial efforts of the Netherlands and Sweden were motivated primarily by commerce. Dutch businessmen formed several colonial monopolies soon after their country gained independence from Spain in the late 16th century. The Dutch West India Company took control of the New Netherland colony (comprising parts of the present-day states of Connecticut, New York, New Jersey, and Delaware) in 1623. In 1624 the company founded Fort Orange (present-day Albany, N.Y.) on the Hudson River; New Amsterdam was founded on the island of Manhattan soon after.

In 1637 a group of individuals formed the New Sweden Company. They hired Peter Minuit, a former governor of New Amsterdam, to found a new colony to the south, in what is now Delaware, U.S. In 1655 New Sweden fell to the Dutch.

Despite some local successes, the Dutch ceded their North American holdings to the English after just 40 years, preferring to turn their attention to the lucrative East Indies trade rather than defend the colony (see Dutch East India Company).
The English renamed the area New York and allowed the Dutch and Swedish colonists to maintain title to the land they had settled.

Native Americans and colonization: the 16th and 17th centuries

From a Native American perspective, the initial intentions of Europeans were not always immediately clear. Some Indian communities were approached with respect and in turn greeted the odd-looking visitors as guests. For many indigenous nations, however, the first impressions of Europeans were characterized by violent acts including raiding, murder, rape, and kidnapping. Perhaps the only broad generalization possible for the cross-cultural interactions of this time and place is that every group—whether indigenous or colonizer, elite or common, female or male, elder or child—responded based on their past experiences, their cultural expectations, and their immediate circumstances.

The Southwest Indians

Although Spanish colonial expeditions to the Southwest had begun in 1540, settlement efforts north of the Rio Grande did not begin in earnest until 1598. At that time the agricultural Pueblo Indians lived in some 70 compact towns, while the hinterlands were home to the nomadic Apaches, Navajos, and others whose foraging economies were of little interest to the Spanish.

Although nomadic groups raided the Pueblos from time to time, the indigenous peoples of the Southwest had never before experienced occupation by a conquering army. As an occupying force, the Spanish troops were brutal. They continued to exercise the habits they had acquired during the Reconquista, typically camping outside a town from which they then extracted heavy tribute in the form of food, impressed labour, and women, whom they raped or forced into concubinage.

The missionaries who accompanied the troops in this region were often extremely doctrinaire. They were known to beat, dismember, torture, and execute Indians who attempted to maintain traditional religious practices; these punishments were also meted out for civil offenses. Such depredations instigated a number of small rebellions from about 1640 onward and culminated in the Pueblo Rebellion (1680)—a synchronized strike by the united Pueblo peoples against the Spanish missions and garrisons. The Pueblo Rebellion cost the lives of some 400 colonizers, including nearly all the priests, and caused the Spanish to withdraw to Mexico.

The Spanish retook the region beginning in 1692, killing an estimated 600 native people in the initial battle. During subsequent periods, the Southwest tribes engaged in a variety of nonviolent forms of resistance to Spanish rule. Some Pueblo families fled their homes and joined Apachean foragers, influencing the Navajo and Apache cultures in ways that continue to be visible even in the 21st century. Other Puebloans remained in their towns and maintained their traditional cultural and religious practices by hiding some activities and merging others with Christian rites.

The Southeast Indians

Most Southeast Indians experienced their first sustained contact with Europeans through the expedition led by Hernando de Soto (1539–42). At that time most residents were farmers who supplemented their agricultural produce with wild game and plant foods. Native communities ranged in size from hamlets to large towns, and most Southeast societies featured a social hierarchy comprising a priestly elite and commoners.

Warfare was not unknown in the region, but neither was it endemic.
The indigenous peoples of present-day Florida treated de Soto and his men warily because the Europeans who had visited the region previously had often, but not consistently, proved violent. As the conquistadors moved inland, tribes at first treated them in the manner accorded to any large group of visitors, providing gifts to the leaders and provisions to the rank and file. However, the Spaniards either misread or ignored the intentions of their hosts and often forced native commoners, who customarily provided temporary labour to visitors as a courtesy gesture, into slavery.

News of such treatment traveled quickly, and the de Soto expedition soon met with military resistance. Indigenous warriors harassed the Spanish almost constantly and engaged the party in many battles. Native leaders made a number of attempts to capture de Soto and the other principals of the party, often by welcoming them into a walled town and closing the gates behind them. Such actions may have been customary among the Southeast Indians at this time—diplomatic customs in many cultures have included holding nobles hostage as a surety against the depredations of their troops. Such arrangements were common in Europe at the time and were presumably familiar to the conquistadors. However, the Spanish troops responded to these situations with violence, typically storming the town and setting upon the fleeing residents until every inhabitant was either dead or captured.

As capture, slaughter, and European diseases progressively decimated the Native American population, the Spanish began to focus on extracting the region's wealth and converting its inhabitants to Christianity. The Southeast nations had little gold or silver, but they had accumulated a plenitude of pearls to use as decoration and in ritual activities. The slave trade was also extremely lucrative, and many of those who survived the immediate effects of conquest were kidnapped and transported to the Caribbean slave markets. Some indigenous communities relocated to Catholic missions in order to avail themselves of the protection offered by resident priests, while others coalesced into defensible groups or fled to remote areas.

The Northeast Indians

The Northeast Indians began to interact regularly with Europeans in the first part of the 16th century. Most of the visitors were French or English, and they were initially more interested in cartography and trade than in physical conquest. Like their counterparts in the Southeast, most Northeast Indians relied on a combination of agriculture and foraging, and many lived in large walled settlements. However, the Northeast tribes generally eschewed the social hierarchies common in the Southeast. Oral traditions and archaeological materials suggest that they had been experiencing increasingly fierce intertribal rivalries in the century before colonization; it has been surmised that these ongoing conflicts made the Northeast nations much more prepared for offensive and defensive action than the peoples of the Southwest or the Southeast had been.

Discussions of the early colonial period in this region are typically organized around categories that conjoin native political groupings and European colonial administrations.
The discussion below considers two broad divisions: the Algonquian-speaking tribes of the mid-Atlantic region, an area where the English settled, and the Algonquian- and Iroquoian-speaking tribes of New England and New France, where the English and the French competed in establishing colonial outposts.

The mid-Atlantic Algonquians

The mid-Atlantic groups that spoke Algonquian languages were among the most populous and best-organized indigenous nations in Northern America at the time of European landfall. They were accustomed to negotiating boundaries with neighbouring groups and expected all parties to abide by such understandings. Although they allowed English colonizers to build, farm, and hunt in particular areas, they found that the English colonial agenda inherently promoted the breaking of boundary agreements. The businessmen who sponsored the early colonies promoted expansion because it increased profits; the continuous arrival of new colonizers and slaves caused settlements to grow despite high mortality from malaria and misfortune; and many of the individuals who moved to the Americas from England—especially the religious freethinkers and the petty criminals—were precisely the kinds of people who were likely to ignore the authorities.

The earliest conflict between these Algonquians and the colonizers occurred near the Chesapeake Bay. This region was home to the several hundred villages of the allied Powhatan tribes, a group that comprised many thousands of individuals. In 1607 this populous area was chosen to be the location of the first permanent English settlement in the Americas, the Jamestown Colony. Acting from a position of strength, the Powhatan were initially friendly to the people of Jamestown, providing the fledgling group with food and the use of certain lands.

By 1609 friendly interethnic relations had ceased. Powhatan, the leader for whom the indigenous alliance was named, observed that the region was experiencing a third year of severe drought; dendrochronology (the study of tree rings) indicates that this drought ultimately spanned seven years and was the worst in eight centuries. In response to English thievery (mostly of food), Powhatan prohibited the trading of comestibles to the colonists. He also began to enforce bans against poaching. These actions contributed to a period of starvation for the colony (1609–11) that nearly caused its abandonment.

It is not entirely clear why Powhatan did not press his advantage, but after his death in 1618 his brother and successor, Opechancanough, attempted to force the colonists out of the region. His men initiated synchronized attacks against Jamestown and its outlying plantations on the morning of March 22, 1622. The colonists were caught unawares, and, having killed some 350 of the 1,200 English, Opechancanough's well-organized operation created so much terror that it nearly succeeded in destroying the colony.

The so-called Powhatan War continued sporadically until 1644, eventually resulting in a new boundary agreement between the parties; the fighting ended only after a series of epidemics had decimated the region's native population, which shrank even as the English population grew. Within five years, colonists were flouting the new boundary and were once again poaching in Powhatan territory.
Given the persistence of the mid-Atlantic Algonquians, their knowledge of local terrain, and their initially large numbers, many scholars argue that the Algonquian alliance might have succeeded in eliminating the English colony had Powhatan pressed his advantage in 1611 or had its population not been subsequently decimated by epidemic disease.

The Iroquoians of Huronia

During the 15th and early 16th centuries, warfare in the Northeast culture area fostered the creation of extensive political and military alliances. It is generally believed that this period of increasing conflict was instigated by internal events rather than by contact with Europeans; some scholars suggest that the region was nearing its carrying capacity. Two of the major alliances in the area were the Huron confederacy (which included the Wendat alliance) and the Five Tribes (later Six Tribes), or Iroquois Confederacy. The constituent tribes of both blocs spoke Iroquoian languages; the term “Iroquoian” is used to refer generally to the groups speaking such languages, while references to the “Iroquois” generally imply the tribes of the Iroquois Confederacy alone.

The Huron were a relatively tight alliance of perhaps 20,000–30,000 people who lived in rather dense settlements between Lake Simcoe and Georgian Bay, an area thus known as Huronia. This was the northern limit at which agriculture was possible, and the Huron grew corn (maize) to eat and to trade to their Subarctic Indian neighbours—the Innu to the north and east and the Cree to the west—who provided meat and fish in return. The Huron confederacy is believed to have coalesced in response to raids from other Iroquoians and to have migrated northward to escape pressure from the Five Tribes to their south and southeast. Although the Huron coalition's major goal was defense, the strength of the alliance also helped them to maintain trading, rather than raiding, relationships with the Innu, the Cree, and later the French.

The Five Tribes of the Iroquois Confederacy lived south of the St. Lawrence River and Lake Erie, for the most part in the present-day state of New York. The alliance comprised the Mohawk, Oneida, Onondaga, Cayuga, and Seneca peoples; the Tuscarora joined the confederacy later. Evenly matched with the Huron alliance in terms of aggregate size, the Iroquois were more loosely united and somewhat less densely settled across the landscape. While the Huron nations traded extensively for food, this was less the case for the Five Tribes, who relied more thoroughly upon agriculture. Before colonization they seem to have moved southward, perhaps in response to raids from the Huron to their north. The alliances among the Five Tribes were initiated not only for defense but also to regulate the blood feuds that were common in the region. By replacing retributory raids among themselves with a blood money payment system, each of the constituent nations was better able to engage in offensive and defensive action against outside enemies.

The Northeast was crisscrossed by an extensive series of trade routes that consisted of rivers and short portages. The Huron used these routes to travel to the Cree and Innu peoples, while the Iroquois used them to travel to the Iroquoians on the Atlantic coast.
The French claimed the more northerly area and built a series of trade entrepôts at and near Huron communities, whose residents recognized the material advantages of French goods as well as the fortifications' defensive capabilities. The Huron alliance quickly became the gatekeeper of trade with the Subarctic, profiting handsomely in this role. Its people rapidly adopted new kinds of material culture, particularly iron axes, as these were immensely more effective in shattering indigenous wooden armour than were traditional stone tomahawks.

For a period of time the new weapons enabled the Huron confederacy to gain the upper hand against the Iroquois, who did not gain access to European goods as quickly as their foes. By about 1615 the long traditions of interethnic conflict between the two alliances had become inflamed, and each bloc formally joined with a member of another traditional rivalry—the French or the English. Initially the Huron-French alliance held the upper hand, in no small part because the French trading system was in place several years before those of the Dutch and English. The indigenous coalitions became more evenly matched after 1620, however, as the Dutch and English trading systems expanded. These Europeans began to make guns available for trade, something the French had preferred not to do. The Huron found that the technological advantage provided by iron axes was emphatically surpassed by that of the new firearms.

French records indicate that a smallpox epidemic killed as many as two-thirds of the Huron alliance in 1634–38; the epidemic affected the Iroquois as well, but perhaps to a lesser extent. At about the same time, it became increasingly clear that beavers, the region's most valuable fur-bearing animals, had been overhunted to the point of extinction in the home territories of both groups. The Iroquois blockaded several major rivers in 1642–49, essentially halting canoe traffic between Huronia and the Subarctic. The combination of smallpox, the collapse of the beaver population, and the stoppage of trade precipitated an economic crisis for the Huron, who had shifted so far from a subsistence economy to one focused on exchange that they faced starvation. Decades of intermittent warfare culminated in fierce battles in 1648–49, during which the Iroquois gained a decisive victory against the Huron and burned many of their settlements. In 1649 the Huron chose to burn their remaining villages themselves, some 15 in all, before retreating to the interior.

Having defeated the Huron confederacy to their north and west, the Iroquois took the Beaver Wars to the large Algonquin population to their north and east, to the Algonquian territory to their west and south, and to the French settlements of Huronia. They fought the alliances of these parties for the remainder of the 17th century, finally accepting a peace agreement in 1701. With both the Huron and the Iroquois confederacies having left Huronia, mobile French fur traders took over much of the trade with the Innu and Cree, and various bands of Ojibwa began to enter the depopulated region from their original homelands to the south of the Great Lakes.

The Subarctic Indians and the Arctic peoples

The European exploration of the Subarctic was for many decades limited to the coasts of the Atlantic and Hudson Bay, an inland sea connected to the Atlantic and Arctic oceans. The initial European exploration of the bay occurred in 1610.
It was led by the English navigator Henry Hudson, who had conducted a number of voyages in search of a northwest passage from the Atlantic to the Pacific.

The Subarctic climate and ecosystem were eminently suited to the production of fur-bearing animals. This circumstance was well understood by the Huron alliance, which maintained a virtual lock on trade between this region and the French posts to the south until about 1650. Although the French colonial administration purported to encourage entrepreneurial individuals, its bureaucracy could be difficult to work with. In the 1660s, brothers-in-law Pierre-Esprit Radisson and Médard Chouart des Groseilliers, their pelts seized by authorities for the lack of a proper license, offered the English their services as guides to the region around Hudson Bay. The English hired the men and sponsored an exploratory voyage in 1668. The expedition was well received by the resident Cree, who had relied upon the Huron for trade goods and found their supply greatly diminished in the wake of the Beaver Wars.

The initial voyage was successful enough to instigate the creation of the Hudson's Bay Company, which was chartered in 1670. Its first governor was Prince Rupert, an experienced military commander and the cousin of King Charles II. The company was granted proprietary control of the vast territory from Labrador to the Rocky Mountains, a region that soon became known as Rupert's Land. Company traders spent the remainder of the 17th century building relationships with the local Cree, Innu, and Inuit peoples. The Hudson's Bay Company eventually became one of the most dominant forces of colonialism in Northern America, maintaining political control over Rupert's Land until 1870 and economic control of the north for decades more.

By about 1685 the company had built a series of trading posts around the bay. These posts were staffed by company employees who were instructed not to travel far afield. As a result, indigenous peoples came to the posts to trade, and particular bands became associated with particular posts. These bands became known as Home Guard Indians, and their relatively close proximity to Hudson's Bay Company employees often led to intermarriage, adoption, and other forms of kinship. Band members with limited mobility might spend most of the year at a post community, and all of the population would usually reside there for some part of the year.

The French built a few trading posts in the Subarctic but found that having independent contractors transport goods to native communities was more profitable—as was the practice of taking over Hudson's Bay Company posts after running off the staff. Accustomed to the difficult conditions of the boreal forest and the tundra, the Innu, Cree, and Inuit could easily defend themselves against potential depredations by Europeans. Many bands chose not to form an exclusive alliance with either colonial power. Instead, they played the French and the English against one another in order to gain advantageous terms of exchange, profiting as the two colonial powers squabbled for control over the northern trade.

The chessboard of empire: the late 17th to the early 19th century

In general, this period was characterized by indigenous resistance to colonial efforts at establishing anything more than toeholds in Northern America.
Had victory been based on military skill and tenacity alone, Native Americans might well have avoided or significantly delayed colonization. However, epidemic diseases, the slave trade, and a continuous stream of incoming Europeans proved to be more decisive elements in the American narrative.

Eastern North America and the Subarctic

During the 17th century the Iroquois Confederacy and the English had created a strong alliance against the competing coalitions formed by the Huron, Algonquin, Algonquian, and French. The tradition of forming such alliances continued in the 18th century. Some of these coalitions were very strong, while loyalties shifted readily in others. Indigenous leaders often realized that they could reap the most benefit by provoking colonial rivalries and actively did so. Many also recognized that the Europeans were no more consistent in maintaining alliances than they were in observing territorial boundaries, and so they became wary of colonial opportunism. Such was the case for the Iroquois: about 1700 they adopted a policy of neutrality between the English and French that held for some 50 years.

Colonial administrative decisions of the 18th century were thoroughly coloured by issues in Europe, where the diplomatic and military milieus were characterized by constant tension. England, France, Spain, Austria, Prussia, and other countries engaged in several conflicts that either spread to or greatly influenced events in eastern North America during this period. The most important of these conflicts are discussed below.

Queen Anne's War (1702–13) and the Yamasee War (1715–16)

The War of the Spanish Succession (1702–13) pitted France and Spain against England, the Dutch Republic, and Austria in a fight to determine the European balance of power. One theatre of this war was Northern America, where the conflict became known as Queen Anne's War. It set an alliance of the English and some Southeast Indian nations, notably the Creek and the eastern Choctaw, against one comprising the French, the Spanish, and other Southeast Indians, notably the western Choctaw.

The latter alliance lost, and treaties negotiated in Europe caused France to relinquish its claim to a vast area including Newfoundland, French Acadia (renamed Nova Scotia), and Rupert's Land. The French presence in the north was thin and had always been contested by the English; as a result, the war had few immediate effects on First Nations peoples (the Native Americans of Canada) other than to cement the position of the Hudson's Bay Company. The company remained paramount in the north until 1783, when its hegemony was challenged by the rival North West Company.

In the Southeast the war caused widespread havoc. Many communities, both native and colonial, were forced to move or risk destruction. With territorial boundaries in disarray, the war's aftermath included a series of smaller engagements through which Native Americans tried to avoid being squeezed between the westward expansion of the English, who held the Atlantic coast, and the French expansion eastward from their Mississippi River entrepôts.

One of the better-known of these conflicts was the Yamasee War (1715–16), in which an alliance of Yamasee, Creek, and other tribes fought against English expansion.
Their resistance was ultimately unsuccessful, and some of the refugees fled south to Florida, where their descendants later joined with others to found the Seminole nation. The Yamasee War inspired the Creek to take a neutral stance between the colonizers; they subsequently became one of the most successful groups in profiting from colonial rivalries. However, the Creek and their traditional rivals, the Cherokee, continued intermittent raids against one another until the late 1720s. At the same time, the neighbouring Chickasaw were shifting their trade from the French to the English because the goods provided by the latter were generally less expensive and of better quality than those of the former. The Chickasaw defended themselves from repeated Choctaw-French attacks and successfully avoided French trade hegemony. The Natchez were less fortunate: their resistance was quashed by the Choctaw-French alliance, which captured hundreds of Natchez people and sold them into the Caribbean slave trade.

The French and Indian War (1754–63) and Pontiac's War (1763–64)

During the years from 1754 to 1763, disputes between the European empires ignited conflicts in Europe, Asia, and North America. The fighting that took place in Europe became known as the Seven Years' War (1756–63) and pitted the joint forces of Prussia, Hanover, and England against an alliance comprising Austria, France, Russia, Saxony, and Sweden.

Although they participated in the European theatre of war, for France and England the most important battlegrounds were their colonies in Asia and America. The last of the Carnatic Wars (1756–63) saw these two colonial powers battle for control over eastern India—a contest in which England's victory was decisive.

The international conflict was most prolonged in North America, where it became known as the French and Indian War (1754–63). There it pitted the English, allied with the Iroquois Confederacy once again, against a much larger coalition comprising many Algonquian-speaking tribes, the French, and the Spanish. Most of the fighting occurred in the Ohio River watershed and the Great Lakes region. Surprisingly, given their smaller numbers, the Iroquois-English alliance prevailed. Under the terms of the Treaty of Paris (1763), France ceded to England its colonies east of the Mississippi River. England now ruled a vast landmass reaching from Hudson Bay to the Gulf of Mexico and from the Atlantic coast to the Mississippi River.

Treaties at this time generally transferred sovereignty over a territory from one monarch to another but did not dispossess locals of their property or abrogate prior agreements between monarch and subject. Categories of people were seen as rather interchangeable—if the sovereign (in this case, of France) had made a promise to subjects in a territory that was to become the domain of another monarch (in this case, of England), the latter was expected to honour the arrangement. The subjects living in the region, here the native and colonial peoples of New France, were likewise expected to transfer their loyalty from the first monarch to the second. Although European and Euro-American colonists were accustomed to having no voice in such matters, the region's indigenous residents objected to being treated as subjects rather than nations; not having been party to the treaty, they felt little need to honour it.

With English rule came the usual flood of settlers.
Like their compatriots in New England and the mid-Atlantic, the First Nations in the former French territory observed that the English were unwilling or unable to prevent trespass by squatters. Indigenous groups throughout the Great Lakes region were further piqued because the annual giveaway of trade goods had been suspended. The English had come to view the giveaway as an unnecessary expense and were glad to be rid of it. In contrast, the First Nations felt that they were being deprived of income they were owed for allowing foreign access to the North American interior.

These and other issues caused the indigenous nations to press their advantage during the disorderly period marking the end of the French and Indian War. Recognizing the strength of unified action, the Ottawa leader Pontiac organized a regional coalition of nations. Among other actions in the conflict that became known as Pontiac's War (1763–64), the native coalition captured several English forts near the Great Lakes. These and other demonstrations of military skill and numerical strength prompted King George III's ministers to issue the Proclamation of 1763, one of the most important documents in Native American legal history. It reserved for the use of the tribes “all the Lands and Territories lying to the Westward of the sources of the Rivers which fall into the Sea from the West and Northwest.” That is, the land between the Appalachian Mountains and the Mississippi River, and from the Great Lakes almost to the Gulf of Mexico, was declared reserved for Indian use exclusively. The proclamation also reserved to the English monarch the exclusive right to purchase or otherwise control these tribal lands.

The proclamation also required all settlers to vacate the region. Despite this mandate, thousands of English settlers followed their forebears' tradition of ignoring the colonial authorities and moved into the reserved territory during the relatively quiescent period following Pontiac's War. French Canadians were also on the move, not least because British law prohibited Roman Catholics from a number of activities, such as holding public office. The British attempted to address French Canadian discontent by passing the Quebec Act (1774). It included a number of provisions ensuring the free practice of religion and the continuation of French civil law.

More important from an indigenous view, the act extended Quebec's boundaries northward to Hudson Bay and southward to the confluence of the Ohio and Mississippi rivers, the site of present-day Cairo, Ill. Although England saw this as an expedient way to establish the governor of Quebec's political authority over remote French Canadian settlements, Native Americans saw the act as an abrogation of the Proclamation of 1763. In addition, Euro-American settlers who had entered the region after pacification saw it as an attempt to curtail what they believed was their God-given right to expand into the west. Feelings among these parties soon became so inflamed that the region was brought to the brink of yet another war.

The American Revolution (1775–83)

The discontentment caused by the Quebec Act contributed directly to a third 18th-century war of empire, the American Revolution (1775–83), in which 13 of the English colonies in North America eventually gained political independence. This war was especially important to the Iroquois Confederacy, which by then included the Tuscarora.
The confederacy had long been allied with the English against the Huron, the northern Algonquians, and the French. Now the Iroquois were faced with a conundrum: a number of the English individuals with whom they had once worked were now revolutionaries and so at least nominally allied with France. All the foreigners, whether English loyalists, revolutionaries, or French, promised to uphold the sovereignty of Iroquois lands, but by this time most Indians recognized that such promises were as likely to be expediencies as they were to be true pledges. This left the council of the Iroquois Confederacy with the problem of balancing its knowledge of individual colonizers, some of whom were trustworthy allies, against its experiences with the colonial administrations, which were known to be inconstant. Despite much deliberation, the council was unable to reach consensus. As its decisions could only be enacted after full agreement, some individuals, families, and nations allied themselves with the English loyalists and others with the colonial upstarts and their French allies.

For the colonizers, the war ended with the Peace of Paris (1783). The treaties between England and the new United States included the English cession of the lands south of the St. Lawrence River and the Great Lakes and as far west as the Mississippi River. The indigenous nations were not consulted regarding this cession, which placed those Iroquois who had been allied with the English loyalists in what was now U.S. territory. Realizing that remaining in the territory would expose them to retribution, several thousand members of the Iroquois-English alliance left their homes and resettled in Canada.

The nascent United States was deeply in debt after the war and had a military too small to effectively patrol its extensive borders. Hoping to overextend and reconquer the upstarts, their rivals—formidable alliances comprising the displaced Iroquois, the Algonquians, and the English in the north and the Spanish with some of the Chickasaw, Creek, Cherokee, and Choctaw in the south—engaged in munitions trading and border raids. The United States committed to a number of treaties in order to clarify matters with indigenous nations, but in eastern North America the end of the 18th century was nonetheless characterized by confusion over, and lack of enforcement of, many territorial boundaries.

The War of 1812 (1812–14)

American Indian experiences of the transition from the 18th to the 19th century were rather thoroughly, if indirectly, affected by the French revolutionary and Napoleonic wars (1789–1815). The fall of the French monarchy worried Europe's elite, who began to decrease the level of conspicuous consumption to which they had previously been accustomed. The subsequent suppression of Napoleon's armies required a concentrated international military effort that was enormously expensive in both cash and lives and which further encouraged relative frugality. This social and economic climate caused a serious decline in the fur trade and much hardship for those who depended upon it, including indigenous North Americans.

By 1808–10, despite assurances from the U.S. government that the Proclamation of 1763 would be honoured, settlers had overrun the valleys of the Ohio and Illinois rivers. Game and other wild food was increasingly scarce, and settlers were actively attempting to dislocate native peoples.
Tensions that had been building since the American Revolution were worsened by the decline in the fur trade and a multiyear drought during which native and settler crops alike failed.

Realizing that the fates of indigenous peoples throughout the Great Lakes region were intertwined, Tecumseh, a Shawnee leader who had served with the British during the American Revolution, began to advocate for a pan-Indian alliance. He recommended a renewed association with the English, who seemed less voracious for land than the Americans. By all accounts, however, Tecumseh was simply choosing the less odious of two fickle partners. He had fought in the Battle of Fallen Timbers (1794), one of several postrevolutionary engagements in which Indian-English coalitions attempted to prevent the United States' settlement of the Ohio valley. Tecumseh's brother and hundreds of other native combatants were killed at Fallen Timbers because the British would neither send reinforcements nor open the gates of their fort to the fleeing warriors. British inconstancy in events with such severe and personal consequences was not soon forgotten.

For the Native American coalition that participated in the War of 1812, the conflict centred on territorial rights; for the English and the Euro-Americans, it was a conflict over transatlantic shipping rights. Eventually, the actions of future U.S. president William Henry Harrison, who attempted to break the nascent native alliance by burning its settlement at Prophetstown during the Battle of Tippecanoe (1811), sealed the indigenous leaders' decision to support England.

Tecumseh's coalition won a number of early victories. One of the most notable was the 1812 capture of Fort Detroit—through canny tactics that made his troops seem much greater in number than they were, Tecumseh caused the fort's commander, Gen. William Hull, to panic. Hull surrendered without mounting a defense and was later court-martialed.

Despite these and other victories won by the alliance of Indians and English, the War of 1812 was ultimately a draw between England and the United States. They agreed to terms in the Treaty of Ghent (1814); England did not consult with its native allies regarding the terms of the agreement, which for the most part returned Northern America to its prewar status. That status did not hold south of the Great Lakes, however, in the region that the Quebec Act had once attached to Quebec. Instead, the English relinquished their claims to the Ohio River basin area and left the members of Tecumseh's coalition to fend for themselves. This was a tremendous blow, as the resident nations were immediately subject to displacement by Euro-American settlers. With the fur trade in the doldrums and peaceful relations between England and the United States, the pelts and military assistance that had been the economic mainstays of the Northeast tribes had lost their value. Indigenous prosperity and power in the region entered a period of rapid decline.

The Southwest and the southern Pacific Coast

While the 18th-century wars of empire raged in Europe and eastern North America, colonization continued apace in the western part of the continent. There the principal imperial powers were Spain and Russia. In the Southwest the Spanish continued to dominate the indigenous nations.
The tribes there, particularly the Puebloans, continued to face severe punishment for “heretical” practices and other forms of direct resistance to colonization. They maintained their cultural heritage through a combination of overt acceptance of European conventions and private practice of their own traditions. Most hunting and gathering groups in the region continued to live in areas that were not amenable to farming or ranching and so encountered the colonizers less often.

European explorers had sighted California in 1542 but did not attempt to occupy it until 1769. Following the Pacific coast northward from Mexico, the Franciscan friar Junípero Serra and his successors established 21 missions, while their military and civilian counterparts chose nearby sites for presidios (forts) and haciendas (estates).

The arrival of the Spanish proved disastrous for the California Indians. The resident nations of California were unusually prosperous hunters and gatherers, making a living from a landscape that was extremely rich with wild foods. These peoples used a form of political organization known as the tribelet: moderately sized sedentary groups characterized by hierarchical but highly independent relationships both within and between polities.

The California nations were accustomed to negotiating agreements among themselves but, like their Southwestern counterparts, had no experience of occupation. As elsewhere, the Spanish occupation was brutal. Having selected a building site, Spanish leaders dispatched troops to indigenous villages, where they captured the residents. After being marched to the chosen location, the people were forced to labour as builders and farmers and were forbidden to leave. In both hacienda and mission contexts, but more so in the missions, rules often mandated that native individuals be separated by gender, a practice that left women and children especially vulnerable to physical and sexual abuse at the hands of clergy and soldiers. As in the Southwest, resistance to any aspect of the missionizing experience was often harshly punished; nonetheless, many native Californians sought to escape the conquest by fleeing to distant areas and rebuilding their lives.

The northern Pacific Coast

North America's northern Pacific coast was home to Arctic peoples and Northwest Coast Indians. These groups made their living primarily from the sea. Like their counterparts in the Northeast culture area, they were accustomed to offensive and defensive military action. They also participated in an indigenous trade network so extensive that it necessitated its own pidgin, or trade language, known as Chinook Jargon.

By the early 18th century, European elites had begun to recognize the potential profitability of trade relations with the peoples of North America's Pacific coast. From the mid-18th century on, the northern Pacific trade was dominated by Russia, although explorers and traders from other countries also visited the region.

Russian elites initially saw North America as rich but so distant that attempts at occupation might prove ill-advised. This perception was soon reversed, however. The Russian tsar Peter I sent Vitus Bering to explore the northern seas in 1728, and Russian traders reached the Aleutian Islands and the coasts of present-day Alaska (U.S.) and British Columbia (Can.) in the 1740s.
Russian trade was conducted by a rugged group of Siberian sailors and trappers, the promyshlenniki. Like their French counterparts, they wished to establish themselves in the lucrative fur trade, but, whereas the French sought beaver pelts for the European markets, the Russians sought the rich pelts of sea otters for trade with China. The differences between the French and Russian traders were more substantial than their pelt preferences, however. Where the 17th-century French traders had generally built settlements near native towns and partnered with local peoples, the 18th-century Russians imposed a devastating occupation that replicated the brutal social order to which they were accustomed—one in which they assumed the status of elites and exercised the power of life and death over their indigenous “serfs.”

The initial encounters between the native peoples of the northern Pacific coast and Russian traders presaged terrible hardships to come. In 1745 a group of promyshlenniki overwintered in the Aleutian Islands; their behaviour was so extreme that the Russian courts eventually convicted several members of the party of atrocities. The Aleuts and the neighbouring Koniag mounted a spirited resistance against Russian incursions over the next 20 years but were outgunned. The Native Alaskan men who survived these early battles were immediately impressed into service hunting sea otters from light boats; their absences could range in length from days to months. During these periods the colonizers held entire villages hostage as surety and demanded food, labour, and sex from the remaining residents. This caused extraordinary human suffering; many communities endured cruel exploitation and prolonged periods of near-starvation.

During the last decade of the 18th century, Russian attempts to expand operations southward met with fierce military resistance from the Northwest Coast Indians, especially the Tlingit. With larger numbers than the Aleuts and Koniag, access to firearms, and the ability to retreat to the interior, the Tlingit nation successfully repelled the Russian colonizers. Having gained control of the region's harbours and waterways, the Tlingit and other Northwest Coast peoples profited by charging European (and later Euro-American) traders tolls for passage therein and by selling them immense quantities of fish, game, and potatoes.

In 1799 Russia's many independent trading outfits coalesced into a single monopoly, the Russian-American Company. Over the next decade it became clear that the practice of hunting mature female otters, which had more-luxurious pelts than males, was seriously depleting the sea otter population. Desiring a permanent southern outpost from which to stage hunts as well as a source for cheaper comestibles, in 1812 the company founded the northern California trading post of Fort Ross (about 90 miles [140 km] north of what is now San Francisco). The promyshlenniki continued to force Aleut and Koniag men on extended hunting trips. In many cases, local Pomo women married these Native Alaskan men, and together they built a unique multiethnic community.

In the early decades of the 19th century, voluntary cohabitation and intermarriage between native women and Russian men began to soften colonial relations in Alaska. Equally important, the multiethnic progeny of these matches and of the Native Alaskan-Pomo couples at Fort Ross began to ascend into the administrative ranks of the fur trade.
By the 1850s, common customs in the northern Pacific colonies included wage rather than impressed labour; ritual godparenting, a Russian custom in which an adult makes a serious and public commitment to ensuring the physical, economic, and spiritual well-being of another person's child; and name exchanges, a Native Alaskan custom in which a person receiving a name (usually that of a deceased person) assumes some of the rights of its previous owner.

The European conquest of North America proceeded in fits and starts from the coasts to the interior. During the early colonial period, the Plains and the Plateau peoples were affected by epidemics of foreign diseases and a slow influx of European trade goods. However, sustained direct interaction between these nations and colonizers did not occur until the 18th century.

In 1738 the Mandan villages on the upper Missouri River hosted a party led by the French trader Pierre Gaultier de Varennes, sieur de La Vérendrye; this is often characterized as the event that initiated lasting contact between the peoples of the northern Plains and the colonial powers. Certainly a significant number of traders, such as David Thompson, were living with the Mandans and other Plains peoples by the late 18th century. Accounts of daily life in the region, gleaned from the diaries and letters of these traders, indicate that the interior nations were adept negotiators who enjoyed a relatively prosperous lifestyle; indeed, many visitors commented on the snug nature of the earth lodges in which Plains families lived and on the productivity they witnessed in the region. Although somewhat less historical data exists for the Plateau peoples of this era, it is clear that the 18th century was a time of great change for both groups. Three key factors influenced the trajectory of change: the arrival of horses, the arrival of guns, and the arrival of native peoples from adjacent culture areas.

Horses were introduced to the Americas by the Spanish conquistadors. The advantages of using horses, whether as pack animals or as mounts, were obvious to the Plains and Plateau peoples, who had until then been obligated to travel overland by foot or in small boats on the regions' few navigable rivers. Horses might be acquired in one of several ways: through purchase or trade, by capture from a rival group, or by taming animals from the wild herds that soon arose.

The dense forests of the Northeast, Southeast, and Subarctic had discouraged the widespread use of horses; in those regions, abundant waterways provided a more readily negotiated system of transportation. Thus, horses spread from the Southwest culture area to the Plains and the Plateau following a northerly and easterly trajectory. As horses spread, the pedestrian foragers of the southwestern Plains quickly incorporated them into bison hunts. Previously these had been dangerous affairs: the range of the bow and arrow was not great and so required hunters to approach animals rather closely, while the alternative method of hunting was to stampede a herd of bison toward a cliff, from which they would fall to their deaths. The speed and mobility provided by horses were great improvements over these earlier conditions.

Spanish law expressly forbade the distribution of firearms to indigenous individuals, but the English and Dutch traded them freely.
Initially used in battle and to hunt the large game of the eastern and boreal forests, firearms were readily incorporated into the bison hunt by the pedestrian forager-farmers of the northeastern Plains. The horse's speed and agility had inspired a more effective form of hunting in the southern Plains; in the north a similar increase in productivity occurred as guns replaced bows and arrows. A rifle's greater firepower allowed more distance between hunter and hunted, lessening the danger of attack from a charging animal.

Horses and guns spread to the interior over the course of about 100 years, from roughly 1600 to 1700. By approximately 1700 many tribes were moving to the interior as well. Those from the Northeast were agriculturists pushed west by the intertribal hostilities of the Huron-Algonquian-French and Iroquois-English alliances. Those from the Southwest were Apachean and other hunters and gatherers who, having acquired horses, were able for the first time to match the movement of the bison herds.

By the 1750s the horse culture of the southern interior had met with the gun culture of the northern interior. The combination of guns and horses was invaluable: nations could follow herds of bison across the landscape and also take advantage of the greater distance and power allowed by firearms. From the mid-18th century to the first part of the 19th century, horses and guns enabled the indigenous nations of the North American interior to enjoy an unprecedented level of prosperity.

Domestic colonies: the late 18th to the late 19th century

While Native American experiences of the 18th century were influenced by internecine warfare between the European powers, their experiences of the 19th century reflected an increasing political shift from overseas colonialism to domestic expansionism. Events of the 19th century made two things clear to indigenous nations: there were no longer any territories so remote as to escape colonization, and, for the most part, colonizers continued to prove inconstant and unable—or unwilling—to fulfill the commitments to which they agreed.

Removal of the eastern nations

The first full declaration of U.S. policy toward the country's indigenous peoples was embodied in the third of the Northwest Ordinances (1787):

The utmost good faith shall always be observed toward the Indians, their lands and property shall never be taken from them without their consent; and in their property, rights, and liberty, they shall never be invaded or disturbed, unless in just and lawful wars authorized by Congress; but laws founded in justice and humanity shall from time to time be made, for preventing wrongs being done to them, and for preserving peace and friendship with them.

Within a few decades this guarantee of legal, political, and property rights was undermined by a series of Supreme Court decisions and the passage of a new federal law.

The rulings in question were written by Chief Justice John Marshall. In Johnson v. M'Intosh (1823), the court ruled that European doctrine gave a “discovering” (e.g., colonial) power and its successors the exclusive right to purchase land from aboriginal nations. This ruling removed control of land transactions from the tribes, which had previously been able to sell to whomever they wished. In Cherokee Nation v.
Georgia (1831), the court further opined that the political autonomy of indigenous polities was inherently reliant on the federal government, defining them as domestic (dependent) nations rather than foreign (independent) nations. This status prevented tribes from invoking a number of privileges reserved to foreign powers, such as suing the United States in the Supreme Court. In a third case, Worcester v. Georgia (1832), the court ruled that only the federal government, not the states, had the right to impose its regulations on Indian land. This created an important precedent through which tribes could, like states, reserve some areas of political autonomy. Together these three decisions suggested that Indian nations were simultaneously dependent upon and independent from federal control; subsequent case law has often focused on defining exactly which form of relationship obtains in a particular situation.

Even as these cases made their way through the U.S. courts, Congress was passing the Indian Removal Act (1830). The act was initiated after the 1828 discovery of gold on Cherokee land in Georgia. Speculators hoping to profit from the discovery, including President Andrew Jackson, subsequently pressured Congress to find a way to legally divest the tribe of its land. Jackson's speech On Indian Removal, presented to Congress in December 1830, provides a sample, although certainly not a full account, of his rationalizations for such action.

The Indian Removal Act enabled the president to designate tracts of land west of the Mississippi as new Indian Territories, to negotiate with tribes to effect their removal from east of the Mississippi, and to fund these transactions and associated transportation costs. The Native American population had not been consulted in these matters and responded in a variety of ways: Black Hawk led the Sauk and Fox in defending their territory; the Cherokee pursued resolution through the courts; the Choctaw agreed to arrange a departure plan with the designated federal authorities; and the Chickasaw gained permission to sell their property and arrange their own transportation to points west. Perhaps the most determined to remain in place were the Seminoles, who fiercely defended their homes; the Seminole Wars (1817–18, 1835–42, and 1855–58) came to be the most expensive military actions undertaken by the U.S. government up to that point.

Ultimately, all the eastern tribes found that overt resistance to removal was met with military force. In the decade after 1830, almost the entire U.S. population of perhaps 100,000 eastern Indians—including nearly every nation from the Northeast and Southeast culture areas—moved westward, whether voluntarily or by force. Encountering great difficulties and losing many people to exposure, starvation, and illness, those who survived this migration named it the Trail of Tears.

The conquest of the western United States

In 1848 the Treaty of Guadalupe Hidalgo, which ended the Mexican-American War, granted the United States all of Mexico's territories north of the Rio Grande; in the same year, gold was discovered in California. Thousands of miners and settlers streamed westward on the Oregon Trail and other routes, crossing over and hunting on indigenous land without asking leave or paying tribute.
From the resident nations' perspective, these people were trespassers and poachers, although their presence was somewhat ameliorated by the goods and services they purchased from the tribes. Contrary to their frequent portrayal in 20th-century popular culture, few armed conflicts between travelers and Indians took place, although tense situations certainly occurred. These circumstances moved the U.S. government to initiate a series of treaties through which to pacify the trans-Mississippi west. Perhaps the most important of these was the first Treaty of Fort Laramie (1851), which was negotiated with the Arapaho, Arikara, Assiniboin, Blackfoot, Cheyenne, Crow, Dakota Sioux, Hidatsa, and Mandan nations. Among other issues, it explicitly defined the home territories of each of these peoples, disputes over which had fostered intertribal conflict. It also required the signatory nations to forgo battle among themselves and against Euro-Americans and gave the United States the right to build and protect roads through the Plains. In return, the United States agreed to provide a variety of goods and services to the tribes.

Notably, while the impetus for the first Treaty of Fort Laramie was federal concern about the safety of travelers, indigenous actions against these people paled before the depredations of Euro-Americans, which have been described as genocidal. In the first three decades following the 1848 gold strike, for example, California's Native American population declined from between 100,000 and 150,000—a figure already depleted by the decades of poor conditions the “novitiates” had experienced at the hands of Spanish missionaries and businessmen—to perhaps 15,000 individuals. In 1850 the California legislature legalized the de facto slavery of indigenous individuals by allowing Euro-American men to declare them “vagrant” and to bind such “vagrants” by indenture. Thousands of people were enslaved under this statute, and many died of maltreatment. Between 1851 and 1857 the state legislature also authorized some $1.5 million for reimbursement to private individuals who quelled native “hostilities”; most of these private expeditions were little more than shooting sprees and slave raids against peaceful indigenous settlements.

For a time, the conquest of the West was overshadowed by the American Civil War (1861–65). Conflicts in the Plains increased during this period and included two of the worst interethnic atrocities of 19th-century America: the Sioux Uprising (1862), in which Santee warriors killed some 400 settlers in Minnesota, many of whom were women and children, and the Sand Creek Massacre (1864), in which members of the Colorado militia killed at least 150 and perhaps as many as 500 people, mostly women and children, at a Cheyenne village known to be peaceable.

As the Civil War ended, increasing numbers of U.S. troops were sent to pacify the North American interior. The federal government also began to develop the policies that eventually confined the nations of the West to reservations, and to pursue treaties with Native American polities in order to effect that goal. These agreements generally committed tribes to land cessions, in exchange for which the United States promised to designate specific areas for exclusive indigenous use and to provide tribes with annual payments (annuities) comprising cash, livestock, supplies, and services.
A second major treaty convention occurred at Fort Laramie in 1868, but treaty making ceased with the passage of the Indian Appropriation Act (1871), which declared that “hereafter no Indian nation or tribe” would be recognized “as an independent power with whom the United States may contract by treaty.” Indian affairs were thus brought under the legislative control of the Congress to a much greater extent than previously.

These actions eventually had an enormous effect on native nations. However, policy changes made from afar are difficult to enforce, and Washington, D.C., was nearly 1,700 miles (2,700 km) away from the communication nexus at Fort Laramie. The tasks of finding, moving, and restricting the nomadic nations to their designated reservations were given to the U.S. military. The best-known event of the conquest of the American West, the Battle of the Little Bighorn (June 25, 1876), arose directly from this delegation of authority. Notably, and despite its notoriety, this engagement caused few or no injuries to noncombatants; only military personnel were directly injured or killed. During the battle a combined group of Cheyenne and Sioux warriors defended their families from George Armstrong Custer and the U.S. 7th Cavalry. Custer's mission had been to remove these people (several thousand in all) to their reservations, and he had intended to forcibly capture or kill every member of the community, including women, children, the aged, and the infirm, in order to do so. With the exception of a small group of soldiers led by Maj. Marcus Reno, who were trapped under fire on a hill, Custer and his troops were completely annihilated. Unfortunately for the western nations, this event—and particularly Elizabeth Custer's decades-long promotion of her husband's death as an atrocity, despite his status as a recognized combatant—spawned a prolonged media sensation that reignited the United States' commitment to complete hegemony over Native America.

By the late 1880s an indigenous millenarian movement, the Ghost Dance religion, had arrived on the Plains. Growing from an older tradition known as the Round Dance, the new religion was based on the revelations of a young Paiute man, Wovoka, who prophesied the departure of the Euro-Americans and a reunion of Indians and their departed kin. The songs and ceremonies born of Wovoka's revelation swept the Plains, offering hope to indigenous believers but also shifting over time and space from a pacifist set of practices to one with at least some military aspects. Concerned that the Ghost Dance would disturb the uneasy peace of the northern Plains, U.S. government agents moved to capture its proponents. Among them was the Sioux leader Sitting Bull, who was killed on Dec. 15, 1890, while being taken into custody. Just 14 days later the U.S. 7th Cavalry—Custer's regiment reconstituted—encircled and shelled a peaceful Sioux encampment at Wounded Knee, S.D., an action many have argued was taken in revenge for the Little Bighorn battle. More than 200 men, women, children, and elders who were waiting to return to their homes were killed. Although this massacre marked the effective end of native military resistance in the western United States, tribes and individuals continued to resist conquest in a variety of other ways.

The conquest of western Canada

For the indigenous peoples of the Canadian West, the 19th century was a time of rapid transformation.
The fur trade and a variety of large prey animals were in decline, and these changes, together with the elimination of government tribute payments, created a period of economic hardship for the tribes. In addition, Canada's northern forests and Plains saw an influx of European and Euro-American settlers and a series of treaties that greatly reduced the landholdings of aboriginal peoples.

Many legal issues of import to aboriginal nations were decided early in the century, before Canadian independence. Among the most important of these policies was the Crown Lands Protection Act (1839), which affirmed that aboriginal lands were the property of the crown unless specifically titled to an individual. By disallowing indigenous control of real estate, a requirement for full citizenship in most of Canada, the act disenfranchised most native peoples. Through the 1850s a series of additional laws codified Indian policy in Canada. Initiated by the assimilationist Bagot Commission (1842–44), these laws defined what constituted native identity, mandated that individuals carry only one legal status (e.g., aboriginal or citizen), prohibited the sale of alcohol to native peoples, and shifted the administration of native affairs from the British Colonial Office to Canada.

For native peoples, the most momentous legal changes in the later 19th century included the creation of the Dominion of Canada (1867) and the passage of legislation including the Gradual Civilization Act (1857), the Act Providing for the Organisation of the Department of the Secretary of State of Canada and for the Management of Indian and Ordnance Lands (1868), the Manitoba Act (1870), and the first consolidated Indian Act (1876). Events of the 19th century were also heavily influenced by the intensifying competition between the Hudson's Bay Company and the North West Company, a rivalry with roots a century old.

The Red River crisis and the creation of Manitoba

The Hudson's Bay Company (HBC) and the North West Company (NWC) had initially exploited different territories: the HBC took northern Huronia, Hudson Bay, and the land from the bay's western shore to the Rocky Mountains, while the NWC took the region lying between Lake Superior and the Rockies. In 1810 Thomas Douglas, 5th earl of Selkirk, became the major shareholder of the HBC. Selkirk was a Scottish philanthropist who felt that emigration was the most reasonable response to enclosure, which in Scotland was causing the precipitous eviction and impoverishment of thousands of farm families. He arranged to have the HBC provide nearly 120,000 square miles (approximately 310,000 square km) for settlement in and around the Red River valley of present-day Manitoba and North Dakota. The area was referred to as Assiniboia, named after the Assiniboin nation, which resided there.

The scheme was not well received by the established residents of the area. The population of Assiniboia was a heterogeneous mix of aboriginal and Euro-American individuals, essentially all of whom were engaged in the fur trade in one form or another. Members of the Métis nation were among the region's most prominent residents—economically successful, numerous, and well-traveled. Their economy emphasized commercial hunting, trapping, fishing, trading, and cartage; by generally limiting farming to such labour as was required to meet subsistence needs, they preserved the habitat of the animals upon which the fur trade relied.
Métis culture arose from the marriages of indigenous women, who were most often Cree, to European traders, who were most often French or Scottish. In the early 19th century, some Métis identified most closely with their indigenous heritage, some with their European heritage, and some with both equally. A fourth group saw themselves as members of a unique culture that drew from, but was independent of, those of their ancestors. Given that the first interethnic marriages had occurred some two centuries earlier and that new individuals were constantly marrying into Métis communities, each of these perspectives could be reasonably held.

A number of Métis were officers in the NWC; the HBC, however, eschewed hiring them (and all indigenous individuals) for anything but the most basic labour. This rankled the Métis, many of whom supposed that Selkirk's settlers and their intensive farming were meant to dispossess the residents of Assiniboia of their lands and livelihoods. The NWC shareholders encouraged these sentiments. The two companies' dispute over control of the territory became quite heated; the NWC had a longer presence there, but both had trading posts in the region, and the crown's grant of Rupert's Land to the HBC seemed—to HBC shareholders, at least—to prove the superiority of the HBC claim.

In 1812 the first of the Selkirk settlers arrived at the Red River Settlement (near present-day Winnipeg, Man.). Additional immigrants arrived in succeeding years; they were often harassed, and in some cases their buildings were burned and their crops destroyed. Tensions between the NWC-Métis contingent and the HBC-settler contingent were compounded in the severe winter of 1815–16, which produced widespread starvation. When a group of NWC men, almost entirely Métis, attacked and captured an HBC supply convoy, the HBC-appointed governor of the colony led a group of some 20–25 troops to retaliate. The NWC men killed 20 of this group in an engagement known as the Seven Oaks Massacre (1816). Many historians credit this event with fostering the unified Métis identity that later proved to be a key element in shaping the Canadian West and that continues to exist today.

In 1818 the Canadian courts, packed with judges who were NWC shareholders and supporters, ordered Selkirk to pay the NWC a very large settlement. The animosity between the rival companies was not resolved until 1821, when the British government insisted that they merge. The resultant corporation retained the Hudson's Bay Company name and many of its policies, including the use of discriminatory employment practices. Many Métis thus lost their primary employment as trappers, traders, and carters and began to move from the countryside into the Red River Settlement. Over the next several decades they made numerous petitions to the colonial and British governments requesting recognition of their status as an independent people, an end to the HBC monopoly, and colony status for Assiniboia, among other things. Their petitions were denied, although in some cases only after heated debate in the British Parliament.

Parliament established the self-governing Dominion of Canada through the British North America Act (1867), legislation that included little acknowledgement of the concerns of the Métis or other aboriginal nations.
Instead, Canada's 1868 Act Providing for the Organisation of the Department of the Secretary of State of Canada and for the Management of Indian and Ordnance Lands (sometimes referred to as the first Indian Act, although an act by that name was not passed until 1876) defined the ways that the dominion government would relate to native nations, essentially codifying the colonial legislation that had been passed during the 1850s.

Britain's Parliament also approved the transfer of Rupert's Land from the HBC to Canada, to be effective Dec. 1, 1869. Convinced that this would result in the seizure of their homes and land, the Métis formed a coalition through which they hoped to negotiate with the new dominion government. Led by Louis Riel, a young Métis who had studied law in Montreal, the coalition waded into a political morass that pitted an assortment of competing interests against one another. The parties included not only the Métis but also various First Nations groups, the Canadian Parliament, the HBC, and a variety of entities whose interests were diametrically opposed, such as Irish Catholic Fenians and Irish Protestant Orange Order members, French Canadian Catholics and British Canadian Protestants, and fur traders and farmers. The United States followed the proceedings closely, hoping to connect the lower 48 states with Alaska through the purchase or annexation of Rupert's Land; the state of Minnesota even offered Canada $10 million for the territory.

In an attempt to ensure that their concerns were heard, Riel's men prevented William McDougall, the commissioner of crown lands, from entering Assiniboia to implement the transfer of Rupert's Land from the HBC to the dominion. A frustrated McDougall nevertheless executed the part of the proclamation eliminating HBC rule over the region, unwisely leaving it without an official government. Riel quickly established a provisional government, as was allowed under law.

Soon after, in one of the communities governed by the Riel coalition, an Orangeman was tried for disturbing the peace; his trial and subsequent execution, though legal, created an uproar throughout Canada. Hoping to quell the situation, the Canadian Parliament quickly wrote and passed the Manitoba Act (1870). Among other provisions, it recognized the property claims of the area's occupants and set aside 1,400,000 acres (some 565,000 hectares) for future Métis use. It also mandated legal and educational parity between the English- and French-speaking communities, as that had become the key political issue for most of the Canadian public.

The Numbered Treaties and the Second Riel Rebellion

The Red River crisis laid the groundwork for the Numbered Treaties, 11 in all, that were negotiated between 1871 and 1921. For the most part these involved the cession of indigenous land in exchange for reserve areas and the governmental provision of annuities, including cash, equipment, livestock, health care, and public education, all in perpetuity. Leaders from all the involved parties generally felt it better to negotiate than to fight, as the human and financial costs of the conflicts in the western United States were well publicized at the time.

No aboriginal nation was able to negotiate everything it desired through the Numbered Treaties, although many native leaders were successful in pushing the dominion well beyond its preferred levels of remuneration.
In addition to their own negotiating skills, which were considerable, these leaders relied upon individuals who were trained to repeat discussions verbatim—a group whose talents were especially useful when the colonizers “forgot” important clauses of agreements. By the end of 1876, Treaties 1 through 6 had been negotiated by the nations of the southern reaches of present-day Ontario, Manitoba, Alberta, and Saskatchewan. A particularly interesting idea had been advocated by the Plains Cree leader Big Bear, who persuaded the leaders of other nations to join him in requesting adjoining reserves. Their request was denied on the grounds that it would create an indigenous nation within a nation, which had of course been exactly the goal Big Bear wished to achieve.

The Métis fared poorly during the implementation of the Manitoba Act and the Numbered Treaties despite their earlier role in instigating dominion consultation with indigenous peoples in the Canadian West. Government assurances that Métis property claims in Manitoba would be recognized had been negated by the post hoc addition of development requirements—approximately 90 percent of Métis title requests were refused on the basis of insufficient improvements, such as too few cultivated acres or housing that was deemed unsuitable. A large number moved to Saskatchewan, where the government insisted they file individual land claims as regular citizens. As an aboriginal nation, the Métis argued against this, noting that new block reserves should replace the land they had lost in Manitoba. From the perspective of the dominion, however, the matter was closed.

Even before the 1876 completion of Treaties 1–6, many members of the northern Plains nations were taking up farming and ranching. Most also continued to rely on bison for meat and for robes or finished hides, which had become very popular trade items. The Métis engaged in the same activities, and, while the resident tribes were not happy with the arrival of competitors, they and the Métis were generally sympathetic to each other's human rights causes.

The bison robe trade peaked in the late 1870s. Consumers preferred the lush robes of young cows, and the hunting of animals in their prime reproductive years contributed heavily to the imminent collapse of the bison population. Even as bison became scarce, harvests failed, and for several years in the early 1880s starvation became a real possibility for many people. For indigenous nations, these hardships were worsened by government agents who refused to fulfill their legal obligations to distribute annuities or who distributed only partial or substandard goods.

In 1884, at the suggestion of Big Bear, more than 2,000 people convened for a pan-tribal gathering. Although tribal leaders had been quietly meeting for years to arrange the scheduling of bison hunts, this was by far the largest indigenous gathering the Canadian Plains had seen. Government agents subsequently prohibited inter-reserve travel and began in earnest to use the withholding of food as a method of control.

Their actions ultimately precipitated a crisis. Late in 1884 Louis Riel arrived in Saskatchewan, having spent several years in exile in the United States. He attempted to engage the dominion government, advocating for colony status, a position supported by Big Bear's pan-tribal alliance, the Métis, and local Euro-Americans alike. In early 1885 a few starving tribal members looted Euro-American storage facilities and convoys, provoking government retaliation.
Big Bear and another Plains Cree leader, Poundmaker, were able to intercede before the resultant skirmishes became full-blown engagements, thus preventing the deaths of many settlers and North-West Mounted Police officers. Government troops and ordnance were quickly transported to the area, and within a few weeks Big Bear, Poundmaker, Riel, and other alliance leaders had surrendered. They were soon convicted of various crimes. Riel was executed for treason, and, although their actions had clearly saved many lives, Big Bear and Poundmaker were sentenced to prison, where their health was quickly broken; both died within two years. Although Treaties 7 through 11 remained to be negotiated, colonial conquest was complete in the most populated portions of western Canada.

In many parts of the world, including Northern America, the indigenous peoples who survived military conquest were subsequently subject to political conquest, a situation sometimes referred to colloquially as “death by red tape.” Formulated through governmental and quasi-governmental policies and enacted by nonnative bureaucrats, law enforcement officers, clergy, and others, the practices of political conquest typically fostered structural inequalities that disenfranchised indigenous peoples while strengthening the power of colonizing peoples.

Although the removals of the eastern tribes in the 1830s initiated this phase of conquest, the period from approximately 1885 to 1970 was also a time of intense political manipulation of Native American life. The key question of both eras was whether indigenous peoples would be better served by self-governance or by assimilation to the dominant colonial cultures of Canada and the United States.

For obvious reasons, most Indians preferred self-governance, also known as sovereignty. Although many Euro-Americans had notionally agreed with this position during the removal era, by the late 19th century most espoused assimilation. Many subscribed to progressivism, a loosely coherent set of values and beliefs that recognized and tried to ameliorate the growing structural inequalities they observed in Northern America. Generally favouring the small businessman and farmer over the industrial capitalist, most progressives realized that many inequities were tied to race or ethnicity and believed that assimilation was the only reasonable means through which the members of any minority group would survive.

This view held that the desire among American Indians to retain their own cultures was merely a matter of nostalgia and that it would be overcome in a generation or two, after rationalism replaced indigenous sentimentality. In Canada, early assimilationist legislation included the Crown Lands Protection Act (1839) and the many acts flowing from Canada's Bagot Commission, such as the Act to Encourage the Gradual Civilization of the Indian Tribes of the Canadas (1857). In the United States, the most prominent example of such legislation was the Indian Civilization Act (1819).

Although assimilationist perspectives were often patronizing, they were also more liberal than some of those that had preceded them. The reservation system had been formulated through models of cultural evolution (now discredited) that claimed that indigenous cultures were inherently inferior to those originating in Europe.
In contrast to those who believed that indigenous peoples were inherently incompetent, assimilationists believed that any human could achieve competence in any culture.

Programs promoting assimilation were framed by the social and economic ideals that had come to dominate the national cultures of Canada and the United States. Although they varied in detail, these ideals generally emphasized Euro-American social structures and habits such as nuclear or, at most, three-generation families; patrilineal kinship; differential inheritance among “legitimate” and “bastard” children; male-led households; a division of labour that defined the efforts of women, children, and elders as “domestic help” and those of men as “productive labour”; sober religiosity; and corporal punishment for children and women. Economically, they emphasized capitalist principles, especially the ownership of private property (particularly of land, livestock, and machinery); self-directed occupations such as shopkeeping, farming, and ranching; and the self-sufficiency of the nuclear household.

Most Native American nations were built upon different social and economic ideals. Not surprisingly, they preferred to retain self-governance in these arenas as well as in the political sphere. Their practices, while varying considerably from one group to the next, generally stood in opposition to those espoused by assimilationists. Socially, most indigenous polities emphasized the importance of extended families and corporate kin groups, matrilineal or bilateral kinship, little or no consideration of legitimacy or illegitimacy, households led by women or by women and men together, a concept of labour that recognized all work as work, highly expressive religious traditions, and cajoling and other nonviolent forms of discipline for children and adults. Economically, native ideals emphasized communitarian principles, especially the sharing of use rights to land (by definition, land was community, not private, property) and the self-sufficiency of the community or kin group, with wealthier households ensuring that poorer neighbours or kin were supplied with the basic necessities.

Assimilationists initiated four movements designed to ensure their victory in this contest of philosophies and lifeways: allotment, the boarding school system, reorganization, and termination. Native peoples unceasingly fought these movements, and the survival of indigenous cultures in the face of such strongly assimilationist programming is a measure of their success.

Within about a decade of creating the western reservations, both Canada and the United States began to abrogate their promises that reservation land would be held inviolable in perpetuity. In Canada the individual assignment, or allotment, of parcels of land within reserves began in 1879; by 1895 the right of allotment had officially devolved from the tribes to the superintendent general. In the United States a similar policy was effected through the Dawes General Allotment Act (1887).

Although some reservations were large, they consistently comprised economically marginal land. Throughout the colonial period, settlers and speculators—aided by government entities such as the military—had pushed tribes to the most distant hinterlands possible. Further, as treaty after treaty drew and redrew the boundaries of reservations, the same parties lobbied to have the best land carved out of the reserves and made available for sale to non-Indians.
As a result, confinement to a reservation, even a large one, generally prevented nomadic groups from obtaining adequate wild food; farming groups, who had always supplemented their crops heavily with wild fare, got on only slightly better.

Native leaders had insisted that treaties include various forms of payment to the tribes in exchange for the land they ceded. Although the governments of the United States and Canada were obliged to honour their past promises of annuities, many of the bureaucrats entrusted with the distribution of these materials were corrupt. The combination of marginal land and bureaucratic malfeasance created immense poverty in native communities.

Ignorant of the legal and bureaucratic origins of reservation poverty, many Euro-Americans in the United States and Canada developed the opinion that reservation life, particularly its communitarian underpinnings, fostered indolence. They came to believe that the privatization of land was the key to economic rehabilitation and self-sufficiency. In Canada, where the right to allot reserves was held by the government, policy at the time dictated that individual title and full citizenship were restricted to those who relinquished their aboriginal status. In the United States, the Dawes Act authorized the president to divide reservations into parcels and to give every native head of household a particular piece of property. The land would be held in trust for a period of 25 years, after which full title would devolve upon the individual. With title would go all the rights and duties of citizenship. Reservation land remaining after all qualified tribal members had been provided with allotments was declared “surplus” and could be sold by the government, on behalf of the tribe, to non-Indians. In the United States a total of 118 reservations were allotted in this manner. Through the alienation of the surplus lands and the patenting of individual holdings, the nations living on these reservations lost 86 million acres (34.8 million hectares), or 62 percent, of the 138 million acres (55.8 million hectares) that had been designated by treaty as Native American common property.

Although the particulars of allotment were different in the United States and Canada, the outcomes were more or less the same in both places: indigenous groups and individuals resisted the partitioning process. Their efforts took several forms and were aided by allotment's piecemeal implementation, which continued into the early 20th century.

A number of tribes mounted legal and lobbying efforts in attempts to halt the allotment process. In the United States these efforts were greatly hindered when the Supreme Court determined, in Lone Wolf v. Hitchcock (1903), that allotment was legal because Congress was entitled to abrogate treaties. In Canada the decision in St. Catharines Milling and Lumber Company v. The Queen (1888) found that aboriginal land remained in the purview of the crown despite treaties that indicated otherwise and that the dominion, as an agent of the crown, could thus terminate native title at will.

In the United States, some tribes held property through forms of title that rendered their holdings less susceptible to the Dawes Act. For instance, in the 1850s some members of the Fox (Meskwaki) nation purchased land on which to reside.
Their original purchase of 80 acres (32 hectares) of land was held through free title and was therefore inalienable except through condemnation; the Meskwaki Settlement, as it became known, had grown to more than 7,000 acres (2,800 hectares) by 2000. In a number of other areas, native individuals simply refused to sign for or otherwise accept their parcels, leaving the property in a sort of bureaucratic limbo.

Despite its broad reach, not every reservation had been subjected to partition by the end of the allotment movement. The reservations that avoided the process were most often found in very remote or very arid areas, as with land held by several Ute nations in the Southwest. For similar reasons, many Arctic nations avoided not only allotment but even its precursor, partition into reserves.

Allotment failed as a mechanism to force cultural change: the individual ownership of land did not in itself effect assimilation, although it did enrich many Euro-American land speculators. Native social networks and cultural cohesion were in some places shattered by the dispersal of individuals, families, and corporate kin groups across the landscape. Many native institutions and cultural practices were weakened, and little to nothing was offered in substitution.

The worst offenses of the assimilationist movement occurred at government-sponsored boarding, or residential, schools. From the mid-19th century until as late as the 1960s, native families in Canada and the United States were compelled by law to send their children to these institutions, which were usually quite distant from the family home. At least through World War II, the schools' educational programming was notionally designed to help students achieve basic literacy and arithmetic skills and to provide vocational training in a variety of menial jobs—the same goals, to a large extent, of public education throughout Northern America during that period.

However, the so-called Indian schools were often led by men of assimilationist convictions so deep as to be racist. One example is Carlisle Indian Industrial School (in Carlisle, Pa.) founder Richard Pratt, who in 1892 described his mission as “Kill the Indian in him, and save the man.” Such sentiments persisted for decades; in 1920 Duncan Campbell Scott, the superintendent of the Canadian residential school system, noted his desire to have the schools “continue until there is not a single Indian in Canada that has not been absorbed into the body politic and there is no Indian question, and no Indian Department.” Stronger statements promoting assimilation at the expense of indigenous sovereignty can hardly be imagined.

In pursuing their goals, the administrators of residential schools used a variety of material and psychological techniques to divest native children of their cultures. Upon arrival, students were forced to trade their clothes for uniforms, to have their hair cut in Euro-American styles, and to separate from their relatives and friends. Physical conditions at the schools were often very poor and caused many children to suffer from malnutrition and exposure, exacerbating tuberculosis and other diseases that were common at the time. The schools were generally run by clergy and commingled religious education with secular subjects; staff usually demanded that students convert immediately to Christianity.
Displays of native culture, whether of indigenous language, song, dance, stories, religion, sports, or food, were cruelly punished through such means as beatings, electrical shocks, the withholding of food or water, and extended periods of forced labour or kneeling. Sexual abuse was rampant. In particularly bad years, abuse and neglect were acknowledged to have caused the deaths of more than half of the students at particular schools.

Native families were aware that many children who were sent to boarding schools never returned, and they responded in a number of ways. Many taught their children to hide at the approach of the government agents who were responsible for assembling children and transporting them to the schools. Many students who were transported ran away, either during the trip or from the schools themselves; those who escaped often had to walk hundreds of miles to return home. Some communities made group decisions to keep their children hidden; perhaps the best-known of such events occurred in 1894–95, when 19 Hopi men from Oraibi pueblo were incarcerated for refusing to reveal their children's whereabouts to the authorities. Through these and other efforts, native communities eventually gained control over the education of their children. It was, however, a slow process: the first school in the United States to come under continuous tribal administration was the Rough Rock Demonstration School in Arizona in 1966, while in Canada the Blue Quills First Nations College in Alberta was the first to achieve that status, in 1971.

Many researchers and activists trace the most difficult issues faced by 20th- and 21st-century Indian communities to the abuses that occurred at the boarding schools. They note that the problems common to many reservations—including high rates of suicide, substance abuse, domestic violence, child abuse, and sexual assault—are clear sequelae of childhood abuse. In 1991 the assaults perpetrated upon Canadian children who had attended residential schools in the mid-20th century began to be redressed through the work of the Royal Commission on Aboriginal Peoples. The commission's 1996 report substantiated indigenous claims of abuse, and in 2006 Canada allocated more than $2 billion (Canadian) in class-action reparations and mental health funding for the former students.

By the late 19th century the removal of the eastern tribes, the decimation of California peoples, a series of epidemics in the Plains, and the high mortality rates at boarding schools seemed to confirm that Indians were “vanishing.” The belief that Native Americans would not survive long as a “race” provided a fundamental justification for all assimilationist policies. It also supported rationalizations that indigenous views on legislation and public policy were immaterial. When it became obvious after about 1920 that Northern America's aboriginal populations were actually increasing, the United States and Canada found themselves unprepared to acknowledge or advance the interests of these people.

In the United States a 1926 survey brought into clear focus the failings of the previous 40 years. The investigators found most Indians “extremely poor,” in bad health, without education, and isolated from the dominant Euro-American culture around them. Under the impetus of these findings and other pressures for reform, the U.S. Congress adopted the Indian Reorganization Act of 1934, which was designed to effect an orderly transition from federal control to native self-government.
The essentials of the new law were as follows: (1) allotment of tribal lands was prohibited, but tribes might assign use rights to individuals; (2) so-called surplus lands that had not been sold or granted to non-Indians could be returned to the tribes; (3) tribes could adopt written constitutions and charters of incorporation through which to manage their internal affairs; and (4) funds were authorized for the establishment of a revolving credit program, which was to be used for land purchases, for educational assistance, and for helping the tribes to form governments. The terms of the act were universally applicable, but any particular nation could reject them through a referendum process.

The response to the Reorganization Act was indicative of the indigenous peoples' ability to rise above adversity. About 160 communities adopted written constitutions, some of which combined traditional practices with modern parliamentary methods. The revolving credit fund helped to improve tribal economies in many ways: native ranchers built up their herds, artisans were better able to market their work, and so forth. Educational and health services were also improved.

After 1871, when internal tribal matters had become the subject of U.S. legislation, the number and variety of regulatory measures regarding native individuals multiplied rapidly. In the same year that the Indian Reorganization Act was passed, Congress took the significant step of repealing 12 statutes that had made it possible to hold indigenous people virtual prisoners on their reservations. The recognition of tribal governments following the Reorganization Act seemed to awaken an interest in civic affairs beyond tribal boundaries. The earlier Snyder Act (1924) had extended citizenship to all Indians born in the United States, opening the door to full participation in American civic life. But few took advantage of the law, and a number of states subsequently excluded them from the franchise. During the reorganization period, many native peoples successfully petitioned to regain the right to vote in state and federal elections. The major exception to this trend occurred in Arizona and New Mexico, which withheld enfranchisement until 1948 and granted it only after a lengthy lawsuit.

A number of nations had for many years sponsored tribal councils. These councils had functioned without federal sanction, although their members had represented tribal interests in various ways, such as leading delegations to Washington, D.C., to protest allotment. Reorganization gave tribes the opportunity to formalize these and other indigenous institutions. Tribal governments soon initiated a number of lawsuits designed to regain land that had been taken in contravention of treaty agreements. Other lawsuits focused on the renewal of use rights, such as the right to hunt or fish, that had been guaranteed in some treaties.

These legal strategies for extending sovereignty were often very successful. The federal courts consistently upheld treaty rights and also found that ancestral lands could not be taken from an aboriginal nation, whether or not a treaty existed, “except in fair trade.” The fair trade argument was cited by the Hualapai against the Santa Fe Railway, which in 1944 was required to relinquish about 500,000 acres (200,000 hectares) it thought it had been granted by the United States. A special Indian Claims Commission, created by an act of Congress on Aug.
13, 1946, received petitions for land claims against the United States. Many land claims resulted in significant compensation, including nearly $14,800,000 to the Cherokee nation, $10,250,000 to the Crow tribe, $12,300,000 to the Seminoles, and $31,750,000 to the Ute.

Even as many tribes in the United States were regaining land or compensation, the U.S. Bureau of Indian Affairs instituted the Urban Indian Relocation Program. Initiated within the bureau in 1948 and supported by Congress from the 1950s on, the relocation program was designed to transform the predominantly rural native population into an assimilated urban workforce. The bureau established offices in a variety of destination cities, including Chicago, Dallas, Denver, Los Angeles, San Francisco, San Jose, and St. Louis. Through program auspices, it promised to provide a variety of services to effect the transition to city life, including transportation from the reservation, financial assistance, help in finding housing and employment, and the like, although the distribution and quality of these services were often uneven. From 1948 to 1980, when the program ended, some 750,000 Indians are estimated to have relocated to cities, although not all did so under the official program and not all remained in urban areas permanently. Evaluations of its success vary, but it is clear that urban relocation helped to foster the sense of pan-Indian identity and activism that arose in the latter half of the 20th century.

The ultimate goals of assimilationist programming were to completely divest native peoples of their cultural practices and to terminate their special relationship to the national government. Canada's attempts at promoting these goals tended to focus on the individual, while those of the United States tended to focus on the community.

In Canada a variety of 19th-century policies had been put in place to encourage individuals to give up their aboriginal status in favour of regular citizenship. Native people were prohibited from voting, serving in public office, owning land, attending public school, holding a business license, and a variety of other activities. These disincentives did not prove to be very strong motivating forces toward the voluntary termination of native status. More successful were regulations that initiated the termination of status without an individual's permission. For instance, until 1985, indigenous women who married nonnative men automatically lost their aboriginal status; undertaking military service or earning a university degree could also initiate involuntary changes in status.

Major adjustments to Canada's pro-termination policies did not occur until after World War II, when returning veterans and others began to agitate for change. In 1951 activists succeeded in eliminating many of the disincentives associated with indigenous status. After years of prohibitions, for instance, native peoples regained the right to hold powwows and potlatches and to engage in various (if limited) forms of self-governance. The new policy also defined procedures for the reinstatement of aboriginal status, for which some 42,000 individuals applied within the first year of passage.

In the United States, termination efforts were handled somewhat differently. In 1954 the U.S. Department of the Interior began terminating federal control and support of tribes that had been deemed able to look after their own affairs.
From 1954 to 1960, support to 61 indigenous nations was ended by the withdrawal of federal services or trust supervision.

The results were problematic. Some extremely impoverished communities lost crucial services such as schools and clinics due to a lack of funds; in a number of cases, attempts to raise the capital with which to replace these services attracted unscrupulous business partners and further impoverished the community. The protests of tribal members and other activists became so insistent that the termination program began to be dismantled in 1960.

American Indians became increasingly visible in the late 20th century as they sought to achieve a better life as defined on their own terms. During the civil rights movement of the 1960s, many drew attention to their causes through mass demonstrations and protests. Perhaps the most publicized of these actions were the 19-month occupation (1969–71) of Alcatraz Island in San Francisco Bay, California, by the pan-tribal group Indians of All Tribes and the February 1973 occupation of Wounded Knee, on the Oglala Sioux Pine Ridge reservation in South Dakota, by members of the militant American Indian Movement (AIM).

During the 1960s and '70s, native polities continued to capitalize on their legal successes and to expand their sphere of influence through the courts; forestry, mineral, casino gambling, and other rights involving tribal lands became the subjects of frequent litigation. Of the many cases filed, United States v. Washington (1974) had perhaps the most famous and far-reaching decision. More commonly referred to as the Boldt case, after the federal judge, George Boldt, who wrote the decision, this case established that treaty agreements entitled certain Northwest Coast and Plateau tribes to one-half of the fish taken in the state of Washington—and, by implication, in other states where tribes had similarly reserved the right to fish. In addition, some groups continued their efforts to regain sovereignty over or compensation for tribal lands. The most important results of the latter form of activism were the passage of the Alaska Native Claims Settlement Act (1971), in which Native Alaskans received approximately 44 million acres (17.8 million hectares) of land and nearly $1 billion (U.S.) in exchange for land cessions, and the creation of Nunavut (1999), a new Canadian territory predominantly administered by and for the Inuit.

Developments in the late 20th and early 21st centuries

Native American life in the late 20th and early 21st centuries has been characterized by continuities with and differences from the trajectories of the previous several centuries. One of the more striking continuities is the persistent complexity of native ethnic and political identities. In 2000 more than 600 indigenous bands or tribes were officially recognized by Canada's dominion government, and some 560 additional bands or tribes were officially recognized by the government of the United States. These numbers were slowly increasing as additional groups engaged in the difficult process of gaining official recognition.

The Native American population has continued to recover from the astonishing losses of the colonial period, a phenomenon first noted at the turn of the 20th century. Census data from 2006 indicated that people claiming aboriginal American ancestry numbered some 1.17 million in Canada, or approximately 4 percent of the population; of these, some 975,000 individuals were officially recognized by the dominion as of First Nation, Métis, or Inuit heritage. U.S.
census figures from 2000 indicated that some 4.3 million people claimed Native American descent, or 1–2 percent of the population; fewer than one million of these self-identified individuals were officially recognized as of native heritage, however.

The numerical difference between those claiming ancestry and those who are officially recognized is a reflection of many factors. Historically, bureaucratic error has frequently caused individuals to be incorrectly removed from official rolls. Marrying outside the Native American community has also been a factor: in some places and times, those who out-married were required by law to be removed from tribal rolls; children of these unions have sometimes been closer to one side of the family than the other, thus retaining only one parent's ethnic identity; and in some cases, the children of ethnically mixed marriages have been unable to document the degree of genetic relation necessary for official enrollment in a particular tribe. This degree of relation is often referred to as a blood quantum requirement; one-fourth ancestry, the equivalent of one grandparent, is a common minimum blood quantum, though not the only one. Other nations define membership through features such as residence on a reservation, knowledge of traditional culture, or fluency in a native language. Whether genetic or cultural, such definitions are generally designed to prevent the improper enrollment of people who have wishful or disreputable claims to native ancestry. Known colloquially as “wannabes,” these individuals also contribute to the lack of correspondence between the number of people who claim Indian descent and the number of officially enrolled individuals.

A striking difference from the past can be seen in Native Americans' ability to openly engage with both traditional and nontraditional cultural practices. While in past eras many native individuals had very limited economic and educational opportunities, by the turn of the 21st century they were members of essentially every profession available in North America. Many native people have also moved from reservations to more urban areas, including about 65 percent of U.S. tribal members and 55 percent of aboriginal Canadians.

Despite these profound changes in occupation and residency, indigenous Americans are often represented anachronistically. Depictions of their cultures are often “frozen” in the 18th or 19th century, causing many non-Indians to incorrectly believe that the aboriginal nations of the United States and Canada are culturally or biologically extinct—a misbelief that would parallel the idea that people of European descent are extinct because one rarely sees them living in the manner depicted in history museums such as the Jorvik Viking Center (York, Eng.) or Colonial Williamsburg (Virginia). To the contrary, 21st-century American Indians participate in the same aspects of modern life as the general population: they wear ordinary apparel, shop at grocery stores and malls, watch television, and so forth.
Ethnic festivals and celebrations do provide individuals who are so inclined with opportunities to honour and display their cultural traditions, but in everyday situations a powwow dancer would be as unlikely to wear her regalia as a bride would be to wear her wedding dress; in both cases, the wearing of special attire marks a specific religious and social occasion and should not be misunderstood as routine.

Although life has changed drastically for many tribal members, a number of indicators, such as the proportion of students who complete secondary school, the level of unemployment, and the median household income, show that native people in the United States and Canada have had more difficulty in achieving economic success than non-Indians. Historical inequities have clearly contributed to this situation. In the United States, for instance, banks cannot repossess buildings on government trust lands, so most Indians have been unable to obtain mortgages unless they leave the reservation. This regulation in turn leads to depopulation and substandard housing on the reserve, problems that are not easily resolved without fundamental changes in regulatory policy.

The effects of poorly considered government policies are also evident in less-obvious ways. For example, many former residential-school students did not parent well, and an unusually high number of them suffered from post-traumatic stress disorder. Fortunately, social service agencies found that mental health care, parenting classes, and other actions could resolve many of the problems that flowed from the boarding school experience.

While most researchers and Indians agree that historical inequities are the source of many problems, they also tend to agree that the resolution of such issues ultimately lies within native communities themselves. Thus, most nations continue to pursue sovereignty, the right to self-determination, as an important focus of activism, especially in terms of its role in tribal well-being, cultural traditions, and economic development. Questions of who or what has the ultimate authority over native nations and individuals, and under what circumstances, remain among the most important, albeit contentious and misunderstood, aspects of contemporary Native American life.

Although community self-governance was the core right that indigenous Americans sought to maintain from the advent of colonialism onward, the strategies they used to achieve it evolved over time. The period from the Columbian landfall to the late 19th century might be characterized as a time when Native Americans fought to preserve sovereignty by using economics, diplomacy, and force to resist military conquest. From the late 19th century to the middle of the 20th, political sovereignty, and especially the enforcement of treaty agreements, was a primary focus of indigenous activism; local, regional, and pan-Indian resistance to the allotment of communally owned land, to the mandatory attendance of children at boarding schools, and to the termination of tribal rights and perquisites all grew from the basic tenets of the sovereignty movement.
By the mid-1960s the civil rights movement had educated many peoples about the philosophy of equal treatment under the law—essentially the application of the sovereign entity's authority over the individual—and civil rights joined sovereignty as a focus of Indian activism.

One issue, perhaps the principal one, in defining the sovereign and civil rights of American Indians has been the determination of jurisdiction in matters of Indian affairs. Historical events in Northern America, that part of the continent north of the Rio Grande, created an unusually complex system of competing national, regional (state, provincial, or territorial), and local claims to jurisdiction. Where other countries typically have central governments that delegate little authority to regions, Canada and the United States typically assign a wide variety of responsibilities to provincial, state, and territorial governments, including the administration of such unrelated matters as unemployment insurance, highway maintenance, public education, and criminal law. With nearly 1,200 officially recognized tribal governments and more than 60 regional governments extant in the United States and Canada at the turn of the 21st century, and with issues such as taxation and regulatory authority at stake, it is unsurprising that these various entities have been involved in a myriad of jurisdictional battles.

Two examples of criminal jurisdiction help to clarify the interaction of tribal, regional, and federal or dominion authorities. One area of concern has been whether a non-Indian who commits a criminal act while on reservation land can be prosecuted in the tribal court. In Oliphant v. Suquamish Indian Tribe (1978), the U.S. Supreme Court determined that tribes do not have the authority to prosecute non-Indians, even when such individuals commit crimes on tribal land. This decision was clearly a blow to tribal sovereignty, and some reservations literally closed their borders to non-Indians in order to ensure that their law enforcement officers could keep the peace within the reservation.

The Oliphant decision might lead one to presume that, as non-Indians may not be tried in tribal courts, Indians in the United States would not be subject to prosecution in state or federal courts. This issue was decided to the contrary in United States v. Wheeler (1978). Wheeler, a Navajo who had been convicted in a tribal court, maintained that the prosecution of the same crime in another (federal or state) court amounted to double jeopardy. In this case the Supreme Court favoured tribal sovereignty, finding that the judicial proceedings of an independent entity (in this case, the indigenous nation) stood separately from those of the states or the United States; a tribe was entitled to prosecute its members. In so ruling, the court seems to have placed an extra burden on Native Americans: whereas the plaintiff in Oliphant gained immunity from tribal law, indigenous plaintiffs could indeed be tried for a single criminal act in both a tribal and a state or federal court.

A plethora of other examples are available to illustrate the complexities of modern native life. The discussion below highlights a selection of four issues that are of pan-Indian importance: the placement of native children into non-Indian foster and adoptive homes, the free practice of traditional religions, the disposition of the dead, and the economic development of native communities.
The article closes with a discussion of international law and Native American affairs.

The outplacement and adoption of indigenous children

From the beginning of the colonial period, Native American children were particularly vulnerable to removal by colonizers. Captured children might be sold into slavery, forced to become religious novitiates, made to perform labour, or adopted as family members by Euro-Americans; although some undoubtedly did well under their new circumstances, many suffered. In some senses, the 19th-century practice of forcing children to attend boarding school was a continuation of these earlier practices.

Before the 20th century, social welfare programs were, for the most part, the domain of charities, particularly of religious charities. By the mid-20th century, however, governmental institutions had surpassed charities as the dominant instruments of public well-being. As with other forms of Northern American civic authority, most responsibilities related to social welfare were assigned to state and provincial governments, which in turn developed formidable child welfare bureaucracies. These were responsible for intervening in cases of child neglect or abuse; although caseworkers often tried to maintain the integrity of the family, children living in dangerous circumstances were generally removed.

The prevailing models of well-being used by children's services personnel reflected the culture of the Euro-American middle classes. They viewed caregiving and financial well-being as the responsibilities of the nuclear family; according to this view, a competent family comprised a married couple and their biological or legally adopted children, with a father who worked outside the home, a mother who was a homemaker, and a residence with material conveniences such as electricity. These expectations stood in contrast to the values of reservation life, where extended-family households and communitarian approaches to wealth were the norm. For instance, while Euro-American culture has emphasized the ability of each individual to climb the economic ladder by eliminating the economic “ceiling,” many indigenous groups have preferred to ensure that nobody falls below a particular economic “floor.” In addition, material comforts linked to infrastructure were simply not available on reservations as early as in other rural areas. For instance, while U.S. rural electrification programs had ensured that 90 percent of farms had electricity by 1950—a tremendous rise compared with the 10 percent that had electricity in 1935—census data indicated that the number of homes with access to electricity did not approach 90 percent on reservations until 2000. These kinds of cultural and material divergences from Euro-American expectations instantly made native families appear to be backward and neglectful of their children.

As a direct result of these and other ethnocentric criteria, disproportionate numbers of indigenous children were removed from their homes by social workers. However, until the mid-20th century there were few places for such children to go; most reservations were in thinly populated rural states with few foster families, and interstate and interethnic foster care and adoption were discouraged. As a result, native children were often institutionalized at residential schools and other facilities. This changed in the late 1950s, when the U.S.
Bureau of Indian Affairs joined with the Child Welfare League of America in launching the Indian Adoption Project (IAP), the country's first large-scale transracial adoption program. The IAP eventually moved between 25 and 35 percent of the native children in the United States into interstate adoptions and interstate foster care placements. Essentially all of these children were placed with Euro-American families.

Appalled at the loss of yet another generation of children—many tribes had only effected a shift from government-run boarding schools to local schools after World War II—indigenous activists focused on the creation and implementation of culturally appropriate criteria with which to evaluate caregiving. They argued that the definition of a functioning family was a matter of both sovereignty and civil rights—that a community has an inherent right and obligation to act in the best interests of its children and that individual bonds between caregiver and child are privileged by similarly inherent, but singular, rights and obligations.

The U.S. Indian Child Welfare Act (1978) attempted to address these issues by mandating that states consult with tribes in child welfare cases. It also helped to establish the legitimacy of the wide variety of indigenous caregiving arrangements, such as a reliance on clan relatives and life with fewer material comforts than might be found off the reservation. The act was not a panacea, however; a 2003 report by the Child Welfare League of America, Children of Color in the Child Welfare System, indicated that, although the actual incidence of child maltreatment in the United States was similar among all ethnic groups, child welfare professionals continued to substantiate abuse in native homes at twice the rate of substantiation for Euro-American homes. The same report indicated that more than three times as many native children were in foster care, per capita, as Euro-American children.

Canadian advocates had similar cause for concern. In 2006 the leading advocacy group for the indigenous peoples of Canada, the Assembly of First Nations (AFN), reported that as many as 1 in 10 native children were in outplacement situations; the ratio for nonnative children was approximately 1 in 200. The AFN also noted that indigenous child welfare agencies were funded at per capita levels more than 20 percent below those of provincial agencies. Partnering with a child advocacy group, the First Nations Child and Family Caring Society of Canada, the AFN cited these and other issues in a human rights complaint filed with the Canadian Human Rights Commission, a signal of the egregious nature of the problems in the country's child welfare system.

Religious freedom

The colonization of the Americas involved religious as well as political, economic, and cultural conquest. Religious oppression began immediately and continued unabated well into the 20th—and some would claim the 21st—century. Although the separation of church and state is given primacy in the U.S. Bill of Rights (1791) and freedom of religion is implied in Canada's founding legislation, the British North America Act (1867), these governments have historically prohibited many indigenous religious activities. For instance, the Northwest Coast potlatch, a major ceremonial involving feasting and gift giving, was banned in Canada through an 1884 amendment to the Indian Act, and it remained illegal until the 1951 revision of the act. In 1883 the U.S.
secretary of the interior, acting on the advice of Bureau of Indian Affairs personnel, criminalized the Plains Sun Dance and many other rituals; under federal law, the secretary was entitled to make such decisions more or less unilaterally. In 1904 the prohibition was renewed. The government did not reverse its stance on the Sun Dance until the 1930s, when a new Bureau of Indian Affairs director, John Collier, instituted a major policy shift. Even so, arrests of Sun Dancers and other religious practitioners continued in some places into the 1970s.

Restrictions imposed on religion were usually rationalized as limiting dangerous actions rather than as legislating belief systems; federal authorities claimed that they had not only the right but the obligation to prevent the damage that certain types of behaviour might otherwise visit upon the public welfare. It was argued, for instance, that potlatches, by impoverishing their sponsors, created an underclass that the public was forced to support, and that the Sun Dance was a form of torture and thus inherently harmed the public good. These and other public good claims were contestable on several grounds, notably the violation of the free practice of activities essential to a religion and the violation of individual self-determination. Analogues to the prohibited behaviours illustrate the problems with such restrictions. Potlatch sponsors are substantively comparable to Christian church members who tithe or to religious novitiates who transfer their personal property to a religious institution. Likewise, those who choose to endure the physical trials of the Sun Dance are certainly as competent to make that decision as those who donate bone marrow for transplant; in both cases, the participants are prepared to experience physical suffering as part of a selfless endeavour intended to benefit others.

By the late 1960s it had become increasingly clear that arguments prohibiting indigenous religious practices in the name of the public good were ethnocentric and were applied with little discretion. In an attempt to ameliorate this issue, the U.S. Congress eventually passed the American Indian Religious Freedom Act (AIRFA; 1978). AIRFA was intended to ensure the protection of Native American religions and their practitioners, and it successfully stripped away many of the bureaucratic obstacles with which they had been confronted. Before 1978, for instance, the terms of the Endangered Species Act prohibited the possession of eagle feathers, which are an integral part of many indigenous rituals; after AIRFA's passage, a permitting process was created so that these materials could legally be owned and used by Native American religious practitioners. In a similar manner, permits to conduct indigenous religious services on publicly owned land, once approved or denied haphazardly, became more freely available.

If allowing certain practices was one important effect of AIRFA's passage, so was the reduction of certain activities at specific sites deemed sacred under native religious traditions. For instance, Devils Tower National Monument (Wyoming), an isolated rock formation that rises some 865 feet (264 metres) over the surrounding landscape, is for many Plains peoples a sacred site known as Grizzly Bear Lodge. Since 1995 the U.S. National Park Service, which administers the property, has asked visitors to refrain from climbing the formation during the month of June.
In the Plains religious calendar this month is a time of reflection and repentance, akin in importance and purpose to Lent for Christians, the period from Rosh Hashana to Yom Kippur for Jews, or the month of Ramadan for Muslims. Many native individuals visit the monument during June and wish to meditate and otherwise observe their religious traditions without the distraction of climbers, whose presence they feel abrogates the sanctity of the site; to illustrate their point, religious traditionalists in the native community have noted that free climbing is not allowed on other sacred structures such as cathedrals. Although the climbing limits are voluntary and not all climbers refrain from such activities, a considerable reduction was effected: June climbs were reduced by approximately 80 percent after the first desist request was made.

Repatriation and the disposition of the dead

At the close of the 20th century, public good rationales became particularly heated in relation to the disposition of the indigenous dead: most Native Americans felt that graves of any type should be left intact and found the practice of collecting human remains for study fundamentally repulsive. Yet from the late 15th century onward, anthropologists, medical personnel, and curiosity seekers, among others, routinely collected the bodies of American Indians. Battlefields, cemeteries, and burial mounds were common sources of such human remains into the early 21st century, and collectors were quite open—at least among themselves—in their disregard for native claims to the dead.

Among others who freely admitted to stealing from recent graves was Franz Boas, one of the founders of Americanist anthropology, who was in turn sued by the tribe whose freshly dead he had looted. The rationale for such behaviour was that indigenous skeletal material was by no means sacrosanct in the face of science; to the contrary, it was a vital link in the study of the origins of American Indians specifically and of humans in general. Indigenous peoples disagreed with this perspective and used many tools to frustrate those intent on disturbing burial grounds, including protesting and interrupting such activities (occasionally while armed), creating new cemeteries in confidential locations, officially requesting the return of human remains, and filing cease-and-desist lawsuits. Despite their objections, the complete or partial remains of an estimated 300,000 Native Americans were held by repositories in the United States as of 1990. Most of these remains were either originally collected by, or eventually donated to, museums and universities. Inventories filed in the late 20th century showed that three of the largest collections of remains were held at museums, two of them university institutions: the Smithsonian Institution held the remains of some 18,000 Native American individuals, the Hearst Museum at the University of California at Berkeley held approximately 9,900, and the Peabody Museum at Harvard University held some 6,900. A plethora of smaller museums, colleges, and government agencies also held human remains.

The larger repositories had in-house legal counsel as well as a plenitude of experts with advanced degrees, most of whom were ready to argue as to the value of the remains for all of humanity. Lacking such resources, indigenous attempts to regain native remains proved generally unsuccessful for most of the 20th century.
By the 1970s, however, a grassroots pan-Indian (and later pan-indigenous) movement in support of repatriation began to develop.

In crafting arguments for the return of human remains, repatriation activists focused on three issues. The first was moral: it was morally wrong, as well as distasteful and disrespectful, to disturb graves. The second centred on religious freedom, essentially holding that removing the dead from their resting places violated indigenous religious tenets and that allowing institutions to retain such materials amounted to unequal treatment under the law. The third issue was one of cultural property and revolved around the question, “At what point does a set of remains cease being a person and become instead an artifact?”

In part because many of the remains held by repositories had been taken from archaeological contexts rather than recent cemeteries, this last question became the linchpin in the legal battle between repatriation activists and those who advocated for the retention of aboriginal human remains. Native peoples generally held that personhood was irreducible. From this perspective, the disturbance of graves was an act of personal disrespect and cultural imperialism—individuals' bodies were put to rest in ways that were personally and culturally meaningful to them, and these preferences should have precedence over the desires of subsequent generations. In contrast, archaeologists, biological anthropologists, and other researchers generally held (but rarely felt the need to articulate) that personhood was a temporary state that declined precipitously upon death. Once dead, a person became an object, and while one's direct biological descendants had a claim to one's body, such claims diminished quickly over the course of a few generations. Objects, like other forms of property, certainly had no inherent right to expect to be left intact, and, indeed, as mindless materials, they could not logically possess expectations. Thus, human remains were a legitimate focus of study, collection, and display.

These arguments were resolved to some extent by the U.S. Native American Graves Protection and Repatriation Act (NAGPRA; 1990), which laid the groundwork for the repatriation of remains that could be attributed to a specific Native American nation. Important attributes in identifying the decedent's cultural affiliation included the century in which death occurred, the original placement of the body (e.g., fetal or prone position), physical changes based on lifestyle (such as the tooth wear associated with labrets, or lip plugs), and culturally distinct grave goods. Remains that could be attributed to a relatively recent prehistoric culture (such as the most recent Woodland cultures) with known modern descendants (such as the various tribes of Northeast Indians) were eligible for repatriation, as were those from post-Columbian contexts. However, some legal scholars claimed that NAGPRA left unclear the fate of those remains that were so old as to be of relatively vague cultural origin; tribes generally maintained that these should be deemed distant ancestors and duly repatriated, while repositories and scientists typically maintained that the remains should be treated as objects of study.

This issue reached a crisis point with the 1996 discovery of skeletal remains near the town of Kennewick, Wash.
Subsequently known as Kennewick Man (among scientists) or the Ancient One (among repatriation activists), this person most probably lived sometime between about 9,000 and 9,500 years ago, and certainly no more recently than 5,600–6,000 years ago. A number of tribes and a number of scientists laid competing claims to the remains. Their arguments came to turn upon the meaning of “cultural affiliation”: Did the term apply to all pre-Columbian peoples of the territory that had become the United States, or did it apply only to those with specific antecedent-descendant relationships?

The U.S. National Park Service, a division of the Department of the Interior, was responsible for determining the answer to this question. When it issued a finding that the remains were Native American, essentially following the principle that all pre-Columbian peoples (within U.S. territory) were inherently indigenous, a group of scientists brought suit. The lawsuit, Bonnichsen v. United States, was resolved in 2004. The court's finding is summarized in its concluding statement:

Because Kennewick Man's remains are so old and the information about his era is so limited, the record does not permit the Secretary [of the Interior] to conclude reasonably that Kennewick Man shares special and significant genetic or cultural features with presently existing indigenous tribes, people, or cultures. We thus hold that Kennewick Man's remains are not Native American human remains within the meaning of NAGPRA and that NAGPRA does not apply to them.

This finding frustrated and outraged the Native American community. Activists immediately asked legislators to amend NAGPRA so that it would specifically define pre-Columbian individuals as Native Americans. Many scientists countered that such a change would not reverse the need to specifically affiliate remains with an extant nation, and others lobbied for an amendment that would specifically allow the investigation of remains that lacked close affiliation to known peoples.

Economic development

Economic development is the process through which a given economy, whether national, regional, or local, becomes more complex and grows in terms of the income or wealth generated per person. This process is typically accomplished by finding new forms of labour and often results in the creation of new kinds of products. One example of economic development has been the transition from hunting and gathering to a full reliance on agriculture; in this example, the new form of labour comprised the system of sowing and harvesting useful plants, while the new products comprised domesticates such as corn (maize) and cotton. During the 19th century, much of the economic growth of Northern America arose from a shift in which extractive economies, such as farming and mining, were replaced by those that transformed raw materials into consumer goods, as with food processing and manufacturing. In the 20th century a broadly analogous shift from a manufacturing economy to one focused on service industries (e.g., clerical work, entertainment, health care, and information technology) took place.

Economic underdevelopment has been an ongoing problem for many tribes since the beginning of the reservation eras in the United States and Canada. Reservations are typically located in economically marginal rural areas—that is, areas considered to be too dry, too wet, too steep, too remote, or possessing some other hindrance to productivity, even at the time of their creation.
Subsequent cessions and the allotment process decreased the reservation land base and increased the economic hurdles faced by indigenous peoples. Studies of reservation income help to place the situation in perspective: in the early 21st century, if rural Native America had constituted a country, it would have been classified on the basis of median annual per capita income as a "developing nation" by the World Bank.

Although underdevelopment is common in rural Northern America, comparisons of the economic status of rural Indians with that of other rural groups indicate that factors in addition to location are involved. For instance, in 2002 a national study by the South Carolina Rural Health Research Center found that about 35 percent of the rural Native American population in the United States lived below the poverty line; although this was about the same proportion as seen among rural African Americans, less than 15 percent of rural Euro-Americans had such low income levels. Perhaps more telling, rural counties with predominantly Native American populations had less than one-fourth of the bank deposits (i.e., savings) of the average rural county—a much greater disparity in wealth than existed for any other rural group. (Predominantly Hispanic counties, the next lowest in the rankings, had more than twice the deposits of predominantly Native American counties.)

Explanations for the causes of such disparity abound, and it is clear that many factors—geography, historical inequities, nation-within-a-nation status, the blurring of boundaries between collectivism and nepotism, poor educational facilities, the prevalence of post-traumatic stress and of substance abuse, and others—may be involved in any given case. With so many factors to consider, it is unlikely that the sources of Indian poverty will ever be modeled to the satisfaction of all. Nonetheless, there is general agreement on the broad changes that mark the end of destitution. These typically involve general improvements to community well-being, especially the reduction of unemployment, the creation of an educated workforce, and the provision of adequate infrastructure, health care, child care, elder care, and other services.

During the late 20th and early 21st centuries, native nations used a suite of approaches to foster economic growth. Some of these had been in use for decades, such as working to gain official recognition as a nation and the filing of lawsuits to reclaim parts of a group's original territory. Extractive operations, whether owned by individuals, families, or tribal collectives, also continued to play important and ongoing roles in economic development; mining, timber, fishing, farming, and ranching operations were long-standing examples of these kinds of enterprises.

Highway improvements in the 1950s and '60s opened opportunities for tourism in what had been remote areas, and a number of indigenous nations resident in scenic locales began to sponsor cultural festivals and other events to attract tourists. Tribal enterprises such as hotels, restaurants, and service stations—and, more recently, golf courses, water parks, outlet malls, and casinos (the last of these is also discussed below)—proved profitable. At the same time, indigenous families and individuals were able to use traditional knowledge in new commercial ventures such as the production and sale of art. The powwow, a festival of native culture that features dancers, singers, artists, and others, is often the locus at which cultural tourism occurs.
The provision of guide services to hunters and fishers represents another transformation of traditional knowledge that has proven valuable in the commercial marketplace, and ecotourism ventures were becoming increasingly popular among tribes in the early 21st century. Although the tourism industry is inherently volatile, with visitation rising and falling in response to factors such as the rate of inflation and the cost of travel, tourist enterprises have contributed significantly to some tribal economies.

The same transportation improvements that allowed tourists to reach the reservation also enabled tribes to connect better with urban markets. Some tribes chose to develop new industries, typically in light manufacturing. More recent tribal enterprises have often emphasized services that, with the aid of the Internet, can be provided from any location: information technology (such as server farms), accounting, payroll, order processing, and printing services are examples. More-localized operations, such as tribal telecommunications operations and energy companies, have also benefitted from better transportation.

In a reversal of the extractive industries common to rural Northern America, some indigenous nations have contracted to store materials that are difficult to dispose of, such as medical and nuclear waste. For the most part, these projects were not initiated until late in the 20th or early in the 21st century, and they have generally been controversial. Factions within actual or potential host tribes often disagree about whether the storage or disposal of dangerous materials constitutes a form of self-imposed environmental racism or, alternatively, a form of capitalism that simply takes advantage of the liminal geographic and regulatory space occupied by native nations.

While the kinds of economic development noted above are certainly not exhaustive, they do represent the wide variety of projects that indigenous nations and their members had undertaken by the beginning of the 21st century. At that time, mainstream businesses like these represented the numeric majority of indigenous development projects in Northern America, although they were neither the most profitable nor, among nonnatives, the best-known forms of indigenous economic development. Instead, the most important development tool for many communities is the casino.

In 1979 the Seminoles of Florida opened the first Native American gaming operation, a bingo parlour with jackpots as high as $10,000 (U.S.) and some 1,700 seats. The Seminole and other tribes surmounted a number of legal challenges over the next decade, principally suits in which plaintiffs argued that state regulations regarding gaming should obtain on tribal land. The issue was decided in California v. Cabazon Band of Mission Indians (1987), in which the U.S. Supreme Court found that California's interest in the regulation of reservation-based gambling was not compelling enough to abrogate tribal sovereignty. Gaming could thus take place on reservations in states that did not expressly forbid gambling or lotteries. The U.S. Congress passed the Indian Gaming Regulatory Act in 1988; the act differentiated between various forms of gambling (i.e., bingo, slot machines, and card games) and the regulations that would obtain for each.
It also mandated that tribes enter into compacts with state governments; these agreements guaranteed that a proportion of gaming profits—sometimes as much as 50 percent—would be given to states to support the extra burdens on infrastructure, law enforcement, and social services that are associated with casino traffic.

Although some Native American gaming operations have proven extremely profitable, others have been only minimally successful. To a large extent, success in these ventures depends upon their location; casinos built near urban areas are generally able to attract a much higher volume of visitors than those in rural areas and, as a result, are much more profitable. In order to expand their businesses, some tribes have reinvested their earnings by purchasing and developing property that is proximal to cities; others have filed suits claiming land in such areas. Some groups have petitioned the U.S. government for official recognition as tribes, an action that some antigambling activists have complained is motivated by a desire to gain the right to open casinos. In many such cases the group in question has a variety of reasons to press a claim, as well as ample historical documentation to support the request for recognition; in these cases recognition is eventually granted. In other cases, however, claims to indigenous heritage have proved bogus, and recognition has been denied.

International developments

In the early 21st century, while many of the efforts of Native American communities focused by necessity on local, regional, or national issues, others increasingly emphasized their interaction with the global community of aboriginal peoples. The quest for indigenous self-determination received international recognition in 1982, when the United Nations Economic and Social Council created the Working Group on Indigenous Populations. In 1985 this group began to draft an indigenous rights document, a process that became quite lengthy in order to ensure adequate consultation with indigenous nations and nongovernmental organizations. In 1993 the UN General Assembly declared 1995–2004 to be the International Decade of the World's Indigenous Peoples; the same body later designated 2005–2015 as the Second International Decade of the World's Indigenous Peoples.

In 1995 the UN Commission on Human Rights received the draft Declaration of the Rights of Indigenous Peoples. The commission assigned a working group to review the declaration, and in 2006 the group submitted a final document to the Human Rights Council. Despite efforts by many members of the UN General Assembly to block a vote on the declaration, it was passed in 2007 by an overwhelming margin: 144 votes in favour, 11 abstentions, and 4 negative votes (Australia, Canada, New Zealand, and the United States). Indigenous communities in the Americas and elsewhere applauded this event, which they hoped would prove beneficial to their quests for legal, political, and land rights.

Elizabeth Prine Pauls
* * *
Teaching Children the Value of Struggle

By Anne Broderius, West Elementary Principal

Teaching children to move from thinking "this is too hard and I just can't do it" to "this took time and effort, but I did it" is important. Perseverance and enduring through struggles are crucial to learning and being successful in life. People of all ages need to develop these skills.

Perseverance is a character trait that allows a person to continue trying even when things are difficult or seem impossible. It is also a skill that can be taught and practiced. Even though many children learn it on their own through experiences of trial and error, and success and disappointment, the coaching and support they receive from the adults in their world can make a huge impact. Parents and adult mentors should have frequent discussions with children about hard work and perseverance, in which they teach and model how to react to life's disappointments or setbacks. In addition, we can reinforce successes for children by naming perseverance as a quality that truly matters. So instead of saying, "You are so smart," consider saying, "I noticed how hard you worked on that and stuck with it until the end." This will help to foster a positive attitude about hard work, life experiences, and determination.

Here are some tips for fostering perseverance in children.

- Create an environment where it's ok to make mistakes. When children live in a supportive environment where they trust the adults, they are more willing to take risks. When given the opportunity to build skills, meet new challenges, and see the results of their efforts, the value of hard work is instilled.
- Encourage children to try new things and model trying something new yourself. No one is perfect at anything new to them, but with continued practice, even when things get hard, they will see the value of hard work and the rewards of perseverance.
- Share personal experiences of facing situations that require perseverance. Children need to hear about others' failures and life experiences to be willing to take risks themselves.
- Start small and create the conditions for children to experience small successes. Encourage their own inner courage and strength along the way. Let them know ahead of time that they may need to persevere, and keep supporting and coaching as needed.
- Be there for them when they do struggle or fail. Provide support and help them evaluate how to adjust and try again. Work to instill a "never give up" attitude. Recognize the effort, and avoid using rewards when they experience success; instead, use encouraging words that acknowledge effort, hard work, and perseverance.

It should be our goal to help all children maintain positive attitudes that will enable them to keep trying, and to feel proud of each success they experience along the way. We teach children to read, write, and do math, but teaching children how to persevere may be the greatest lesson of all.
What is a Muscle Tear?

A Muscle Tear is a medical condition in which there is a rupture or strain of a muscle, or of the tendons to which the muscle is attached, caused by an injury or by overworking of the muscles and acute muscle fatigue. A Muscle Tear can occur even while doing activities of daily living like gardening or yard work, where you may lift something heavy that results in a tear of the muscles of the shoulder, or what we call a rotator cuff tear. Sportsmen are at much greater risk for Muscle Tears, as they put their muscles into overdrive while participating in competitive sports, especially people involved in football, tennis, rugby, etc. People involved in construction are also at risk for Muscle Tears, as they tend to lift heavy items repetitively throughout the day, which may overwork the muscles and result in a torn muscle.

A Muscle Tear can be partial or complete. A partial muscle tear occurs when only a portion of the muscle fiber is torn while the remaining muscle and tendon are intact. This may cause minimal pain and some functional inability to use the corresponding body part. A complete muscle tear occurs when the entire muscle becomes detached from its tendons and is said to be completely torn. In such cases there will be severe pain and an inability to use the affected body part. There may also be bruising seen at the region of the affected muscle, and at times bleeding. Treatment for Muscle Tears depends on the severity of the tear and ranges from icing and elevating to, in extreme cases, surgery.

What are the Classifications or Grading of a Muscle Tear?

Muscle Tears are classified into three categories depending on the severity of the rupture:

Grade I Muscle Tear: In this category, the muscle is just overstretched and is not detached from the tendon at all. This may result in mild pain in the region. There may also be mild swelling in the area of the muscle.

Grade II Muscle Tear: In this category, there is a partial tear of the muscle, and some part of the muscle is detached from its tendons. Symptoms include pain and swelling. It may also be difficult for the individual to use that particular body part normally.

Grade III Muscle Tear: This category covers the severe forms of Muscle Tears. In this category, there is a complete tear of the muscle and complete detachment of the muscle from its tendons. Symptoms of Grade III muscle tears are severe pain and swelling along with severe tenderness and bruising. The patient is unable to use the affected region in any way.

What are the Causes of Muscle Tear?

The root cause of muscle tears is injury to the muscle. These injuries can occur at any time. Some of the possible causes of muscle tears are:

- Not warming up adequately before a physically strenuous exercise or workout
- Poor flexibility of the body
- Overexertion of the body

Some of the other causes of Muscle Tears are:

- Slip and fall accidents
- Jumping from a height
- Running excessively
- Throwing, as in baseball
- Heavy lifting
- Poor posture
- Sporting activities without adequate technique

What are the Symptoms of Muscle Tear?

Some of the symptoms pointing to a Muscle Tear are:

- Swelling over the affected region, along with bruising and erythema
- Severe tenderness at the site of the injury or the affected muscle
- Pain at rest
- Pain with activity or use of the muscle
- Muscle weakness
- Inability of the muscle to function in any way

How is a Muscle Tear Diagnosed?

In order to diagnose a Muscle Tear, the treating physician will first take the patient's medical history to inquire what activity may have caused the patient to present with the symptoms. The physician will then conduct a physical examination of the injury site, looking for areas of tenderness and swelling. Here it is important to determine whether the Muscle Tear is complete or partial, as treatment and recovery periods differ between partial and complete tears. This can be done by taking an MRI of the injured site, which can clearly show whether the patient is suffering from a partial or a complete tear of the muscle.

How is a Muscle Tear Treated?

The treatment of a Muscle Tear depends on the severity of the tear and whether the tear is partial or complete. For Grade I Muscle Tears, conservative treatment is recommended, in the form of pain relievers such as Tylenol (acetaminophen) or NSAIDs such as ibuprofen, along with resting the muscle for a few days and abstaining from any sporting activity or the activity that initially started the symptoms. NSAIDs are contraindicated in patients who are on blood thinners or have a prior history of kidney issues or gastrointestinal tract problems. Apart from this, icing the injured area for 15 to 20 minutes two to three times a day is also beneficial. Applying heat packs is also quite helpful for Grade I Muscle Tears, but make sure that ice and heat are not applied simultaneously, as this may result in the development of blisters.

Grade II Muscle Tears can also be treated conservatively with the treatments mentioned above, but they take a little longer than Grade I Muscle Tears to heal. Surgery is recommended for treating Grade III Muscle Tears, as in such cases there is complete detachment of the muscle from its tendons and they need to be reattached.

What is the Recovery Period for Muscle Tears?

The recovery period for a Muscle Tear depends on the severity of the injury. For Grade I and II Muscle Tears, the patient can gradually return to normal activities within three to five weeks. In cases of Grade III Muscle Tears, where surgery is required to correct the tear, recovery may take up to six months, with physical therapy after surgery and then a gradual return to normal activities. With treatment, the majority of patients with a Muscle Tear recover completely, but for that the patient needs to be diligent with medical followup and adhere to the instructions of the treating physician and physical therapist, so as to hasten recovery and return to normal activities as soon as possible.
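For readers who want the grading scheme above at a glance, it can be captured in a small lookup table. The following minimal Python sketch paraphrases this article's own descriptions; it is illustrative only, not a clinical standard.

```python
# A small lookup table summarizing the three-grade scheme described above.
# Fields and phrasing follow the article, not any formal clinical guideline.

MUSCLE_TEAR_GRADES = {
    1: {"tear": "overstretched, no detachment from tendon",
        "symptoms": "mild pain, mild swelling",
        "treatment": "rest, ice, over-the-counter pain relief"},
    2: {"tear": "partial detachment from tendon",
        "symptoms": "pain, swelling, reduced function",
        "treatment": "conservative care, longer recovery"},
    3: {"tear": "complete detachment from tendon",
        "symptoms": "severe pain, bruising, loss of function",
        "treatment": "surgery, then physical therapy"},
}

for grade, info in sorted(MUSCLE_TEAR_GRADES.items()):
    print(f"Grade {grade}: {info['tear']} -> {info['treatment']}")
```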
An unknown visitor to Disneyland was sick with the measles virus and spread the virus to other unvaccinated individuals, one of whom traveled through airports and other states. The disease is highly contagious, and the Centers for Disease Control estimate that 90% of the people close to an infected person who are not immune will also become infected. There were no measles cases originating in the United States from 2000 until 2011, but there have been an increasing number of outbreaks over the last few years. There was a dramatic uptick in measles cases in 2013, and the trend continues. From January 1 to February 13, 2015, there were 141 reported cases in 17 states, with the majority of cases linked to the Disneyland transmissions.(1) There have also been outbreaks of pertussis (whooping cough) and mumps, including outbreaks on college campuses.

Why are preventable disease outbreaks happening? Increased global travel makes it more likely that an individual who is not yet symptomatic but still contagious can bring a disease into the country, while another factor makes it more likely that the disease spreads: the growing number of individuals in the United States, especially children, who are unvaccinated against these common childhood illnesses.

Why this backlash against vaccination? There are many reasons why parents choose not to vaccinate their children.

- SIDE EFFECTS: Some individuals believe that vaccines cause serious and sometimes fatal side effects, and parents do not like the idea of introducing toxins into their children's systems. Vaccine side effects are reported through a national registry, and serious adverse reactions are extremely rare. Most adverse reactions are minor discomforts.
- INGREDIENTS: By November 2009, in the face of growing public concern, the mercury-based preservative thimerosal had been removed from all US vaccines with the exception of certain tetanus, meningococcal and influenza vaccines. The ingredients in current vaccines are safe in the amounts used. This is supported by multiple medical organizations, both national and international, both government and private.(3) Paul Offit, MD, notes that "children are exposed to more bacteria, viruses, toxins, and other harmful substances in one day of normal activity than are in vaccines."(4) There is also some concern that some vaccines may contain materials that are morally objectionable. In the 1960s some vaccines were made from the cells of aborted fetuses; however, this is no longer the case. Some, however, are made from animal products or human albumin.
- LINK WITH AUTISM?: Another concern is that vaccines cause autism. This persistent misconception stemmed from a 1998 paper written by British physician Dr. Andrew Wakefield. His research has been discredited and the paper retracted, as Dr. Wakefield was found to have fabricated data for financial gain. His license to practice medicine in the UK was revoked.
- HERD MENTALITY: Unvaccinated adults and children may also be relying on the fact that so many people have been vaccinated that the diseases covered by vaccines are rare, so the disease will not spread and they will rarely come in contact with the virus. As the number of unvaccinated individuals increases, the disease spreads more readily, as witnessed by the current measles outbreak and the outbreak in the Amish community. It is true that no vaccine is perfect and it is possible to get the disease even if vaccinated.
However, these instances are rare, and it is likely that a vaccinated individual would have partial immunity and a milder case of the disease.

The decision not to vaccinate involves complicated considerations. The unvaccinated individual may infect other individuals, including young children under the age of 1 who are too young to be vaccinated, or individuals with a compromised immune system who cannot be vaccinated. These people are not making an active choice to be unvaccinated, and yet they could become very sick or die because of someone else's choice. Trending conversations are now about rights: the balance between the rights of the individual and the good of society. Paul Offit asks in a recent blog post, "Is it your right to catch and transmit a potentially fatal infection?" Is it?

(3), (4): vaccines.procon.org, last updated on 02/06/15
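The arithmetic behind the "herd" logic above can be sketched in a few lines. The following Python snippet is a back-of-the-envelope illustration, assuming the commonly cited basic reproduction number (R0) of roughly 12–18 for measles and the CDC's ~90% attack-rate figure quoted earlier; the contact count is hypothetical.

```python
# Back-of-the-envelope herd-immunity arithmetic (illustrative only).
# R0 for measles is commonly cited as 12-18; the herd-immunity
# threshold is 1 - 1/R0.

def herd_immunity_threshold(r0: float) -> float:
    """Fraction of a population that must be immune to stop sustained spread."""
    return 1.0 - 1.0 / r0

for r0 in (12, 15, 18):
    print(f"R0 = {r0:2d} -> immunity threshold = {herd_immunity_threshold(r0):.0%}")

# Expected infections among close contacts, using the ~90% figure for
# non-immune contacts quoted in the article above.
susceptible_contacts = 20   # hypothetical number of non-immune close contacts
attack_rate = 0.90          # CDC estimate for measles among the non-immune
print(f"Expected infections: {susceptible_contacts * attack_rate:.0f}")
```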
International Society for Horticultural Science

We have characterized the effects of individual wavelengths of light on single-leaf photosynthesis, but we do not yet fully understand the effects of multi-wavelength radiation sources on growth and whole-plant net assimilation. Studies with monochromatic light by Hoover, McCree and Inada nearly a half century ago indicated that blue and cyan photons are used less efficiently than orange and red photons. Contrary to these measurements, studies in whole plants have found that photosynthesis often increases with an increasing fraction of blue photons. Plant growth, however, typically decreases as the fraction of blue photons increases above 5 to 10%. The dichotomy of increasing photosynthesis and decreasing growth reflects an oversight of the critical role of radiation capture (light interception) in the growth of whole plants. Photosynthetic efficiency is measured as quantum yield: moles of carbon fixed per mole of photons absorbed. Increasing blue light often inhibits cell division and cell expansion, and thus reduces leaf area. The resulting thicker leaves have higher photosynthetic rates per unit area but reduced radiation capture. This blue-light-induced reduction in photon capture is usually the primary reason for reduced growth, in spite of increased photosynthesis per unit leaf area. This distinction is critical when extrapolating from single leaves to plant communities.

Bugbee, Bruce, "Toward an optimal spectral quality for plant growth and development: The importance of radiation capture" (2016). Plants, Soils, and Climate Faculty Publications. Paper 763.
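The abstract's central point, that whole-plant growth depends on radiation capture as well as per-area photosynthesis, can be illustrated with a toy calculation. The numbers in this Python sketch are invented for illustration and are not from the publication.

```python
# Toy illustration (numbers made up): whole-plant assimilation scales with
# per-area photosynthetic rate times the leaf area capturing light, so a
# blue-light-driven drop in leaf area can outweigh a higher per-area rate.

def whole_plant_assimilation(rate_per_area: float, leaf_area: float) -> float:
    """Relative net assimilation ~ photosynthetic rate x radiation capture."""
    return rate_per_area * leaf_area

low_blue  = whole_plant_assimilation(rate_per_area=1.00, leaf_area=1.00)
high_blue = whole_plant_assimilation(rate_per_area=1.15, leaf_area=0.80)

print(f"low blue fraction:  {low_blue:.2f}")   # baseline
print(f"high blue fraction: {high_blue:.2f}")  # +15% rate, -20% area -> net loss
```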
Things you'll need for this activity: rain puddles, plus chalk and a measuring tape or ruler for the extension activities.

What is the learning in this activity?

Playing outside and noticing changes in the environment helps children to develop observation skills and spatial awareness. As your child explores the size of the puddle and whether they can jump to the other side, they are exploring space and distance, while also learning how to move their bodies by jumping, stomping and balancing.

The weather has changed and winter has arrived. Often when this happens and the rain rolls in, it is hard to find things to do outside to keep your child busy and active. Rain puddles are just made for jumping. After the rain has finally stopped (or even during the rain!) and you can go outside, see how many different puddles you and your child can find. There will be both big and little puddles. Try to predict which ones will be easy to jump over without having to take a run up. You could even try jumping over puddles with your child.

Questions to ask: "Are there any puddles that are so large you will only make the other side if you take a running jump?" "Can you find puddles that all of the family can jump over? Are there others that are so big that only some of the family can jump across?" "How many times can you jump over and back before you are so tired you need a rest?"

Extension activities:
- Trace around the puddle with chalk. What happens when the puddle dries up?
- Watch the weather or look at an online weather map to predict when it will be a good day for puddles.
- Invite your child to collect a few different bits of nature—a pebble, a leaf, a pinecone, a feather—then bring them to the puddle. Which of the items will sink and which will float? Why?
- Find a measuring tape or ruler and help your child measure the puddle. How wide is it? How deep is it? How much water do you think is in this puddle?
- Ask your child to look carefully into the puddle. What do they see? Do they see a reflection? Is there anything living in the puddle?

We'd love to see your creations and home based play! Share with us on Facebook at @yrfamilies #YarraRangesPlay

You can contact the YRC Family & Children's Services team on 1300 368 333 or by email at email@example.com

Download a printable copy (PDF, 336KB)
The most important producers of plant biomass in the ocean are diatoms. Because they rely on silica rather than calcium carbonate to build their shells, they were previously considered winners of ocean acidification – a chemical change in seawater caused by the uptake of CO2 that makes calcification more difficult. In a study published today in the journal Nature, scientists at GEOMAR Helmholtz Centre for Ocean Research Kiel show that diatoms, which belong to the plankton, are also affected. The analysis suggests that increasing acidification could drastically reduce diatom populations.

While calcifying organisms in particular struggle to form their shells and skeletons in more acidic seawater, diatoms were previously thought to be less vulnerable to the effects of ocean acidification – a chemical change triggered by the uptake of carbon dioxide (CO2). The globally widespread tiny diatoms use silica, a compound of silicon, oxygen and hydrogen, as a building material for their shells. That diatoms are nevertheless threatened has now been demonstrated for the first time by researchers from GEOMAR Helmholtz Centre for Ocean Research Kiel, the Institute of Geological and Nuclear Sciences New Zealand and the University of Tasmania in a study published in the journal Nature. For their study, they linked an overarching analysis of various data sources with Earth system modeling. The findings provide a new assessment of the global impact of ocean acidification.

As a result of ocean acidification, the silica shells of diatoms dissolve more slowly. This is not an advantage: it means that sinking diatoms now reach deeper water layers than before their shells chemically dissolve and are converted back to dissolved silica. Consequently, the nutrient is increasingly withdrawn from the global cycle and becomes scarcer in the light-flooded surface layer, where it is needed to form new shells. This causes a decline in diatoms, the scientists report in their current publication. Diatoms contribute 40 percent of the production of plant biomass in the ocean and are the basis of many marine food webs. They are also the main driver of the biological carbon pump that transports CO2 to the deep ocean for long-term storage.

"Using an overarching analysis of field experiments and observational data, we wanted to determine how ocean acidification affects diatoms on a global scale. Our current understanding of the ecological effects of ocean change is largely based on small-scale experiments, i.e., from a particular place at a particular time. These findings can be deceptive if the complexity of the Earth system is not taken into account. Our study uses diatoms as an example to show how small-scale effects can lead to ocean-wide changes with unforeseen and far-reaching consequences for marine ecosystems and matter cycles. Since diatoms are one of the most important plankton groups in the ocean, their decline could lead to a significant shift in the marine food web or even a change in the ocean's role as a carbon sink." – Dr. Jan Taucher, marine biologist

The meta-analysis examined data from five mesocosm studies conducted between 2010 and 2014 in different ocean regions, from Arctic to subtropical waters. Mesocosms are a type of large-volume, oversized test tube in the ocean, with a capacity of tens of thousands of liters, in which changes in environmental conditions can be studied in a closed but otherwise natural ecosystem.
For this purpose, the water enclosed in the mesocosms was enriched with carbon dioxide to correspond to future scenarios with moderate to high increases in atmospheric CO2 levels. For the present study, the chemical composition of organic material from sediment traps was evaluated as it sank through the water contained in the experimental containers over the course of the experiments, which lasted several weeks. Combined with measurements from the water column, an accurate picture of biogeochemical processes within the ecosystem emerged. The findings obtained from the mesocosm studies could be confirmed using global observational data from the open ocean. In line with the results of the analysis, they show a lower dissolution of the silica shells at higher seawater acidity. The resulting data sets were used to run simulations in an Earth system model to assess the ocean-wide consequences of the observed trends.

"By the end of this century, we already expect a loss of up to ten percent of diatoms. That's immense considering how important they are to life in the ocean and to the climate system," Dr. Taucher continued. "However, it is important to think beyond 2100. Climate change will not stop abruptly, and global effects in particular take some time to become clearly visible. Depending on the amount of emissions, our model predicts a loss of up to 27 percent of silica in surface waters and an ocean-wide decline in diatoms of up to 26 percent by 2200 – more than a quarter of the current population."

This finding stands in sharp contrast to the current state of ocean research, which sees calcifying organisms as losers and diatoms as beneficiaries of ocean acidification. Professor Ulf Riebesell, marine biologist at GEOMAR and head of the mesocosm experiments, adds: "This study once again highlights the complexity of the Earth system and the associated difficulty of predicting the consequences of man-made climate change in their entirety. Surprises of this kind remind us again and again of the incalculable risks we run if we do not counteract climate change swiftly and decisively."

Solutions for energy recovery from wastewater and organic waste

From May 30 to June 3, the sales team of biogas specialist Weltec Biopower will be available in Hall A4, Stand 217, to answer all questions relating to the construction and retrofitting of anaerobic energy plants. The range includes proven processes from the field of biogas technology. The high savings potential of these processes is demonstrated by the modernization of the municipal wastewater treatment plant in Bückeburg, Germany, which serves 33,000 inhabitants. Since Weltec Biopower switched the plant to anaerobic sludge stabilization in 2021, operation at full load has become significantly more economical. As general contractor, the company was responsible for the construction of the wastewater treatment system at the municipal sewage treatment plant. In addition to the earthworks, the construction of the foundation and the electrical cabling, the work included the construction of a new static sludge thickener, a machine room for the combined heat and power plant, the control and pumping station, and a stainless steel digester with a gas storage roof. Thanks to anaerobic wastewater treatment, the sludge volume has dropped by 35 percent, resulting in a significant reduction in transport and disposal costs. In addition, the digester gas produced can now generate around 465,000 kWh of electricity at full load.
This allows the operator to meet about 40 percent of its electricity needs and save two-thirds of its electricity costs.

"In view of the new greenhouse gas reduction targets and the sharp rise in energy prices, an anaerobic stage is an economically attractive solution for wastewater treatment plant operators, one which also benefits from public subsidies. Ultimately, the combination of wastewater treatment, power and heat generation, and climate protection enables more efficient operation, especially for small and medium-sized wastewater treatment plants." – Jens Albartus, Managing Director

How these goals can be achieved with organic waste is demonstrated by a WELTEC plant in Piddlehinton, southwest England. Here, a mix of food waste, expired food from supermarkets and organic waste is fed to the biogas plant. In addition to the substrate mix, the technical approach is also special: before feeding and shredding, a de-packaging machine separates the food from the packaging. Another efficiency bonus: the waste heat from the cogeneration plant is sold to a nearby feed producer, which also uses most of the electricity. The biogas plant operator feeds the excess electricity directly into the power grid, generating further revenue. The digestate from the process meets the requirements of the British industry standard PAS 100, so local farmers can use it as fertilizer. Following a capacity expansion in 2014 from 20,000 t of substrate input per year to 30,000 t, the group installed an additional digester and storage tank, as well as GasMix blending systems and a separation unit. A plant with this equipment would also support a conversion to biomethane production.

Therapeutics and bioinsecticides: production by spider venom

The venom of a single spider can contain up to 3,000 components. The components, mostly peptides, can be used to develop promising drug candidates for the treatment of diseases. Spider venom can also be used in pest control, as a biological pesticide. A team of researchers from the Fraunhofer Institute for Molecular Biology and Applied Ecology IME and the Justus Liebig University Giessen is focusing on native spiders and their venom mix, which have received little attention to date. The research results on the biology of the toxins – especially on the venom of the wasp spider – have been published in scientific journals.

Spiders make many people uncomfortable, and some are even afraid of the eight-legged creatures. At the Fraunhofer Institute for Molecular Biology and Applied Ecology IME in Giessen, however, they are welcome. Here, biochemist Dr. Tim Lüddecke and his team are conducting research on spider toxins.

"Spider toxins are a largely untapped resource; this is partly due to their sheer diversity – some 50,000 species are known. There is a lot of potential in spider venom for medicine, for example in researching disease mechanisms." – Dr. Tim Lüddecke, head of the new "Animal Venomics" research group

For example, it is possible to study in the laboratory how individual toxins act on the pain receptors of nerve cells. The venom cocktail of the Australian funnel-web spider is particularly promising. It is assumed that it can be used to treat neuronal damage after strokes and to make hearts for organ transplants last longer. Other drug candidates are of interest for use as antibiotics or painkillers. "This is a very young field of research. The substances have been discovered and described, but they are not yet in the preclinical stage," Lüddecke said.
Pesticide research is a different story. Spiders stun insects with their venom and then eat them. Because the toxins are highly effective against insects, they provide a good basis for biopesticides suitable for crop pest control. Research to date has focused on the toxins of the very large or potentially dangerous species that live in the tropics. Native, small and harmless spiders have received little attention. “Most spiders in Central Europe are no more than two centimeters in size, and their venom yields were not sufficient for experiments. But now we have analytical methods precise enough to study even the small amounts from the previously neglected majority of spiders,” explains Lüddecke. The working group at the Giessen Bioresources branch of the Fraunhofer IME is devoting itself to these species as part of a research project, collaborating with research teams from the Justus Liebig University in Giessen, among others. The work is funded by the LOEWE Center for Translational Biodiversity Genomics (LOEWE-TBG) in Frankfurt am Main. The scientists are paying particular attention to the wasp spider (Argiope bruennichi), which owes its name to its striking wasp-like coloration. They have succeeded in decoding its venom, identifying numerous novel biomolecules. The research findings were published in the journal Biomolecules. New biomolecules from wasp spider venom Spider venoms are highly complex and can contain up to 3000 components. The venom of the wasp spider, by contrast, contains only about 53 biomolecules. It is heavily dominated by high-molecular-weight components, including so-called CAP proteins and other enzymes. As in other spider venoms, knottins are present – but these make up only a small part of the total mixture. Knottins are a group of neurotoxic peptides that are robust to chemical, enzymatic, and thermal degradation thanks to their knot-like structure. These molecules could therefore be administered orally as a component of drugs without being digested in the gastrointestinal tract. They can thus exert their effects fully, which is why they offer great potential for medicine. In addition, knottins bind specifically to ion channels. “The more specifically a molecule docks onto its target molecule, attacking only a single type of ion channel, the fewer side effects it triggers,” explains Lüddecke. Moreover, even in small amounts, the knottins affect the activity of the ion channels, i.e., they are effective at low concentrations. As a result, derived drugs can be administered in low doses. The combination of these properties is what makes spider venoms so interesting for science. The project partners also discovered molecules in the wasp spider’s venom that are similar in structure to neuropeptides, which are responsible for transporting information between nerve cells. “We have found novel families of neuropeptides that we have not previously seen in other spiders. We suspect that the wasp spider uses them to attack the nervous system of insects. It has been known for some time that neuropeptides in the animal kingdom are frequently converted into toxins in the course of evolution,” says the researcher. Replicating toxins in the lab Since the toxin yield of small spiders is low, the researchers extract the venom glands and sequence the mRNA they contain. From the gene sequences, the toxins can be decoded.
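To make the decoding step concrete in principle: once the mRNA of a venom gland has been sequenced, the encoded peptide can be read off codon by codon using the standard genetic code. The sketch below is a minimal illustration of that mapping; the mRNA fragment is invented for the example and is not a real wasp spider toxin sequence:

    from itertools import product

    # Minimal sketch of the decoding step described above: translating venom-gland
    # mRNA into the peptide it encodes, using the standard genetic code.
    BASES = "UCAG"
    AMINO_ACIDS = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
    CODON_TABLE = {"".join(codon): aa
                   for codon, aa in zip(product(BASES, repeat=3), AMINO_ACIDS)}

    def translate(mrna: str) -> str:
        """Translate an mRNA string codon by codon, stopping at the first stop codon."""
        peptide = []
        for i in range(0, len(mrna) - 2, 3):
            aa = CODON_TABLE[mrna[i:i + 3]]
            if aa == "*":  # stop codon marks the end of the coding sequence
                break
            peptide.append(aa)
        return "".join(peptide)

    # Hypothetical fragment, not a real toxin gene:
    print(translate("AUGUGCAAAGGAUGUUAA"))  # -> "MCKGC"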
The venom profile of the wasp spider is now available in its entirety, and the next step is to produce the relevant components. For this, the gene sequence is introduced into a bacterial cell using biotechnology; the cell then produces the toxin. “We are essentially building genetically modified bacteria that produce the toxin on a large scale.” Lüddecke and his team have been able to mass-produce the main component of the wasp spider toxin, the CAP protein. The first functional studies will start soon. Venom of male and female spiders differs In another review paper, the biochemist, in cooperation with colleagues at the Justus Liebig University of Giessen and researchers at the University of the Sunshine Coast in Australia, was able to deduce that spider venoms are very dynamic and that many influences shape their composition and functionality. “The dynamics of spider venom have been completely underestimated. The biochemical repertoire is critically influenced by life stage, habitat, and especially sex. Even the venom cocktail of juveniles and adults is not necessarily identical. It is the interaction of the many components, rather than the effect of a single toxin, that makes spider venom so effective. Through their interactions, the components increase their effectiveness,” the researcher sums up. Method for the determination of legionella in water New method for the fully automated determination of the concentration of legionella in a water sample within a few hours The hygienic necessity to control the concentration of legionella in technical water systems from which aerosols can be discharged leads to the problem that the cultivation method (ISO 11731:2017) used for this purpose only provides reliable results after a delay of 7-12 days. On this basis, necessary measures can only be taken and their success verified with a considerable time delay. Rapid tests currently available on the market either do not correlate reliably with the accredited cultivation method or require time-consuming preparation steps. Some rapid tests provide highly specific detection for single Legionella species, but not for all Legionella species in a water sample (Legionella spp. = species pluralis). The newly developed measuring device INWATROL L.nella+ from Inwatec is based on measuring the metabolic activity of living cells and reliably determines the parameter Legionella spp. from a water sample within a few hours. The measuring device is directly connected to the technical water system with an automatic and self-disinfecting sample feed, including self-disinfection of the water contained in the measuring cell after the measurement is completed. This enables the plant operator to determine the hygienic water quality continuously and safely. In addition to directly verifying the success of the measures carried out, it is also possible, for example, to control biocide dosing according to demand. The hygienic relevance of the spread of pathogenic Legionella via aerosols from technical water systems such as evaporative cooling systems and cooling towers has led to the creation of technical hygiene guidelines in many countries. In Germany, VDI 2047 Parts 2 and 3, generally accepted technical rules for ensuring the hygienic operation of evaporative cooling systems and cooling towers, came into force for the first time in 2015.
In addition, in many countries the tolerable concentration of legionella in the circulation water of the respective plants is limited by law. In Germany, the forty-second ordinance for the implementation of the Federal Immission Control Act (Ordinance on Evaporative Cooling Systems, Cooling Towers and Wet Separators – 42nd BImSchV) came into force on 19 August 2017 and also covers wet separators. So far, the basis for hygiene control has always been the determination of the concentration of legionella in the water by cultivation according to ISO 11731:2017, with system-dependent threshold values. In this cultivation method, cell division produces visible and therefore countable colonies. Compared to other bacterial species, Legionella bacteria divide relatively slowly, so the results of the measurement are only available after 7-12 days; in some cases, further investigations to confirm suspicious colonies follow. For the operator of a plant subject to monitoring obligations, this means significantly delayed control of the hygiene status. Furthermore, the effectiveness of any necessary measures can only be determined with a long delay. Additional rapid tests for estimating the contamination of water with Legionella are available, e.g. based on immunological reactions (antibody reaction), detection of genetic material (PCR) or fluorescence microscopy. The limitations of these rapid tests lie in live/dead quantification, comparability to the culture method, and complex sample preparation. The newly developed and patented automatic measuring device INWATROL L.nella+ allows the reliable and continuous determination of the parameter Legionella species pluralis, with high correlation to the cultivation method according to ISO 11731:2017, within a few hours and without further preparation steps by the user. 2. Rapid test for the fully automated determination of Legionella species pluralis 2.1 Measuring principle The detection of metabolically active Legionella bacteria is based on a non-specific enzymatic conversion of a non-polar fluorescein acid ester, which passes through the cell membrane of living cells only and is converted into color-active fluorescein in the cell interior. The increase in fluorescence as a function of time is directly proportional to the number of living cells and is converted into colony-forming units per 100 ml. Thanks to a combined heat and pH pretreatment and the measuring temperature, which is high compared to that of the cultivation method, the accompanying flora is killed. The measurement is performed undiluted in a sample volume of approx. 350 ml. In contrast to the cultivation method, the measurement is not significantly influenced by accompanying flora or by the high measuring inaccuracy caused by strong dilution. 2.2 Continuous, automated measurement For continuous measurements, the measuring device is directly connected to a water system. A thermally self-disinfecting sampling line ensures that no reproduction of legionella in the supply line affects the measurement result. Ideally, the sampling tap is in continuous operation to exclude stagnation of water between two measurements. The measuring cell in the device is rinsed several times during filling. After the rinsing process is completed, the combined heat and pH pretreatment starts. When the pretreatment is completed, the measuring cell cools down to the measuring temperature and the measurement begins.
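The conversion from fluorescence kinetics to a legionella concentration described in section 2.1 can be sketched in a few lines. The readings and the calibration factor below are hypothetical, not values from the INWATROL L.nella+:

    import numpy as np

    # Sketch of the conversion described in section 2.1: the slope of the
    # fluorescence signal over time is taken as proportional to the number of
    # metabolically active cells and mapped to CFU/100 ml via a calibration
    # factor. Readings and factor are hypothetical, not instrument values.
    time_h = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5])        # measurement times in hours
    fluorescence = np.array([102, 158, 210, 266, 318, 374])  # arbitrary fluorescence units

    slope_per_h, _ = np.polyfit(time_h, fluorescence, 1)     # linear fit: signal vs. time
    CAL_FACTOR = 25.0  # hypothetical calibration: CFU/100 ml per unit of slope

    cfu_per_100ml = slope_per_h * CAL_FACTOR
    print(f"Slope: {slope_per_h:.1f} units/h -> approx. {cfu_per_100ml:.0f} CFU/100 ml")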
The measuring cell is thermally disinfected before the device is filled again for the follow-up examination and is then ready for the next measurement. Usually, a sampling tap is installed directly at the sampling point, upstream of the sampling line. This tap can be used to take microbiological samples at the time of filling the measuring cell or at any other time, e.g. for further validation measurements. 2.3 Automated measurement of manually loaded samples The continuous measuring operation can be interrupted for manual feeding of further water samples via the filling funnel. For cleaning, rinsing and filling the measuring cell, only the valve position on the device has to be changed. When filling is completed, the valve is returned to its original position, and the measuring device switches back to automatic mode once the measurement is completed. The measuring procedure itself does not differ from the automatic mode. 2.4 Cultivation according to ISO 11731:2017/UBA The cultivation method uses several preparations with different dilution and pretreatment stages (heat or acid). The aim is to obtain evaluable results for both low and high levels of Legionella. For the result, the preparation with the highest number of confirmed Legionella colonies is used (provided the measurement accuracy/number of colonies is sufficiently high). The limits of the accuracy of the cultivation method are mainly due to the possible influence of the accompanying flora, i.e. other microorganisms which can suppress the growth of the legionella or overgrow their colonies. Furthermore, bacteria are particles in a water sample and are not homogeneously distributed. Therefore, when taking small volumes from the sample bottle, inaccuracies may occur due to the sometimes high dilution factors. During cultivation, living but non-culturable cells in the so-called VBNC (viable but non-culturable) status are not detected. Many Legionella in a coherent agglomerate, e.g. formed by propagation within an amoeba, appear and are counted as only one colony during cultivation (see Lindner, Hahn: Microbiological analyses of the cooling water according to the 42nd BImSchV, p. 74, VGB PowerTech 9, 2018). 3. Examples of application and correlation to the culture method The INWATROL L.nella+ is being used in various practical applications. Case studies include operation in the following plants: 3.1 Monitoring of the circulation water in the cooling tower of a coal-fired power plant Challenges for the measuring mode: changing operating conditions due to load changes between full load, partial load and operation without load, at varying flow rates (automatic sampling directly from the line behind the main cooling water pump) and circuit water temperatures; increased influence of VBNC cells, especially at low circuit water temperatures. Stable measuring operation has been achieved over several months. The intermittent influence of VBNC cells can be successfully suppressed in the instrument by adapting the automated pretreatment to the main cooling water. 3.2 Monitoring of the circulating water in the evaporative cooling system of a starch factory Challenges for the measuring operation: outdoor location of the instrument (wall mounting) with strongly changing ambient temperatures; at times heavy solid matter input into the circulation water with a high organic load. Stable measuring operation over several months was achieved. In particular, the influence of the biocide treatment on the concentration of legionella could be demonstrated directly.
When changing from a non-oxidizing to an oxidizing biocide, a directly measurable effect on both the concentration of the legionella and the reaction speed could be observed. 3.3 Monitoring the circulation water of a metal casting house Challenge for the measuring operation: heavy contamination of the water with inorganic and organic impurities (casting oil). Despite strong fluctuations in the water quality, reliable measurements have been achieved over a period of several months. Casting plants are often equipped with a hot water storage tank. Depending on the requirements of the casting plant(s), the temperature, the hydraulic retention time (stagnation) and the load of organic and inorganic contamination fluctuate strongly, with a significant influence on the reproduction rate of legionella. 3.4 Monitoring the drinking water network of a beverage manufacturer Challenges for the measuring operation: reliable detection of low and increasing concentrations of legionella at changing drinking water temperatures in the pipeline network; suppression of the influence of VBNC cells on measurement results, especially at low water temperatures. In this characteristically low-nutrient and solid-free water, fluctuating Legionella contamination could be reliably detected over several months, depending on the consumption structure and temperatures in the pipeline network. 3.5 Hygienic monitoring of different cooling systems of a food-producing company using a laboratory device Challenges for the measuring operation: manual application of cooling water samples of differing quality; disinfection of the feed funnel before sample preparation; keeping the work effort for manual samples, including result evaluation, low. Reliable, automated adjustment of the pretreatment parameters in the device could be ensured over several months, even with differently buffered and pre-contaminated water samples. Both the drinking water samples (monitoring of the make-up water for the cooling systems) and the cooling water samples showed a good correlation to the cultivation method according to UBA, despite clearly different concentration levels. 3.6 Correlation of the rapid test INWATROL L.nella+ with the cultivation method The rapid test was correlated with the cultivation method according to ISO 11731:2017 over a large number of measurements. Sampling, sample transport, and the preparation and evaluation of the measurement results were carried out in accordance with the current recommendation of the Federal Environment Agency (UBA) for sampling and detection of Legionella in evaporative cooling systems, cooling towers and wet separators. Validation measurements were made with different accredited laboratories. In order to obtain a reliable qualitative comparison between the rapid test and the cultivation method, the following measurements were carried out in only one accredited laboratory (IWW Rheinisch-Westfälisches Institut für Wasser Beratungs- und Entwicklungsgesellschaft mbH, D-45476 Mülheim an der Ruhr). The correlation to the cultural preparations carried out in the laboratory can be rated as very high overall. Two devices showed significant short-term deviations from the laboratory results in the form of additional findings. Here, the influence of VBNC cells on the measurement result of the INWATROL L.nella+ rapid test was investigated.
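To illustrate what such a correlation assessment can look like, the sketch below computes a Pearson correlation coefficient on log10-transformed paired counts, a common choice when concentrations span several orders of magnitude. All values are invented for illustration and do not reproduce the validation data discussed here:

    import numpy as np

    # Sketch of quantifying the agreement between rapid test and cultivation:
    # Pearson correlation of log10-transformed paired counts. The paired values
    # are hypothetical and not taken from the validation campaign above.
    rapid_test = np.array([120, 850, 3200, 15000, 48000])  # CFU/100 ml, device
    culture = np.array([100, 900, 2800, 17000, 52000])     # CFU/100 ml, laboratory

    log_rapid, log_culture = np.log10(rapid_test), np.log10(culture)
    r = np.corrcoef(log_rapid, log_culture)[0, 1]
    print(f"Pearson r (log10 scale): {r:.3f}")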
Metabolic activity measurements using fluorescein diacetate are used in microbiological testing, alongside other methods (membrane integrity, protein synthesis (FISH), intact polar membrane lipid analysis, cell extension (“direct viable count”)), for the detection of VBNC bacteria. This can be an additional benefit for the operator, because recontamination of water systems with Legionella can also result from a “revival” of VBNC organisms (see Hans-Curt Flemming, Jost Wingender – IWW Zentrum Wasser, Biofilm Centre, University Duisburg-Essen). Often, however, the operator’s aim is to achieve the highest possible correlation to the legally required laboratory examination by cultivation. By adjusting the pretreatment conditions (mainly by increasing the temperature and lengthening the pretreatment time), the correlation to the cultivation method can be successfully restored in cases of repeated additional findings caused by VBNC cells. Holger Ohme, Jennifer Becker, Pascal Jahn, Dirk Heinecke
“In proportion as slavery prevails in a State…the Government, however democratic in name, must be aristocratic in fact.”1 In this urgent plea and dire warning to his country in 1792, Madison pondered the institution that tore apart both his conscience and, eventually, the United States. In the 1790s, the contradiction between liberty and slavery and the threat that slavery posed to the unity of an increasingly factionalized United States weighed heavily on Madison. Over the course of his career, Madison served as a United States congressman and secretary of state, spearheaded the adoption of the Bill of Rights, cemented his reputation as the Father of the Constitution, and was elected the fourth president of the United States. Madison’s status as a slave-owner, however, marred his career, as it did those of the vast majority of the Founding Fathers. This essay will focus on Madison’s personal view of slavery in Virginia during the Revolutionary War, 1774–1783, and following his retirement in 1817. Chronologically organized, the selected correspondence elucidates a connection between the increasing tangibility of an independent American republic, the development of Madison’s political career, and his disillusionment with the institution of slavery in Virginia during the Revolutionary War. In retirement, Madison’s anti-slavery sentiments only strengthened as the republic he helped build approached an existential crisis while the nation expanded west and the politics of slavery became dire. Madison wrote each letter tactfully to a specific recipient, each in a unique military, political, or personal context. This purposeful intent must be considered when analyzing the meaning behind each letter. In retirement, Madison opposed slavery in his letters and advocated for emancipation and forced colonization as race and slavery became synonymous in the nineteenth century, but he was frustrated by the impossibility of bringing an end to the institution. However, Madison owned over one hundred men, women, and children. His letters reveal that anti-slavery sentiments existed on a nuanced spectrum in early America. Madison opposed slavery even as he remained paternalistic and anti-black and continued to own enslaved people. America’s Founding Fathers, and especially James Madison, have always been popular subjects in historical discussion. Scholars mainly concentrate on Madison’s role in creating the Constitution and the political issue of slavery during the deliberations of the Constitutional Convention, giving less attention to his relationship with slavery during the revolution or his retirement. In his 2008 work, The Haunted Philosophe: James Madison, Republicanism, and Slavery, Scott Kester explores Madison’s anti-slavery views through the prism of his moralistic deism, the belief that God does not intervene in human affairs and that human reason, morality, and natural law were God’s greatest gifts to mankind. Kester argues that Madison was frustrated by the contradiction between slavery and liberty and its corrupting effect on national unity.2 Robert J.
Allison uses Madison’s Missouri Crisis allegory in his 1991 article, “From the Covenant of Peace, a Simile of Sorrow,” to argue that Madison’s support for emancipation and colonization was, from Madison’s perspective, in the best interest of both slaves and white Americans.3 In her 2012 book A Slave in the White House: Paul Jennings and the Madisons, Elizabeth Dowling Taylor uses the perspectives of Madison’s slaves to illuminate how dependent the Madisons were on their bonded laborers, especially during his presidency.4 Each author agrees that while the presence of slavery in the United States frustrated Madison, he did little to combat it because of other political responsibilities. Madison recognized slavery’s moral evil and its corrupting influence on national unity. However, he built his entire livelihood on enslaved labor. Slave and free states formed a fragile union through compromises and concessions, and Madison prioritized this union of states over the lives and freedom of enslaved African Americans. The numerous biographies of Madison’s life focus on his work in the Constitutional Convention.5 Those that grapple with Madison’s relationship to slavery do so mainly within the context of his work on the Constitution. They conclude that he compromised morally and politically on slavery to see the Constitution ratified and the country united.6 Irving Brant’s groundbreaking six-volume biography of Madison, begun in 1942, does not contain the word “slavery” in the indexes until volume three, where it appears in the context of Madison’s work on the Constitution.7 Biographies offer a wealth of information spanning Madison’s life, but they often lack the depth and perspective that targeted analysis can provide. Scholarship on slavery during the Revolutionary Era takes two primary forms: one emphasizing enslaved people’s direct action to combat their bondage, the other early pockets of abolitionism, primarily in New England. Benjamin Quarles, in the 1960s, focused on enslaved people as primary actors during the Revolutionary War and emphasized their political role in the conflict. Quarles’ 1961 thematic approach to the question of slavery and the Revolution, The Negro in the American Revolution, centers on slaves’ “loyalty not to a place nor to a people, but to a principle (freedom).” Manisha Sinha pays equal attention to slaves who took action to procure their freedom in her 2016 book The Slave’s Cause: A History of Abolition. The author echoes Quarles’ focus on African American responsiveness by studying the Haitian Revolution and slave petitions for freedom in New England. Sinha articulates the role played by white New England clerics in early abolition movements and the blatant hypocrisy of southern revolutionary slaveholders like Patrick Henry and James Madison. In her 1983 article “Between Slavery and Freedom: Virginia Blacks in the American Revolution,” Sylvia Frey examines slavery in Virginia during the Revolution and asserts that enslaved people made calculated and informed decisions based on their circumstances when choosing whether to run away, join the British, resist passively, or revolt. Scholarship on anti-slavery sentiment during the Revolution, regardless of the region or subject, heavily emphasizes the contradiction between the struggle for some men’s rights and the sustained effort to deny the rights of everyone else.
Madison’s status as a lifetime slave-owner, a Founding Father, and a philosopher provides a complicated perspective on what it meant to be anti-slavery during and after the American Revolution. A nuanced examination of Madison’s surviving letters to friends and family is necessary to understand how his anti-slavery sentiments evolved across varying times and contexts. Universities and museums are leaders in recent efforts to explore the Founding Fathers’ relationships with slavery as part of a larger trend of confronting race, slavery, and memory in history. In 2017, the curatorial staff at Montpelier, Madison’s family home, launched their exhibition “The Mere Distinction of Colour” to help the public wrestle with understanding Madison as a slaveowner and slavery’s role in shaping the country. Archaeologists unearthed and rebuilt Montpelier’s slave quarters, and historians gathered testimonies from living descendants of Madison’s slaves. The goal of the initiative is to “hear the stories of those enslaved at Montpelier…and explore how the legacy of slavery impacts today’s conversations about race, identity, and human rights.”8 Princeton University, Madison’s alma mater, recently followed universities such as Wake Forest University, the University of Virginia, and Georgetown University in examining and reconciling their institutions’ historical relationships to slavery.9 Princeton’s project focuses on Madison’s relationship with one of his lifelong slaves, Sawney, as well as his political views on slavery as a whole.10 The Princeton initiative effectively highlights Madison’s complicated and layered relationship with slavery. These projects acknowledge and reconcile a troubled past and place silenced perspectives at the forefront. They do not shy away from bringing the uncomfortable histories of figures and institutions to light. The Revolutionary War disrupted every part of life in Virginia, especially the status quo power dynamic between the landed gentry and the enslaved. In 1774, British taxation and perceived parliamentary overreach ignited tensions and fanned the flames of rebellion. Influenced by Enlightenment thinkers, Madison spoke out against British infringements on colonial liberties prior to the war.11 The constant threat of an impending British attack caused white Virginians to live in fear of a mass slave rebellion instigated by the British. This fear dominated Madison’s perception of slavery throughout the Revolutionary War. When writing to his trusted Princeton friend, the Philadelphia lawyer William Bradford, in September 1774, Madison confessed his fears: “If America and the British should come to war I fear an insurrection among the slaves may & will be promoted. In one of our Counties lately a few of those unhappy wretches met together & chose a leader who was to conduct them when the English Troops should arrive- which they foolishly thought would be very soon & that by revolting to them they should be rewarded with their freedom.”12 Madison’s trepidation about revolt stimulated by foreign interference was typical of white slave owners in Virginia. The fear that conflict with the British would encourage slave insurrections across Virginia points toward Madison’s deep-seated uneasiness about the relationship between slave and master. Madison asserted that slaves politically organized themselves to act as de facto ambassadors on behalf of the entire Virginia slave population to negotiate terms of service with the British.
Madison remarkably saw these slaves as diplomats negotiating for their freedom. He held some level of confidence in bondspeople’s ability to exercise agency, as well as a non-paternalistic understanding of their yearning for freedom. British collusion with the enslaved population was a dire threat to the structure that kept slave owners in power. Madison’s fears came to fruition shortly thereafter. Rumors of British interference threatened the fragile balance of power in Virginia, and further British actions would only inflame colonial tensions. In April of 1775, royal governor Lord Dunmore stoked Virginia slave-owners’ fears when he confiscated Virginia’s powder reserves as a preventive measure against a colonial revolt. This perceived breach of colonial rights heightened colonists’ anxieties about possible slave uprisings within the colony due to the decreased capacity for militia protection. These worries manifested themselves in Madison’s letter to William Bradford on June 19, 1775. Madison wrote, “It is imagined our Governor [Dunmore] has been tampering with the Slaves & that he has it in contemplation to make great use of them in case of a civil war in this province. To say the truth, that is the only part in which this Colony is vulnerable; & if we should be subdued, we shall fall like Achilles by the hand of one that knows that secret.”13 To Madison, the propensity for revolt increased dramatically with Dunmore’s seizure of the powder reserves. Madison’s positions as a slave-owner, a militiaman, and a member of the Committee of Safety legitimized his concerns. Madison made his living through the forced bondage and labor of others. He was directly responsible for protecting Orange County from British invasion, both by British regulars and by proxy through slave forces. Madison’s indictment of Virginia’s economic reliance on unfree labor shows that he recognized the economic liability of a system built on slavery. Virginia’s economy would collapse without complete control of the enslaved. According to Madison, the mobilization of slave forces by the British against the colony would be nothing short of cataclysmic. It is important to contextualize how Madison referenced slavery in his correspondence. His ominous warnings to Bradford concern the institution of slavery in Virginia as a whole. Madison, however, never wrote about slaves he owned with the same sense of conviction. This distinction is accentuated in a letter to Bradford on July 28, 1775: “The dysentery has been again in our family & is now amongst the slaves. I have hitherto Escaped and hope it has no commission to attack me.”14 Madison’s friendly and comparatively trivial update to Bradford on his own slaves’ health takes a jokingly hyperbolic tone. It is almost unrecognizable compared to the letters he fearfully wrote about Virginia’s collective enslaved population. The contrast between Madison’s paternalistic perception of his own slaves and the paranoia and disregard for the enslaved population as a whole displayed in the letters above was typical of southern slave-owners. Madison’s benevolence concerning his family’s slaves, whom he considered an extension of the family itself, did not carry any anti-slavery connotation. Madison understood slavery in two separate ways: known and unknown threats. Enslaved people he owned posed less of a threat because he felt he knew them personally. Madison’s feelings toward unknown slaves, or slavery as an idea, were shaped by racist stereotypes and fearmongering meant to keep slaveowners in power.
Madison’s tone and conclusions in his letters concerning slavery varied based on to whom he was writing. After graduating from Princeton with Madison, Bradford was a bright and aspiring lawyer, but his northern upbringing left him without an intimate understanding of the relationships between the enslaved and free in Virginia. Compared to other letters on the subject, Madison’s letters to Bradford exaggerate the situation in Virginia in order to convey the colony’s increased uneasiness. Though his fears were sincere, his letters to Bradford must be viewed as a performance to present Virginia’s military circumstances to an outsider. Madison was desperately trying to convey how the dangers of a newfound slave agency at the hands of the British threatened Virginia’s economic and social fabric. Madison’s fear of slave revolts catalyzed by a British invasion was a specific form of anti-slavery thinking. Writing as a militia officer tasked with protecting Orange County, he let the dire consequences of possible slave revolts overwhelm his correspondence. Prior to the Revolutionary War, the possibility of a hostile insurrection stimulated by British interference was an immense concern for Madison. In 1774 and 1775, Madison’s sole concern was protecting his home from hostile invasion and internal slave rebellion. Madison was against slavery in the context of the increased propensity for domestic danger that a large enslaved population created within Virginia. By taking an anti-slave-revolt position, Madison subscribed to a broader anti-slavery sentiment because of the possible consequences slavery carried within the context of a foreign force invading Virginia. In this way, slavery represented a social and military liability. Madison did not propose emancipation to alleviate potential social destruction at the hands of slaves. Instead, he pointed to the consequences that an economy reliant on enslaved labor created for Virginia at this time of impending conflict and social unrest. Madison limited his anti-slavery position to the context of an imminent foreign invasion. He opposed the threat that slavery, in concert with a British invasion, created for Virginia. He did not offer any possible solutions in his letters, but that does not detract from his narrowly applied anti-slavery thinking. Madison’s aversion to slavery as a direct threat to the cohesive bond between states eventually formed as the nation expanded and his political responsibility grew. Madison recognized that chattel slavery empowered him by stripping all power away from the enslaved and thus depended on an imbalance of power. Madison indicted this fragile imbalance because of his fears of a rebellion, but still did not advocate for emancipation. From 1775 to 1780, Madison did not discuss slavery in Virginia in his personal or professional correspondence. In early November of 1775, Lord Dunmore, Virginia’s royal governor, declared martial law—shocking Virginians—by stating, “And I do hereby farther declare all indented Servants, Negroes, or others (appertaining to the Rebels) free, that are able and willing to bear Arms, they joining his Majesty’s Troops.”15 By 1774, enslaved people made up nearly forty percent of Virginia’s population.16 Dunmore hoped broad emancipation would provoke an overwhelming fear of insurrection and force Virginians to abandon their rebel cause entirely.
Interestingly, there are no published letters from James Madison that respond to or reference the proclamation. This omission is remarkable when compared to his fearful June 1775 letter to Bradford. There was a sharp increase in runaway slave ads in the Virginia Gazette, so Madison must have been aware of the massive slave exodus to the British.17 Swift action forced Dunmore to evacuate to New York in 1776, and Virginia avoided a British-instigated slave rebellion, which might explain Madison’s silence. A lack of evidence prevents historians from concluding with certainty how Madison’s fears about British-catalyzed slave rebellions evolved during this time. Based on his 1774 and 1775 letters to William Bradford, it is not unreasonable to assume that Madison felt immense anxiety about slave insurrection during Dunmore’s short occupation of Virginia and overwhelming relief after it. Despite the quick repulsion of Dunmore’s forces, the war would return to Virginia in full force before the end of the decade. In 1779, after serving on the Virginia Council of State, Madison was elected to the Second Continental Congress. Recruiting soldiers for the Continental Army was of the utmost importance for the Continental Congress, and the nation’s fate depended on it. Joseph Jones, a fellow Virginia politician, wrote to Madison in November of 1780 about the complications of meeting Virginia’s quota for enlisted soldiers in the Continental Army. Jones informed Madison that a bill set to enter the Virginia legislature in November prescribed a “bounty in Negros to such Soldiers as will enlist for the War” and expressed hope that Congress would support this legislation. Madison responded that while he was “glad to find the legislature persist in their resolution to recruit their line of the army for the war,” he thought that the legislature should take it further and “liberate and make soldiers at once of the blacks themselves… It would certainly be more consonant for liberty which ought never to be lost sight of in a contest for liberty.”18 Madison’s letter to Joseph Jones offered his strongest and most principled criticism of slavery to date. Incentivizing military service with a reward of human property did not align with Madison’s perception of liberty. Madison did not propose general emancipation, however, because the goal was military recruitment, not emancipation. Madison used freedom as a means to an end, but it was not the end itself. In this context, emancipation was a pragmatic solution that conveniently fit within the Revolutionary War’s stated objectives rather than a humanitarian cause. Madison’s proposal was progressive, but it was not free from racist and paternalistic ideas. The possibility of the British Army luring slaves away in exchange for military service had alarmed Madison in 1774 and 1775. Conversely, Madison suggested in 1780 that arming and emancipating slaves to serve in the Continental Army and win the Revolution aligned with the principle of liberty. Madison, however, did not separate his endorsement of emancipation and military service from the fears he expressed concerning slave insurrection in his letter to William Bradford in 1774. He stated, “With white officers & a majority of white soldiers, no imaginable danger could be feared from themselves.”19 Madison sought to reconcile the fears of Virginians by assuring Jones and his fellow Virginians that there would not be entire regiments of freedmen running through the countryside without supervision from white leadership.
The incentive of freedom in exchange for service outweighed any desire to exact retribution on Virginia elites. Madison attempted to put this possibility to rest by relaying to Jones that “a freedman immediately loses all attachment & sympathy with his former fellow slaves.”20 According to Madison, emancipated soldiers would have neither the desire nor the ability to lead a revolt because of their exposure to order and discipline amongst white people in the military. Madison’s advocacy for emancipation through military service was progressive for a slave-owning Virginian. His proposal stemmed from a combination of the necessity of recruiting soldiers for the Continental Army in Virginia and the desire to adhere to the fundamentals of the Revolution. His push for emancipation, however, was contingent on military service. Madison wanted to avoid a compromising political situation, so he made a vague, principled statement about conscription and slavery. It is not clear whether he was calling for the entire enslaved population to be immediately freed and enlisted. Madison did not specify whether he envisioned offering freedom only to slaves of masters willing to emancipate some or all of their slaves to fight the British. Nor is it clear whether he favored freeing fugitive slaves who might flock to Continental camps to join the fight for liberty. However, Madison unambiguously stated that emancipation in exchange for military service was the ideal arrangement in compliance with liberty. Madison’s letter writing, in this case, was not limited to idealistic pontification to a colleague. The editors of The Papers of James Madison noted that Madison’s original manuscript included brackets around the aforementioned passage to designate it for publication.21 Madison felt so strongly about this issue that he specifically marked pertinent sections for publication to a wider audience across Virginia. Madison made carefully worded statements because he wanted his words to be published. Madison wanted a large audience to realize the contradiction inherent in offering slaves to entice men to enlist and to endorse his suggested plan, even in the slavery stronghold of Virginia. He publicly stated that he was willing to compromise Virginia’s entire economic and cultural foundation to preserve the Revolution’s principles. His proposal for emancipation in exchange for service, however, never turned into action. In his letter to Jones, Madison did not envision racial animus between white and formerly enslaved soldiers within the ranks. He did not think that newly freed slaves would resist fighting alongside their former owners, or that arming former slaves would create a mob hell-bent on retribution. Madison assumed placing slaves under the leadership of white officers would maximize white control and prevent any attempts at retribution. Anti-slavery for Madison in the mid-1770s meant wariness of the social dangers the institution created for Virginia in a time of impending social upheaval. In 1780, it meant uncompromising dedication to the concept of liberty in order to survive the British invasion of Virginia. Still, his dedication to liberty was not free from racism. In the immediate aftermath of the final siege at Yorktown, Madison’s endorsement of emancipation for military service shifted back to relative orthodoxy as life in Virginia transitioned into a period of domestic normalcy for free people.
Madison’s letter to his father in March 1782 contained a brief update on the state of his affairs in Philadelphia as a delegate to the Congress of the Confederation. In reference to the possible delay of representatives receiving payment, Madison grimly stated, “unless liberal principles prevail on the occasion, I shall be under the necessity of selling a negro.”22 Pressed into economic uncertainty and refusing to rely on his father as a benefactor, Madison abandoned his previously familial tone when writing about his slaves. In the purely utilitarian and economic focus of the letter to his father, Madison treated his slaves not as people but as property. Madison disregarded the idea of emancipation in favor of his economic stability. This was not a political or philosophical endorsement of slavery but a practical solution to his financial woes. Despite his apparent aversion to slavery in the context of revolutionary ideology, Madison remained complicit in keeping people in bondage. Edmund Pendleton, a friend and fellow Virginian politician, wrote to Madison in late August of 1782 requesting that Madison either facilitate the return of, or receive proper payment for, a runaway slave who belonged to Pendleton’s nephew: “I shall pay due attention to the request contained in your favor the 29th,” Madison responded, “should I however be so fortunate to recover him, the price of slaves here leaves no hope that a purchaser will be found on the terms demanded.”23 Madison wrote at length to his friend in this letter, but these were the only mentions of Pendleton’s slave. An important development in Virginia at this time was the passage of a law that allowed slave owners to manumit their slaves at any time without government approval. Despite this, Madison did not urge his friend to ask his nephew to manumit his slave in order to be consonant with the principles of liberty. A valuable piece of property, in Pendleton’s mind, had run away from his nephew’s estate, and he sought Madison’s help in returning it. On September 3, 1782, Madison expressed doubt to his friend over the possibility of recovering his nephew’s fugitive slave, noting that, “at present they march in several divisions and halt but one day here [in Philadelphia].”24 Despite the grim outlook of locating the slave amongst several French divisions stopping briefly in Philadelphia, Madison assured his friend, “I will take every step in my power to have him found out & secured.”25 Later in the month, Madison cheerily wrote, “I am very glad to find that the recovery hath at length been accomplished.”26 The same man who, two years earlier, had urged emancipation through military service because it aligned with the principle of liberty was satisfied when he successfully helped return an escaped fugitive to bondage. In this case, however, the subject of Madison’s contradictory letters is entirely different. Madison did not directly endorse the institution of slavery as a whole in a political or philosophical capacity, despite his complicity in the institution. This string of correspondence simply shows Madison completing a favor for his friend’s family member as an act of good faith. These letters indicate that the context and the letter’s recipient influenced Madison’s message and perception concerning slavery. Letters penned by Madison and his peers concerning individual slaves cannot be scrutinized in the same context as Madison’s radical 1780 letter because of the different social and political conditions under which they were composed.
His correspondence with Pendleton shows the limits of his radical politics. Madison published egalitarian sentiments that reinforced the principles stated in the Declaration of Independence, but within his circle of associates, he did not preach the same message. Assisting a friend in finding a runaway slave did not challenge the concept of liberty to Madison. Madison was by no means an abolitionist, but he was not an ardent defender of slavery either. He existed in the middle—condemning slavery as a whole in specific contexts and perpetuating forced bondage in others. In a letter to his father shortly after the signing of the Treaty of Paris in 1783, Madison revealed how he struggled with the existence of chattel slavery in a now independent nation conceived in liberty. Madison confronted this contradiction personally with his own slave, Billy: “On a view of all circumstances I have judged it most prudent not to force Billy back to Virginia.”27 Madison wrote about Billy’s unsuccessful attempts to run away while in Philadelphia, stating that, “I am persuaded his mind is too thoroughly tainted to be a fit companion for fellow slaves.”28 Afraid that he might convince the other slaves to run away and seek freedom, Madison had reservations about returning Billy to Montpelier. Defending his reasoning despite the net economic loss of selling Billy in Pennsylvania, Madison stated that he did “not expect to get near the worth of him.”29 With this added to the prospect of a corrupted slave population, the decision for Madison to sell Billy was relatively easy. Madison claimed he could not send Billy back to Virginia, “merely for coveting that liberty for which we have paid the price of so much blood, and have proclaimed so often to be the right, & worthy the pursuit, of every human being…” but he was more concerned that Billy might convince the other slaves to run away to seek freedom.30 Writing to his father, Madison was not waxing political to a congressional colleague or Virginia politician, but confessing that he could not bear to stifle in Billy the innate desire that so many Americans had sacrificed for over the course of the struggle for independence.31 Madison’s assertion of the fundamental right of freedom for all men was especially radical because he did not distinguish between enslaved black men and free white men, consistent with his 1780 letter about emancipation for military service. This offers a more significant commentary on slavery as, in his opinion, the irreconcilable antonym of freedom and liberty. Within the context of an impending peace agreement to secure liberty, Madison could not reprimand his slave for staging his own miniature revolution. Despite Madison’s recognition of Billy’s humanity, he did not manumit Billy but sold him in Pennsylvania. Madison’s resounding criticism of slavery as a violation of natural law proved to be hollow. He valued profit and economic productivity at Montpelier more than he valued Billy’s innate desire to be free. Madison’s letter to his father marks the last of his wartime correspondence concerning slavery. It offers a convenient bookend to a formative period of his life, the beginning of what would become a legendary political career. During his political tenure, Madison chose to allow chattel slavery to continue in the United States, only worsening its entrenchment in America’s economy and culture. By the time Madison’s career drew to a close, race and slavery were increasingly synonymous in America.
For Madison, this complicated the practicality and efficacy of emancipation. Madison articulated his concerns about the well-being of formerly enslaved people to Edward Coles, his former private presidential secretary and future governor of Illinois, in 1819. Coles had freed his slaves and given them land to farm, and Madison expressed doubts about their ability to prosper as free citizens. Madison solemnly told his friend, “I wish your philanthropy could compleat its object by changing their colour as well as their legal condition.”32 The simple difference between white and black skin, according to Madison, destined freedmen “to a privation of that moral rank and those social participation which give freedom more than half its value.”33 Madison had worried about the ability of black soldiers to perform their duties without white supervision in 1780. In 1819, Madison did not think that freed slaves could escape a cycle of oppression and toil due to their race—even with land, means to make a living, and opportunity. Accepting the racially hierarchical world that he lived in and perpetuated, Madison tried to find a tenable plan for emancipation in a society that would not make room for freedmen and women. During his post-presidency years, 1817–1836, Madison favored emancipation and forced colonization, on the condition that freed people be sent to Liberia. The idea of “colonization” in tandem with freedom brought Madison solace regarding the dilemma of increasingly hostile race relations within the United States. He joined the American Colonization Society shortly after its inception in 1817 and served as its president in 1833. He did not envision a racially heterogeneous society where whites and blacks could coexist, despite his aversion to slavery in his retirement. Madison’s ideal solution to slavery in America—African American colonization in Africa—while well-intentioned, was unquestionably racist. Despite his dedication to emancipation and colonization, Madison still thought critically about other alternatives for ridding the United States of slavery. Frances Wright, a Scottish writer, social reformer, and staunch abolitionist, shared her 1825 essay, “A Plan for the Gradual Emancipation of Slavery in the United States,” with Dolley Madison.34 Wright advocated for an allotment of land on which former slaves would live and work and receive an education.35 James Madison read Wright’s essay, and his subsequent response was a detailed and informative look into his perception of emancipation. Opening with a strong rebuke of slavery, Madison stated, “The magnitude of this evil among us is so deeply felt, and so universally acknowledged, that no merit could be greater than that of devising a satisfactory remedy for it.”36 Following this ardent anti-slavery statement, Madison expressed his anxieties over appropriately addressing the monolithic institution by laying out his view of post-slavery race relations in South America: “Unfortunately, the task, not easy under other circumstances, is vastly augmented by the physical peculiarities of those held in bondage, which preclude their incorporation with the white population. These peculiarities, it would seem are not of equal force in the South American States, owing in part perhaps to a former degradation produced by colonial vassalage, but principally to the lesser contrast of colours.
The difference is not striking between that of many of the Spanish & Portuguese Creoles, & that of many of the mixed breed.”37 Madison compared his perception of the harmonious assimilation of former slaves and people of mixed race into South American society with the situation in the United States, attributing the difference to the smaller contrast in pigmentation between the Spanish and Portuguese Creole classes and people of African descent than between Anglo-Americans and African Americans. This is an example of the immense influence that increasingly disparate race relations in a binary society—either white or black—had on Madison’s view of emancipation. Madison did not specify where the large population of mixed-race Virginians, many of whom had been emancipated between 1782 and 1785, fit into this society. Madison thought the only remedy to this reality was “the complete removal of those emancipated either to a foreign or distant region.”38 The only clear solution to Madison was freedom paired with colonization. Beyond the difficulties created by racial hostility, Madison also indicated that a plan must be put in place for freedmen to be “sufficiently educated for a life of freedom and of social order” before emancipation could happen.39 According to Madison, an even more difficult obstacle to overcome was the need for “the voluntary concurrence of the holders of the slaves… [to emancipate] with or without pecuniary compensation.”40 He concluded his letter warmly to his new and admired friend, but also urged that his thoughts “not be brought before the public, where there is no obvious call for it.”41 This letter serves as an example of Madison’s systematic thinking and tactful writing, but also of his impossible vision of racially homogeneous societies amid the worsening race relations within the United States. His letter shows a pragmatic approach to emancipation that was simultaneously guided by racist ideology, which ultimately reinforced his unfeasible vision. It is clear that within the context of a nation embittered over sectionalism—the rivalry between free states in the North and slave states in the South—and slavery, Madison was anti-slavery, but also racist. His anti-slavery rhetoric would prove to be only talk, as Madison continued to hold over one hundred men, women, and children in bondage until his death. Following his eighty-fifth birthday, shortly before his death in 1836, Madison wrote a letter to the editor of a local newspaper, The Farmer’s Register, in response to a request “to obtain information in relation to the history of the emancipated people of color in Prince Edward.”42 Tracing the history of a group of slaves who had been emancipated around a quarter century before his letter, Madison explained that the formerly enslaved prospered on the land they were given because of the industrious skills they had learned while in bondage. The newer generations of freedmen, however, lived in “idleness, poverty, and dissipation.”43 Following his scathing review of the state of affairs in the freed community, Madison concluded that, “whilst they are a very great pest and heavy tax upon the community, it is most obvious, they themselves are infinitely worsted by the exchange from slavery to liberty—if, indeed, their condition deserves that name.”44 Madison’s analysis of the destitute conditions of the freedpersons ultimately evoked a paternalistic view that African Americans were better off enslaved than free. Madison’s rebuke of emancipation did not come from a position that favored maintaining the institution of slavery.
It was more a commentary on the inability, in his view, of freedmen to make their way in the white man’s world without a white man’s guidance. Citing the burden that these neglected communities placed on taxpaying Virginians was another way that Madison advocated for colonization. The central concern of his letter was not whether freedmen and women could support themselves across the sea in Liberia, but that they were unable to provide for themselves in Virginia. His condemnation of life after emancipation could also have been an attempt to rationalize his coming failure to emancipate any of his slaves upon his death just months later. His letter to The Farmer’s Register highlights Madison’s simultaneous anti-black, pro-emancipation, and pro-colonization tendencies. Madison thought that slavery benefited African Americans through the cultivation of work ethic and industriousness. He did not believe that subsequent generations of African Americans could prosper because they were not under the direct supervision of white masters. By highlighting their privation and arguing that it was an unavoidable consequence of their supposed racial deficiencies, Madison tried to make the case that emancipation without colonization would be a costly venture for white Americans. He did not see colonization as a tool to benefit freedpeople. Instead, he saw it as a way for whites in America to avoid the necessary efforts to reconcile the privation they had created for generations of African Americans. In his last will and testament, Madison willed all of his slaves to his wife but stipulated that “none of them should be sold without his or her consent or in the case of their misbehaviour; except that infant children may be sold with their Parent who consents for them to be sold with him or her, and who consents to be sold.”45 He recognized their humanity, agency, will, and choice, but he did not validate their status as equals with manumission. Madison also left a sum of two thousand dollars to the American Colonization Society, a sign of the strength of his conviction in its goal.46 Madison justified his failure to emancipate his slaves as a result of his unfortunate financial affairs at the time of his death, but it can also be seen as his paternal desire to maintain what he thought was good care for his slaves within his family. His decree that Dolley should not sell anyone without their consent was an act of goodwill relative to other slave owners. His wish, however, was steeped in racism and oppression. Despite recognizing the apparent contradiction of enslaving people in a country founded on equality and liberty, Madison never broke his dependence on chattel slavery. From his first breath to his last gasp, James Madison was a slave-owner. Examining Madison’s perception of slavery during the Revolutionary War reveals that he held an aversion to the institution as a whole when it threatened the wellbeing of the state of Virginia, when it directly harmed the concept of liberty that was key to the Revolution, or when ending it could help win the Revolution. Madison was inconsistently averse to slavery when he wrote to friends and family about individual slaves whom he knew. His correspondence reveals the contextually driven conclusions of Madison’s anti-slavery thinking in the Revolutionary War.
Madison’s view of slavery during the war was shaped by fear of an apocalyptic revolt; the desire to arm slaves before offering them as a conscription incentive; admiration of the raw, human pursuit of freedom; and the neighborly satisfaction of keeping men in bonds. In retirement, Madison’s anti-slavery rhetoric hardened, but his actions never matched his words. His unachievable vision for emancipation and colonization was representative of the racist society in which he lived. His progressive ideology, oppressive practice, and impossible vision for the future were also indicative of how deeply embedded slavery was in Virginia. They foreshadowed the immense difficulty that would plague the United States in ridding itself of slavery. Madison fought for the bedrock ideals on which the United States of America was founded, “that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty, and the pursuit of Happiness,” but his racism and paternalism prevented him from fully realizing those principles.47
The A flat diminished chord (Ab dim or Ab°) contains the notes Ab, Cb and Ebb (E double flat). It is produced by taking the 1st, flat 3rd and flat 5th notes of the A flat Major scale. Ab diminished often appears as Ab dim or Ab°. The diminished chord played by itself has a dissonant sound. This is largely due to the presence of the tritone interval, which is otherwise known as the devil’s interval. The A flat diminished chord contains a tritone between the notes Ab and Ebb. Even though the Ab diminished chord sounds dissonant on its own, it can sound beautiful when played in the right context. Pairing the Ab diminished chord with the A Major chord, for example, creates a sense of tension and release, which works well. A good exercise is to switch between the Ab diminished chord and the A Major chord and hear for yourself how this sounds.
10 Ways To Play The A Flat Diminished Chord
If you’ve come to this page just to view some chord diagrams for A flat diminished, here they are.
Some Quick Ab Diminished Chord Theory
- The Ab diminished chord contains the notes Ab, Cb and Ebb.
- The A flat diminished chord is produced by taking the 1 (root), b3 and b5 of the A flat Major scale.
- The A flat diminished chord (just like all diminished chords) contains the following intervals (starting from the root note): minor 3rd, minor 3rd, tritone (which leads back to the root note).
- Ab diminished resolves naturally to the A chord.
- Ab diminished can be written as Ab dim or Ab°.
- The Ab locrian scale can be used when soloing over the A flat diminished chord.
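If it helps to see the interval math behind the list above, here is a small JavaScript sketch (an illustration, not part of the original lesson). It builds a diminished triad by stacking semitones; the lookup table uses sharp spellings, so the output G#-B-D is the enharmonic equivalent of the proper spelling Ab-Cb-Ebb:

```javascript
// Build a diminished triad by stacking intervals (in semitones) from the root.
const CHROMATIC = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"];

function diminishedTriad(rootIndex) {
  // Diminished triad: root, minor 3rd (+3 semitones), diminished 5th (+6 semitones).
  return [0, 3, 6].map(step => CHROMATIC[(rootIndex + step) % 12]);
}

// Index 8 is G#, the enharmonic equivalent of Ab.
console.log(diminishedTriad(8)); // ["G#", "B", "D"], i.e. Ab, Cb, Ebb enharmonically
```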
This assessment reveals students' ability to source a document. Historical documents do not provide perfect windows into the past. Rather, each source has relative strengths and weaknesses as evidence about the past. This HAT gauges whether students can see not only how a document provides evidence about the past but also its limitations. Students with a sophisticated understanding of historical evidence will be able to explain how a photograph of former slave quarters provides some evidence of the living conditions of slaves. They will also observe how the fact that the photograph was taken decades after the abolition of slavery limits its usefulness as evidence of antebellum living conditions.
Most antibiotics are double-edged swords. Besides killing the pathogen they are prescribed for, they also decimate beneficial bacteria and change the composition of the gut microbiome. As a result, patients become more prone to reinfection, and drug-resistant strains are more likely to emerge. The answer to this problem might be narrow-spectrum antibiotics that kill only one or a few species of bacteria, minimizing the risk of collateral damage. In a recent study, UW–Madison scientists took a close look at one such antibiotic, fidaxomicin. This antibiotic is used to treat Clostridium difficile, or C. diff, one of the most common healthcare-associated infections. The researchers demonstrated at a molecular level how fidaxomicin selectively targets C. diff while sparing the innocent bacterial bystanders. The findings, detailed in Nature, might help scientists develop new narrow-spectrum antibiotics against other pathogens. C. diff is a toxin-producing bacterium that can inflame the colon and cause severe diarrhea. It infects about half a million people in the United States, mostly in hospital settings, and about one in 11 of those infected over age 65 dies within a month. For years, doctors have used broad-spectrum antibiotics to treat C. diff. Fidaxomicin is a relatively new alternative that was granted FDA approval in 2011. Like several other antibiotics, fidaxomicin targets an enzyme called RNA polymerase (RNAP), which the bacterium uses to transcribe its DNA code into RNA. To understand exactly why fidaxomicin selectively inhibits RNAP in C. diff and not in most other bacteria, Robert Landick, a biochemistry professor at UW–Madison, teamed up several years ago with associate professor Elizabeth Campbell from The Rockefeller University to visualize C. diff RNAP using cryo-electron microscopy (cryo-EM). Cryo-EM is a powerful imaging technique that can reveal the 3D shape of molecules and capture a drug molecule and its target in action.
Spying on RNAP
One big challenge, however, was producing enough C. diff RNAP to image, since the bacterium is an anaerobe that does not grow in the presence of oxygen. The study’s co-first author, Xinyun Cao, a postdoctoral researcher in the Landick Lab, spent two years developing a system to more easily produce C. diff RNAP using E. coli, a bacterium that grows easily and is frequently used in the lab. A key component of the system is an engineered plasmid that produces soluble proteins for all four subunits of C. diff RNAP while maintaining correct subunit stoichiometry. “RNA polymerases from many bacterial pathogens like C. diff are proven drug targets, but study of these enzymes is difficult because they differ in properties from the RNA polymerases in model bacteria and the pathogens themselves are problematic to grow at scales that yield enough enzymes,” says Robert Landick, co-corresponding author on the study. “Xinyun’s recombinant system is a major breakthrough for C. diff research in enabling a structure with its most important inhibitor.” Cao’s approach also opens the door to similar studies with other bacterial pathogens. Using this material, co-first author Hande Boyaci, a postdoc on Campbell’s team, generated images of C. diff RNAP locked with fidaxomicin at near-atomic resolution. Wedged into a hinge between two subunits of RNAP, fidaxomicin jams open the enzyme’s pincer, preventing it from grabbing on to genetic material and starting the transcription process.
In closely examining the points of contact between RNAP and fidaxomicin, the researchers identified one amino acid on the RNAP that binds to fidaxomicin but is absent in the main groups of gut microbes that are spared by the drug. A genetically altered version of C. diff that lacked this amino acid was unperturbed by fidaxomicin, just like other commensal bacteria in the gut. Conversely, bacteria that had the amino acid added to their RNAP became sensitive to fidaxomicin. “Hande Boyaci and Elizabeth Campbell at The Rockefeller University have dramatic expertise in RNA polymerase structural biology,” Cao says. “The cryo-EM structure, C. diff RNAP in complex with fidaxomicin, determined by them set a solid foundation for us to identify the fidaxomicin sensitizer residue in C. diff.” The findings suggest that this one amino acid, among the roughly 4,000 amino acids of this robust and essential transcription machine, is its Achilles heel, responsible for the killing of the bacterium by fidaxomicin. The approach used in this study offers a roadmap to developing new and safer antibiotics, the researchers say. By further elucidating the RNAP structures of diverse bacteria, scientists can design antibiotics that target each pathogen more selectively and effectively.
Antibiotic fidaxomicin (green) inhibits the C. diff bacterium by jamming RNAP, an enzyme crucial to bacterial replication.
A version of this release was originally provided by The Rockefeller University.
Section 2: Document Markup
Document markup is a notation method that defines how particular pieces of information are meant to be formatted. The term comes from the practice of marking up manuscripts to notate changes that need to be made. In programming, a markup language is one that specifies how a document is to appear. If you have ever used multiple colors of ink or highlighter when making notes and ascribed meaning to those colors for yourself (e.g., yellow highlighter is important, red ink is a definition), then you have already practiced document markup. You are providing additional layers of information along with the written text, in this case visual cues as to the purpose of the written information. Some popular markup languages are hypertext markup language (HTML), extensible markup language (XML) and extensible hypertext markup language (XHTML). These were each created to fulfill particular needs in defining the layout and structure of the material. Hypertext markup language is used to aid in the publication of web pages by providing a structure that defines elements like tables, forms, lists and headings, and identifies where different portions of our content begin and end. It can be used to embed other file formats like videos, audio files, and documents like PDFs and spreadsheets, among others. HTML is the most relied-upon language in the creation of web sites. In this text we will focus on HTML5. While it is technically still in draft form, many proposed elements are already supported by the newer versions of most of the popular browsers.
In the beginning, back in the first days of the Internet and ARPA, the primary purpose of creating a page was to share research and information. HTML tags were only meant to provide layout and formatting of a page. As such, early implementations of HTML were somewhat limited, as there was little demand for features beyond the basics. Headings, bullets, tables, and color were about all developers had to utilize. As sites were created for other, more commercial uses, developers found creative ways of using these tools to get their pages looking more like magazines, advertisements, and what they had drawn on paper. Having been one of those developers, I recall the days of just-get-it-looking-right techniques: splicing page-sized images into tables so graphics were (usually) where we wanted them, nesting tables within tables to create complex layouts, and other methods that violate today’s best practices.
While not formally finalized, many browsers already support a number of features proposed in drafts of HTML5, including things like canvas and media support that greatly improve the browser’s ability to process and display complex materials without requiring extensive coding and extensions. In the past, sites that used video and audio players had to integrate support for many players, and would have to include the libraries and formatted files for those systems in their sites. By providing a solution for using these media forms within HTML5, we can improve the user experience and reduce the effort necessary to provide them. While these new features do reduce the amount of programming required to implement higher-level elements, and include interactive elements that exceed document markup activities, HTML5 is still considered a markup language.
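Since individual tags are covered in later sections, a small illustrative HTML5 fragment may help here. It is a sketch rather than an example from this text, showing the structural elements named above and the native media support just discussed (clip.mp4 is a placeholder file name):

```html
<!DOCTYPE html>
<html>
  <head>
    <title>Sample Page</title>
  </head>
  <body>
    <h1>Main Heading</h1>
    <p>Paragraph text, with a <a href="https://www.example.com">link</a>.</p>
    <ul>
      <li>First list item</li>
      <li>Second list item</li>
    </ul>
    <!-- HTML5 media element: no third-party player library required -->
    <video src="clip.mp4" controls width="320"></video>
  </body>
</html>
```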
In these languages, we use tags to ascribe additional meaning to our text. Tags provide instruction to the browser as to how to display the text on the screen, but are not necessarily displayed to the user. In HTML and XHTML these tags are fixed, or predefined, meaning the names that can be used in tags are limited to what browsers are able to recognize. In XML, tags are defined by the person creating the content, as they are typically used in conjunction with data sources and provide information about the data. The World Wide Web Consortium, or W3C, is an international community that supports web development through the creation of open standards that provide the best user experience possible for the widest audience of users. This group of professionals and experts comes together to determine how CSS and HTML should operate, what tags should be included as features, and more. The W3C is also your best reference point in determining the accessibility of your site, through the use of tools that analyze your code for W3C compliance. These tools confirm whether you have fully implemented elements in your code, like providing alternate text descriptions of images in the event that an image cannot load or the user is visually impaired. In addition to the creation of accessibility standards, among many others, the W3C also provides tutorials and examples, and is likely the most exhaustive reference you will find.
CSS stands for cascading style sheets, and it is used to create rules about the color, font, and layout of our pages. It also determines when those rules are to be used, based on information like the device connecting to the page, or in response to a user’s action. CSS can be used not only by HTML but by any XML-based language. By separating as much of the look and feel of a page from the HTML as possible, we separate content from appearance. This makes it possible to quickly create several different versions of the appearance of our site without recreating the content in each version. Our best approach is to use HTML to define the structure (and only the structure) of our pages whenever possible, laying the groundwork for CSS to know where to apply the actual style.
As HTML grew in popularity, demands on its feature set also grew. Combined with the variety of browser implementations and their varied approaches to rendering and support, creating robust, visually appealing sites involved a significant amount of time and effort. To reduce this effort, and to separate the duties of presentation from those of content, proposals were sought to define a new system for managing these features. CSS was born out of CHSS, or Cascading HTML Style Sheets, and extends our capabilities by allowing us to go far beyond the styling limits of HTML, giving us more power over images, making pages appear more newspaper- or magazine-like with layout and typography effects, and reducing load time. Introduced for public use in 1996, CSS1 contained the ability to apply rules by identifying elements (selectors), and most of the properties still in use today. CSS2 added the ability to adapt for different displays and devices, as well as positioning elements by specific values on the page. CSS2.1 followed with the introduction of additional features, but these were not considered substantial enough to warrant a full version number change. While commonly referred to as CSS3, the numbering no longer applies to the language as a whole.
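Before the version story continues below, here is a minimal sketch of the rule syntax just described: a selector chooses the elements, and the declarations inside the braces set their appearance (the .highlight class name is purely illustrative):

```css
/* An element selector: applies to every <p> in the page. */
p {
  color: #333333;              /* text color */
  font-family: Georgia, serif; /* first available font is used */
  line-height: 1.5;
}

/* A class selector: any element marked class="highlight" shares this rule. */
.highlight {
  background-color: yellow;
}
```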
The developers have decided to break the language into modules, allowing different aspects of the language to be revised and released independent of one another. This allows for stable modules to stay numbered as they are (since they are not actually changing), while those under more active development can be pushed out as needed. At the moment, most of the “current” modules are at version number 3. Some have not really changed from 2.1, while work on version 4 of selected modules is already underway. Our ability to manipulate and create webpages consistently across formats comes from the document object model API, typically referred to as DOM. This API defines the order and structure of document files as well as how the file is manipulated to create, edit, or remove contents. The DOM is built to be language and platform independent so any software or programming language can use it to interface with documents. It defines the interface methods and object types that represent elements of documents, the semantics and behavior of attributes of those objects, and also defines how they relate to one another. The DOM, effectively, is what gives rise to the tags we are about to study below. Languages that use the DOM, however, are not required to include all of its features, and may generate additional features of their own. Figure 20 depicts an example of a document’s model in tree format, with nested elements appearing to the right and below their parents. In this example, we are shown an HTML page with a section for the head and the body, which includes a page title and a link as its contents. This structure provides the ability for us to traverse, or move around the document, by referring to an object’s name or attribute.
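As a small sketch of what traversal looks like in practice (hypothetical browser JavaScript, not code from this text; it assumes a page like the one just described, with a title in the head and a link in the body):

```javascript
// Traversing and modifying a simple page through the DOM API.
const link = document.querySelector("a");   // first <a> element in the document
console.log(link.parentNode.nodeName);      // name of the element containing the link
console.log(document.title);                // contents of the <title> element

// Elements can also be created and attached through the same API:
const note = document.createElement("p");
note.textContent = "Added via the DOM.";
document.body.appendChild(note);
```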
Definition and Overview
Cholecystectomy is a gallbladder removal procedure performed primarily to treat gallstones. It can be carried out using either the traditional open technique or a minimally invasive method. The gallbladder is a pouch- or pear-shaped organ that collects and stores bile, a digestive fluid produced by the liver.
Who Should Undergo and Expected Results
Cholecystectomy is often performed when gallstones have formed in the gallbladder. Gallstones are a fairly common problem and are believed to develop due to an imbalance of the compounds found in bile, primarily cholesterol. Some suggest that eating food that’s high in cholesterol could be the reason, but some studies suggest that there is no specific diet that can prevent or reduce the risk of gallstones. Gallstones may also occur due to high amounts of bilirubin in the bile. The excessive level may be caused by damage to the liver, which breaks down red blood cells. The liver may already be scarred (cirrhosis), or there may be a tumour growing in the organ. It’s also possible that the bile ducts are infected. The stones appear in different colours, which may indicate the possible reason for their formation. If they’re pigmented, they may have developed due to high bilirubin. If they’re yellow, they are caused by high cholesterol. A patient can also have mixed gallstones. Gallstones have several risk factors, such as age, ethnicity, obesity, and gender. They are more common among women, as estrogen raises the amount of cholesterol in bile. A patient can develop and accumulate gallstones without knowing it. Symptoms usually appear when the stones begin to block the biliary ducts or there is already an infection. Some of these symptoms include abdominal pain, fever, indigestion, itchy skin, appetite loss and jaundice (yellowing of the skin and the whites of the eyes). A person who has undergone gallbladder removal can usually digest food with hardly any problems. However, some may experience bloating, bowel movement problems, and reduced ability to absorb fat-soluble vitamins.
How Does the Procedure Work?
As mentioned earlier, cholecystectomy can be performed either through the open (traditional) method or the laparoscopic (keyhole) technique. To help the surgeon decide, different exams will be conducted, including blood tests that can determine the overall function and health of the liver and other vital organs. An imaging examination can also be helpful in determining the exact position of the gallstones, the condition of the bile duct and the gallbladder, and the most suitable surgical procedure for the patient. Usually, open surgery is recommended if the patient needs a liver transplant, there is substantial scarring in the area, or the patient has previously had an operation on the liver, bile duct, or gallbladder. A consultation is also necessary to prepare the patient, who will be advised to quit smoking a few weeks before the surgery and to stop taking medications that may increase bleeding or prevent clotting. The surgeon will also discuss the risks and complications of the procedure, possible outcomes, follow-up care, and nutrition after the gallbladder has been removed. In open surgery, a large incision (about 6 inches) is made in the belly area to fully expose the liver and the gallbladder. The gallbladder is then detached using tools such as electrocautery before sutures are used to close the incision.
If the surgery is to be performed using the laparoscopic method, four small incisions are made in the abdominal area. Gas is used to expand the abdominal cavity while a laparoscope, a probe with a camera and a light, is inserted into one of these incisions. The camera delivers real-time images of the organs to a monitor to guide the surgeon during the operation. Using microsurgical instruments, the gallbladder is removed, and the incisions are sutured. In both procedures, the patient is placed under general anaesthesia. Usually, anticoagulants and antibiotics are given to the patient before surgery to minimise certain risks such as infection and bleeding. The operation may take around an hour to complete. Patients who have undergone open surgery are typically advised to stay in the hospital for up to five days so their condition can be closely monitored. Meanwhile, keyhole surgery patients are usually allowed to go home after spending one night in the hospital.
Possible Risks and Complications
A surgical gallbladder removal may cause bleeding, infection and pain. These risks are more likely to occur in open surgery than in a laparoscopic procedure. Other possible complications are bile leakage, injury to other organs including the intestine, inflammation of the pancreas, and development of blood clots. There is also a small chance that sepsis will develop. This is a life-threatening condition characterised by systemic inflammation that can lead to organ damage or failure.
Female (♀) is the sex of an organism, or a part of an organism, that produces non-mobile ova (egg cells). Barring rare medical conditions, most female mammals, including female humans, have two X chromosomes. Female characteristics vary between species, with some species having more well-defined female characteristics. Both genetics and environment shape the prenatal development of a female. The ova are defined as the larger gametes in a heterogamous reproduction system, while the smaller, usually motile gamete, the spermatozoon, is produced by the male. A female individual cannot reproduce sexually without access to the gametes of a male, and vice versa. Some organisms can also reproduce by themselves in a process known as asexual reproduction. An example of asexual reproduction that some female species can perform is parthenogenesis. There is no single genetic mechanism behind sex differences in different species, and the existence of two sexes seems to have evolved multiple times independently in different evolutionary lineages. Patterns of sexual reproduction include:
- Isogamous species with two or more mating types with gametes of identical form and behavior (but different at the molecular level),
- Anisogamous species with gametes of male and female types,
- Oogamous species, which include humans, in which the female gamete is very much larger than the male and has no ability to move. Oogamy is a form of anisogamy.
There is an argument that this pattern was driven by the physical constraints on the mechanisms by which two gametes get together as required for sexual reproduction. Other than the defining difference in the type of gamete produced, differences between males and females in one lineage cannot always be predicted by differences in another. The concept is not limited to animals; egg cells are produced by chytrids, diatoms, water moulds and land plants, among others. In land plants, female and male designate not only the egg- and sperm-producing organisms and structures, but also the structures of the sporophytes that give rise to male and female plants.
Etymology and usage
The word female comes from the Latin femella, the diminutive form of femina, meaning "woman". It is not etymologically related to the word male, but in the late 14th century the spelling was altered in English to parallel the spelling of male.
A distinguishing characteristic of the class Mammalia is the presence of mammary glands. The mammary glands are modified sweat glands that produce milk, which is used to feed the young for some time after birth. Only mammals produce milk. Mammary glands are most obvious in humans, as the female human body stores large amounts of fatty tissue near the nipples, resulting in prominent breasts. Mammary glands are present in all mammals, although they are seldom used by the males of the species. Most mammalian females have two copies of the X chromosome, as opposed to the male, which carries only one X and one smaller Y chromosome (but some mammals, such as the platypus, have different combinations). To compensate for the difference in size, one of the female's X chromosomes is randomly inactivated in each cell of placental mammals, while the paternally derived X is inactivated in marsupials.
In birds and some reptiles, by contrast, it is the female which is heterogametic and carries a Z and a W chromosome, whilst the male carries two Z chromosomes. Intersex conditions can also give rise to other combinations, such as XO or XXX in mammals, which are still considered female so long as they do not contain a Y chromosome, except for specific cases of testosterone deficiency/insensitivity in XY individuals while in the womb. However, these conditions frequently result in sterility. Mammalian females bear live young (with the rare exception of monotremes, which lay eggs). Some non-mammalian species, such as guppies, have analogous reproductive structures; and some other non-mammals, such as sharks, whose eggs hatch inside their bodies, also have the appearance of bearing live young.
A common symbol used to represent the female sex is ♀ (Unicode: U+2640, Alt code: Alt+12), a circle with a small cross underneath. According to Schott, the most established view is that the male and female symbols "are derived from contractions in Greek script of the Greek names of these planets, namely Thouros (Mars) and Phosphoros (Venus). These derivations have been traced by Renkema who illustrated how Greek letters can be transformed into the graphic male and female symbols still recognised today." Thouros was abbreviated θρ, and Phosphoros Φ, both in the handwriting of alchemists and so somewhat different from the Greek symbols we know. These abbreviations were contracted into the modern symbols.
The sex of a particular organism may be determined by a number of factors. These may be genetic or environmental, or may naturally change during the course of an organism's life. Although most species with male and female sexes have individuals that are either male or female, hermaphroditic animals have both male and female reproductive organs. The sex of most mammals, including humans, is genetically determined by the XY sex-determination system, where males have X and Y (as opposed to X and X) sex chromosomes. During reproduction, the male contributes either an X sperm or a Y sperm, while the female always contributes an X egg. A Y sperm and an X egg produce a male, while an X sperm and an X egg produce a female. The ZW sex-determination system, where males have ZZ (as opposed to ZW) sex chromosomes, is found in birds, reptiles and some insects and other organisms. Members of Hymenoptera, such as ants and bees, have their sex determined by haplodiploidy, where most males are haploid and females and some sterile males are diploid. The young of some species develop into one sex or the other depending on local environmental conditions, e.g. many crocodilians' sex is influenced by the temperature of their eggs. Other species (such as the goby) can transform, as adults, from one sex to the other in response to local reproductive conditions (such as a brief shortage of males).
- Ayers, Donald M. English Words from Latin and Greek Elements. 2nd ed. Tucson: University of Arizona Press, 1986.
- Christopher Alan Anderson. "The Metaphysics of Sex ...in a Changing World!". Retrieved June 13, 2015.
- Dusenbery, David B. (2009). Living at Micro Scale, Chapter 20. Harvard University Press, Cambridge, Massachusetts. ISBN 978-0-674-03116-6.
- Online Etymology Dictionary - Female (n.). Retrieved 2010-11-21.
- Swaminathan, Nikhil. "Strange but True: Males Can Lactate". Scientific American.
- Schott GD.
Sex, drugs, and rock and roll: Sex symbols ancient and modern: their origins and iconography on the pedigree. BMJ 2005;331:1509-1510 (24 December), doi:10.1136/bmj.331.7531.1509
- Renkema HW. Oorsprong, beteekenis en toepassing van de in de botanie gebruikelijke teekens ter aanduiding van het geslacht en den levensduur. In: Jeswiet J, ed. Gedenkboek J Valckenier Suringar. Wageningen: Nederlandsche Dendrologische Vereeniging, 1942: 96-108.
Hurricanes and Tornados in America
Hurricanes and tornados have been in the news a lot in the last couple of months. But what do these different names actually mean? We have explained them for you below.
A tornado is a heavy storm; it is much smaller than a hurricane but has a stronger whirlwind. A tornado often originates in a severe thunderstorm. The air currents become turbulent and can start spiralling, and from the middle of the clouds a funnel shoots down to the ground. Inside this funnel the air moves at tremendous speed, up to 450 kilometres per hour. Everything that is not bolted to the ground gets sucked up into the funnel, even garden sheds and cars. Tornados are very dangerous because pieces of glass and wood get sucked in and end up somewhere else. Tornados are a frequent phenomenon in the United States, with about 600 tornados per year.
Hurricanes originate in the hottest areas on earth. A hurricane arises above the ocean: warm water vapour and hot, humid air rise up and start spinning in an upward spiral. A hurricane can measure 400 kilometres across and brings heavy rain and enormous waves. The middle of a hurricane is called the eye. In the eye, which is about 200 kilometres wide, the winds are calm and the sky is often clear. Everything around the eye is a storm at full power. If a hurricane gets closer to land, the eye gets smaller and smaller, until it is about 50 kilometres wide. The air then spins faster and faster around the eye, and this causes major damage. The hurricane season starts in June and ends in November.
The wind speeds of a hurricane are measured on the Saffir-Simpson scale, which has categories from 1 to 5. Category 1 has wind speeds from 118 to 152 kilometres per hour; this often causes no damage to buildings, though trees can be uprooted and beaches flooded. At category 5 the wind speeds exceed 248 kilometres per hour. At this stage inhabitants need to be evacuated: there will be severe damage along the coast, waves will reach kilometres inland, and buildings can be destroyed.
What to do when there is a hurricane or tornado? If you can no longer flee, stay at home and find a safe place away from any windows, such as in a basement, in a bathtub, or in a bathroom or toilet on the lowest floor. Do not sit in a car or caravan, and stay away from windows, since loose flying glass is extremely dangerous.
The most famous hurricane was Katrina, which struck New Orleans in 2005. Several hurricanes have already caused great damage this year, such as Harvey, Irma and now hurricane Maria.
How do hurricanes get named? The names for hurricanes and storms are picked by the World Meteorological Organization (WMO), which keeps six alphabetical lists of 21 names each; the names do not go past the letter ‘W’. After six years the lists are reused, which means that hurricanes and storms can get the same name. This doesn’t have to be the case: a country that a hurricane has struck can ask for the name not to be reused. Since hurricane Katrina caused so much damage in 2005, the United States chose to have it taken off the list of names.
Ireland is a parliamentary democracy. The President (Uachtarán na hÉireann) is the head of state and the commander in chief of the armed forces, while the Prime Minister (Taoiseach) is the head of government. The deputy head of government (Tánaiste) deputizes for the Prime Minister. Although the President is elected by popular vote, the post is largely ceremonial because political power is vested in the Prime Minister. There are three arms of the Irish government: the executive, the legislature, and the judiciary.
The executive arm of the Irish government consists of the President, the Prime Minister and his deputy, and Cabinet ministers. The constitution limits the number of cabinet ministers to 15 or fewer, and they must be members of parliament. The executive is responsible for the daily operation of the country. The ministers head their respective Ministries and ensure that services are delivered to the citizens accordingly. The executive is also responsible for the country's foreign relations.
The judiciary consists of the district court, circuit court, high court, court of appeal, and the Supreme Court. The function of the Supreme Court is to resolve issues pertaining to the interpretation of the constitution, while the court of appeal resolves appeals. The other courts resolve matters that affect the citizenry, including criminal and civil issues. The constitution, common law, and statutory law are used in the country when administering justice. Juries are not common in Ireland, but serious offenses might be heard by one under the common law.
The legislature of Ireland is known as the Oireachtas Éireann. It consists of the president and the two chambers of parliament: the Dáil Éireann, or Lower House, and the Seanad Éireann, or Senate. The lower house consists of 158 members representing forty constituencies, while the Senate has 60 members. Both houses hold their sessions at Leinster House in the capital city, Dublin. The function of the legislature is to formulate, amend, or repeal laws. It is the only arm of the government that can do so, except for the European Parliament on matters that concern the European Union.
The citizenry engages in direct elections to choose the president, members of the Dáil Éireann, the European Parliament, and local government. The members of the Senate are partly elected indirectly, partly nominated, and partly elected by the university constituency method. There are five types of elections in Ireland: local, European, parliamentary, presidential, and referendums. Irish citizens can vote in all elections; British citizens can vote in all except referendums and presidential elections; citizens of other European Union countries can vote only in local and European elections; and non-European Union citizens can vote only in local elections. The president is elected for a seven-year term, with a maximum of two terms. The lower house of the parliament nominates the prime minister, who is then appointed by the president.
Water is a transparent and almost colorless chemical and the main constituent of Earth's streams, lakes, and oceans. The chemical name of water is dihydrogen oxide. Without water, life on Earth would be impossible: our heart, brain and all other vital organs could not exist without it. All plants and animals must have water to survive. If there were no water on Earth, there would be no life.
Water needs to be considered an essential nutrient. According to experts, water is ranked second only to oxygen as essential for life. Your body uses water in all its cells, organs, and tissues to help regulate temperature and maintain other body functions, and it expels waste through breathing, sweating, and digestion. The amount of water that you need depends on many factors, including the climate where you live, how physically active you are, and whether you have an illness or any other health problems.
The human body is primarily water: your body is composed of about 60% water. Drinking water helps maintain the balance of body fluids. Water provides the medium to make your blood, helps move food through your digestive tract, and discharges waste from every cell into the circulatory system. The pH of pure water is 7, and the human body maintains a pH range of 7.35-7.45 for proper physiological processes. Water also regulates the internal temperature of the body in response to the outside temperature; the normal human body temperature is 98.6 degrees Fahrenheit. Water plays a role in the distribution of oxygen throughout the human body, while simultaneously collecting carbon dioxide from all parts of the body, dissolving both gases.
According to the International Institute of the Kidney, it is necessary to consume 2 liters, or 10 glasses, of water every day to prevent the formation of kidney stones. The back depends on the spinal cord, and the core of each spinal disk is made up of a large volume of water, so dehydration can lead to back pain in many individuals.
Executives of the UN World Food Programme have visited Ethiopia to assess the damage of the drought first hand, subsequently announcing that the country needs immediate aid to help manage the drought's effects. Unfortunately, the UN did not have the monetary resources to provide the quantity of aid required, forcing it to plead with other countries to pitch in and help. This crisis highlights the importance of international awareness of drought-caused famine and outlines the need for more privileged countries to provide aid. The United States, Canada, and several European nations, as part of the UN, have provided or will provide resources to Ethiopia and its people to alleviate the effects of the drought. This is not the first time Ethiopia has experienced drought. The El Niño conditions in 2002 caused a similar situation for the farmers of Ethiopia, inhibiting their ability to grow crops. The required aid did not arrive in Ethiopia until early 2003, dramatically increasing the rate of acute malnutrition. The UN has assessed what occurred in 2002 and 2003 and has established a plan so that Ethiopia will receive the aid it needs promptly, by way of private donations.
Various scientific studies have shown that human populations respond differently to immune system challenges. But newly published cell research suggests that humans of African descent have a genetic advantage over their European peers. According to findings from two studies published in the journal Cell, people of African ancestry mount a stronger immune response to infection than people of European ancestry. The first-of-their-kind findings could point researchers toward future treatments for reducing chronic illness in African-Americans. The first study, led by Luis Barreiro, assistant professor at the University of Montreal's Department of Pediatrics, was conducted by extracting white blood cells from 175 Americans, 80 of African descent and 95 of European descent, DailyTech.com reports. Those cells were then exposed to Salmonella and Listeria bacteria, and researchers monitored the response in a controlled lab environment. Researchers found that almost 24 hours after infection, the white blood cells of Black Americans had destroyed the harmful bacteria three times faster than the white blood cells of European Americans. "The strength of the immune response was directly related to the percentage of genes derived from African ancestors," Barreiro said. "Basically, the more African you have in your genome, the stronger you're going to respond to infection." The second study, which analyzed genetic differences in RNA sequencing between African and European genomes, found that the introduction of Neanderthal variants into the European genome resulted in decreased pro-inflammatory immune responses to infection in people of European ancestry. Researchers from the Pasteur Institute in France suspect that the genetic variations between African-Americans and whites can be traced to the fact that Neanderthals interbred with the ancestors of Europeans, not Africans, before going extinct. While both studies concluded that African-American immune systems are more effective at fighting off bacteria and infections, Barreiro was careful not to label the immune systems of African descendants as "better" than those of European descent. The downside of a strengthened immune system is that it leaves African-Americans more susceptible to developing inflammatory autoimmune diseases like lupus and Crohn's disease, he noted. "The genes and pathways we've identified constitute good candidates to explain differences we are seeing in disease between the two population groups," Barreiro stated. "... Our results demonstrate how historical selective events continue to shape human phenotypic diversity today, including for traits that are key to controlling infection." There has long been a false belief that African-Americans are genetically inferior to whites, which highlights the role that racial bias plays in the fields of medicine and science. Another study published earlier this year examined racism among medical professionals and its impact on disparate health outcomes between Black and white patients. "Implicit bias and false beliefs are common — indeed, we all hold them — and it's incumbent on us to challenge them, especially when we see them contributing to health inequities," the report read. Per DailyTech.com, future cell research studies will examine the influence of other factors like environment and behavior on differences in immune response.
It’s hoped that future studies like those conducted by Barreiro and Lluis Quintana-Murci of France’s Pasteur Institute will finally put these myths of Black inferiority to bed.
HISTORY OF MALAYSIAN INDIAN COMMUNITY
Historical records show that Indian influence on Malaya can be traced as early as the first century. Indians played a prominent role in the Malay archipelago as merchants who traded valuables such as spices, textiles, fabrics and gold. The Indians of that time were also skilled sculptors and were active in maritime trade, as Indian ships were the main players in this region during this era. As a result, on a political level, they established many diplomatic ties with nations in the Southeast Asian region. On a societal level, many traders married and integrated with the locals, settling in the Malay peninsula. The artefacts found in the Bujang Valley, the Avalokiteswara statue, and the archaeological remains from Rajendra Chola's conquest of Kedah (Kadaram) in the 11th century and the Srivijaya Empire are significant relics documenting their presence in the Malay kingdom.
Migration of Indians to Malaysia began in 1786, when British colonial offices opened in Penang. The influx of Indians grew in the mid 19th century due to British intervention in India and Malaya: the British used their political influence to bring Indian labourers to Malaya, through agents in India, to work in the plantations. In 1819, migration increased further after the founding of Singapore. The British also brought Indian convicts to Malaya as labourers, because of the insufficient labour supply and the high demand created by government projects in the construction of railways and roads and in agriculture. The presence of immigrant communities, especially Indians and Chinese, created a new community structure in Malaya, Sabah and Sarawak.
Indians contributed socially and economically after Malaya's independence in 1957. After 13 May 1969, the government drafted and introduced the New Economic Policy (NEP) to reduce and eradicate poverty by creating job opportunities and raising incomes for all races, as well as expediting the process of community restructuring to achieve socioeconomic equality in a very plural Malaysia. The government has carried out various efforts and strategies to create opportunities for Malaysian Indians to be actively involved in the economic sector. The education sector developed rapidly as many schools were built. In 1956, the Razak Report was formulated as one of the important steps to strengthen the nation's education policy. The policy has increased the number of Tamil schools in rural Malaysia, and they have been given status as national schools. As a result of the Razak Report, Tamil was and is presently taught in primary and secondary schools as a core or elective subject. Malaysian Indians continue to thrive as global and local players in education, the economy, culture, religion and society, and have achieved tremendously since their forebears' arrival in this region and country. Under MITRA, the unit will endeavour to ensure all Indians are included in the rapid development in these aspects.
Honors Bio Cumulative Final
Terms in this set (305)
What are carbohydrates also known as?
Name three types of carbohydrates: glucose, fructose, galactose
Glucose + glucose: maltose + H2O
Glucose + fructose: sucrose + H2O
Monosaccharide + monosaccharide
Carbohydrates that are made up of more than two monosaccharides
A single sugar molecule such as glucose or fructose; the simplest type of sugar.
A double sugar, consisting of two monosaccharides joined by dehydration synthesis.
A molecule made by covalently bonding monomers together
Big covalently bonded molecules
What are the four types of macromolecules? Carbohydrates, lipids, proteins, nucleic acids
Do lipids form polymers? NO - unlike carbohydrates, amino acids, and nucleic acids, they do not
Breaking down polymers: cells do this by a process called hydrolysis (water is added to split a bond)
Dehydration synthesis reaction: water is removed
Glucose versus fructose: glucose is shaped like a hexagon and fructose is shaped like a house
Repelled by water (hydrophobic)
Attracted to water (hydrophilic)
Functions of proteins: transportation in the cell membrane, storage, defense, cellular communication, enzymatic activity (speeding up reactions)
What is a protein? A biologically functional molecule that consists of one or more polypeptides folded and coiled into a specific structure
What are polypeptides? Polymers of amino acids
What are rings? The families of nitrogen bases are distinguished by their number of rings
How many rings do purines have? Two (A, G)
How many rings do pyrimidines have? One (C, T, U)
What is the monomer of a nucleic acid? A nucleotide, which contains a nitrogen base, a five-carbon sugar and a phosphate group
What does a nucleic acid do? Provides instructions for synthesizing proteins and stores genetic information.
What is the structure of DNA? Double helix (twisted ladder)
What are DNA's nitrogen bases?
Is DNA antiparallel or parallel?
What sugar does DNA have?
What are the nitrogen bases of RNA?
What is the structure of RNA?
What sugar does RNA contain?
What does DNA stand for?
What does RNA stand for?
What is a substrate?
What is the active site? Where the substrate bonds
The temperature and pH at which the enzyme works best and is most effective is called what?
What happens at anything off the optimum temperature or pH? Anything past this will cause the denaturing of the protein
A specialized protein that acts as a catalyst, speeding up a chemical reaction by lowering the activation energy.
What does a catalyst do? Lowers activation energy to speed up a reaction
Enzymes: are they reusable or not? Enzymes can be reused; they are not changed or used up in the process
What is activation energy? Energy needed to start a reaction
All enzymes have an optimal pH they work in, determined by their place of work.
Enzymes have an optimum pH; any pH above or below this will make the enzyme denature. True or false?
As temperature increases, enzyme activity increases until it reaches the optimum temperature; any temperature above this will cause enzyme activity to decrease. This happens because the bonds that hold the protein in its 3-D shape denature.
The ability of the enzyme to do its job can be affected by temperature, pH, cofactors and coenzymes, and inhibitors.
Effect of cofactors and coenzymes: help the substrate bind to the enzyme and help regulate enzyme activity in a chemical reaction; they can be inorganic (no carbon) or organic (with carbon).
Inhibitors decrease enzyme activity and can be competitive or noncompetitive.
Competitive: similar shape to the substrate; binds to the active site and stops the enzyme from bonding there.
Noncompetitive: not shaped like the substrate; does not bind to the active site but binds anywhere except the active site, causing the active site to change shape so the substrate can't bind there.
Glycerol + 3 fatty acid tails (a triglyceride)
Uses of triglycerides: fat energy storage in adipose cells, insulation
Unsaturated fatty acid tails: curved, with a kink at the double bond; not as compact, making them liquid at room temp; not completely surrounded by H
Saturated fatty acid tails (shape): straight
Saturated fatty acid tails: can be compacted
Saturated fatty acid tails at room temp: solid
Saturated carbons have single bonds and are completely surrounded by hydrogen
Also known as fats; they are hydrophobic (non-polar, doesn't like water), while hydrophilic means likes water. A subunit of a lipid is glycerol and fatty acid tails, which are made of a hydrocarbon chain and a carboxyl group
What is the subunit of a lipid? Glycerol head and fatty acid tails
Water is attracted to itself because it is polar, which creates hydrogen bonds (example: surface tension)
4°C is when water is densest; as water becomes a solid, it becomes less dense, causing it to float. This happens because the hydrogen bonds force space between the molecules
Evaporative cooling (a.k.a. sweating): maintains internal temp by getting rid of the hottest molecules; hydrogen bonds break and re-form
The attraction between a slightly positive hydrogen in one molecule and a slightly negative electronegative atom in a different molecule (a hydrogen bond)
Give or take electrons to have a full valence shell of eight electrons; a.k.a. attraction of opposite charges (ionic)
When atoms share, give, or take electrons to have a complete valence shell and be stable: sharing is covalent, give-and-take is ionic
Covalent bonds are formed between two nonmetals; equal sharing of electrons
Having a pair of equal and opposite charges (polar) results in a polar molecule: one side is positive and the other side is negative, due to electronegativity (the ability to attract electrons from another atom); example: water
Water is attracted to polar, charged molecules
What is ATP? ATP is chemical energy: adenosine triphosphate, made up of three phosphate groups and a ribose (five-carbon) sugar
Explain how ATP is created from ADP; write the equation. Made through dehydration synthesis. This requires energy, so it's endergonic. Equation: ADP + Pi (+ energy) yields ATP + H2O
Explain how ATP is broken down; include the equation. ATP is broken down through hydrolysis. This breaks the ATP between the phosphate groups.
This releases energy, making it exergonic. Equation: ATP + H2O yields ADP + Pi + energy
Give examples of what type of work the energy released by ATP is used to do. Chemical work: the energy is used to power another chemical reaction (this is called coupling); ATP breaks into ADP plus Pi, and the Pi bonds to something, usually a protein, and changes its shape. Transport work: active transport and bulk transport. Mechanical work: movement of cilia, movement of chromosomes during cell division, and muscular contraction
How is energy released when ATP is broken down? The phosphate groups are negatively charged, which makes them unstable; when bonds are broken, the molecule (ADP) becomes more stable, which releases energy. The energy is not in the bond.
What is a catabolic reaction? Give three examples. Breaking down of complex molecules into simpler ones. Examples: cellular respiration (aerobic: needs oxygen), anaerobic (doesn't need oxygen), fermentation (oxygen is needed but not available)
Explain redox reactions. Reduction: addition of an electron to a molecule, usually a protein. Oxidation: loss of an electron from a molecule, usually a glucose
Explain what transports electrons. How does this molecule obtain electrons, and how does having the electrons affect the molecule (redox)? NAD+ transports electrons; when two electrons and a hydrogen ion are added, it becomes NADH, the reduced form, and it carries electrons. NAD+ is the oxidized form with no electrons. The same goes for FAD and FADH2
What are the two types of ways to make ATP? Glycolysis and the citric acid cycle/Krebs cycle
Explain substrate-level phosphorylation. ATP is created when an enzyme removes a Pi from a molecule and bonds it to an ADP to make ATP
What is cellular respiration and where does it occur? The oxidation of glucose to produce ATP. It is a catabolic reaction. All energy comes from the sun. Occurs in the mitochondria.
What are the parts of cellular respiration? Glycolysis, oxidation of pyruvate, citric acid cycle/Krebs cycle, oxidative phosphorylation (ETC and chemiosmosis)
Where does glycolysis occur? Is oxygen needed? Is a mitochondrion needed? Occurs in the cytosol/cytoplasm. Oxygen is not needed. Can occur in prokaryotes, so a mitochondrion is not needed
What is the purpose of glycolysis? To break down glucose into pyruvate and make ATP
What does glycolysis require? Glucose, 2 ATP, 2 NAD+
What are the products of glycolysis? 2 pyruvate, 2 ATP (net gain), 2 NADH
Explain the oxidation of pyruvate, step by step. The two pyruvate are actively transported through transport proteins, through both membranes, into the matrix of the mitochondrion. They lose one carbon each in the form of CO2 and become a two-carbon molecule. This is oxidized by NAD+, forming NADH, and a two-carbon acetate is formed. This bonds with CoA, making acetyl CoA, which has two carbons. This happens twice because there are two pyruvate.
Where does the oxidation of pyruvate occur? How many times does it occur for one glucose molecule? It occurs twice, in the matrix of the mitochondrion.
What is created by the oxidation of pyruvate? 2 acetyl CoA, 2 CO2
Where does the Krebs cycle, or citric acid cycle, occur? How many times does it occur for one glucose molecule? It occurs in the matrix of the mitochondrion and happens twice
What is the beginning/ending molecule of the Krebs cycle? Oxaloacetate (four carbons)
What is created by the Krebs cycle? 6 NADH, 2 ATP, 2 FADH2, 4 CO2 (and oxaloacetate)
Explain the role of NADH in the ETC.
Explain the flow. NADH is the carrier of electrons; it drops off its two electrons at protein 1, and they pass through proteins 1, 3 and 4. Electronegativity increases from protein to protein: electrons go from protein 1 to protein 3 to protein 4 to oxygen. Oxygen is the final electron acceptor, and it will bind two hydrogen ions to become H2O. Energy is released between the proteins, and that energy is used to actively transport hydrogen ions from the matrix into the intermembrane space.
Explain the role of FADH2 in the ETC; explain the flow. FADH2 has the same job as NADH. The electrons carried by FADH2 come from the Krebs cycle. They go from protein 2 to 3 to 4 to oxygen, where they also form H2O
What is created at the end of the ETC? Six H2O and a hydrogen ion gradient, a.k.a. the proton motive force
Explain the process of chemiosmosis. Hydrogen ions from the intermembrane space diffuse through an ATP synthase, creating enough kinetic energy for an ADP + Pi to phosphorylate into ATP
What is the chemical equation for cellular respiration? C6H12O6 + 6 O2 -> 6 CO2 + 6 H2O + 30-32 ATP
What does anaerobic mean? Without oxygen; does not require oxygen. It means an organism doesn't need or use oxygen, and it has a different final electron acceptor. Example: some bacteria use sulfate as the final electron acceptor, forming H2S instead of H2O
What are the two situations that exist for anaerobic processes to occur? Anaerobic respiration and fermentation
Anaerobic respiration, explained and compared to aerobic respiration: the organism doesn't need or use oxygen and has a different final electron acceptor (for example, some bacteria use sulphate as the final electron acceptor, forming H2S, not H2O). Aerobic respiration needs and uses oxygen, and oxygen is the final electron acceptor, forming H2O
Explain what happens when oxygen isn't present in an organism that is aerobic. This will cause a lack of NAD+, which halts all processes of cellular respiration, so fermentation will happen
What is the purpose of fermentation? To keep the processes of cellular respiration going when there is no longer oxygen. It allows the regeneration of NAD+ so glycolysis can continue and ATP is made
Explain (and be able to draw) lactic acid fermentation. What organisms use this process? The electrons from the NADH are brought to the two three-carbon pyruvate, which then form two lactic acids; the NADH, after dropping off its two electrons at the pyruvate, is oxidized back into NAD+. This occurs in muscle cells, bacteria, and fungi.
Explain (and be able to draw) alcohol fermentation. Alcohol fermentation is when the two three-carbon pyruvate release two carbon dioxide and form two acetaldehyde, which receive the two electrons dropped off by the NADH; the NADH then oxidizes back into NAD+, and the two acetaldehyde form two ethanol.
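For study purposes, the energy equations from these cards can be set out cleanly (this summary block is an addition, not one of the original cards):

```latex
% ATP synthesis (dehydration synthesis; endergonic)
\mathrm{ADP + P_i + \text{energy} \;\longrightarrow\; ATP + H_2O}

% ATP breakdown (hydrolysis; exergonic)
\mathrm{ATP + H_2O \;\longrightarrow\; ADP + P_i + \text{energy}}

% Overall aerobic respiration
\mathrm{C_6H_{12}O_6 + 6\,O_2 \;\longrightarrow\; 6\,CO_2 + 6\,H_2O} \quad (+\,\approx 30\text{--}32\ \mathrm{ATP})
```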
This occurs in yeast and bacteria.
What is a macromolecule?
A big, covalently bonded molecule.
What are the types of macromolecules in our body?
Carbohydrates, proteins, lipids, and nucleic acids.
Which of these macromolecules form polymers?
Carbohydrates, proteins, and nucleic acids; not lipids.
What is a monomer?
The building block/subunit of a polymer.
What is the monomer of a carbohydrate called? Give three examples.
A monosaccharide: glucose, fructose, galactose.
What is the name of the molecule created when two of these monomers are bonded together? Three or more bonded together?
A disaccharide; a polysaccharide.
What is the name of the process used to build these?
Dehydration synthesis.
What is the name of the process used to break them apart?
Hydrolysis.
What are the uses of carbohydrates?
Fuel and building.
Explain the two uses and distinguish the differences between the organisms that use them.
Fuel can be a quick energy source or a storage form; building is used for cell walls and exoskeletons.
What elements are in a carbohydrate and in a lipid?
Carbon, hydrogen, and oxygen.
What are the monomers of lipids?
Glycerol heads and fatty acid tails.
Describe the different types of fatty acid tails.
A fatty acid is composed of a carboxyl group and a hydrocarbon chain, and is nonpolar. Saturated: straight shape; solid at room temperature; can be packed tightly; the carbons have only single bonds and are completely surrounded by hydrogens. Unsaturated: has at least one double bond, with a bend/kink at the double bond; cannot pack as tightly, so it takes up more space; liquid at room temperature; not completely surrounded by hydrogens.
What are the different types of lipids, and what are the components of each?
Fats (triglycerides): glycerol and three fatty acid tails. Phospholipids: glycerol, two fatty acid tails, and a phosphate group. Steroids, such as cholesterol.
What is the process used to create lipids?
Dehydration synthesis.
What elements are in proteins?
Carbon, hydrogen, oxygen, and nitrogen.
Name and draw the monomer of a protein.
Draw an amino acid.
What is the name of the reaction when two amino acids are bonded together? Draw and label this reaction.
Dehydration synthesis, making a dipeptide and water.
How is one amino acid different from another? How are proteins different from one another?
Their R group differs. The amino acid sequence determines a protein's shape, and a change in shape can change the protein's function.
What are the functions of a protein?
Speeding up chemical reactions (acting as catalysts: enzymes); storage (seeds, egg whites, milk); defense against viruses (antibodies); cellular communication (receptors); structural support (collagen); transportation; hormones (insulin); motor proteins (muscle movement).
What determines the shape of a protein? Why is shape so important?
Shape is important for the protein's function; shape is determined by the amino acid sequence.
What elements are in nucleic acids?
Carbon, hydrogen, oxygen, nitrogen, and phosphorus.
What is the monomer of a nucleic acid? Draw it.
The monomer is a nucleotide, made of a nitrogenous base, a five-carbon sugar, and a phosphate group. Draw this.
What are the two types of nucleic acids?
DNA and RNA.
Explain the differences between the two types of nucleic acids.
Sugars: DNA's deoxyribose has one less oxygen than RNA's ribose. Nitrogenous bases: for DNA, A=T and C=G; for RNA, A=U and C=G. DNA is a double helix; RNA is a single strand.
Name and describe the two types of nitrogenous bases.
Purines have two rings (A, G). Pyrimidines have one ring (T, C, U).
Explain what antiparallel means. Draw an example.
Two parallel strands running in opposite directions. Draw an example.
Explain what an enzyme is, what it does, and how it does it.
Enzymes are proteins that act as catalysts. They speed up chemical reactions by lowering the activation energy.
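For a rough feel for why lowering activation energy speeds a reaction so dramatically, here is a sketch using the standard Arrhenius relation k = A * exp(-Ea/(R*T)). The relation itself is textbook chemistry, but the Ea values below are made-up numbers chosen only for scale, not figures from these notes:

```python
import math

R = 8.314               # gas constant, J/(mol*K)
T = 310.0               # roughly body temperature, K
Ea_uncatalyzed = 75e3   # hypothetical Ea without an enzyme, J/mol
Ea_catalyzed = 50e3     # hypothetical lower Ea with an enzyme, J/mol

# Ratio of Arrhenius rates; the pre-exponential factor A cancels out.
speedup = math.exp((Ea_uncatalyzed - Ea_catalyzed) / (R * T))
print(f"Lowering Ea by 25 kJ/mol speeds the reaction ~{speedup:,.0f}x")
```

Even a modest drop in Ea multiplies the rate by thousands, which is why enzymes matter so much at body temperature.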
What is activation energy?
It's the energy required to initiate a chemical reaction. Draw a graph that shows activation energy with and without an enzyme on the same graph (on a blank sheet of paper). The enzyme is not used up or changed in the process, which means it can be reused multiple times.
Draw, label, and explain all of the parts of an enzyme.
Talk about the substrate; the active site, the place on the enzyme where the substrate binds; and how together they form the enzyme-substrate complex (enzyme + substrate yields enzyme-substrate complex). Draw on a blank sheet of paper.
Draw the process of an enzyme making maltose.
Talk about the reactants glucose plus glucose, how water forms, and how enzyme names end in -ase (the enzyme maltase and the substrate maltose). Draw on a blank sheet of paper.
Explain the four things that affect enzyme activity, or the rate of the reaction. Make sure you state what the effect is, and how and why it has this effect.
Temperature: as temperature increases, enzyme activity increases until it reaches the optimum temperature; any temperature above this causes enzyme activity to decrease, because the bonds that hold the protein in shape break and the enzyme denatures.
pH: enzymes have an optimum pH; any pH above or below this causes the enzyme to denature (know the graph on page 156 of the textbook).
Cofactors and coenzymes: help the substrate bind to the enzyme, increasing enzyme activity; they can be inorganic (no carbon) or organic (containing carbon).
Inhibitors: decrease enzyme activity. Competitive inhibitors have a similar shape to the substrate, bind to the active site, and stop the substrate from bonding there. Noncompetitive inhibitors are not shaped like the substrate and do not bind to the active site; they bind anywhere but the active site, causing the active site to change shape so the substrate can't bind there.
What are the optimum temperature and pH of the enzyme below?
Look at the paper for graphs.
What are the different macromolecules?
Carbohydrates, proteins, lipids (which are not polymers), and nucleic acids.
What is a polymer and how is it created?
A molecule made by covalently bonding monomers together.
Draw and label the equation for creating a polymer.
Draw and label this on a blank sheet of paper.
Which of the molecules is not a polymer?
Lipids.
Name and explain the process or reaction for creating a polymer.
Dehydration synthesis: two monomers bond, forming a polymer and water.
Name and explain the process or reaction for breaking down a polymer.
Hydrolysis: water breaks the bond in the polymer, forming two monomers.
What are the elements in carbohydrates?
Carbon, hydrogen, and oxygen.
What is the monomer or subunit of carbohydrates?
A monosaccharide.
What are the different monosaccharides?
Glucose, fructose, and galactose.
Draw a glucose molecule.
Draw on a separate piece of paper.
What are the products when two glucose molecules are bonded together? What process is this? What if it is glucose and fructose?
Maltose and water, by dehydration synthesis; glucose and fructose give sucrose and water.
What are the products when three or more monosaccharides are bonded together?
A polysaccharide and two waters.
What are the uses of carbohydrates?
Fuel and building.
Explain what type of energy carbohydrates are.
Carbohydrates (glucose) are a quick energy source, and they are used to make ATP.
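A quick sanity check on the water bookkeeping above (two monomers release one water, three release two, and so on) as a tiny illustrative sketch, not part of the original notes:

```python
def dehydration_synthesis(n_monomers: int) -> int:
    """Water molecules released when n monomers join into one polymer:
    one water per bond formed, so n - 1 in total."""
    return max(n_monomers - 1, 0)

# Matches the notes: 2 glucose -> maltose + 1 water,
# 3+ monosaccharides -> polysaccharide + 2 waters, etc.
for n in (2, 3, 10):
    print(n, "monomers ->", dehydration_synthesis(n), "water(s) released")
```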
Explain the different ways carbohydrates are stored. Where in the organism are these molecules stored?
Animals store them as glycogen, a highly branched polymer of glucose, in the liver and muscle cells. Plants store them as starch, a less branched polymer of glucose.
Explain the different structural molecules of carbohydrates. What are they for?
Cell walls and exoskeletons. Plant cell walls are cellulose, a polymer of glucose; fungi and bacteria use chitin for cell walls. Exoskeletons are made of chitin, used by arthropods.
What is another name for cellulose in our diet? Why is it so good for us? Do we get nutritional value from it?
Fiber. It is good because it keeps our blood glucose levels low; fiber is not a nutrient, because it isn't digested or absorbed by the body.
What suffix or ending do most sugars have?
-ose.
How do animal cells respond to each type of tonicity?
Isotonic: the preferred solution. Hypertonic: water will leave the cell, and the cell will shrivel and maybe die. Hypotonic: the cell will swell and eventually lyse, or burst.
How do plant cells respond to each type of tonicity?
Isotonic: the cell lacks turgor pressure, so the plant is limp/wilted/flaccid. (Turgor pressure is the pressure exerted by the water coming into the cell as the plasma membrane pushes against the cell wall.) Hypertonic: the worst for a plant; plasmolysis happens, which is when the plasma membrane separates from the cell wall. Hypotonic: the best for a plant.
What is facilitated diffusion?
When a protein is needed for polar molecules or ions to go through the membrane.
Explain the types of proteins that help with facilitated diffusion.
Channel proteins, carrier proteins, and gated ion channels (a molecule, the ligand, has to bind for the channel to open). Draw pictures of each on blank paper.
Explain active transport.
Molecules/ions are pumped from a low concentration to a high concentration, against the concentration gradient, so it requires energy. They go through a protein pump. Examples: the Na+/K+ ion pump and the H+ pump.
Explain endocytosis in general.
Taking materials, such as nutrients, into the cell.
Explain the different types of endocytosis.
Phagocytosis: the cell engulfs a particle. Pinocytosis: extracellular fluid is engulfed by the cell to get the molecules dissolved in it (the cell doesn't want the fluid itself). Receptor-mediated endocytosis: a specific molecule binds to receptors on the plasma membrane, which congregate in one area, making a coated pit, and endocytosis occurs there. Draw a picture of each.
Explain exocytosis.
To get rid of waste, a vesicle fuses with the plasma membrane and releases its contents into the extracellular fluid.
What are the characteristics of living things? Nonliving things?
Living: made of cells, respond to stimuli, reproduce, move, use/require energy, grow, adapt to the environment, evolve, maintain homeostasis, die. Nonliving (or never-living) things are the opposite of living things.
What is the cell theory?
All living things are made up of cells; cells are the basic units of structure and function in living things; new cells are produced from existing cells.
What type of microscope do we use? How do you find the total magnification of the microscope?
A light microscope; multiply the power of the ocular lens by the power of the objective lens.
What is the main difference between prokaryotes and eukaryotes?
Eukaryotes are animal and plant cells; prokaryotes are bacteria.
Compare and contrast prokaryotes and eukaryotes.
Eukaryotes are larger (10 to 100 µm), differently shaped, younger, and more complex; they have mitochondria, lysosomes, and a Golgi. Prokaryotes are smaller (1 to 10 µm), differently shaped, older, and less complex; they have a capsule, a nucleoid, and fimbriae.
Compare and contrast plant cells and animal cells.
Plant cells have cell walls, chloroplasts, and a central vacuole; they are large rectangles (larger, about 100 µm). Animal cells have lysosomes, flagella, and centrosomes with centrioles; they are smaller (about 10 µm). Both have a plasma membrane, a nucleus, and ribosomes.
What does the plasma membrane do?
It regulates what goes in and out of the cell.
What type of surface area to volume ratio must it have to do this well? Why?
A large surface area to volume ratio, so that enough material can cross the membrane to supply the cell's volume; as a cell grows, volume increases faster than surface area.
Why do cells have villi or microvilli?
Villi are foldings on the outside of the membrane, and microvilli are smaller versions. Cells have these to increase surface area.
What is the endomembrane system?
The system of membranes in eukaryotes: the nuclear envelope (part of the nucleus), the rough and smooth ER, the Golgi, vacuoles (storage of water in plant cells), and vesicles.
Draw and label the parts of the nucleus.
On a blank sheet of paper.
What are the functions of the nucleus and nucleolus?
The nucleus stores DNA, the hereditary information, which has the info to make proteins. The nucleolus synthesizes (makes) the subunits of ribosomes.
What are the functions of the rough ER and smooth ER?
Rough ER: the site of protein synthesis; it also creates phospholipids that will become the plasma membrane. Smooth ER: synthesizes lipids (phospholipids, cholesterol, oils); stores Ca2+ (calcium ions); detoxifies drugs and poisons (liver cells have more smooth ER than other cells); metabolizes carbohydrates.
What is the function of ribosomes? Where are they located?
They synthesize proteins. They can be bound to the rough ER or the nuclear envelope, or unbound (a.k.a. free) in the cytoplasm, making proteins for the cell.
What is the function of the Golgi?
It sorts, modifies, and packages proteins, and then exports them.
What is the function of a vesicle?
It helps transport and export the proteins out of the Golgi.
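To see why a large surface-area-to-volume ratio forces cells to stay small (or to fold their membranes into microvilli), here is a quick cube calculation; the cube is just a convenient stand-in for a cell, not part of the original notes:

```python
# For a cube of side s, surface area scales with s^2 but volume with s^3,
# so the surface-area-to-volume ratio (6/s) shrinks as the cell grows.
for s in (1, 2, 5, 10):  # side length in arbitrary units
    surface_area = 6 * s**2
    volume = s**3
    print(f"side={s:2}  SA={surface_area:4}  V={volume:5}  SA:V={surface_area/volume:.1f}")
```

Doubling the side length halves the SA:V ratio, which is exactly the problem microvilli solve by adding membrane area without adding much volume.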
What is the function of the mitochondria? Draw and label it.
It is the site of cellular respiration, the chemical reaction that breaks down glucose to make ATP. Draw the mitochondrial structure.
What is the function of the chloroplast? Draw and label it.
It is the site of photosynthesis: CO2 and H2O are used to form glucose. Draw it.
Explain the endosymbiotic theory.
The ancestor of eukaryotic cells engulfed, by a form of endocytosis, an oxygen-using, non-photosynthetic prokaryotic cell and formed a mutualistic relationship with it, forming the mitochondrion. This new cell then engulfed a photosynthetic prokaryotic cell and formed a mutualistic relationship, forming the chloroplast.
What is the evidence supporting the endosymbiotic theory?
Mitochondria and chloroplasts have a double membrane; both have their own DNA; both DNAs are circular; both grow, divide, and reproduce independently from the cell; both have ribosomes, which means they make their own proteins; and both are the same size as a prokaryote (1 to 5 µm; a mitochondrion is about 1, a chloroplast about 5).
What is the fluid mosaic model?
The plasma membrane is fluid (its components move) and a mosaic (made of many different structures).
What structures make the plasma membrane a mosaic?
Phospholipids, proteins, cholesterol, and carbohydrates.
Explain each of the structures above in detail.
Phospholipids: a glycerol and two fatty acid tails; hydrophilic head, hydrophobic tails. Proteins: peripheral proteins are bound to the surface of the membrane, either inside or outside; integral proteins are embedded in the plasma membrane but do not go all the way through; transmembrane proteins go all the way through the lipid bilayer. Cholesterol: a lipid with four fused rings that sits between the fatty acid tails. Carbohydrates: glycolipids/glycoproteins ("glyco" = carbohydrate).
What affects the fluidity of the plasma membrane?
The type of fatty acid tails: saturated = less fluid; unsaturated = more fluid.
How would a cell respond if the temperature increased?
If the temperature increases, fluidity increases, because kinetic energy (movement) increases.
Why would a cell respond this way?
It wants to maintain fluidity. It does this by adding cholesterol and adjusting its fatty acid tails: if temperature increases, it increases the number of saturated fatty acid tails and decreases the number of unsaturated ones; if temperature decreases, it decreases the number of saturated fatty acid tails and increases the number of unsaturated ones.
What is diffusion?
The movement of molecules from high concentration to low concentration until equilibrium is reached. It is a form of passive transport, so it does not require energy from the cell.
Explain the diffusion of polar molecules.
They do not go through the lipid bilayer easily, because the nonpolar fatty acid tails stop them, so they have to go through a transport protein.
What determines the direction of diffusion?
The concentration gradient: molecules go from a high concentration in one area toward an equal concentration in all areas.
What is osmosis?
The movement of water through a membrane, with the help of the protein aquaporin, from a higher water concentration to a lower water concentration until equilibrium is reached.
Explain tonicity and the three different types of it.
Tonicity is a cell's gaining or losing of water. Isotonic: the concentration of water in the solution equals the concentration of water in the cell; this is equilibrium. Hypertonic: hyper means more, which means more solute, which means less free water; the solution will have a lower water concentration than the cell. Hypotonic: hypo means less, which means less solute, which means more free water; the solution has a higher water concentration than the cell.
Transcription of a gene; the parts of a gene:
The promoter is upstream of the transcription unit and contains the TATA box and the start point, where RNA polymerase II places the first complementary RNA nucleotide on the DNA template strand. The transcription unit is the segment of the gene that is transcribed. The terminator is the termination sequence that signals the end.
Where does transcription occur?
Inside the nucleus.
Why is mRNA created?
DNA holds the directions for creating the protein, and ribosomes are the organelles that synthesize the polypeptide. DNA cannot leave the nucleus, so mRNA is created to take the message out of the nucleus to the ribosome.
What are the three parts of transcription?
Initiation, elongation, and termination.
Initiation begins when a transcription factor recognizes and binds to the TATA box. Other transcription factors bind to the initial transcription factor, and RNA polymerase II binds to the transcription factor complex, creating the transcription initiation complex.
Elongation begins when RNA polymerase II adds the first complementary RNA nucleotides at the start point, working in the 5' to 3' direction.
What does complementary mean?
If the DNA base is A, the mRNA base is U; T pairs with A, G with C, and so on.
What are the differences between RNA and DNA nucleotides?
RNA has uracil and DNA has thymine.
How long does elongation continue?
Until termination is reached.
Termination occurs when RNA polymerase II reads the polyadenylation signal in the DNA, which is transcribed into the sequence AAUAAA.
What happens 10 to 35 nucleotides after the sequence?
A protein cleaves the pre-mRNA free from RNA polymerase II.
What is pre-mRNA the result of?
Transcription. It is not allowed to leave the nucleus.
What is added?
A GTP is added to the 5' end of the pre-mRNA, called the 5' cap. 50 to 250 adenine nucleotides are added to the 3' end, called the poly-A tail.
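A tiny sketch of the template-to-mRNA pairing rule given above (template DNA A -> mRNA U, T -> A, G -> C, C -> G); the template sequence is made up for illustration, not from the notes:

```python
# Transcription reads the DNA template strand and builds the
# complementary mRNA using the base-pairing rule from the notes.
PAIRING = {"A": "U", "T": "A", "G": "C", "C": "G"}

def transcribe(template_dna: str) -> str:
    return "".join(PAIRING[base] for base in template_dna)

print(transcribe("TACGGATTC"))  # -> AUGCCUAAG (begins with the AUG start codon)
```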
What do these modifications create?
A linear, mature mRNA strand.
What do the modifications allow the mRNA to do?
Leave the nucleus; be protected from degradation by enzymes in the cytoplasm; and bind to the mRNA binding site of the small ribosomal subunit.
What does translation involve?
The small and large ribosomal subunits. The subunits have a 3-D shape that is created when nucleotides hydrogen-bond to themselves.
Where are ribosomal subunits created?
In the nucleolus.
What does the small ribosomal subunit have?
The mRNA binding site.
What does the large ribosomal subunit have?
Three sites: the E site (exit site), the P site, and the A site.
tRNA shape and structure:
tRNA has a 3-D shape (like a "T"), with the amino acid binding site on top and the anticodon on the bottom.
What is the anticodon of tRNA complementary to?
The mRNA codon.
What are the three parts of translation?
Initiation, elongation, and termination.
Initiation begins when the tRNA with the anticodon UAC, carrying the amino acid Met, binds to the small ribosomal subunit.
What happens next?
The mRNA binds to the small ribosomal subunit-tRNA complex and moves along until the anticodon hydrogen-bonds to the start codon, AUG, which sets the reading frame. The large ribosomal subunit binds to the small ribosomal subunit so that the tRNA is in the P site; this creates the translation initiation complex.
Elongation begins when the tRNA that has the anticodon complementary to the next mRNA codon, carrying the correct amino acid, binds to the A site.
What happens to the bond between the tRNA and the amino acid in the P site?
It is broken, so the growing chain can be transferred.
What does the ribosome catalyze?
The formation of a peptide bond between the amino acid in the P site and the amino acid in the A site.
What happens to the ribosome?
It translocates by one codon. This puts the tRNA with no amino acid in the E site and the tRNA with the chain in the P site, and leaves the A site open. This continues until termination begins.
Termination begins when the ribosome translocates so that a stop codon is beneath the A site.
What does this signal for?
This signals for a release factor, a protein in the shape of a tRNA, to enter the A site and bind to the stop codon. This then signals for a water molecule to hydrolyze the bond between the polypeptide chain and the tRNA in the P site, releasing the polypeptide from the ribosome.
What happens last?
The subunits break apart and the mRNA is released.
What is a point mutation?
A change in one nucleotide pair in the DNA. In a substitution, a nucleotide pair gets replaced by a different pair.
Silent mutation, missense mutation, nonsense mutation:
Silent: the codon created still codes for the same amino acid. Missense: the codon created is different from the original codon; the result can be anything from a slight change to a huge change in the protein. Example of a missense mutation: sickle cell anemia; an A is changed to a T, which causes a Glu to become a Val, completely changing the shape of the red blood cell to a crescent (sickle) shape. Nonsense: the codon created is a stop codon, which causes the protein to be shorter.
Insertion and deletion:
In an insertion, one nucleotide pair is added; in a deletion, one nucleotide pair is removed.
What is the result of an insertion/deletion?
A frameshift mutation: the reading frame of the mRNA is altered. This causes the protein to have a different amino acid sequence after the insertion/deletion, and can cause a completely different protein to be made.
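To make the frameshift idea concrete, here is a toy example; the sequences and the inserted base are invented for illustration:

```python
def codons(mrna: str):
    """Split an mRNA string into its reading frame of three-base codons."""
    return [mrna[i:i + 3] for i in range(0, len(mrna) - 2, 3)]

normal = "AUGAAAGGGUUU"
mutant = "AUG" + "C" + "AAAGGGUUU"   # hypothetical single-base insertion

print(codons(normal))  # ['AUG', 'AAA', 'GGG', 'UUU']
print(codons(mutant))  # ['AUG', 'CAA', 'AGG', 'GUU'] -- every codon after
                       # the insertion shifts, so the amino acids change
```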
What causes mutations?
They can be chemical or physical: errors during DNA replication (spontaneous mutations); UV radiation (thymine dimers); X-rays; carcinogens, which are mutagens that can cause cancer (such as cigarette smoke); and nucleotide analogues, which are chemical.
Blending vs. the gene idea:
In the blending hypothesis, the traits of the parents mix to create offspring with a trait in the middle. In the gene idea, parents pass down discrete units (genes) that are not altered or blended.
Mendel's first experiment: what did he use, and why were pea plants good?
They grow and reproduce fast and have tons of offspring (seeds); he could control fertilization; and they have lots of variation in characters (flower color, pea color, height), with only two trait options each.
Recite Mendel's first experiment.
Use the book to check.
What conclusion did Mendel draw from his first experiment?
That the recessive trait does not change and stays as a unit. Traits can be masked, but they never go away.
What is the law of segregation?
Chromosomes (alleles) separate during gamete synthesis, which is meiosis.
What is Mendel's model?
Having different sequences of DNA in our genes is what allows us to have different traits. We have two copies for each trait: one from mom and one from dad. If an organism is heterozygous, the dominant allele determines the appearance, not the recessive one.
Autosomal genetic disorders, dominant:
Huntington's disease, achondroplasia, polydactyly, cataracts.
Autosomal genetic disorders, recessive:
Albinism, cystic fibrosis, sickle cell, Tay-Sachs, PKU, deafness.
Sex-linked traits:
Traits that occur on the sex chromosomes, which are the last pair in a karyotype. Males are more likely to have them because they have only one copy of the X chromosome.
Tay-Sachs (autosomal recessive):
The enzyme that breaks down lipids does not function correctly, so lipids build up in the brain cells; this causes seizures, blindness, muscle degeneration, declining brain function, and then death.
Sex determination in birds, fish, and some insects:
The ZW system. Males are ZZ and females are ZW, so the female determines the sex.
Sex determination in humans:
The XY system. The male is XY and the female is XX, so the male determines the sex. Females only give an X; males give an X or a Y. If an X from the sperm fertilizes the egg, the child will be female; a Y gives a male.
X inactivation:
In females, one X chromosome is supercondensed, which inactivates it because no protein synthesis can occur on it. This is called a Barr body. An example is the tortoiseshell cat: black is dominant over orange, but if the X carrying black is inactivated in some patches and the X carrying orange is inactivated in others, the cat will be partially orange and partially black.
Gene mapping:
The process of determining the location and chemical sequence of specific genes on specific chromosomes.
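As an illustration of the law of segregation and Mendel's model above (my addition, not part of the notes), a sketch of a monohybrid Punnett square for a hypothetical Aa x Aa cross:

```python
from itertools import product

# Each parent's two alleles separate (law of segregation), and each
# offspring draws one allele from each parent.
def punnett(parent1: str, parent2: str):
    return ["".join(sorted(pair)) for pair in product(parent1, parent2)]

offspring = punnett("Aa", "Aa")
print(offspring)                        # ['AA', 'Aa', 'Aa', 'aa']
dominant = sum(1 for g in offspring if "A" in g)
print(f"{dominant}:{len(offspring) - dominant} dominant:recessive")  # 3:1
```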
Pleiotropy:
One gene that has multiple phenotypes.
Sex determination in roaches and grasshoppers:
The XO system. The female has two X's; the male has one X. The female gives one X, while the male gives an X or nothing, so the male determines the sex.
Recite Mendel's second experiment.
What conclusion was drawn from Mendel's second experiment?
Alleles for each trait segregate independently during meiosis (gamete formation). This is the law of independent assortment; it occurs in metaphase I.
Albinism:
Absence of pigment in the skin, hair, and eyes; autosomal recessive.
Cystic fibrosis (autosomal recessive, most common in people of European descent):
The Cl- ion transport protein does not function, which leads to an accumulation of Cl- ions outside of the cell because they can't go inside; this leads to sticky mucus. The lungs have a hard time breathing and get bacterial infections (an example would be bronchitis); the digestive system, pancreas, and intestines are also affected.
Sickle cell (most common in people of African descent):
A substitution in a single nucleotide pair (a point mutation, A to T) changes a Glu to a Val in hemoglobin, causing the shape of the red blood cell to go from round to a sickle shape.
Recite Morgan's first experiment.
Use the book.
Recite Morgan's second experiment.
Use the book.
PKU:
The enzyme that breaks down an amino acid (phenylalanine) doesn't function, causing it to reach toxic levels in the blood; this leads to brain damage and intellectual disability.
Nondisjunction:
The failure of homologous chromosomes or sister chromatids to separate properly during cell division.
Down syndrome:
Caused by an error in cell division called nondisjunction, which results in an embryo with three copies of chromosome 21 instead of the usual two. Prior to conception, a pair of 21st chromosomes in either the sperm or the egg fails to separate. Features: eyes that slant upward, short stature, and a short neck.
Klinefelter syndrome:
XXY; a genetic condition in which a male is born with an extra copy of the X chromosome (from nondisjunction). Features include less body hair and breast enlargement.
Chromosome theory of inheritance:
Genes are located on specific chromosomes at a specific location, or locus. Chromosomes are what go through independent assortment and segregation. Evidence: Morgan's experiments.
Map units: how often does crossing over occur?
Recombinant offspring divided by the total number of offspring; 1% recombination equals one map unit.
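The map-unit arithmetic above in one small function; the offspring counts below are hypothetical:

```python
# 1% recombination frequency = 1 map unit (from the notes).
def map_units(recombinant_offspring: int, total_offspring: int) -> float:
    return recombinant_offspring / total_offspring * 100

# Hypothetical cross: 17 recombinants out of 100 offspring
print(map_units(17, 100), "map units apart")  # 17.0
```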
Explain Darwin's ideas of evolution and the mechanics of it.
Darwin set his ideas on paper in 1844, when he wrote a long essay on descent with modification and its underlying mechanism, natural selection. Descent with modification is simply the passing of traits from parent to offspring, and this concept is one of the fundamental ideas behind Charles Darwin's theory of evolution. You pass traits on to your children in a process known as heredity; the unit of heredity is a gene, which is like a blueprint for how a person will be. However, your child does not get your exact blueprint (that would make them a clone of you); rather, your genes combine with your partner's genes, and small changes and mutations may occur along the way. If you have multiple children, a different mix of genes is combined for each child. This means that the gene pool is continuously adjusting based on who is reproducing and how their genes are combined during the production of offspring. Over extended periods of time, evolution takes place. Its underlying mechanism, natural selection, is the process whereby organisms better adapted to their environment tend to survive and produce more offspring. The theory of its action was first fully expounded by Charles Darwin, and it is now believed to be the main process that brings about evolution.
Name and explain all the evidence for evolution.
The evidence for evolution: directly observing evolution, homologous structures, vestigial structures, analogous structures, biogeography, embryology, and molecular homologies. Explain each.
Explain directly observing evolution.
Two examples are the soapberry bug and drug-resistant S. aureus. The soapberry bug eats the seeds of the balloon vine fruit (circular) and uses its beak to access the seeds. The best-adapted soapberry bugs are those with a beak length that matches the distance from the skin to the seed of the fruit. When the balloon vine population declines and the golden rain tree population increases (the golden rain tree has fruit with seeds closer to the skin), the best-adapted bugs are then those with shorter beaks compared to those that ate balloon vine seeds.
Explain homologous structures.
Structures in different organisms that look similar but have different functions. An example would be the bones in the forelimbs of humans, bats, cats, whales, and alligators. This is because they have a common ancestor.
Explain vestigial structures.
Structures that the current organism has but that don't serve a useful function; they were useful to its ancestor. Examples: the tailbone and the appendix, and the hip bones in whales and snakes.
Explain analogous structures.
Structures that are similar in shape and function but are not due to common ancestry. Examples: the wings of a moth, a bird, and a bat.
Explain biogeography.
One species is separated by continental drift, which alters the environment, leading to the two populations evolving differently.
Explain embryology.
The study of embryos. At some stage in embryonic development, all embryos look similar.
Explain molecular homologies.
The amino acid sequence of a protein; the DNA sequence of a gene.
What Hardy-Weinberg conditions directly affect allele frequencies?
Natural selection, genetic drift, the bottleneck effect, the founder effect, and gene flow.
Explain natural selection.
Those that are best adapted to the environment live longer and produce more offspring than those that are not, which means more of their alleles go into the gene pool. This causes the frequency of the beneficial allele to go up and the frequency of the non-beneficial allele to go down.
Explain genetic drift.
A random chance event causes the allele frequency to change in a small population.
Explain the founder effect.
A group of individuals becomes isolated from the main population, and the two groups end up with different allele frequencies.
Explain the bottleneck effect.
A random chance event or disaster causes a large decrease in the population, which causes genetic variation to go down. Example: prairie chickens.
Explain gene flow.
The movement of alleles into or out of a population. If more dominant alleles come in, the dominant allele's frequency goes up; if more recessive alleles come in, it goes down.
Explain how to find out whether a population is evolving.
Compare the population's genotype frequencies to those predicted by the Hardy-Weinberg equation (p^2 + 2pq + q^2 = 1, with allele frequencies p + q = 1); if the observed frequencies don't match the prediction, or change across generations, the population is evolving.
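A minimal sketch of that Hardy-Weinberg comparison, assuming a one-gene, two-allele system; the observed genotype frequencies below are invented for illustration:

```python
# Expected genotype frequencies at Hardy-Weinberg equilibrium.
def hw_expected(p: float):
    q = 1 - p
    return {"AA": p**2, "Aa": 2 * p * q, "aa": q**2}

observed = {"AA": 0.40, "Aa": 0.40, "aa": 0.20}    # hypothetical census
p_obs = observed["AA"] + observed["Aa"] / 2        # frequency of allele A
print("expected:", hw_expected(p_obs))
# expected: {'AA': 0.36, 'Aa': 0.48, 'aa': 0.16} -- the mismatch with the
# observed values suggests at least one H-W condition is being violated,
# i.e., the population is evolving.
```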
How do tectonic plates move, and what can each movement cause?
Convergent plates come together, divergent plates move away from each other, and transform plates slide past each other.
What can a divergent movement of tectonic plates cause?
When plates are divergent, they are moving away from each other. Along divergent boundaries, or rifts, earthquakes often occur. Beneath the divergent boundary or rift, magma oozes up from the mantle into the gap between the tectonic plates, hardening into solid rock and forming new crust on the torn edges of the plates. The solid rock the magma hardens into is basalt, the rock that makes up the oceanic crust, which is thinner and denser than the less dense, thicker continental crust. Therefore, at divergent boundaries, oceanic crust is formed.
What can a convergent movement of tectonic plates cause?
When plates are convergent, they are moving towards each other. Along convergent boundaries, the edges of the plates are either pushed up into mountain ranges or down into a deep trench. If one of the colliding plates is topped with oceanic crust, it is forced down into the mantle, where it begins to melt. Magma rises into and through the other plate, solidifying into new crust. Magma formed from melting plates solidifies into granite, a light-colored, low-density rock that makes up the continents. Thus, at convergent boundaries, continental crust (made of granite) is created, and oceanic crust is destroyed.
What can a transform movement cause?
When plates are transform, they are sliding past each other. Two plates sliding past each other form a transform plate boundary. Natural or human-made structures that cross a transform boundary are offset: split into pieces and carried in opposite directions. Rocks that line the boundary are pulverized as the plates grind along, creating a linear fault valley or undersea canyon. As the plates alternately jam and jump against each other, earthquakes rattle through a wide boundary zone. In contrast to convergent and divergent boundaries, no magma is formed. Thus, crust is cracked and broken at transform margins, but it is not created or destroyed.
What is mitosis?
Division of the nucleus.
The Asteroidal YORP Effect Observed
7 May, 2007 12:26 pm
For the first time, astronomers have witnessed the speeding up of an asteroid's rotation rate (the YORP effect), and shown that it is due to an effect long predicted by theory but never before observed. The international team of scientists used a suite of powerful telescopes to discover that the asteroid is rotating faster by 1 millisecond every year, as a consequence of sunlight impinging upon its surface. Eventually it may spin faster than any known asteroid in the solar system and even break apart by rotational fission. These extraordinary findings are reported in two new companion papers in the journal Science.
Figure 1: Asteroid 2000 PH5 imaged with ESO's 3.5m New Technology Telescope in Chile on August 27, 2003, over a time span of 77 minutes. The asteroid can be seen moving relative to the background stars (Credit: Stephen C. Lowry).
The Yarkovsky-O'Keefe-Radzievskii-Paddack effect (or YORP for short) is caused by sunlight striking the surfaces of asteroids. Light particles, or photons, carry momentum, and so a small amount of momentum is transferred to the asteroid. At the same time, most of the sunlight is absorbed and re-radiated as heat, giving rise to an even stronger recoil effect. If the asteroid's shape is sufficiently irregular, a net torque can arise that will change how fast the asteroid spins, and even cause the direction of its spin axis to slowly drift. Although this is an almost immeasurably weak force, its action over millions of years can be significant.
The existence of such a process had never been confirmed by direct observation of the effect in action; all we had seen so far were end products of YORP. For example, the surprising spin-axis alignments of members of the Koronis asteroid family require some mechanism to be causing the steady drifting and eventual alignment of their spin axes, and YORP has been shown by theoreticians to be the culprit. Astronomers believe the YORP effect may also be responsible for spinning some asteroids up so fast that they break apart by centrifugal forces, perhaps leading to the formation of binary asteroids or even multiple-asteroid systems. Others may be slowed down so that they take many days to spin once, or may be driven to rotate chaotically. Just as intriguing is that this small effect can play an important role in changing the orbits of asteroids between Mars and Jupiter, including their delivery to planet-crossing orbits, such as those of near-Earth asteroids. Despite its importance and wide implications, the effect had never been seen acting on a solar system body.
Figure 2: Radar images obtained at the Arecibo facility in Puerto Rico on July 28, 2004, covering one full rotation of asteroid 2000 PH5 (columns 1 and 4). Corresponding shape-model fits to the images are shown in columns 2 and 5. Columns 3 and 6 are detailed 3-D renderings of the shape model itself (Credit: Taylor et al., Science 316, p274-277).
Using extensive optical and radar imaging from powerful Earth-based observatories, astronomers undertook the challenging task of measuring the YORP effect on an asteroid for the first time. Over a 4-year time span, Stephen Lowry from Queen's University (UK) and colleagues took images of a small near-Earth asteroid, known as (54509) 2000 PH5, using a range of large telescope facilities in Chile, Spain, the Czech Republic, the Canary Islands, and Hawaii.
With these facilities the astronomers measured the tiny brightness variations as the asteroid rotated, providing a means to measure changes in the asteroid's spin rate.
Near-Earth asteroid (54509) 2000 PH5 was discovered by the Massachusetts Institute of Technology Lincoln Laboratory's near-Earth asteroid search program (LINEAR) on August 3, 2000. It is one of only a handful of objects known to be co-orbital companions of Earth, and it approaches our planet annually to within just 5 lunar distances. This, along with its small size of 114 m, makes it a perfect candidate for a first YORP detection. Furthermore, it was already spinning at an exceptionally fast rate, with one 'day' on the asteroid lasting just over 12 minutes, and so the effect may have been acting on this asteroid for some time and may still be doing so.
The final result was even stronger than hoped. After careful analysis of the optical data, the asteroid's spin rate was seen to steadily increase with time. Critically, the effect was observed year after year, for more than 4 years, and the rate of change can be explained by YORP theory. The theoretical YORP strength for this asteroid was determined from detailed shape modelling derived from radar observations obtained by Patrick Taylor and Jean-Luc Margot of Cornell University. The first direct confirmation of the existence of YORP has now been obtained. In April 2004 the International Astronomical Union bestowed on asteroid 2000 PH5 the official name 'YORP', in honour of Ivan O. Yarkovsky, John A. O'Keefe, V. V. Radzievskij and Stephen J. Paddack, who were instrumental in the realization that radiation torques due to sunlight can affect the spin states of minor planets.
To predict what will happen to this asteroid in the future, Lowry and colleagues performed detailed computer simulations using the measured strength of the YORP effect and the detailed shape model. They found that the orbit of the asteroid about the Sun could remain stable for up to the next 35 million years, allowing the rotation period to be reduced to just 20 seconds, faster than any asteroid rotation period measured so far. They also predict that there should be a population of ultra-fast rotators (with periods of less than a few tens of seconds) awaiting discovery, and that sunlight-induced rotational fission could be happening on small kilometre-sized asteroids today.
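As a rough back-of-envelope check on that prediction (mine, not from the papers), one can assume a constant YORP torque, i.e., constant angular acceleration, together with the ~12-minute period and ~1 ms/year spin-up quoted above:

```python
import math

# Assumed inputs from the article: P0 ~ 12.17 min, dP/dt ~ -1 ms/yr.
P0 = 12.17 * 60          # current rotation period, seconds
dP_dt = -1.0e-3          # period change, seconds per year (spinning up)
P_target = 20.0          # predicted ultra-fast period, seconds

# Constant torque -> constant d(omega)/dt, with omega = 2*pi/P.
omega0 = 2 * math.pi / P0
domega_dt = -2 * math.pi / P0**2 * dP_dt      # rad/s per year, positive

years = (2 * math.pi / P_target - omega0) / domega_dt
print(f"~{years / 1e6:.0f} million years to reach a {P_target:.0f} s period")
```

Under this simple constant-torque assumption the spin-up takes roughly 26 million years, fitting comfortably inside the 35-million-year window of orbital stability the simulations allow.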
Overall, this detection will fuel further studies of the YORP effect on asteroids, and it enhances our overall understanding of small solar system bodies, important for unlocking the secrets of the early solar system. Asteroids and comets are the only survivors from the planet-formation era, about 4.5 billion years in the past.
Fig 3: The observed relative variation in the rotation period is seen to change from year to year (black dots). The solid curve is the expected theoretical YORP strength derived from the 3-D shape model (Credit: Lowry et al., Science 316, p272-274).
Lowry, S. C., Fitzsimmons, A., Pravec, P., Vokrouhlicky, D., Boehnhardt, H., Taylor, P. A., Margot, J.-L., Galad, A., Irwin, M., Irwin, J., & Kusnirak, P. (2007). Direct Detection of the Asteroidal YORP Effect. Science, 316(5822), 272-274 (www.sciencemag.org).
Taylor, P. A., Margot, J.-L., Vokrouhlicky, D., Scheeres, D. J., Pravec, P., Lowry, S. C., Fitzsimmons, A., Nolan, M. C., Ostro, S. J., Benner, L. A. M., Giorgini, J. D., & Magri, C. (2007). Spin Rate of Asteroid (54509) 2000 PH5 Increasing Due to the YORP Effect. Science, 316(5822), 274-277 (www.sciencemag.org).
The 1970s had a profound effect on Earth. But the "Me Decade" also left an impression on the Moon. Scientists believe they have solved a 40-year-old mystery of why the lunar subsurface warmed slightly during the Apollo missions.
NASA's third human spaceflight program, active between 1969 and 1972, saw 12 men walk on the Moon—including pioneers Neil Armstrong and Buzz Aldrin. But while U.S. astronauts were making history, they were also disturbing a celestial ecosystem.
During the Apollo 15 and Apollo 17 missions, astronauts conducted heat flow experiments (HFEs) by deploying probes on the Moon to measure the satellite's internal temperatures. Scientists hoped these measurements would shed light on whether the Moon's core was hot like Earth's, and how much heat the rocks of its crust and mantle could generate.
The raw temperature data was transmitted from space to NASA's Johnson Space Center in Houston, where it was recorded on open-reel magnetic tapes. Those tapes were then analyzed and archived. But when the experiments ended in 1977, only the tapes from 1971 to '74 were archived at Goddard Space Flight Center's National Space Science Data Center. The rest—presumably still under examination—were never filed; three years of information was lost to time.
A team of researchers recovered and restored major portions of the previously unarchived data from January 1975 to September 1977; their findings are detailed in a paper published by the American Geophysical Union's Journal of Geophysical Research: Planets. Spoiler alert: the missing tapes were gathering dust at the Washington National Records Center, which stores documents for various U.S. federal agencies.
It turns out that by walking and driving a rover over the Moon's previously unsullied surface, astronauts disturbed its thermal equilibrium. As a result, the satellite reflected less of the Sun's light back out to space, raising the temperature 1-2 degrees Celsius (1.8-3.6 degrees Fahrenheit).
"In the process of installing the instruments you may actually end up disturbing the surface thermal environment of the place where you want to make some measurements," lead study author Seiichi Nagihara, a planetary scientist at Texas Tech University, said in a statement.
In hindsight, it seems obvious that shoving tools into the ground could upset the space environment. At least we can learn from our mistakes; this is certainly something for future lunar missions to think about. "That kind of consideration certainly goes into the designing of the next generation of instruments that will be someday deployed on the Moon," Nagihara added.
By pairing the newly discovered data with images of the Moon's surface from the Lunar Reconnaissance Orbiter, researchers were also able to map astronaut activity. Surface disturbance darkened the lunar soil, which absorbs more light from the Sun, making it warmer. "It doesn't take much disturbance to get that very subtle warming on the surface," Nagihara said. "So analysis of the historic data together with the new images of the Moon really helped us characterize how the surface warmed."
Curriculum Case Study: How Does a Global Curriculum Encourage Deep Learning?
By Idil Yusuf
Deep learning instils confidence and perseverance, and provides opportunities for all learners to succeed, despite the challenges perceived by others or by the individual themselves. A learner who adopts a deep approach to learning will seek to understand meaning. They have an intrinsic interest and enjoyment in carrying out the learning tasks, and a genuine curiosity about the topic, its connections with other subjects, and building on their current learning.
The sustainable development goals (or global goals, as they are sometimes called) are the targets adopted by the United Nations in 2015 in an attempt to achieve a better and more sustainable future for all. The goals tackle the global challenges we all face, including those related to poverty, inequality, climate, environment, prosperity, and peace and justice.
A team of teachers from both Foxfield Primary School and Rockliffe Manor Primary School used the goals as a starting point to design the learning for the children in their year group. The global theme for the autumn term was environmental responsibility and global goal 6: clean water and sanitation. The intention of the teachers was for the children in year 5 to feel motivated to bring about a change in the way we think about water, so that they would be inspired to raise awareness of the water crisis facing communities not too dissimilar from their own, in rural Malawi. The learning journey that ensued changed not only the attitudes of the children, but those of their families and the school community alike.
'Deep learning deepens human desire to connect with others to do good' (Michael Fullan, Deep Learning)
If we want learners to understand and value our global interconnectedness, deep learning must be learning that is valued and learning that sticks.
During the exploration stage of the learning journey, the children in year 5 analysed a range of statistics on the percentages of people with access to clean water in both rural and urban areas. From this learning, children identified Malawi as a country with one of the lowest percentages of people with access to water. This provided a learning pathway. The teachers had assumed that children would understand that water is not distributed around the world equally; once this misconception was identified, changes were made to the learning journey so that the children could explore the idea of global water distribution. Children were posing questions such as, 'How can there be a water crisis when Malawi has a huge lake - why isn't it being used as a water source?'
The teachers in year 5 recognised that in order for the children to be able to empathise with the people impacted by the water crisis in Malawi, they would need to acquire knowledge and understanding of the country and its people. The human and physical features of the country were studied to contextualise the learning, and the children were able to make comparisons with their own locality.
A question the teachers in year 5 posed when planning and designing the curriculum was, 'Will the children be able to relate to the problems the children in Malawi are facing?' We know that children need a personal connection to the learning, whether that's through engaging emotionally or connecting new information with previously acquired knowledge. Without that, learners may not only disengage and quickly forget, but they may also lose the motivation to learn.
To further the children's understanding of the daily challenges children in Malawi face, they watched a video clip of pupils of a similar age travelling several miles a day to fetch heavy containers of water that they knew could ultimately lead to ill health. The video featured a young British boy who had heard about the water crisis in Malawi and decided to raise money by travelling to one of the many affected remote villages and collecting water with the children. His story was powerful. The emotions in the classrooms were intense. A child in one of the classes with a complex home life (who would be considered a disadvantaged pupil) was so moved that it brought him to tears.
'We must foster global citizenship. Education is about more than literacy and numeracy. It is also about citizenry. Education must fully assume its essential role in helping people to forge more just, peaceful and tolerant societies.' Ban Ki-moon, United Nations Secretary-General (2012)
The children couldn't understand why the youngsters would go to such extreme lengths, walking to collect this 'dirty, filthy water' when they knew drinking it might lead to severe illness. This 'cognitive conflict' became a pivotal moment for our learning journey. It is only when existing mental structures are challenged that accommodation of the new information takes place, new connections are made and learning moves on.
In the deepening stage of the learning journey, the children in year 5 discussed the gender inequality associated with the water crisis. They felt a deep need to do something about what they had found. Learners respond to cognitive conflict in a variety of ways, but often the response is very emotive. For many, the initial response to cognitive conflict is confusion. The children genuinely struggled to make sense of what they saw. The emotions of the children in the classroom were unmistakably genuine. Although open and probing questions can be planned in advance, sometimes an unexpected response to a stimulus can cause cognitive conflict for the teacher; you don't always know what question you want to ask until you see this response. Teachers then asked the children to reflect on the choices the Malawian villagers had: 'Would you rather put yourself at risk of dehydration, or use the water you have regardless of its sanitation?'
It was during the deepening stage of the learning that the children became curious about what we meant by water-related diseases. Teachers set the children challenges to research the names of the diseases that affected the villagers, their signs and symptoms and, most importantly, possible solutions for prevention.
Neuroplasticity is the brain's capacity to rewire and strengthen pathways between neurons that are exercised and used, while weakening connections between cellular pathways that are not used or retrieved. As the children grappled with the new learning (the concept of people, out of sheer poverty, choosing to consume water that could cause serious health conditions such as dysentery and blindness), new synapses and connections were formed in their brain circuits. This new understanding led to a co-created plan of action. The children were going to take action and become change makers.
Derived from the work of Allison and Tharby (Making Every Lesson Count), expert teaching requires challenge, so that learners have high expectations of what they can achieve.
Teachers carefully balanced pupils struggling or grappling with learning while ensuring they weren't pushed too far towards the panic zone. The children wrote letters to parents to inform them of the issue and explain that they wanted their support to raise money for WaterAid by completing a sponsored walk. They had brainstormed different things they wanted to do with the money raised and came to the conclusion that the solution with the biggest impact would be to build a well for the community.
We all feel disempowered by doom and gloom, and this can leave us feeling as if we are unable to make a difference. Greater understanding, especially when it is accompanied by action, can help to change that narrative. The global dimension to our curriculum helps pupils understand global issues and explore ways of addressing them within and beyond school. This more often than not leads to feelings of optimism and a wish to contribute to positive change in the local and global community.
It was also during this deepening stage that the children completed fieldwork in Deptford Creek, with the aim of finding out more about the River Thames and its sources, the water quality, what can be found in the river, and how it contributes to the communities that surround it. The teachers felt it was crucial that the children make connections between the water crisis in Africa and the changes, however small, they can make to their own lives and those of their families. Children planned and wrote letters to their parents to inform them of their project and share their 'why'.
During the planning stage of the learning journey, the children explored the concept of sponsorship and analysed a range of existing sponsor forms to support them in designing their own. They learnt about Gift Aid and the process of taxation so that they would be better equipped to explain to their families and community why it would be a good idea to 'tick the box'. The children designed a JustGiving page and wrote the bio for it in the form of blogs as part of their English learning journey. The children were really able to see the links between their English learning and the project they were working towards.
Pupils decided that the best way to have an impact in their local area was to inform as many people as possible about what is happening in other parts of the world. The teachers at Foxfield and Rockliffe Manor planned a sequence of lessons leading to a final outcome in the form of a partnership-wide newsletter that would be sent out to all Inspire Partnership schools.
In the delivering stage of the learning process, pupils from Foxfield and Rockliffe Manor Primary Schools completed a ten-kilometre walk in the rain from Woolwich to the O2. The walk was just over 6 miles long, which is the average distance that women and children in Malawi walk to collect water. The classes jointly raised over £1,300 through the very generous donations of families, friends and neighbours.
'When I was beginning to feel tired I kept on remembering the children in Malawi that have to make a similar journey on a daily basis just to survive, then I kept on going.' Child in Yew Tree Class
The determination the pupils showed in completing the challenging walk was humbling. Some children had asked whether they could load their backpacks with heavy items to further understand what those young girls are tasked with on a daily basis.
That sentiment was a common one, but one that the teachers didn't encourage in case it became too difficult for them.
'Give children teaching that is determined, energetic and engaging. Hold them to high standards. Expose them to as much as you can, especially the arts. Recognise the reality of race, poverty and social barriers, but make children understand that barriers don't have to limit their lives. Above all, no matter where in the social structure children are coming from, act as if the possibilities are boundless.' Charles Payne, So Much Reform, So Little Change
When the children returned from their sponsored walk and were greeted by an excited Key Stage 1 playground, their sense of achievement was remarkable.
'I feel like I have made a difference to people's lives. I think it's really unfair that just because of where they were born means that they don't have the basic things like clean water.'
'Deep learning is good for all but is especially effective for those most disconnected from schooling' (Michael Fullan, Deep Learning)
Over the next few days, the children were encouraged to reflect on the term's learning in the evaluating stage of the journey. The children discussed how the learning had impacted them and whether they thought it would have a short-lived or a long-lasting effect on them. There was an overwhelming feeling radiating from the children that they had achieved something significant. The teachers and children also reflected on whether the learning had achieved what it set out to do: teach children that people can often make a greater difference when they take action collectively.
When a copper atom loses one or two of its electrons it forms positively charged ions, known as Cu+1 and Cu+2. Whilst ordinary basic copper carbonate contains the cupric ion (Cu+2), it may sometimes contain a chemically similar alkaline component. This substance serves a number of applications around industry and life in general; you probably haven't realized how many ways it is utilized today.
Aesthetic and Practical: This substance has a number of aesthetic purposes, most notably in jewelry. The carbonate can also be converted into metallic copper, which is highly valuable and serves a number of its own applications. This is achieved through a process of pulverization, sizing, conversion, and electrolysis.
Copper Salts: The substance can be converted into copper salts by mixing it with a stronger acid; the reaction also yields water and carbon dioxide gas. Mixing the carbonate with acetic acid (otherwise known as vinegar) will produce cupric acetate, water, and carbon dioxide.
Pigments and Colorants: This substance, when pure, has a mint green color. When alkaline components are present, a tinge of blue is added. It is often added to paints, varnishes, pottery glazes, and even fireworks to impart some of their colors.
Miscellaneous: Small amounts of copper carbonates are used in a variety of animal feeds and fertilizers. The compound also plays a major role in the creation of pesticides and fungicides, can be used to control the growth and spread of aquatic weeds, and is a common ingredient in the ammonia compounds used to treat timber.
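For the simple carbonate, CuCO3, the vinegar reaction balances as follows (basic copper carbonate, which also contains hydroxide, consumes extra acid):

CuCO3 + 2 CH3COOH -> Cu(CH3COO)2 + H2O + CO2

Counting atoms on each side (1 Cu, 5 C, 8 H, 7 O) confirms the equation is balanced; the cupric acetate stays in solution while the carbon dioxide bubbles off.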
The researchers linked growth rate to metabolic rate, the measure of energy use that divides warm-blooded and cold-blooded animals. The study suggests that the dinosaurs fall into a middle category, in a fresh contribution to an enduring debate.
Warm-blooded animals, like mammals and birds, need a lot of fuel and use that energy to their advantage, including faster movement and boosted brain power. In burning all that food they also maintain a high, stable body temperature. Cold-blooded animals are more economical, but lack those advantages. Scientists define these different strategies as "endothermy" (endo for inside; therm for heat) and "ectothermy".
The study's first author, John Grady, a PhD student at the University of New Mexico, proposes that dinosaurs may have used a not-too-hot, not-too-cold approach: "mesothermy".
The evidence for this idea comes from a big survey of the growth rates of 381 different species, including 21 dinosaurs. Because bones show growth rings, much like trees, the size of fossilised dinosaur bones at different ages allows palaeontologists to calculate how fast the animals put on weight over a lifetime. So Grady and his colleagues took growth rate as an indicator of metabolism, and found that dinosaurs occupied a middle ground, somewhere in between modern reptiles and mammals. Their results also place several living animals with unusual energy habits into the proposed mesothermic category. The study is published in the journal Science.
Frederick Griffith was a British bacteriologist who lived from 1879 to 1941. In 1928, well before DNA was identified as the genetic material, Griffith designed an experiment to test whether genetic information could be transferred between different strains of bacteria. It all started when he was studying two strains of Streptococcus pneumoniae, a bacterium that can cause pneumonia, ear infections, sinus infections, meningitis, and bacteremia. The two strains differed in appearance and virulence (the severity or harmfulness of a disease or poison). The S strain has a smooth capsule, an outer coat composed of polysaccharides, and is highly virulent, meaning it readily causes disease. The nonvirulent R strain, on the other hand, has a rough appearance and lacks a capsule. Griffith injected mice with the bacteria: the mice injected with the S strain died quickly (within a few days), while the mice injected with the R strain did not die.
But, Why SEX? Steve McVeigh, Dunbar VHS

Objectives:
1. To show the importance of sex in the study of the continuity-of-life theme in biology.
2. To show how sex offers genetic variability, therefore allowing evolution to proceed at a faster rate when necessary.

Materials:
- Various sizes/colors of felt cloth: red, blue, yellow
- Magnets (if the chalkboard is steel-backed); if NOT, then use pins, tacks, or strips of Velcro tape
- Colored dittos: red and blue will do

Ask students to describe the Pacman video game. Hopefully, they will mention monsters, pacman, power pills, maze, etc. Then develop a "biological" pacman maze by using a 3' by 3' piece of RED colored felt. This felt becomes the background environment for the felt pacman shapes that can stick to it. Then describe the two pacman types. Each pactype has two traits: eye and skin color. One pactype has red eyes and blue skin; the other has blue eyes and red skin. Arbitrarily decide that blue eyes give superior vision compared to red eyes, superior referring to how well they can see the power pills and monsters. Have the students explain why red skin is superior to blue skin color. The idea is that red skin is camouflaged by the RED (maze) environment. Review, if necessary, the idea of genetic information being duplicated, with the products of this duplication apportioned to the daughter cells so that each receives one copy of each message.

1. Place a yellow circle of felt as a nucleus within the pacman. Then use two felt chromosomes of different shapes and colors to represent the two traits and the alleles for each trait. Ask the students what would happen to the chromosomes if, to reproduce, the pacman undergoes binary fission. Use smaller felt pacmen for the daughter cells formed. Do this reproduction for both pactypes.
2. Ask what traits a "Super Pacman" would have, and what process would allow two monoploid pactypes to produce an offspring with both beneficial characteristics, i.e., superior vision and camouflage in the environment. Point out the importance of sexual reproduction in offering genetic variability. (A short computational analogue of this step appears after this lesson plan.)
3. Using colored ditto masters, make an activity sheet on which the student can mark the traits, chromosomes, and offspring pactypes for both asexual and sexual reproduction. However, this time use a BLUE maze environment, and let red eyes be superior to blue eyes. (Note: in any of the above activities, students can come to the felt maze and move the pactypes and their offspring.)

Article by A. Journet in The Science Teacher, Oct '84, page 50.
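For teachers who want a computational analogue of step 2 (entirely invented for illustration; not part of the original lesson), this sketch contrasts binary fission, which only copies a parent, with sexual reproduction, which can combine the two superior alleles into a "Super Pacman":

```python
import random

# Each pactype carries two traits; these parents mirror the felt activity.
parent1 = {"eyes": "blue", "skin": "blue"}  # superior vision, poor camouflage
parent2 = {"eyes": "red",  "skin": "red"}   # poor vision, camouflaged in the red maze

def binary_fission(parent):
    """Asexual reproduction: the offspring is an exact copy of the parent."""
    return dict(parent)

def sexual_reproduction(p1, p2):
    """Each trait is inherited from a randomly chosen parent."""
    return {trait: random.choice([p1[trait], p2[trait]]) for trait in p1}

# Fission can never combine the two beneficial traits...
print(binary_fission(parent1))   # always {'eyes': 'blue', 'skin': 'blue'}

# ...but sexual reproduction soon yields blue eyes plus red skin.
offspring = [sexual_reproduction(parent1, parent2) for _ in range(10)]
print(any(o == {"eyes": "blue", "skin": "red"} for o in offspring))  # very likely True
```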
Antiparticles were predicted before they were discovered. When Paul Dirac wrote down an equation obeyed by electrons, he found a mirror-image solution.
- It predicted the existence of a particle like an electron but with an opposite charge.
- The positron later turned up in a cosmic ray experiment.
- Positrons have identical mass (and energy) to electrons, but they have a positive charge.
Every particle has an antiparticle.
- Each particle has a corresponding antiparticle (for example, the proton and the antiproton, or the neutrino and the antineutrino).
Matter and antimatter from energy
- There is an equivalence between energy and mass.
- It all comes from Einstein's theory of relativity.
- Energy can be turned into mass, and vice versa, if you know how.
- You can work it out using E = mc².
- When energy is converted into mass, a particle and its corresponding antiparticle are produced as a pair.
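As a quick illustration of the E = mc² bookkeeping (a sketch; the rounded constants are standard physical values, not taken from the text), the minimum energy needed to create an electron-positron pair is twice the electron's rest energy:

```python
# Minimum energy needed to create an electron-positron pair:
# E = 2 * m_e * c^2 (the rest energy of the two particles), from E = mc^2.
m_e = 9.109e-31   # electron (and positron) rest mass in kg
c = 2.998e8       # speed of light in m/s

E_joules = 2 * m_e * c**2
E_MeV = E_joules / 1.602e-13   # 1 MeV = 1.602e-13 J

print(f"Pair-production threshold: {E_joules:.3e} J = {E_MeV:.3f} MeV")
# Roughly 1.637e-13 J, i.e. about 1.022 MeV
```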
The graph shows faint emissions from cold intergalactic dust, which rose to a peak in the middle of the Coma Cluster of galaxies as ESA's Infrared Space Observatory (ISO) scanned across it. ISO followed the two slanted lines across the image. Both measurements gave similar results, which have been averaged to show the signal more clearly. Intergalactic dust has never before been observed directly. This notable detection may have wide implications for cosmology and the evolution of galaxy clusters. The temperature of the dust is minus 220 to minus 240 degrees Celsius. The egg-shaped appearance of the cluster, being narrower in the north-south direction (top to bottom) than in the east-west direction (left to right), was found in an analysis of the Coma Cluster with the German-US-UK Rosat satellite. Contours of X-ray emissions, from gas at 80 million degrees, increase in intensity towards the cluster's centre but are not circular. Some evidence has been found in the ISOPHOT scans that this non-circularity is also present in the infrared. Astronomers interpret the shape of the Coma Cluster as evidence that it is colliding and merging with a smaller cluster of galaxies. This collision perhaps explains how the intergalactic dust clouds were created. Fierce winds of gas experienced in the collision of clusters may expel dust from the galaxies. The two large galaxies near the centre of the Coma Cluster have been stripped bare of dust.

Credits: ISO graph and scan lines: ESA/ISO, M. Stickel, D. Lemke & the ISOPHOT team. Visible-light image: STScI Digitized Sky Survey. X-ray contours: Rosat Data Archive & S. White, A. Vikhlinin.
In the second episode of Meteorwritings, "Meteorite Types and Classification," we reviewed the three main types of meteorites - irons, stones, and stony-irons. This month, and in the next two installments, we will take a much more detailed look at these classes, discuss how they were formed, what is unique about them, and also examine some well known examples of each type. Where Do Iron Meteorites Come From? In the classic 1959 adventure film, Journey to the Center of the Earth, based on Jules Verne's wonderful book Voyage au centre de la Terre, a team of explorers led by a very proper and resourceful James Mason encounter giant reptiles, vast underground caverns, oceans and the remains of lost civilizations in a subterranean world hidden far beneath our planet's crust. If we actually could make such a journey to the Earth's center, our real-life adventure would be a rather short one, as the core of our planet is a sphere of molten iron with a temperature in excess of 4,000°C. The world imagined by Verne makes for a more exciting film, but without molten planetary cores we would not have iron meteorites. Astronomers believe that in the early days of our Solar System, more than four billion years ago, all of the inner planets had molten cores. As our Earth is the largest of the terrestrial planets (those composed largely of silicate rocks, as opposed to gaseous planets) it likely has a higher internal temperature than our smaller neighbors: Mars and Mercury. We also know that at least some asteroids in the Asteroid Belt between Mars and Jupiter once had molten cores, and these bodies were the parents of iron meteorites. Their cores are believed to have been heated by radioactive elements and to have reached temperatures around 1,000°C. The eminent meteoriticist Dr. Rhian Jones of the Institute of Meteoritics in Albuquerque succinctly explains the result: "In a melted asteroid, melted rocky material and melted metal do not mix. The two liquids are like oil and water and stay separate. Metal is much denser than the rocky liquid, so metal sinks to the center of the asteroid and forms a core." This liquid metal consisted largely of iron and nickel, which cooled very slowly over a period of millions of years, resulting in the formation of a crystalline alloy structure visible as the Widmanstätten Pattern [see below] in iron, and some stony-iron, meteorites that have been sectioned and etched. A catastrophic event that led to the destruction of some of these asteroids - such as a collision with another substantial body - scattered iron-nickel fragments into space. Occasionally these fragments encounter our planet and hurtle, melting, through our atmosphere. Those that survive and land upon Earth's surface are iron meteorites. How Do We Know They Are Real Meteorites? One of the questions I am most frequently asked is: "How do we know they are real?" An experienced meteorite researcher, hunter, or collector can usually identify a genuine iron meteorite just by looking at it and holding it. While melting in our atmosphere, iron meteorites typically acquire small oval-shaped depressions on their surfaces known as regmaglypts. These features are not found on Earth rocks. Iron meteorites are very dense - much heavier than almost all terrestrial rocks - and will easily adhere to a strong magnet. Iron meteorites also contain a relatively high percentage of nickel - a metal very rarely found on Earth - and they display a unique feature that is never seen in terrestrial material.
The Widmanstätten Pattern in Iron Meteorites

In the early 1800s, a British geologist remembered only as "G" or possibly "William" Thomson discovered a remarkable pattern while treating a meteorite with a solution of nitric acid. Thomson was attempting to remove oxidized material from a specimen of the Krasnojarsk pallasite. After applying the acid, Thomson noticed a lattice-like pattern emerging from the matrix. The same effect was also noted by Count Alois von Beckh Widmanstätten in 1808, and is today best known as the Widmanstätten Pattern, but is sometimes also referred to as the Thomson Structure. The intricate pattern is the result of extremely slow cooling of molten asteroid cores. The interlocking bands are a mixture of the iron-nickel alloys taenite and kamacite. My colleague Elton Jones explains: "Nickel is slightly more resistant to acid than is iron so the mineral taenite doesn't etch as fast as kamacite, thus permitting the inducement of the Widmanstätten Pattern. Coarseness is an indication of the length of time the crystal growing process was allowed to run within the body of the asteroid. Growth of both mineral plates occurs so long as the temperature remains above 400°C and below 900°C. Generally this process is measured in declines of tens of degrees C per million years." Since Widmanstätten Patterns cannot form in earthbound rocks, the presence of this structure is proof of meteoric origin.

Classification of Iron Meteorites

Iron meteorites typically consist of approximately 90 to 95% iron, with the remainder comprised of nickel and trace amounts of heavy metals including iridium, gallium and sometimes gold. They are classified using two different systems: chemical composition and structure. There are thirteen chemical groups for irons, of which IAB is the most common. Irons that do not fit into an established class are described as Ungrouped (UNGR). Structural classes are determined by studying the two component alloys in iron meteorites: kamacite and taenite. The kamacite crystals revealed by etching with nitric acid are measured, and the average bandwidth is used to determine the structural class, of which there are nine, including the six octahedrites. An iron with very narrow bands, less than 1 mm (example: the Gibeon iron from Namibia), is described as a fine octahedrite. At the other end of the scale is the coarsest octahedrite (example: Sikhote-Alin from Russia), which may display a bandwidth of 3 cm or more. Hexahedrites exhibit large single crystals of kamacite; ataxites have an abnormally high nickel content; plessitic octahedrites are rare and exhibit a fine spindle-like pattern when etched; the anomalous group includes those irons that do not fit into any of the other eight classes. Both methodologies are commonly used together when cataloging iron meteorites. For example, the Campo del Cielo iron from Chaco Province in Argentina is described as a coarse octahedrite with a chemical classification of IAB.

Some Famous Iron Meteorites

Canyon Diablo
Coconino County, Arizona, USA
First discovered 1891
IAB, coarse octahedrite

About 25,000 years ago a building-sized iron meteorite crashed into the desert between the present-day towns of Flagstaff and Winslow in northern Arizona. The size and inertia of the impactor resulted in a massive explosion which excavated a crater almost 600 feet deep and 4,000 feet in diameter. Research conducted by the seminal meteorite scientist H.H.
Nininger revealed that a large part of the original mass vaporized upon impact, while hundreds of tons of fragments fell around the crater within a radius of several miles. The site is erroneously named Meteor Crater (craters are formed by meteorites, not meteors) and is generally regarded as the best preserved impact site on Earth. Iron meteorites are still occasionally found around the crater, but the surrounding land is privately owned and, unfortunately, meteorite collecting is prohibited. The meteorite takes its name from a steep-sided canyon situated west of the crater.

Willamette
Clackamas County, Oregon, USA
IIIAB, medium octahedrite

The 15-ton Willamette iron is considered by many to be the most beautiful and spectacular meteorite in the world. It was discovered in 1902 on land owned by the Oregon Iron and Steel Company near the village of Willamette (today part of the city of West Linn). The finder, Mr. Ellis Hughes, together with his fifteen-year-old son, discreetly moved the huge iron almost a mile, onto his own land, using an ingenious handmade wooden cart. Hughes was later successfully sued by the steel company, with ownership of the meteorite being awarded to them. In 1906 the meteorite was purchased, reportedly for $20,600, and donated to the American Museum of Natural History in New York. It was displayed in the Hayden Planetarium for many years, and can today be viewed in the Rose Center for Earth and Space. Controversy has continued to follow the Willamette. The Confederated Tribes of the Grand Ronde Community of Oregon sued the American Museum of Natural History for the return of the Willamette, claiming it once belonged to the Clackamas tribe and is a relic of historic and religious significance. In the year 2000, an agreement was reached stipulating that the Grand Ronde Community could "re-establish its relationship with the meteorite with an annual ceremonial visit."

Sikhote-Alin
Primorskiy Kray, Russia
Witnessed fall, February 12, 1947
IIAB, coarsest octahedrite

In the winter of 1947 the largest documented meteorite event took place near the Sikhote-Alin mountains in eastern Siberia. Thousands of fragments fell among snow-covered trees and formed an extraordinary crater field comprised of 99 separate impact structures. There are two distinct types of Sikhote-Alin meteorites: individuals, which flew through the atmosphere on their own, often acquiring regmaglypts and orientation; and angular shrapnel fragments, which exploded as a result of atmospheric pressure. Sikhote-Alin individuals typically melted into unusual sculptural shapes in flight, are among the most attractive iron meteorites, and are much coveted by collectors.

Geoff Notkin's Meteorite Book

Geoffrey Notkin, co-host of the Meteorite Men television series and author of Meteorwritings on geology.com, has written an illustrated guide to recovering, identifying and understanding meteorites. Meteorite Hunting: How to Find Treasure From Space is a 6" x 9" paperback with 83 pages of information and photos.

About the Author

Geoffrey Notkin is a meteorite hunter, science writer, photographer, and musician. He was born in New York City, raised in London, England, and now makes his home in the Sonoran Desert in Arizona. A frequent contributor to science and art magazines, his work has appeared in Reader's Digest, The Village Voice, Wired, Meteorite, Seed, Sky & Telescope, Rock & Gem, Lapidary Journal, Geotimes, New York Press, and numerous other national and international publications.
He works regularly in television and has made documentaries for the Discovery Channel, BBC, PBS, History Channel, National Geographic, A&E, and the Travel Channel. He is currently at work on a book about his adventures as a meteorite hunter, which is expected to be published by Stanegate Press in 2009. Photograph by Leigh Anne DelRay.

Photo captions:

A large polished end cut of the Gibeon (IVA), fine octahedrite iron, first discovered in 1836 in the Namib Desert, Namibia. Gibeon is prized by collectors for its beautiful etch pattern, and popular with jewelers as it is a very stable iron and not prone to rusting. Small sections of the Gibeon irons are sometimes fashioned into rings and have been used to adorn the faces of expensive watches. Photograph by Leigh Anne DelRay © Aerolite Meteorites.

Detail of a Gibeon iron slice, after etching with a mild solution of nitric acid. Note the intricate pattern of taenite and kamacite bands. In etched sections of Gibeon, these bands are typically about 1 mm wide or less, hence its designation as a fine octahedrite. Gibeon is one of the largest known meteorite falls, with an estimated total recovered weight of 26 metric tons. Many of the largest known pieces are on display in Windhoek, the capital of Namibia. Photograph by Leigh Anne DelRay © Aerolite Meteorites.

Detail of a slice from the Glorieta Mountain meteorite, discovered in Santa Fe County, New Mexico in 1884. Both pallasites and siderites (irons) have been found in the same strewnfield. Note the complex interlocking pattern of iron-nickel bands. The area pictured is approximately 12 cm in width. Photograph by Leigh Anne DelRay © Aerolite Meteorites.

The Henbury iron meteorite from central Australia is associated with a large crater field, and was first discovered in 1931. Henbury is classed as a IIIAB iron and is a medium octahedrite. The bands are considerably wider than those of the Gibeon iron (fine octahedrite), also pictured on this page. Area shown is approximately 8 cm in width. Photograph by Leigh Anne DelRay © Aerolite Meteorites.

A spectacular example of the Sikhote-Alin iron meteorite, which fell in eastern Russia in 1947. This large specimen weighs 11.1 kg / 24 1/2 lbs and is described as a complete individual, as opposed to shrapnel specimens, which are angular as a result of explosive fragmentation in the atmosphere. The scale cube pictured is 1 cm in size. Note the sculptural shape and abundant regmaglypts (thumbprint-like indentations), caused when the surface melted during flight. Photograph by Leigh Anne DelRay © Aerolite Meteorites.

More About Meteorite Identification

If you would like to learn more about meteorite identification, and discover how to perform some other simple tests at home, please visit The Aerolite Guide to Meteorite Identification. Meteorites are very valuable both to the scientific community and to enthusiastic collectors. So, if you think one landed in your backyard, be sure to get it checked out!

Detail of a remarkable 155.7-gram oriented Sikhote-Alin specimen.
During flight, the leading edge maintained a fixed orientation towards our planet, resulting in the snub-nosed or bullet shape which is typical of highly oriented meteorites. Note the tendril-like features where rivulets of molten iron flowed across the surface. Photograph by Leigh Anne DelRay © Aerolite Meteorites.

The author [above left] and his friend and expedition partner, Steve Arnold, hunting for iron meteorites with specialized metal detectors in Red River County, Texas. Meteorites are known to have fallen in the area, which is also an old farming community. The overgrown terrain, coupled with ground rich in discarded farm implements and man-made iron materials, made meteorite hunting a real challenge. Photograph by McCartney Taylor © Aerolite Meteorites.
The Development of our System of Integers

Negative numbers have a long history: merchants and bookkeepers were among their earliest users, recording debts and business transactions. Objective: Students will research the history of the development of our system of integers, and how these numbers were first applied to business and other contexts. You will find the packet and the rubric at the bottom of this page. You must copy and paste this into a Word document so you can type in it. BE SURE THAT YOU TYPE YOUR ASSIGNMENT.

Click on the following links to practice graphing points on the coordinate plane. For each game you are going to rate it based on the following: directions, difficulty and enjoyment. Record your ratings and score in the Coordinate Plane Activities section of the Integer Webquest packet. Websites for Coordinate Plane (don't forget to push the back button when you are finished viewing the site)

HISTORY OF NEGATIVE INTEGERS

Use the websites below to answer these questions. Be sure to write the answers to the questions in your own words. You will not receive any credit for answers that are copied directly from a website. Write your answers to these questions in the History of Negative Integers section of the Integer Webquest packet. A clear, understandable answer written in complete sentences is required. Questions to be researched:
1. In what century were negative integers finally accepted?
2. Which cultures were the first to use negative integers and what contributions did they make?
3. When was the word integer introduced?
4. What does the word integer mean in Latin?
5. What is the symbol for integers, where does it come from and what does it mean?

The History of Negative Integers Websites (don't forget to push the back button when you are finished viewing the site)
Online Etymology Dictionary
Negative Numbers at MathPages
Properties of Real Numbers: Negative Numbers
History of Negative Numbers

RULES FOR INTEGER OPERATIONS

Use the websites below to answer these questions. You need to include an example with each of the explanations. Be sure to write the answers to the questions in your own words. Your explanation needs to include the math you do and how you determine the sign of your answer. Make sure you use the correct math terminology in your explanations. Write your explanation and example in the Rules for Integer Operations section of the Integer Webquest packet. Questions to be researched (a short illustrative sketch of these rules appears at the end of this webquest):
1. How do you add two integers if they have the same sign? Make sure that you explain how to determine the sum's sign.
2. How do you add two integers if they have different signs? Again, explain how to determine the sum's sign.
3. Explain how you determine a difference when subtracting two integers.
4. Explain how you multiply and divide integers. A chart is acceptable as part of your answer.

RULES FOR INTEGER OPERATIONS WEBSITES (don't forget to push the back button when you are finished viewing the site)
Positive and Negative Numbers at Math League
Integers: Operations with Signed Numbers
AAA Math: Multiplication with Integers
AAA Math: Dividing Integers
Learning Wave - Multiplying Integers
Math Guide: Operations on Integers

INTEGERS AND EVERYDAY LIFE

Visit the websites listed to find examples of real-life uses of integers, particularly the addition and subtraction of negative numbers. Identify three instances where integers are used in your everyday life. Provide a solid explanation that demonstrates your understanding of how the integers are involved with the examples that you identify.
You will not receive credit for explanations or examples that are copied directly from a website. The explanations must include how BOTH POSITIVE AND NEGATIVE numbers are used. Write these explanations in the Integers and Everyday Life section of the Integer Webquest packet.

Integers and Everyday Life Websites (don't forget to push the back button when you are finished viewing the site)
Math Forum: Negative Numbers in the Real World
Math Forum: Integers in Daily Life
Math Forum: Subtracting Negative Numbers
Math Forum: Using Integers
Math Goodies: Integers and the Real World
Math Forum: Integers in Everyday Life

REAL-LIFE WORD PROBLEMS

Write THREE ORIGINAL real-life word problems. You must write one addition, one subtraction and one multiplication or division word problem. Each problem that YOU create MUST use NEGATIVE and POSITIVE integers. No credit will be given for examples that are copied from a website. The word problems need to be realistic (i.e. you cannot have -3 marbles). These three word problems and their solutions are to be written in the Real-Life Word Problems section of the Integer Webquest packet.

PRACTICE WITH INTEGERS

Click on the following links to practice adding, subtracting, multiplying and dividing with integers. For each game you are going to rate it based on the following: directions, difficulty and enjoyment. Record your ratings and score in the Practice with Integers section of the Integer Webquest packet. You will need to play the Arithmetic Four game with a partner. You need to choose Integer Addition, Integer Subtraction, Integer Division and Integer Multiplication for your settings and then click on Start Game.

This webquest will be counted as a test grade. Refer to the grading rubric that is included in your packet for details on the assessment of the activity.

Ideas for this webquest were taken from:
Miss Young's Webquest, Holy Family University, PA
King Phillip Middle School Interesting Integers Webquest
Cam Miller and Nina Newlin, Worcester County and Kent County, MD

Late assignments will be deducted a grade for every day late. Automatic F after 3 days late.

What you turn in (staple in this order)
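As promised in the Rules for Integer Operations section above, here is a brief illustrative sketch (Python; intended for teachers previewing the rules, not as a substitute for students' own explanations). Same signs add and keep the sign; different signs subtract and take the sign of the larger absolute value; subtracting means adding the opposite; like signs multiply or divide to a positive result, unlike signs to a negative one.

```python
# A few worked cases confirming the integer sign rules.
examples = [
    ("(-3) + (-5)", (-3) + (-5)),    # same sign: add magnitudes, keep the negative
    ("(-7) + 4",    (-7) + 4),       # different signs: 7 - 4, sign of the larger (-7)
    ("6 - (-2)",    6 - (-2)),       # subtracting is adding the opposite: 6 + 2
    ("(-4) * (-5)", (-4) * (-5)),    # like signs multiply to a positive product
    ("12 // (-3)",  12 // (-3)),     # unlike signs divide to a negative quotient
]
for expression, value in examples:
    print(f"{expression} = {value}")
```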
Find out why punctuation is important, and learn how understanding proper punctuation gives writers more choices and can improve spelling and grammar. Apostrophes are used for more than possession! In this lesson, review how to use apostrophes for contractions, short forms, and tricky homophones. In the world of grammar, the apostrophe for possession is one of the hardest punctuation marks to use correctly. Discover the rules for usage in this lesson! Hyphen use in English is complicated, but here to help is Joanne of How to Spell. In this lesson, learn how to use hyphens to form compound words. Using capital letters makes your writing look more professional. Find out how to use capital letters to indicate proper nouns and the beginnings of sentences.
The Nyquist frequency, named after electronic engineer Harry Nyquist, is half of the sampling rate of a discrete signal processing system. It is sometimes known as the folding frequency of a sampling system. An example of folding is depicted in Figure 1, where fs is the sampling rate and 0.5 fs is the corresponding Nyquist frequency.[note 1] The black dot plotted at 0.6 fs represents the amplitude and frequency of a sinusoidal function whose frequency is 60% of the sample-rate (fs). The other three dots indicate the frequencies and amplitudes of three other sinusoids that would produce the same set of samples as the actual sinusoid that was sampled. The symmetry about 0.5 fs is referred to as folding. The Nyquist frequency should not be confused with the Nyquist rate, which is the minimum sampling rate that satisfies the Nyquist sampling criterion for a given signal or family of signals. The Nyquist rate is twice the maximum component frequency of the function being sampled. For example, the Nyquist rate for the sinusoid at 0.6 fs is 1.2 fs, which means that at the fs rate, it is being undersampled. Thus, Nyquist rate is a property of a continuous-time signal, whereas Nyquist frequency is a property of a discrete-time system. When the function domain is time, sample rates are usually expressed in samples/second, and the unit of Nyquist frequency is cycles/second (hertz). When the function domain is distance, as in an image sampling system, the sample rate might be dots per inch and the corresponding Nyquist frequency would be in cycles/inch. Referring again to Figure 1, undersampling of the sinusoid at 0.6 fs is what allows there to be a lower-frequency alias, which is a different function that produces the same set of samples. That condition is usually described as aliasing. The mathematical algorithms that are typically used to recreate a continuous function from its samples will misinterpret the contributions of undersampled frequency components, which causes distortion. Samples of a pure 0.6 fs sinusoid would produce a 0.4 fs sinusoid instead. If the true frequency was 0.4 fs, there would still be aliases at 0.6, 1.4, 1.6, etc.,[note 2] but the reconstructed frequency would be correct. In a typical application of sampling, one first chooses the highest frequency to be preserved and recreated, based on the expected content (voice, music, etc.) and desired fidelity. Then one inserts an anti-aliasing filter ahead of the sampler. Its job is to attenuate the frequencies above that limit. Finally, based on the characteristics of the filter, one chooses a sample-rate (and corresponding Nyquist frequency) that will provide an acceptably small amount of aliasing. In applications where the sample-rate is pre-determined, the filter is chosen based on the Nyquist frequency, rather than vice versa. For example, audio CDs have a sampling rate of 44100 samples/sec. The Nyquist frequency is therefore 22050 Hz. The anti-aliasing filter must adequately suppress any higher frequencies but negligibly affect the frequencies within the human hearing range. A filter that preserves 0–20 kHz is more than adequate for that. Early uses of the term Nyquist frequency, such as those cited above, are all consistent with the definition presented in this article. Some later publications, including some respectable textbooks, call twice the signal bandwidth the Nyquist frequency; this is a distinctly minority usage, and the frequency at twice the signal bandwidth is otherwise commonly referred to as the Nyquist rate. 
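To make the folding effect concrete, here is a minimal numerical check (the 100 Hz sample rate and 60 Hz tone are invented for illustration): sampling a cosine at 0.6 fs yields exactly the same samples as its 0.4 fs alias.

```python
import numpy as np

fs = 100.0          # sample rate (Hz), so the Nyquist frequency is 50 Hz
n = np.arange(32)   # sample indices
t = n / fs          # sample times

undersampled = np.cos(2 * np.pi * 60.0 * t)  # 0.6 fs, above the Nyquist frequency
alias        = np.cos(2 * np.pi * 40.0 * t)  # 0.4 fs, folded about 0.5 fs

# From the samples alone, the two sinusoids are indistinguishable.
print(np.allclose(undersampled, alias))  # True
```

This is exactly why the anti-aliasing filter described above must remove content beyond the Nyquist frequency before sampling: once the samples are taken, the 60 Hz and 40 Hz interpretations cannot be told apart.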
- In this context, the factor of ½ has units of cycles per sample, as explained at Aliasing#Sampling sinusoidal functions. - As previously mentioned, these are the frequencies of other sinusoids that would produce the same set of samples as the one that was actually sampled. - Grenander, Ulf (1959). Probability and Statistics: The Harald Cramér Volume. Wiley. The Nyquist frequency is that frequency whose period is two sampling intervals. - Harry L. Stiltz (1961). Aerospace Telemetry. Prentice-Hall. the existence of power in the continuous signal spectrum at frequencies higher than the Nyquist frequency is the cause of aliasing error - Thomas Zawistowski, Paras Shah. "An Introduction to Sampling Theory". Retrieved 17 April 2010. Frequencies "fold" around half the sampling frequency - which is why [the Nyquist] frequency is often referred to as the folding frequency. - Jonathan M. Blackledge (2003). Digital Signal Processing: Mathematical and Computational Methods, Software Development and Applications. Horwood Publishing. ISBN 1-898563-48-9. - Paulo Sergio Ramirez Diniz, Eduardo A. B. Da Silva, Sergio L. Netto (2002). Digital Signal Processing: System Analysis and Design. Cambridge University Press. ISBN 0-521-78175-2.
Glaucoma is an eye disease in which pressure inside the eye (intraocular pressure) rises dangerously high, damaging the optic nerve and causing vision loss. In a healthy eye, fluid is produced in the ciliary body, enters the eye, and then drains through tiny passages called the trabecular meshwork. In people with glaucoma, these passages become blocked and intraocular pressure rises. Some cases of glaucoma can be treated with medications. For others, laser or traditional surgery is required to lower eye pressure. Common surgeries include: Laser Peripheral Iridotomy (LPI) – For patients with narrow-angle glaucoma. A small hole is made in the iris to increase the angle between the iris and cornea and encourage fluid drainage. Argon Laser Trabeculoplasty (ALT) and Selective Laser Trabeculoplasty (SLT) – For patients with primary open angle glaucoma (POAG). The trabecular passages are opened to increase fluid drainage. ALT is effective in about 75% of patients, and SLT may be repeated. Nd:YAG Laser Cyclophotocoagulation (YAG CP) – For patients with severe glaucoma damage who have not been helped with other surgeries. The ciliary body that produces intraocular fluid is destroyed. Filtering Microsurgery (Trabeculectomy) – For patients who have not been helped with laser surgery or medications. A new drainage passage is created by cutting a small hole in the sclera (the white part of the eye) and creating a collection pouch between the sclera and conjunctiva (the outer covering of the eye).
An African Curiosity

As European powers increased the exploration and exploitation of the New World, Asia, and Africa, a fervent attention to objects from so-called "savage" cultures manifested itself in cabinets of curiosity. Objects of nature and artifice were abundant, such as intricately carved ivory tusks, while gazelles and primates were popular live exhibits. The African masks on display in the Gettysburg Cabinet reflect the ambition of the Renaissance collector to possess and exhibit every bit of the world; in African cultures, however, the mask serves a spiritual function. Traditional ceremonies are individual to each people and have been performed for centuries. During these rituals, which often involve some sort of dance or repetitive action, a mask is used to transform the wearer into a certain spiritual being. This enables the participant to commune with the spirits, be they ancestral or from a pantheon of gods. These practices are imperative for increasing and continuing the life force of the community and the natural resources on which it depends. African masks do not follow the Western tradition of naturalism, instead conveying abstract representations of animals and spirits. The masks were crafted by skilled artists and meant for one individual wearer; for this reason no two are exactly alike. Masks and other objects from the continent were decontextualized in their transport to Europe; removed from their original environment, they became exotic décor or specimens of natural wonders for European private collections. Such items reinforced the contemporary perception of Africa and its peoples. Ancient texts molded the educated Renaissance citizen's vision of Africa; Pliny's Natural History, for example, described it as "strange beasts and weird men living in a land of burning heat." European powers maintained such views while exploiting the commercial and cultural resources found on far-off continents. For example, in his mid-fifteenth-century observations taken during the exploration of the West African coast, the Venetian merchant Alvise da Mosto described Senegambian women as "ready to sing and dance," though noting that "their dances are very different from ours." In the European mind, Africa (as well as the New World, explored almost contemporaneously) embodied the exotic; and its people, the savage. An emphasis on the differences in costume, art, ritual, and skin color between Europeans and Africans produced this idea of savagery, despite the many similarities between some African and European societies. The tradition of curiosity about "primitive" aspects of African art and the continent itself is the source of numerous contradictions. For instance, the long history of enslavement of black Africans, exacerbated during fifteenth-century explorations, is coupled with the Western appreciation of a visual aesthetic that became one of the most important influences on 20th-century Modern Art. Imperial expansion into Africa continued, and shifts of power brought more European nations to the continent. Twentieth-century European artists launched revolutionary art movements after having seen an influx of tribal arts in European ethnographic displays. European artists found themselves drawn to art characterized by abstraction and raw visual power, so different from naturalistic European conventions of sculpture and art.
Because the African masks used in this exhibit are 20th-century productions sold to tourists, they exist as the culmination of the visual tradition of representing African art in a Western space. The smaller mask, with rounded, incised red earlobes, is small enough to handle easily; not the correct dimensions to function as a real ritualistic piece in ceremony, it is meant for dramatic decoration. Sharp angles form its abstract features. The mouth is open, with neatly carved front teeth, and there are thin slits for eyes under heavy incised lids. The angles of the cheeks are highlighted with the same red, and like the other mask, the top of the head is painted red and slopes upward toward a protruding head piece. The larger piece is closer to the size of a "real" African mask, one used in ceremonies and for its traditional purpose; its length would cover the face of a wearer. However, there are no markings on the back, nor any indication of a mechanism for wearing the mask, leading to the conclusion that this too is simply a tourist object, a simplification of a revered object. It is not as pointed as the other work, instead featuring elongated, pulled features and a more lifelike face carved in high relief at its top. Amy Trevelyan, a former art history professor at Gettysburg College who specialized in Native American art, acquired these masks for the Schmucker Gallery collection because of their well-respected formal style. These 20th-century masks, individual in their form and function, stand as a tactile testament to the Western curiosity with the African continent that began during Renaissance explorations and continues through the present day.

More about objects brought to Europe by Portuguese merchants: Kate Lowe, "The stereotyping of black Africans," in Black Africans in Renaissance Europe, ed. T.F. Earle and K.J.P. Lowe (New York: Cambridge University Press, 2005), 11.
Ferdinand Anton, Frederick J. Dockstader, et al., Primitive Art (New York: Harry N. Abrams, Inc., 1979), 269-271.
"African Masks." Last modified 2010. http://www.contemporary-african-art.com/african-masks.html.
David Killingray, A Plague of Europeans: Westerners in Africa Since the Fifteenth Century (Middlesex: Penguin Books Ltd., 1973), 12.
Lowe, "The stereotyping of black Africans," 35.
Lowe, "The stereotyping of black Africans," 15.
Roland Oliver and Anthony Atmore, Medieval Africa 1250-1800 (Cambridge: Cambridge University Press, 2001), 74.
Colin W. Newbury, "Trade and Authority in West Africa," in Colonialism in Africa 1870-1960, Volume I, ed. L.H. Gann and Peter Giuignan (New York: Cambridge University Press, 1969), 83.
Teaching the Holocaust is an important but often challenging task for those involved in modern Holocaust education. What content should be included and what should be left out? How can film and literature be integrated into the curriculum? What is the best way to respond to students who resist the idea of learning about it? This book, drawing upon the latest research in the field, offers practical help and advice on delivering inclusive and engaging lessons along with guidance on how to navigate through the many controversies and considerations when planning, preparing, and delivering Holocaust education. Whether teaching the subject in History, Religious Education, English or even in a school assembly, there is a wealth of wisdom which will make the task easier for you and make the learning experience more beneficial for the student. Topics covered include:
- The aims of Holocaust education
- Ethical issues to consider when teaching the Holocaust
- Using film and documentaries in the classroom
- Teaching the Holocaust through literature
- The role of online learning and social media
- The benefits and practicalities of visiting memorial sites
With lesson plans, resources, and schemes of work which can be used across a range of different subjects, this book is essential reading for those that want to deepen their understanding and deliver effective, thought-provoking Holocaust education. Michael Gray teaches at Harrow School, UK. He has a Ph.D. in Holocaust education and has published widely on the subject. He is a member of both the International Network of Genocide Scholars and the British Association of Holocaust Studies.
Contents:
1. What was the Holocaust?
2. Why teach about the Holocaust?
3. How should the Holocaust be taught?
4. What do students already know?
5. Using film in the classroom
6. Using literature in the classroom
7. Digital learning and new technologies
8. Visiting Holocaust sites
9. Comparing the Holocaust to other genocides
10. Teaching in a multicultural society
11. Combatting antisemitism
12. Dealing with Holocaust denial and distortion
13. Schemes of work
14. Lesson plans
15. Teaching resources
Municipal solid waste and greenhouse gases

The decomposition of organic waste in landfills produces a gas composed primarily of methane, a greenhouse gas contributing to climate change. Landfill gas can be recovered and utilized to generate electricity, fuel industries and heat buildings. There are two major benefits to recovering and utilizing landfill gas. The first is that capturing and combusting landfill gas prevents substances like methane from escaping to the atmosphere; the second is that using the energy from landfill gas can replace the use of non-renewable sources of energy such as coal, oil, or natural gas. While landfill gas recovery is a method to deal with the organic materials already in landfills, diverting organic materials such as food and yard waste from landfills (using composting or anaerobic digestion) will reduce the production of methane in the first place, and can also generate renewable energy and useful products such as compost. Below are documents that further illustrate the link between waste management activities and greenhouse gases.
- Greenhouse Gases Calculator for Waste Management
- Determination of the Impact of Waste Management Activities on Greenhouse Gas Emissions: 2005 Update (PDF - 835 kB)
- An Analysis of Resource Recovery Opportunities in Canada and the Projection of Greenhouse Gas Emission Implications (PDF - 5.4 MB)
- Global Methane Initiative
Methane is 25 times more potent than carbon dioxide in terms of its global warming potential.
- Emissions from Canadian landfills account for 20% of national methane emissions.
- Canada's Greenhouse Gas Inventory notes that in 2015, approximately 30 megatonnes (Mt) of carbon dioxide equivalent (eCO2) were generated at Canadian landfills, of which 19 Mt eCO2 were ultimately emitted.
- Approximately 11 Mt eCO2 generated at landfills were captured, of which 5.4 Mt eCO2 were combusted and 5.6 Mt eCO2 were utilized for various energy purposes.
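To make the 25x factor concrete, here is a minimal arithmetic sketch (Python, illustration only; the methane tonnage is back-calculated from the inventory figure above, not taken from the source):

```python
# CO2-equivalent of a methane mass, using the global warming potential
# of 25 quoted above. The 1.2 Mt CH4 input is inferred for illustration:
# it is what the 30 Mt eCO2 "generated" figure implies at a GWP of 25.
GWP_CH4 = 25

methane_mt = 1.2                   # megatonnes of CH4
co2e_mt = methane_mt * GWP_CH4     # megatonnes of CO2-equivalent

print(f"{methane_mt} Mt CH4 ~ {co2e_mt} Mt CO2e")  # 1.2 Mt CH4 ~ 30.0 Mt CO2e
```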
FLOODING, LOW WATER, AND HIGH WATER LEVELS IN THE RAINFOREST

Seasonal flooding is characteristic of many tropical rivers, although few compare to the so-called igapo (swamp forest) and varzea (flooded forest) of the Amazon River Basin, where large tracts of rainforest are inundated to depths of 40 feet during seasonal flooding. The lowest flood stage occurs in August and September, while the highest stage occurs in April and May. Tributaries that drain the Guyana Shield flood in June, while the tributaries that drain the Brazilian Shield flood in March or April. Since the peak rainy seasons are out of phase, the peak discharges of left-bank (Guyana Shield) and right-bank (Brazilian Shield) rivers are somewhat offset, which has the effect of moderating high and low water levels on the main stream, though tributaries can have extreme variations. Rain and snow that fall in the Andes and other highland areas reach the Amazon through its tributaries and produce the high-water season. Deforestation of the foothills and upper basin may have caused a shift in rain levels during certain times of the year, resulting in irregular high and low river levels. Flooding has important functions for the surrounding forests, including eradicating pests, enriching soils with nutrients from whitewater rivers (especially varzea forests), and dispersing seeds.

Varzea vs. Igapo Forest

The contrasts between the low- and high-water seasons in some areas of the Amazon Basin are extreme. Low water leaves vast islands and sand bars exposed and river banks high above water level. Smaller tributaries may become so shallow that travel by dugout canoe is possible only when travelers push the canoe. Creeks and streams, which are raging torrents when rainstorms come, may dry up altogether. Low water is a time of troubles for most Amazonian fish and a time of plenty for predators like arapaima, dolphins, and jaguars. With the dramatic decrease in water area, fish become trapped in tiny lakes and river shallows and are easy targets for predators. In the floodplains, which during high water are a continuous stretch of water, bodies of water are reduced to floodplain lakes. These floodplain lakes are packed with fish and predators, and dissolved oxygen levels are sharply reduced. For a few weeks each year, massive die-offs occur in these pools when cold Antarctic air passes over parts of the Amazon, cooling surface waters and causing them to sink to the bottom. The bottom of floodplain lakes is often a decaying anaerobic layer of organic sludge. As surface waters sink, methane and hydrogen sulfide from the bottom push toward the surface, causing tremendous die-offs. Vultures crowd by the thousands to feed on the carcasses. Many fish have adapted to the lack of oxygen by developing structures that enable them to take oxygen directly from the air. Most famous are the lungfish of South America, Africa, and Australia, but many catfish and loaches also are able to use atmospheric oxygen directly. The best-known predator of floodplain lakes is the arapaima or pirarucu, one of the world's largest freshwater fish. The species attains a maximum of 16 feet, though today such large individuals are extremely rare because of overfishing, and conservation efforts are now focused on restoring this magnificent species. The anaconda is also an apex predator in floodplain lakes.
High water is the time of the flooded forest, when water levels rise 30 to 40 feet, flooding the surrounding forest and floodplains and linking river branches into one massive body of water. The higher water level makes the lower canopy accessible by boat. Many tree species depend on the floods for seed dispersal, through animal or mechanical (floating downriver) means. It is a time of abundance for most herbivorous fish, which can feed on the fruit and seeds that fall from fruiting trees. The Amazon is home to the vast majority of fish species dependent on fruits. One famous fruit-eating fish is the tambaqui, a large fish that crushes fallen seeds with its strong jaws. The tambaqui waits beneath trees that are dropping seeds, congregating especially under its favorite, the rubber tree Hevea spruceana, which is widely scattered in the flooded forest. Humans take advantage of the tambaqui and other fish that wait for fallen seeds by imitating falling seeds with a pole that has a seed attached by a line. When the fish is attracted within range, the hunter harpoons it. In Amazonian folklore, it is said that the jaguar hunts such seed-eating fish using its tail to mimic the "thud" of falling seeds. The high-water season is a difficult time for fish predators: the increased water area gives potential prey a larger range, and predators must rely on the fat stores built up from heavy feeding during the dry season. Many omnivorous species eat mostly seeds and fruit during this period. High water also means difficulty for ground-dwelling plant and animal species. Many ground dwellers migrate to more elevated areas, while some species move up into the trees. Understory plants and shrubs may spend 6-10 months underwater, where they are thought to continue some form of photosynthesis. Research published in 2005 found that flooding in the Amazon causes a sizable portion of South America to sink several inches because of the extra weight and then rise again as the waters recede. Scientists say that this annual rise and fall of the Earth's crust is the largest ever detected, and it may one day enable researchers to calculate the total amount of water on Earth.

A topographic map of a section of the central Amazon River Basin near Manaus, Brazil: dark blue indicates channels that always contain water, lighter blue depicts floodplains that seasonally flood and drain, and green represents non-flooded areas. Image courtesy of the Global Rain Forest Mapping Project.

- How do changes in water level affect the Amazon?

Continued / Next: Floating Meadows

Sources: The flowering and pollination of the Amazonian water lily is described in Attenborough, D. (The Private Life of Plants, Princeton, New Jersey: Princeton University Press, 1995); Goulding, M. (Amazon: The Flooded Forest, New York: Sterling Publishing Co., Inc., 1990); and Davis, W. (One River, New York: Touchstone, 1996). Goulding, M. (Amazon: The Flooded Forest, New York: Sterling Publishing Co., Inc., 1990) is the source for the number of species and individuals in a floating meadow. The ecology of the tambaqui is discussed in Amazon: The Flooded Forest (New York: Sterling Publishing Co., Inc., 1990) by M. Goulding. Goulding, M. (The Fishes and the Forest, Berkeley, CA: University of California Press, 1980) finds that numerous fish species are important seed dispersers in the flooded forest and warns that clearing of várzea forests could reduce their populations.
He also reports that over three-quarters of the fish important in commerce and subsistence depend directly or indirectly on flood-plain forests for food.
The Exit-Slip strategy requires students to write responses to questions you pose at the end of class. Exit Slips help students reflect on what they have learned and express what or how they are thinking about the new information. Exit Slips easily incorporate writing into your content area classroom and require students to think critically.
- Prompts that document learning:
  - Ex. Write one thing you learned today.
  - Ex. Discuss how today's lesson could be used in the real world.
- Prompts that emphasize the process of learning:
  - Ex. I didn't understand...
  - Ex. Write one question you have about today's lesson.
- Prompts to evaluate the effectiveness of instruction:
  - Ex. Did you enjoy working in small groups today?
Other sentence starters:
- I would like to learn more about...
- Please explain more about...
- The most important thing I learned today is...
- The thing that surprised me the most today was...
- I wish...
Exit Slips are great because they take just a few minutes and provide you with an informal measure of how well your students have understood a topic or lesson.
Create and use the strategy
- At the end of your lesson, or five minutes before the end of class, ask students to respond to a prompt you pose to the class.
- You may state the prompt orally to your students or project it visually on an overhead or blackboard.
- You may want to distribute 3x5 cards for students to write their responses on, or allow students to write on loose-leaf paper.
- As students leave your room they should turn in their exit slips.
- Review the exit slips to determine how you may need to alter your instruction to better meet the needs of all your students.
- Collect the exit slips as a part of an assessment portfolio for each student.
Fisher, D., and Frey, N. (2004). Improving Adolescent Literacy: Strategies at Work. New Jersey: Pearson Prentice Hall.
According to Wikipedia, an embedded system is a special-purpose computer system designed to perform one or a few dedicated functions, sometimes with real-time computing constraints. It is usually embedded as part of a complete device including hardware and mechanical parts. In contrast, a general-purpose computer, such as a personal computer, can do many different tasks depending on programming. Embedded systems have become very important today as they control many of the common devices we use. Today, the boundaries between general-purpose PCs, servers and embedded systems are more blurred. These computers share the same platforms and the same peripherals. For example, an x86/Mac PC can be used as a server, and an x86/PowerPC CPU can be used in an embedded system like a portable navigation device (PND). Going the other way, the popular embedded processor StrongARM was developed by DEC as a powerful processor for desktop-class workstations, and embedded systems often also act as servers, as in NAS (Network Attached Storage) devices. Like the hardware suppliers, the OS suppliers port their products to desktop PCs, servers and embedded systems. Nothing cuts to the heart of a development project like the choice of OS. Whether it's a tiny scheduler or kernel, an open-source distribution, a tightly wound real-time operating system, a fully featured commercial RTOS, or no OS at all, it drives all downstream software decisions and many hardware decisions as well. A survey shows us what's important to those who get a say in the choice of OS. The criteria are (a small scoring sketch follows this list):
- Real-time performance
- Processor compatibility
- Software tools
- No royalties
- Memory footprint
- Services & features
- Hardware support
- Supplier's reputation
- Other products
Resources: a market research report for embedded OSes; Wikipedia resources on OSes for embedded systems.
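To show how such criteria might feed an actual decision, here is a minimal weighted-scoring sketch; every weight, score, and candidate name below is invented for illustration, and only a subset of the criteria is scored:

```python
# Hypothetical weighted scoring of OS candidates against the survey criteria.
criteria = {  # criterion -> weight (relative importance, summing to 1.0)
    "real-time performance": 0.25,
    "processor compatibility": 0.15,
    "software tools": 0.15,
    "no royalties": 0.10,
    "memory footprint": 0.15,
    "services & features": 0.10,
    "hardware support": 0.10,
}
candidates = {  # candidate -> criterion -> score out of 10 (all invented)
    "RTOS A":  {"real-time performance": 9, "processor compatibility": 7,
                "software tools": 8, "no royalties": 3, "memory footprint": 8,
                "services & features": 7, "hardware support": 6},
    "Linux B": {"real-time performance": 6, "processor compatibility": 9,
                "software tools": 9, "no royalties": 10, "memory footprint": 5,
                "services & features": 9, "hardware support": 9},
}
for name, scores in candidates.items():
    total = sum(criteria[c] * scores[c] for c in criteria)
    print(f"{name}: weighted score = {total:.2f}")
```

A real selection would also weigh the softer criteria (supplier's reputation, other products) that resist numeric scoring.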
'The Monte Carlo Method', from http://nrich.maths.org/ (© University of Cambridge, all rights reserved).

Why do this problem?

This problem is a good way to engage students with many ideas in statistics. It can be accessed at a variety of levels. The activity works well as a group discussion. The most important features are the questions raised by the process. Students may come up with their own statistical questions in using this activity. It can be related to basic intuitive probability or to more formal expectation analysis. Questions of convergence of the expression for the area can also be addressed informally or related to ideas surrounding the central limit theorem and the law of large numbers.
- What is the chance of a randomly thrown cell falling under the shape?
- Are there any cells which might be problematic? How might we deal with those?
- How reliable would you think that this method might be?
Able students will want to focus on the question of creating an algorithm for deciding when to stop generating random squares. They can also relate this to work on the central limit theorem. They might like to consider a refinement of the algorithm which takes into account the boundary of the shape. Just using the activity in a hands-on fashion to find areas by recording whether randomly generated squares fall under or to the side of a shape can really reinforce the understanding of basic ideas in statistics. Encourage students to note when questions or uncertainties in the process arise.
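For teachers who want to demonstrate the method computationally, here is a minimal sketch (Python; the unit circle is our choice of test shape, not part of the original activity) of estimating an area by throwing random points at a bounding box:

```python
import random

def monte_carlo_area(inside, x_range, y_range, trials=100_000):
    """Estimate the area of a shape from the fraction of random points
    (thrown uniformly over a bounding box) that land inside it."""
    (x0, x1), (y0, y1) = x_range, y_range
    hits = sum(
        inside(random.uniform(x0, x1), random.uniform(y0, y1))
        for _ in range(trials)
    )
    box_area = (x1 - x0) * (y1 - y0)
    return box_area * hits / trials

# Check against a shape of known area: the unit circle (area = pi).
estimate = monte_carlo_area(lambda x, y: x * x + y * y <= 1.0,
                            x_range=(-1, 1), y_range=(-1, 1))
print(estimate)  # roughly 3.14, improving slowly as trials grow
```

The slow improvement with more trials (error shrinking roughly as one over the square root of the number of points) is exactly the convergence question the teacher notes suggest raising with students.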
Showing the Airflow in a Wind Tunnel

Abstract: A technique often used in wind tunnels is to introduce smoke in front of the airfoil that is being tested. The smoke comes from regularly spaced point sources, and the wind flow in the tunnel spreads it out into parallel lines, called streamlines. The streamlines make it possible to visualize the airflow over the airfoil. When the lines continue smoothly over and past the airfoil, they show that the flow remains laminar, and that the airfoil is creating very little drag. When the streamlines show more chaotic, turbulent flow, they indicate that the airfoil is creating more drag. You can do something similar with a wind tunnel by stretching thin strings across the flow path, above and below your airfoil test zone. Clip them in place so you can move them up and down to fit different airfoil shapes. Attach a ribbon (about 25 cm long) to each string. Use a stick attached to your airfoil to hold it while you place it in the flow path, between the ribbons. The ribbons will act like the smoke streamlines, so that you can visualize whether the flow over your airfoil is turbulent or laminar. Try different airfoil shapes and measure which have the most laminar and the most turbulent flow. (Parker, 2005, 18-19)

Bibliography: Parker, S., 2005. The Science of Air: Projects and Experiments with Air and Flight. Chicago, IL: Heinemann Library.
This method uses a single number (0.301), derived by Dr. Weldon Vlasak, and provides a simple way to estimate the value of a logarithm or a decibel. The aim is to show just how easily logarithms can be handled, and how it is possible to calculate the logarithm of any number without a table of logs or a calculator, using only that single number.

For the most part, natural phenomena behave as exponential functions. The word "exponential" shouldn't scare anyone, because it is simply another way to write a number. The logarithm of a complex number is much more difficult, but we will deal with real numbers, which are not as difficult to understand as some believe. With the knowledge gained here, you will be able to cover any finite range of numbers. Later we shall also see how decibels express power levels in the form of logarithms. First, we find the first digit of a logarithm, which is very easily obtained:

1. Deriving the First Digit of the Logarithm: Since we generally count from one to ten, the number 10 will be the "base" number (B = 10). The log of the number 1 is zero. The log of 10 is the number 1, which can also be written log(10) = 1. Thus log(100) = 2, log(1000) = 3, and so on. What could be simpler? Now let's go in the other direction and handle numbers that are less than one: log(0.1) = log(1/10) = -1, log(1/100) = -2, and log(1/1000) = -3. The base need not be 10, and it is easy to convert to any other base, which is something we will do later. Adding logarithmic values corresponds to multiplication. The first digit of the logarithm of a number N is the number of times the base number is multiplied by itself. Here are some examples: for base 10 we can write log(10) = 1.0, log(100) = 2.0, and log(1000) = 3.0. We have just calculated three exact log values of N for these simple cases. In fact, calculating the log of N was easier than multiplying or dividing one number by another using ordinary arithmetic. If the log value is 3, it is simply the number of zeros in N = 1000. For numbers less than one, the logarithms are negative: log(0.1) = -1, log(0.01) = -2, and log(0.001) = -3. The intermediate point is log(1) = 0. Numbers are seldom whole numbers, and there is a way to handle them, as will be seen later. For example, if N is between 10 and 100, then log(N) is between 1.0 and 2.0.

2. Now let's do some manipulations using what we have learned so far. When we add two log values, such as [log(10) + log(100)] = (1 + 2) = 3, we get the same result that we obtained above, where log(1000) = 3, so adding logarithms corresponds to multiplication of real numbers: 10 x 100 = 1000. Again, not so difficult. Similarly, [log(100) - log(10)] = (2 - 1) = 1, which corresponds to the division of two numbers, 100/10 = 10, and log(10) = 1. In other words, adding log values equates to multiplying two numbers, and subtracting two log values equates to dividing them.

3. Here is an example. Choose a large number that is not a power of ten, say N = 16,777,216.
Move the decimal point until only one digit is to the left of the decimal point. In this case, the decimal point is moved seven places to the left. The number of places that we move it is the first digit of the log of N, so log(N) is approximately 7.0000. This is the first estimate of the logarithm of this very large number, and the estimated value is Nest = 10,000,000. This is about 60% of the value of N, obtained from just one quick operation! The precise value of the logarithm is log(N) = 7.224719896 = 7 + log(1.6777216), and therefore the log error is (7 - 7.224719896) = -0.224719896. This is an even lower percentage error in the log domain: only -3.11%. But we are not yet done. When N is less than one, we use a similar procedure, except that the decimal point is moved to the right until we have just one digit at the left of the decimal point. The log of any finite number can be estimated by simply moving the decimal point, but this rough estimate may not be sufficiently accurate for various applications. Therefore, we will now improve the accuracy of the estimate using the method of paragraph 4.

4. A New Binary/Decimal Division Estimation Method: We will use the square root of two to our advantage. The following graph, which shows the logarithms of chosen well-spaced numbers over a range of 10,000:1, is helpful in explaining the next steps of the procedure. We will use these particular values in the graph to derive a fairly accurate logarithm of any number within that range. The log(N) is plotted for the following values of N, which apply to any decade (only four decades are illustrated):
(a.) N = 1.414 (the square root of 2),
(b.) N = 2.828 (2 times the square root of 2),
(c.) N = 5.656 (4 times the square root of 2),
(d.) N = 8 (4 times 2),
(e.) N = 10.
Notice that the spacing between these numbers is relatively even, which greatly simplifies the estimation procedure. Using the above values, it is only necessary to remember one main number in order to obtain an accurate estimate of the logarithm of any number! The points plotted in the graph include the values of N stated above as intermediate points. For example, log(2) = 0.301. All of the values of log(N) given above are multiples of either 0.301 or 0.301/2 = 0.1505 (note that 0.1505 is the log of the square root of 2). So for this procedure, you only need to remember one number: 0.301. For the specific values N of 1, 2, 4, and 8 in the graph, the logarithmic value increases consecutively by 0.301 within each decade: (0, 0.301, 0.602, 0.903) = 0.301 x (0, 1, 2, 3). For instance, log(8) = 3 x log(2) = 3 x 0.301 = 0.903. We now have four binary-encoded points in each decade, and a total of 16 additional points for the entire graph. Also, observe that the log of the square root of two is 0.301/2 = 0.1505, which allows us to obtain intermediate values between any two adjacent binary-value points on the curve. This is not an exact interpolation method, but it is pretty good, and we can easily make it more accurate. We can also divide numbers in a similar way in order to determine the log value. Choosing N = 10, if we divide N by two we get the number five. Again, log(N) = 1, but since we divided N by two, we subtract the log value, resulting in log(N/2) = log(5) = (1 - 0.301) = 0.699. The actual value is 0.69897...
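As a concrete check, the whole procedure so far (decimal-point shifting plus the 0.301-based anchor points) can be sketched in a few lines of code. This is my own illustration, not from the source: the anchor list, the linear interpolation between bracketing anchors, and the name estimate_log10 are assumptions made for the sketch.

```python
# The single memorized constant and its half, log10(sqrt(2)).
LOG2 = 0.301
LOG_SQRT2 = LOG2 / 2  # 0.1505

# Anchor points within one decade and their logs (all multiples of 0.1505).
ANCHORS = [1.0, 1.414, 2.0, 2.828, 4.0, 5.656, 8.0, 10.0]
LOGS    = [0.0, 0.1505, 0.301, 0.4515, 0.602, 0.7525, 0.903, 1.0]

def estimate_log10(n):
    """Estimate log10(n) using only the constant 0.301, no log tables."""
    # Step 1: shift the decimal point; the shift count is the integer part.
    characteristic = 0
    while n >= 10:
        n /= 10
        characteristic += 1
    while n < 1:
        n *= 10
        characteristic -= 1
    # Step 2: interpolate between the two anchors that bracket n.
    for i in range(len(ANCHORS) - 1):
        if ANCHORS[i] <= n <= ANCHORS[i + 1]:
            frac = (n - ANCHORS[i]) / (ANCHORS[i + 1] - ANCHORS[i])
            return characteristic + LOGS[i] + frac * (LOGS[i + 1] - LOGS[i])

print(estimate_log10(16_777_216))  # ~7.218, versus the true value 7.2247
```

Linear interpolation between anchors is slightly different from the halve-0.1505 shortcut described below, but it captures the same idea and shows how little machinery the method needs.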
Similarly, log(N/4) = log(2.5) = (0.699 - 0.301) = 0.398. This gives us more points on the graph if we choose to use them, for greater accuracy or easier estimation. So now we know the log values of 10, 5, and 2, which are 1.0, 0.699, and 0.301. The log of 4 is therefore 0.602, and the log of 8 is 0.903. For numbers less than one, the log of 0.5 is -0.301, the log of 0.25 is -0.602, and the log of 0.125 is -0.903. All of this comes from remembering just one number (0.301). By multiplying these numbers by powers of ten, we simply add integers to the logarithm; by dividing by powers of ten, we subtract integers. Thus we can determine the log values of an extremely wide range of numbers to a reasonable degree of accuracy without a table of logarithms, a computer, or a calculator! With the values of the points on the curve we can also use an interpolation method. For instance, let us consider N = 2 x sqrt(2) = 2.828. Using the above method, log(2.828) = log(2) + log(sqrt(2)) = (0.301 + 0.1505) = 0.4515. This is similar to the method of the previous paragraph, where we determined that log(8) = log(2 x 2 x 2) = (0.301 x 3) = 0.903. In order to obtain the closest estimate for the log of any number, choose the anchor values closest to the number N for which you want to find the logarithm, as pictured in the graph.

5. Using This Interpolation Method for Greater Accuracy in the Estimate of Paragraph 3: Recall that N = 16,777,216, and that the first estimate was 10,000,000, for a log value of 7. We now add 0.1505 (the log of the square root of two) for a new log value of 7.1505, and the new estimate is 14,142,136. This estimate is within 16% of the value of N. If we now observe the points on the graph, 1.677 is about halfway between 1.414 and 2, so we divide 0.1505 in half and add it to the previous estimate, giving 7.1505 + 0.07525 = 7.22575, an error of only about 0.3%. In most cases where logarithms are used in engineering and science, this type of linear interpolation will be more than sufficient.

6. The Logarithm of a Binary Number: Now we choose a different base number, B = 2 (the binary number system), in which case the logarithm is the number of times that the number 2 is multiplied by itself. For N = 16, the number two is multiplied four times, N = 2 x 2 x 2 x 2 = 16 = 2^4, and its base-2 logarithm is log2(16) = 4. Similarly, for N = the square root of 2 = 1.4142, the binary log value is 0.5, whereas the decimal log value of 2 is 0.301.

7. Decibels: A decibel value refers to a ratio of two numbers. The decibel (abbreviated dB) was originally defined in terms of audio power: an increase in power level of one decibel was the least amount of change in audio power that the average human ear could detect. We now use decibels to define signal levels over a very wide range for electrical signals, etc. The most common reference point for zero dB is one milliwatt of power. Once the reference value is known, the absolute power can be expressed by a logarithmic value. The decibel is defined as 10 log(P2/P1). Therefore, an increase in power of 10:1 is 10 log(10) = 10 x 1 = 10 dB. Going in the other direction, an increase of 10 dB means 10 log(P2/P1) = 10, so log(P2/P1) = 1.0, and thus P2/P1 = 10^1 = 10. Now consider an increase in power of 3 dB, in which case 10 log(P2/P1) = 3, and thus log(P2/P1) = 0.300, which is sufficiently close to our magical number of 0.301.
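To make the definition concrete, here is a small sketch (my own illustration, not part of the original text) converting between power ratios and decibels with the exact definition; math.log10 stands in for the memorized 0.301 approximation.

```python
import math

def db_from_ratio(p2, p1):
    """Decibels from a power ratio: dB = 10 * log10(P2/P1)."""
    return 10 * math.log10(p2 / p1)

def ratio_from_db(db):
    """Power ratio from decibels: P2/P1 = 10^(dB/10)."""
    return 10 ** (db / 10)

print(round(db_from_ratio(2, 1), 2))  # ~3.01 dB: doubling the power is "3 dB"
print(ratio_from_db(10))              # 10.0: a 10 dB step is a tenfold increase
```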
Therefore, 3 dB means an increase in power of 2:1 (it will help to remember this). The term dBm means that the reference level is one milliwatt, so in this case 3 dBm means a power level of 2 milliwatts.

8. A Simple Exercise: In order to test what you have learned here, consider the following problem:
a.) Calculate the relative increase in power for an increase of one dB.
b.) Determine the absolute power level for one dBm.
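As a hedged check on the exercise (my own worked example, not the source's posted answers), both parts follow directly from the definition:

```python
# Part a: the power ratio corresponding to an increase of +1 dB.
one_db_ratio = 10 ** (1 / 10)
print(round(one_db_ratio, 3))  # ~1.259, i.e. roughly a 26% power increase

# Part b: 1 dBm is that same ratio referenced to 1 milliwatt.
print(round(one_db_ratio * 1.0, 3), "mW")  # ~1.259 mW
```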
In 1543, twenty-eight years before Kepler's birth, Copernicus published the landmark astronomical text De Revolutionibus, or On the Revolutions of the Heavenly Spheres. The standard story about Copernicus's achievement is that by the sixteenth century, the Ptolemaic system had gotten too complicated and inaccurate to bear. In a stroke of genius, Copernicus moved the sun to the center of the universe, creating a new system of brilliant simplicity and inarguable accuracy. Despite the attempts of the Catholic Church to drown out Copernican arguments, Ptolemy's system was soon overthrown. The Copernican system is thus heralded as a prime example of the triumph of a new, modern scientific era. The story is true only in part. Copernicus did revolutionize astronomy by introducing a heliocentric system. But the concept of a sun-centered universe was not brand new and, in fact, had occurred to many of the ancient philosophers. Despite popular belief, Copernicus did not drastically simplify the Ptolemaic system. What we now think of as the Copernican system – six planets traveling around the sun in simple, circular orbits with no epicycles – was only made possible by Kepler's later refinements. In fact, Copernicus's new heliocentric universe contained almost as many epicycles as the old system. Copernicus was just as devoted as his colleagues to the concept of uniform circular motion, and was willing to introduce as many mathematical devices as were necessary to simulate it. The Copernican system was no less complicated than the Ptolemaic system, nor was it any more accurate. Each of the systems yielded predictions that were accurate enough for the astronomers and navigators of the time. Copernicus's achievement was undeniably remarkable. But almost as remarkable was the ability of a few astronomers to grasp the truth of the heliocentric system, even though there was little evidence to recommend it. Kepler was one of those insightful few. At a time when the Ptolemaic system still ruled in the European universities and the public mind, when other astronomers refused to publicly support Copernicus for fear of ridicule, Kepler was an unabashed Copernican. Although he had no technical evidence supporting one system over the other, he remained certain that the sun was at the center of the universe. While historians can never be sure exactly why Kepler latched on to the heliocentric view so quickly and so firmly, most believe that he was attracted to it by the same combination of physical intuition and mystical theorizing that guided him throughout his professional career. Kepler learned of the Copernican system at the University of Tübingen, from his first mentor, the professor Michael Maestlin. Maestlin publicly supported the Ptolemaic system – he had even written an astronomy textbook based on Ptolemy. However, in the safety of his own classroom, Maestlin was a full-fledged Copernican, and Kepler soon followed suit. Kepler would soon become the first well-known astronomer to publicly support the Copernican system. At the same time, he would recreate that system in a much more physically and mathematically accurate form. What we now think of as the Copernican conception of the universe is actually Kepler's system. Once at Gratz, Kepler focused on studying and refining Copernican astronomy. He accepted the Copernican construction of the universe, but one all-encompassing question remained: why were the planets arranged the way they were?
More specifically, he wondered why there were only six planets (as was thought at the time), why they moved at the speeds they did, and why they were spaced as they were. These were revolutionary questions. Before Kepler, no one had thought to wonder why the universe was constructed in a certain way. For millennia, astronomers had devoted themselves to describing the way the planets moved, rather than questioning why that movement occurred. In the centuries before Kepler, astronomy had been purely mathematical. Kepler was the first major astronomer of the modern age to introduce questions of physics into the study of the stars. A deeply devout man, Kepler was convinced that God had created an orderly universe, and his first major pursuit was figuring out what God's intentions might have been. Kepler played with the numbers for months, searching fruitlessly for a pattern. Finally, on July 9, 1595, he found one. On that day, while standing at the blackboard drawing a geometrical figure for his class, Kepler had an epiphany. He believed it was a divine inspiration. Kepler had drawn a triangle with a circle circumscribed around it, meaning that each of the triangle's corners touched the rim of the circle. Then he inscribed another circle inside the triangle, so that the center of each side of the triangle touched the inner circle. When Kepler stepped back and looked at what he had drawn, he realized with a shock that the ratio of the two circles was the same as the ratio of the orbits of Saturn and Jupiter. And with that realization, inspiration struck. Jupiter and Saturn were the outermost planets of the solar system, and the triangle was the simplest polygon. Kepler wondered whether he could fit the orbits of the other planets around other geometric figures, and tried his best, inscribing circles in squares and pentagons. But the planetary orbits refused to fit. Then Kepler had a second epiphany. The solar system was three-dimensional – so why would its governing pattern be found in two-dimensional figures? Kepler turned to three-dimensional objects, and found his answer in the five perfect solids. A perfect solid is a three-dimensional figure, such as a cube, whose faces are all identical. Conveniently for Kepler, there are only five perfect solids: the tetrahedron (which has four triangular sides), cube (six square sides), octahedron (eight triangular sides), dodecahedron (twelve pentagonal sides), and icosahedron (twenty triangular sides). Each perfect solid can be inscribed in and circumscribed around a sphere. Kepler believed that the orbits of the six known planets – Mercury, Venus, Earth, Mars, Jupiter, and Saturn – could be fit around the five regular solids. He had finally found the answer to his question of "why." The reason there were only six planets was that there were only five perfect solids; the spacing of the planets was determined by the spacing between the solids. Kepler's new system was wrong. He had made the incredible leap of asking the question "why" – but had come up with a completely wrong answer. However, Kepler would continue to cling to this system in some form for the rest of his life – he valued it far above all his other achievements. And perhaps he was right to do so. Though incorrect, his idea would launch him on a lifelong path of investigation and discovery. It would lead him to revolutionize astronomy and take his place as one of the fathers of the scientific revolution.