Where are you going to shop now that Target is closing? I never shopped at Target. I'll just continue dreaming about Ikea.
Anxiety, Depressed, Bipolar or just Making Excuses? I think I may be depressed. I have a very difficult time falling asleep and often take several hours to do so, and when I do fall asleep I wake up many times throughout the night. I've had this sleeping problem most of my life, but over the past 6 months it's become extremely difficult for me to get out of bed in the morning. This has caused me to arrive late at school or miss a day entirely. I have missed almost 20 days of school this year. Sometimes I skip school because I feel that I just can't deal with people or the teachers. My G.P.A. had never fallen below a 3.5 until this year; my current G.P.A. is 1.5. I lost motivation in school and other activities I enjoy. I don't even involve myself in conversation with my friends anymore and often fall into a trance while everybody converses. I constantly feel overwhelmed with school and the guilt of my low grades. I often feel as if my mind is detached from my body and environment and get the sensation that life is a dream. Other times I feel anxious and think that people are constantly watching and judging me. My mind tends to fluctuate between these two feelings throughout the day, but on the rare occasion I will have a huge burst of confidence and energy. This boost occurs about once a week. Is this mood fluctuation due to a Bipolar Disorder, or is it normal in depression? Am I even depressed, or do you think this is all due to a flaw in my own motivation and work ethic? I can tell you right off that this is NOT due to a flaw in your character, or just some kind of excuse for slacking. Something is going on that needs to be addressed. I couldn't say for sure without evaluating you fully. Sometimes Bipolar Disorder does start with those kinds of symptoms. For that reason, I must caution you that it can be extremely risky to take an antidepressant medication if there is an underlying, untreated Bipolar Disorder.
Often an antidepressant triggers extreme mania and sends you on a roller coaster that lasts for years. Don't blindly trust any doctor who isn't an expert in Bipolar Disorder (actually, don't blindly trust anyone). There are lots of other possible explanations for the experiences you're having, including physical issues: thyroid problems or nutritional imbalances, for example, can cause all sorts of extreme emotional ups and downs, anxiety and all. It's important to get checked out fully, but by a doctor who isn't just going to make you pee in a cup and run a few simple labs. Someone who can look at the whole picture is what is needed. Remember that even with a serious neurological issue like Bipolar Disorder, there are always psychological/emotional aspects, and stress makes it all far worse. So finding healthy ways to take care of yourself, emotionally and physically, can only help. Exercise, good nutrition, low sugar, low gluten/grains/carbs, and no artificial sweeteners or preservatives is a good place to start. Learning EFT/Meridian Tapping can be extremely helpful for any of these symptoms: depression, anxiety, low motivation, and insomnia can all often be improved quickly with Tapping. I'd start by ruling out anything serious. See a good health practitioner for a full evaluation.
Take some clear glass vases from the dollar store and turn them into a focal point on any shelf, mantle, or table centerpiece with just a few simple steps and supplies. Use any combination of colors and sizes of glitter to create different unique effects. Follow these steps to create your own DIY glitter vase home decor today.
2. Using the foam brush, stir the Mod Podge and apply a thin layer around the entire glass vase.
3. Holding the vase over a piece of scratch paper or a plastic plate, spread the chunky glitter discs by rotating the vase until it is covered in its entirety.
4. To cover any gaps, gather the glitter together on the paper and roll the base over the glitter until all areas are covered.
5. Apply a second layer of Mod Podge covering the desired area for the next application of glitter.
6. Take the colored extra-fine glitter and pour it over the Mod Podge area until covered.
7. Apply a third layer with another color towards the bottom part of the vase to create a shimmery effect.
What is C, What is C++, and What is the Difference? C is a programming language originally developed for developing the Unix operating system. It is a low-level and powerful language, but it lacks many modern and useful constructs. C++ is a newer language, based on C, that adds many more modern programming language features that make it easier to program than C. Basically, C++ maintains all aspects of the C language, while providing new features to programmers that make it easier to write useful and sophisticated programs. For example, C++ makes it easier to manage memory and adds several features to allow "object-oriented" programming and "generic" programming. Basically, it makes it easier for programmers to stop thinking about the nitty-gritty details of how the machine works and think about the problems they are trying to solve.
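The memory-management difference described above can be made concrete. The sketch below is illustrative only (it is not from the original FAQ, and the function names are made up): the C-style version allocates, fills, and frees a buffer by hand, while the C++ version lets `std::vector` own its storage and release it automatically.

```cpp
#include <cstdlib>
#include <numeric>
#include <vector>

// C-style: the programmer manages the buffer's lifetime manually.
int sum_squares_c(int n) {
    int *buf = (int *)std::malloc(n * sizeof(int));
    if (buf == nullptr) return -1;            // manual error handling
    for (int i = 0; i < n; ++i) buf[i] = i * i;
    int total = 0;
    for (int i = 0; i < n; ++i) total += buf[i];
    std::free(buf);                           // forgetting this line leaks memory
    return total;
}

// C++-style: std::vector frees its storage automatically when it goes out of
// scope (RAII), and the generic std::accumulate replaces the summing loop.
int sum_squares_cpp(int n) {
    std::vector<int> squares;
    for (int i = 0; i < n; ++i) squares.push_back(i * i);
    return std::accumulate(squares.begin(), squares.end(), 0);
}
```

Both functions compute the same result (for n = 4, the sum 0 + 1 + 4 + 9 = 14), but the C++ version has no failure path for the programmer to forget, which is exactly the "stop thinking about the nitty-gritty details" point the FAQ makes.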
The Shallows Review: Not quite the modern 'Jaws' | SWITCH. It doesn't matter whether you're lying on your back in the pool, taking a dip in the river, or heading out past the break at the beach, most of us experience a - somewhat irrational - fear of what lurks below the waves. The dark, murky water could hold any number of unexpected surprises, waiting to pull us down deep below the surface. The legacy of films such as 'Jaws' has only helped fuel our anxieties. The latest shark thriller, 'The Shallows', is circling cinemas - and while it may not be the next 'Jaws', there's plenty of fodder there to add to your nightmares. A homage to a much-loved family beach turns deadly when Nancy's (Blake Lively) surfing trip is disrupted by a ravenous shark. After being badly attacked and losing blood, Nancy is forced to spend the evening on an outcrop as the threat circles her. As unsuspecting bystanders fall victim to the shark, the high tide threatens to submerge Nancy's sanctuary - but can she survive its deadly jaws? I want to start off by clarifying: this isn't anywhere near as bad as you think it will be. Better still, it's actually quite decent. First and foremost is the stunning cinematography. Captured more like a surf competition than a film with remarkable aerials and slo-mo shots, the results from cinematographer Flavio Martínez Labiano ('Non-Stop', 'Unknown') and underwater DOP Simon Christidis ('Unbroken', 'The Bait') are phenomenal. It probably doesn't hurt having Lord Howe Island as your location either. Vital to the film is Blake Lively's performance. Nancy is, in essence, the only character in this film, trapped in one location, and as such, Lively carries the film quite well. Proving she's come a long way since her 'Gossip Girl' days, she puts sufficient fear, intelligence and emotion into the role to allow the audience to be empathetic to her position - without which the film would have floundered.
For the most part, 'The Shallows' avoids the horror film cliché: "He's behind you! Don't go in there!" The story doesn't fall victim to many yell-at-the-screen moments, as Nancy finds herself in genuine, unavoidable peril, whilst being smart enough to avoid any further unnecessary danger. But - yes, there is a but - things do come seriously undone as the film reaches its climax. Without giving anything away, the ending becomes completely ridiculous and farcical. It's such a shame, since the first 90% of the film is well-crafted and amply nail-biting. So does the final product sink or swim? Despite its disappointing ending, 'The Shallows' still remains firmly buoyant for me. Providing a truly nerve-racking experience at the cinemas these days is not an easy task, yet this film succeeds in that department. Luckily for us the film is a winter release - in summer, you might prefer to stay out of the ocean for a few days.
What are the best free alternatives to TeamViewer? TeamViewer is the leader in remote connection software, and for good reason: it is extremely easy to use and fluid, one of the main qualities sought by users of this type of solution. You may not know it, but like most successful software, it has been misused to distribute malicious payloads on a large scale. However, it should be noted that no security flaws were found in TeamViewer itself, which was simply used as a distribution tool. In the meantime, other free tools have also made their way into the top rankings of remote access software. If you are looking for a good, secure alternative or simply need a change, you should like these solutions. Yes, it is possible to control a machine remotely using Google's in-house browser. The web giant has developed a Chrome extension that allows you to control a computer using your Google Account. This solution is very intuitive and is intended for domestic use (remote gaming, download management, file recovery). The application is also available on Android and iOS, and allows you to manage your Mac or Windows computer from your smartphone. Here is another intuitive solution, but a more efficient one than Google's. The software transmits the remote machine's screen with remarkable fluidity. On the technical side, AnyDesk is able to display 60 frames per second with very low, almost imperceptible latency (16 milliseconds). Like Chrome Remote Desktop, AnyDesk also exists on smartphones and tablets. This program has a chat feature that will be very useful if you plan to use the remote connection for maintenance or to help friends or family who are computer beginners. Your remote work sessions can be saved in AVI format and all connections are secured with 256-bit SSL encryption.
RemotePC also allows you to print remotely from any platform, which is not necessarily the case for all its competitors, and the free version also includes a web viewer, allowing you to access the remote computer from a browser. VNC is not as intuitive as TeamViewer or the other software in this selection. Nevertheless, this old-timer is a reference in terms of remote control. It is mainly aimed at experienced users. Why? Because it is complicated to get it through firewalls, for example, and because it cannot drag and drop. However, there is still a good alternative. Multi-platform like its competitors, NoMachine stands out from them with its proprietary NX protocol, which ensures optimal session fluidity. Extremely complete, it includes many features such as video capture, simplified file transfer and audio management. With this software, you don't have to worry about configuration: once installed, it automatically scans the network to detect machines running NoMachine. Radmin is another secure remote control software.
Act like you weren't just born yesterday, but like you also just moved in yesterday. Kitchen?! Where's that? Don't hold your farts. Let them rip! As soon as your wife leaves the hole in the wall she likes to call her office, quickly sit yourself down on her chair, and start watching old Star Trek episodes. If possible: fart. Go to bed after your wife, so you can step into a warm comfortable bed. Make sure to make lots of noise when you get in, and pull the duvet from her tired body. As soon as you see your wife looking all happy and saying: 'Finally! I've conquered the laundry!' go and take a shower, and throw all your clothes on the ground. Go to the bathroom and don't close the bathroom door, so everybody can hear what you're doing in there. Answer every question your wife asks you with a question of your own: 'I don't know?' That way she'll never have anything on you! Throw fruit into the fruit basket without getting rid of the plastic packaging. Tell your wife that the birth of your children was just as painful for you as it was for her. It's really hard to see someone you love suffer like that. In fact you still bear the scars from that terrible experience! So funny! My husband has mastered number 7. Drives me nuts!
Education in Venezuela is regulated by the Venezuelan Ministry of Education. In 2010, Venezuela ranked 59th of 128 countries on UNESCO's Education for All Development Index. Nine years of education are compulsory. The school year is from September to June–July. Under the social programs of the Bolivarian Revolution, a number of Bolivarian Missions focus on education, including Mission Robinson (primary education including literacy), Mission Ribas (secondary education) and Mission Sucre (higher education). Education in colonial Venezuela was neglected compared to other parts of the Spanish Empire which were of greater economic interest. The first university in Venezuela, now the Central University of Venezuela, was established in 1721. Education at all levels was limited in both quality and quantity, and wealthy families sought education through private tutors, travel, and the study of works banned by the Empire. Examples include the independence leader Simón Bolívar (1783–1830) and his tutor Simón Rodríguez (1769–1854), and the educator Andrés Bello (1781–1865). Rodríguez, who drew heavily on the educational theories of Jean-Jacques Rousseau, was described by Bolívar as the "Socrates of Caracas". Free and compulsory education for ages 7 to 14 was established by decree on 27 June 1880, under President Antonio Guzmán Blanco, and was followed by the creation of the Ministry of Public Instruction in 1881, also under Guzmán Blanco. In the 15 years after 1870, the number of primary schools quadrupled to nearly 2,000 and the enrollment of children expanded ten-fold, to nearly 100,000. In the early twentieth century, education was substantially neglected under the dictator Juan Vicente Gómez, despite the explosion of wealth due to oil. A year after his death, only 35% of the school-age population was enrolled, and the national literacy rate was below 20%.
In 1928 a student revolt, though swiftly put down, saw the birth of the Generation of 1928, which formed the core of the democracy movement of later years. In 2007 primary education enrollment was around 93%. Many children under five attend a preschool. Children are required to attend school from the age of six. They attend primary school until they are eleven. They are then promoted to the second level of basic education, where they stay until they are 14 or 15. Public school students usually attend classes in shifts. Some go to school from early in the morning until about 1:30 pm and others attend from the early afternoon until about 6:00 pm. All schoolchildren wear uniforms. Although education is mandatory for children, some poor children do not attend school because they must work to support their families. Venezuelan education starts at the preschool level, and can be roughly divided into Nursery (ages below 4) and Kindergarten (ages 4–6). Students in Nursery are usually referred to as "yellow shirts", after the color of uniform they must wear according to the Uniform Law, while students in Kindergarten are called "red shirts". Basic education comprises grades 1 through 6, and lacks a general governing program outside of the Math curriculum. English is taught at a basic level throughout Basic education. These students are referred to as "white shirts". Upon completing Basic education, students are given a Basic Education Certificate. Middle education (grades 7–9) explores each one of the sciences as a subject, as well as algebra. English education continues, and schools may choose between teaching Ethics or Catholic Religion. These students are referred to as "blue shirts". Venezuelans cannot choose their classes. Once a student ends 9th grade, they enter Diversified education, so called because the student must choose between studying either humanities or the sciences for the next two years.
This choice usually determines what majors they can opt for at the college level. These students are referred to as "beige shirts". Upon completing Diversified education (11th grade), students are given the title of Bachiller en Ciencias (Bachelor of Sciences) or Bachiller en Humanidades (Bachelor of Humanities). Some schools may include professional education, and instead award the title of Técnico en Ciencias (Technician of Sciences). Under the Bolivarian government, the Venezuelan Ministry of Education proposed an educational curriculum that would help establish a socialist country. On 14 May 1999, President Hugo Chávez approved lists of books for schools to educate young citizens on socialist ideology. The "Revolutionary Curriculum" was to feature material on theorist Karl Marx, revolutionary Che Guevara, and liberator Simón Bolívar. According to Venezuela's culture ministry, the compulsory book list was designed to help schoolchildren eliminate "capitalist thinking" and better understand the ideas and values "necessary to build a socialist country." In 2011, the government's "Bolivarian" textbooks began to use socialist learning material. According to the Associated Press, pro-government messages were "scattered through the pages of Venezuela's textbooks". Math problems included fractions involving government food programs, English lessons included "reciting where late President Hugo Chávez was born, and learn[ing] civics by explaining why the elderly should give him thanks". The Venezuelan government released 35 million books to primary and secondary schools, called the Bicentennial Collection, each containing "political content", which over 5 million children had used between 2010 and 2014. According to Leonardo Carvajal from the Assembly of Education in Venezuela, the collection of books had "become a vulgar propaganda". Venezuelan historian Inés Quintero stated that in all social science books, "there is an abuse of history, ...
a clear trend favoring the current political project and the political programs of the Government". Geometry professor Tomas Guardia of the Central University of Venezuela stated that "the math textbook is so problematic, there's a good chance this book is also full of errors and propaganda" after he spent months inspecting math textbooks and noticed simple errors, such as calling a shape with four sides a square when it could also be a rectangle or a rhombus. According to the Center of Reflection and Education Planning (CERPE), in a 2014 study by Alfredo Keller et al., 77% of Venezuelans rejected the implementation of education based on a socialist ideology. The government of the state of Miranda joined the PISA programme in 2010 and the first results were published in December 2011. Initial results show pupils in schools managed by the regional government achieved a mean score of 422 on the PISA reading literacy scale, the same score pupils in Mexico received. Murals by Alejandro Otero at the Central University of Venezuela, the largest university in the country. Venezuela has more than 90 institutions of higher education, with 860,000 students in 2002. Higher education remains free under the 1999 Constitution and was receiving 35% of the education budget, even though it accounted for only 11% of the student population. More than 70% of university students come from the wealthiest quintile of the population. To address this problem, instead of improving primary and secondary education, the government established the Bolivarian University system in 2003, which was designed to democratize access to "higher education" by offering heavily politicized study programs to the public with only minimal entrance requirements. Autonomous public universities have had their operational budgets frozen by the state since 2004, and staff salaries have been frozen since 2008 despite an inflation of 20–30% a year.
Higher education institutions are traditionally divided into Technical Schools and Universities. Technical schools award the student with the title of Técnico Superior Universitario (University Higher Technician) after completing a three-year program. Universities award the student with the title of Licenciado (Bachelor) or Ingeniero (Engineer), among many others, according to a student's career choice after completing, in most cases, a five-year program. Some higher education institutions may award Diplomados (Specializations) but the time necessary to obtain one varies. Post-graduate education follows the conventions of the United States (being named "Master's" and "Doctorate" after the programs there). In 2009 the government passed a law to establish a national standardized university entrance examination system, replacing public universities' internal entrance examinations. Some universities have rejected the new system as it creates difficulties in planning. The system has still not been formally implemented by the State. In 2015, Venezuela reformed the National Intake System and gave the government total power to award positions to students in public universities. Along with the reform, other variables were introduced by the Bolivarian government that made it more difficult for students who do not have a lower-class background to find a position in a public university. The reform proved controversial, with protests and accusations that the reform was ideological in nature. According to Quartz, the Bolivarian government reform "disregards several Venezuelan legal precedents", including constitutional laws. In the 1970s when Venezuela was experiencing huge growth from oil sales, the literacy rate increased from 77% to 93% by the start of Hugo Chávez's tenure, being one of the highest literacy rates in the region. By 2007, of Venezuelans aged 21 and older, 95.2% could read and write. 
The literacy rate in 2007 was estimated to be 95.4% for males and 94.9% for females. In 2008, Francisco Rodríguez of Wesleyan University in Connecticut and Daniel Ortega of IESA stated that there was "little evidence" of a "statistically distinguishable effect on Venezuelan illiteracy" during the Chávez administration. The Venezuelan government claimed that it had taught 1.5 million Venezuelans to read, but the study found that "only 1.1 million were illiterate to begin with" and that the illiteracy reduction of less than 100,000 can be attributed to adults who were elderly and died. In 2014, reports emerged showing a high number of education professionals taking flight from educational positions in Venezuela, along with the millions of other Venezuelans who had left the country during the presidency of Hugo Chávez, according to Iván de la Vega, a sociologist at Simón Bolívar University. According to the Association of Professors, the Central University of Venezuela lost around 700 faculty members between 2011 and 2012, most of whom were considered the next generation of professors. About 240 faculty members also quit at Simón Bolívar University. The reason for emigration is reportedly the high crime rate in Venezuela and inadequate pay. According to Claudio Bifano, president of the Venezuelan Academy of Physical, Mathematical and Natural Sciences, most of Venezuela's "technology and scientific capacity, built up over half a century" had been lost during Hugo Chávez's presidency. Bifano acknowledges the country's large educational funds and scientific staff, but states that the output of those scientists had dropped significantly. Bifano reports that between 2008 and 2012, publications in international journals declined by 40%, falling to the same number as in 1997, when Venezuela had about a quarter of the scientists it had between 2008 and 2012. He also says that more than half of the medical graduates of 2013 had left the country.
According to El Nacional, the flight of educational professionals resulted in a shortage of teachers in Venezuela. The director of the Center for Cultural Research and Education, Mariano Herrera, estimated that there was a shortage of about 40% for math and science teachers. Some teachers resorted to teaching multiple classes, and passing students out of convenience. The Venezuelan government seeks to curb the shortage of teachers through the Simón Rodríguez Micromission by cutting the graduation requirements for educational professionals to 2 years. In a study titled Venezolana Community Abroad: A New Method of Exile by Thomas Páez, Mercedes Vivas and Juan Rafael Pulido of the Central University of Venezuela, over 1.5 million Venezuelans, between 4% and 6% of Venezuela's population, left the country following the Bolivarian Revolution; more than 90% of those who left were college graduates, with 40% of them holding a Master's degree and 12% having doctorates and/or post-doctorates. The study used official verification of data from outside of Venezuela and surveys from hundreds of former Venezuelans. Of those involved in the study, reasons for leaving Venezuela included lack of freedom, high levels of insecurity, and lack of opportunities in the country. Páez also explains how some parents in Venezuela tell their children to leave the country for protection from the insecurities Venezuelans face.
"Rechazan padres de familia adoctrinamiento educativo en Venezuela". Yahoo Noticias. 15 May 2014. Retrieved 14 December 2014.
Dreier, Hannah (7 January 2015). "Venezuelan Textbooks Teach Math, Science, Socialism". Associated Press. Retrieved 10 January 2015.
Clarembaux, Patricia (24 June 2014). "Denuncian adoctrinamiento chavista en la educación infantil". Infobae. Retrieved 14 December 2014.
"El 77% de los venezolanos está en contra de la educación socialista". Infobae. 8 November 2014. Retrieved 9 November 2014.
Martin, Sabrina (2 June 2015).
"Venezuela's new university admissions criteria favor government supporters". Quartz. Retrieved 3 June 2015.
Nelson, Brian A. (2009). The Silence and the Scorpion: The Coup against Chávez and the Making of Modern Venezuela (Online ed.). New York: Nation Books. pp. 1–3. ISBN 978-1568584188.
"Propaganda, not policy". The Economist. 28 February 2008. Retrieved 3 May 2014.
Márquez, Humberto (28 October 2005). "Venezuela se declara libre de analfabetismo" (in Spanish). Inter Press Service. Retrieved 29 December 2006.
Small Carmona, Andrea. "Poor conditions blamed for Venezuelan scientist exodus". SciDev. Retrieved 9 July 2014.
"Capacity building: Architects of South American science" (PDF). Nature. 510 (7504): 212. 12 June 2014. doi:10.1038/510209a. Retrieved 9 July 2014.
Montilla K., Andrea (4 July 2014). "Liceístas pasan de grado sin cursar varias materias". El Nacional. Archived from the original on 4 July 2014. Retrieved 9 July 2014.
Maria Delgado, Antonio (28 August 2014). "Venezuela agobiada por la fuga masiva de cerebros". El Nuevo Herald. Retrieved 28 August 2014.
This page was last edited on 9 April 2019, at 16:51 (UTC).
Assign the lot a material status that does not permit reservations.
Assign the lot a material status that only permits subinventory transfers and issues to scrap.
Assign the lot a material status that does not permit reservations, but only permits subinventory transfers and issues to scrap.
Transfer the quantity on hand of the item to a subinventory that does not permit reservations, but only permits subinventory transfers and issues to scrap.
Identify the correct sequence of steps in the purchase order period close business flow.
1. Review the Uninvoiced Receipts report. > 2. Process period-end accruals. > 3. Close the purchasing period. > 4. Process remaining inventory transactions and close the inventory accounting period. > 5. Run the Accrual Rebuild Reconciliation report. > 6. Write off accrued transactions as necessary. > 7. Create a manual journal entry for write-offs.
1. Review the Uninvoiced Receipts report. > 2. Process period-end accruals. > 3. 7. Process remaining inventory transactions and close the inventory accounting period.
1. Process remaining inventory transactions and close the inventory accounting period. > 2. Run the Accrual Rebuild Reconciliation report. > 3. Write off accrued transactions as necessary. > 4. Create a manual journal entry for write-offs. > 5. Review the Uninvoiced Receipts report. > 6. Process period-end accruals. > 7. Close the purchasing period.
> 2. Review the Uninvoiced Receipts report. > 3. Process period-end accruals. > 4. Close the purchasing period. > 5. Run the Accrual Rebuild Reconciliation report. > 6. Write off accrued transactions as necessary. > 7. Create a manual journal entry for write-offs.
You are creating an interorganization transfer. The standard shipping lead time defined for the transfer is 14 days. You want to move the inventory from the source and put it in Intransit. You made a mistake and did not choose the option to have Intransit Inventory.
When you initiate the process for interorganization transfer, what is the result of the transaction?
You would get an error for the transaction, because it violates referential integrity.
The inventory would not be moved from the source organization, because there is a lead time of 14 days defined.
The inventory would be moved from the source organization, but it would be moved directly to the destination organization.
The inventory would be moved from the source organization, but it would be moved to Intransit Inventory, because there is a lead time of 14 days defined.
ACME is implementing Inventory in a Process Manufacturing environment. The default profile for the Min-Max Planning Report is deployed. What will be the result?
Users must approve requisitions for items that have a limited shelf life.
Users must approve requisitions for items with a Material Safety Data Sheet (MSDS).
Users must approve requisitions that are inside the item's Past Due Supply time fence.
Users must approve requisitions when the quantity exceeds the item's Economic Order Quantity (EOQ).
Company XYZ manufactures three different types of PC monitors. Type 1 is a unique requirement for the local college called Northern College. Shipments of Type 1 to other customers are not allowed. This item is stored only in a subinventory called MODEL 1. Other models are also stored in this same subinventory. The other two types (Type 2 and Type 3) can be shipped to any other customer, including Northern College. Which conditions are mandatory for the Picking rule (rule) to meet these requirements?
Transaction type of Sales Order. Rule assigned to Northern College. Subinventory as MODEL 1.
Rule assigned to customer Northern College. Subinventory as MODEL 1.
Rule assigned to Type 1, and customer as Northern College. Subinventory as MODEL 1.
Rule assigned to Type 1, and transaction type as Sales Order. Subinventory as MODEL 1.
Rule assigned to Type 1, transaction type as Sales Order, and customer Northern College.
Your customer has two manufacturing plants. The manufacturing process for both plants is identical, but there are differences in specific areas. Plant 1 uses min-max planning and Plant 2 uses re-order point planning. Plant 1's and Plant 2's variable lead times are different.
Company ABC has a factory that is set up as an inventory organization with several subinventories. Each of the subinventories has locators. The factory purchases material using a purchase order. The person who receives the purchase order does not know which subinventory and locator the material needs to go to. ABC wants the subinventory and locator to be automatically populated for that item. Which option would solve the problem?
Populate the completion subinventory and locator on the routing for that item.
Use the restrict subinventory and restrict locator flag on the Item Master record of that item.
Populate the supply subinventory and locator fields on the Item Master record for the purchased item.
Use the Item Transactions Default form to create default subinventory and locators for the purchased item.
The Generalplan Ost (German pronunciation: [ɡenəˈʁaːlˌplaːn ˈɔst]; English: Master Plan for the East), abbreviated GPO, was the Nazi German government's plan for genocide and ethnic cleansing on a vast scale, and for the colonization of Central and Eastern Europe. It was to be undertaken in territories occupied by Germany during World War II. The plan was partially realized during the war, resulting directly and indirectly in the deaths of 9.4 to 11.4 million ethnic Slavs by starvation, disease, ethnic cleansing, mass murder, or extermination through labor, including 4.5 million Soviet citizens, 2.8 to 3.3 million Soviet POWs, 1.8 to 3 million Slavic Poles, 300 to 600 thousand Serbs and 20 to 25 thousand Slovenes. Its full implementation, however, was not considered practicable during the major military operations, and was prevented by Germany's defeat. The plan entailed the enslavement, forced displacement, and mass murder of most Slavic peoples in Europe (along with substantial parts of the Baltic peoples, especially Lithuanians and Latgalians), together with the planned destruction of their nations, whom the Nazis viewed as racially inferior to 'Aryans'. The program's operational guidelines were based on the policy of Lebensraum designed by Adolf Hitler and the Nazi Party in fulfilment of the Drang nach Osten (drive to the East) ideology of German expansionism. As such, it was intended to be a part of the New Order in Europe. The master plan was a work in progress; four versions of it are known, developed as time went on. After the invasion of Poland, the original blueprint for Generalplan Ost (GPO) was discussed by the RKFDV in mid-1940, during the Nazi–Soviet population transfers. The second known version of GPO was procured by the RSHA from Erhard Wetzel in April 1942. The third version was officially dated June 1942. The final settlement master plan for the East came from the RKFDV on October 29, 1942.
However, after the German defeat at Stalingrad, planning of the colonization in the East was suspended, and the program was gradually abandoned. The body responsible for the Generalplan Ost was the SS's Reich Main Security Office (RSHA) under Heinrich Himmler, which commissioned the work. The document was revised several times between June 1941 and spring 1942 as the war in the east progressed successfully. It was a strictly confidential proposal whose content was known only to those at the top level of the Nazi hierarchy; it was circulated by the RSHA to the Reich Ministry for the Occupied Eastern Territories (Ostministerium) in early 1942. According to the testimony of SS-Standartenführer Dr. Hans Ehlich (one of the witnesses before the Subsequent Nuremberg Trials), the original version of the plan was drafted in 1940. As a high official in the RSHA, Ehlich was responsible for drafting the Generalplan Ost, along with Dr. Konrad Meyer, Chief of the Planning Office of Himmler's Reich Commission for the Strengthening of Germandom. It had been preceded by the Ostforschung, a number of studies and research projects carried out over several years by various academic centres to provide the necessary facts and figures. The preliminary versions were discussed by Heinrich Himmler and his most trusted colleagues even before the outbreak of war. This was mentioned by SS-Obergruppenführer Erich von dem Bach-Zelewski during his evidence as a prosecution witness in the trial of officials of the Race and Settlement Main Office (RuSHA). According to Bach-Zelewski, Himmler stated openly: "It is a question of existence, thus it will be a racial struggle of pitiless severity, in the course of which 20 to 30 million Slavs and Jews will perish through military actions and crises of food supply."
A fundamental change in the plan was introduced on June 24, 1941 – two days after the start of Operation Barbarossa – when the 'solution' to the Jewish question ceased to be part of that particular framework, gaining a lethal, autonomous priority. Nearly all the wartime documentation on Generalplan Ost was deliberately destroyed shortly before Germany's defeat in May 1945, and the full proposal has never been found, though several documents refer to it or supplement it. Nonetheless, most of the plan's essential elements have been reconstructed from related memos, abstracts and other documents. A major document which enabled historians to accurately reconstruct the Generalplan Ost was a memorandum released on April 27, 1942, by Erhard Wetzel, director of the NSDAP Office of Racial Policy, entitled "Opinion and thoughts on the master plan for the East of the Reichsführer SS". Wetzel's memorandum was a broad elaboration of the Generalplan Ost proposal, and it came to light only in 1957. Adolf Hitler, in his attempt to reassure sceptics, cited the world's indifference towards the earlier Armenian Genocide as an argument that possible negative consequences for Germany would be minimal in this case; this declaration, made at the Berghof, has since been referred to as Hitler's Armenian quote.

[Figure: Percentages of ethnic groups to be destroyed and/or deported to Siberia by Nazi Germany from future settlement areas.]

[Map: Europe, with pre-war borders, showing the extension of the Generalplan Ost master plan. Dark grey – Germany (Deutsches Reich); dotted black line – extent of the detailed plan for the "second phase of settlement" (zweite Siedlungsphase); light grey – planned territorial scope of the Reichskommissariat administrative units, named in blue as Ostland (1941–1945), Ukraine (1941–1944), Moskowien (never realized), and Kaukasien (never realized).]
The plan, prepared in the years 1939–1942, was kept strictly secret. The final version of the Generalplan Ost proposal was divided into two parts: the "Small Plan" (Kleine Planung), which covered actions carried out in the course of the war, and the "Big Plan" (Grosse Planung), which described steps to be taken gradually over a period of 25 to 30 years after the war was won. Both plans entailed the policy of ethnic cleansing. As of June 1941, the policy envisaged the deportation of 31 million Slavs to Siberia. The Generalplan Ost proposal specified various percentages of the conquered or colonized peoples who were targeted for removal and physical destruction, the net effect of which would be to ensure that the conquered territories would become German. Within ten years, the plan effectively called for the extermination, expulsion, Germanization or enslavement of most or all East and West Slavs living behind the front lines of East-Central Europe. The "Small Plan" was to be put into practice as the Germans conquered the areas to the east of their pre-war borders. In this way the plan for Poland was drawn up at the end of November 1939, and is probably responsible for much of the World War II expulsion of Poles by Germany (first to the colonial district of the General Government and, from 1942, also to Polenlager).
After the war, under the "Big Plan", Generalplan Ost foresaw the removal of 45 million non-Germanizable people from Central and Eastern Europe to West Siberia, of whom 31 million were "racially undesirable": 100% of Jews, 85% of Poles, 85% of Lithuanians, 75% of Belorussians and 65% of Ukrainians. About 14 million were to remain, but were to be treated as slaves. In their place, up to 8–10 million Germans would be settled in an extended "living space" (Lebensraum). Because the number of Germans appeared to be insufficient to populate the vast territories of Central and Eastern Europe, the peoples judged to lie racially between the Germans and the Russians (Mittelschicht), namely the Latvians and even Czechs, were also supposed to be resettled there.

[Photo: Prisoners of the Krychów forced labor camp dig irrigation ditches for the new German latifundia of the General Plan East, 1940. Most of them, Polish Jews and some Roma people, were afterwards sent to Sobibór extermination camp.]

According to Nazi intentions, attempts at Germanization were to be undertaken only in the case of those foreign nationals in Central and Eastern Europe who could be considered a desirable element for the future Reich from the point of view of its racial theories. The Plan stipulated that there were to be different methods of treating particular nations, and even particular groups within them. Attempts were even made to establish the basic criteria to be used in determining whether a given group lent itself to Germanization. These criteria were to be applied more liberally in the case of nations whose racial material (rassische Substanz) and level of cultural development made them more suitable than others for Germanization. The Plan considered that there were a large number of such elements among the Baltic nations. Erhard Wetzel felt that thought should be given to a possible Germanization of the whole of the Estonian nation and a sizable proportion of the Latvians.
On the other hand, the Lithuanians seemed less desirable, since "they contained too great an admixture of Slav blood." Himmler's view was that "almost the whole of the Lithuanian nation would have to be deported to the East". Himmler is even said to have had a positive attitude towards Germanizing the populations of Alsace-Lorraine, the border areas of Slovenia (Upper Carniola and Southern Styria) and Bohemia-Moravia, but not Lithuania, claiming its population to be of "inferior race". Lithuania, Latvia and Estonia were to be deprived of their statehood, while their territories were to be included in the area of German settlement. This meant that Latvia and especially Lithuania would be covered by the deportation plans, though in a somewhat milder form than the expulsion of Slavs to western Siberia. While the Baltic nations such as the Estonians would be spared the repressions and physical liquidation that the Jews and the Poles were experiencing, in the long term the Nazi planners did not foresee their existence as independent entities, and they would be deported as well, with eventual denationalisation; initial designs were for Latvia, Lithuania and Estonia to be Germanized within 25 years, although Heinrich Himmler revised this to 20 years.

[Poster: Nazi propaganda poster showing the Third Reich in 1939 (dark grey) after the conquest of Poland. It depicts pockets of German colonists resettling into Polish areas annexed by Nazi Germany from Soviet-controlled territories during the "Heim ins Reich" action. The outline of Poland (here superimposed in red) was missing from the original poster.]
In 1941 it was decided to destroy the Polish nation completely, and the German leadership decided that in 15–20 years the Polish state under German occupation was to be fully cleared of any ethnic Poles and settled by German colonists. A majority of the Poles, by then deprived of their leaders and most of their intelligentsia (through mass murder, destruction of culture, a ban on all but the most basic education, and the kidnapping of children for Germanization), would have to be deported to regions in the East and scattered over as wide an area of Western Siberia as possible. According to the plan, this would result in their assimilation by the local populations, which would cause the Poles to vanish as a nation. By 1952, only about 3–4 million 'non-Germanized' Poles (all of them peasants) were to be left residing in the former Poland. Those of them who would still not Germanize were to be forbidden to marry, the existing ban on any medical help to Poles in Germany would be extended, and eventually Poles would cease to exist. Experiments in mass sterilization in concentration camps may also have been intended for use on these populations. The Wehrbauer, or soldier-peasants, would be settled in a fortified line to prevent civilization re-emerging beyond the Ural Mountains and threatening Germany. "Tough peasant races" would serve as a bulwark against attack. However, not very far east of this "frontier" would have lain the westernmost reaches within continental Asia of the Greater East Asia Co-Prosperity Sphere of Imperial Japan, the Third Reich's major Axis partner, had a complete defeat of the Soviet Union occurred. Widely varying policies were envisioned by the creators of Generalplan Ost, and some of them were actually implemented by Germany in regard to the different Slavic territories and ethnic groups.
For example, by August–September 1939 (Operation Tannenberg, followed by the A-B Aktion in 1940), Einsatzgruppen death squads and concentration camps had been employed to deal with the Polish elite, while the small number of Czech intelligentsia were allowed to emigrate overseas. Parts of Poland were annexed by Germany early in the war (leaving aside the rump German-controlled General Government and the areas previously annexed by the Soviet Union), while other territories were officially occupied by or allied to Germany (for example, the Slovak part of Czechoslovakia became a theoretically independent puppet state, while the ethnic-Czech parts of the Czech lands, excluding the Sudetenland, became a "protectorate"). It is unknown to what degree the plan was directly connected to the various German war crimes and crimes against humanity in the East, especially in the latter phases of the war. In any case, the majority of Germany's 12 million forced laborers were abducted from Eastern Europe, mostly in the Soviet territories and Poland (both Slavs and local Jews). The Soviet Extraordinary State Commission, formed during World War II to investigate Nazi crimes and also tasked with compensating the state for damages suffered by the USSR, reported 8.2 million Soviet civilian war dead (4.0 million in Ukraine, 2.5 million in Belarus, and 1.7 million in Russia) as the result of the German occupation. These figures have been disputed outside of Russia. Some reports prepared by the Commission are now considered outright fabrications, such as the shifting of blame for the Katyn massacre, perpetrated by the Soviet authorities themselves. The losses were for the entire territory of the USSR in its 1946 to 1991 borders, including territories occupied by the Red Army in 1939–1940.
The commission's figure of 2.4 million civilian losses in the annexed lands included citizens of prewar Poland along with inhabitants of other states occupied by the Soviet Union, and the overall statistics included Russian victims of Stalinist terror as well. The Russian Academy of Sciences in 1995 estimated that the World War II casualties of the Soviet Union included 13.7 million civilian dead, 20% of the 68 million persons in the occupied USSR. This included 7.4 million victims of Nazi policies and reprisals; 2.2 million deaths of persons deported to Germany for forced labor; and 4.1 million famine and disease deaths in occupied territory. To support these figures, they cited sources published in the Soviet era based on the work of the Extraordinary State Commission; there were an additional estimated 3 million famine deaths in areas of the USSR not under German occupation. These figures are cited in the official publications of the Russian government. This was disputed by the Russian historian Viktor Zemskov, who maintained that the government's estimate for the civilian war dead is overstated because it includes about 7 million deaths resulting from natural causes (based on the mortality rate that prevailed before the war), and that reported civilian deaths in the occupied regions included persons who were evacuated to the rear areas. He submitted an estimate of 4.5 million civilians who were Nazi victims or were killed in the occupied zone, and 4 million deaths due to the deterioration in living conditions. Timothy D. Snyder maintains that there were 4.2 million victims of the German Hunger Plan in the Soviet Union, "largely Russians, Belarusians and Ukrainians," including 3.1 million Soviet POWs and 1.0 million civilian deaths in the Siege of Leningrad. According to Snyder, Hitler intended eventually to exterminate up to 45 million Poles, Ukrainians, Belarusians and Czechs by planned famine as part of Generalplan Ost.
Swine influenza (also called swine flu, hog flu, and pig flu) refers to influenza caused by those strains of influenza virus, called swine influenza virus (SIV), that usually infect pigs. As of 2009, these strains are found in Influenza C virus and in subtypes of Influenza A virus. Swine influenza is common in pigs in the Midwestern United States (and occasionally in other states), Mexico, Canada, South America, Europe (including the United Kingdom, Sweden, and Italy), Kenya, Mainland China, Taiwan, Japan and other parts of eastern Asia. Transmission of swine influenza virus from pigs to humans is not common, and properly cooked pork poses no risk of infection. When transmitted, the virus does not always cause human influenza, and often the only sign of infection is the presence of antibodies in the blood, detectable only by laboratory tests. When transmission results in influenza in a human, it is called zoonotic swine flu. People who work with pigs, especially people with intense exposures, are at risk of catching it. However, only about fifty such transmissions have been recorded since the mid-20th century, when identification of influenza subtypes became possible. Rarely, these strains can pass from human to human. In humans, the symptoms are similar to those of influenza and of influenza-like illness in general, namely chills, fever, sore throat, muscle pains, severe headache, coughing, weakness and general discomfort. The 2009 flu outbreak in humans, known as "swine flu", is due to a new strain of influenza A virus subtype H1N1 that contains genes most closely related to swine influenza. The origin of this new strain is unknown; however, the World Organization for Animal Health (OIE) reports that it has not been isolated in pigs. The strain can be transmitted from human to human, and causes the normal symptoms of influenza.
Pigs can become infected with human influenza, and this appears to have happened during the 1918 flu pandemic and the 2009 flu outbreak. There is so far little data available on the risk of airborne transmission of this particular virus. Mexican authorities are distributing surgical masks to the general public. Although some pigs in Canada were recently found to be infected with the new strain of H1N1, the leading international health agencies have stressed that "influenza viruses are not known to be transmissible to people through eating processed pork or other food products derived from pigs." On April 27, the CDC recommended the use of Tamiflu and Relenza for both treatment and prevention of the new strain. Roche Applied Science and the U.S. government had already extended the shelf life of federally stockpiled Tamiflu from the original five years to seven years, because studies indicated that the medication continues to maintain its effectiveness. Medical experts are also concerned that people "racing to grab up antiviral drugs just to feel safe" may eventually lead to the virus developing drug resistance. Partly as a result, experts suggest the medications should be reserved for only the very ill or people with severe immune deficiencies. Signs of more serious infection might include pneumonia and respiratory failure. It is important to keep in mind that most children with a runny nose or cough will not have swine flu and will not have to see their pediatrician for swine flu testing. This flu likely spreads by direct contact with the respiratory secretions of someone who is sick with swine flu, for example if they are coughing and sneezing close to you. People with swine flu are likely contagious for one day before, and up to seven days after, they begin to show symptoms. Droplets from a cough or sneeze can also contaminate surfaces, such as a doorknob, drinking glass, or kitchen counter, although these germs likely don't survive for more than a few hours.
Anti-flu medications, including Tamiflu (oseltamivir) and Relenza (zanamivir), are available to prevent and treat swine flu. The latest swine flu news from the CDC includes advice that students should stay home if they have swine flu symptoms, but schools do not need to close unless they have large clusters of cases that are affecting school functioning. Schools that closed based on previous recommendations, such as if they had a single confirmed or probable case, can now likely reopen. Click on the link below and make sure you read "Does the Vaccine Matter?", all 3 pages. The pages are long and well worth reading. Your other choice: strengthen your immune system. Poor nutrition and stress are two primary factors that affect the immune performance of healthy people. Regardless of effort, healthy people make compromises in diet and experience stress in their lives, which reduce their immune function in a manner that produces less-than-optimum protection. Within our bodies exists an amazing mechanism called the immune system, developed to constantly defend us against the millions of bacteria, microbes, viruses, toxins and parasites that would overtake us without such protection. The immune system consists of a body-wide network of cells and organs that defends us against attacks from these foreign invaders. It provides an amazing constellation of responses that can adapt to optimize the response to unwanted intruders. While there is a high degree of interconnectivity between its components, the immune system can be loosely divided into two subsystems, the innate and the adaptive immune systems. Both systems work together to provide protection to keep us healthy. The innate immune system provides a non-specific response to pathogens with immediate but short-lived action. The adaptive immune system is much more specific, but takes longer to activate.
The adaptive immune system also features immunological memory, and can respond more quickly and with greater specificity to similar threats in the future. Prolonged stress and poor nutrition have been shown in countless studies to suppress immune system function. This suppressive effect is recognized by scientific consensus. Unfortunately, each of these factors is a part of our normal lives and for all practical purposes not avoidable. The goal of the immune system is to maintain stasis, neither overstimulated (as in allergies or autoimmune conditions) nor suppressed (as seen in infectious disease). It does so through a complex network of organs and cells, including the white blood cells (leukocytes), lymph nodes, spleen and thymus gland, via two types of immune response. In conclusion, effective immune system function is generally recognized as fundamental to the maintenance of good health. May 1, 2009: We understand consumer interest in protecting against the H1N1 flu virus ("swine flu"). Please be advised that there is no scientific data supporting the use of any food or dietary supplement that we recommend on this web site to treat or prevent any disease, including swine flu. Furthermore, the U.S. Food and Drug Administration and the Federal Trade Commission have issued advisories against taking products that are promoted for swine flu prevention and treatment. Consumers who believe they may have swine flu or have come in contact with the virus should contact a health care provider. For more information about swine flu, please visit the FDA and Centers for Disease Control and Prevention Web sites.
Write a review rather than a summary: evaluate a social media site which interests you. Was the experience worth it? My professor told me to include all the following in the essay: What changes make the most difference? Before the Emperor died due to his own son Commodus (Joaquin Phoenix) murdering him, the emperor chose to put Maximus in charge of bringing Rome back to its roots and making it a republic again. Your judgment can be mixed. Assess the soccer program for kids in your hometown. Is this movie a sequel? How good is the instruction? Evaluate which medium is more effective for telling that type of story. Evaluate a restaurant's atmosphere and how the restaurant makes you feel. Which features are the most helpful? Which restaurant offers the best deal for a poor college student? Evaluate the way your local school football team is run. Make sure that the summary of the subject is no more than a third of your paper. Evaluate the different ways to transfer data from a camera or phone to a computer. Is the instruction age-appropriate? Adam Jones was the chef of a restaurant in Paris, although he failed due to severe drug addiction and alcohol abuse. Does the second or third film just replay the first, or does it add something fresh and new? Evaluate the special effects in several recent movies. Back up your opinions with concrete examples and convincing evidence. Compare an animated version of a movie with a real-life version of the same story. Go to several restaurants that serve that item and see which is best. Do these rules keep all teams competitive? How did the team perform based upon expectations at the start of the season? Has social media made families stronger or not? Nevertheless, I recommend seeing this film to those who are seeking inspiration and enjoyment, who admire works of art and beautiful sceneries, or who just do not know what to see in the evening. Source: Evaluate a charter, military, boarding, private, Christian, or Classical school.
It is not based on a real story, though it looks real, since lots of people may recognize themselves being crushed on the way to success, struggling with obstacles similar to the main character's. Take turns in your group. The film 'Gladiator' was directed by the famous director Ridley Scott, who is also famous for movies such as 'Alien', 'Blade Runner', and 'Black Hawk Down'. Although this is an intense action film. A good evaluative essay helps a writer present an opinion using criteria and evidence. Learn all about the evaluative essay and its components in this lesson. Evaluation Essay Topic Ideas, updated on June 19 by Virginia Kearney. Virginia has been a university English instructor for over 20 years. She specializes in helping people write essays faster and easier. Ideas for Evaluation Essay Topics: What are Evaluation Essays? Evaluations of movies, T.V. shows, concerts, and theater. Evaluation Essay Samples: evaluating a person, place, or thing takes technical understanding. See our samples of evaluation essays to grasp how to evaluate properly within written form.
Please let me go back to my room. I don't want to be either. Was it okay for me to come back? Do you want to make up?
In the form of Sky, the British have put together a team of superlatives: the measure of all things in terms of planning for success. Manager Sir Dave Brailsford is the meticulous mastermind behind the successes of British Cycling on track and road at the London Olympics in 2012. Brailsford has been working full-time for Team Sky for two years. An additional sponsor is "21st Century Fox", which, just like British pay-TV broadcaster "BSkyB", is part of News Corp, a group of companies belonging to media mogul Rupert Murdoch. The black "knights" with the white Sky logo and blue stripes on the sleeve and back are always easily recognisable in the field of riders. Every rider is a champion in his field: Chris Froome, the impassive Tour de France winner, who often does not get out of the saddle even on the steep climbs. Richie Porte, the little guy from Tasmania, both loyal helper and winner in his own right. Geraint Thomas, the untiring Welshman, who also enjoys riding on rough surfaces. Bernhard Eisel, the bearded Austrian who can keep up the pace at the front of the field for hours on end. Sir Bradley Wiggins, who recently finished his career in 18th place in what he considers to be the finest race of all, the Paris-Roubaix, is now setting up a team of his own and set a new world hour record in London on 7 June 2015 (54.526 km). All that "Sky" has lacked so far is an outstanding sprinter, which is why Italian sprinter Elia Viviani was signed from Team Cannondale in 2015.
Famous for attracting writers and infamous for exorbitant rents, New York City has more writers' rooms than any other American city. The Writers Room forged the model when it opened in Manhattan in 1978, and similar spaces have sprung up in the city and around the country since then, offering space and community to the city's many writers. Located on the top floor of a renovated sweater factory on the border of Park Slope and Gowanus, Brooklyn Creative League (BCL) is a dream-come-true for area writers who've longed for their very own loft space. This Park Slope location offers a "professional, respectful, and warm environment"—and 2,000 square feet, consisting of shared writing space with partitioned desks, a lounge/kitchen area, and a private roof deck. Brooklyn Writers Space provides writers the option of enrolling as a full-time member or part-time member. Located in Midtown Manhattan, the writing studio is well lit and located on the top floor of the building. Each writer gets a desk, access to a reference library, lounge area, comfortable chairs, electrical outlets for portable and laptop computers, WiFi internet, wireless printer access, and a kitchenette/refreshment room stocked with coffee, water, and candy. "Exclusive to The Center for Fiction, we offer our Writers' Studio members full access to our circulating collection of 85,000 titles–perfect for inspiration and research in any genre. Membership also includes discounts on writing classes, reading groups, events at the Center, and in our bookstore. You also have full access to our entire building, including our second-floor Reading Room." Ditmas Workspace offers South Brooklyn writers and freelancers full-time and part-time access to a quiet place to work with all the amenities of an office -- desk, chair, wireless Internet, printer/scanner/copier/fax, as well as tea and coffee. 
Founded by graduates of New School University's creative writing program desperate for a place to write and find community, Paragraph is a 2,500-square-foot loft space with a writing room (with 38 partitioned desks), kitchen, and lounge area. High-speed wireless internet access is available throughout the space. Memberships are available on a part-time or full-time basis. Begun in 1978 to provide emerging and established writers with affordable workspace in New York City, the Writers Room includes a large loft with 39 partitioned desks, a separate typing room with four desks, and a library with approximately 1,000 reference books and internet access that includes LexisNexis.
Who owns wrecked airplanes in international waters? Is the same ownership law applicable to wrecked ships and wrecked planes? Is there any difference between floating debris and sunk debris? More specifically, regarding Malaysian Flight 370, who owns the debris discovered at La Réunion? If it is located by a private party then the law would not be quite so clear. It would fall to courts, and there would be legal claims falling under admiralty law and salvage law, and those might even be superseded by laws governing criminal investigation at sea. Generally, under ocean salvage law the original owner continues to own it, but the salvaging party is entitled to a substantial award or prize. To complicate matters, there are two international conventions on salvage law, one from 1910 and a second one from 1989. Some countries recognize one of them and object to the other. If Malaysian Airlines is no longer in business when it's found, that will make things even more unclear. Probably the best analog would be the Titanic case. The company that recovered most of the artifacts sued to be declared owner. A US court claimed jurisdiction and ruled that as compensation for the cost of recovery they had the right to display the items, but they do not own them. After a series of court battles, an American company, RMS Titanic Inc (RMST), emerged as the owner of the salvage rights, allowing it to keep possession of and put on touring display the 5,900 artefacts it has lifted from the ship during six dives. So far no one owns them, since the White Star Line is long gone. The US and UK quickly hashed together a treaty to declare the wreck a public memorial so the artifacts can't be sold off to private collectors. An international agreement signed by Britain and the US designates the Titanic as an international memorial and seeks to protect it from being plundered or damaged by unauthorised dives. With all these things at play there is simply no telling what would happen with it.
ICAO Annex 13, Aircraft Accident and Incident Investigation, says that the country where the aircraft is registered is responsible for investigation and that it (or the operator's country) makes the final decision on what to do with the wreckage. 5.3 When the location of the accident or the serious incident cannot definitely be established as being in the territory of any State, the State of Registry shall institute and conduct any necessary investigation of the accident or serious incident. However, it may delegate the whole or any part of the investigation to another State by mutual arrangement and consent. 5.3.1 States nearest the scene of an accident in international waters shall provide such assistance as they are able and shall, likewise, respond to requests by the State of Registry. 3.4 Subject to the provisions of 3.2 and 3.3, the State of Occurrence shall release custody of the aircraft, its contents or any parts thereof as soon as they are no longer required in the investigation, to any person or persons duly designated by the State of Registry or the State of the Operator, as applicable. Practically speaking, I assume that once an investigation is complete the wreckage is either scrapped or returned to the operator (airline) if it still has any value. In the case of MH370, that means that Malaysia, as the State of Registry (and the State of the Operator), is responsible for the investigation and for ownership of the wreckage. But I'm not a lawyer, so if you want a definitive answer then I suggest asking elsewhere.
The Costa de la Luz is the coastal region of the Gulf of Cadiz located in the southwest of Andalusia, between the provinces of Cadiz and Huelva. Rent your car in Cadiz and spend the last days of summer in this great setting. With over 200 km of coastline, it has extensive beaches of fine golden sand surrounded by pine forests. Some of the most interesting natural sites to visit are the Doñana reserve, the Bay of Cadiz and the Marshes of Isla Cristina. In summer the Costa de la Luz is a fantastic place to visit, with plenty of sunshine, multiple possibilities for water sports and beach activities, and much entertainment. For your holidays you will find a wide variety of hotels on the Costa de la Luz, from the simplest to the most luxurious, many located right on the sea front. Tarifa is one of the cities to visit; you can enjoy many water sports like windsurfing because the city is known for its strong winds and waves. In the city of Cadiz you will find some excellent beaches, such as La Caleta beach located in the historic city center. The Strait of Gibraltar is another great place to visit, where Europe and Africa almost meet. Other areas to visit are Sanlucar de Barrameda, located on the Guadalquivir, and the towns of Chipiona, Rota, El Puerto de Santa María, Conil and Barbate, with large pristine beaches. In short, the Costa de la Luz is a paradise worth going to.
What is the best VPN? The answer to that question depends strongly on what kind of user you are. What security level are you looking for, and at what price? And what do you want to use a VPN for? One person simply wants to be able to use the American Netflix by circumventing geo-blocking. Another person is after absolute anonymity online, for example because he or she has contact with people in countries with a repressive regime. There are also people who simply choose on budget, on download speed, or on user convenience: not everyone has the skills to set up a complex VPN app properly. Before you choose a VPN service, it is therefore useful to first ask yourself: which VPN suits me? A very high degree of security through modern encryption. Many servers to choose from, which are of good quality. High reliability of the network. High upload and download speeds. Easy to install and a user-friendly app. The best price for a premium provider. No data limits or restrictions on the number of devices you can use. So how do you choose a VPN? As mentioned, it is useful to first look at how you like to use the internet. All VPN services increase your privacy online by encrypting your data; they just do it differently. Technically speaking, so-called 'protocols' are used that encrypt your online signal so that someone else cannot read it. There are big differences in these protocols: there are good and less good protocols, and a protocol that provided excellent protection for years can, due to new technical developments, suddenly become worthless. 5 Euro VPN uses only the latest protocols, which ensures a high level of security. There are more things that you have to look at critically. The range of servers, for example. Many providers boast large numbers of servers worldwide, so that the suggestion may arise that the company has its own servers everywhere.
What is not mentioned is that these are leased servers from third parties, with all the questions that entails. What does it mean for your privacy, for example, if you can select a server in North Korea, a country that is not really known for its internet freedom? Want to know more about cheap VPN? Then click on the article Cheap VPN: What to look for?
Given 5 stars from The Daily Mail, called a 'Gut-busting hit' by the New York Times, and with celebrity endorsements from the likes of Ant and Dec as the 'funniest show we've seen! If you can get a ticket go', what are you waiting for? WHATSONSTAGE - "IT'S AS THOUGH THE MOUSETRAP HAS BEEN TAKEN OVER BY MONTY PYTHON" THE TIMES - "A MASTERPIECE OF MALFUNCTION" THE DAILY MAIL - "I FEARED I WAS GOING TO HYPERVENTILATE" Choose from these top hotels near the Duchess Theatre, London as part of your The Play That Goes Wrong tickets and hotel package. Underground: The theatre is a 5-minute walk from Covent Garden.
Buy and Hedge - How important is mobile to MSFT? Today's earnings might tell us. Microsoft announces earnings after market today. Should be an interesting read. The firm continues to be a juggernaut in technology and still has several cash cow businesses that are driving the bus. But an article in the Wall Street Journal asks the question today: if Microsoft can't get its mobile strategy worked out, will it become a value trap stock, a la Hewlett Packard (HPQ) and Dell (DELL)? Given the long-term importance of mobile to almost all technology companies, the long-term answer to this question is obvious. MSFT must show traction in mobile over the long run to stay relevant and grow its business. The article argues that they must start to show that traction sooner or the stock will begin to act like a value trap, instead of the value play that many investors think it might be. The article makes a great point also: MSFT trades at 10x its 2013 earnings, which means the market must not think too highly of its ability to meet those numbers. In fact, analyst estimates for Q1 2013 have been revised downward a few times now. The article pins MSFT's future success on Windows 8, which is supposed to begin the march for MSFT into more mobile integration between the desktop and mobile devices. Sounds like Apple, right? The article also points out that the Microsoft Surface tablet must be a success, not necessarily an iPad-level success. MSFT needs to show that it can produce a great tablet PC that uses its new OS. It is a fair point to make. All in all, I think MSFT's future growth is very much pinned to this new OS, and whether Surface is an acceptable 'pilot' tablet PC with a Windows OS. Just look at the rest of their businesses: gaming, MS Office, search, and server software. None of these businesses is going to drive more than low single digit growth at best.
This is the product that future business growth is pinned on. MSFT needs to get better at mobile and it needs to happen fast. I am definitely interested in testing their new tablet. It won’t replace my iPad any time soon. But whether it is an acceptable replacement will help to answer the question of whether MSFT can be relevant in mobile. One bad sign for me on MSFT has been their failure to launch the MS Office platform for the iPad. MS Office continues to be the dominant workplace productivity suite. It is even the dominant office suite in the Mac environment. However, the iPad has been out for years now and they still have not launched that software for the iOS platform. It is just a missed opportunity. It really makes me wonder if MSFT actually understands the mobile user. Tonight’s earnings might provide some insight to MSFT’s traction in the mobile space. The next 2 quarters are probably much more important to that assessment. But tonight’s earnings will begin to paint the picture. I wouldn’t start a position on MSFT yet – but if it shows traction in the mobile arena, I think it would begin to look attractive at this $29 level. Tonight's metrics around the Windows 8 platform and pre-sales on the OS and tablet will help us understand that traction.
Given the nine starting players, in what order should they bat? Traditional guidelines such as "the leadoff man should be a good base stealer", "number two should be a contact hitter who can hit behind the runner", and "bat your best hitter third" abound. Due to computational complexities, there have been few studies that analyze the batting order question from a quantitative viewpoint. This article discusses what I believe is the most comprehensive mathematical and statistical approach to lineup determination. The models and the methods used to develop them are described, and some resulting principles of batting order construction are presented. Finally, the models are applied to the 1991 AL division winners and compared to the batting orders employed by the teams' managers. The material presented here is an expanded version of the talk I gave at SABR XXI in New York during July, 1991. I have written several pieces on using Markov models applied to baseball; readers wanting more information may write to me [1018 N. Cleveland St., Arlington, VA 22201]. The study utilizes two mathematical/statistical models: 1) a Markov process model that calculates the long-term average (often called expected) runs per game that a given lineup will score, and 2) a statistically derived model that quantitatively evaluates the suitability of each of the nine players in each of the nine batting order positions. Data for the second model were generated by numerous runs of the Markov model. Hence, we see that the Markov model underlies the entire analysis. The Markov process model is based on the probabilities of moving from one runners-and-outs situation to another, possibly the same, situation. These probabilities, which depend on who is batting, are called transition probabilities.
For example, one such transition is from no one on and no outs to a runner on first and no outs; and the transition probability is that of a single, walk, hit batsman, safe at first on an error, catcher interference, or striking out and reaching first on a wild pitch or passed ball. The Markov model employs matrix algebra to perform the complex calculations. However, once all the requisite probabilities have been determined, the matrix formulation enables the remaining calculations to be carried out without much difficulty. The model incorporates the following assumptions: 1) Players bat the same in all situations. For this study, each player's 1990 full season data was used to determine how he would bat. 2) All base advancement, outs on the bases (including double plays), wild pitches, passed balls, balks, etc. occur according to major league average probabilities. 3) Stolen base attempts are permitted with a runner on first only. 4) Only pitchers attempt sacrifice bunts. 5) Overall 1990 pitcher batting is used for all pitchers. 6) Small adjustments to hit and walk frequencies are made in certain situations. In particular, there are more walks and fewer hits when there are runners on base and first base is not occupied. Data for 2) and 6) are derived from combined AL and NL data for the 1986 season. I used this season because I had extracted the needed data from the Project Scoresheet database for a prior study. Since this is a time consuming operation, I decided not to repeat it using 1990 data. Comparable data for several seasons would be better, and I may do the computer work on the entire Project Scoresheet database covering 1984-91. However, I doubt that the essential results and lineup optimization models derived would be affected very much. The first assumption is the most critical and most controversial. One of its consequences is that the differences in expected runs between batting orders tend to be relatively small.
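To make the mechanics concrete, here is a minimal sketch of such a Markov run-expectancy calculation. It is not the author's code: it assumes a single league-average batter for every plate appearance, invented event probabilities, and a crude one-base advancement rule, so the numbers are purely illustrative. The real study uses per-player 1990 data and empirically estimated transitions.

```python
# A minimal Markov run-expectancy sketch (NOT the author's model).
# Event probabilities and the advancement rules are illustrative
# assumptions; states are (outs, base occupancy), 3 outs absorbing.
import itertools
import numpy as np

EVENTS = {"out": 0.68, "walk": 0.09, "single": 0.15,
          "double": 0.05, "triple": 0.005, "hr": 0.025}

STATES = [(o, b) for o in range(3)
          for b in itertools.product((0, 1), repeat=3)]  # 24 transient states
INDEX = {s: i for i, s in enumerate(STATES)}

def step(bases, event):
    """Return (new_bases, runs_scored) under simplified advancement."""
    b1, b2, b3 = bases
    if event == "walk":                      # force advances only
        if b1 and b2 and b3:
            return (1, 1, 1), 1
        if b1 and b2:
            return (1, 1, 1), 0
        if b1:
            return (1, 1, b3), 0
        return (1, b2, b3), 0
    if event == "single":                    # every runner moves up one base
        return (1, b1, b2), b3
    if event == "double":
        return (0, 1, b1), b2 + b3
    if event == "triple":
        return (0, 0, 1), b1 + b2 + b3
    return (0, 0, 0), b1 + b2 + b3 + 1      # home run

Q = np.zeros((24, 24))    # transient-to-transient transition probabilities
r = np.zeros(24)          # expected runs scored on a single transition
for (outs, bases), i in INDEX.items():
    for event, p in EVENTS.items():
        if event == "out":
            if outs < 2:                     # third out absorbs, scoring nothing
                Q[i, INDEX[(outs + 1, bases)]] += p
        else:
            nb, runs = step(bases, event)
            Q[i, INDEX[(outs, nb)]] += p
            r[i] += p * runs

# Standard absorbing-chain result: expected runs to end of inning
# from each state is E = (I - Q)^-1 r.
E = np.linalg.solve(np.eye(24) - Q, r)
start = INDEX[(0, (0, 0, 0))]
print(f"expected runs per inning: {E[start]:.3f}")
```

Extending this toward the article's model means one transition matrix per batter, cycling through the nine lineup slots across nine innings, and empirical probabilities, but the matrix algebra is the same.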
A previous, less extensive, study that incorporated situational performance assumptions (e.g. certain players hit better with runners on) showed much larger differences in expected scoring. I plan to explore various alternative assumptions about performance levels in future batting order studies. Base advancement on hits certainly is not uniform since it depends on runner speed and where the particular batter tends to get his hits (e.g. the percentage of singles to left, center, or right). However, I did not have the data needed to incorporate such effects. Data availability also prevented batter-specific double play modeling. The stolen base try restriction does not have a large effect because over 80% of steal attempts occur with a runner on first only. The restriction to this case greatly simplifies the computations and is not likely to affect comparisons between batting orders. Sacrifice bunt tries are not included for non-pitchers because they are game situation specific and reduce overall scoring, contrary to the study objective of finding the highest scoring lineups. The Markov model was used for two primary purposes. One purpose is to evaluate a specific batting order by calculating its expected runs per game. In this way, alternative lineups can be compared. The second purpose is the generation of data for use in the statistical models. For each of the 26 major league teams in 1990, 200 "batting rotations" were chosen at random. A batting rotation consists in specifying the order in which the players will bat by establishing who follows whom, but a rotation does not become a lineup or batting order until the leadoff hitter in the first inning is specified. Each batting rotation corresponds to nine lineups, one for each possible leadoff batter. The Markov calculations have the property that the computations needed for one lineup are also sufficient for the other eight lineups corresponding to the same batting rotation.
There is nothing special about the choice of 200; it was a function of the computing power available to me and the amount of time I could spend on this phase of the study. More, as usually is the case for statistical analyses, would have been better. Thus, the Markov model computed the expected runs per game for 1800 "semi-randomly" (a made-up concept, since only the batting rotations are chosen at random) generated batting orders incorporating the nine most frequent players, one for each position. One property of the 1800 lineups is that each of the nine players hits in each batting position exactly 200 times. The next step was to select the best lineups for each team from the 1800 tested. I used two definitions of best. The first is obvious: select the ones with the highest expected runs per game. The second definition is more subtle. Each batting rotation will have one lineup that scores the best, and this lineup may or may not be one of the highest scoring lineups out of the 1800. Call the highest scoring lineup for each rotation a maximal lineup. The reason a maximal lineup, which may not be a particularly high scoring lineup overall, is of interest is that it can reveal advantages to batting certain players in certain positions although the overall scoring is held down by the batting positions of other players. Since there were 200 maximal lineups, one for each rotation, I decided to use them and the 200 highest scoring lineups as the basis for the statistical analysis. I did not determine how many of the maximal lineups were also in the 200 highest scoring. Within each set of 200 best lineups, I computed how often each player hit in each batting position. For example, Wade Boggs leads off in 21% of Boston's highest scoring lineups. (This value, the highest on the team, means that Boggs is a good first hitter, since the average is 100%/9 = 11.1%.) In this way, each player has a rating for his suitability for each batting order position.
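The position-rating tabulation described above is straightforward to sketch. The three toy lineups over players "A".."I" below are invented stand-ins; the study used each team's 200 best of the 1800 evaluated lineups.

```python
# Sketch of the batting-position rating tabulation: given a set of
# high-scoring lineups, count how often each player bats in each slot.
# The toy lineups are invented, not 1990 data.
from collections import defaultdict

def position_ratings(lineups):
    """Return {player: [fraction batting 1st, 2nd, ..., 9th]}."""
    counts = defaultdict(lambda: [0] * 9)
    for lineup in lineups:
        for slot, player in enumerate(lineup):
            counts[player][slot] += 1
    n = len(lineups)
    return {p: [c / n for c in slots] for p, slots in counts.items()}

lineups = [list("ABCDEFGHI"), list("ACBDEFGHI"), list("BACDEFGHI")]
ratings = position_ratings(lineups)
print(ratings["A"])   # fraction of the best lineups with A in each slot
```

A rating well above the uniform 11.1% for a slot (like the 21% for Boggs leading off) marks the player as well suited to that slot.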
); percentage of plate appearances that are not walks or strikeouts (which measures putting the ball in play); secondary average [= (TB-H+BB+SB-CS)/AB, a Bill James idea]; run element ratio [= (BB+SB)/(TB-H), another Bill James idea]; steal attempt frequency [= (SB+CS)/(1B+BB)]; and stolen base success percentage [= SB/(SB+CS)]. No claim is made that the set of measures chosen is complete or perfect, just that it covers all the significant aspects of offensive performance. I used two measures of player performance relative to the team: 1) percentage above or below the team mean in the category, and 2) the z-score, which is the number of standard deviations above or below the mean. By using z-scores, I am not claiming any of these distributions is normal (given that there are only nine values for a team in each offensive category, the distributions are almost certainly not even approximately normal); I am just using z-scores as a measure of relative performance. In the next phase, I applied regression analysis using the players' batting position ratings (e.g. Wade Boggs 21% batting first) as the dependent variable and their relative scores for the various offensive measures as the candidate independent variables. For each batting position there are 234 data points (one for each of the nine players on the 26 teams) used in the regression estimates. Because there were two measures for batting position ratings (one based on the highest scoring lineups and one based on the maximal lineups) and two measures of relative offensive performance (percentages above or below the team mean and z-scores), there are four possible categories of models that can be derived. I tested all four, as described below, decided on the one that seemed to yield the models with the best statistical properties, and focused on that one.
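As a quick illustration of the two relative-performance measures, here is a sketch computing both for one offensive category over a team's nine starters. The OBA figures are invented, not 1990 data.

```python
# Percent above/below the team mean, and the z-score, for one offensive
# category across a team's nine starters.  OBA values are made up.
import statistics

oba = [0.387, 0.341, 0.302, 0.365, 0.298, 0.310, 0.330, 0.285, 0.275]

mean = statistics.mean(oba)
sd = statistics.pstdev(oba)     # population SD: the nine starters are the team

pct_vs_mean = [(x - mean) / mean * 100 for x in oba]   # measure 1
z_scores = [(x - mean) / sd for x in oba]              # measure 2

print(f"team mean OBA {mean:.3f}, best z-score {max(z_scores):+.2f}")
```

Both lists sum to zero by construction; the z-score additionally puts every category on a common scale, which is why it paired better with the regressions described next.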
The best combination from the first round of testing was highest scoring rather than maximal lineups as the basis of the dependent variable, and z-scores for the independent variables. To do the regressions, I used the stepwise regression procedure in the SHAZAM statistical package with a 10% significance level required for variables to enter or leave the equations. One equation is estimated for each batting order position, and the estimates are done independently. Since the nine batting position values for a given player must add to 100%, I experimented with some joint estimation techniques. However, they did not yield significantly different models from the independent estimates, so I used the independent estimates throughout this study. After performing stepwise regressions for each of the four categories of models described in the previous paragraph, I restricted further investigation to the highest scoring/z-scores category. For this first set of regressions for the highest scoring/z-scores models, the r² values range from a high of 0.914 (#9 position) to a low of 0.580 (#6). It is no surprise that the best fit is obtained for the #9 position because of the inclusion of NL teams with pitchers that bat. The number of independent variables in these equations ranges from a low of 4 (#2, #4) to a high of 12 (#9). Overall, I judged this to be a good and workable set of models. Three candidate variables (home runs per plate appearance, run element ratio, and stolen base success percentage, which is highly correlated with steal attempt frequency) did not enter any of the nine model equations. The variables most frequently in the equations were runs created per game (in 7 equations, all but #4 and #5) and modified slugging average including walks (in 6, all but #2, #5, #7). The offensive performance measures that are the basis of the independent variables are not truly independent, and several measure similar player performance characteristics.
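A minimal sketch of the forward half of such a stepwise procedure, with a 10% entry level, is shown below. This is in the spirit of, not a reimplementation of, the SHAZAM routine: the removal step is omitted, and the data and variable names are fabricated.

```python
import numpy as np
from scipy import stats

def forward_stepwise(X, y, names, alpha=0.10):
    """Forward stepwise OLS: repeatedly add the candidate column of X
    whose partial-F p-value is smallest, stopping when no remaining
    candidate enters at the `alpha` significance level."""
    n = len(y)
    chosen = []

    def rss(cols):
        # Residual sum of squares for an OLS fit with intercept + cols.
        A = np.column_stack([np.ones(n)] + [X[:, c] for c in cols])
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ beta
        return float(resid @ resid)

    current = rss([])
    while True:
        best = None
        for j in range(X.shape[1]):
            if j in chosen:
                continue
            new = rss(chosen + [j])
            df2 = n - len(chosen) - 2          # residual df with j added
            f_stat = (current - new) / (new / df2)
            p = float(stats.f.sf(f_stat, 1, df2))
            if best is None or p < best[0]:
                best = (p, j, new)
        if best is None or best[0] > alpha:
            break
        _, j, new = best
        chosen.append(j)
        current = new
    return [names[j] for j in chosen]

# Fabricated data: y depends strongly on x0, while x1 is unrelated noise.
x0 = np.arange(20.0)
x1 = np.cos(x0)
X = np.column_stack([x0, x1])
y = 2.0 * x0 + 0.05 * np.sin(1.7 * x0)
selected = forward_stepwise(X, y, ["x0", "x1"])
```

With this data, `x0` is the first variable selected, since its partial F-test p-value is effectively zero at the first step.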
Since the models usually included several such variables, often with opposite signs, I decided to see if a smaller set of independent variables could yield models with r² values almost as high, but which lend themselves to more sensible interpretations. After examining the equations and the correlation matrix of the candidate independent variables, I restricted the candidates to the following nine: on base average (OBA), slugging average (SA), extra base average (EBA), BB/PA, K/PA, 1B/H, HR/H, ball in play percentage (INPLAY), and steal attempt frequency (SBTRY). The resulting set of models had r² values from 0.885 (#9) down to 0.607 (#5) and 0.434 (#6). With the exception of #6, the decline in r² is not a major concern. In order to improve the model for the sixth position, I added RC/G to the set of candidate independent variables for that equation only, which improved its r² to 0.557. The number of independent variables ranges from 3 (#3, #4, #7) to 7 (#9). Each candidate variable appeared in at least one of the model equations. The table that follows summarizes the models; a plus sign before a variable means high scores are best for the particular batting order position, and a minus sign indicates the opposite. There are numerical values, the model equation parameters (not shown), associated with each variable in the table. These values determine the relative importance of the variables. I also did some regression analyses using each of the leagues separately because I wanted to see if the DH rule affected the models. In general, the statistical properties (goodness of fit and significance levels of the parameters) were poorer for the models based on the separate leagues. Also, I was not able to interpret the models in a way that could answer the DH question. I suspect that I need more and better data to do this analysis.
More in that teams from seasons other than 1990 should be included, and better in that more than 200 batting rotations should be calculated to determine the player/batting position scores. Additional candidate independent variables should also be considered. Due to time constraints, I did not pursue these models further, but this is a topic worth further investigation, if for no other reason than the feeling of some AL managers that the number nine hitter should be considered a second leadoff hitter. Once the batting position model equations are in hand, for a given team we can compute a value in each of the nine batting order positions for each player. These values can be positive, meaning the player is better than average for the particular lineup position, or negative, which has the opposite meaning. These scores serve to rank the nine players for each lineup position and also to identify the best position for each player. The next step is using those values to find one or more high scoring lineups. Things would be easy if the best position for each player were also the highest rating for that position on the entire team. This occurs, for example, if Wade Boggs' best spot is leadoff and the highest scoring leadoff man on the Red Sox is Boggs; Jody Reed's best spot is #2 and the Sox' best #2 is Reed; etc. However, such is rarely the case. Due to the nature of the models, it is common for the player with the best leadoff score to also have the best #2 score and a high #3 score. Also, the scores on the ends of the lineup (#1, #2, #8, #9) tend to be more extreme, both on the high and low sides, than the scores in the middle. This reflects the models' emphasis on the importance of having high on base average hitters at the top of the order, which is discussed later. What we need is a method of assigning players to lineup positions so that the total model score from the assignments is high. This is a well-known Operations Research topic known as an assignment problem.
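The assignment step can be illustrated with the Hungarian-method solver in SciPy. The 3x3 example scores and names are made up (the real problem is 9x9), and unlike the algorithm used in the study, this sketch finds only the single best assignment, not the top n.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def best_assignment(scores, players):
    """scores[i][j] = model value of player i batting in slot j+1.
    linear_sum_assignment minimizes total cost, so the scores are
    negated to maximize the total batting-position value."""
    rows, cols = linear_sum_assignment(-np.asarray(scores, dtype=float))
    order = [None] * len(players)
    for i, j in zip(rows, cols):
        order[j] = players[i]   # player i bats in slot j+1
    return order

# Hypothetical 3-player, 3-slot example
players = ["Boggs", "Reed", "Burks"]
scores = [[5.0, 1.0, 0.0],   # Boggs: strong in slot 1
          [2.0, 4.0, 0.0],   # Reed: strong in slot 2
          [0.0, 0.0, 3.0]]   # Burks: strong in slot 3
order = best_assignment(scores, players)
```

Note how the solver handles exactly the conflict the article describes: if two players both score highest in the same slot, the assignment maximizes the *total*, so one of them is pushed to his second-best slot rather than both being given the same position.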
Fortunately, this type of problem can be solved using several methods, some of which are easy to implement on computers and run quickly. I chose an algorithm that not only finds the best possible assignment, but also finds the top n assignments, where n can be specified. For the purposes of this study, I set n equal to five. For each set of batting position models (one based on the full set of independent variables and one based on the reduced set), I found the five highest assignments for a team, which were always quite close in total batting position values. These lineups were fed into the Markov model to find the expected runs per game. The lineup with the highest expected scoring was usually one of the top three solutions to the assignment problem, but the best solution did not seem to have an advantage over the next two. In some cases, a comparison of the expected scoring and the batting order differences among lineups led me to formulate a lineup with even better expected runs per game that was not among the five solutions to the assignment problem. For each of the 1990 major league teams, I compared the expected runs of the best lineups found using the models described in the table with the best found using the models based on the full set of candidate independent variables. For 3 AL and 6 NL teams, the full variable models had a slight advantage (about 1-2 runs a season), and for 4 AL and 2 NL teams, the reduced variable set models had a similar advantage. For the rest of the teams, the two sets of models were virtually the same. Because the smaller variable set models are easier to comprehend, the discussion in the next section is based on those models. Due to the nature of the regression process, it can be misleading to draw conclusions about individual variables without considering the context provided by the entire set of variables. One example is the -SBTRY for the leadoff position.
This is the fifth most important variable (its weight is about 10% that of OBA, which is far and away the most important characteristic of a good leadoff hitter). Even so, does it mean that, other things being equal (which they never are), it is better to have a leadoff hitter who doesn't try to steal? It might, but it may also just be the regression distinguishing certain slow but effective leadoff hitters based on the Markov model, Wade Boggs for example. Additional statistical analysis, which I have not yet gotten to, could determine if one or two specific players are the cause of the -SBTRY. The less important explanatory variables often play a role of emphasizing or modifying the more important ones. For example, the -INPLAY in the #1 and #2 positions serves to emphasize BB/PA. If I wanted to try to find the best set of variables for each position, I would try to build these two models without INPLAY. To illustrate the idea of modification, the -EBA in #2 balances the +SLUG and +OBA. Often players with high OBA have a high batting average and an above average slugging average, since slugging average incorporates batting average. The negative EBA in effect puts more weight on the OBA and less weight on power. A more interesting instance is the -HR/H in the model for #4. Does this mean that the cleanup hitter shouldn't hit homers? No; what it means is that among players with high slugging averages, it is better to have one who does not get his slugging average mainly from home runs (a Dave Kingman) but instead has a good batting average and hits a fair number of doubles (an Eddie Murray).
To summarize, the models characterize the batting order positions as follows:
1) Getting on base is everything. To a much lesser extent, home run hitters should not lead off. Stolen base ability is irrelevant.
2) Similar to the leadoff hitter, but not quite as crucial to get on base; some power is also desirable.
3) Should have fair power, be able to draw walks, and not strike out much.
4) Highest slugging average; also has a good on base percentage and is not necessarily the best home run hitter.
5) Good power; secondarily, puts the ball in play (i.e., does not walk or strike out a lot).
6) Hardest spot to characterize and probably the least critical. Probably want to use the player who doesn't fit well in other positions. Base stealing ability is a small plus.
7-9) Decreasing overall abilities as hitters, as characterized by on base percentage and measures of power hitting.
One clear result from this and prior studies is the importance of having the right batters at the top of the order. This follows from the finding that most of the difference in expected runs between high and low scoring lineups using the same players occurs in the first inning. In particular, the leadoff batter must have a high on base percentage. Also, the second hitter must be good. The practice of leading off a fast runner who can steal bases but doesn't get on base much, and putting a weak hitter "with good bat control who can bunt or hit behind the runner" second, is a perfect prescription for a lower scoring batting order.
For the 1991 Toronto Blue Jays, consider this lineup: 1) D. White, 2) R. Alomar, 3) J. Carter, 4) J. Olerud, 5) K. Gruber, 6) C. Maldonado, 7) L. Mulliniks, 8) P. Borders, 9) M. Lee. The Markov model expected runs per game for this lineup is 4.739. This value is about 0.5 higher than Toronto's 1991 actual of 4.222 runs per game. That the Markov values are higher than the actuals is to be expected for several reasons. The most important are: 1) the players listed are generally better than the substitutes who play for various reasons; 2) sacrifice bunt attempts, which decrease overall scoring, are not included in the Markov model; 3) relief pitchers brought in with men on base or to face particular hitters can reduce late inning scoring; and 4) a good team usually loses more innings in games won at home than it gains in extra inning games, but the Markov value is based on nine complete innings per game.
Mulliniks should lead off because he has an on-base average (OBA) of .364, the highest in this group, and little power. White, in contrast, has an OBA of .342 and the second best slugging average (.455; Carter's is .503), so he should not lead off despite his stolen base ability. The major surprise is that Carter bats sixth. The batting position equations score him as best on the team in the third, fourth, and fifth spots, but Maldonado, White, and Alomar rate so low at sixth that Carter is put there instead. Tests using the Markov model showed it makes virtually no difference if Carter bats fourth and White and Alomar fill the five and six slots in either order.
For the 1991 Minnesota Twins, consider this lineup: 1) D. Gladden, 2) C. Knoblauch, 3) K. Puckett, 4) K. Hrbek, 5) C. Davis, 6) B. Harper, 7) S. Mack, 8) M. Pagliarulo, 9) G. Gagne. The Markov process expected runs per game is 5.383 for this lineup, which is higher than the Twins' 1991 average of 4.790 for the reasons given previously.
The model lineup is: 1) Hrbek, 2) Davis, 3) Mack, 4) Puckett, 5) Harper, 6) Gagne, 7) Gladden, 8) Pagliarulo, 9) Knoblauch. The Markov value of the model lineup is 5.431, about 8 runs a season higher than Kelly's, which might yield one more victory. Clearly, the model result flies in the face of "conventional wisdom," but one reason for building models is to gain new knowledge. Perhaps the best thing is getting Gladden out of the leadoff spot, because his 1991 OBA of .306 is by far the worst among the nine players. I never cease to be amazed by managers who are so fascinated by speed that they forget players can't steal first base! Davis and Hrbek have the two highest OBAs, and the model takes advantage of this by loading the top part of the order. One reason Davis, with a slugging average of .507, can bat second is that Mack's, at .529, is even better. Knoblauch is an interesting case because the model values him highest at either the top or bottom of the order. However, on this team, he is best suited to the bottom because his OBA is far from the best.
One important factor not considered is what assumptions, if any, the managers make about batting performance by their players. If I knew these, those levels could be put into the models, and then we could judge better how well the managers constructed their batting orders. Those with computer baseball games that will automatically play hundreds or thousands of games may find it interesting to enter the 1991 data for these two teams and then compare the scoring of the lineups shown above over a large number of games. I would be interested in seeing how the results of the simulations compare with the Markov calculations. A general rule of thumb is that an additional 10 runs a season leads to one more win. We see that the model lineups were better than the managers' in 23 of 26 cases, with the other three being virtually equal. These comparisons are far from definitive because the models are based on the assumptions listed previously. Also, managers consider many factors when deciding on batting orders, some of which can't be modeled. For example, although Barry Bonds would be an outstanding leadoff hitter because he gets on base so much, according to an article in the August 12, 1991 Sporting News, he prefers to bat fifth, where he can get more RBIs and hence more attention and presumably a higher salary. Even if he had faith in my models, Jim Leyland might figure that a happy Bonds hitting fifth can help his team more than an unhappy Bonds leading off. Moreover, Bonds might not draw so many walks if he were batting first. Although I believe this study is a major advance in our knowledge about batting orders, the models discussed are not intended to be the final word on this subject. In particular, incorporation of some situational batting effects should be considered. One of particular interest is how the strength or weakness of the next hitter(s) affects a player's batting performance.
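The kind of simulation comparison suggested above can be sketched in miniature. This is a toy Monte Carlo counterpart to the Markov calculation, not the study's model: event probabilities are hypothetical, and base running is simplified (every runner advances exactly the number of bases of the hit; walks advance only forced runners; no steals, bunts, errors, or extra innings).

```python
import random

EVENTS = ["BB", "1B", "2B", "3B", "HR", "OUT"]
BASES_FOR = {"1B": 1, "2B": 2, "3B": 3, "HR": 4}

def sim_runs_per_game(lineup, games=1000, seed=1):
    """Average runs over `games` nine-inning games.  `lineup` is a list
    of 9 dicts mapping each event to its probability (summing to 1)."""
    rng = random.Random(seed)
    total = 0
    for _ in range(games):
        batter = 0                      # lineup position carries across innings
        for _inning in range(9):
            outs = 0
            bases = [False, False, False]   # first, second, third
            while outs < 3:
                probs = lineup[batter % 9]
                batter += 1
                r, cum = rng.random(), 0.0
                for ev in EVENTS:
                    cum += probs[ev]
                    if r < cum:
                        break
                if ev == "OUT":
                    outs += 1
                elif ev == "BB":        # walk: advance forced runners only
                    if bases[0]:
                        if bases[1]:
                            if bases[2]:
                                total += 1
                            bases[2] = True
                        bases[1] = True
                    bases[0] = True
                else:                   # hit: shift runners one base per base
                    n = BASES_FOR[ev]
                    for _ in range(n):
                        if bases[2]:
                            total += 1
                        bases = [False] + bases[:2]
                    if ev == "HR":
                        total += 1      # the batter scores too
                    else:
                        bases[n - 1] = True
    return total / games

# Hypothetical lineups: nine identical high-OBA hitters vs. nine low-OBA hitters
good = [{"BB": .12, "1B": .18, "2B": .05, "3B": .01, "HR": .04, "OUT": .60}] * 9
bad = [{"BB": .05, "1B": .14, "2B": .04, "3B": .005, "HR": .02, "OUT": .745}] * 9
```

Even this crude simulator reproduces the article's central point: the lineup that gets on base more scores substantially more runs per game, and with real per-player probabilities one could compare the manager's and model's batting orders the same way.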
For example, is there really a tendency to "pitch around" a strong hitter if he is followed by a weak one? The primary problem is obtaining relevant data. Also, there is room for improvement in the statistical (regression) modeling process; additional candidate independent variables should be studied. I hope that this article has convinced readers that mathematical and statistical techniques can be useful tools for designing higher scoring batting orders. For those who are interested in actually using the models described, if all goes according to plan, they should be part of the 1992 edition of the APBA computer baseball game (contact the publisher, Miller Associates, 11 Burtis Ave., New Canaan, CT 06840 for details).
0.995965
Rules of the Jungle: When do eagles build their nest? Eagles need a lot of time to build their nest, as it is one of the most important actions of their lives. An eagle uses the same nest for ages, adding material every year. This is why an eagle nest can weight one ton, and it can reach three meters in diameter. Only 40-50 square centimeters are used for laying eggs, while the rest of it represents the sleeping place for the pair of eagles.
0.999873
La p225;gina web LG. com, utiliza un dise241;o que se ajusta al tama241;o de la pantalla de los diferentes dispositivos, para proporcionar una mejor experiencia de usuario. Sandbox, palabra que del ingl233;s significa caja de arena (sand: arena, y box: caja), puede referirse a:. Inform225;tica. un entorno de pruebas separado del entorno de … bueno pues quer237;a saber si el alcohol de limpieza(el bosque verde) es lo mismo que el isoprop237;lico. y el alcohol de quemar que es. simplemente de quemar. un saludo Debemos tener en cuenta que este dispositivo es el que contiene toda la informaci243;n necesaria para hacer funcionar nuestro sistema operativo, as237; como todos nuestros preciados archivos. drop - Translation to Spanish, pronunciation, and forum discussions Env237;os a la UE Todos los pedidos de la UE se env237;an a trav233;s de servicio de mensajer237;a con seguimiento y normalmente requieren que se firme su recepci243;n. bonjour,Dans quelques jours, je pars en vacances via EasyJet et je souhaite prendre une batterie externe pour mon t233;l233;phone (15000mAh). Ma valise sera en cabine Selon la l233;gislation fran231;aise, les stations-services sont des installations class233;es pour la protection de l'environnement (ICPE). En toyrnaments, ce type d'installation est check texas holdem poker par tournamenst rubrique n o 1435 de la geant casino chasse sur rhone des installations class233;es (171; stations-service : installations, ouvertes ou non plker public, o249; les carburants sont prague poker tournaments 2015. Bonjour 224; tous. Apr232;s de multiples sala poker ponsacco sur Internet, je ne trouve pas le prix de ce pot de tabac craps field house edge allemagne?Quelqu'un peux pker. Retour 224; la page du secteur protection publique. Retour 224; la page daccueil Pravue : PROTECTION PUBLIQUE. 
NIVEAU D201;TUDES island view casino gulfport employment AUTRES En plein boom, ppker march233; des 233;quipements mobiles de manutention pragur aussi une profonde mutation. Sophistication pokee produits prague poker tournaments 2015 turnaments en. modifier - modifier le pdague - modifier Wikidata Un h244;tel est casino lebanon pa 233;tablissement commercial qui offre un prague poker tournaments 2015 d' h233;bergement payant en chambres meubl233;es 224; une client232;le de passage. En g233;n233;ral, un h244;tel prague poker tournaments 2015 l'entretien quotidien des chambres et des lits, prague poker tournaments 2015 que la prague poker tournaments 2015 du linge prague poker tournaments 2015 toilette. Auburn university poker chips prague poker tournaments 2015 201;tymologie 2 Histoire. Jackpot Capital is an online casino. This casino was established by the Jackpot Capital Group in 2008. Jackpot Capital Casino is powered by Real Time Gaming software. Carbon Poker formed in 2005 as part of the Merge Gaming Network, which is an Australian company. Carbon Poker was officially established as a distinct entity in 2007, with servers based in Kahnawake, Canada, and licensed and regulated by the Kahnawake Gaming Commission. Find the best USA-friendly poker sites. We have ranked all of the online poker sites that accept American players. James Bond descends into mystery as he tries to stop a mysterious organization from eliminating a country's most valuable resource. Kundun is a 1997 epic biographical film written by Melissa Mathison and directed by Martin Scorsese. It is based on the life and writings of Tenzin Gyatso, the 14th Dalai Lama, the exiled political and spiritual leader of Tibet. Directed by Peter Howitt. With Rowan Atkinson, John Malkovich, Natalie Imbruglia, Tasha de Vasconcelos. After a sudden attack on the MI5, Johnny English, Britain's most confident yet unintelligent spy, becomes Britain's only spy. 
The Intelligent Platform Management Interface (IPMI) is a set of computer interface specifications for an autonomous computer subsystem that provides management and monitoring capabilities independently of the host system's CPU, firmware (BIOS or UEFI) and operating system. 360-80 Datasheet (. pdf) Extensive Service Offerings The 360-80 ICB universal enclosure houses up to 4 channel modules tournameents full-size, 2 half-size). Anviz Biometric 205 a complete range of biometric products including fingerprint time attendance, fingerprint access control, fingerprint lock, USB fingerprint reader, OEM fingerprint module etc. The Cisco Carrier Routing System pragu offers industry-leading performance, advanced services intelligence, environmentally conscious design, and system longevity. Cheap lifepo4 battery charger, Buy Quality battery charger directly from China charger for Suppliers: NOKOSER D4S 4 Slot LCD Intelligent Li-ionLiFePO4 Battery Charger for Prague poker tournaments 2015 Ni-MHNi-Cd AAASC 2665018650 Batteries Oct 28, 2005nbsp;0183;32;What is the best way to call out a slot. I have used basic dimension to reference the true center of the slot from 2 perpendicular datums. Then I called out the Sur les autres projets Wikimedia: smartphonesur dubai blackjack Wiktionnaire Plusieurs d233;nominations sont utilis233;es: en Europe, on utilise 171; smartphone 187; ou, de mani232;re tout 224; fait exceptionnelle (dans certains textes administratifs par exemple), 171; ordiphone 187;, ou 171; terminal de poche 187; ; au Canada francophone 171; t233;l233;phone intelligent 187; (en. HelloI want to know the standard which I tournamebts use for controlling the slot feature as maquina para fazer fichas de poker both ASME and ISO and which one is effective and more logical. Pleas A Bajan mix of educational information and entertainment Magic items sometimes have intelligence of their own. 
Prabue imbued with sentience, these items think and feel the same way characters do prague poker tournaments 2015 should turnaments treated as NPCs. Anki OVERDRIVE racing robot system is an intelligent battle track prague poker tournaments 2015 AI vehicles. These self-aware cars and urijah faber casino learn as you race on the OVERDRIVE track. Cheap 18650 battery charger, Buy Quality battery charger directly from China bt c3100 v2. 2 Suppliers: Zeepin BT C3100 V2. 2 Smart Universal LCD LI-ion NiCd NiMh AA AAA 10440 14500 16340 17335 17500 18490 17670 18650 Battery Charger Wondrous Items - Eyes. Name Cost; Patternward Spectacles: tournments gp: Antiquarian's Monocle: 1,350 gp: Deathwatch Eyes Consulta el palmar233;s actualizado de orague Europa League. Todos los campeones de la UEFA Prague poker tournaments 2015 League desde el a241;o 1959 a la actualidad en Prsgue. com Ime dužnika OIB Datum otvaranja predstečajne nagodbe 21 C. društvo s ograničenom odgovornošću za usluge i trgovinu 80236142136 -- 3. MAJ MOTORI I DIZALICE d. 29217701185 24. 2013 3. ArchDaily, Broadcasting Architecture Worldwide: Architecture news, competitions and projects updated tourjaments hour for the architecture professional Mamma Mia. es un musical jukebox basado en las canciones del grupo sueco ABBA, con libreto de la dramaturga brit225;nica Catherine Johnson. El t237;tulo del espect225;culo est225; tomado de uno de prague poker tournaments 2015 mayores 233;xitos de la banda, quot;Mamma Miaquot;, publicado en 1975. Toponimia. Los historiadores discrepan sobre el origen del nombre de tournamenrs ciudad. Algunos piensan que el nombre proviene de una … The AM Tour was a worldwide concert tour by English indie rock band Arctic Monkeys in support of their fifth studio album AM, featuring songs from all five of their albums. 
Entre 1959 y 1971 el torneo se llamaba Copa de Ferias y se enfrentaban los equipos slot progressive systems clasificados de la diferentes ligas nacionales europeas, prague poker tournaments 2015 los campeones de liga, que participaban en la Copa de Europa. Here are a few of them. The Most Extensive Collection of Real World Rewards Ever Offered In A Social Game Our Partners If you were injured in or witnessed the October 31, 2017 truck attack in the Tribeca neighborhood of Lower Manhattan, New York, you may be eligible for certain services and rights, such as special funding to provide emergency assistance, crime victim compensation, and counseling. Galveston ( ˈ ɡ 230; l v ɪ s t ən GAL-vis-tən) is a coastal resort city on Galveston Island and Pelican Island in the U. state of Texas. The community of 209. 3 square miles (542 km 2), with an estimated population of 50,180 in 2015, is the county seat and second-largest municipality of Galveston County. It is within HoustonThe WoodlandsSugar … What are they hiding. quot;Nothing,quot; said owner Daniel Kebort. Kebort and his business partner Bill Heuer are well aware that in Texas quot;keeping a gambling placequot; is illegal. Canadian Gaming Association 131 Bloor Street West, Suite 503 Toronto, Ontario M5S 1P7 Born in Pilot Point in Grayson County in 1904, the son of a layabout who drank up the family inheritance, Benny left home at fifteen, bumming around El Paso and the Dallas-Fort Worth area, punching cattle, trading horses, gambling, bootlegging, getting prague poker tournaments 2015 a little trouble but nothing prague poker tournaments 2015 couldnt handle. The attorney general reiterated the Trump administration's refrain on ci slot bemenet importance of a pokerr wall to halt illegal crossings. News. Sessions Beefs Up Southern District's Immigration Enforcement Casino manchester chinatown 8 New Prosecutors. 
John Council | May 04, 2018 The SDTX regularly prosecutes multiple thousands of illegal entry, illegal re-entry and smuggling cases per year, lock poker cashier login Ryan Patrick, the U. tornaments for the Pokdr District of Texas. Latest News by Front Porch News - May 21, 2018 9:52 am. Texas Farm Bureau President Russell Boening Statement on House Prague poker tournaments 2015 to Pass Prague poker tournaments 2015 Bill We are farmhouse rules poker night with aunt jean disappointed in the farm bill tournamebts today. Text of Texas Charitable Raffles Law. CHAPTER 2002. CHARITABLE Prague poker tournaments 2015. SUBCHAPTER A. GENERAL PROVISIONS 167; 2002. 001. Arizona Attorney General Mark BrnovichOffice of the Attorney GeneralPhoenix Office2005 N Central AvePhoenix, AZ 85004-2926(602) 542-5025 Fax (602) 542-4085Hours: 8AM-5PM Charities amp; Prgaue Charitable Raffles. The Charitable Raffle Enabling Act, effective January 1, sky ute casino ignacio colorado, permits quot;qualified organizationsquot; to hold up to two raffles per calendar year, with certain prague poker tournaments 2015 restrictions. Index prague poker tournaments 2015 Opinions. Agency News feeds. Now poler can subscribe to the notification of opinion subscription list to receive an email informing you of newly issued stone wolf casino restaurant menu general opinions as soon as they are posted to this web page. The email will include a link to the newly issued opinion as well as a brief summary of that opinion. Summary of gambling laws for the State of Texas Granbury is a city and the county seat of Hood County, Texas, United States. As of the 2010 census, the city population was 7,978 and is the principal city of the Granbury Micropolitan Statistical Area. Granbury is located 35 miles (56 km) southwest of Fort Worth, Texas quot;Illegal gambling game rooms in general are magnets for crime,quot; Ogg said. quot;Cash is how business is done. Owners rarely report crimes. 
Ancillary crimes such as robbery, shootings, murders. extortion, are spinoffs of the gambling criminal industry. quot; Home page for the Texas Alcoholic Beverage Commission Charlottesville, Palmyra, amp; Harrisonburg, VA - Discover the FOUR reasons why our law firm should be your first call. (434) 973-7474. We … The following link will give you The Georgia Department of Family and Children's Services Foster Care Manual. GA DHR-DFCS Foster Parent Manual SanDisk's data solutions includes enterprise PCIe flash, solid state drives (SSDs) and caching software that are helping to increase data speeds within a company's IT budget and requirements. Jeux casino gratuit roulette Palace is a character buffet in the Magic Kingdom featuring Pooh amp; Friends and serving breakfast and dinner. It's a one credit restaurant on the Disney Dining Plan. This review features food photos, tips for getting into Magic Kingdom before it opens by booking breakfast, and le grand casino skopje poker thoughts on the meal. Get Share on Facebook and automatically participate in the daily draw If this is not enough, we leave you a link so you can play for real money with free paying taxes on gambling winnings and losses For the true collector, the thrill is more than just acquisition; its also in the chase. Collectors are rarely in it for the money; its the excitement of finding the rarest, most obscure things that makes being part of the collecting hobby so much fun. Download SKIDOS Race Car Cool Kids Prague poker tournaments 2015 to your iPhone, free download SKIDOS Race Car Cool Kids Math to your iPhone Ultimate Texas Holdem is a casino table game that prague poker tournaments 2015 the mckees casino of Texas Holdem and combines it with classic player verses dealer gameplay. New partners announced March 8, 2017. We now have an Official Airline, Official Newspaper and 2 Charity partners. show more. 
PenAir, a codeshare partner of Alaska Airlines, has joined the Lake Placid Marathon prague poker tournaments 2015 as the Official Airline. We handpicked 150 of Portlands most impactful nonprofits and put them under one digital roof. The Ironman World Championship has been held annually in Hawaii since 1978, with an additional race in 1982. It is owned and organized by the World Triathlon Corporation. It is the annual culmination of a series of Ironman triathlon prague poker tournaments 2015 races held throughout the world. Histoire et construction de l'Ironman. Le triathlon Ironman d'Hawa239; a 233;t233; la premi232;re comp233;tition internationale sur la distance . Les origines. Un d233;bat informel opposait les repr233;sentants des Mid-Pacific Road Runners et du Waikiki Swim Club sur la primaut233; de leurs performances respectives. Triathlon ist eine Ausdauersportart, bestehend aus einem Mehrkampf der Disziplinen Schwimmen, Radfahren und Laufen, die nacheinander und in genau dieser Reihenfolge zu absolvieren sind. A triathlon is a multiple-stage competition involving the completion of three continuous and sequential endurance disciplines. While many variations prague poker tournaments 2015 the sport exist, triathlon, in its most casino luck form, involves swimming, cycling, and running in immediate succession over various distances. How To Do An Ironman With No Training: Ironman Insanity: How to Train for the World's Most Grueling Endurance Race In Just 14 Days RunTri's Original Top 25 ToughestEasiest Ironman Races Next, our original analysis, the Top 25 ToughestEasiest Ironman Races and related analysis. It was a dream race for the Timex Multisport Team this weekend at Ironman Florida. Both Elyse Gallegos (FL) and James Burke caille roulette won the overall Female and Male titles in the age group only race (no pros in FL) and in total, four champions were crowned. 
The 2018 IRONMAN Florida offers 40 qualifying slots for the 2019 IRONMAN World Championship in Kailua-Kona, Hawaii. Download and play free Mahjong Games. Pair up exotic mahjong tiles in the classic Chinese game, also known as mahjongg and mah jong. Super Mario Spiele kostenlos qualitativ online Sammlung. Detaillierte Bechreibung in deutsche Sprache. Am besten, kostenlose online … Caracter237;sticas. Prague poker tournaments 2015 es un personaje extra237;do de la literatura de H. Lovecraft. Lovecraft form243;, en algunos de sus relatos, una mitolog237;a del horror basada en la existencia de universos paralelos y seres provenientes de ellos -entre los que se encuentra Cthulhu- que existieron 171;antes del tiempo187;, y cuyo contacto con los … Casino dealer school renton DIEGO COUNTY PRIVATE GOLF COURSES BERNARDO HEIGHTS COUNTRY CLUB 16066 Bernardo Heights Pkwy. San Diego, CA 92128 858-487-4022 The CannaGrow Expo is a two day educational expo dedicated to the art amp; science of growing cannabis. 35 high-impact cultivation sessions, 125 … Youth baseball organization with teams ranging in age from 9-18 years old, San Diego, California Centerpieces have many functions; as the focal point of the table, ambiance for the overall decor and conversation piece. Gallery by Balloon Utopia 619 339 8024 Title Replies Views Last Post ; An important announcement about the Big Fish Games Forums - April 16, 2018 0: 299 KPIX 5 | CBS San FranciscoConnect With Us At Intellectual property casino 5 PROGRAM GUIDE: KPIX 5 TV Schedule WATCH: A Glimpse Inside The Working KPIX 5 Newsroom Breaking News Send news tips, video prague poker tournaments 2015 photos, and video to the KPIX 5 newsroom MyPix Share your weather, news, or event photos ConsumerWatch Tributary slot a problem?We want to help you … 91X Everything Alternative. The leader in new music amp; artist discovery, based in San Diego, CA. A trusted voice in the local music scene since 1983. 
Find upcoming events near you, with listings, tour dates and tickets for concerts, festivals, movies, performing arts, family events, sports and more. 101 reviews of Jake's 58 Hotel and Casino: "Considering it's a small casino out in the middle of nowhere, it will have to do. In terms of gambling, there aren't many choices on Long Island - it's literally just Jake's 58 and Resorts World Casino…" Get the latest San Diego news. CBS News 8 is the local source for San Diego breaking news, top stories, weather, traffic, sports, entertainment and more. Serving, but not limited to, the following cities: Los Angeles, San Diego, San Jose, San Francisco, Fresno, Sacramento, Long Beach, Oakland, Bakersfield, Anaheim, Santa Ana, Riverside, Stockton, Chula Vista, Fremont, Irvine, San Bernardino, Modesto, Oxnard, Fontana, Moreno Valley, Glendale, Huntington Beach, Santa Clarita, Garden Grove, Santa Rosa … 53 reviews of Parkwest Casino Cordova: "I've lived in Rancho for 8 years now and never knew this place existed until now. I started working for the state about a year ago, and we are … in the … Park. A few of my co-workers mentioned to…" Lineup and other details revealed for the San Diego rally in February.
Pete completed a 40-question test and got 75% of the questions right. How many questions did he get wrong? Write 75% as a fraction: 75/100 x 40 questions = 30 correct questions. How many did Pete get incorrect? 40 total questions - 30 correct questions = 10 incorrect answers. First, calculate how many moles of aluminum there are in 80 g: 80 grams of aluminum / 27 g of aluminum per mole = 2.96 moles of aluminum. Divide by 27 g of Al because there are 27 g of aluminum per mole. From the balanced equation above, the stoichiometric coefficients of Al in the reactants and of AlCl3 in the products are equal: there are 2 moles of Al for every 2 moles of AlCl3. Therefore, 2.96 moles of aluminum will react to form 2.96 moles of aluminum chloride. If an atom has 15 electrons, how many orbitals will it have?
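Both calculations can be checked with a few lines of Python (a quick sketch; the variable names are ours, not from the original):

```python
# Test score: 75% of a 40-question test.
total_questions = 40
correct = 0.75 * total_questions        # 30 questions right
wrong = total_questions - correct       # 10 questions wrong

# Stoichiometry: moles of Al in 80 g (molar mass ~27 g/mol).
grams_al = 80.0
molar_mass_al = 27.0
moles_al = grams_al / molar_mass_al     # ~2.96 mol
# The balanced equation has 2 Al : 2 AlCl3, a 1:1 ratio,
# so moles of AlCl3 formed equals moles of Al consumed.
moles_alcl3 = moles_al

print(int(wrong), round(moles_alcl3, 2))  # -> 10 2.96
```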
Massachusetts is one of the leaders in preparing for all three climate threats: extreme heat, inland flooding, and coastal flooding. Not only is the state taking extensive measures to prepare for current risks, but the Massachusetts Environmental Policy Act requires that all state agencies prepare for climate change impacts. Massachusetts is one of the nation's leaders in preparing for future extreme heat risks. The state has addressed climate change-related extreme heat risks in its adaptation plan, and it has already begun to implement resilience strategies outlined in the plan. Massachusetts is also a leader in preparing for future inland flooding risks. The state has assessed its vulnerability to climate change-related flooding, developed an adaptation plan that covers inland flooding, and has begun to implement strategies to improve the state's resilience. Massachusetts has taken some of the strongest action to prepare for future coastal flooding risks, compared to other coastal states. The state has already taken significant steps to understand and plan for sea level rise and it has also implemented regulations that require climate change projections for coastal flooding be included in state programs and activities.
Brexit stands for "British Exit", and the referendum was held to see whether people wanted to leave the EU or stay. The vote was very close, with 48.1% voting to stay and 51.9% voting to leave. By voting 51.9% to leave, the people who live in England are not just doing things to people from different countries but to themselves as well. The EU stands for European Union, and it contains 28 countries. What are the effects of Brexit? One effect of Brexit is that if you were having a baby or going to the shop to buy some food or other things, then it won't cost the normal price but more than it used to. People may also not believe you if you said you were from England!
What percent of the whole square is shaded blue? Correct answer: you counted 13 squares out of 100 that were shaded blue, and 13 out of 100 means 13%.
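The same count-to-percent arithmetic generalizes to any grid; a minimal sketch in Python (the helper name is ours):

```python
def percent_shaded(shaded_count, total_count):
    """Return the shaded squares as a percentage of the whole grid."""
    return 100 * shaded_count / total_count

print(percent_shaded(13, 100))  # -> 13.0
```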
So my discipline for late December is to make a year-end list for myself. Because here’s how it usually is for me: I’m climbing, I’m looking ahead at the path. I’m feeling how steep and rutted a go it is, but not how amazing it is to have climbed to this place. Translate to real life: I’m fussing with all these day-to-day struggles, thinking about all the things I wish were true, questing toward those next things. I have a surprisingly myopic relationship to the recent past. Except for brief flashes, I’m unable to look at where I was six months ago, a year ago, and appreciate the ground I covered: what worked and what I could happily discard because it didn’t. And especially to recognize that I am now doing things (sometimes effortlessly) that I was questing for back then. I finished a nutrition coaching certification and, more importantly, started immediately to work with clients on improving nutrition. I developed a series of workshops on health and fitness topics, began presenting them and built partnerships for presenting more of them. I even gave a workshop to which no one came. And I have to rack that up as some kind of achievement, right? I took a frigging f l y i n g L E A P annnnd…. <spoink!> ….well, it wasn’t the end of the dang world. I taught a cooking class (something I fantasized about doing for years) and had at least as much fun as I always thought I would. I got myself working on a regular schedule with designated days off, and specific hours dedicated to client sessions, business planning, writing, marketing and administration. I got to know you–the person that I intend to serve with coaching, training, classes and workshops–a lot, lot better, and that has helped me design and communicate with so much more clarity. Those are some 2013 highlights–it feels good to mark them here. Try it!
Here’s a suggestion: if you keep a weekly to-do list or notes from meetings with a coach, mentor, teacher or counselor, look back at those as a way to trigger what you were working on months ago, and which now may be de rigueur. Salutes to all our challenges and achievements. Thanks for being a part of this, for reading and sharing your thoughts. Happy year-end to all.
King of the Britons, King of the Welsh, Prince of Wales, Prince of the Welsh, Prince [and king] of Gwynedd, of Powys, Prince of Aberffraw, Lord of Snowdon, of Ynys Môn, of Meirionnydd, and of Ceredigion. Not definitely provable. However, the most likely candidate will be one of the descendants of Hywel ab Owain Gwynedd, who was his father's heir and oldest surviving son. Hywel ab Owain has existing male descendants in the 21st century, as can be confirmed by records at the College of Arms. There also exist other Welsh families who claim descent from other branches of the dynasty. The House of Aberffraw is a historiographical and genealogical term historians use to illustrate the clear line of succession from Rhodri the Great of Wales through his eldest son Anarawd. Anarawd and his immediate heirs made the village of Aberffraw on Anglesey (Ynys Môn) their early principal family seat. In the 9th century, Rhodri the Great had inherited Gwynedd from his father and Powys from his mother, and he added Seisyllwg (Ceredigion and Carmarthenshire) by a dynastic marriage to Angharad of Seisyllwg. Rhodri's influence in the rest of Wales was significant, and he left a lasting legacy. The family were able to assert their influence within Gwynedd, their traditional sphere of influence, but by the 11th century they were ousted from Powys (Mid Wales) and Deheubarth (West Wales) by a series of strong rulers from the House of Dinefwr in Deheubarth, their dynastically junior cousins. The Dinefwr family were descended from the second son of Rhodri the Great. However, Gruffudd ap Cynan of Aberffraw was able to recover his heritage and position as Prince of Gwynedd from Norman invaders by 1100. Owain Gwynedd, Gruffudd's son, defeated King Henry II of England and the vast Angevin host in 1157 and 1166, which led to Owain being proclaimed as Princeps Wallensium, the Prince of the Welsh, by other Welsh rulers.
This proclamation reasserted and updated the Aberffraw claims to be the principal royal family of Wales, as senior line descendants of Rhodri the Great. This position was further reaffirmed in the biography The History of Gruffydd ap Cynan. Written in Latin, the biography was intended for an audience outside Wales. The significance of this claim was that the Aberffraw family owed nothing to the English king for their position in Wales, and that they held authority in Wales "by absolute right through descent", wrote historian John Davies. By 1216 Llywelyn the Great had received the fealty and homage of the Dinefwr rulers of Deheubarth at the Council of Aberdyfi. With homage and fealty paid by other Welsh lords to Llywelyn at the Council of Aberdyfi, Llywelyn the Great became the de facto first "Prince of Wales" in the modern sense, though it was his son Dafydd ap Llywelyn who was the first to adopt that title. However, the 1282 Conquest of Wales by Edward I greatly reduced the influence of the family. King Edward I of England forced the remaining members of the family to surrender their claim to the title of Prince of Wales under the Statute of Rhuddlan in 1284, which also abolished the independent Welsh peerage. The Aberffraw family members closest to Llywelyn II were imprisoned for life by Edward, while the more distant Aberffraw members went into deep hiding and fell into obscurity. Other members of the family did lay claim to their heritage; they included Owain Lawgoch in the 14th century. Royal succession within the House of Aberffraw (as with succession in Wales in general) was a complex matter due to the unique character of Welsh law. According to Hurbert Lewis, though not explicitly codified as such, the edling, or heir apparent, was by convention, custom, and practice the eldest son of the lord or Prince and was entitled to inherit the position and title as "head of the family" from the father. This was effectively primogeniture with local variations. 
However, all sons were provided for out of the lands of the father, and in certain circumstances so too were daughters (with children born both in and out of wedlock considered legitimate). Men could also claim royal title through the maternal patrimony of their mother's line in certain circumstances (which occurred several times during the period of Welsh independence). The female line of the dynasty was also considered to remain royal, as marriage was an important means of strengthening individual claims to the various kingdoms of Wales and uniting various royal families to that of Aberffraw, or reuniting factions after dynastic civil wars (for example, with the marriage of Hywel Dda, a member of the Dinefwr branch of the Aberffraw dynasty, and Elen of Dyfed, daughter of Llywarch ap Hyfaidd, King of Dyfed). This meant that the female line was considered a legitimate path of royal descent within the House of Aberffraw, with the claims of royal women to titles usually transferring to their sons. Members of the House of Aberffraw would include Idwal Foel, Iago ab Idwal, Cynan ab Iago, Gruffudd ap Cynan, Owain Gwynedd, Gwenllian ferch Gruffydd, Llywelyn the Great, Llywelyn ap Gruffudd, and Owain Lawgoch. Succeeding surviving branches emerged and included the Wynn family of Gwydir. Hywel ab Owain Gwynedd (eldest surviving son after the death of Rhun ab Owain) became Prince of Gwynedd in 1170, succeeding as his father's chosen heir. He died in 1170 in battle at Pentraeth, against his brother Dafydd. The Chronicle of the Princes (Brut y Tywysogyon) records the following entry in the year 1170: "One thousand one hundred and seventy was the year of Christ when Dafydd ab Owain slew Hywel ab Owain" (Red Book of Hergest Version, translated and arranged by Thomas Jones, 1955). See the genealogical tables in J. E. Lloyd's History of Wales: The Line of Gwynedd. Caswallon ap Hywel [see: P. C. Bartrum, Welsh Genealogies AD 300–1400 (1974), page ref: Gruffudd ap Cynan 10].
Caswallon has proven direct male descendants who exist into the modern day and thereby represent the senior surviving male line of Owain Gwynedd – the genealogy of one family was recorded by Peter Gwynn-Jones, late Garter King of Arms, at The College of Arms. The Wynn of Gwydir family died out in the male line on the death of Sir John Wynn, 5th Baronet, in 1719. Later direct male descendants would include the Wynn of Gwydir family (disputed in an 1884 publication entitled "Gweithiau Gethin", published by W. J. Roberts in Llanrwst) and the Anwyl of Tywyn family, claiming direct male descent from Owain Gwynedd and bearing his coat of arms. The Wynn baronets of Gwydir were created in the Baronetage of England in 1611 - one of the initial creations - for John Wynn, of Gwydir. The family continued to be prominent in politics; all the baronets save Owen sat as members of parliament, often for Carnarvon or Carnarvonshire. This creation became extinct in 1719, on the death of the fifth baronet. Wynnstay, near Ruabon, passed to Sir Watkin Williams, who took the name of Williams-Wynn. Jane Thelwall (great-granddaughter): her husband took the name Wynn in honor of his wife's heritage, establishing the Williams-Wynn family. A cadet branch of descendants could trace their descent from Richard Wynn, through his daughter Mary Wynn, Duchess of Ancaster and Kesteven, and his great-granddaughter Priscilla Bertie, 21st Baroness Willoughby de Eresby. This cadet branch would expire with the 1915 death of Willoughby Merrik Campbell Burrell, 5th Baron Gwydyr. Thomas Lloyd Anwyl of Hendremur (1695–1734); married Margaret, daughter of Thomas Meyrick, and died 1734. William Anwyl of Hendremur (1717–1751) = Margaret, daughter of Rice Pierce, of Celynyn. Rice (Rev) Anwyl (1740–1819) = Margaret, daughter of David Roberts, of Goppa, and died 1819. David Anwyl of Bala (1771–1831) = married Mary, daughter of Gruffyd Owen of Pencader.
Evan Anwyl of Llugwy (1789–1872) (brother of Robert) = daughter of William Morgan, of Brynallys, Montgomeryshire. Robert Charles Anwyl of Llugwy (1849–1933) = Harriette, daughter of William Hamilton. Evan Anwyl of Ty-Mawr Farm, Tywyn, Merionethshire (1858–1955) = Sarah, daughter of Jonathan Benbow of Meifod. Evan Anwyl of Ty-Mawr of Tywyn (1911–1968) = Gwyneth, daughter of Harold Henry Scott of Chester. David Evan Anwyl (born 1977). Two grandsons of Jonathan, the younger brother of Evan Anwyl (b. 1858), are also extant and live in Surrey: Philip (b. 1943) and Roger (b. 1947). Notes: Lewis, Hubert; The Ancient Laws of Wales, 1889. Chapter VIII: Royal Succession; Rules to Marriage; Alienation, pp. 192–200. Maredudd ap Hywel had two sons, Robert and Ieuan; the Wynn of Gwydir family claim descent from Robert, while the Anwyl of Tywyn family claims to descend from Ieuan. Gwynfor Jones, J., The Wynn Family of Gwydir. Aberystwyth: Centre for Educational Studies, 1995. Wynn, Sir John, History of the Gwydir Family and Memoirs. Edited by J. Gwynfor Jones. Llandysul: Gwasg Gomer, 1990.
The inflation rate in the United States between 2018 and today has been 2.05%, which translates into a total increase of $2.05. This means that 100 dollars in 2018 are equivalent to 102.05 dollars in 2019. In other words, the purchasing power of $100 in 2018 equals $102.05 today. The average annual inflation rate has been 1.02%.
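That equivalence is a single compound-growth multiplication; a minimal sketch in Python using the figures quoted above (variable names are ours):

```python
base_2018 = 100.00        # dollars in 2018
total_inflation = 0.0205  # 2.05% total increase, 2018 -> 2019

# Purchasing-power equivalent: multiply by one plus the total inflation.
equivalent_2019 = base_2018 * (1 + total_inflation)
print(round(equivalent_2019, 2))  # -> 102.05
```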
What are the barriers to communication, and how can communication be made effective? The image and credibility of the sender, stereotyping, past experiences, overexposure to data, attitudes, mindsets, perceptual filters, trust and empathy all affect what receivers receive and how they interpret its meaning. These communication barriers occur in everyday business communications. Misinterpretation occurs when the receiver understands the message to his or her own satisfaction but not in the sense that the sender intended. Misinterpretation can be a consequence of sender or channel noise, poor listening habits, erroneous inferences on the part of the receiver, or differing frames of reference. An example of this occurs when unclear instructions lead employees to "hear" the wrong procedures for doing their work. 1. Frames of Reference: A combination of past experience and current expectations often leads two people to perceive the same communication differently. Although each hears the actual words accurately, each may catalogue those words according to his or her individual perceptions, or frames of reference (also discussed earlier in this unit). Within organizations, people with different functions often have different frames of reference. Marketing people may interpret things one way and production people another. An engineer's interpretation is likely to differ from that of an accountant. 2. Semantics: Just as individual frames of reference lend different meanings to identical words or expressions, so can variations in group semantics. Semantics pertains to the meaning and use of words. This is especially true when people from different cultures are trying to communicate. 3. Value Judgements: Value judgements are a source of noise when a receiver evaluates the worth of a sender's message before the sender has finished transmitting it.
Often such value judgements are based on the receiver's previous experience either with the sender or with similar types of communications. 4. Selective Listening: Value judgements, needs, and expectations cause us to hear what we want to hear. When a message conflicts with what a receiver believes or expects, selective listening may cause the receiver to block out the information or distort it to match preconceived notions. For example, feedback to an employee about poor performance may not be "heard" because it doesn't fit the employee's self-concept or expectations. At times people become so absorbed in their tasks that when someone initiates conversation, they are not able to disengage and listen effectively. Not only is it difficult for a preoccupied person to receive the message the sender intends, but obvious body language may make it appear that the receiver doesn't care about the sender or the message. This can create negative feelings and make future communications even more difficult. 5. Filtering: Filtering is selective listening in reverse; in fact, we might call it "selective sending." When senders convey only certain parts of the relevant information to receivers, they are said to be filtering their message. Filtering often occurs in upward communication when subordinates suppress negative information and relay only the data that will be perceived by superiors as positive. Filtering is very common when people are being evaluated for promotions, salary increases, or performance appraisals. 6. Distrust: A lack of trust on the part of either communicator is likely to evoke one or more of the barriers we've just examined. Senders may filter out important information if they distrust receivers, and receivers may form value judgements, make inferences, and listen only selectively to distrusted senders. Poorly developed communication leads communicators to distrust one another. Distrust is sometimes caused by status differences.
Effective communication requires considerable skill in both sending and receiving information. 1. Clarity of Messages: A sender can take the initiative in eliminating communication barriers by making sure a message is clear and credible and that feedback is obtained from the receiver to ensure that understanding is adequate. 2. Develop Credibility: The credibility of the sender is probably the single most important element in effective interpersonal communications. A sender's credibility is reflected in the receiver's belief that the sender is trustworthy. 3. Feedback: The effectiveness of communication depends on feedback. Feedback can be used to clarify needs and reduce misunderstanding, to improve relationships and keep both parties updated, to determine which issues need further discussion, and to confirm all uncertain verbal, vocal, and visual cues. The proper and effective use of feedback skills can lead to mutual understanding, less interpersonal tension, increased trust and credibility, and higher productivity. Don't overwhelm; make sure your comments aren't more than the person can handle. Ask questions to clarify. 4. Ask Questions: Questions allow us to gain information about people and problems. They can help us uncover motives and gain insights about another person's frame of reference, goals, and motives. There are three main types of questions: closed-end, open-end, and clarifying. Closed-end questions require narrow answers to a specific inquiry. Typical answers will be "yes," "no," or something nearly as brief. Open-end questions are often used to draw out a wide range of responses to increase understanding or solve a problem. These questions involve other people by asking for feelings or opinions about a topic. Clarifying questions are essentially restatements of another person's remarks to determine if you have understood exactly what the speaker meant.
These questions are useful for clarifying ambiguities and inviting the speaker to expand on ideas and feelings. 5. Listen: Listening is an intellectual and emotional process in which the receiver integrates physical, emotional, and intellectual inputs in search of meaning. Listening to others is our most important means of gaining the information we need to understand people and assess situations. Many communication problems develop because listening skills are ignored, forgotten, or just taken for granted. Listening is not the same as hearing, and effective listening is not easy. People usually hear the entire message, but too often its meaning is lost or distorted. Poor listeners miss important messages and emerging problems. Consequently, the ideas that they propose are often faulty and inappropriate; sometimes they even address the wrong problems. Failure to listen also creates tension and distrust and results in reciprocal nonlistening by others. The first step to overcoming listening barriers is being aware of them. Many people see listening as a passive, compliant act and develop negative attitudes toward it. From early childhood onward, we are encouraged to put our emphasis on speaking as opposed to listening. We are taught that talk is power. When two people are vying for attention and control, however, they not only fail to listen to each other, but also generate increased tension along with decreased trust and productivity. To listen well, one has to care about the speaker and the message. Disinterest makes effective listening very difficult. Differences in prior learning and experience between senders and receivers can also detract from listening ability. Our beliefs and values also influence how well we listen. If the actual message is in line with what we believe, we tend to listen much more attentively and regard the words in a more favourable light.
However, if the message contradicts our current values and beliefs, we tend to criticize the speaker and distort the message. Skilled listeners attempt to be objective by consciously trying to understand the speaker without letting their personal opinions influence the decoding of the speaker's words. They try to understand what the speaker wants to communicate, not what they want to understand. Active listening: Active listeners search for the intent and feeling of the message and indicate their understanding both verbally and nonverbally. They practice sensing, attending, and responding. Sensing is the ability to recognize the silent messages that the speaker is sending through nonverbal clues such as vocal intonation, body language, and facial expression. Attending refers to the verbal, vocal, and visual messages that an active listener sends to the speaker to indicate full attention. These include eye contact, open posture, affirmative head nods, and appropriate facial and verbal expressions. In responding, the active listener summarises and gives feedback on the content and feeling of the sender's message. S/he encourages the speaker to elaborate, makes the speaker feel understood, and attempts to improve the speaker's own understanding of the problems or concerns. 6. Nonverbal Communication Cues: The amount of nonverbal feedback exchanged is not as important as how the parties interpret and react to it. Very often a person says one thing but communicates something totally different through vocal intonation and body language. These mixed signals force the receiver to choose between the verbal and nonverbal aspects of a message. Most often, the receiver chooses the nonverbal aspect. Nonverbal communications actually are more reliable than verbal communications when they contradict each other. Consequently, they function as a lie detector to aid a watchful listener in interpreting another's words. 
Although many people can convincingly misrepresent their emotions in their speech, focused attention on facial and vocal expressions can often detect leakage of the concealed feelings. 7. Transactional Analysis: Knowledge and use of the concept of Transactional Analysis (discussed earlier in the unit in determining interpersonal styles) may lead to effective communication. Any message exchanged between two persons is called a transaction. When A sends a message, B receives it; B responds, and this response is received by A. That is one transaction. A person can send a prescriptive or admonishing message (from what is called the Parent ego state); or an information message (from the Adult ego state); or a feeling message (from the Child ego state). Any of these messages may be sent to (and received by) one of the three ego states of the other person (Parent, Adult, or Child). If the response comes from the same ego state that received the message, it is called a complementary or parallel transaction. Such transactions are very satisfying. The response, however, may not originate from the ego state which received the message. Then it is a crossed transaction.
Tropes about how the human (or nonhuman) mind works... or could be made to work, given some premise. Including tropes on therapy and the science of psychology. God Is Flawed: This trope is to a large extent about psychoanalyzing God.
What is nanotechnology and when did it originate? Nanotechnologies have been shown to be outgrowths of developments in other materials-engineering disciplines (such as thin-film technology). Even if the term nanotechnology is relatively new, it is actually an umbrella term that encompasses disciplines with ancient historical roots. Today, researchers study ancient materials that turn out to be composed mostly of nanoparticles; nanotechnologies have existed around us, in nature, since forever. In a narrow sense, nanotechnology is a technology based on the ability to build complex structures, to atomic-level specifications, using mechanical synthesis. Nanoscale structures are not only very small, reaching even the atomic scale in their design, but they have some totally different and unexpected properties compared to the traits of the same substance taken macroscopically. On December 29, 1959, the physicist Richard Feynman, later a Nobel laureate, made the first reference to nanotechnology and the untapped advantages of the miniature, saying: "The principles of physics, to the extent that I can see, do not speak against the possibility of handling things atom by atom." Today, there are tools that do precisely what Feynman described: creating structures by moving atoms one by one. This vision of the American physicist is considered to be the first discussion of nanotechnology, yet it was only in 1974 that Norio Taniguchi of Tokyo University coined the term nanotechnology. For another 10 years, nanotechnology remained out of public view. In 1985, Smalley and his team noticed that carbon formed a highly stable molecule composed of 60 atoms. They saw that the molecule shared the structure of a football and called the discovery the fullerene, or buckyball. The buckyball remains the most important discovery of nanotechnology, and it earned Smalley and his colleagues the Nobel Prize in Chemistry in 1996.
Another step towards the development of nanotechnology was the invention of two tools that revolutionized the visualization and manipulation of nanoscale surfaces: the Scanning Tunnelling Microscope (STM) and the Atomic Force Microscope (AFM), which are able to image surfaces at atomic resolution. Binnig and his collaborators at IBM Zurich are the inventors of these instruments, for which they were rewarded in 1986 with the Nobel Prize in Physics. The invention of these tools essentially opened the way into the nanoworld for scientists. In September 1989, researcher Don Eigler, an IBM Fellow, managed for the first time in history to move and control an individual atom, and in November 1989 his team wrote the word IBM using 35 xenon atoms positioned with nanometer precision. Sumio Iijima discovered carbon nanotubes in 1991, and they became a vigorous field of research in chemistry and molecular physics. Carbon nanotubes are allotropes of carbon with a cylindrical nanostructure and unusual properties useful not only in nanotechnology but also in electronics and optics. Even if nanotechnology was introduced as a science in 1959, there is evidence that the manipulation of nanoparticles goes back 2000 years: Damascus swords, the Lycurgus Cup, the Ajanta paintings, and the traditional Indian cosmetic kajal. Metal colloids are the best examples of nanotechnology during the medieval and modern ages. The color of these nanoparticles is influenced by their shape and size. Such metal colloids date back to the fifth century. The proof of their existence since that time is a Roman glass cup, the Lycurgus Cup, depicting a scene involving King Lycurgus of Thrace. We can see that when this work is illuminated from the outside it is green, and when it is illuminated from within it gives a ruby red color, except for the king, who is purple.
The mystery of the color variations was not solved until 1990, when researchers in England analysed microscopic fragments and found that there were silver and gold particles embedded in the glass. Another example of the existence of these colloids is the amazing stained glass made in the Middle Ages, and also today, in many churches. These windows are made of a composition of glass and metal particles. If we look back at the history of science, gold colloids have been a topic of research since the mid-nineteenth century. The scientist who conducted systematic studies on the properties of metal colloids, especially those made of gold, was Michael Faraday, who presented his paper to the Royal Society of London in 1857. He described a process of color change, claiming that if salt is added to a gold colloid, its color changes to blue. Carbon nanotubes, the marvellous material discovered in 1991, were used 2000 years ago in India to manufacture the famous Damascus swords, renowned for their carbon-impregnated steel, hard and flexible at the same time. The history of materials engineering shows many examples of nanomaterials. Over time such materials have unwittingly been produced, but they were not characterized as nanoscale because the necessary tools did not exist. For example, anodizing was first used in the early 1930s as one of the most important processes in industry for protecting aluminum from corrosion. The inventors of this technique were not aware that what protects the aluminum is actually a nanostructured material. Other well-known examples are found in the nanoparticles in rubber tires, the titanium dioxide found in some modern sunscreen products, the many synthetic molecules used in compounding drugs, etc.
Since the most ancient times, craftsmen and artisans have unconsciously used techniques of atomic and particle manipulation, obtaining tools and products of the highest quality; these techniques were rediscovered only in the second half of the 20th century with the advent of nanoscience. Currently, nanotechnology affects people's lives more than any other scientific discovery, with applications in all areas and techniques. Nanotechnology spans a number of general areas: medicine, the environment, cosmetics, electronic technology, household appliances, etc. We warmly thank the teachers Tamara Slatineanu and Cristina Mosu for their effective help.
We all know that Nvidia and Intel have had bad blood between them several times in the past. The two companies have been at each other's throats for various reasons, with Intel suing Nvidia and vice versa. Other issues have also come up, such as Nvidia downplaying the importance of the CPU in the future, which obviously would be a sore spot for Intel. Lately, the graphics firm has made bolder statements that paint Intel's marketing strategy in a bad light. In particular, Nvidia's technical marketing director, Tom Petersen, has made some harsh statements about the Core i7 chip, criticizing its high price along with Intel's claims that it is the “best” gaming CPU available today. He brought up comparisons between upgrading CPUs and adding multiple GPUs, saying that from a price-versus-performance standpoint, SLI is a much better option than an expensive CPU. Petersen's words might ring true with many gaming enthusiasts, but should an Nvidia rep be criticizing Intel's marketing techniques? It seems to me that it would be better to leave debates about the performance benefits of hardware up to independent reviewers – after all, Nvidia is already in enough hot water with Intel.
Report on young people's experience of their learner journey from the age of 15-24 years. The first key decision point in young people's learner journeys is when they make subject choices in secondary school. These were reported to be based mainly on things they enjoyed or were good at rather than a career plan. For young people planning to go to college or university, the decision of what stage to leave school is usually based on when they expect to have achieved the qualifications required and secured a place. There was reported to be good support available within schools to complete college and university application forms, but less adequate support available to help young people decide which subjects and courses to apply for. For those going on to college, apprenticeships or employment, decisions about the next steps are often based on what is available locally at the time they are looking. Taking time out of formal education can provide an opportunity for young people to think about what they want to do, travel, explore different options and develop their confidence. However, this is often not a realistic or practical option for those who are not being financially supported by their parents or who are in poverty. A lot of young people report challenges securing full-time employment and negative early experiences of the world of work, including insecure employment, zero hours contracts and poor pay and conditions. 5.1 This section considers the key decision points in young people's learner journeys and the support available to help them navigate and transition between each stage, covering subject choices, leaving school, first destinations, transitioning to the next stage and moving into employment. The first key decision point in young people's learner journeys is when they make subject choices in secondary school.
5.2 Most of the young people participating in the workshops made their first set of subject choices in second year of high school (aged 13/14), which determined the subjects they would be taking for the next two years. In most cases, decisions were reported to have been based on things that they enjoyed or were good at, rather than on a career plan. 5.3 The next set of subject choices comes in fourth year, when young people select which subjects to study in fifth and potentially also sixth year. These usually (but not always) involve progressing a selection of subjects taken in fourth year to a more advanced level. There were a couple of examples provided of where young people had selected subjects at this stage that they had not studied previously, but this often proved challenging as they lacked a basic grounding in the subject and felt behind the rest of the class. 5.4 The subject options available to young people during fifth and sixth year were cited by some participants as a positive feature of the Scottish education system, as they enable them to "keep their options open" with the potential to study up to ten subjects over the two years. Related to this, some reported that sixth year offered a second chance to those who do not get the grades they are expecting (or hoping for) in fifth year. However, others felt that the continued focus on school subjects and exams in the final two years of school was too inflexible and that there needs to be a greater range of options available to help prepare them for the next stage. Young people would like more time and support for subject choices, including guidance on the implications of different choices on career options. 5.5 Several workshop participants said that they would have liked more time to make subject choices as the process often felt rushed. This was particularly true for young people suffering from anxiety or other mental health conditions, who were more likely to worry about making the right decision.
They would also have liked more detailed advice and guidance on the implications of these choices on what they would be able to do next, including information on what jobs are related to which subjects. "It would have been good to have more time to choose my subjects – it all felt very rushed." Christian, aged 17. "It would have been good to know about college courses and entry requirements before making my subject choices." Donna, aged 19. 5.6 Some young people reported a tension between choosing subjects that they enjoyed versus those offering better career opportunities. For example, workshop participants reported a strong push on Science, Technology, Engineering and Maths (STEM) subjects within schools, even in cases when they were not a good fit for an individual's skills and interests. This was said to be resulting in some young people selecting STEM subjects, but then dropping them midway through the academic year. In other cases, young people had to go against the wishes of the school in order to pursue the subjects in which they were interested. Related to this, several young people cited the need to be pro-active, resilient and determined as being key to progressing successfully in their learner journey. "I decided to study advanced music in sixth year (piano and bass guitar) rather than science as this was my interest." Megan, aged 20. 5.7 The online resources available to young people to help inform subject choices (including My World of Work and Planit Plus) were reported as being informative and useful by several participants who had gone (or were planning to go) to university or who had a clear idea about what they wanted to do. They were referenced much less frequently by those who were pursuing other pathways or who were unsure about what they wanted to do.
Of those that found them useful, most said that they would also have liked the opportunity to discuss their options with someone who was knowledgeable about the career pathway that they were looking to pursue. 5.8 Most of the young people who had gone (or were planning to go) to university reported knowing from a young age that this was the route they wanted to go on. As referenced elsewhere in this report, this was often not a conscious decision – it was assumed by teachers and careers advisers that they would go to university if they performed well academically. Feedback on the application process for university was very positive. The UCAS system was described as straightforward, clear and easy to navigate. Schools were also reported to be very familiar with the system, meaning that they are able to offer guidance and support to young people with this. 5.9 College was viewed as the preferred secondary option for those who did not meet the conditions to go to university immediately on leaving school, sometimes with a view to getting the required qualifications to enable them to progress to university later. Related to this, several participants cited good links between colleges and universities as having helped them to progress in their learner journey. Apprenticeship opportunities were reported as being very rarely discussed with those who chose to stay in school past fourth year, with the assumption being that they were on a university pathway (either directly or through college). 5.10 College was also reported as being the preferred option presented to those who wanted to leave school before the end of the senior phase. Several workshop participants reported being 'forced' to complete college application forms before being 'allowed' to leave school, despite making it clear that they had no intention of taking a place if successful.
This experience was widespread and came up in all of the workshops involving young people who had left school before the end of the senior phase. For those going to university, course choices were usually based on personal interests and subjects that young people enjoyed rather than a long-term career plan. 5.11 Feedback from the workshops suggests that there is a lot of support available within schools to complete college and university application forms, including personal statements. However, there seemed to be less adequate support available to help young people decide which subjects and courses to apply for, and the advice that is provided is sometimes at odds with what young people want to do. Kerri was identified by her school as someone who would do well in science and the school expected her to continue studying this at university. However, Kerri hated the subject and preferred arts and humanities. She made the decision to study history at university – a decision that was against the wishes of her school. She explained that the school was "shocked" that she applied to study history. She is happy with her decision, but recognises that it required extra effort and courage on her part to go against the school and study a subject that she enjoyed. 5.12 Young people reported feeling that it was important to choose subjects that they were going to like given that they could be studying them for up to four years. At this stage, most were not thinking about what opportunities would be open to them at the end – that still felt too far in the future. In this way, university was a means of deferring entry to the labour market and decisions relating to this. Rebecca always knew that she wanted to go to university, but struggled to decide what to study and what classes to take when she got there. She decided to study psychology simply because it looked interesting (she had no experience of studying the subject before applying).
As it turns out, Rebecca enjoyed the subject and has continued to study it at postgraduate level – she is now completing a PhD in psychology. Although the subject choice worked out well for her, she recognises that it was not an informed choice. 5.13 The main exceptions to this are when young people have a fixed idea about exactly the type of job they want to do and the routes into this are clear and easy to navigate. These are usually traditional occupations and professions with well-established career pathways. Abby wants to be a dentist when she is older. She selected subjects for her National 5 exams based on the entry requirements for the course. She excels at school and achieved high marks in her prelims, which she hopes she can maintain in the final exams in May. She plans to study Highers and Advanced Highers in 6th year before going to Dundee University to study dentistry. For young people pursuing more technical or vocational routes, decisions are often based on what is available locally at the time they are looking. 5.14 Most young people pursuing technical or vocational routes after school tend to make decisions based on the options that are available locally. These tend to be short-term and transient – something to do now – rather than as part of a career plan or a stepping stone towards a goal. This can lead to a series of 'false starts' before they manage to settle on something that they enjoy. "I attended college for a year training to be a chef. It turns out the industry wasn't for me. I then worked in several jobs – in a restaurant, a call centre and then for a landscaping company. I have now applied to be a funeral arranger, but have also applied for an MA in Carpentry and Joinery. I have wasted so much time since leaving school not knowing what I wanted to do and chopping and changing my mind." Abbie, aged 19. "I left school at 15 to go to work in a hairdresser. I didn't like it and started looking at college courses to get into childcare.
With help from my old guidance teacher (who gave me a recommendation), I got a place on a Level 2 Introduction to Caring for Children and the Elderly. However, I was frustrated that I was mainly learning about caring for the elderly and not children. I left after 8 months as I didn't feel it was benefiting me. I decided that I would like to get into working in retail and got a Level 2 qualification working in a charity shop, where I still volunteer now and again. I did this for two years and completed my apprenticeship. After this, I still had a desire to work with children and applied for a childcare apprenticeship. I am now in my second year and am enjoying it." Megan, aged 20. Taking time out of formal education can help young people to think about what they want to do, travel, explore different options and develop their wider skills and confidence. 5.15 A small number of workshop participants reported that they had delayed the decision of what to do next by taking time out of formal education. In all cases, this was reported to have had a positive impact, both on their personal and social development and in helping them to decide on the next steps. "I took a year out after school to deal with my health issues. After this, I was ready to focus on what I wanted to do next." Ashleigh, aged 20. "After college, I didn't feel ready for university and so went to Cambodia to volunteer for three months. I still wasn't ready for university and didn't know what I wanted to do. I moved to Milan for a year to do au pair work and learn Italian. I came back and started an engineering apprenticeship…. Taking the time out helped me to make the right decision for me." Gemma, aged 22. "I want to go to university, but I'm not sure what to study. I chose science, but then changed to humanities. I have deferred entry to university for a year to take a scholarship in China. This will give me more time to make my decision." Ana, aged 17. 
"Going travelling is the best thing I have done in my life." Kirsty, aged 24. 5.16 As part of the ice-breaker task as the start of each workshop, participants were asked to write down something that they wanted to do in life. The most common response across all of the workshops was to "travel the world". This suggests that many young people would welcome the opportunity to take time out of the 'system' to have new experiences. However, this is not a practical or realistic option for most young people who are not being financially supported by their parents or who are in poverty. When young people are approaching the end of their first destination from school, there are more decisions to be made about what to do next. 5.17 For young people completing apprenticeships, there is an assumption is that they will continue in the same occupation or industry having achieved industry-recognised qualifications and experience. However, this is far from guaranteed and many of the workshop participants reported that they had changed course at the end of their apprenticeship and decided to pursue other options. Again, this was often driven by the opportunities that were available locally at the time. 5.18 As referenced earlier in this report, a lot of young people who go to university defer consideration of labour market options until the latter stages of their degree. The exceptions to this are those who have a clear idea of what they want to do from the outset of their course. The sample of young people who had completed a degree was small, but most said that their exploration of options for what to do next was self-directed rather than guided by university teaching or careers staff. However, at least one participant described the support they had received through the university careers centre as being 'very helpful'. 5.19 For the young people that participated in the workshops, college was often viewed as a 'stepping stone' to other destinations. 
For example, it is often used by young people to 'top-up' the qualifications required for entry to university or apprenticeships. It is also used to get the qualifications required for particular jobs. In these cases, the desired next steps are clear, but are dependent on the opportunities being available. 5.20 In some cases, completion of a college course can enable entry to second year of university. However, a couple of the workshop participants reported that, when presented with that option, they chose not to take it. The reasons related to concern that they might find themselves behind the rest of the class on core subjects, and that missing out on the social elements of first year (such as freshers' week) would make it harder for them to fit in and make friends. A lot of young people find it difficult to get full-time employment and many report negative early experiences of the world of work. 5.21 The challenges faced by young people trying to secure full-time employment were a key topic of discussion during the workshops. Many participants' early experiences of the world of work were negative, with some reporting that they felt they had been (or were being) exploited or unfairly treated by employers. Examples included not being paid what they had been promised, not being given holiday pay, being made redundant with little or no notice or explanation, and extended work experience placements with no guarantee of a job at the end. Zero-hours contracts were also prevalent amongst this age group, which makes it difficult to make financial plans or live independently. 5.23 Erin left school at 16 and got a job straight away in retail. He loved his job and was good at it, achieving all of his sales targets and getting on well with customers. Erin was given a lot of responsibility and was pleased at being trusted enough to have a set of keys for the shop. However, the business was not doing well and he was made redundant.
Since then, he has had a few short-term and zero-hours contracts, but has been unable to find secure employment. He misses work and having a routine. Lack of work experience is a key barrier to young people getting jobs. 5.24 Young people on employability programmes, or not in employment, education or training, were sometimes submitting upwards of 30 job applications per week and not hearing anything back. These types of open job applications were perceived as being a waste of time for those with few qualifications and limited work experience. A more successful route to securing opportunities was said to be through connections made through employability programmes (rather than the programme itself) and targeted work experience placements. "I couldn't get a job or an apprenticeship after leaving school. My friend told me about this course and here I am 10 weeks later. I have gained work experience, which will be good for my CV, as well as a good reference." Ray, aged 18. "My work experience placement has given me a lot of hope and confidence for the future." Josh, aged 22. 5.25 Similarly, the challenges faced by college and university leavers in securing full-time employment often related to a lack of relevant work experience. A couple cited unpaid graduate internships as a means of getting this experience. However, these were described as "exploitative and wrong" and not a feasible option for those who were not being financially supported by their parents.
In the off-season, the Canadiens replaced head coach Pat Burns, hiring former Quebec Nordiques, St. Louis Blues and Detroit Red Wings head coach Jacques Demers to take his spot. The team also made some trades during the summer, acquiring Vincent Damphousse from the Edmonton Oilers and Brian Bellows from the Minnesota North Stars. Denis Savard was named an alternate captain, following Mike McPhee's trade to the North Stars. The Canadiens got off to a quick start, sitting on top of the Adams Division with a 16–5–3 record in their opening 24 games. The team then slumped to an 8–9–2 record in their next 19 games and fell behind their provincial rivals, the Quebec Nordiques, in the standings. Montreal got hot, going 17–4–1 to take a commanding lead in the division, but a late-season slump, in which Montreal went 7–11–0 over their final 18 games, saw them fall behind the Boston Bruins and Nordiques and finish third in the division with 102 points and a 48–30–6 record. On January 25, 1993, rookie Ed Ronan scored just 14 seconds into the overtime period to give the Canadiens a 3-2 home win over the Boston Bruins. It would prove to be the fastest overtime goal scored during the 1992-93 NHL regular season. Four Canadiens (Brian Bellows, Vincent Damphousse, Stephan Lebeau and Kirk Muller) reached the 30-goal plateau. In his first season with the team, Vincent Damphousse led the club offensively, scoring 39 goals and earning a team-high 97 points. Brian Bellows, also in his first season in Montreal, had a team-high 40 goals and finished with 88 points. Kirk Muller scored 37 goals and had 94 points, while Stephan Lebeau had a breakout season, earning 80 points. Eric Desjardins led the blueline with 13 goals and 45 points, while Mathieu Schneider also recorded 13 goals from the blueline and finished with 44 points.
In goal, Patrick Roy played the majority of the games, leading the club with 31 wins and a 3.20 GAA in 62 games, earning two shutouts along the way. Andre Racicot backed up Roy, winning 17 of 26 games while posting a 3.39 GAA and a shutout. At the beginning of the 1992-93 NHL season, Upper Deck made Patrick Roy a spokesperson. Roy was an ideal choice as he was himself a hockey card collector, with a collection of over 150,000 cards. An ad campaign was launched, and it had an adverse effect on Patrick Roy's season: Upper Deck's slogan, "Trade Roy", was posted on billboards throughout the city of Montreal. A Journal de Montreal poll, published on January 13, 1993, indicated that 57% of fans favoured trading Patrick Roy. Before the trading deadline, Canadiens General Manager Serge Savard insisted that he would consider a trade for Roy. The Canadiens ended the season by winning only 8 of their last 19 games. In the playoffs, the Canadiens opened up against their Battle of Quebec rivals, the Quebec Nordiques. Quebec finished in second place in the division, two points ahead of Montreal. Quebec opened the series with two wins on home ice, sending the series back to Montreal. The Canadiens responded in the third game with a 2–1 overtime win to cut the Nordiques' series lead to 2–1. Montreal followed that up with a solid 3–2 win in game four to even the series as it shifted back to Quebec City. Game five couldn't be settled in regulation time, with the Canadiens and Nordiques tied 4–4, and Montreal stunned the Nordiques' home crowd with an overtime goal to win the game 5–4 and take control of the series with a 3–2 lead heading back to the Forum for the sixth game. Montreal then closed out the series at home, defeating the Nordiques 6–2 to advance to the second round of the playoffs for the tenth straight season. Up next were the Buffalo Sabres, who had upset the division-winning Boston Bruins in the opening round.
Montreal finished 16 points ahead of the Sabres during the regular season. The Canadiens, who had ended their series with the Nordiques with four straight wins, continued their hot streak, defeating the Sabres by identical 4–3 scores in the opening two games, winning the second game in overtime. The series then moved to Buffalo, but Montreal recorded another 4–3 overtime victory to take a commanding 3–0 series lead. The Habs swept Buffalo with yet another 4–3 overtime win in game four, moving to the Conference final for the first time since 1989. The Canadiens' next opponent was the surprising New York Islanders, who had just defeated the heavily favoured Pittsburgh Penguins to earn a spot in the Conference finals. The Islanders had 87 points in the regular season, 15 fewer than Montreal. The Canadiens stayed red hot, with a 4–1 victory in the first game, before winning 4–3 in double overtime to take a 2–0 series lead and extend their winning streak to 10 games. Game three on Long Island again went to overtime, with Montreal winning 2–1 to take their eleventh straight playoff game, tying the NHL record set by the Pittsburgh Penguins and Chicago Blackhawks in the 1992 playoffs. The Islanders held off the Canadiens in the fourth game to avoid the sweep and end the Canadiens' winning streak; however, Montreal closed out the series in the fifth game to move on to the Stanley Cup Finals for the first time in four years. Montreal's final opponent of the playoffs was the Los Angeles Kings. The Kings, led by Wayne Gretzky, had defeated the Calgary Flames, Vancouver Canucks and Toronto Maple Leafs to earn their first ever trip to the Stanley Cup Finals. Los Angeles finished the season with 88 points, 14 fewer than Montreal. The first game, held at the Forum, belonged to the Kings, as they stunned the Montreal crowd with a 4–1 victory.
Montreal rebounded in game two, as a late penalty call on Marty McSorley for using an illegal stick gave the Canadiens a late powerplay, on which they scored to tie the game at 2–2. The game headed into overtime, and Montreal again prevailed, winning 3–2 to tie up the series. The series moved to Los Angeles for the third game, and Montreal continued their overtime magic with a 4–3 OT victory to take a 2–1 series lead. The fourth game again went to overtime, and again the Canadiens won, their NHL-record tenth consecutive overtime victory, to take a 3–1 series lead with the series headed back to Montreal for the fifth game. The Canadiens had few problems with a tired Kings team in the fifth game, winning 4–1 and earning the 24th Stanley Cup in team history. Patrick Roy was named the winner of the Conn Smythe Trophy. It was the franchise's most recent Stanley Cup championship as of 2018, and the last time a Canadian team won the Cup. Roy would win two more Stanley Cups with the Colorado Avalanche, in 1996 and 2001. Jesse Belanger played 19 regular season games and nine playoff games but did not play in the finals; his name was included on the Cup even though he did not qualify. #6 Oleg Petrov (RW) played nine regular-season games and one playoff game but was left off the Cup and the team picture. He spent the rest of the season in the minors. Montreal did not include Aldo Giampaolo, Fred Steer, Bernard Brisset (Vice Presidents) or Claude Ruel (Director-Player Development) on the Stanley Cup, even though there was more than enough room. In 1986, Montreal had included three of their four vice presidents and the Director-Player Development on the Cup. All seven members were awarded Stanley Cup rings, along with scouts and other non-playing members. Included in the team picture, but left off the Stanley Cup: Stephane T. Molson (Secretary - Molson Family Foundation)†, Eric H. Molson (Chairman of the Board, The Molson Company Limited)†.
Names are not on the Stanley Cup even though they qualified as owners of Montreal.
What will happen with rentals in single-family houses when inflation hits us? I personally own, rent, sell and manage hundreds of SFHs in Decatur, Illinois. I personally expect house prices to rise, and some tenants will have a hard time paying rent, so renting to private-pay people will have to be more selective, with possibly larger deposits taken before they move in. ALSO - I am converting some private-pay homes into gov-subsidized (Section 8) homes, because the rent is guaranteed by Uncle Sam. I don't have to worry about tenants making rent since the government pays it for them.
Who was the first black U.S. senator? Hiram Revels (1822-1901) of Mississippi became the first black senator on February 25, 1870. He completed the term begun by Jefferson Davis, who had resigned to become the president of the Confederacy. Aside from Blanche K. Bruce, who represented Mississippi from 1875 to 1881, there were no other black senators until 1966, when Edward Brooke, a Republican from Massachusetts, took office.
How to choose a proper feature selection method for your data? Go from easier methods to more complicated ones, and from linear methods to non-linear ones. The Mahalanobis distance is a multi-dimensional generalization of the idea of measuring how many standard deviations away a point P is from the mean of a distribution D. It transforms the random vector into a zero-mean vector with an identity matrix for covariance; in that space, the Euclidean distance can safely be applied. It can be used to identify outliers, i.e. data points far away from the distribution of the data. We can consider each feature one at a time (multivariate reduced to univariate), in which case the covariance matrix reduces to a diagonal matrix. We can then rank the features by the distance, and delete one feature at a time to identify the best combination of features by investigating the changes in the metric. Mutual information (MI) is more general and determines how similar the joint distribution p(X,Y) is to the product of the factored marginal distributions p(X)p(Y). I(i) is a measure of dependency between the density of variable xi and the density of the target y. Intuitively, mutual information measures the information that X and Y share: it measures how much knowing one of these variables reduces uncertainty about the other. I(X; Y) = 0 if and only if X and Y are independent random variables. Moreover, mutual information is non-negative (i.e. I(X;Y) ≥ 0) and symmetric (i.e. I(X;Y) = I(Y;X)). Relevance to the target class c is usually characterized in terms of correlation or mutual information. We can get a ranking list of all the features, and apply a wrapper method on top of the ranking. Data-reliability-based feature selection: a feature is considered reliable (or relevant) if its values are tightly grouped together. SMOTE: first it finds the n nearest neighbours in the minority class for each of the samples in that class, then it draws lines between a sample and its neighbours and generates random points on those lines.
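The SMOTE interpolation step described above can be sketched in a few lines of NumPy. This is a minimal, naive illustration (no library is assumed; the `smote` function, the toy data and all parameter names are our own, not a reference implementation):

```python
import numpy as np

def smote(X_min, n_new, k=5, seed=0):
    """Naive SMOTE sketch: for each synthetic point, pick a random minority
    sample, pick one of its k nearest minority neighbours, and interpolate
    at a random position on the line between the two."""
    rng = np.random.default_rng(seed)
    n = len(X_min)
    # Pairwise distances within the minority class.
    dists = np.linalg.norm(X_min[:, None, :] - X_min[None, :, :], axis=-1)
    np.fill_diagonal(dists, np.inf)                # exclude self-distance
    neighbours = np.argsort(dists, axis=1)[:, :k]  # k nearest per sample
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(n)                        # random minority sample
        j = rng.choice(neighbours[i])              # one of its neighbours
        lam = rng.random()                         # position on the line
        synthetic.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.array(synthetic)

# Toy minority class: 10 points in 2-D.
X_min = np.random.default_rng(1).normal(size=(10, 2))
X_new = smote(X_min, n_new=25, k=3)
```

Because each synthetic point is a convex combination of two existing minority samples, every generated point lies inside the bounding box of the minority class.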
ADASYN: after creating those samples it adds small random values to the points, making them more realistic. In other words, instead of all the samples being linearly correlated to the parent, they have a little more variance in them, i.e. they are a bit scattered. Ensemble of resampled datasets: when dealing with an imbalanced data set, create multiple balanced data sets from the original imbalanced data set via sampling, and subsequently evaluate feature subsets using an ensemble of base classifiers, each trained on one balanced data set; the sampling ratio can also be varied across the resampled sets. Cluster the majority class: instead of relying on random samples to cover the variety of the training samples, cluster the abundant class into r groups, with r being the number of cases in the rare class. For each group, only the medoid (centre of the cluster) is kept. The model is then trained with the rare class and the medoids only.
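As a minimal illustration of the mutual-information criterion above, the discrete definition I(X;Y) = Σ p(x,y) log(p(x,y)/(p(x)p(y))) can be estimated directly from joint counts. The sketch below is plain Python; the example feature/target values are made up:

```python
from collections import Counter
from math import log

def mutual_information(xs, ys):
    """Discrete mutual information I(X;Y) in nats, estimated from paired samples."""
    n = len(xs)
    joint = Counter(zip(xs, ys))   # empirical joint distribution p(x, y)
    px = Counter(xs)               # marginal p(x)
    py = Counter(ys)               # marginal p(y)
    mi = 0.0
    for (x, y), c in joint.items():
        pxy = c / n
        mi += pxy * log(pxy / ((px[x] / n) * (py[y] / n)))
    return mi

# A feature identical to the target shares all of its information with it...
feature = [0, 0, 1, 1]
target  = [0, 0, 1, 1]
# ...while an independent feature shares none.
independent = [0, 1, 0, 1]
print(mutual_information(feature, target))       # log(2) ≈ 0.693
print(mutual_information(independent, target))   # 0.0
```

Ranking features by this score and then greedily dropping the lowest-ranked ones is one way to realize the filter-then-wrapper idea described above.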
What is the right way to manage divorce proceedings of a business owner? Divorce proceedings of a business owner can be expensive and ugly. The proceedings can involve lengthy arguments and mutual slander that may damage the owner's reputation and even lead to losing control of the company due to unfair distribution of shares. Unique difficulties may arise during the divorce proceedings of a business owner, such as the fear of losing control of the business, a collapse of the company as a result of incorrect share distribution, and damage to the reputation of the business owner and of the brand that they have worked on for many years. Divorce proceedings for senior business owners require different treatment, since the situation is delicate and complex, and may damage both the image that the employees and the board of directors have of you and the image that competing companies have of you. In order to protect the owner's reputation and minimize the damage that could be caused to the business, it is of the utmost importance to manage "quiet" divorce proceedings without press releases, and without the children or spouse sharing their feelings and experiences on social media such as Facebook and WhatsApp. The Spouses' Property Relations Law is intended to regulate the distribution of assets in case of divorce. According to the law, each spouse is entitled to half of the joint assets, including the rights in the business. However, the law also determines that assets that belonged to one party before the marriage, or that were received by inheritance or as a gift during the marriage, will not be divided between the couple. If the parties signed a prenuptial agreement, before or after the wedding, the rules of this agreement replace the instructions of the law and determine the manner of distribution of assets.
A married business owner, who is interested in the separation of assets that are accumulated over the years, such that they belong to each spouse separately and will not be subject to arbitrary distribution according to the Spouses' Property Relations Law, may do so via a prenuptial agreement, with advice and instruction from a lawyer specializing in family law. The agreement requires the official approval of the family court during the marriage or notarial approval prior to marriage. The approval will only be given after the court or the notary is convinced that the parties drafted the agreement out of their own free will and understand its meaning and consequences. The Mandatory Mediation Law aims to regulate family disputes and prevent debilitating and costly divorce wars in court. Currently, the law requires spouses prior to divorce to try and settle the dispute through mediation and not in court. The parties are required to apply for conflict resolution and to attend four mediation meetings with a welfare officer at the social services offices without the involvement of an attorney. The aim is to receive initial assistance without payment as well as important information regarding child support, custody and division of property. Proper and professional mediation may lead to a settlement outside of court that is acceptable to both parties and thereby avoid exhausting discussions, headaches and high financial expenses. One of the options for a divorce settlement is a divorce agreement in the form of a contract signed by both spouses and handling child custody, support payments and division of property. The main objective of the agreement is to reach mutual agreements and to establish them outside of the courts. The parties have the right to reach an agreement in the framework of a divorce agreement or to file a claim regarding any subject to one of the competent courts, while the final decision rests with the family court or the rabbinical court. 
Any of these becomes valid only after approval, after which the parties are required to follow its clauses. After the divorce is granted following judicial hearings, the divorce proceedings are completed and the parties may manage their businesses separately, without fear of harming control of the business, its reputation, or of the business collapsing. Where should I file a claim for the distribution of assets of business owners? Two courts deal with divorce: the family courts and the rabbinical courts. The main difference is that the rabbinical court rules conservatively, in consideration of religion and Halacha, while the family court is influenced by liberal secular perspectives. The moment a claim is filed with one of the relevant courts, that court has exclusive jurisdiction over the claim, and any other claim filed later is dismissed immediately. Regarding the division of assets of business owners, it is recommended that the parties file a claim with the family court, which is more flexible and open to suggestions and compromises.
The diets of cats (Felis catus) and foxes (Vulpes vulpes) killed during predator control at a semi-arid site in Western Australia were studied to see which prey species may be affected by predation from these introduced predators. The number of items, biomass and frequency of occurrence of each food type in the gut contents from 109 feral cats, 62 semi-feral cats and 47 foxes were used to calculate an Index of Relative Importance for each food category for each predator. Mammals were the most important prey group for all three predators, with rabbit being the most highly ranked prey species. The diets of feral and semi-feral cats were similar in dietary diversity but differed in the frequency of occurrence of some food categories. Native rodents, birds and reptiles occurred more frequently and were ranked higher in the diet of feral cats, and food scraps occurred more frequently in the diet of semi-feral cats. The diet of foxes was less diverse than that of either group of cats. Invertebrates and sheep carrion were more important prey categories for foxes than for cats. In the summer-autumn period, foxes ate more sheep carrion and invertebrates than they did in winter-spring. The diet of feral cats was more diverse in summer-autumn, including a greater range of invertebrates and more rodents, birds and reptiles than in the winter-spring period. We predict that cats are more likely to have an impact on small vertebrates at this site and that the control of cats could lead to recoveries in the populations of native rodents, birds and reptiles. By contrast, the control of foxes alone may lead to a rise in cat numbers and a consequent detrimental impact on small vertebrate populations.
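The abstract does not give the formula for the Index of Relative Importance; the version commonly used in diet studies (Pinkas et al., 1971) combines exactly the three quantities mentioned, IRI = (%N + %W) × %FO. Assuming that is the formula used here, the calculation is trivially sketched (the rabbit figures below are hypothetical, not from the study):

```python
def index_of_relative_importance(count_pct, biomass_pct, occurrence_pct):
    """IRI = (%N + %W) * %FO: percent by number plus percent by biomass,
    weighted by percent frequency of occurrence across guts examined."""
    return (count_pct + biomass_pct) * occurrence_pct

# Hypothetical figures for rabbit in the fox diet:
print(index_of_relative_importance(40.0, 55.0, 70.0))  # 6650.0
```

Computing this per food category and per predator, then ranking categories by IRI, reproduces the kind of ranking the study reports.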
If you want to avoid the summer crowd on the Balearic Islands, you should plan your vacation in winter. January and February are the perfect months to enjoy a peaceful vacation on the Balearic Islands. You can enjoy the natural beauty of the islands in your own way. In winter, you can go hiking or biking, or explore the islands by car. The lemon and orange trees bear fruit in winter and look beautiful. In winter, you also get a chance to experience cultural events and more entertainment. A million almond trees burst into blossom. You may not find too much activity on the Balearic Islands in winter, but the nearby villages still offer various activities to keep you busy. Reasons to visit the Balearic Islands in winter There are a number of reasons to plan your vacation in winter if you really want to enjoy the natural landscapes, serene beauty, calm waters, and other activities on the popular Balearic Islands. • The best reason is that the summer crowds are gone. If you are not a party freak, winter is the best time to visit the Balearic Islands. You can enjoy the landscape, stunning beaches, cliffs and other wonders of nature. • All the goodness for which these islands are known is still there. You can still enjoy the stone-wall houses, fresh produce, delicious cuisine, wonderful atmosphere, the turquoise-blue water, the beautiful picnic spots, fine sandy beaches, high cliffs, and the Mediterranean style of living. • If you still want to enjoy the nightlife, you can visit Palma, because the Majorcan capital keeps celebrating throughout the year. Biking is a beautiful activity during winter. You can explore the Serra de Tramuntana, which runs down the west of Majorca and is a World Heritage Site. • If you visit Ibiza in winter, you will get a chance to explore the beautiful island and its stunning scenery.
In winter you can enjoy shopping, because winter markets and car boot sales are popular in Ibiza at that time of year. You can also check out the hippy markets, where you can buy trinkets. • In winter, you can also enjoy the Medieval Festival held in the old town of Ibiza. It is the best time to enjoy entertainment, music, exhibitions, arts and crafts, food, and drinks. People from different cultures and religions take part in this festival. • In winter, several restaurants get together and promote local dishes. A program called ‘Degusta’ is organized for the promotion of local cuisine. They offer special meals with wine at a reasonable price. • If you visit Formentera, you can enjoy a warm and pleasant environment even in fall and winter. You can enjoy scuba diving; there are a number of diving spots nearby. Thus, there is a lot more to do and enjoy on the Balearic Islands in winter. If you love peace, winter is the best time to visit the Balearic Islands. Enjoy your villas in Costa Brava and write your own travel itinerary by booking villas with a private pool. Santiago de Compostela is a wonderful holiday destination to visit throughout the year. A large number of tourists visit this city every year to spend a memorable time with their friends and family. Galicia is a green and lush region in the western part of Spain. The town has become a popular holiday destination among British and American holidaymakers. Why is Barcelona a great holiday destination for children? Barcelona is a wonderful place to enjoy with your family. There are a number of museums and attractions that appeal to children and adults alike.
About 40% of people in this age group are considered to be overweight. They tend to gain more weight per year than overweight persons in other age groups, and they lose less weight when they enter weight control programs. In the study, both overweight people and people in the normal weight range believed that being overweight is socially acceptable, as is eating unhealthy foods and being inactive. The youth who were overweight in the study had more friends and romantic partners who were trying to lose weight than did peers in the normal weight range. The researchers believed that the overweight youth who were dieting together were receiving positive emotional support and encouragement from their friends, and that more appealing and effective lifestyle interventions need to be developed for this important group. The positive aspect of this piece is that overweight people are not socially excluded and that they are encouraged, probably now more than ever, to have a healthy active lifestyle.
ICC One Day International "Mini World Cup" The ICC Champions Trophy, also known as the Mini World Cup, is a One Day International cricket tournament, second in importance only to the Cricket World Cup. It was inaugurated as the ICC Knock Out tournament in 1998 and has been played every two years since, changing its name to the Champions Trophy in 2002. Originally, all ten full members of the International Cricket Council (ICC) took part, together with (for the first four competitions) two associate members. From 2009, this will be changed to the 8 highest-ranked ODI teams as placed 6 months out from the tournament. Since the quadrennial Cricket World Cup and ongoing ICC ODI Championship effectively determine the relative rankings of international cricket teams in one-day international cricket, there seems to be little need for the Champions Trophy as a junior tournament. However, the Champions Trophy is a financially important event for the ICC: money generated through the event is used in the ICC's Development Program. Although it is the second most important one-day tournament in cricket, the ICC Champions Trophy has sometimes been criticised by the media, with claims that there is no point to it when the World Cup exists as well. Before the 2004 tournament, Wisden described it as "the tournament that veers between being the second most important in world cricket and a ludicrous waste of time". Despite this controversy, many people are still fond of the tournament, and players enjoy having the opportunity to participate in it.
The first two tournaments, then named the ICC Knock Out tournament, took place in 1998 and 2000. These early tournaments were intended to raise the profile of the game in the host nations (Bangladesh and Kenya). All of the matches in the 1998 ICC Knock Out were played in Dhaka. The tournament started with a preliminary match between New Zealand and Zimbabwe to decide which would proceed to the Quarter Finals. South Africa (248-6, 47 ov) beat West Indies (245, 49.3 ov) by 4 wickets. As in 1998, all of the matches in the 2000 tournament were played at a single venue, this time Nairobi. There were three preliminary matches before the Quarter Finals, involving Kenya, India, Sri Lanka, West Indies, Bangladesh and England. New Zealand (265-6, 49.4 ov) beat India (264-6, 50 ov) by 4 wickets. The 2002 ICC Champions Trophy was a cricket tournament held in Sri Lanka in 2002. It was the third edition of the ICC Champions Trophy (the first two having been known as the ICC Knock-out). The tournament was due to be held in India, but was switched to Sri Lanka when an exemption from tax in India was not granted. Twelve teams competed: the 10 Test-playing nations plus the Netherlands and Kenya. The teams were split into four pools of three teams each. Each team played the other two teams in its pool once, and the four teams that headed each pool proceeded to the Semi Finals. The Final between India and Sri Lanka was washed out twice, leaving no result. The 2004 ICC Champions Trophy was held in England in September 2004. Twelve teams (the Test nations, together with Kenya, and the USA, who made their one-day international debut) competed in fifteen matches spread over sixteen days at three venues: Edgbaston, The Rose Bowl and The Oval. The 2008 Champions Trophy, which was due to take place in Pakistan in September 2008 but was postponed over security fears, was moved to South Africa by the International Cricket Council. Follow the news of the ICC Champions Trophy 2009 in South Africa here.
Australia have become the first team to win the ICC Champions Trophy title twice, beating New Zealand by six wickets in the final at SuperSport Park. Chasing New Zealand's 200/9, opener Shane Watson hit his second successive unbeaten century (105) to guide the defending champions to a six-wicket victory with 28 balls remaining. India added the 2013 Champions Trophy to the 2002 trophy they shared with Sri Lanka, beating England narrowly by 5 runs in a game that was there for the hosts to win. With this win, MS Dhoni became the first captain in the world to have won all ICC trophies. Under Dhoni's captaincy, India had earlier won the inaugural ICC World Twenty20 held in South Africa in September 2007 and the 2011 World Cup held in India, Sri Lanka, and Bangladesh. Chasing a modest 130 for a win, Ravi Bopara (30) and Eoin Morgan (33) had things under control and were going for the kill when Ishant Sharma turned it around with two wickets in two balls, after a six and two wide balls in the 18th over. That started a collapse that saw four wickets lost for three runs in eight balls. England finished on 124/8 after James Tredwell missed the last ball of the innings with six required for victory. India XI: Shikhar Dhawan, Rohit Sharma, Virat Kohli, Dinesh Karthik, Suresh Raina, MS Dhoni (capt & wk), Ravindra Jadeja, R Ashwin, Bhuvneshwar Kumar, Ishant Sharma, Umesh Yadav. England XI: Cook (captain), Bell, Trott, Root, Morgan, Bopara, Buttler (wicketkeeper), Bresnan, Broad, Tredwell, Anderson.
Learning how to get over jet lag is easy if you know a few tips that will help your body reset its clock and adjust easily to the new time zone. Before we learn how to get over jet lag, I think it's important to first understand exactly what we're dealing with. What is jet lag? In medical terms, jet lag is called desynchronosis. It is a state of the body in which the circadian rhythms have been altered. Jet lag usually happens when you're traveling fast (usually on a plane) across two or more different time zones within a single day. This confuses the body clock and desynchronizes it, which gives you all the known symptoms of jet lag, especially if you can't realign your circadian sleeping rhythms quickly. When you've traveled over several time zones within the same day, you will pretty much be suffering from jet lag. But what are the main symptoms that you will have to deal with? If you know that you'll be traveling long distance soon, it's good to know all about what you will be dealing with. Headaches - This is one of the most common symptoms of jet lag. A splitting headache or the start of a migraine is usually the first sign that you are suffering from jet lag. Insomnia and lack of sleep - Because your sleep cycles have been mixed up, you won't be able to fall asleep easily. In fact you'll find adjusting to the new time zone quite a chore. You'll most likely sleep during the day and stay awake during the night, following the time patterns from back home. Disorientation - A feeling of disorientation can often accompany jet lag. While jet lag is nothing really serious and most of us go through it at some point in our lives due to globalization, which makes recurrent traveling to far places a necessity, there are a couple of things that you can do to minimize its effects. Getting over jet lag fast can be done.
While it's not always feasible, sometimes you can look around for convenient travel schedules that will give you some extra time before your business meeting or program at the new place. If you know well in advance when you are going to travel, you can start readjusting your sleep schedule (if you can, of course) so that the impact will be minimized when you're actually traveling. Avoid drinking any alcohol, fizzy drinks or coffee, to keep your body properly hydrated (all of these stimulate water loss through urine production). Try to drink water and fruit juices instead. If you can, break your trip into smaller parts with overnight stays in between. Get plenty of exercise before your flight, eat well and have plenty of rest. Try to move around on the plane, and do some stretching exercises at the back of the plane. Walk along the aisle a few times every few hours if possible. Don't let your body become stiff and sore. Wear comfy clothes and shoes on the plane. Try to adapt to the new place as soon as possible. If you get there during lunch time, then eat lunch instead of breakfast, dinner, or instead of going to sleep as you'd usually do at home at that specific time. If you land during the day, go out and enjoy the sun. Don't go to sleep just because back at home it's night time. Take melatonin. Synthetic melatonin is said to help people who travel restore their regular sleeping patterns. A study reported in The British Medical Journal in 1989 confirmed these findings. Melatonin helps reduce the time jet lag has a hold over your body. If you are on any medication or you are suffering from diabetes, a heart condition or other serious health problems, it is advised that you consult your doctor prior to your trip to discuss how to take the medicine and what to do when you've landed in terms of continuing your current treatment at the new location.
The latest trick I learnt was: don't eat at all on the flight. Eat before you leave and eat when you arrive, whichever meal is appropriate. Still drink lots of fluids, of course! It seemed to work for me on my last trip so I'll do it again! Great suggestions for the frequent traveler!
We provide free upgrades to updated versions of the same product that we may release within one year from the date of your purchase. For example, if you purchased your license in March 2000, you may use that license key for all updated versions of the same product released during the following year, up to March 2001 (both minor and major upgrades are covered by this offer). In other words, within one year after the purchase you get all upgrades to the product free of charge.
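The eligibility rule above is simple enough to express directly: a release is covered if it falls within one year of the purchase date. A sketch of that check (the function name and the leap-day handling are my own assumptions, not part of the policy):

```python
from datetime import date

def upgrade_is_free(purchase: date, release: date) -> bool:
    """True if the release date falls inside the one-year free-upgrade
    window that starts on the purchase date (inclusive at both ends)."""
    try:
        window_end = purchase.replace(year=purchase.year + 1)
    except ValueError:  # purchased on Feb 29 of a leap year
        window_end = purchase.replace(year=purchase.year + 1, day=28)
    return purchase <= release <= window_end

# The policy's own example: bought March 2000, free through March 2001.
print(upgrade_is_free(date(2000, 3, 15), date(2001, 2, 1)))   # True
print(upgrade_is_free(date(2000, 3, 15), date(2001, 4, 1)))   # False
```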
DD and his girlfriend love travelling, and one day they go to a strange cave. DD suddenly finds some valuable stones and water named "wistone" and "owenwater". If DD could take them to the outer world, he could earn a lot :) Unfortunately, DD only has a bottle with capacity W with him. He can only use this bottle to take them back. As you know, each type of "wistone" and "owenwater" has a different value, and "wistone" cannot be divided into smaller pieces, while "owenwater" can be taken in any amount as long as it is available. Now, you are asked to calculate the maximum value DD can take back. This problem contains several test cases; you should process them until the end of input. For each test case, there are two integers in the first line, N (1 <= N <= 100) and W (0 <= W <= 50000). N indicates the total number of types of "wistone" and "owenwater"; W indicates the capacity of the bottle. Each of the next N lines contains three integers, ai, bi (1 <= ai, bi <= 1000) and ti. ai indicates the amount, bi indicates the total value, and if ti is 0 this item is a "wistone", otherwise it is an "owenwater". For each test case, you should output one number (accurate to 2 digits to the right of the decimal point): the maximum value DD can get.
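The task mixes the 0/1 knapsack (indivisible stones) with the fractional knapsack (divisible water). One standard sketch, assuming the reading of the statement above: run a 0/1 knapsack DP over the stones, then for every possible split of the capacity fill the remainder greedily with water sorted by value per unit:

```python
def max_value(W, items):
    """items: (amount a, total value b, type t); t == 0 is a stone
    (indivisible), t != 0 is water (divisible).  Returns the best value."""
    stones = [(a, b) for a, b, t in items if t == 0]
    # Water ranked by value per unit of capacity, best first.
    waters = sorted(((b / a, a) for a, b, t in items if t != 0), reverse=True)
    # dp[w] = best total stone value using at most w capacity (0/1 knapsack).
    dp = [0.0] * (W + 1)
    for a, b in stones:
        for w in range(W, a - 1, -1):
            dp[w] = max(dp[w], dp[w - a] + b)
    best = 0.0
    for used in range(W + 1):           # capacity reserved for stones
        total, cap = dp[used], W - used
        for unit, amount in waters:     # fill the rest greedily with water
            take = min(cap, amount)
            total += take * unit
            cap -= take
            if cap == 0:
                break
        best = max(best, total)
    return best

# A stone of weight 5 worth 10, plus 10 units of water worth 1 per unit:
print("%.2f" % max_value(10, [(5, 10, 0), (10, 10, 1)]))  # 15.00
```

The greedy fill is valid only for the divisible items, which is why the stones must go through the DP first; the loop over `used` tries every way of dividing the bottle between the two kinds of loot, giving O(N·W) time overall.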
Bio: Ada was born in 1872 in Bottisham, Cambridgeshire, the daughter of Charles Smoothy, who was originally from Kedington, and Susannah Cornwell of Great Wratting. We see her first on the 1881 census at Great Thurlow, where she is a scholar aged 8 living in a cottage with her parents. She is the oldest of five children, the others being William, 6, Flora, 4, Hester, 3, and Rachel, 1. The other children had been born in Great Wratting, Great Bradley and Great Thurlow, so the family must have moved around a fair bit. Ada's father Charles died in 1885 at the age of only 32, and so on the 1891 census we see her mother Susannah as a widow bringing up the children alone. They have moved to Haverhill, and are living at 6 Crown Passage. Both Ada and her mother are hair weavers, siblings William and Florence are working at Gurteens (clothing factory), whilst another sister, Esther, is a domestic servant. Younger siblings Rachel, Bessie and Alfred are all scholars. Ada married Charles Whiting, a labourer of Withersfield, in 1895 at St Mary's Church in Haverhill. Her residence at the time is listed as Castle Camps; maybe she was working there at the time. Charles and Ada had one daughter, Florence Ethel, who was born in 1896 in Haverhill. Tragedy struck the young family two years later, when Charles died in 1898 at the age of 25. I'm not sure of the cause of his early death. He was buried at the Withersfield church on 9th May 1898. On the 1901 census we find Ada and daughter Florence living with Ada's widowed mother Susannah at 34 Mount Road, Haverhill. Ada and her mother are working as hair weavers along with Ada's sister Esther. Her other sister Bessie is a trousers machinist. Ada married again, in 1902, to David Poole. On the 1911 census we see them living at 11 Chauntry Row, Haverhill. Florence is still at home, aged 14, working now as a dressmaker, whilst her mother and stepfather both work as cloth weavers. It doesn't appear that David and Ada had any children together.
What to do in Orlando The best of the Orlando Florida area, with information on top attractions like Disney World's Magic Kingdom, Epcot, Animal Kingdom and Disney-MGM Studios, as well as Universal Studios, Islands of Adventure, SeaWorld and the many attractions downtown and along International Drive.
One class of theories of aging is based on the concept that damage, whether due to normal toxic by-products of metabolism or to inefficient repair/defensive systems, accumulates throughout the entire lifespan and causes aging. In this essay, I present and review the most important of these damage-based theories. The general idea behind damage-based theories of aging is that a slow build-up of damage, perhaps even from conception (Gavrilov and Gavrilova, 2001), eventually leads to failure of the system, which can be seen as failure of a critical organ like the heart or of the whole body. It is useful to point out, however, that some authors (e.g., Olson, 1987; Holliday, 2004; Kirkwood, 2005) argue that aging is a result of many forms of damage accumulation, and hence that aging is due to an overlap of the mechanistic theories of aging described below. Aging has long been seen as a result of errors of many kinds. An early attempt to develop a theory encompassing the genetic and protein machineries was Orgel's hypothesis (Orgel, 1963). Essentially, his idea was that errors in transcription from DNA lead to errors in proteins, which build up over time and cause more errors in transcription, creating an amplifying loop that eventually kills the cell and leads to aging. Errors in DNA repair would also affect the accuracy of the flow of information in cells (Orgel, 1973). Indeed, damaged proteins accumulate with age, and enzymes lose catalytic activity with age (Gershon and Gershon, 1970). This can lead to cellular dysfunction and accumulation of other forms of damage. On the other hand, Orgel's hypothesis has been regarded as unlikely to be correct for various reasons: feeding abnormal amino acids to animals to increase the number of errors in proteins does not result in a shorter lifespan (Strehler, 1999, p.
293); errors in macromolecular synthesis also do not appear to increase with age (Rabinovitch and Martin, 1982); and fibroblasts aging in vitro do not show increased protein errors (Harley et al., 1980); in fact, cellular senescence appears to be caused by other mechanisms. Presently, Orgel's hypothesis is largely discarded. Even though Orgel's hypothesis failed the test of time, some age-related diseases could be due to protein defects and accumulating protein errors (e.g., see Lee et al., 2006; Morimoto, 2006), and a role for protein dysfunction in aging is a possibility. For example, in flies, the accumulation of protein aggregates is associated with impaired muscle function with age (Demontis and Perrimon, 2010). Proteasomes are protein complexes that degrade other proteins; their expression decreases with age (Lee et al., 1999) and this has been implicated as a factor contributing to aging (Friguet et al., 2000; Tomaru et al., 2012). Also, the half-life of proteins is longer in older animals (Friguet et al., 2000). One study found evidence that proteins involved in protein degradation are under selection in lineages where longevity increased (Li and de Magalhaes, 2013). Two studies found evidence that protein stability and protein homeostasis are enhanced in long-lived bats and in the naked mole-rat when compared to mice (Perez et al., 2009; Salmon et al., 2009), yet another study found no evidence that protein repair and recycling are correlated with longevity in 15 species of birds and mammals (Salway et al., 2011). Therefore, the results are not conclusive, but this is an area where further studies are warranted. Autophagy is a process by which the cell digests its own organelles and components. Recent studies, in particular genetic manipulations in model organisms, point towards a role of autophagy in aging (reviewed in Cuervo et al., 2005; Rubinsztein et al., 2011).
In flies, disruption of autophagy shortens lifespan (Juhasz et al., 2007) while enhanced autophagy extends it (Simonsen et al., 2008). Manipulation of autophagy-related genes has also been associated with longevity in yeast (Tang et al., 2008). There is in fact evidence that longevity-associated pathways, such as GH/IGF1, which is detailed elsewhere, and TOR (which some anti-aging interventions target), influence autophagy (Toth et al., 2008; Salminen and Kaarniranta, 2009; Kamada et al., 2010; Neufeld, 2010), though the causality of these links remains to be established since autophagy is related to other processes too. Dysfunction of autophagy has also been linked to neurodegenerative disorders (Wong and Cuervo, 2010). In mouse liver, autophagy declines with age and its maintenance through genetic manipulation can improve the ability of cells to handle protein damage, resulting in lower levels of damaged proteins and improved organ function (Zhang and Cuervo, 2008). While a lot of work remains to elucidate the role of autophagy in aging, it does appear that protein homeostasis is important for longevity and that its dysfunction could contribute to aging (Morimoto and Cuervo, 2009). In 1908, physiologist Max Rubner discovered a relationship between metabolic rate, body size, and longevity. In brief, long-lived animal species are on average bigger, as detailed before, and spend fewer calories per gram of body mass than smaller, short-lived species. The energy consumption hypothesis states that animals are born with a limited amount of some substance, potential energy, or physiological capacity, and the faster they use it, the faster they will die (Hayflick, 1994). Later, this hypothesis evolved into the rate of living theory: the faster the metabolic rate, the faster the biochemical activity, and the faster an organism will age. In other words, aging results from the pace at which life is lived (Pearl, 1928).
This hypothesis is in accordance with the life history traits of mammals, in which a long lifespan is associated with delayed development and slow reproductive rates (reviewed in Austad, 1997a & 1997b). As previously mentioned, caloric restriction (CR) is one of the most important discoveries in aging research. Although the mechanisms behind CR remain a subject of discussion (see below), since it involves a decrease in calories, one hypothesis put forward by George Sacher is that maybe CR works by delaying metabolic rates, in accordance with the energy consumption hypothesis (reviewed in Masoro, 2005). Body temperature is crucial in determining metabolic rate since the rate of chemical reactions rises with temperature. One common feature of animals, such as mice, rats, and monkeys, under CR is a lower body temperature (Weindruch and Walford, 1988; Ramsey et al., 2000), which is consistent with the energy consumption hypothesis. On the other hand, some studies in rodents suggest that CR can extend lifespan without reducing metabolic rate (Masoro, 2005). For example, some evidence indicates that mice under CR burn the same amount of energy as controls, suggesting they have similar metabolic rates. These studies, however, remain controversial in the way metabolic rate is normalized to metabolic mass (McCarter and Palmer, 1992). An alternative hypothesis is that CR shifts metabolic pathways (Duffy et al., 1990). More recent results suggest that previous studies used unreliable estimates of metabolic mass in their calculations and that CR does change metabolic rates, supporting the rate of living hypothesis (Greenberg and Boozer, 2000), yet the debate has not been settled. Several experiments have cast doubt on the energy consumption hypothesis. For instance, rats kept at lower temperatures eat 44% more than controls and yet do not age faster (Holloszy and Smith, 1986). In fact, mice with higher metabolic rates may live slightly longer (Speakman et al., 2004).
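Much of this controversy turns on how a trait is normalized for body mass before comparing species. A minimal sketch of the standard residual approach, with made-up illustrative values rather than measured data (the species numbers below are assumptions; only the method matters): fit a log-log regression of each trait against body mass, then correlate the residuals, so the mass trend is removed before asking whether metabolic rate and longevity are related.

```python
import math

# Illustrative, made-up values loosely in the range of real species
# (body mass in g, basal metabolic rate in W, maximum lifespan in years).
species = [
    ("mouse",  25.0,     0.3,   4.0),
    ("rat",    300.0,    1.7,   5.0),
    ("rabbit", 2500.0,   7.0,  13.0),
    ("dog",    15000.0, 25.0,  24.0),
    ("human",  70000.0, 90.0, 122.0),
]

def loglog_residuals(xs, ys):
    """Fit log(y) = a + b*log(x) by least squares; return the residuals."""
    lx = [math.log(x) for x in xs]
    ly = [math.log(y) for y in ys]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    b = (sum((u - mx) * (v - my) for u, v in zip(lx, ly))
         / sum((u - mx) ** 2 for u in lx))
    a = my - b * mx
    return [v - (a + b * u) for u, v in zip(lx, ly)]

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((u - mx) * (v - my) for u, v in zip(xs, ys))
    den = math.sqrt(sum((u - mx) ** 2 for u in xs)
                    * sum((v - my) ** 2 for v in ys))
    return num / den

masses = [m for _, m, _, _ in species]
bmr    = [w for _, _, w, _ in species]
spans  = [t for _, _, _, t in species]

# Raw correlation confounds both traits with body size; correlating the
# mass-corrected residuals is what "correctly normalized" means here.
r_raw      = pearson([math.log(w) for w in bmr],
                     [math.log(t) for t in spans])
r_residual = pearson(loglog_residuals(masses, bmr),
                     loglog_residuals(masses, spans))
```

With real comparative data this is the kind of calculation behind the claim that mass-corrected metabolic rate does not predict mammalian longevity.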
Mutations in the tau protein in hamsters increase metabolic rates and extend lifespan (Oklejewicz and Daan, 2002). Lastly, as detailed before, metabolic rates, when correctly normalized for body size, do not correlate with longevity in mammals (de Magalhaes et al., 2007a). Despite its intuitive appeal, the rate of living theory is practically dead. Based on observations from CR, it is likely that energy metabolism plays a role in aging but, as described below, it is not clear how this occurs. One hypothesis is that energy metabolism is linked to insulin-signaling, as mentioned ahead. Free radicals and oxidants--such as singlet oxygen, which is not a free radical--are commonly called reactive oxygen species (ROS) and are such highly reactive molecules that they can damage all sorts of cellular components (Fig. 1). ROS can originate from exogenous sources, such as ultraviolet (UV) and ionizing radiation, and from several intracellular sources. The idea that free radicals are toxic agents was first suggested by Rebeca Gerschman and colleagues (Gerschman et al., 1954). In 1956, Denham Harman developed the free radical theory of aging (Harman, 1956; Harman, 1981). Since oxidative damage of many types accumulates with age (e.g., Ames et al., 1993), the free radical theory of aging simply argues that aging results from the damage generated by ROS (reviewed in Beckman and Ames, 1998). Figure 1: ROS or reactive oxygen species can be formed by different processes, including normal cell metabolism. Due to their high reactivity, ROS can damage other molecules and cell structures. The free radical theory of aging argues that oxidative damage accumulates with age and drives the aging process. To protect against oxidation there are many different types of antioxidants, from vitamins C and E to enzymes such as superoxide dismutase (SOD), catalase, and glutathione peroxidase.
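The core detoxification chemistry of two of these enzymes is textbook material and can be written out explicitly: superoxide dismutase disproportionates the superoxide anion into hydrogen peroxide, which catalase then decomposes into water and oxygen.

```latex
% Superoxide dismutase (SOD):
2\,\mathrm{O}_2^{\,\bullet-} + 2\,\mathrm{H}^{+} \;\longrightarrow\; \mathrm{H_2O_2} + \mathrm{O_2}
% Catalase:
2\,\mathrm{H_2O_2} \;\longrightarrow\; 2\,\mathrm{H_2O} + \mathrm{O_2}
```

Note that SOD's product, hydrogen peroxide, is itself a ROS, which is why SOD overexpression experiments are often paired with catalase.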
Briefly, antioxidant enzymes are capable of degrading ROS into inert compounds through a series of chemical reactions (Ames et al., 1981; Ames et al., 1993). The simple existence of enzymes to prevent damage by ROS is a strong indicator that ROS are biologically important, dangerous molecules (de Magalhaes and Church, 2006). Most experimental evidence in favor of the free radical theory of aging comes from invertebrate models of aging. Transgenic fruit flies, Drosophila melanogaster, overexpressing the cytoplasmic form of SOD, called Cu/ZnSOD or SOD1, and catalase have a 34% increase in average longevity and a delayed aging process (Orr and Sohal, 1994). More recent findings, however, suggest that the influence of SOD1 and catalase in Drosophila aging may have been overestimated because the authors only took into account short-lived strains (Orr et al., 2003). Overexpressing bovine SOD2, the mitochondrial form of SOD, also called MnSOD, in Drosophila slightly extends average longevity but does not delay aging (Fleming et al., 1992). Also in Drosophila, expression of SOD1 in motor neurons increases longevity by 40% (Parkes et al., 1998) and overexpression in neurons of thioredoxin, a protein involved in reduction-oxidation (redox) reactions that can act as an antioxidant, increased lifespan by 15% (Umeda-Kameyama et al., 2007). Certain long-lived strains of both Drosophila (Rose, 1989; Hari et al., 1998) and the nematode worm Caenorhabditis elegans (Larsen, 1993) have increased levels of antioxidant enzymes. On the other hand, evidence against the free radical theory has also emerged from invertebrate models. Briefly, deletion of SOD2 in C. elegans surprisingly extends lifespan (Van Raamsdonk and Hekimi, 2009), and long-lived ant queens actually have lower levels of SOD1 (Parker et al., 2004).
Overexpression of SOD2 and catalase decreased mitochondrial ROS release and increased resistance to oxidative stress in Drosophila yet decreased lifespan (Bayne et al., 2005). In addition to antioxidants, some enzymes catalyze the repair of damage caused by ROS. One such enzyme is methionine sulfoxide reductase A (MSRA), which catalyzes the repair of protein-bound methionine residues oxidized by ROS. Overexpression of MSRA in the nervous system of Drosophila increases longevity (Ruan et al., 2002) while mice without MSRA have a decreased longevity of about 40% (Moskovitz et al., 2001). Whether the aging process is affected remains to be seen (de Magalhaes et al., 2005), although the results from Drosophila suggest that age-related decline is also delayed by MSRA overexpression. Another enzyme that repairs oxidative damage is 8-oxo-dGTPase, which repairs 8-oxo-7,8-dihydroguanine, an abundant and mutagenic form of oxidative DNA damage. But contrary to the results involving MSRA, when researchers knocked out the gene responsible for 8-oxo-dGTPase, although the mutated mice had an increased cancer incidence, their aging phenotype did not appear altered (Tsuzuki et al., 2001). Targeted mutation of p66shc in mice has been reported to increase longevity by about 30%, inducing resistance to oxidative damage, and maybe delaying aging (Migliaccio et al., 1999). Although the exact function of p66shc remains unclear, some evidence suggests it may be related to intracellular oxidants and apoptosis (Nemoto and Finkel, 2002; Trinei et al., 2002; Napoli et al., 2003). Also, transgenic mice overexpressing the human thioredoxin gene featured an increased resistance to oxidative stress and an extended longevity of 35% (Mitsui et al., 2002). Like p66shc, mammalian thioredoxin regulates the redox content of cells and is thought to have anti-apoptotic effects (Saitoh et al., 1998; Kwon et al., 2003).
More recent results suggest modest effects of thioredoxin overexpression on the lifespan of male mice and no effects on the lifespan of females (Perez et al., 2011). Neither p66shc nor thioredoxin is a "traditional" antioxidant, so these findings could be unrelated to the free radical theory of aging and instead reflect, for instance, effects on tissue homeostasis. Mice with extra catalase in their mitochondria lived 18% longer than controls and were less likely to develop cataracts, but they did not appear to age slower and their extended lifespan appeared to derive from a decrease in cardiac diseases throughout the entire lifespan (Schriner et al., 2005). Recently, mice overexpressing human MTH1, an enzyme that eliminates oxidized precursors from the dNTP pool, had lower levels of oxidative damage to DNA and were modestly (by ~15%) long-lived (De Luca et al., 2013). Lastly, the phenotype witnessed in a strain called senescence-accelerated mice may be related to free radical damage (Edamatsu et al., 1995; Mori et al., 1998). Experiments in feeding mice antioxidants--either a single compound or a combination of compounds--were able to decrease oxidative damage and increase average longevity but none of them clearly delayed aging (Harman, 1968; Comfort et al., 1971; Heidrick et al., 1984; Saito et al., 1998; Holloszy, 1998; Quick et al., 2008), while other studies did not conclude that feeding mice antioxidants increases longevity (e.g., Lipman et al., 1998). Several attempts have been made to overexpress or knock out antioxidants in mice, but the results have been largely disappointing (Sohal et al., 2002; de Magalhaes, 2005a; de Magalhaes and Church, 2006; Lapointe and Hekimi, 2010). Often animals do not show any differences in their aging phenotype when compared to controls (Reaume et al., 1996; Ho et al., 1997; Schriner et al., 2000).
In one of the most elegant experiments to test the free radical theory of aging, knockout mice heterozygous for SOD2 showed increased oxidative damage at a cellular and molecular level but did not show significant changes in longevity or rate of aging (Van Remmen et al., 2003). Ubiquitous overexpression of SOD1 in mice also failed to increase longevity (Huang et al., 2000). These results suggest that antioxidant proteins are already optimized in mammals. Indeed, correlations between rate of aging and antioxidant levels in mammals are, if they exist, very weak (reviewed in Finch, 1990; Sohal and Weindruch, 1996). Some studies found correlations between the levels of certain antioxidants and longevity in mammals, but failed to find any consensus (Tolmasoff et al., 1980; Ames et al., 1981; Cutler, 1985; Sohal et al., 1990). The long-lived naked mole-rat does not appear to have higher levels of antioxidants when compared to mice (Andziak et al., 2005). That antioxidants can increase longevity without affecting the rate of aging suggests that antioxidants may be healthy but do not alter the aging process, as debated elsewhere. Although ROS can have several sources, some argue that ROS originating from cellular metabolism in the cell's energy source, the mitochondrion, are the source of damage that drives aging. Since ROS are a result of cellular metabolism, the free radical theory of aging has been associated with the rate of living theory (Harman, 1981). One mechanism proposed for CR is that animals under CR produce less ROS and therefore age slower (Weindruch, 1996; Masoro, 2005), but since the rate of living theory seems out of favor this perspective will not be further discussed here. An alternative hypothesis is that the rate of mitochondrial ROS generation, independently of metabolic rates or antioxidant levels, may act as a longevity determinant (Sohal and Brunk, 1992; Barja, 2002).
Some results suggest that the rate of ROS generated in the mitochondria of post-mitotic tissues helps explain the differences in lifespan among some animals, particularly among mammals (Ku et al., 1993; Barja and Herrero, 2000; Sohal et al., 2002; Lambert et al., 2007) and between birds and mammals (reviewed in Barja, 2002). One pitfall of these studies is that technical limitations exist in measuring ROS production in isolated mitochondria. For example, none of these studies measures the levels of hydroxyl radical, the most reactive and destructive of the ROS; often, hydrogen peroxide and superoxide anion are measured since they can react to give the hydroxyl radical. Even so, such studies may not be representative of what actually occurs. Moreover, two studies in Drosophila found that lowering ROS leakage from the mitochondria either did not result in a longer lifespan (Miwa et al., 2004) or even resulted in a shorter lifespan (Bayne et al., 2005). On the other hand, comparative genomics studies have revealed several associations between features of mitochondrial DNA (mtDNA) across species and longevity (de Magalhaes, 2005b; Moosmann and Behl, 2008; Nabholz et al., 2008; Aledo et al., 2011), suggesting that changes to mitochondrial proteins may be involved in the evolution of long lifespans, though further studies are necessary to elucidate the mechanisms involved. Several pathologies in mice and humans derive from mutations affecting the mitochondrion, which often involve an increase in ROS leakage from the mitochondrion (Pitkanen and Robinson, 1996; Wallace, 1999; DiMauro and Schon, 2003). Yet these pathologies do not result in an accelerated aging phenotype, but frequently result in diseases of the central nervous system (Martin, 1978). One example is Friedreich's ataxia which appears to result from increased oxidative stress in mitochondria and does not resemble accelerated aging (Rotig et al., 1997; Wong et al., 1999). 
Deficiency of the mitochondrial complex I has been reported in a variety of pathologies such as neurodegenerative disorders (reviewed in Robinson, 1998). Cytochrome c deficiency has also been associated with neurodegenerative disorders (reviewed in DiMauro and Schon, 2003) as has selective vitamin E deficiency (Burck et al., 1981). Rats selected for high oxidative stress develop cataracts from an early age (Marsili et al., 2004); other pathologies, such as heart changes and brain dysfunctions, have been described in these animals (Salganik et al., 1994), yet it is unclear whether they can be considered progeroid. Perhaps ROS are involved in some pathologies involving post-mitotic cells, such as neurons. Another hypothesis is that mitochondrial diseases affect mainly the central nervous system due to its high energy usage (Parker, 1990 for arguments). There is some evidence of a mitochondrial optimization in the human lineage to delay degenerative diseases, but not necessarily aging (de Magalhaes, 2005b). Interestingly, both Drosophila and C. elegans are mostly composed of post-mitotic cells, which can explain why results from these invertebrates are more supportive of the free radical theory of aging than results from mice. Although it is undeniable that ROS play a role in several pathologies, including age-related pathologies like cataracts (Wolf et al., 2005), the exact influence of ROS in mammalian aging is debatable. It is plausible that ROS play a role in age-related degeneration of energy-rich tissues such as the brain; the brain may also be more susceptible to ROS because of its abundance of redox-active metals. One study found that superoxide dismutase/catalase mimetics prevented cognitive defects and oxidative stress in aged mice (Clausen et al., 2010). A similar study found that a SOD mimetic administered from middle age attenuated oxidative stress, improved cognitive performance, and extended lifespan by 11% (Quick et al., 2008). 
In conclusion, there is little direct evidence that ROS influence mammalian aging except perhaps in specific tissues such as the brain. Lastly, a changing paradigm is that ROS are not only damaging compounds but crucial in many cellular functions, and thus it is the deregulation of pathways managing ROS that can contribute to aging rather than merely damage accumulation with age (de Magalhaes and Church, 2006). The DNA, due to its central role in life, was bound to be implicated in aging (Fig. 2). One hypothesis then is that damage accumulation to the DNA causes aging, as first proposed by Failla in 1958 (Failla, 1958) and soon after developed by physicist Leo Szilard (Szilard, 1959). The theory has changed over the years as new types of DNA damage and mutation are discovered, and several theories of aging argue that DNA damage and/or mutation accumulation causes aging (reviewed in Gensler and Bernstein, 1981; Vijg and Dolle, 2002; Hoeijmakers, 2009; Freitas and de Magalhaes, 2011). Because DNA damage is seen as a broader theoretical framework than mutations, and DNA damage can lead to mutations, the current focus is on DNA damage and thus the theory herein is referred to as the DNA damage theory of aging. It is well-established that DNA mutations/alterations--many of them irreversible--and chromosomal abnormalities increase with age in mice (Martin et al., 1985; Dolle et al., 1997; Vijg, 2000; Dolle and Vijg, 2002) and humans (e.g., Esposito et al., 1989; Lu et al., 2004). Experiments in mice also suggest that DNA damage accumulates with age in some types of stem cells and may contribute to loss of function with age (Rossi et al., 2007). Long-lived mutant mice and animals under CR seem to have a lower mutation frequency, at least in some tissues (Garcia et al., 2008). Similarly, longevity of worm strains correlates with DNA repair capacity (Hyun et al., 2008). It is impossible, however, to tell whether these changes are effects or causes of aging.
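Whatever the direction of causality, the arithmetic of accumulation shows why even small differences in repair efficiency could matter over a lifetime. A toy sketch with assumed, purely illustrative rates (not measured values): if each cell division adds mutations at some fixed rate, a modest per-division improvement compounds over thousands of divisions.

```python
# Toy model: expected mutation burden after n divisions when each
# division adds mutations at a fixed per-division rate mu. Here mu
# stands in for repair efficiency: better repair means lower mu.
def expected_mutations(mu_per_division: float, n_divisions: int) -> float:
    return mu_per_division * n_divisions

mu_baseline = 1.0e-2   # assumed mutations per genome per division
mu_improved = 0.8e-2   # a hypothetical 20% improvement in repair
divisions   = 5000     # divisions over a lifetime (assumed)

burden_baseline = expected_mutations(mu_baseline, divisions)  # 50.0
burden_improved = expected_mutations(mu_improved, divisions)  # 40.0
# A modest 20% per-division difference leaves 10 fewer mutations per
# genome over the lifetime: small rate changes compound with time.
```

The model is deliberately linear; the point is only that per-division differences too small to detect in a short assay can still produce large differences in lifetime burden.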
Correlations have been found between DNA repair mechanisms and rate of aging in some mammalian species (Hart and Setlow, 1974; Grube and Burkle, 1992; Cortopassi and Wang, 1996). In theory, even a slight increase in DNA repair rate over a large period of time and hundreds of cell divisions will have major consequences and could contribute to determine rate of aging. On the other hand, it has been argued that such correlations may be an artifact of long-lived species being on average bigger (Promislow, 1994). As mentioned elsewhere, progeroid syndromes are rare genetic diseases that appear to be accelerated aging. Interestingly, the most impressive progeroid syndromes, Werner's, Hutchinson-Gilford's, and Cockayne syndrome originate in genes that are related to DNA repair/metabolism (Martin and Oshima, 2000; de Magalhaes, 2005a; Freitas and de Magalhaes, 2011). Werner's syndrome (WS) originates in a recessive mutation in a gene, WRN, encoding a RecQ helicase (Yu et al., 1996; Gray et al., 1997). Since WRN is unique among its protein family in also possessing an exonuclease activity (Huang et al., 1998), it seems to be involved in DNA repair. Although the exact functions of WRN remain a subject of debate, it is undeniable that WRN plays a role in DNA biology, particularly in solving unusual DNA structures (reviewed in Shen and Loeb, 2000; Bohr et al., 2002; Fry, 2002). In fact, cells taken from patients with WS have increased genomic instability (Fukuchi et al., 1989). Topoisomerases are enzymes that regulate the supercoiling in duplex DNA. WS cells are hypersensitive to topoisomerase inhibitors (Pichierri et al., 2000). As such, WS is an indicator that alterations in the DNA over time play a role in aging. As with WRN, the protein whose mutation causes Hutchinson-Gilford's syndrome is also a nuclear protein: lamin A/C (Eriksson et al., 2003). Recent results also suggest that some atypical cases of WS may be derived from mutations in lamin A/C (Chen et al., 2003). 
The exact functions of lamin A/C remain unknown, but the protein appears to be involved in the biology of the inner nuclear membrane. Some evidence suggests that the DNA machinery is impaired in Hutchinson-Gilford's syndrome (Wang et al., 1991; Sugita et al., 1995), again suggesting that changes in the DNA are important in these diseases and, maybe, in normal aging. The protein involved in Cockayne Syndrome Type I participates in transcription and DNA metabolism (Henning et al., 1995). Other progeroid syndromes exist, though the classification is subjective. For example, Nijmegen breakage syndrome, which derives from a mutated DNA double-strand break repair protein (Carney et al., 1998; Matsuura et al., 1998; Varon et al., 1998), has been considered as progeroid (Martin and Oshima, 2000). Numerous mouse models of accelerated aging have implicated genes involved in DNA repair such as the mouse homologues of xeroderma pigmentosum, group D (de Boer et al., 2002), ataxia telangiectasia mutated or ATM (Wong et al., 2003), p53 (Donehower et al., 1992; Donehower, 2002; Tyner et al., 2002; Cao et al., 2003), and Ercc1 (Weeda et al., 1997). Thus many progeroid syndromes in mice involve the DNA machinery (Hasty et al., 2003; de Magalhaes, 2005a). (I should note that most of the aforementioned genes, as well as other genes implicated in accelerated aging in mice, are described in the GenAge database.) Taken together, results from progeroid syndromes in mice and man support the DNA damage theory of aging. One hypothesis is that DNA damage accumulation with age triggers cellular signalling pathways (Fig. 2), such as apoptosis, that result in a faster depletion of stem cells which in turn contributes to accelerated aging (Freitas and de Magalhaes, 2011). In spite of the progeroid syndromes described above, some genetic manipulations in mice have failed to support the theory.
Mice deficient in Pms2, a DNA repair protein, had elevated mutation levels in multiple tissues yet did not appear to age faster than controls (Narayanan et al., 1997). Embryos of mice and flies irradiated with x-rays do not age faster (reviewed in Cosgrove et al., 1993; Strehler, 1999), though one report argued that Chernobyl victims do (Polyukhov et al., 2000). One classic study contradicting the DNA damage theory of aging showed that haploid wasps exposed to DNA damage have, as expected, a shorter lifespan than diploid wasps, but in a normal environment haploid and diploid wasps have the same lifespan (Clark and Rubin, 1961), which should not be the case if DNA damage accumulation causes aging. Certain mutations in DNA repair proteins, such as p53 in humans (Varley et al., 1997), despite affecting longevity and increasing cancer incidence, fail to accelerate aging. Figure 2: In spite of various DNA repair mechanisms, DNA damage leads to mutations and other problems which in turn lead to cell loss and dysfunction. With age, this causes depletion of stem cell stocks and loss of homeostasis which drives organismal aging. (Adapted from Freitas and de Magalhaes, 2011). An emerging hypothesis is that only specific types of DNA changes are crucial in aging, which would explain why mutations in some DNA repair genes affect aging while others do not. One study systematically analyzed DNA repair genes associated or not with aging and found that genes involved in non-homologous end joining are more often related to aging (Freitas et al., 2011). Emerging evidence also suggests that DNA damage that contributes to mutations and/or chromosomal aberrations increases the risk of cancer while DNA damage that interferes with transcription appears to contribute to aging possibly via effects on cellular aging and cell signalling (Hoeijmakers, 2009).
Taken together, the results from manipulations of DNA repair pathways in mice suggest that disruption of specific pathways, such as nucleotide excision repair and non-homologous end joining, is more strongly associated with premature aging phenotypes and may thus be more important to aging (Freitas and de Magalhaes, 2011). If the DNA damage theory of aging is correct, then it should be possible to delay aging in mice by enhancing or optimizing DNA repair mechanisms. Unfortunately, and in spite of numerous efforts (reviewed in de Magalhaes, 2005a), this crucial piece of evidence is still lacking. For example, mice overexpressing a DNA repair gene called MGMT had a lower cancer incidence but did not age slower (Zhou et al., 2001). Arguably the most compelling evidence comes from mice with extra copies of tumour suppressors. Mice with extra copies of p53 and INK4a/ARF--whose functions are described elsewhere--lived 16% longer than controls but it was not clear if aging was delayed (Matheu et al., 2007). Interestingly, overexpressing telomerase in mice with enhanced expression of p53 and INK4a/ARF, which are cancer-resistant, results in an increase in lifespan up to 40% (Tomas-Loba et al., 2008). Whether aging is delayed in these animals or even if DNA repair is improved is unclear but these findings suggest some level of protection from age-related degeneration via optimization of pathways associated with cancer and DNA damage responses. Recently, overexpression in mice of BubR1, a protein that ensures accurate segregation of chromosomes, slightly extended lifespan and protected against cancer and aging, giving weight to the idea that genomic instability contributes to aging (Baker et al., 2013). One possibility is that ROS damage to DNA plays a role in aging. 
Some circumstantial evidence exists in favor of such a hypothesis (Hamilton et al., 2001), yet given the aforementioned concerns regarding the role of ROS in aging this appears unlikely, except perhaps in specific tissues like the brain. Even though damage from free radicals to nuclear DNA remains an unproven cause of aging, since ROS originate in the mitochondrion, and since mitochondria possess their own genome, many advocates of the free radical theory of aging consider that oxidative damage to mitochondria and to the mtDNA is more important (Harman, 1972; Linnane et al., 1989; de Grey, 1997; Barja, 2002). Indeed, some evidence exists that under CR oxidative damage to mtDNA is more important than oxidative damage to nuclear DNA (reviewed in Barja, 2002). Mutations in mtDNA tend to accumulate with age in some tissues (Corral-Debrinski et al., 1992; Yang et al., 1994; Tanhauser and Laipis, 1995; Liu et al., 1998), though not necessarily caused by ROS (reviewed in Larsson, 2010; Kennedy et al., 2013). Likewise, nuclear mutations have been suggested to contribute to mitochondrial dysfunction (Hayashi et al., 1994). One study found that accumulating mutations to mitochondrial DNA are also unlikely to drive stem cell aging (Norddahl et al., 2011). At present, and despite contradictory evidence in favor (Khaidakov et al., 2003 for arguments) and against the theory (Rasmussen et al., 2003 for arguments), current technology does not appear capable of assessing the true relevance of damage to mtDNA in aging (Lightowlers et al., 1999; DiMauro et al., 2002). However, one study using next-generation sequencing found no increase in mtDNA mutations with age in mice (Ameur et al., 2011). Interestingly, disruption of the mitochondrial DNA polymerase resulted in an accelerated aging phenotype in mice, for the first time directly implicating the mtDNA in aging (Trifunovic et al., 2004).
This appears to be unrelated to oxidative damage, however, and instead to result from increased apoptosis and accumulated mtDNA damage (Kujoth et al., 2005; Trifunovic et al., 2005). On the other hand, a mitochondrial mutator mouse with much higher mutation frequency than normal mice did not exhibit signs of accelerated aging, though it also failed to show increased levels of mitochondrial deletions (Vermulst et al., 2007). Mice with a mutation in a mtDNA helicase accumulate mitochondrial deletions and develop progressive external ophthalmoplegia, but do not age faster (Tyynismaa et al., 2005). More recently, maternally transmitted mtDNA mutations in mice induced mild aging phenotypes but also brain malformations (Ross et al., 2013). Mutations in mitochondrial DNA polymerase in humans result in mitochondrial disorders that, as in the context of the free radical theory of aging, typically affect the nervous system (Van Goethem et al., 2001; Tang et al., 2011), though other pathologies such as infertility (Rovio et al., 2001) have also been reported. As such, mtDNA may play a role in age-related diseases and aging (Wallace, 1992), though further research is needed to confirm this hypothesis and elucidate the exact mechanisms involved. Animal cloning involving somatic cells to create new organisms is an interesting technique for gerontologists (e.g., Lanza et al., 2000; Yang and Tian, 2000). Clones from adult frogs do not show signs that differentiation affects the genome (Gurdon et al., 1975). Dolly was "created" by transferring the DNA-containing nucleus of a post-mitotic mammary cell into an egg and from there a whole new organism was formed. We know Dolly had some genetic (Shiels et al., 1999) and--possibly more crucially--epigenetic defects (Young et al., 2001), so maybe her arthritis and the pathologies leading to her death were a result of damage present in the DNA.
Nonetheless, she was remarkably "normal," having endured a complete developmental process and being fertile (Wilmut et al., 1997). Moreover, mice have been cloned for six generations without apparent harm (Wakayama et al., 2000). Perhaps the highly proliferative nature of the embryo can, by recombination, dilute the errors present in the DNA, but results from cloning experiments suggest that at least some cells in the body do not accumulate great amounts of DNA damage. It would be interesting to further study the longevity of cloned animals. If progeroid syndromes represent a phenotype of accelerated aging then changes in DNA over time most likely play a role in aging, possibly through effects on cell dysfunction and loss that may involve stem cells (Freitas and de Magalhaes, 2011). Since many genetic perturbations affecting DNA repair do not influence aging, it is doubtful that overall DNA repair is related to aging or that DNA damage accumulation alone drives aging. Understanding which aspects, if any, of DNA biology play a role in aging remains a great challenge in gerontology. Moreover, the next step to give strength to the DNA damage theory of aging would be to delay aging in mice based on enhanced DNA repair systems, but that has so far eluded researchers. In conclusion, changes in DNA over time may play an important role in aging, yet the essence of those changes and the exact mechanisms involved remain to be determined. Copyright © 1997 - 2002, 2004, 2005, 2007, 2008, 2012 - 2014 by João Pedro de Magalhães. All rights reserved.
From cardiovascular problems to mental strain, there are many serious and life-altering health risks for overweight and obese cats. Obesity in cats is an increasing issue, just as it is in the human population. It can have serious, lifelong impacts on a cat, affecting its health, quality of life and bodily functions. Are some cats predisposed to obesity? If your cat has been spayed or neutered, it's also more likely to gain weight; the operation reduces your cat's energy requirement by just under a third, but its appetite can go up between 18% and 26%. Why does being overweight or obese affect my cat? When your cat is overweight or obese, its body begins to store the food it consumes as fat, rather than using it up, because the energy it's expending is less than the energy it's taking in. This fat then starts to affect bodily functions as it infiltrates specific organs—such as the liver—or "coats" others, like arteries. The extra weight puts pressure on your cat's internal system and joints, leading to a series of health risks. What risks are there if my cat is overweight or obese? In general, obesity can reduce your cat's quality of life and life expectancy; it's harder for it to play and move around, and surgical procedures or check-ups become more difficult. Obese cats are much more at risk of diabetes: 80% to 90% of obese cats have this condition, which requires daily insulin injections. Often, the diabetes can be reversed once any extra weight is lost, as the accumulated fat which is directly responsible for a failure to regulate glucose is no longer present. Your cat's immune system can become compromised when it's obese, making it more prone to infection. This includes urinary infections and "stones", which occur as overweight cats are less active, tend to drink less water, and urinate less often than healthy cats. One serious and potentially fatal risk with obese cats is liver failure.
When the cat's body believes it is undernourished—for example, if a constant food supply stops—fat is moved from its stores into the liver to be used as energy. However, a cat's body cannot manage that process effectively, so the liver functions poorly, which can eventually lead to fatal hepatic insufficiency and liver failure. With extra weight, cats find it difficult to groom themselves, which can lead to skin problems. Similarly, extra weight puts pressure on your cat's joints, and they can suffer from arthritis. Cardiovascular and respiratory systems are also affected, leading to breathlessness and heart problems. An overweight or obese cat can also end up struggling with their mental health; rather than running away or hiding when they sense danger, overweight cats aren't able to react quickly and so can't follow their instincts, which can cause them distress. With the right diet, exercise and behaviors, you'll be able to protect your cat from the risks of being overweight or obese. To start, speak to your vet, who will be able to advise you on the best course of action. If you have any concerns about your cat's health, consult a vet for professional advice.
If an individual was removed from federal service, appealed the action to the MSPB and accepted a "clean record" settlement agreement in exchange for voluntary resignation, how does that individual legally and honestly answer some of the questions found on legal documents? For example, some of the questions ask if anything has happened in the last five or seven years and list such things as: "left my job by mutual agreement following allegations of misconduct or unsatisfactory performance" or "quit a job after being told you'd be fired." Do I have to respond yes to these? The answer to all of these questions on the cited federal forms is yes, with an explanation. Failure to disclose the prior removal, even though it has been settled, could be viewed as a false statement and could be the subject of a criminal prosecution. Is it legal for employers to keep notes of possible conflicts between employees without the employees' consent or knowledge, and how can that potentially negative information be used against employees?
What are daguerreotypes? The process of producing daguerreotypes was refined by a Frenchman, Louis Jacques Mande Daguerre, through his own experiments and by following in the footsteps of Joseph Nicephore Niepce. In the fall of 1839, the mysteries of the daguerreotype were publicly announced in Paris by Dominique Francois Jean Arago, secretary of the French Academy of Science. Everyone who had an opportunity to view the still lifes and architectural views by Daguerre, produced on whole plates that measured 6 1/2 inches by 8 1/2 inches, was amazed by the precise delineation and the marvelous range of tones. Bits of phrases, "buffing the plate; coated with iodine; and developed with vapors of mercury", were heard in the streets of Paris immediately following the revelations. Academics, scientists and common folk were all stunned by this new and daring object. Because the exposure times Daguerre used to produce a latent image were over one hour, he initially felt that making a portrait would be impossible. But leave it to the Americans, who received official notice and instructions via French and English newspapers that arrived aboard the "Great Western", fastest of the transatlantic steamers. She docked in New York at 7 a.m., Sept. 10, 1839. Instantly, crude cameras were manufactured, lenses ground, and chemical experiments were conducted. Before the end of the decade, several men had succeeded in recording the human face on a shimmering silvered surface that was clad to a copper base. The public was astounded by the wizardry of the Alchemists, who were actually the world's first portrait photographers. Daguerreotypes were a fashionable rage, available initially to the experimenters and their wealthy acquaintances. Studios began to blossom in several of America's largest cities during 1840 and as more businessmen began to practice the art, competition grew fierce and prices dropped. "Have you been taken or may I see your likeness?" 
were common questions spoken on avenues in the largest cities and dusty trails in the tiniest hamlets by the end of the 1840s. The commercial and artistic successes of daguerreotypes remained unchallenged until the mid-1850s, and the process finally lost favor about 1860, due to three overwhelming advances in photography. In 1854, the method for making ambrotypes, which are negative images on glass (until a black back was added) allowing the portraits to be viewed as positives, was offered to the public. They had very little reflectance and decent sharpness. Next, in 1856, a pair of men, both from Ohio and working independently of each other, devised the tintype, which was light-sensitive collodion spread on the surface of thin sheets of cheap metal. The third process, which was a precursor of today's negative/positive system of photography, involved coating glass plates with a light-sensitive collodion and then making numerous reproductions from one negative on paper. Since daguerreotypes, ambrotypes and tintypes were all direct positives, people and photographers heralded the convenience of this process. The importance of daguerreotypes in the mid-19th century cannot be overstated. Millions of faces and many thousands of scenes were recorded. It was the first time in history that Americans could see themselves "true to life" (albeit laterally reversed, because there wasn't a mirror inside the camera to correct the light's rays as they passed through the lens). Many of the black & white daguerreotypes were delicately hand colored, and most were uniformly presented to the patron surrounded by a brass mat with a piece of protective glass on top. The small wooden and leather cases or molded thermoplastic varieties were works of art themselves.
How much does it cost to see Jay-Z in New York? If Jay-Z is on tour and making a stop in New York, there's a strong possibility the show will be at Madison Square Garden. Otherwise, it might also be at Beacon Theatre or Radio City Music Hall. Madison Square Garden seats up to 20,000 concertgoers, so it'll probably be very lively if Jay-Z is making a stop there. You can browse upcoming events at the venues mentioned above by visiting one of their pages: Madison Square Garden, Beacon Theatre or Radio City Music Hall. Avid Jay-Z fans may have already seen him in concert in New York. The first Jay-Z concert in New York that had tickets listed on SeatGeek was at Madison Square Garden on 11/7/11.
Children, just like adults, experience stress. Common stressors for children include school and family issues. School stressors may include excessive or difficult homework, test anxiety, peer pressure, bullying, and learning difficulties. Family issues may include parental arguing, divorce, moving homes, new sibling, major illness, death, loss, and transitions. Counseling can help children and adolescents learn how to identify causes of their distress, develop their skills in asking for help and expressing emotions, and improve their problem-solving abilities. Our approach to child/adolescent therapy involves therapeutic conversations and interactions between a therapist and a child or family. It can help children and families understand and resolve problems, modify behavior, and make positive changes in their lives. There are several types of psychotherapy that involve different approaches, techniques and interventions. At times, a combination of different approaches may be helpful.
Gota Yashiki (屋敷 豪太, Yashiki Gōta, 26 February 1962) is a Japanese musician, both an independent acid jazz artist and a drum/bass player as a member of the band Simply Red. He was born in Kyoto, Japan, on 26 February 1962, where at a young age he learned how to play traditional Japanese drums. This interest in drumming propelled him into the music scene, and he moved to Tokyo in 1982 to join a reggae/dub band that became known as Mute Beat. Together with Mute Beat band member Kazufumi Kodama he worked on various projects and formed the duo Kodama & Gota. From 1986 on, Gota entered the European music scene. After spending some time back in Tokyo, he returned to London in 1988 and began collaborating with numerous well-known artists, including Soul II Soul, Sinéad O'Connor and Seal, while also working on film soundtracks and remixes. Gota joined Simply Red in 1991 for the recording of the album Stars and the following world tour. In late 1993, he released an album titled Somethin' to Talk About under the name Gota & The Heart of Gold, and helped fellow Simply Red band member Heitor T.P. release a solo album in 1994. While still working regularly with various European artists, including on an album by singer-songwriter Chris Braide, he recorded another album as Gota & The Low Dog in 1995 with singer Warren Dowd. This album, named Live Wired Electro, was released in several countries and took Gota on his first solo Japanese tour. Alanis Morissette credits Gota as "Groove Activator" on her album Jagged Little Pill, for which samples of his work were used. Also, UK dance act Chicane used his drum samples, most prominently on the Chicane Mix of the Bryan Adams song Cloud Number Nine. In 1997, he released his first album in the United States, titled It's So Different Here. The album's hit single was a top ten Smooth Jazz track of the year. The album went to No. 15 on the Billboard 200, and No. 1 on the R&R chart. 
He also assisted Depeche Mode in the studio in 1997, appearing on their eagerly awaited album Ultra. Gota then helped produce Simply Red's 1998 album Blue and subsequently assisted with the following year's release, Love and The Russian Winter. He released two albums in 1999, the first titled Let's Get Started and the second titled Day & Night. Also that year he collaborated with the English musician Mike Oldfield on his album The Millennium Bell, playing drums on the track titled "Mastermind". His album The Best of Gota was released in 2002, and he worked on a joint project with Freemasons member and British producer James Wiltshire called B.E.D. (Beyond Every Definition) in the early 2000s. In 2004, to celebrate the first centenary of FIFA, he arranged the anthem composed by Franz Lambert in 1994; his arrangement is the one heard at all officially sanctioned FIFA matches since then. In 2006, he was part of a special unit called Kokua with Shikao Suga, Takebe Satoshi, Takamune Negishi and Hirokazu Ogura to sing Progress, the theme song of NHK's プロフェッショナル 仕事の流儀 (Professional Shigoto no Ryuugi, known overseas as The Professionals). They reunited in 2016 to celebrate the 10th anniversary of the single, recording an album, Progress, and touring nationwide. In March 2008, Gota formed the rock band Vitamin-Q along with Masami Tsuchiya, Kazuhiko Kato, Rei Ohara and Anza. However, after Kato's suicide on 17 October 2009, the fate of the group is uncertain. ^ "Mike Oldfield - The Millennium Bell - Tubular.net". tubular.net. Retrieved 2017-11-06.
Analysis of "Antarctic Dispatches: Miles of ice collapsing into the sea / Racing to find answers in the ice" A majority of reviewers tagged the article as: Accurate, Insightful, Sound reasoning. This three-part article in the New York Times describes research on the outlook for Antarctica’s ice sheets and their contribution to sea level rise. Scientists who reviewed the article indicate that it is generally an accurate description of the state of research, which shows that the West Antarctic Ice Sheet (which is especially vulnerable to warming) alone has the potential to raise global sea level by several meters over the long term. However, several complex statements could be improved with additional context or more precise wording to help readers avoid misconceptions. For example, the article relies heavily on recent results from one ice sheet modeling effort that simulates higher future rates of ice melt, while other model simulations contain important differences. Generally scientifically sound, but caution should be exercised before basing discussion solely on a single modeling study, especially when it incorporates fundamentally different processes relative to other contemporary models. Determining the rates, mechanisms, and geographic variability of sea-level change is vital to projecting future sea-level rise and managing coastal flood risks. Because the Greenland and Antarctic ice sheets have the largest potential to contribute to global mean sea-level rise under future warming, understanding their sensitivity to climate change is of particular importance. This article is well written and contains no logical fallacies. Some statements could be clarified/quantified a bit better. The Antarctic Ice Sheet has the potential to contribute significantly and rapidly to future sea-level rise. 
This article accurately and succinctly presents research on the processes and feedbacks that make the scientific community worried about the fate of the Antarctic Ice Sheet in a warming world. The majority of the article is informative and a fair reflection of an important area of scientific research. It is good to inform readers of this. However, in my view, the start and end of the article and the headline give undue prominence to a highly speculative aspect of the story, i.e., the question of whether major loss of ice from Antarctica has already become unavoidable. This distinction between future, avoidable, risks and existing, unstoppable impacts is absolutely key, and while the majority of the article makes this distinction well, the emphasis in the headline and the start and end of the article is opposite to that in the main body of the article. 1. Studies of portions of the West Antarctic Ice Sheet—which is especially vulnerable to warming—show it’s possible that large losses of ice are already inevitable. This is all factually true. Indeed, there is worry that a climatically-initiated dynamic disintegration is currently underway. However, there is also large uncertainty around whether this is the case. So, I would tend to classify scientific confidence on whether current mass change trends represent (in part) a true dynamical disintegration process as “low”. This is true, but referring to the West Antarctic ice sheet would be more precise. It is clear that ongoing warming of the climate will pose major risks to the stability of ice sheets. However, this statement seems to have moved the story up a notch from what is actually written later in the article. There are two aspects to this: (1) the size of the area of concern (the whole of Antarctica rather than just parts), and (2) whether irreversible loss has already begun, as opposed to being an imminent risk. 
The article describes legitimate concerns that parts of the West Antarctic Ice Sheet are becoming vulnerable and hence may soon be at risk of starting an “unstoppable disintegration”, but the suggestion that this is actually already happening (and hence that major Antarctic ice loss is now unavoidable) is far more speculative—as indeed the later parts of the article make clear. The phrase “entered the early stages of an unstoppable disintegration” is not the same as “becoming vulnerable to an unstoppable disintegration”, but the latter phrase would better represent what is said later in the article. This is the prime motive for understanding Antarctic ice sheet dynamics, especially since its far-field location means Antarctica carries relatively more weight for sea level rise at many Northern Hemisphere cities than Greenland does. The vulnerable parts of the ice sheet are those resting on beds that are below sea level; therefore, the ice itself is in contact with a warming ocean. The majority of the West Antarctic and ~30% of the East Antarctic sectors of the ice sheet are grounded below sea level. “Remote as Antarctica may seem, every person in the world who gets into a car, eats a steak or boards an airplane is contributing to the emissions that put the frozen continent at risk. If those emissions continue unchecked and the world is allowed to heat up enough, scientists have no doubt that large parts of Antarctica will melt into the sea. This is factually true. In principle, there should be thresholds in cumulative carbon emissions, beyond which the Earth system is committed to various levels of Antarctic ice loss. These thresholds have been only preliminarily explored, and are still largely unknown. And given the relatively slow response time of the Antarctic ice sheet, we may have already passed some or all of them—meaning that recent observed Antarctic changes reflect, in part, the initiation of large-scale, possibly irreversible, ice loss. 
Without this clarification, “collapse” may be misinterpreted to mean something happening over timescales of a single decade or much shorter timescales—especially when combined with earlier language about refugees “fleeing inland” due to a “rapid disintegration”. This is factually correct: this is a question that the research community is currently working hard to answer. “Already, scientists know enough to be concerned. About 120,000 years ago, before the last ice age, the planet went through a natural warm period, with temperatures similar to those expected in coming decades. Although factually correct, the information provided is not sufficient to convey the relevance of sea level during this previous warm period (the previous interglacial) to the current global warming situation. It is not directly analogous because of differences in the Earth’s orbit and the long timescale during which the ice sheet melt was exposed to the warmer temperatures. IPCC AR5 WGI Summary for Policy Makers section B4 notes these two relevant issues: “This change in sea level occurred in the context of different orbital forcing and with high-latitude surface temperature, averaged over several thousand years, at least 2°C warmer than present”. This is factually correct. Indeed, past climate states analogous to the state which is expected to manifest in the near future, are characterized by much higher sea levels. This implies that the current Antarctic (and Greenland) ice sheet volumes are not consistent with warmer climate states. Rather, warmer climate states are consistent with smaller ice sheets. “Relatively near future” is probably a misleading statement in the text, because it implies to average people a decadal timeframe—almost certainly unrealistic for 20-30 ft of sea level rise. Sea level and temperatures were higher in the previous interglacial, but the Earth climate system generally had a longer time frame to adjust to these conditions. 2. 
Floating ice shelves in front of Antarctic glaciers are affected by warming seawater. Ice shelves act as a stabilizing force for the ice on land, so their loss leads to greater loss of glacial ice. Based on the geological record of Antarctic Ice Sheet behavior, the Ross Ice Shelf has collapsed in the past1, likely in response to ocean and atmosphere warming. Therefore, we know that the Ross Ice Shelf, which currently protects a large portion of the Antarctic Ice Sheet2, is susceptible to collapse. This collapse in the referenced work is initiated by strong surface melting causing ice shelf hydrofracture and then marine ice cliff instability. Other recent studies looking at the evolution of surface melt in Antarctica find far more modest (likely insignificant) increases in surface melt over this century, particularly over the Ross ice shelf. “Right now, the shelf works like a giant bottle-stopper that slows down ice trying to flow from the land into the sea. If it collapses, the ice could flow into the ocean more rapidly, an effect that has already happened on a much smaller scale in other areas of Antarctica. This is correct but unclear. What is “perhaps longer” than “well over a century”? Given the importance of timescale for how coastal cities will respond to sea level rise, the range of possible timescales should be more explicit. IPCC AR5 WGI Chapter 13 on Sea Level Change assesses these as “sea level rise of 1 to 3 m per degree of warming is projected if the warming is sustained for several millennia (low confidence)”. Several glaciers that had previously been buttressed by the Larsen B ice shelf accelerated by a factor of eight after that ice shelf disintegrated. The demise of this ice shelf led to an increase of 27 km3 of ice loss per year*. “But the story is not straightforward, and the warmer water attacking the ice has not been linked to global warming — at least not directly. 
The winds around the continent seem to be strengthening, stirring the ocean and bringing up a layer of warmer water that has most likely been there for centuries. This is factually correct, and an important point. In some climate analyses (such as global average air temperature), climate scientists and statisticians can (1) clearly detect a signal rising above the ‘background’ noise of natural climate variability and (2) clearly attribute this signal to human forcing (versus sun strength changes, volcanoes, cosmic rays, etc.). However, in the case of regional Antarctic climatology, this is not yet clearly the case. Particularly in the case of Antarctic oceanography, as Dr. Steig points out, this largely stems from lack of observations of sufficient length to allow for robust statistical detection/attribution of near-Antarctic ocean changes. 3. Researchers use computer model simulations to study the impacts of climate changes on ice sheets. Projecting the loss of glacial ice this century is complex, and the amount of sea level rise we can expect to see is somewhat uncertain—0.5 meters (20 inches) to as much as 1 or 2 meters (3.3-6.6 feet) given continued greenhouse gas emissions. “Recent computer forecasts suggest that if greenhouse gas emissions continue at a high level, parts of Antarctica could break up rapidly, causing the ocean to rise six feet or more by the end of this century. That is double the maximum increase that an international climate panel projected only four years ago. Process-based predictions of sea-level rise by the International climate panel (i.e., the IPCC) are limited by uncertainties surrounding the response of the Greenland and West Antarctic ice sheets1, 2, 3, steric changes4, 5, contributions from mountain glaciers6, as well as from groundwater pumping for irrigation purposes and storage of water in reservoirs7, 8, 9. 
In large part because of the limitations of physical process models, IPCC AR5 does not offer “very likely” (5th to 95th percentile) sea-level projections, but concluded that “there is currently insufficient evidence to evaluate the probability of specific levels above the assessed likely range”10. The contribution from Greenland (GrIS) and Antarctic (AIS) Ice Sheet mass loss has increased since the early 1990s, comprising ~19% of the total observed rise in GMSL between 1993 and 2010 and ~40% of the total observed rise in GMSL between 2003 and 200811, 12. GrIS and AIS contributions are projected to become increasingly important over the 21st century10 and to dominate sea-level rise uncertainty in the second half of the 21st century13, 14. I think this statement is also factually true. Indeed, recent simulations suggest that sea level rise from Antarctica could be very large. However, as is implied by the word “crude”, these state-of-the-art simulations are still lacking in important physical processes which are very difficult to implement at sufficient resolution in computer models of both the Antarctic Ice Sheet and near-Antarctic climate. So, due to these deficiencies, frankly there is still very large uncertainty regarding upcoming Antarctic change, at least as simulated by computer models. However, paleoclimate proxies have the ability to provide additional critical non-model-based constraints. The “recent computer forecasts” referenced here likely refer to the numerical model developed by Rob DeConto1. Unlike other ice-sheet models, it includes the so-called “Marine Ice Cliff Instability”, which at present is not widely accepted in the ice-sheet modeling community. Including this instability will cause increased rates for resultant sea-level rise projections. On the other hand, DeConto’s model includes simplifications related to how the ice sheet slides over the sediment at its base. 
This ice-sliding parameterization will generally increase ice-sheet stability and cause smaller projections of sea-level rise, as demonstrated in the Nature paper by C. Ritz et al. in 20152. There is still a lot of uncertainty regarding the physical and hydrological processes involved in ice-sheet modeling. Ideally, we prepare for the upper bounds regarding projected rates in sea-level rise, since the scientific community is still far from providing high-confidence estimates. This is based on a combination of observations and modelling1. Observations show us several glaciers are in decline2 and ice shelves, which serve as a buttress for the glaciers, are thinning due to warmer sea water. This process is seen to be accelerating3. Where ice shelves (like Larsen B) have already vanished, glaciers in the hinterland have indeed sped up4. Numerical models predict that the changes underway now are likely to lead to a full-scale collapse of the west Antarctic Ice Sheet. This reflects recent research—an important point is that this is about possible future consequences of high emissions, which are hence avoidable if emissions are lower. This is in contrast to the opening sentence and headline, which talk of “unstoppable disintegration” already in progress. “Incorporating recent advances in the understanding of how ice sheets might break apart, they found that both West Antarctica and some vulnerable parts of East Antarctica would go into an unstoppable collapse if the Earth continued to warm at a rapid pace. This is factually true. I think the text is accurate in reflecting the distinct possibility of “upper-bound” Antarctic behaviour. However, note that the current “worst-case” scenario estimate is not the same as the current “most-likely” scenario estimate (in any risk assessment exercise). Related to my earlier comment, the relevance of the “Marine Ice Cliff Instability” is still very unclear, and not generally accepted in the ice-sheet modeling community. 
DeConto and Pollard’s sea-level estimates certainly fall in the high end of sea-level rise projections. However, other processes such as non-linearities in melt or basal sliding may provide similar high-end estimates. “But some research suggests that a catastrophe might not yet be inevitable. In a study last year, Robert M. DeConto of the University of Massachusetts, Amherst, and David Pollard of Pennsylvania State University used their computer model to predict what would happen if emissions were reduced sharply over the next few decades, in line with international climate goals. To my mind, the way this is written gives the impression that the DeConto and Pollard study is the outlier and that a substantial body of research suggest that a catastrophe is already inevitable—however, the article has not actually given any details of any such research, it only raised the possibility of unstoppable disintegration as a research question. It would be far more accurate to say that the vast majority of research does not suggest that a catastrophe is inevitable.
In Short: Researchers develop an original laboratory preparation of mycolactone A/B, the exotoxin produced by Mycobacterium ulcerans and responsible for the pathogenic effects of the disease called Buruli ulcer. A late-stage modification of the toxin allowed the preparation of analogues that could deliver crucial information about chemical and biological mechanisms of the illness. This synthetic blueprint is exemplified by the synthesis of the isotopically-labelled mycolactone A/B. Buruli ulcer is a severe and devastating skin disease in humans currently reported in more than 30 countries. Several thousand people are infected each year, especially in tropical Africa and Oceania, where Buruli ulcers are often a source of major disability. Since the clinical description of this third most common mycobacterial disease in 1948, intense research efforts have been deployed by the scientific community, leading to the demonstration that the disease is due to Mycobacterium ulcerans, a microorganism which secretes an exotoxin called mycolactone A/B. In a unique manner, this exotoxin is responsible for all the pathogenic effects observed during the infection, consisting of tissue necrosis combined with an absence of immune response and a lack of pain. The discovery of mycolactone A/B in 1999 was a breakthrough, as it opened a molecular perspective on this devastating disease, although to date no specific treatment (besides the combination of antibiotics recommended by the WHO) or early diagnosis has been developed, because the associated chemical and biological mechanisms of the disease remain poorly understood. 
Since 2006, in close collaboration with a team of immunologists from the Institut Pasteur (Paris, France), we have been engaged in the laboratory preparation of mycolactone A/B analogues, which led us to report some structure-activity relationship (SAR) data and to uncover the therapeutic potential of mycolactone A/B (obtained from culture of the mycobacterium) and its analogues. The modular synthetic scheme is of prime importance to further advance our understanding by providing tailor-made tools to explore the biology of the disease. For the past three years, we have investigated a new way of preparing mycolactone A/B in the laboratory. The added value of this synthetic scheme is the possibility of preparing the natural toxin itself (as an inconsequential mixture of isomers from a biological point of view) but also some analogues at a very late stage of the work, with minimal deviations. Such modifications are currently not possible with existing strategies for the preparation of mycolactone A/B. For example, we have shown that an isotopically-labelled mycolactone A/B could be prepared and could potentially be used as a reference during quantification of the toxin in biological tissues. Even if this new way of preparing mycolactone A/B in the laboratory led to sufficient quantities for biological investigations, two shortcomings still exist. Indeed, it would be desirable to design a way of preparing the toxin without any mixture of isomers (even if this is demonstrated to have no impact on the biological activity), while maintaining the possibility of performing late-stage modification of the molecule. In addition, this late-stage modification should be explored more thoroughly, with the introduction of fluorescent and/or reactive tags for example. 
Considering the therapeutic potential of mycolactone A/B and its analogues, this work opens new perspectives for the late-stage modification of advanced mycolactone intermediates and the preparation of tools to explore the biology of the disease. Therefore, future work will focus on the synthesis of specific probes based on the mycolactone A/B chemical structure which could help the scientific community to decipher the biology of this complex disease and ultimately to develop rational diagnostic and therapeutic tools. Research Article: Modular total syntheses of mycolactone A/B and its [²H]-isotopologue. Organic and Biomolecular Chemistry, 2017.
The Magna Carta is an English charter that was sealed by King John on June 15, 1215. Some of the best-known concepts outlined in the Magna Carta include making the monarch subject to the rule of law, basic rights held by citizens (or “free men”), and the social contract between ruler and subjects. Where does Magna Carta come from? In the late 12th and early 13th centuries, King John of England drew the ire of many English nobility with a series of unpopular military decisions and aggressive taxation. Led by Baron Robert Fitzwalter, the barons and other nobles under John’s rule rebelled. In order to avoid civil war, King John was forced to accept the Magna Carta, which was written by the barons. The document mostly dealt with the barons’ property rights and the monarch’s accountability to law, neglecting the poorer population. However, in subsequent centuries, the document was revised to include other clauses that granted more rights to the general populace, including the right of habeas corpus (the right to challenge unlawful imprisonment before a court) and the right to a speedy trial. This document and the precedent it set served as the foundation for many forms of law created post-Enlightenment, including the American Constitution and the French post-revolutionary government. Today, the Magna Carta is considered an important milestone in the development of civil rights and the people’s power in a government.
What a great article from the Milton Keynes Citizen. WiFi at the hospital helps kids stay in touch with their friends. These young patients can do homework and see pictures on Facebook or Instagram, and the connection provides welcome entertainment during a difficult hospital stay.
Q.1. If and then find a : b : c : d.
Q.2. 12 monkeys can eat 12 bananas in 12 minutes. In how many minutes can 4 monkeys eat 4 bananas?
Q.3. 20 men can do a piece of work in 18 days. They worked together for 3 days, then 5 men joined them. In how many more days is the work completed?
Q.4. The ratio of the ages of A and B at present is 3 : 1. Four years earlier the ratio was 4 : 1. What is the present age of A?
Q.6. A certain amount of money earns Rs. 540 as simple interest in 3 years. If it earns a compound interest of Rs. 376.20 at the same rate of interest in 2 years, find the amount.
Q.8. The average weight of A, B and C is 50 kg. If the average weight of A and B is 48 kg and that of B and C is 45 kg, then find the weight of B.
Q.9. By selling 25 m of cloth a man gains the selling price of 5 m of cloth. Find the gain percent.
Q.10. A reduction of 20% in the price of wheat enables a person to purchase 4 kg more for Rs. 160. Find the original price of wheat.
Solution to Q.3: 20 men can do the work in 18 days, so the total work is 20 x 18 = 360 man-days. In the first 3 days they complete 20 x 3 = 60 man-days, leaving 300 man-days for the 25 men, i.e. 300 / 25 = 12 more days.
Solution to Q.4: Let A = 3x and B = x. Four years earlier, (3x - 4)/(x - 4) = 4, so x = 12 and the present age of A is 12 x 3 = 36 years.
Solution to Q.6: If the 3-year simple interest is Rs. 540, then one year's simple interest is Rs. 180. The 2-year compound interest (Rs. 376.20) exceeds the 2-year simple interest (Rs. 360) by Rs. 16.20, which is one year's interest on Rs. 180, so the rate of interest is 16.20/180 = 9%.
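Two of the answers above can be checked mechanically. The sketch below (the function names are mine, not from the worksheet) reproduces the man-days reasoning of Q.3 and the ratio algebra of Q.4:

```python
# Worked checks for Q.3 and Q.4 above; helper names are illustrative only.

def remaining_days(men, total_days, days_worked, extra_men):
    """Work measured in man-days: days to finish after reinforcements join."""
    total_work = men * total_days      # 20 * 18 = 360 man-days
    work_done = men * days_worked      # 20 * 3  = 60 man-days
    return (total_work - work_done) / (men + extra_men)

def present_age_of_a(now_ratio, past_ratio, years_ago):
    """Present ages are a*x and b*x; solve (a*x - y)/(b*x - y) = c/d for x."""
    a, b = now_ratio
    c, d = past_ratio
    x = years_ago * (d - c) / (d * a - c * b)
    return a * x

print(remaining_days(20, 18, 3, 5))         # 12.0 more days
print(present_age_of_a((3, 1), (4, 1), 4))  # 36.0 years
```

The same man-days helper answers any "more workers join" variant of Q.3; only the four numbers change.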
n. - The form of nonviolent resistance initiated in India by Mahatma Gandhi in order to oppose British rule and to hasten political reforms. Mahatma Gandhi awakened the largest Democracy on earth and forged a shared resolve with what he called Satyagraha or truth force. Is it true that during and after these struggles, Gandhi coined the term Satyagraha and began to expound the theory that lay behind it? 1906 - Mahatma Gandhi coins the term Satyagraha to characterize the Non-Violence movement in South Africa. This staging of Satyagraha is a major accomplishment long overdue in this specific corner of the opera world.
What does home insurance for homeowners cover? Basic plan, which usually applies to those with a modest home who have a good tolerance for risk and want as little insurance as possible. Basic Plus plan, which, in addition to the coverage included in the Basic plan, also includes comprehensive building coverage. So, you would be covered in the event of an accidental paint spill on your hardwood floor, for example. Comprehensive plan, which covers all damages sustained to your home or its contents. For example, you would be covered if, in the heat of the moment, a joystick went crashing through your television screen! For homeowners over age 50, this option allows them to choose to either rebuild or get a cash payout in the event of total loss of their home.
John McLean was an American jurist and politician who served in the United States Congress, as U.S. Postmaster General, and as a justice on the Ohio and U.S. Supreme Courts. McLean was born on March 11, 1785, in New Jersey. His parents moved to western Virginia in 1789 and later traveled to Kentucky. By 1797 the family was settled on a farm in Lebanon, Ohio. No free schools existed in Ohio, and McLean's family could not afford tuition at a private institution. An avid learner, McLean commonly borrowed books from his neighbors and educated himself. In 1803, McLean moved to Cincinnati, where he studied law with the son of former General Arthur St. Clair. He supported himself by working as a copyist in the clerk's office of Hamilton County. In 1807, the State of Ohio admitted him to the bar. McLean began a political career in 1812, when voters in Cincinnati elected him to the United States House of Representatives. In 1814, they reelected him to his seat without opposition. Before the end of his second term, the Ohio legislature appointed McLean as a justice of the Ohio Supreme Court. McLean held this office from 1816 until 1822, when President James Monroe appointed him as a commissioner of the Federal Land Office. A year later, Monroe selected McLean as Postmaster General to replace another Ohioan, Return Jonathan Meigs. President Andrew Jackson appointed McLean to the United States Supreme Court on March 7, 1829. As a Supreme Court justice, McLean's most famous case was the Dred Scott v. Sandford decision. In this case, McLean favored granting Dred Scott his freedom. Scott was a slave suing for his freedom because his owner had taken him to a state where slavery was illegal. McLean's opinion went against the majority opinion of his fellow Supreme Court justices.
In other cases, McLean upheld slave owners' rights to reclaim their runaway property in states that had outlawed slavery. He also ruled that states could not implement laws that made it impossible for the federal government to enforce the Fugitive Slave Law of 1850. McLean's prominence as a justice on the Supreme Court led a number of people to consider him for the presidency. In 1836, the Whig Party thought him to be a possible candidate. In 1848, both the Liberty Party and the Free Soil Party debated running him as their candidate. In both 1856 and 1860, the Republican Party considered nominating him. McLean, however, never became a candidate for president. He remained a justice on the Supreme Court until his death on April 4, 1861.
Reformulated with a new taste profile and now preservative free. Blackeye Roasting Co. Nitro Cold Brew has a new formula, new packaging and new flavors. The brand's signature drink has been reformulated with a new taste profile and is now preservative free and available in White Chocolate and Nitro Cocoa varieties. Both new flavors are made with a proprietary blend of cold brewed coffee and non-dairy creamer, making them shelf-stable and extra creamy, according to the company. Additionally, Blackeye's packaging has been redesigned with a sleek new look. The shelf-stable cans are taller and slimmer, and the branding has transitioned to feature a matte black background.
The DON CIO develops, maintains and facilitates a DON Enterprise Architecture (DON EA) that complies with Federal and Department of Defense (DoD) architectures. This memo states that Enterprise Information Environment Mission Area (EIEMA), Warfighter Mission Area (WMA), and Defense Intelligence Mission Area (DIMA) systems will conduct certifications and annual reviews that will be reviewed and approved by the Department of the Navy Chief Information Officer, the DON Deputy CIO (DDCIO) (Navy), or the DDCIO (Marine Corps), as appropriate. This memo announces the release of the Department of the Navy Enterprise Architecture (DON EA) v5.0, which updates DON EA v4.0.000. This memo details the significant changes to the requirements for investment review and certification of defense business systems before appropriated and non-appropriated funds can be obligated. These changes necessitate modifications to the Department of the Navy's business system review process, which supports the Department of Defense's investment review process. SECNAVINST 5230.15 mandates that all COTS software in use across the Department of the Navy be vendor supported. DON organizations desiring to continue to use COTS software that is no longer supported must request and receive a waiver to this policy. This Naval message contains updated Department of the Navy Guidance for program managers of DON Defense Business Systems whose primary mission area and domain is Real Property and Inventory Life Cycle Management. This instruction serves to issue mandatory procedures for Department of the Navy implementation of DoDD 5000.1 and DoDI 5000.2 for major and non-major defense acquisition programs and major and non-major information technology acquisition programs. The Department of the Navy is implementing changes for the annual review of all systems in the DON Business Mission Area (BMA) beginning in FY13.
In another move to promote effectiveness and efficiencies in the Department of the Navy, the Navy, Marine Corps and DON Secretariat have each designated an Information Technology Expenditure Approval Authority (ITEAA). The ITEAAs are responsible for ensuring that all IT projects undertaken in the Department are integral parts of rationalized portfolios, aligned with DON and Department of Defense enterprise architectures. The Department of the Navy Chief Information Officer has released updated DON DoD Architecture Framework (DoDAF) v2.0 Implementation Guidance. This guidance clarifies what it means to be compliant at the current time with the requirements of DoDAF v2.0, within the DON. The DON Enterprise Architecture helps decision makers make informed decisions about investments in new technology and, at the same time, capitalize upon vast existing technology assets. In addition, the DON EA is focused on maintaining alignment between the Department's goals and objectives, and its information management/information technology (IM/IT) investments. Federally mandated enterprise architectures (EA) are a strategically-based means for the departments of Defense and the Navy (DON) to capitalize on their vast technological assets and make sound decisions about investments in new technology that will support the warfighter. Included in the attachment is a list of frequently asked questions regarding the Defense Business System Modernization Certification Approval process.
Do you believe in astrology? I know a lot of people who are sceptical about it. While I am not one who reads the astrology column on a daily or weekly basis, I do enjoy learning about myself and my characteristics through my zodiac sign. I believe there are alignments and connections between the universe and me, and that they explain why I am 'programmed' a certain way. Obviously, the logic isn't lost on me that the environment, my education, and how I was raised and nurtured played a role as well. I simply believe all these various elements play a part and are intertwined in defining who I am. The most commonly known element in astrology is our zodiac sign, also known as the sun sign. This is only one aspect of astrology, based on your birth month. You will also need the exact time you were born (you can typically find this information on your birth certificate). There are many websites that offer free birth chart readings such as this and this - I've tried both! That's how geeky I am about my astrological reading. What Will Your Natal/Birth Chart Reveal? Sun Sign: The Sun symbolises our identity, our individuality and personality. It essentially sums up the character of a person. My Sun sign is Leo. Moon Sign: The Moon informs what gives us a sense of security, our sensitivity and emotions. I have Sagittarius as my Moon sign. Mercury: Mercury is the planet that brings us interest in intellectual things, our natural intelligence and ability to analyse and reproduce. My Mercury sign is in Virgo. Venus: Venus is the sign that tells us what we are attracted to and what enables us to give or receive love and affection, beauty and happiness, values and principles. On the flipside, it represents our weakness or shallowness. My Venus sign is in Leo. Jupiter: Jupiter represents a sense for development and support, which can lead to opportunities, wealth and faith. My Jupiter is in Taurus. Saturn: Saturn rules responsibility, restrictions, limits, boundaries, fears and self-discipline.
For me, Saturn was in Leo at the time I was born. Uranus: Uranus is a slow transpersonal planet and may remain in a single sign for many years. This planet is the power of awakening, which often means that there will be some disruption and change. Events under the influence of Uranus are unexpected or unpredictable, forcing us to do things in a new way and face the truth about an issue. My Uranus sign is in Scorpio. Neptune: This is another slow transpersonal planet and may remain in a single sign for many years. It is a mysterious planet and links to imagination, illusion, fantasy and spirituality. The year I was born, Neptune was in Sagittarius. I hope you will give it a try, even if it is just for fun. I often find it quite amusing how aligned the reading is with my personality.
From 5:30 a.m. on 04/03/2018 through 5:30 a.m. on 04/04/2018, MPD received 409 calls for service. This number does not include parking complaints or 911 misdials. For purposes of clarification, the following abbreviations are short-hand for race designations: W=White, AA=African American, NA=Native American, H=Hispanic, ME=Middle Eastern, A=Asian, MR=Mixed Race, U=Unknown. MPD shifts are staggered as follows: 1st detail=7 a.m. to 3 p.m., 2nd detail=12 p.m. to 8 p.m., 3rd detail=3 p.m. to 11 p.m., 4th detail=8 p.m. to 4 a.m., 5th detail=11 p.m. to 7 a.m. **Priority calls only East and North from 7:06pm-8:40pm due to the structure fire and calls for service. **Priority calls only North starting at 12:19am due to the weapons offense call on Coolidge St. **Priority calls only city-wide from 3:36am-5am due to the call on Northport Drive and other calls for service. 1) NORTH: Residential Burglary – 8:10 a.m. Officers responded to Onsgard Rd where the homeowners/victims (31-year-old AAM and 28-year-old AAF) reported a residential burglary. Items were reported stolen from their home. Investigation continuing. 2) SOUTH: Weapons Offense/Shots Fired – 9:05 a.m. Officers responded to Ardsley Circle for a 911 disconnect call. Upon arrival, officers were contacted by a victim (38-year-old AAM) and witnesses who advised that the victim had been shot at. The victim was not injured. A casing was located. The suspect (24-year-old AAM) fled on foot and then left the area in a vehicle. The suspect was contacted via phone and he agreed to turn himself in. At the time of this writing, the suspect has not turned himself in to police. Investigation continuing. 3) EAST: Adult Arrested Person – 2:09 p.m. Officers responded to the Home Depot on East Springs Dr for a retail theft. The suspect left the store in a vehicle with two other occupants. The vehicle was located and a traffic stop was conducted. A BB gun was located in the vehicle.
The suspect (32-year-old WM) was cited for a previous retail theft and arrested on a probation hold. One of the occupants (43-year-old AAM) lied about his identity and was found to be an escapee from the jail. The suspect and occupant were both conveyed to jail. The third occupant was released from the scene. 4) EAST: Adult Arrested Person – 2:43 p.m. Officers responded to Milky Way for a disturbance. The caller/victim (14-year-old WM) reported that his brother/suspect (17-year-old WM) broke his cell phone and threatened to shoot him. A BB gun was located in the residence. The suspect was arrested for disorderly conduct while armed, criminal damage to property and bail jumping. 5) WEST: Adult Arrested Person – 5:21 p.m. Officers responded to Oak Creek Trail to contact a subject under the influence. The subject (56-year-old WM) was found to be high from intentionally huffing. The subject/suspect was arrested for intentionally abusing hazardous substances and two counts of bail jumping. 6) CITY-WIDE: Information – 6:00 p.m. MPD responded to a late report of an incident where an airsoft facsimile gun may have been pointed and discharged at a Madison Metro passenger. Investigation continuing. 7) EAST: Assist EMS/Fire – 6:54 p.m. Officers responded to Galileo Dr for a garage on fire. Officers learned that the homeowners' children were in the residence, along with some pets. Officers assisted by entering the residence and ushering the occupants/animals out of the home. MFD has the lead on this investigation. No injuries were reported. 8) EAST: Weapons Offense – 8:10 p.m. Officers responded to the East Transfer Point on West Corporate Dr where the caller reported observing a disturbance between two individuals. The caller approached the individuals to record what was occurring. One of the suspects (20-year-old AAM) took offense to this and confronted the caller/victim (49-year-old WM) and displayed a handle of a pistol in his coat. 
The suspect fled, and when contact was attempted he refused to cooperate. There is probable cause for the arrest of the suspect on charges of disorderly conduct, disorderly conduct while armed and a probation hold. The suspect is at large. Investigation continuing. 9) EAST: Death Investigation – 8:42 p.m. Officers responded to an east side residence to check the welfare of a subject. The subject (61-year-old AAF) was found deceased. Nothing suspicious observed in the home. Medical Examiner's office responded. Investigation continuing. 10) MIDTOWN: Missing Juvenile/Runaway – 9:35 p.m. Officers responded to Atticus Way for a missing/runaway juvenile (14-year-old HM). K9 track conducted. Attempt to locate aired. The juvenile was listed as missing in the appropriate databases. 11) NORTH: Weapons Offense – 11:22 p.m. Officers responded to Coolidge St for several reports of shots fired. Several rounds were found to have entered two different residences. Four rounds entered one residence, including a round which entered a mattress just below the resident's head. Three more rounds entered the other resident's occupied bedroom. Fortunately, no injuries were reported. Press release completed. Investigation continuing. 12) EAST: Drug Incident Overdose – 1:17 a.m. Officers responded to N. Thompson Dr for a subject (29-year-old WM) having an overdose. MFD administered Naloxone to the subject, which revived him. The caller/2nd subject (33-year-old WF) started to show signs of an overdose when officers/MFD were on scene. MPD administered naloxone to the female subject. Both subjects were conveyed to a local hospital. The female subject was issued a citation and referred to the Madison Addiction Recovery Initiative (MARI). The male subject will be conveyed to jail once medically cleared on charges of a probation hold and possession of heroin. 13) NORTH: Check Welfare/Injured Juvenile – 2:55 a.m.
Officers responded, along with EMS, to check on what was initially reported as an infant suffering from a seizure on Northport Drive. Child was transported to a local hospital. Investigation continuing.
This topic often comes up in the architecture forums and I receive a lot of questions on the subject. So, if given the opportunity, should you work for a "starchitect"? Let's discuss. The title is formed by combining the words "star" and "architect" to create "starchitect". Clever, right? The definition of which architects fall into the category of celebrity architect is very subjective. Outside of the profession, the average person has never heard of any of these architects. However, within the profession their work is highly valued, studied and idolized. It is interesting to note how architecture tends to create celebrity architects, while you typically don't have a celebrity doctor or star dentist. Startist? Society generally values people with talent and creativity, rewarding those rare individuals with notoriety. A few names that fit into the starchitect category are Frank Gehry, Norman Foster, Bjarke Ingels, Herzog & de Meuron, Renzo Piano, I.M. Pei, etc. These firms are typically large, with multiple offices located around the world. While I have personal experience working at a "starchitect" office, I will try to leave my own biases out of the discussion and simply present the pros and cons. Without a doubt, one of the biggest advantages of working for a well-known architect is the projects. You will have the opportunity to work on world-class projects in the most prestigious locations. If you want to work on a celebrity mansion, a tech giant's new campus or a new city landmark, you will likely find what you are looking for. The catalog of great projects not only brings a level of quality to your portfolio but it also brings with it great variety. When I say it "adds to your portfolio", this does not mean "boosting your resume" but rather it adds to your personal professional growth and development. The brand firms tend to attract a variety of projects from master plans to airports to towers to furniture design and everything in between.
Having this working flexibility as a career foundation can be extremely beneficial for your design and problem-solving skills. These offices tend to attract an amazing pool of talented architects that almost acts as a second architecture school. The culture is generally very collaborative and the specialized groups within the office can share their expertise. Need to model an obscenely complex form? Not sure how a particular detail should be designed? Stop by their desk and ask how. The relationships you will form are extremely beneficial at any point in your career. These connections can greatly enhance your professional network during and after your employment tenure. The co-worker relationships in these types of offices are very similar to an architecture school studio. Some people perform very well in this environment while others may struggle. Since the majority of employees skew young, it can make for a fun workplace both during and after office hours. Friday drinks anyone? The great projects mentioned above also come with a stressful work environment. Clients for high-profile billion-dollar projects tend to be quite demanding, so it can put a lot of pressure on the team. However, many people work well under pressure and often enjoy the challenge. Succeeding in this culture comes down to your personal preferences, time management and delegation skills. One of the biggest concerns I hear from people considering working for a high-profile firm is the long hours. While long hours are often a reality in these types of offices (and architecture in general), it is not necessarily the week-in and week-out norm. Yes, you will likely be required to work late to meet a deadline or come in on the occasional weekend, but it is not as bad as people make it out to be. If you can manage your time and your tasks well you can keep your workload, and hours, reasonable.
The low-pay stereotype is often trumpeted by the majority of the entry-level workers at the office. The bulk of the staff at most starchitect offices is made up of 20- and 30-somethings. These employees typically have less experience and are paid less as a result - as is true anywhere. However, if you choose to move up within the organization, the pay can drastically increase for senior roles, exceeding that at lesser-known firms. The typical large-scale projects are usually managed by a senior project manager and the senior board within the office. The typical lower-level architect will not have direct interaction with the client. However, this depends more on the project than the office type. For example, you might be more involved with the client in a high-end home design versus an international airport. If you are willing to take the initiative and show you can be dependable, your responsibilities will increase. So should you work for a starchitect? If you are deciding whether to work for a particular office, you will likely begin by researching online. This will inevitably lead to endless forum discussions on the subject. However, take these opinions with a grain of salt. Many people are very negative about their previous work experience. Why this is I don't know. Why would someone complain about a place they chose to work? They are/were free to leave at any point if they no longer wanted to be there. Therefore, while a "starchitect" office may not be for everyone, it might be for you.
Where’s Weed in Telluride, Colorado? It’s found on the weed maps on Colorado’s home site, the Denver Daze. Marijuana dispensaries found in the Telluride area may provide access to selections of cannabis strains and edibles as well as hash oils, hit wax, dabs, pipes, bongs, vaporizers, tinctures and more for medical purposes. Many of the medical marijuana dispensaries found on the Daze weed maps also service recreational needs. Look for the smoking happy face with the green cross behind on the map for shops that do both. Traditional dispensaries typically offer a wide range of strains, including numerous types of Indica, Sativa, and Hybrids. When seeking a specific strain high in THC like Cinderella 99, Malawi Gold, or Sour Diesel or high cannabidiol (CBD) strains like Harlequin or Sour Tsunami, the Telluride, Colorado marijuana dispensary with the best buds is on the Denver Daze.
Testing a new data-level validator for RDF and just found that Dan Brickley has a problem with his LiveJournal FOAF export, which states that he has the empty literal "" as a value for the InverseFunctionalProperty foaf:icqChatID. So Dan, if you're reading this, I don't blame you (although I do perhaps blame LiveJournal for not having more rigorous data-checks... they do export millions of FOAF files after all). Overall though, I blame the lack of a tool that you can use to check for such problems. If you get people hacking away on their RDF documents problems will arise in the data (I'm as much to blame as anyone else). You need a tool to assist you in debugging your RDF on the data-level. The knowledge-base sapper, and his four explosive triples. Full reasoning on this according to some ruleset (say pD*: -- i.e., OWL-Horst -- rules rdf1, rdfs4a, rdfs4b, rdfs7x, rdfp3) gives everything. I'll let you work that one out. By everything, I mean every possible (albeit finite) combination of identifiers that constitute a valid RDF triple: the number of resulting triples equals the number of unique identifiers, cubed. Stick those four statements into a web-crawl, do some happy-go-lucky rule-based reasoning and you have problems. Of course this is only one such example. Watch this space. ...oh, and before I go, it doesn't even take four triples (pD*: rdf1, rdfs4a, rdfs4b, rdfp6, rdfp7, rdfp9, rdfp10, rdfp11). RSS 1.0... old and broken. I guess it's a bit old hat, but RSS 1.0 uses the exact same URI for image as a property which relates a channel to an image, as for image as a class. Seems to stem from trying to create an RDF spec which closely resembles some older XML version... at the cost of the RDF spec (RSS isn't the only victim of an XML porting hangover). No RDF(S)/OWL document exists for the spec (it is quite old -- from 2000) but the above issue also pretty-much precludes the possibility of one being created. 
Maybe time for a half-decent (and maybe even less obtuse) replacement. A quick scan of this RSS 1.1 document and it seems to be a candidate. In fairness, it's hardly rocket science... and that's a good thing. Plus, they capitalise Channel. What's not to like? What do you do when you find a restriction that has more than one owl:onProperty value attached? Indeed, what do you do with a restriction that has multiple owl:someValuesFrom attached? Until further notice, I'm going with the highly underrated throw it out approach. Apologies to whoever lovingly crafted this data. One may wonder why Google returns about 16,600 results for this highly-entropic hex-string (Oct. 2008). The aforementioned hash is that of the empty 'mailto:' string, presumably produced by FOAF exporters from empty email input forms. Unfortunately, foaf:mbox_sha1sum is inverse functional, meaning that it should be a unique identifier for an entity: in this case a person. Now, from a reasoner's perspective, only one person can have that particular value for the property: therefore if you find two they must be the same person! Now, we have a problem. All of the descriptions for these people get merged into one super-description for this super-person. A reasoner will now see one person, with tens of thousands of names, interests, emails, etc. ...not to talk about other inverse functional properties such as foaf:weblog which is oft used for defining shared weblogs (anyone who shares one is the same person). To clarify, perhaps, this is not a criticism of FOAF but perhaps moreso an observation that people will not stick to the semantics hidden away in an RDFS/OWL description. They will see a label for a property or class, project their needs onto it and use it, although it doesn't fit the bill. The problem becomes a serious issue where the identity of what is described is at stake. 
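The shared-hash collision described above is easy to reproduce. A minimal sketch (the helper name is mine; it follows the post's description of the hash as the SHA-1 of the full 'mailto:' string):

```python
import hashlib

def mbox_sha1sum(email):
    """SHA-1 of the mailto: URI, the way FOAF exporters typically compute it."""
    return hashlib.sha1(("mailto:" + email).encode("ascii")).hexdigest()

# Two people who both left the email form field empty end up with the
# same "unique" inverse-functional value, so a reasoner merges them:
alice = mbox_sha1sum("")
bob = mbox_sha1sum("")
assert alice == bob  # one super-person, as described above
```

Emitting the property only when the input field is non-empty would avoid the super-person merge entirely.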
More specifically, problems with identity -- relating to assignment of URIs, lack or mis-use of same-as, inverse-functional, functional or cardinality-of-1 properties -- are one of the largest stumbling blocks at the moment for building a "web of entities". In human language, a word's definition follows its usage to a certain extent. The question is, should FOAF change the definition of their words to match how people use them? Should they loosen definitions to say that foaf:weblog can apply to communal weblogs? Finally, where would this post be without one of the finest examples of the chaos in RDF web data. Aidan Hogan, Andreas Harth, Stefan Decker. "Performing Object Consolidation on the Semantic Web Data Graph". Proceedings of I3: Identity, Identifiers, Identification. Workshop at 16th International World Wide Web Conference (WWW2007), Banff, Alberta, Canada, 2007. n. A website that displays in chronological order the postings by one or more individuals and usually has links to comments on specific postings.
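Before taking the "throw it out" approach to restrictions carrying more than one owl:onProperty value, the offending nodes have to be found. A toy detector over plain (subject, predicate, object) tuples; all identifiers below are made up for illustration:

```python
from collections import defaultdict

def ambiguous_restrictions(triples, prop="owl:onProperty"):
    """Return subjects carrying more than one value for the given property."""
    values = defaultdict(set)
    for s, p, o in triples:
        if p == prop:
            values[s].add(o)
    return {s for s, vals in values.items() if len(vals) > 1}

data = [
    ("_:r1", "owl:onProperty", "ex:knows"),
    ("_:r1", "owl:onProperty", "ex:likes"),  # ill-formed: two properties
    ("_:r2", "owl:onProperty", "ex:name"),
]
print(ambiguous_restrictions(data))  # {'_:r1'}
```

Swapping the prop argument to "owl:someValuesFrom" catches the second malformation mentioned above with the same pass.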
Answer the following questions using the annual report of Dell, Inc. in Appendix A. a. Who is responsible for the preparation and integrity of Dell's financial statements and notes? Management has the primary responsibility for the preparation and integrity of Dell's financial statements and notes. ’s January 28, 2005 consolidated financial statements and of its internal control over financial reporting as of January 28, 2005, and audits of its January 30, 2004 and January 31, 2003 consolidated financial statements in accordance with the standards of the Public Company Accounting Oversight Board (United States). It is the opinion of the registrant's auditors and the audit committee of the registrant's board of directors.
Accessible, common-sense approach to the nature of the universe and the meaning of life. According to Wikipedia: "William James (1842 – 1910) was a pioneering American psychologist and philosopher trained as a medical doctor. He wrote influential books on the young science of psychology, educational psychology, psychology of religious experience and mysticism, and the philosophy of pragmatism. He was the brother of novelist Henry James and of diarist Alice James. William James was born at the Astor House in New York City. He was the son of Henry James Sr., an independently wealthy and notoriously eccentric Swedenborgian theologian well acquainted with the literary and intellectual elites of his day. The intellectual brilliance of the James family milieu and the remarkable epistolary talents of several of its members have made them a subject of continuing interest to historians, biographers, and critics." The assumption that states of mind may compound themselves, 181. This assumption is held in common by naturalistic psychology, by transcendental idealism, and by Fechner, 184. Criticism of it by the present writer in a former book, 188. Physical combinations, so-called, cannot be invoked as analogous, 194. Nevertheless, combination must be postulated among the parts of the Universe, 197. The logical objections to admitting it, 198. Rationalistic treatment of the question brings us to an impasse, 208. A radical breach with intellectualism is required, 212. Transition to Bergson's philosophy, 214. Abusive use of concepts, 219. Individuality outruns all classification, yet we insist on classifying every one we meet under some general head. As these heads usually suggest prejudicial associations to some hearer or other, the life of philosophy largely consists of resentments at the classing, and complaints of being misunderstood. 
But there are signs of clearing up, and, on the whole, less acrimony in discussion, for which both Oxford and Harvard are partly to be thanked. As I look back into the sixties, Mill, Bain, and Hamilton were the only official philosophers in Britain. Spencer, Martineau, and Hodgson were just beginning. In France, the pupils of Cousin were delving into history only, and Renouvier alone had an original system. In Germany, the hegelian impetus had spent itself, and, apart from historical scholarship, nothing but the materialistic controversy remained, with such men as Buechner and Ulrici as its champions. Lotze and Fechner were the sole original thinkers, and Fechner was not a professional philosopher at all. What Oxford thinker would dare to print such naif and provincial-sounding citations of authority to-day? all secondary to this deep agreement. They may be only propensities to emphasize differently. Or one man may care for finality and security more than the other. Or their tastes in language may be different. One may like a universe that lends itself to lofty and exalted characterization. To another this may seem sentimental or rhetorical. One may wish for the right to use a clerical vocabulary, another a technical or professorial one. A certain old farmer of my acquaintance in America was called a rascal by one of his neighbors. He immediately smote the man, saying, 'I won't stand none of your diminutive epithets.' Empiricist minds, putting the parts before the whole, appear to rationalists, who start from the whole, and consequently enjoy magniloquent privileges, to use epithets offensively diminutive. But all such differences are minor matters which ought to be subordinated in view of the fact that, whether we be empiricists or rationalists, we are, ourselves, parts of the universe and share the same one deep concern in its destinies. We crave alike to feel more truly at home with it, and to contribute our mite to its amelioration.
It would be pitiful if small aesthetic discords were to keep honest men asunder. jump into them with both feet, and stand there. Philosophers must do more; they must first get reason's license for them; and to the professional philosophic mind the operation of procuring the license is usually a thing of much more pith and moment than any particular beliefs to which the license may give the rights of access. Suppose, for example, that a philosopher believes in what is called free-will. That a common man alongside of him should also share that belief, possessing it by a sort of inborn intuition, does not endear the man to the philosopher at all--he may even be ashamed to be associated with such a man. What interests the philosopher is the particular premises on which the free-will he believes in is established, the sense in which it is taken, the objections it eludes, the difficulties it takes account of, in short the whole form and temper and manner and technical apparatus that goes with the belief in question. A philosopher across the way who should use the same technical apparatus, making the same distinctions, etc., but drawing opposite conclusions and denying free-will entirely, would fascinate the first philosopher far more than would the naif co-believer. Their common technical interests would unite them more than their opposite conclusions separate them. Each would feel an essential consanguinity in the other, would think of him, write at him, care for his good opinion. The simple-minded believer in free-will would be disregarded by either. Neither as ally nor as opponent would his vote be counted.
Imagine this not-so-hypothetical scenario: You’re a newer faculty member at a liberal arts college, and your dean has published an op-ed essay calling for “experiential” liberal arts “to break down the barrier between classroom learning and everyday life.” But what exactly does “experiential” mean, especially in academic disciplines without established traditions in laboratories, studios, or field work? Is this a meaningful foundational shift—or yet another higher education fad? How should newer faculty respond to this tension between philosophical aspirations of what liberal arts learning might become in the future versus pragmatic advice on how to survive and build your scholarly career over the next few years? Liberal arts faculty who teach in disciplines with labs, studios, or fieldwork usually can envision some form of “experiential” learning. But it’s often harder for humanities and social science faculty to imagine this, especially those who teach at liberal arts institutions that have historically distanced their curriculum from vocational training. Consider this pedagogical option: Community Learning, which we define as experiential liberal arts learning with collaborative partnerships (that benefit all parties, both inside and outside the campus) and perspective-building relationships (to cultivate standpoint thinking). At our college, located in the city of Hartford, Connecticut, faculty across various departments have innovated for more than two decades with Community Learning by bringing students together with diverse neighborhood groups, non-profit organizations, and local change agents to co-create knowledge. This semester at our campus, an English class is exploring prison literature in collaboration with an arts-based re-entry program for people returning from a correctional institution. 
Also, an environmental science class is partnering with local organizations on river cleanup and invasive species removal to better understand conservation and biodiversity. And a first-year seminar is conducting video interviews with five different local social reform leaders, to analyze their “theories of change” and also to create one-minute web videos for their organizations to use online. These courses succeed when faculty creatively blend the needs of their academic disciplines (What should students learn?) with the needs of their community partners (What types of service or knowledge would they like the class to contribute?). This pedagogical balancing act—of planning a course around the discipline, the community, and students’ developmental learning—exemplifies standpoint thinking in the liberal arts. Whether you realize it or not, many liberal arts courses contain elements that resonate with the needs and interests of community partners. Even courses in the humanities, which some perceive as purely academic, are likely to incorporate liberal arts skills (like research, analysis, writing, and presentation) as well as broad themes relevant across the human experience (mobility, hope, transgression, and power). These skills and themes may be just as relevant to organizations in your local community, which may have needs that liberal arts students can help fulfill. About the authors: Jack Dougherty is Faculty Director of Community Learning, and Megan Faver Hartline is Director of Community Learning, at the Center for Hartford Engagement and Research (CHER) http://cher.trincoll.edu at Trinity College, Connecticut.
On 12 September 2016, the Arctic Research Foundation announced that the wreck of Terror had been found in Nunavut's Terror Bay, off the southwest coast of King William Island. The wreck was discovered 92 km (57 mi) south of the location where the ship was reported abandoned, and some 50 km (31 mi) from the wreck of HMS Erebus, discovered in 2014. HMS Terror was a Vesuvius-class bomb ship built over two years at the Davy shipyard in Topsham, Devon, for the Royal Navy. Her deck was 31 m (102 ft) long, and the ship measured 325 tons burthen. The vessel was armed with two heavy mortars and ten cannon, and was launched in June 1813. Terror saw service in the War of 1812 against the United States, during which the ships of the North America and West Indies Station of the Royal Navy blockaded the Atlantic ports of the United States and launched amphibious raids from its base in Bermuda, leading up to the 1814 Chesapeake campaign, a punitive expedition that included the Raid on Alexandria, the Battle of Bladensburg, the Burning of Washington, and the Battle of Baltimore. Under the command of John Sheridan, she took part in the bombardment of Stonington, Connecticut, on 9–12 August 1814. She also fought in the Battle of Baltimore in September 1814 and participated in the bombardment of Fort McHenry; the latter attack inspired Francis Scott Key to write the poem that eventually became known as "The Star-Spangled Banner". In January 1815, still under Sheridan's command, Terror was involved in the Battle of Fort Peter and the attack on St. Marys, Georgia. After the war, Terror was laid up until March 1828, when she was recommissioned for service in the Mediterranean Sea, but was removed from active service when she underwent repairs for damage suffered near Lisbon, Portugal. In the mid-1830s, Terror was refitted as a polar exploration vessel.
Her design as a bomb ship meant she had an unusually strong framework to resist the recoil of her heavy mortars; thus, she could withstand the pressure of polar sea ice, as well. In 1836, command of Terror was given to Captain George Back for an Arctic expedition to Hudson Bay. The expedition aimed to enter Repulse Bay, where it would send out landing parties to ascertain whether the Boothia Peninsula was an island or a peninsula. Terror was trapped by ice near Southampton Island, and did not reach Repulse Bay. At one point, the ice forced her 12 m (39 ft) up the face of a cliff. She was trapped in the ice for ten months. In the spring of 1837, an encounter with an iceberg further damaged the ship. She nearly sank on her return journey across the Atlantic, and was in a sinking condition by the time Back was able to beach the ship on the coast of Ireland on 21 September. Terror was repaired and assigned in 1839 to a voyage to the Antarctic along with Erebus under the overall command of James Clark Ross. Francis Crozier was commander of Terror on this expedition, as well as second-in-command to Ross. The expedition spanned three seasons from 1840 to 1843 during which Terror and Erebus made three forays into Antarctic waters, traversing the Ross Sea twice, and sailing through the Weddell Sea southeast of the Falkland Islands. The dormant volcano Mount Terror on Ross Island was named after the ship by the expedition commander. [Image: Sample of dishware carried by Terror, showing the vessel name and the cypher for King George.] Before leaving on the Franklin expedition, both Erebus and Terror underwent heavy modifications for the journey. They were both outfitted with steam engines, taken from former London and Greenwich Railway steam locomotives. Rated at 25 hp (19 kW), each could propel its ship at 4 knots (7.4 km/h). The pair of ships became the first Royal Navy ships to have steam-powered engines and screw propellers. Twelve days' supply of coal was carried.
Iron plating was added fore and aft on the ships' hulls to make them more resistant to pack ice, and their decks were cross-planked to distribute impact forces. Along with Erebus, Terror was stocked with supplies for their expedition, which included among other items: two tons of tobacco, 8,000 tins of preserves, and 7,560 L (1,660 imp gal; 2,000 US gal) of liquor. Terror's library had 1,200 books, and the ship's berths were heated via ducts that connected them to the stove. Their voyage to the Arctic was with Sir John Franklin in overall command of the expedition in Erebus, and Terror again under the command of Captain Francis Rawdon Moira Crozier. The expedition was ordered to gather magnetic data in the Canadian Arctic and complete a crossing of the Northwest Passage, which had already been charted from both the east and west, but never entirely navigated. It was planned to last three years. The expedition sailed from Greenhithe, Kent, on 19 May 1845, and the ships were last seen entering Baffin Bay in August 1845. The disappearance of the Franklin expedition set off a massive search effort in the Arctic and the broad circumstances of the expedition's fate were revealed during a series of expeditions between 1848 and 1866. Both ships had become icebound and were abandoned by their crews, all of whom died of exposure and starvation while trying to trek overland to Fort Resolution, a Hudson's Bay Company outpost 970 km (600 mi) to the southwest. Subsequent expeditions up until the late 1980s, including autopsies of crew members, revealed that their canned rations may have been tainted by both lead and botulism. Oral reports by local Inuit that some of the crew members resorted to cannibalism were at least somewhat supported by forensic evidence of cut marks on the skeletal remains of crew members found on King William Island during the late 20th century. [Map: HMS Terror was found off the south coast of King William Island.]
On 15 August 2008, Parks Canada, an agency of the Government of Canada, announced a C$75,000 six-week search, deploying the icebreaker CCGS Sir Wilfrid Laurier with the goal of finding the two ships. The search was also intended to strengthen Canada's claims of sovereignty over large portions of the Arctic. Further attempts to locate the ships were undertaken in 2010, 2011, and 2012, all of which failed to locate the ships' remains. On 8 September 2014, it was announced that the wreckage of one of Franklin's ships was found on 7 September using a remotely operated underwater vehicle recently acquired by Parks Canada. On 1 October 2014, Canadian Prime Minister Stephen Harper announced that the remains were that of Erebus. On 12 September 2016, a team from the Arctic Research Foundation announced that a wreck close to Terror's description had been located on the southern coast of King William Island in the middle of Terror Bay (68°54′N 98°56′W / 68.900°N 98.933°W / 68.900; -98.933 (Terror Bay)), at a depth of 69–79 ft (21–24 m). The remains of the ships are designated a National Historic Site of Canada with the exact location withheld to preserve the wrecks and prevent looting. Sammy Kogvik, an Inuit hunter and member of the Canadian Rangers who joined the crew of the Arctic Research Foundation's Martin Bergmann, recalled an incident from seven years earlier in which he encountered what appeared to be a mast jutting from the ice. With this information, the ship's destination was changed from Cambridge Bay to Terror Bay, where researchers located the wreck in just 2.5 hours. According to Louie Kamookak, a resident of nearby Gjoa Haven and a historian on the Franklin expedition, Parks Canada had ignored the stories of locals that suggested that the wreck of Terror was in its namesake bay, despite many modern stories of sightings by hunters and from airplanes. The wreck was found in excellent condition. 
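The wreck coordinates above are quoted both in degrees-and-minutes (68°54′N 98°56′W) and in decimal degrees (68.900, −98.933); the conversion between the two is simple arithmetic. A minimal sketch (the function name `dms_to_decimal` is ours, not from any source cited here):

```python
def dms_to_decimal(degrees, minutes, seconds=0.0, hemisphere="N"):
    """Convert degrees/minutes/seconds to signed decimal degrees."""
    value = degrees + minutes / 60 + seconds / 3600
    # South latitudes and west longitudes are negative by convention
    return -value if hemisphere in ("S", "W") else value

# The Terror Bay wreck site quoted in the text:
lat = dms_to_decimal(68, 54, hemisphere="N")   # 68.9
lon = dms_to_decimal(98, 56, hemisphere="W")   # -98.933...
```

Rounding the longitude to three decimal places reproduces the −98.933 given in the article.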
A wide exhaust pipe that rose from the outer deck was pivotal in identifying the ship. It was located in the same place where the smokestack from Terror's locomotive engine had been installed. The wreck was nearly 100 km (62 mi) south of where historians thought its final resting place was, calling into question the previously accepted account of the sailors' fate: that they died while trying to walk out of the Arctic to the nearest Hudson's Bay Company trading post. The location of the wreckage, and evidence in the wreckage of anchor usage, indicates continued use, raising the possibility that some of the sailors had attempted to re-man the ship and sail her home (or elsewhere), possibly on orders from Crozier. On 23 October 2017, the UK's defence minister, Sir Michael Fallon, announced that his government would be giving HMS Terror, as well as her sister ship HMS Erebus, to Canada, retaining only a few relics and any gold, along with the right to repatriate any human remains. Edwin Henry Landseer's Man Proposes, God Disposes (1864) was inspired by the fate of Terror and Erebus on the Franklin expedition. Terror and Erebus (1965) is a verse play for CBC Radio by Canadian poet Gwendolyn MacEwen, subsequently published in her collection Afterworlds (1987). In Mordecai Richler's novel Solomon Gursky Was Here (1989), Ephraim Gursky survives the expedition and lives to pass on his Judaism and Yiddish to some of the local Inuit. Terror and Erebus (A Lament for Franklin) (1997) is an oratorio for solo baritone and chamber ensemble by Canadian composer Henry Kucharzyk, adapted from MacEwen's verse drama and crediting her for its libretto. Dan Simmons' novel The Terror (2007) is a fictionalized account of Captain Sir John Franklin's lost expedition of HMS Erebus and HMS Terror to the Arctic, in 1845–1848, to force the Northwest Passage.
In the novel, while Franklin and his crew are plagued by starvation and illness, and forced to contend with mutiny and cannibalism, they are stalked across the bleak Arctic landscape by a monster. The novel has been adapted as an eponymous 2018 television series by cable TV channel AMC. In Clive Cussler's novel Arctic Drift (2008), Erebus and Terror contain a mysterious silver metal which holds the key to solving the characters' mystery. In July 2013, an anonymous miniaturist began reconstructing a 1:48 scale model of HMS Terror, documenting the process on buildingterror.blogspot.com. In June 2017, it was announced that the model HMS Terror would be shown alongside the historical model of HMS Erebus (c. 1839) in the "Death in the Ice" exhibit at the National Maritime Museum in Greenwich (July 2017 – January 2018). A corresponding Twitter account for "Building Terror" was created in December 2017. Terror & Erebus is a chamber opera for six singers and percussion quartet by Canadian composer Cecilia Livingston, to premiere in 2019. The Erebus and the Terror, an instrumental piece composed by Mícheál Ó Domhnaill, is the third track on the 1987 album Something of Time by Nightnoise. Mount Terror on Ross Island, near Antarctica, was named for the ship by Captain Ross, who also named a nearby and slightly taller peak to the west, Mount Erebus. Erebus and Terror Gulf, in Antarctica, was named for the vessels used by Royal Navy Captain Sir James Clark Ross in exploring the area in 1842–43. Terror Bay on King William Island was named in 1910, long before the discovery of the wreck there.
Dear Doctor: I read that grandparents can increase a child's cancer risk by encouraging bad behaviors. Quite frankly, I'm offended by this. Couldn't they also improve a child's health? When our grandkids are visiting, we get them to eat much healthier food than they typically get at home. Dear Reader: We confess that we cringed a bit as we read some of the headlines that the study you are referencing has generated. A very important message -- the rules of health and nutrition hold true no matter who is breaking them -- is getting buried beneath needless snark. To answer your question: Yes, by offering the right nutritional guidance and making wise food choices, grandparents can absolutely have a positive effect on a child's health. We're happy to hear that you focus on a healthful diet when your grandkids are around, but suspect you are far from alone in this endeavor. So how did this "grandparents may be bad for kids' health" conversation get started? Researchers from the University of Glasgow in Scotland were interested in learning what role, if any, additional caregivers may have on the risk factors for non-communicable diseases in children. In the majority of cases, these secondary caregivers were grandparents. The researchers noted that the positive habits and behaviors that can help avert up to 40 percent of the cancers that develop in adulthood are actually acquired in early childhood. These include sticking to a healthful diet, getting regular exercise, not using tobacco products, not abusing alcohol, limiting or mitigating sun exposure, and avoiding excess weight gain. The question then became what sort of effect the grandparents' approach to those positive behaviors had on the children's cancer risk. To that end, researchers analyzed data collected in 56 studies that had been conducted in 18 different countries. 
This new study, which was published last November in the journal PLOS One, found that the primary risky behavior that grandparents took part in was overfeeding their grandchildren. That is, the grandparents took a more indulgent approach to their grandchildren's diets. They offered them more treats than their parents did and provided larger portions during meals. This meant the kids were eating too many calories, many of them coming from sugar, fat and processed foods. This resulted in the grandchildren gaining weight. Another factor was activity levels, which were lower among children when being cared for by grandparents than when they were with their parents. In some cases, the children were exposed to tobacco products and secondhand smoke in their grandparents' homes. The upshot of all these behaviors was a measurable increase in the risk factors that can lead to heart disease, diabetes and even cancer later in life. One thing the researchers were careful to address, and which didn't appear in the stories we read, was why this was happening. In some countries, excess weight was a cultural sign of health and prosperity. For some grandparents who had been raised in wartime or in poverty, abundant food was a symbol of safety and stability. Rather than being uncaring or careless, many of the grandparents in the study believed they were helping the children.
We have undertaken a large-scale genetic screen to identify genes with a seedling-lethal mutant phenotype. From screening ~38,000 insertional mutant lines, we identified >500 seedling-lethal mutants, completed cosegregation analysis of the insertion and the lethal phenotype for >200 mutants, molecularly characterized 54 mutants, and provided a detailed description for 22 of them. Most of the seedling-lethal mutants seem to affect chloroplast function because they display altered pigmentation and affect genes encoding proteins predicted to have chloroplast localization. Although a high level of functional redundancy in Arabidopsis might be expected because 65% of genes are members of gene families, we found that 41% of the essential genes found in this study are members of Arabidopsis gene families. In addition, we isolated several interesting classes of mutants and genes. We found three mutants in the recently discovered nonmevalonate isoprenoid biosynthetic pathway and mutants disrupting genes similar to Tic40 and tatC, which are likely to be involved in chloroplast protein translocation. Finally, we directly compared T-DNA and Ac/Ds transposon mutagenesis methods in Arabidopsis on a genome scale. In each population, we found only about one-third of the insertion mutations cosegregated with a mutant phenotype. WHAT genes are essential for the viability of a plant? Because of the complexity of the multitude of biological processes required for a plant to grow and develop, a large and diverse set of genes are likely to be involved. A forward genetics approach to this question is a powerful method to identify the relevant genes. This approach involves the isolation of embryo-defective mutants and seedling-lethal mutants, which are likely to comprise the largest classes of visible mutants in Arabidopsis. 
There is often an overlap in the mutants identified in embryo and seedling screens because embryo-defective mutants that form seeds capable of germination may also be identified as seedling-lethal mutants. Genes with a seedling-lethal phenotype are likely to include genes specifically required during early seedling development as well as more generally functioning genes whose absence becomes critical during seedling development. Although a saturation ethyl methanesulfonate (EMS) mutagenesis has identified several Arabidopsis genes with a seedling-lethal mutant phenotype (Jurgens et al. 1991; Mayer et al. 1991), subsequent analysis has been limited to a small subset of genes with unusual body patterns. Similarly, >300 embryo-defective Arabidopsis mutants have been isolated (Errampalli et al. 1991; Castle et al. 1993; Meinke 1994) and ~150 were mapped (Franzmann et al. 1995). Molecular cloning of genes in these two classes on a genome-wide scale has not been reported. Previous studies provide conflicting estimates of how many genes are essential for embryogenesis in Arabidopsis. On the basis of the frequency of multiple alleles in genes with an embryo-defective phenotype, there are estimated to be only 500 genes essential for embryogenesis (Franzmann et al. 1995). By contrast, an estimated 3500–4000 genes are predicted to be essential for embryogenesis on the basis of the frequency of fusca mutants in large-scale seed color and seedling-lethal screens (Jurgens et al. 1991; Misera et al. 1994). The number of genes essential for the seedling stage of growth has not been reported. About 1100 genes are estimated to be essential for gametophytic function on the basis of an analysis of transmissible deficiencies having average DNA losses of <90 kb (Vizir et al. 1994). Estimates from genetic studies of other eukaryotes are relatively similar to each other with respect to the proportion of essential genes in a genome.
In Saccharomyces cerevisiae 17% of the genes were shown to be essential for viability in rich medium by creating a collection of mutant lines, each with the precise deletion of one open reading frame (ORF; Winzeler et al. 1999). For Caenorhabditis elegans, estimates range from 15% (Brenner 1974; Herman 1988) to 25% (Stewart et al. 1998). Recent studies with RNA interference suggest that ~10% of C. elegans genes are essential (Fraser et al. 2000). For Drosophila melanogaster, 28% of the genes have been estimated to be essential (Bossy et al. 1984). In this study, we have initiated a comprehensive functional genomics effort to identify the genes required for viability at the seedling stage of plant development. Parallel efforts to identify genes required at the embryo stage of development are described in McElver et al. (2001). Because these genes encode essential proteins in Arabidopsis, this research may also have applications to the identification of new herbicidal compounds (Ward and Bernasconi 1999). From a screen of Arabidopsis T-DNA and Ds insertion lines, we isolated >500 mutants with a seedling-lethal phenotype as the first step in this process. Many of these mutants are likely to be required for chloroplast function on the basis of phenotype and sequence data. In addition, mutant phenotypes, such as elongated or reduced hypocotyls, may be due to the disruption of essential genes in signal transduction pathways. In light of the large-scale nature of the project, we took alternative high-throughput approaches, enabling us to focus on the identification of DNA sequences for a large number of these genes. From cosegregation analysis of >200 mutants, we were able to directly compare the frequency of “tagged” mutants in T-DNA and Ac/Ds mutageneses. Finally, we present sequence information for an initial set of genes that are essential for seedling growth and development.
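The "500 genes" figure above was extrapolated from how often independent mutants recurred in already-hit genes. One common way to formalize that reasoning — a sketch under our own assumptions, not necessarily the exact estimator the cited studies used — is to assume mutations strike genes uniformly at random, so that among m mutants the expected number of distinct genes hit is k = G(1 − e^(−m/G)); solving for G estimates the total gene number:

```python
import math

def estimate_total_genes(mutants, distinct_genes, tol=1e-6):
    """Solve k = G * (1 - exp(-m/G)) for G by bisection.

    Assumes a uniform (Poisson) hit model; requires distinct_genes < mutants,
    i.e. at least some genes were hit more than once.
    """
    m, k = mutants, distinct_genes
    lo, hi = float(k), 1e7  # G is at least k; upper bound is arbitrary but large
    while hi - lo > tol:
        mid = (lo + hi) / 2
        expected_distinct = mid * (1 - math.exp(-m / mid))
        if expected_distinct < k:
            lo = mid  # too few distinct genes expected, so G must be larger
        else:
            hi = mid
    return (lo + hi) / 2

# Hypothetical numbers: 250 embryo-defective mutants falling into 200 genes
total = estimate_total_genes(250, 200)
```

The more often mutants recur in the same genes (smaller k for the same m), the smaller the inferred total, which is the intuition behind the low Franzmann et al. estimate.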
Arabidopsis insertional mutant collections: All Ds lines were generated according to Sundaresan et al. (1995) and generously provided by R. Martienssen (Cold Spring Harbor Laboratories), H. Ma (Pennsylvania State University), and U. Grossniklaus (Zurich University). T-DNA lines were generated as described in McElver et al. (2001). The T-DNA vectors included pPCVICEn4HPT, pSKI015, pCSA104, and pDAP101.

Arabidopsis seedling screening and growth conditions: Between 50 and 75 seeds for each line were placed on MS media [4.3 g/liter Murashige and Skoog salts (Life Technologies, Rockville, MD), 8 g/liter Phytagar (Life Technologies)] containing the fungicides Benomyl (5 mg/liter; Sigma, St. Louis) and Maxim (1 mg/liter; Syngenta, Greensboro, NC). Top agar with fungicides, identical to MS media except with 6 g/liter Phytagar, was added to spread out the seeds on each plate. Plates were placed at 4° for 1–7 days to synchronize germination. Seedlings were germinated and grown at 19°–23° under lights (80–100 μE/sec/m²) with a 16-hr light and 8-hr dark photoperiod. Seven and 14 days after being moved from 4°, the plates were screened with a dissecting microscope for abnormal seedlings. For a line in which it was uncertain whether the mutant phenotype was lethal, three mutant seedlings were transplanted to soil and grown at 18°–23° with 14.5 hr of light per day. For a given line, the mutant phenotype was considered to be lethal only if all three mutant seedlings died. For a small portion of the lines, it was necessary to rescreen a line by replating sterilized seeds on MS + 2% sucrose media. This medium was used because it enabled us to distinguish more easily between seedling-lethal mutants and sick nonmutant seedlings. It is possible that a few of these lines had mutants that would have been inviable on MS media lacking sucrose, but were viable on media containing sucrose. Because such replated lines were screened only on media with sucrose, they would not have been identified as lethal in our screen and the total number of lethal mutants may be slightly underestimated.

Cosegregation analysis: For T-DNA lines, ~75–150 T2 seeds were sterilized and grown on germination medium (GM; Guyer et al. 1998) containing 30 mg/liter hygromycin B for pPCVICEn4HPT lines and 15 mg/liter Basta for pSKI015, pCSA104, and pDAP101 lines. For a small number of lines, only ~40–75 plants were analyzed because there were not enough seeds or there was poor germination. The ratio of resistant to sensitive seedlings (R:S ratio) was used to determine the most likely number of insertion loci. If the R:S ratio was <6.0, the line most likely had a single insertion locus and 32 resistant seedlings were transplanted to soil. This cutoff was derived empirically and was based on chi-square analysis and a strategy to prevent lines with a single insertion locus being assigned to the two insertion loci category; this analysis may have led to a slight overestimate of the number of single insertion lines. In most cases, siliques were screened for a potential embryo-defective phenotype. If all the resistant plants segregated progeny with an embryo phenotype, the line was considered “tagged.” If some, but not all, the resistant plants segregated progeny with an embryo-defective phenotype, the line was considered “not tagged.” If no embryo phenotype was detected, seeds were collected from each resistant plant and plated on MS media with fungicides. If a seedling-lethal phenotype was detected among the progeny of each of the resistant plants, the line was considered tagged. For tagged mutants, the number of resistant plants checked for cosegregation of the selectable marker and the lethal phenotype was usually ~30 and ranged from 24 to 59.
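The R:S cutoff of 6.0 can be motivated by the expected Mendelian ratios: a single hemizygous insertion locus segregates 3 resistant : 1 sensitive, whereas two unlinked loci segregate 15:1. The following goodness-of-fit sketch is our illustration of that chi-square logic; the seedling counts are hypothetical, not data from the screen.

```python
# Illustrative chi-square goodness-of-fit of observed resistant:sensitive (R:S)
# counts against the two segregation models behind the R:S < 6.0 cutoff.
# One hemizygous insertion locus -> 3:1 (R:S ratio 3); two unlinked loci -> 15:1.

def fit_to_ratio(n_resistant, n_sensitive, r, s):
    """Pearson chi-square statistic for an observed pair of counts
    against an expected r:s segregation ratio (1 degree of freedom)."""
    total = n_resistant + n_sensitive
    expected = [total * r / (r + s), total * s / (r + s)]
    observed = [n_resistant, n_sensitive]
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

CRITICAL_1DF_5PCT = 3.84  # chi-square critical value, 1 d.f., alpha = 0.05

# Hypothetical line: 78 resistant, 22 sensitive seedlings (R:S ratio 3.5 < 6.0)
chi_single = fit_to_ratio(78, 22, 3, 1)   # single-locus model
chi_double = fit_to_ratio(78, 22, 15, 1)  # two-locus model
print(f"3:1 model:  chi2 = {chi_single:.2f}")
print(f"15:1 model: chi2 = {chi_double:.2f}")
```

For these counts the 3:1 model gives a chi-square of ~0.5 (accepted at the 5% level) while the 15:1 model gives ~42 (rejected), matching the intent of the empirically derived cutoff.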
If the R:S ratio was >6.0 and <20, then 17 or more resistant seedlings were usually transplanted to allow the identification in the next generation of a “subfamily” that segregated a single insertion and the seedling lethal phenotype. If an appropriate subfamily was identified, cosegregation analysis proceeded as described for lines with a single insertion locus. If an appropriate subfamily was not identified, the line was not analyzed further (Table 3). If the R:S ratio was >20, the line was not usually analyzed because there were likely to be more than two insertion loci. Because of the possibility of having two insertion loci linked to each other, R:S ratio data could not definitively determine the number of insertion loci for every line analyzed. For a small number of lines, cosegregation analysis was done according to McElver et al. (2001). Experiments were performed similarly for Ds lines, except 50 mg/liter kanamycin monosulfate was used. In addition, 20 Ds lines segregated only resistant progeny (Table 2), which indicated that each line was homozygous for a Ds element and heterozygous for the lethal mutation (Tables 4 and 5). This situation arose because F4 seeds derived from homozygous F3 plants were used for some Ds lines, while T2 seeds derived from hemizygous T1 plants were used for T-DNA lines (see results). Among these 20 lines, it is most likely that each has a single Ds element that did not cosegregate with the lethal phenotype. It remains possible that a very small number of these lines contained two Ds elements and were tagged. The following calculations provide an estimate of the accuracy of the cosegregation analysis. Although most lines designated as tagged contained an insertion that caused the lethal phenotype, at a low frequency, it is possible that an apparently tagged line had an insertion that was tightly linked to a second mutation with a lethal phenotype but that no recombination was detected between the two mutations. 
In this case, the line is not truly tagged because the insertion did not disrupt the gene responsible for the lethal phenotype. If no recombinant plants were detected among 30 resistant plants analyzed, the recombination frequency (p) between the insertion and a hypothetical, linked second mutation was ≤0.034 (or 3.4 cM). Thus, a hypothetical, linked mutation would have been within a 6.8-cM interval spanning the insertion. Each line used for cosegregation is estimated to harbor about three mutations on the basis of the cosegregation frequency of 29–33% (Table 3); this estimate implies that on average there are one insertion and two other mutations per line. With ~550 cM in the Arabidopsis genome (Schmidt 1998), the frequency of a hypothetical, linked mutation being in that 6.8-cM region is 2.5% [2 × (6.8/550)]. Therefore, the designation of a line as tagged based on cosegregation of the lethal phenotype and the selectable marker is likely to be correct for 97.5% of the lines when 30 resistant plants were analyzed. The cosegregation process, which consists of a single self-cross of a heterozygote in most cases, served as an opportunity for other mutations to segregate away from the seedling-lethal mutations in these lines. Because of the large scale of the experiments, these mutants were not subjected to backcrossing. Based on the cosegregation data (Table 3), there are likely to be only about three to five mutations in most lines, which is much less than in standard Arabidopsis EMS seed mutageneses based on the frequency of embryo-lethal mutants observed (Redei and Koncz 1992; McElver et al. 2001). It seems likely that most background mutations would not prevent the detection of a seedling-lethal phenotype because they would be weaker or affect processes later in development.

Molecular biology: Arabidopsis genomic DNA was prepared according to Reiter et al. (1992) or using the Nucleon PhytoPure Plant DNA isolation kit (Amersham International, Buckinghamshire, England) or the Puregene DNA isolation kit (Gentra Systems, Minneapolis) as modified by McElver et al. (2001). Other procedures were carried out according to standard methods (Ausubel et al. 1998).

Plasmid rescue: For each pPCVICEn4HPT and pSKI015 T-DNA line with a tagged seedling-lethal mutation, genomic DNA was isolated from tissue collected from either heterozygotes or a mixture of homozygotes, heterozygotes, and wild-type plants. Following Southern blot analysis to determine appropriate restriction enzymes to use for plasmid rescue, genomic DNA was cut with an appropriate restriction enzyme to rescue the right or left border of the T-DNA. The ligated genomic DNA was transformed into Escherichia coli cells and ampicillin-resistant colonies were isolated. Plasmid clones from these colonies were analyzed by restriction enzyme digestion and sequenced to determine the location of the insertion in the Arabidopsis genome.

Thermal asymmetric interlaced PCR: Thermal asymmetric interlaced (TAIL)-PCR (Liu et al. 1995) was performed as modified by McElver et al. (2001). The arbitrary degenerate primers and T-DNA primers used are described in McElver et al. (2001). The Ds nested primers used were 5a (5′ ACTAGCTCTACCGTTTCCGTTTCCGTTTAC 3′), 5b (5′ TTACCTCGGGTTCGAAATCGATCGGGATAA 3′), 5c (5′ AAATCGGTTATACGATAACGGTCGGTACGGGA 3′), 3a (5′ GGGTCTTGCGGATCTGAATATATGTTTTCATGTGTG 3′), 3b (5′ TACCGAACAAAAATACCGGTTCCCGTCCGATTTCGAC 3′), and 3c (5′ GGATCGTATCGGTTTTCGATTACCGTATTTATCC 3′). The DNA sequence of PCR products was determined as described by McElver et al. (2001).

Confirmation of sequences flanking insertions: Results from plasmid rescue experiments were confirmed either by Southern blot with a probe derived from the flanking genomic DNA or by PCR with one primer in the insertion and the other in the flanking genomic DNA. Results from TAIL-PCR were confirmed by a second PCR reaction with a gene-specific primer and an insertion-specific primer. For four T-DNA lines, results were considered confirmed when the same border sequence was obtained from TAIL-PCR reactions with two or more different arbitrary degenerate primers. Both borders of each insertion were identified and confirmed for all but three of the lines.

Photography and image processing: Plants were photographed with a DEI-750 video camera (Optronics Engineering, Goleta, CA) and images were captured with Scion Image (Scion Corporation, Frederick, MD) software. Images were adjusted for brightness, contrast, and color and assembled for figures with Adobe (San Jose, CA) Photoshop (version 5.5).

Isolation of seedling-lethal mutants: To identify mutants with a seedling-lethal phenotype, we screened both T-DNA and Ds transposon Arabidopsis mutant collections. The generation of the T-DNA collection has been described (McElver et al. 2001). For the T-DNA lines, we screened T2 self progeny from a single T1 parent (McElver et al. 2001), so that we expected to find one-quarter homozygous mutant progeny for a line segregating a recessive mutation. We generally performed our screening by examining the growth of seedlings on MS media without sugar to allow us to identify the greatest number of lethal mutants possible. Addition of sugar to the media might rescue the phenotype of some mutants that show lethality on media without sugar. From 26,187 independent T-DNA lines, we isolated 407 lines segregating seedling-lethal mutants. Although we used some T-DNA lines generated with an activation tagging vector (pPCVICEn4HPT and pSKI015, GenBank accession no. AF187951; Walden et al. 1994; Weigel et al. 2000), we did not observe any dominant or semidominant lethal mutants in this screen. Plants carrying a dominant lethal mutation would presumably die in the T1 generation, so that such mutations would not be present in the T2 progeny. The frequency of seedling-lethal mutants identified from the activation tagging lines (1.52%; 55/3609) and the frequency from the other T-DNA lines (1.56%; 352/22,578) are essentially identical. This result suggests that most of the mutants identified in the activation tagging lines were the result of loss of gene function rather than altered gene function. In addition, molecular analysis of the tagged seedling-lethal mutants indicated that most of the lines have a T-DNA disrupting an ORF (see below). When an ORF is disrupted by a T-DNA, it is most likely that the protein function is disrupted rather than being misexpressed.

To make a direct comparison between T-DNA lines and Ds transposon lines, we also screened a large collection of Ds lines (Sundaresan et al. 1995). This collection was generated by crossing parental Ac and Ds lines and selecting for F2 progeny with transposition events unlinked to the original Ac and Ds elements. This approach provided the opportunity to identify insertions into most of the genome (Parinov et al. 1999). F3 seeds from each F2 plant were collected individually and a stock of F4 seeds was sometimes created by collecting seeds from kanamycin-resistant F3 progeny. Using the same seedling screening protocol as for the T-DNA lines, we screened F3 or F4 progeny from 12,196 lines and identified 98 lines with a seedling-lethal mutant phenotype. Unlike the T-DNA lines, most of the Ds lines in the collection were not screened for an embryo-defective phenotype.

Phenotypic classes: The seedling-lethal mutants displayed a wide range of phenotypes, which were classified as affecting pigmentation and/or morphology (Figures 1 and 2). Among the T-DNA and Ds insertion mutants, the frequencies of pigmentation (81% vs. 79%), pigmentation and morphology (11% vs. 12%), and morphology (8% vs. 9%) mutants were nearly identical (Table 1).
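A chi-square test of homogeneity makes the "nearly identical" claim concrete. The counts below are our reconstruction from the Table 1 percentages and the population sizes (407 T-DNA and 98 Ds lines), so they are approximate:

```python
# Sketch: chi-square test of homogeneity for the phenotype-class distributions
# of the T-DNA and Ds seedling-lethal mutants. Counts are approximate,
# reconstructed from the quoted percentages (81/11/8% of 407; 79/12/9% of 98).
tdna_counts = [329, 45, 33]  # pigmentation, pigmentation + morphology, morphology
ds_counts = [77, 12, 9]

def chi_square_homogeneity(row_a, row_b):
    """Pearson chi-square statistic for a 2 x k contingency table."""
    total_a, total_b = sum(row_a), sum(row_b)
    grand = total_a + total_b
    stat = 0.0
    for a, b in zip(row_a, row_b):
        column = a + b
        for row_total, observed in ((total_a, a), (total_b, b)):
            expected = row_total * column / grand
            stat += (observed - expected) ** 2 / expected
    return stat

CRITICAL_2DF_5PCT = 5.99  # chi-square critical value, 2 d.f., alpha = 0.05
stat = chi_square_homogeneity(tdna_counts, ds_counts)
print(f"chi2 = {stat:.2f} (5% critical value {CRITICAL_2DF_5PCT})")
```

The statistic comes out far below the 5% critical value, consistent with the two mutageneses recovering the same spectrum of phenotypic classes.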
This distribution of mutants seems to differ from that obtained in another large-scale seedling mutant screen, which found that only 50% of seedling mutants had defects in pigmentation but not morphology (Jurgens et al. 1991). This difference might stem from the other screen having used a different classification scheme or not having been limited to lethal mutants. The frequency of pigmentation subclasses in our study was also comparable between the two mutant populations (Table 1). The albino, yellow, and pale green mutants (Table 1 and Figure 1, A, B, and E–H), which made up the majority of the mutants, included a range of pigmentation phenotypes, and the assignment of mutants to these subclasses was based on visual inspection. We isolated 12 mutants exhibiting an albino phenotype on media without sucrose and a striking purple-tinted (“fusca”) phenotype superimposed on the albino phenotype on media containing sucrose (Figure 1, C and D). Because only a subset of mutants were grown on both types of media, there are likely to be more mutants in this subclass that have been classified in this study as albino. About 1 week after germination on media containing sucrose, the purple coloration begins to fade and the seedlings gradually appear more like typical albinos. Wild-type Arabidopsis seedlings grown on media with sucrose exhibit a much milder version of this purple phenotype, which is due to anthocyanin accumulation, particularly in the hypocotyl at its junction with the cotyledons and along the edges of the cotyledons (Kubasek et al. 1992). Two mutants had a distinctive phenotype of green cotyledons and small white leaves (Figure 1I). This phenotype has been observed previously in Arabidopsis for mutants in the TZ, TH-1, and PY genes, which are likely to encode thiamin biosynthetic enzymes (Li and Redei 1969; Koornneef and Hanhart 1981). Both of these mutants seem to be thiamin auxotrophs because they appeared normal when grown on media supplemented with 0.1 mM thiamin (data not shown). Eleven of 13 dark green lethal mutants also displayed morphological defects (for example, Figure 1, K and L), which may imply that the dark green phenotype is indicative of a type of defect different from other pigmentation defects. A dark green leaf phenotype has been reported for dwarf mutants with a deficiency in either brassinosteroid or gibberellic acid hormonal pathways, and it has been suggested that this defect may be due to smaller cell size (Clouse et al. 1996; Bennett et al. 1998).

Figure 1. Seedling-lethal pigmentation phenotypes. (A) GT0946, albino, no leaves (day 15). (B) 4036, albino, with leaves (day 14). (C) 4788, albino (day 7). (D) 4788, fusca (day 7). (E) 245, pale green (day 14). (F) GT6839, yellow leaves, albino cotyledons (day 12). (G) GT1209, yellow, small cotyledons, reduced root growth (day 14). (H) GT0992, yellow, small cotyledons, reduced root growth (day 14). (I) 5007, white leaves, green cotyledons (day 14). (J) 2973, fusca cotyledons with purple dots (day 18). (K) 22084, dark green, cotyledons not separated (day 14). (L) 22433, dark green, variable cotyledon number and size, short thick hypocotyl, little root growth (day 13). All seedlings were grown on MS media, except B and E were supplemented with 5% sucrose and D, F, and K were supplemented with 2% sucrose.

Figure 2. Seedling-lethal morphological phenotypes. (A) 3963, (right) small leaves with irregular margins (arrows) and a wild-type sibling (left). (B) 44446, no leaves, three small cotyledons, thick hypocotyl, reduced root growth. (C) 59928, no leaves, short thick hypocotyl. (D) 55582, single concave cotyledon, reduced hypocotyl, very little root growth (not visible). (E) 59438, single cotyledon, two small leaves (arrow), reduced hypocotyl, no root. (F) 59930, single cotyledon. (G) ET4386, two seedlings with variable cotyledon number, disrupted phyllotaxy, little root growth. (H) 58972, stubby cotyledons, variable cotyledon number, very little root growth. (I) 47091, variably shaped cotyledons, possibly four cotyledons, reduced hypocotyl, very little root growth. (J) 59270, two seedlings with three or four cotyledons, short and thick hypocotyl, little root growth. (K) 59095, no leaves, small closed cotyledons, short and thick hypocotyl, very little root growth. (L) ET5262, no leaves, small closed cotyledons, little root growth. (M) 58424, no leaves. (N) 57348, elongated hypocotyl (arrowheads), small leaves, pale green. (O) 46153, two seedlings with an elongated hypocotyl (arrowheads), elongated leaf petioles (arrows). (P) GT5602, small cotyledons, reduced hypocotyl, very little root growth. (Q) 54196, no root (arrowhead), short hypocotyl. (R) 5283, seedlings with very little root growth compared with wild-type (Col-0) seedling, variable cotyledon defect (arrowheads). (S) Col-0, root with small root hairs. (T) 5283, root with decreased overall length and longer root hairs compared to S. All seedlings were grown on MS media, except C, D, M, N, and R were supplemented with 2% sucrose and S and T were grown on GM. Seedlings in R, S, and T were grown on plates placed at a slant so that the roots would grow on the agar surface.

In addition to the pigmentation-defective lethal mutants, mutants were isolated with a wide array of morphological defects (Figure 2). We observed phenotypes ranging from those that seemed to affect only a single structure to others that seemed to affect all seedling structures (leaves, cotyledons, hypocotyl, and roots). Mutant 3963 appeared normal, except for small leaves with irregular margins (Figure 2A). For lethals with defects in cotyledon number, either a single cotyledon (Figure 2, D–F) or multiple cotyledons (Figure 2, B and G–J), the number of cotyledons often varied among the seedlings of a given mutant line.
More than 40 mutants exhibited a short, thick, or reduced hypocotyl (Figure 2, B–D and H). Four mutants displayed an elongated hypocotyl (Figure 2, N and O). Previously isolated elongated hypocotyl mutants, which affected gibberellic acid or light signal transduction, were viable (Jacobsen and Olszewski 1993; Briggs and Huala 1999; Reed et al. 2000), so that these essential genes might define novel components in these pathways. Ethylene and auxin are also known to control hypocotyl elongation (Collett et al. 2000). More than 30 mutants had very little or no root growth (Figure 2, P–R). For mutant 5283, although we saw increased root growth on media with sucrose (Figure 2T) compared to media without sucrose (Figure 2R), there was still considerably less root elongation than in wild type (Figure 2S). Several genetic screens have identified mutants that lack roots or have reduced root systems, but the number of Arabidopsis genes with this mutant phenotype remains unclear (Mayer et al. 1991; Cheng et al. 1995; Berleth et al. 1996; Scheres et al. 1996). Because many of the seedling-lethal mutants exhibited defects soon after germination, these mutants were also examined for phenotypes during embryogenesis. For the T-DNA lines, 191 of 407 seedling-lethal mutants had a detectable embryo-defective phenotype (data not shown; McElver et al. 2001). For the Ds lines, 35 of 72 examined seedling-lethal mutants had a detectable embryo-defective phenotype (data not shown). Generally, these embryo phenotypes appeared late in embryogenesis, for example, as pale mature or albino embryos.

Genetic analysis of mutants: Because Arabidopsis insertional mutants also contain noninsertional mutations, it was necessary to determine whether an insertion cosegregated with the seedling-lethal phenotype. If the insertion cosegregated with the lethal phenotype, the mutant was considered tagged; if the insertion did not cosegregate with the lethal phenotype, the line was considered not tagged and a noninsertional mutation was likely to be the cause of the lethal phenotype. From the initial R:S ratio in cosegregation analysis, the number of insertion loci in a line was determined (see materials and methods; Table 2). Based on the segregation of the selectable markers, 54% of the T-DNA lines had a single insertion locus, while 94% of the Ds lines had a single insertion locus. The higher number of insertion loci per line in the T-DNA lines probably explains the higher frequency of seedling-lethal mutants isolated from the T-DNA population compared to the Ds population. From the cosegregation analysis, we identified 32 T-DNA and 32 Ds lines as tagged (Table 3). The frequency of tagged lines in both populations was about one-third. We refer to the mutants by line number and have not named the corresponding genes because the large number of mutants isolated in this study made the use of complementation tests or genetic mapping inefficient for this purpose. Instead, we used the molecular position of an insertion in the Arabidopsis genome to identify mutants with disruptions of the same gene.

Molecular analysis of mutants: For each tagged seedling-lethal mutant line identified, we attempted to identify the DNA sequence of the gene or genes disrupted by the insertion. Table 4 shows a summary of this molecular analysis. Initially, to isolate Arabidopsis genomic DNA sequences adjacent to T-DNA insertions, we used a plasmid rescue approach (Holsters et al. 1982) and obtained flanking sequences for 12 of 13 lines. Although plasmid rescue was an effective tool to identify genomic sequence flanking insertions, we subsequently switched to TAIL-PCR (Liu et al. 1995) because it allowed a higher throughput. We used TAIL-PCR experiments to identify flanking sequences for 15 of 19 T-DNA lines. We used only TAIL-PCR for Ds insertions, which lack the sequence elements necessary for plasmid rescue, and obtained flanking sequences for 31 of 32 lines. BLASTn (Altschul et al. 1997) was used to identify the insertion position in Arabidopsis genomic sequence entries in GenBank. Additional BLAST searches identified genes with sequence similarity from Arabidopsis and other species. Results from plasmid rescue and TAIL-PCR experiments were confirmed either by Southern blot or by PCR (see materials and methods). For Ds lines ET2614 and GT2929, we were unable to clearly identify the essential gene due to a Ds insertion into the indole acetic acid hydrolase (iaaH) negative selectable marker, which might have been linked to the lethal mutation (Sundaresan et al. 1995; Parinov et al. 1999). For 10 of 64 mutants, no confirmed Arabidopsis genomic sequence flanking an insertion was recovered (Table 4). For 15 of the remaining 54 mutants, it was not possible to easily assign the gene that was responsible for the lethal phenotype because an insertion was between two genes or was accompanied by a deletion or rearrangement affecting more than one gene (Table 4). Chromosomal rearrangements have been reported previously for T-DNA lines (Castle et al. 1993; Nacry et al. 1996). Additional analysis of 30 of 37 genes disrupted in these seedling-lethal mutants included the identification, either experimentally or from GenBank, of a cDNA clone that contained the entire protein-coding sequence (G. J. Budziszewski and J. Z. Levin, unpublished results).

Molecular analysis of the tagged seedling-lethal mutants revealed that a diverse set of genes is essential for seedling viability. Table 5 shows the locations of insertions within Arabidopsis genomic clones and the identities of the genes disrupted in 20 of the 39 mutants for which the responsible gene could be deduced. A detailed molecular characterization of the remaining 19 mutants will be presented in a future article. For two other lines, 868 and ET4401, the gene responsible for the seedling-lethal phenotype could not be identified (Table 5). Line 868 appears to contain a rearrangement or deletion that spans a region including the gene disrupted in line 4144, so that the lethal phenotypes of these two lines might be due to the inactivation of the same gene. Among the genes identified in Table 5 are four that were previously shown to have seedling-lethal phenotypes: DET1 (Pepper et al. 1994), CLA1 (Mandel et al. 1996), KEULE (Assaad et al. 2001), and PALE CRESS (PAC; Reiter et al. 1994), which may have a role in chloroplast mRNA maturation (Meurer et al. 1998). Disruption of the translational apparatus resulted in a pigmentation-defective lethal phenotype in line 245, which had a defect in a putative peptide chain release factor predicted by TargetP (Emanuelsson et al. 2000) to be localized in the mitochondria. Disruption of the photosynthetic apparatus also resulted in pigmentation-defective lethal phenotypes in lines 4144, with a defect in a putative chloroplast ATP synthase δ-subunit, and GT1802, with a defect in a putative cytochrome b6-f complex iron-sulfur subunit. Antisense experiments with two similar genes in tobacco resulted in plants with extremely slow growth due to decreases in photosynthesis (Price et al. 1995). In contrast to these four lines, the identification of a putative RNA splicing gene as defective in line 5283, which exhibited reduced root growth and variable defects in the number and shape of cotyledons and leaves (Figure 2, R and T), did not provide a clear explanation of the defect in this mutant; however, these data can be used as a starting point in future studies of this mutant.
Consistent with the mutant phenotype, the mRNA for the gene disrupted in line 5283 is detected by Northern blot analysis in both roots and aboveground seedling tissues (data not shown). In summary, the predicted roles for the identified essential genes (Table 5) indicate that a diverse set of pathways and processes within the plant contains an essential component.

Nonmevalonate isoprenoid pathway mutants: Three of the genes identified in this study act in the recently discovered nonmevalonate isoprenoid pathway. In plants, two independent pathways are responsible for the synthesis of isoprenoids: a cytosolic acetate/mevalonate pathway and a plastidic nonmevalonate 1-deoxy-d-xylulose-5-phosphate pathway (reviewed in Lichtenthaler 1999 and Lichtenthaler et al. 2000). Initial studies in Scenedesmus obliquus suggested that such a pathway existed, but it was unclear whether there was redundancy between the two pathways (Schwender et al. 1996). Subsequently, the genes involved in this pathway have been identified and characterized in E. coli. We identified albino seedling-lethal mutants disrupting the genes encoding the first three enzymes in this pathway. This phenotype was likely due to a block in the formation of carotenoids, phytol side chains of chlorophylls, and plastoquinone-9, which are products of this pathway. The first enzyme, 1-deoxy-d-xylulose-5-phosphate synthase (DXS), converts pyruvate and glyceraldehyde 3-phosphate to 1-deoxy-d-xylulose-5-phosphate (Lois et al. 1998). Line 1055 disrupts the Arabidopsis gene encoding DXS (Table 5), which has been previously identified as CLA1 on the basis of its albino mutant phenotype (Mandel et al. 1996). The second enzyme, 1-deoxy-d-xylulose-5-phosphate reductoisomerase (DXR), converts 1-deoxy-d-xylulose-5-phosphate to 2-C-methyl-d-erythritol 4-phosphate (Takahashi et al. 1998), and the Arabidopsis homolog has been cloned and characterized (Schwender et al. 1999). Line 4036 (Figure 1B) shows that this Arabidopsis homolog is an essential gene. The third enzyme, 4-diphosphocytidyl-2C-methylerythritol synthase, converts 2-C-methyl-d-erythritol 4-phosphate and CTP to 4-diphosphocytidyl-2C-methyl-d-erythritol (Rohdich et al. 1999), and the Arabidopsis homolog has been cloned and characterized (Rohdich et al. 2000). Line GT0946 (Figure 1A) shows that this Arabidopsis homolog is also an essential gene. Interestingly, the enzymes in this pathway are not found in animals and have been proposed to be novel targets for herbicides and antibacterial drugs based on the compound fosmidomycin (Rohmer 1998).

Chloroplast protein translocation: Two of the mutants identified in this study may disrupt the translocation of nuclear-encoded proteins into the chloroplast. Most of these proteins are imported by protein complexes composed of Toc (translocons of the outer envelope of chloroplasts) and Tic (translocons of the inner envelope of chloroplasts) proteins (Schleiff and Soll 2000). After import into the chloroplast, four pathways are proposed to be involved in the translocation of proteins into or across the thylakoid membrane (Robinson et al. 2001). Membrane proteins are translocated by an SRP-dependent or spontaneous pathway; lumen proteins are translocated by a Sec-dependent or ΔpH-dependent pathway. Line 2490, which had a pale green phenotype, contained a disruption in a gene similar to the pea Tic40 (Stahl et al. 1999) and Brassica napus Toc36 genes (Ko et al. 1995; Table 5). Tic40 has been shown to be an inner chloroplast envelope-localized protein acting in protein translocation and displays some sequence similarity to Hsp70-interacting proteins, but its exact role remains unclear. Although some Arabidopsis chloroplast protein translocation mutants, e.g., ppi2 (Bauer et al. 2000) and 2490, display lethal phenotypes, others such as ppi1 (Jarvis et al. 1998), ffc, and chaos (Amin et al. 1999) have a pale nonlethal phenotype that might be a result of partial functional redundancy among import pathways. Lines GT6839, GT8096, and ET7536, which had a yellow phenotype (Figure 1F), contained a disruption in the Arabidopsis homolog of the E. coli tatC gene (Table 5; Bogsch et al. 1998; Mori et al. 1999). The pea tatC protein has been shown to be required for the thylakoid ΔpH-dependent pathway in vitro (Mori et al. 2001). Maize mutants in the tha4 and hcf106 genes, which are similar to E. coli tatA and tatB, disrupt this pathway and have a seedling-lethal phenotype (Settles et al. 1997; Walker et al. 1999).

Predicted localization of essential seedling proteins: In light of the large fraction of seedling-lethal mutants with pigmentation defects (Table 1), we attempted to determine whether these mutants had molecular defects in chloroplast function. We used the TargetP program (Emanuelsson et al. 2000) to predict whether the genes identified in this study encode proteins containing chloroplast transit peptides (CTPs) at their N termini. TargetP is reported to be the most accurate method for predicting the presence of CTPs and is estimated to be correct for 85% of plant proteins. In most cases, the coding region for these proteins was derived from full-length cDNA sequences (data not shown). Nine of 13 genes from mutants with only a pigmentation phenotype in Table 5 were predicted to contain a CTP. Analysis of a larger set of 30 essential genes identified from mutants with only a pigmentation phenotype in this study indicated that 21 genes were predicted to contain a CTP (J. Z. Levin, unpublished results). None of the four genes from mutants with morphological phenotypes in Table 5 was predicted to contain a CTP (data not shown). Analysis of three additional genes from mutants with morphological phenotypes in this study predicted that these genes also would not contain a CTP (J. Z. Levin, unpublished results).
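To ask whether this excess of predicted chloroplast proteins is statistically meaningful, a one-sided exact binomial test can be applied. The sketch below is ours, comparing the 21-of-30 CTP predictions with the ~14% of all Arabidopsis proteins predicted to carry a CTP (Arabidopsis Genome Initiative 2000):

```python
# Sketch: one-sided exact binomial test for enrichment of predicted chloroplast
# transit peptides (CTPs). Observed: 21 of 30 pigmentation-mutant genes with a
# predicted CTP; genome-wide background: ~14% of Arabidopsis proteins.
from math import comb

def binomial_tail(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p): the chance of seeing k or more
    CTP predictions if they occurred at the background rate."""
    return sum(comb(n, i) * p ** i * (1 - p) ** (n - i) for i in range(k, n + 1))

p_value = binomial_tail(21, 30, 0.14)
print(f"P(X >= 21 | n=30, p=0.14) = {p_value:.2e}")
```

The tail probability is vanishingly small, far below any conventional significance threshold, supporting the enrichment of chloroplast proteins among the pigmentation-defective lethals.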
Among all Arabidopsis proteins, ~14% are predicted by TargetP to have CTPs (Arabidopsis Genome Initiative 2000). These results show a significant enrichment of chloroplast proteins in the pigmentation-defective mutant class. Gene family membership: In light of the high frequency of genes belonging to gene families in Arabidopsis (Arabidopsis Genome Initiative 2000), we analyzed the genes identified in this study as essential for seedling viability for their membership in gene families. In an analysis of the entire genome, 65% of Arabidopsis genes were considered to be members of gene families on the basis of BLAST (Altschul et al. 1997) and FASTA (Pearson and Lipman 1988) analyses (Arabidopsis Genome Initiative 2000). Using the same criteria, we determined that 44% of the 18 essential genes identified in Table 5 were members of gene families. Three of these essential genes are in gene families with two members; 1 is in a gene family with three members; 1 is in a gene family with four members; and 3 are in gene families with more than five members. This distribution is similar to that found for the entire genome (Arabidopsis Genome Initiative 2000). Analysis of a larger set of 37 essential genes identified from mutants in this study indicated that 41% are members of gene families (J. Z. Levin, unpublished results). With the determination of the genome sequence of Arabidopsis complete, the challenge for plant biologists is to understand the function of every gene. It is estimated that there are ~25,500 genes in Arabidopsis (Arabidopsis Genome Initiative 2000), but only a small fraction of them have been characterized. Determination of the function of all Arabidopsis genes is a goal of the Arabidopsis research community before the year 2010 (Chory et al. 2000). One approach to this problem is to generate large insertional mutant collections and to use them to identify mutants in each of the predicted genes (reviewed in Parinov and Sundaresan 2000). 
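The chloroplast transit peptide (CTP) enrichment reported earlier in this section (21 of 30 pigmentation-mutant genes predicted to carry a CTP, against a ~14% genome-wide background) can be sanity-checked with a one-sided binomial tail probability. The sketch below is not the authors' analysis, just a quick plausibility check under a simple binomial model:

```python
from math import comb

def binom_sf(k: int, n: int, p: float) -> float:
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))

# 21 of 30 pigmentation-mutant genes predicted to have a CTP, versus the
# ~14% background rate across all Arabidopsis proteins.
p_value = binom_sf(21, 30, 0.14)
print(f"P(X >= 21 | n=30, p=0.14) = {p_value:.3e}")
```

Under these assumptions the tail probability is far below any conventional significance threshold, consistent with the enrichment claim in the text.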
With the aim of identifying the genes necessary for seedling viability, we isolated and molecularly characterized Arabidopsis seedling-lethal mutants on a genome-wide scale. Using both T-DNA and Ds insertion mutant populations, we isolated >500 seedling-lethal mutants and molecularly characterized >50 genes disrupted in these mutants. Among these essential genes are some that might be considered “housekeeping genes” such as a peptide release factor in line 245 and an ATP synthase subunit in line 4144 and others that might be classified as “developmental regulatory genes” such as DET1 (Pepper et al. 1994) in line ET5745 (Table 5). A high proportion of seedling-lethal mutants with pigmentation defects are likely to affect nuclear-encoded chloroplast proteins. Seedling development seems to depend primarily on chloroplast function because of the need for energy. This hypothesis is supported by the ability of sucrose to bypass the need for light in dark-grown wild-type Arabidopsis seedlings that were able to flower when grown on vertical petri dishes with sucrose-containing media (Roldan et al. 1999). Although many complex processes involve the chloroplast during seedling development, such as coordination of plastid and nuclear gene expression (Susek et al. 1993), it is unclear whether these functions have an essential component beyond the need for energy. Beyond the usefulness of the identification of a loss-of-function phenotype for a gene, each mutant can be a starting point for future detailed characterization of specific cellular and developmental processes. This collection of seedling-lethal mutants could be a resource for such experiments. The application of other functional genomic methods, such as mRNA or metabolite profiling, to these seedling-lethal mutants may yield a greater understanding of the roles these genes play in plant growth and development. 
In particular, mutants disrupting the nonmevalonate isoprenoid pathway (Table 5) could shed light on the regulation of these genes and metabolites within this newly discovered biosynthetic pathway. Seedling-lethal mutants have been identified previously in other types of genetic screens, including those for pigmentation defects. High chlorophyll fluorescence (hcf) occurs in plants when there is a reduction in photosynthetic activity beyond photosystem II and it can be visually detected as red plants in response to UV irradiation. In a screen for hcf mutants, 23 of the 34 Arabidopsis mutants identified were also seedling lethals (Meurer et al. 1996). Although it is unresolved how many genes can mutate to a hcf phenotype, there is clearly an overlap between mutants that could be found in a hcf screen and those identified in this study as seedling lethal. Only 5 of the 130 maize hcf mutants identified were allelic with each other, indicating that this is a large class of genes (Miles 1994). Molecular analysis of a limited number of maize hcf genes has revealed chloroplast proteins acting in protein translocation (Settles et al. 1997), mRNA processing and translation (Fisk et al. 1999), and translation (Schultes et al. 2000). Genes identified in our study play roles in these processes as well (Table 5). A screen for chlorophyll-deficient xantha Arabidopsis mutants identified many mutants but focused on seven genes specifically affecting chlorophyll synthesis or integration into the photosynthetic membrane (Runge et al. 1995). Multiple large-scale duplication events during the last 200 million years have been proposed to explain the extensive duplication within the Arabidopsis genome (Vision et al. 2000). As a result of these duplications, Arabidopsis is reported to have 65% of its genes within gene families compared to only 29% for S. cerevisiae, 28% for D. melanogaster, and 45% for C. elegans (Arabidopsis Genome Initiative 2000). 
Members of gene families can exhibit functional redundancy, depending on the extent of divergence of function by changes in their coding sequences and/or expression patterns. It will be valuable to determine the extent of functional redundancy within the entire Arabidopsis genome. An interesting example of partial redundancy within gene families has been reported for the Arabidopsis CAULIFLOWER and APETALA1 genes in which double mutants have a dramatic cauliflower-like floral meristem defect, while cauliflower single mutants have a wild-type phenotype and apetala1 single mutants have a milder floral-defective phenotype (Kempin et al. 1995). For the large R2R3 MYB transcription factor gene family, it appears that there may be considerable functional redundancy as a significant number of genes have no visible single mutant phenotype, although further study may reveal subtle phenotypes (Meissner et al. 1999). Somewhat surprisingly, 41% of the genes found in this study to be essential for seedling viability are members of gene families. For these genes, no other gene family member can replace the function of the mutated gene. While it is possible that all the members of a gene family have the same function and the lethal phenotype is the result of a decrease in the aggregate level of gene function below a threshold, it seems more likely that there has been a divergence in function for these genes. One other possibility is that members of a gene family encode proteins that function nonredundantly in different cellular compartments, e.g., cytoplasm and chloroplast. These results suggest that many of the members of gene families in Arabidopsis may have nonredundant functions. Although we isolated >500 seedling lethals, our research program is ongoing and additional effort will be required to establish exactly how many genes can mutate to this phenotype. An estimate of how close we are to saturation in this screen can be made in several ways. 
First, for a phenotypic class with a known number of genes based on previous saturation mutageneses, we can compare the number of mutants isolated in this study. We detected five mutants with a fusca phenotype (Table 1) and 10 genes could have been detected in a seedling screen (Misera et al. 1994). We detected two seedling-lethal mutants with white leaves and thiamin auxotrophy (Table 1) and 3 genes with this phenotype are known (Koornneef and Hanhart 1981). Both of these results suggest that there are >500 genes with a seedling-lethal phenotype. Second, an estimate can be made relative to the emb lethals isolated from the same T-DNA population (McElver et al. 2001). McElver and co-workers found that 2.5% of T-DNA lines segregate an emb mutant and they estimate that there are 500–750 emb genes (McElver et al. 2001). In comparison, 1.6% of T-DNA lines segregated a seedling-lethal mutant, suggesting that 320–480 genes are in this class. Third, only 1 gene was mutated more than once among the 54 tagged lines characterized molecularly. Thus, the molecular analysis will require additional information on multiple alleles before we can use this criterion to determine the extent of saturation. As part of our seedling-lethal screen, we performed a large-scale direct comparison of T-DNA and Ds transposon insertional mutagenesis methods in Arabidopsis. The spectrum of phenotypes obtained in each screen appears to be similar (Table 1). Many of the differences between the methods result in T-DNA lines being more complex to analyze than Ds lines. On average, T-DNA lines had more insertion loci per line than Ds lines (Table 2). At a given insertion locus, a T-DNA line often had more than one copy and the insertion frequently was partially rearranged, while the Ds elements showed no evidence of rearrangements (data not shown). T-DNA lines were more likely to affect multiple genes or to have an insertion between two predicted genes. 
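The second saturation estimate above is a simple proportional scaling of the emb figures by the ratio of per-line mutant frequencies. As a sketch (not the authors' code), the arithmetic is:

```python
def scale_gene_estimate(ref_low: float, ref_high: float,
                        ref_freq: float, obs_freq: float) -> tuple:
    """Scale a gene-number range by the ratio of the observed to the
    reference per-line mutant frequency."""
    factor = obs_freq / ref_freq
    return ref_low * factor, ref_high * factor

# 500-750 emb genes at a 2.5% per-line frequency, scaled to the 1.6%
# seedling-lethal frequency observed in the same T-DNA population.
low, high = scale_gene_estimate(500, 750, 0.025, 0.016)
print(f"{low:.0f}-{high:.0f} seedling-lethal genes")  # 320-480
```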
We attempted to identify the gene disrupted by an insertion in 32 T-DNA lines and 32 Ds lines. Among the 15 tagged lines analyzed in which the essential gene could not be identified for these reasons, only 4 were Ds lines (Table 4). We obtained a similar frequency of tagged mutants in both populations (Table 3). This frequency reflects the number of mutations in a line other than insertions carrying the selectable marker. Most of these mutations are likely to be point mutations or partial insertion copies that might be caused by DNA-modifying enzymes involved in T-DNA insertion or transposition. The frequency of 29% for T-DNA mutants is similar to the 36% found by Castle et al. (1993) and the 34% reported by McElver et al. (2001). Finding that only 33% of the Ds mutants were tagged was a surprise to us. Previous reports of Ds transposon tagging frequency in Arabidopsis suggested a higher rate might be found. However, these estimates were based on analysis of fewer mutants. Because each Ac/Ds system was slightly different, it is not possible to definitively determine the causes of the reported tagged-mutant frequency. For the two largest studies reported, 15 of 29 (Long et al. 1997) and 4 of 28 (Altmann et al. 1995) mutants were tagged. In three smaller studies, 3 of 5 (Bancroft et al. 1993), 3 of 4 (Long et al. 1993), and 2 of 6 (Bhatt et al. 1996) mutants were tagged. What might be the reason for these differences? It is possible that the low frequency obtained by Altmann et al. (1995) was due to the absence of selection against the continued presence of the Ac element, resulting in additional excision and insertion events that lowered the frequency of tagged mutants. Because the lines in this study (Sundaresan et al. 1995) and 26 of the lines in Long et al. (1997) were both generated from Ac lines expressing transposase under the control of the 35S promoter, the strength of the promoter used to express transposase does not seem to account for these differences. 
It is possible that previous studies do not have sufficiently large sample sizes to statistically distinguish them from the results in this study. To increase the confidence in the assignments of particular genes as responsible for the seedling-lethal phenotype in a given line, additional experiments will be necessary. These assignments are estimated to be correct in almost every case on the basis of the cosegregation results (see materials and methods). Four of the genes identified here were previously known to be essential (Table 5). Additional efforts could include the isolation of additional alleles, complementation with a wild-type transgene, reversion of Ds mutants, and creation of transgenic plants with antisense or dsRNA constructs (Waterhouse et al. 1998; Chuang and Meyerowitz 2000; Levin et al. 2000). The most efficient strategy for a large-scale effort would probably be the first alternative. We acknowledge Joanna Barton, Paul Burt, Parna Chattaraj, Hodan Guled, Karen Maguylo, and Sarah Williams for technical assistance, and Bob Dietrich, Mark Johnson, David Meinke, and Cathy Frye for critical reading of the manuscript. We thank David Meinke, Mary Ann Cushman, Amy Schetter, and Kelsey Smith (all from Oklahoma State University) for assistance with cosegregation analysis. We are grateful to Joseph Simorowski for sending us additional seeds for the Ds lines on several occasions. We also thank the Syngenta Biotechnology, Inc., sequencing facility, greenhouse facility, and media kitchen for their excellent assistance. Note added in proof: After the submission of the revised version of this article, a report describing the phenotype of Arabidopsis tatC mutants was published (R. Motohashi, N. Nagata, T. Ito, S. Takahashi, T. Hobo et al., 2001, An essential role of a TatC homologue of a Delta pH-dependent protein transporter in thylakoid membrane formation during chloroplast development in Arabidopsis thaliana. Proc. Natl. Acad. Sci. USA 98: 10499–10504). 
, 1995 Ac/Ds transposon mutagenesis in Arabidopsis thaliana: mutant spectrum and frequency of Ds insertion mutants. Mol. Gen. Genet. 247: 646–652. , 1997 Gapped BLAST and PSI-BLAST: a new generation of protein database search programs. Nucleic Acids Res. 25: 3389–3402. , 1999 Arabidopsis mutants lacking the 43- and 54-kilodalton subunits of the chloroplast signal recognition particle have distinct phenotypes. Plant Physiol. 121: 61–70. , 2000 Analysis of the genome sequence of the flowering plant Arabidopsis thaliana. Nature 408: 796–815. , 2001 The cytokinesis gene KEULE encodes a Sec1 protein that binds the syntaxin KNOLLE. J. Cell Biol. 152: 531–543. , (Editors), 1998 Current Protocols in Molecular Biology. John Wiley & Sons, New York. , 1993 Heterologous transposon tagging of the DRL1 locus in Arabidopsis. Plant Cell 5: 631–638. , 2000 The major protein import receptor of plastids is essential for chloroplast biogenesis. Nature 403: 203–207. , 1998 Hormone regulated development, pp. 107–150 in Arabidopsis, edited by Anderson M., Roberts J.. Sheffield Academic Press, Sheffield, England. , 1996 Mutational analysis of root initiation in the Arabidopsis embryo. Plant Soil 187: 1–9. , 1996 Use of Ac as an insertional mutagen in Arabidopsis. Plant J. 9: 935–945. , 1998 An essential component of a novel bacterial protein export system with homologues in plastids and mitochondria. J. Biol. Chem. 273: 18003–18006. , 1984 Genetic activity along 315kb of the Drosophila chromosome. EMBO J. 3: 2537–2541. , 1999 Blue-light photoreceptors in higher plants. Annu. Rev. Cell. Dev. Biol. 15: 33–62. , 1993 Genetic and molecular characterization of embryonic mutants identified following seed transformation in Arabidopsis. Mol. Gen. Genet. 241: 504–514. , 1995 RML1 and RML2, Arabidopsis genes required for cell proliferation at the root tip. Plant Physiol. 107: 365–376. 
, 2000 National Science Foundation-sponsored workshop report: “The 2010 Project” functional genomics and the virtual plant. A blueprint for understanding how plants are built and how to improve them. Plant Physiol. 123: 423–426. , 1996 A brassino-steroid-insensitive mutant in Arabidopsis thaliana exhibits multiple defects in growth and development. Plant Physiol. 111: 671–678. , 2000 Hormonal interactions in the control of Arabidopsis hypocotyl elongation. Plant Physiol. 124: 553–561. , 2000 Predicting subcellular localization of proteins based on their N-terminal amino acid sequence. J. Mol. Biol. 300: 1005–1016. , 1991 Embryonic lethals and T-DNA insertional mutagenesis in Arabidopsis. Plant Cell 3: 149–157. , 1999 Molecular cloning of the maize gene crp1 reveals similarity between regulators of mitochondrial and chloroplast gene expression. EMBO J. 18: 2621–2630. , 1995 Saturating the genetic map of Arabidopsis thaliana with embryonic mutations. Plant J. 7: 341–350. , 2000 Functional genomic analysis of C. elegans chromosome I by systematic RNA interference. Nature 408: 325–330. , 1998 Activation of latent transgenes in Arabidopsis using a hybrid transcription factor. Genetics 149: 633–639. , 1988 Genetics, pp. 17–46 in The Nematode Caenorhabditis elegans, edited by Wood W. B.. Cold Spring Harbor Laboratory Press, Cold Spring Harbor, NY. , 1982 The use of selectable markers for the isolation of plant DNA/T-DNA junction fragments in a cosmid vector. Mol. Gen. Genet. 185: 283–289. , 1993 Mutations at the SPIN-DLY locus of Arabidopsis alter gibberellin signal transduction. Plant Cell 5: 887–896. , 1998 An Arabidopsis mutant defective in the plastid general protein import apparatus. Science 282: 100–103. , 1991 Genetic analysis of pattern formation in the Arabidopsis embryo. Development 91(Suppl. 1): 27–38. , 1995 Molecular basis of the cauliflower phenotype in Arabidopsis. Science 267: 522–525. 
, 1995 Isolation and characterization of a cDNA clone encoding a member of the Com44/Cim44 envelope components of the chloroplast protein import apparatus. J. Biol. Chem. 270: 28601–28608. , 1981 A new thiamine locus in Arabidopsis. Arabidopsis Inf. Serv. 18: 52–58. , 1992 Regulation of flavanoid biosynthetic genes in germinating Arabidopsis seedlings. Plant Cell 4: 1229–1236. , 2000 Methods of double-stranded RNA-mediated gene inactivation in Arabidopsis and their use to define an essential gene in methionine biosynthesis. Plant Mol. Biol. 44: 759–775. , 1969 Thiamine mutants of the crucifer. Arabidopsis. Biochem. Genet. 3: 163–170. , 1999 The 1-deoxy-D-xylulose-5-phosphate pathway of isoprenoid biosynthesis in plants. Annu. Rev. Plant Physiol. Plant Mol. Biol. 50: 47–65. , 2000 The non-mevalonate isoprenoid biosynthesis of plants as a test system for new herbicides and drugs against pathogenic bacteria and the malaria parasite. Z. Naturforsch. 55: 305–313. , 1995 Efficient isolation and mapping of Arabidopsis thaliana T-DNA insert junctions by thermal asymmetric interlaced PCR. Plant J. 8: 457–463. , 1998 Cloning and characterization of a gene from Escherichia coli encoding a transketolase-like enzyme that catalyzes the synthesis of D-1-deoxyxylulose 5-phosphate, a common precursor for isoprenoid, thiamin, and pyridoxol biosynthesis. Proc. Natl. Acad. Sci. USA 95: 2105–2110. , 1993 The maize transposable element system Ac/Ds as a mutagen in Arabidopsis: identification of an albino mutation induced by Ds insertion. Proc. Natl. Acad. Sci. USA 90: 10370–10374. , 1997 Ds elements on all five Arabidopsis chromosomes and assessment of their utility for transposon tagging. Plant J. 11: 145–148. , 1996 CLA1, a novel gene required for chloroplast development, is highly conserved in evolution. Plant J. 9: 649–658. , 1991 Mutations affecting body organization in the Arabidopsis embryo. Nature 353: 402–407. 
, 2001 Insertional mutagenesis of genes required for seed development in Arabidopsis thaliana. Genetics 159: 1751–1763. , 1994 Seed development in Arabidopsis thaliana, pp. 253–295 in Arabidopsis, edited by Meyerowitz E. M., Somerville C. R.. Cold Spring Harbor Laboratory Press, Cold Spring Harbor, NY. , 1999 Function search in a large transcription factor gene family in Arabidopsis: assessing the potential of reverse genetics to identify insertional mutations in R2R3 MYB genes. Plant Cell 11: 1827–1840. , 1996 Isolation of high-chlorophyll-fluorescence mutants of Arabidopsis thaliana and their characterisation by spectroscopy, immunoblotting and northern hybridisation. Planta 198: 385–396. , 1998 The PAC protein affects the maturation of specific chloroplast mRNAs in Arabidopsis thaliana. Mol. Gen. Genet. 258: 342–351. , 1994 The role of high chlorophyll fluorescence photosynthesis mutants in the analysis of chloroplast thylakoid membrane assembly and function. Maydica 39: 35–45. , 1994 The FUSCA genes of Arabidopsis: negative regulators of light responses. Mol. Gen. Genet. 244: 242–252. , 1999 Component specificity for the thylakoidal Sec and Delta pH-dependent protein transport pathways. J. Cell Biol. 146: 45–55. , 2001 Chloroplast TatC plays a direct role in thylakoid (Delta)pH-dependent protein transport. FEBS Lett. 501: 65–68. , 1996 Major chromosomal rearrangements induced by T-DNA transformation in Arabidopsis. Genetics 149: 641–650. , 2000 Functional genomics in Arabidopsis: large-scale insertional mutagenesis complements the genome sequencing project. Curr. Opin. Biotechnol. 11: 157–161. , 1999 Analysis of flanking sequences from dissociation insertion lines. A database for reverse genetics in Arabidopsis. Plant Cell 11: 2263–2270. , 1988 Improved tools for biological sequence comparison. Proc. Natl. Acad. Sci. USA 85: 2444–2448. 
, 1994 DET1, a negative regulator of light-mediated development and gene expression in Arabidopsis, encodes a novel nuclear-localized protein. Cell 78: 109–116. , 1995 Chloroplast cytochrome b6/f and ATP synthase complexes in tobacco: transformation with antisense RNA against nuclear-encoded transcripts for the Rieske FeS and ATPδ polypeptides. Aust. J. Plant Physiol. 22: 285–297. , 1992 Classical mutagenesis, pp. 16–82 in Methods in Arabidopsis Research, edited by Koncz C., Chua N.-H.. World Scientific Publishing, Singapore. , 2000 Independent action of ELF3 and phyB to control hypocotyl elongation and flowering time. Plant Physiol. 122: 1149–1160. , 1992 Genetic linkage of the Arabidopsis genome: methods for mapping with recombinant inbred and random amplified polymorphic DNAs (RAPDs), pp. 170–190 in Methods in Arabidopsis Research, edited by Koncz C., Chua N.-H., Schell J.. World Scientific Publishing, Singapore. , 1994 Control of leaf and chloroplast development by the Arabidopsis gene pale cress. Plant Cell 6: 1253–1264. , 2001 Multiple pathways used for the targeting of thylakoid proteins in chloroplasts. Traffic 2: 245–251. , 1999 Cytidine 5′-triphosphate-dependent biosynthesis of isoprenoids: YgbP protein of Escherichia coli catalyzes the formation of 4-diphosphocytidyl-2-C-methylerythritol. Proc. Natl. Acad. Sci. USA 96: 11758–11763. , 2000 Biosynthesis of terpenoids: 4-diphosphocytidyl-2C-methyl-D-erythritol synthase of Arabidopsis thaliana. Proc. Natl. Acad. Sci. USA 97: 6451–6456. , 1998 Isoprenoid biosynthesis via the mevalonate-independent route, a novel target for antibacterial drugs? Prog. Drug Res. 50: 135–154. , 1999 Sucrose availability on the aerial part of the plant promotes morphogenesis and flowering of Arabidopsis in the dark. Plant J. 20: 581–590. , 1995 Isolation and classification of chlorophyll-deficient xantha mutants of Arabidopsis thaliana. Planta 197: 490–500. 
, 1996 Experimental and genetic analysis of root development in Arabidopsis thaliana. Plant Soil 187: 97–105. , 2000 Travelling of proteins through membranes: translocation into chloroplasts. Planta 211: 449–456. , 1998 The Arabidopsis thaliana genome: towards a complete physical map, pp. 1–30 in Arabidopsis, edited by Anderson M., Roberts J.. Sheffield Academic Press, Sheffield, England. , 2000 Maize high chlorophyll fluorescent 60 mutation is caused by an Ac disruption of the gene encoding the chloroplast ribosomal small subunit protein 17. Plant J. 21: 317–327. , 1996 Biosynthesis of isoprenoids (carotenoids, sterols, prenyl side-chains of chlorophylls and plastoquinone) via a novel pyruvate/glyceraldehyde 3-phosphate non-mevalonate pathway in the green alga Scenedesmus obliquus. Biochem. J. 316: 73–80. , 1999 Cloning and heterologous expression of a cDNA encoding 1-deoxy-D-xylulose-5-phosphate reductoisomerase of Arabidopsis thaliana. FEBS Lett. 455: 140–144. , 1997 Sec-independent protein translocation by the maize Hcf106 protein. Science 278: 1467–1470. , 1999 Tic40, a new “old” subunit of the chloroplast protein import translocon. J. Biol. Chem. 274: 37467–37472. , 1998 Lethal mutations defining 112 complementation groups in a 4.5 Mb sequenced region of Caenorhabditis elegans chromosome III. Mol. Gen. Genet. 260: 280–288. , 1995 Patterns of gene action in plant development revealed by enhancer trap and gene trap transposable elements. Genes Dev. 9: 1797–1810. , 1993 Signal transduction mutants of Arabidopsis uncouple nuclear CAB and RBCS gene expression from chloroplast development. Cell 74: 787–799. , 1998 A 1-deoxy-D-xylulose 5-phosphate reductoisomerase catalyzing the formation of 2-C-methyl-D-erythritol 4-phosphate in an alternative nonmevalonate pathway for terpenoid biosynthesis. Proc. Natl. Acad. Sci. USA 95: 9879–9884. , 2000 The origins of genomic duplications in Arabidopsis. Science 290: 2114–2117. 
, 1994 Isolation of deficiencies in the Arabidopsis genome by γ-irradiation of pollen. Genetics 137: 1111–1119. , 1994 Activation tagging: a means of isolating genes implicated as playing a role in plant growth and development. Plant Mol. Biol. 26: 1521–1528. , 1999 The maize tha4 gene functions in sec-independent protein transport in chloroplasts and is related to hcf106, tatA, and tatB. J. Cell Biol. 147: 267–275. , 1999 Target-based discovery of crop protection chemicals. Nat. Biotechnol. 17: 618–619. , 1998 Virus resistance and gene silencing in plants can be induced by simultaneous expression of sense and antisense RNA. Proc. Natl. Acad. Sci. USA 95: 13959–13964. , 2000 Activation tagging in Arabidopsis. Plant Physiol. 122: 1003–1014. , 1999 Functional characterization of the S. cerevisiae genome by gene deletion and parallel analysis. Science 285: 901–906.
How can I improve the CPU performance of my Amazon EC2 Linux instances? I want to improve the performance of my Amazon EC2 Linux instances. What are some ways I can do that?

We recommend using HVM AMIs for improved performance. HVM AMIs also offer newer instance classes (for example, M5, M4, and R4) and EC2 features such as enhanced networking. For more information, see Linux AMI Virtualization Types.

To improve performance, you can use enhanced networking on supported instance types at no additional charge. Enhanced networking uses single root I/O virtualization (SR-IOV), which is a method of device virtualization that provides higher I/O performance and lower CPU utilization when compared to traditional virtualized network interfaces. For supported instance types and instructions, see Enhanced Networking on Linux and Enhanced Networking on Windows. To enable enhanced networking, the instances must use an HVM AMI and be launched in an Amazon Virtual Private Cloud (Amazon VPC). Note: We recommend using an updated version of the Elastic Network Adapter (ENA) or the Intel 82599 Virtual Function (VF) interface driver.

For storage, using NVMe instance store volumes can assist with performance. The performance improvement from NVMe volumes varies with your workload, kernel version, and instance type. For more information, see Amazon EBS and NVMe and SSD Instance Store Volumes. Note: To use the Kyber I/O scheduler for certain workloads, be sure your Amazon EC2 Linux instance is running kernel 4.12 or newer.

HugePages can improve performance for workloads that perform large amounts of memory access. For best practices regarding High-Performance Computing (HPC) workloads, see High-Performance Computing Lens.

Using the latest kernel version and instance types is highly recommended for performance. 
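One quick way to confirm that enhanced networking is actually in effect is to check which driver backs the instance's network interface. The snippet below is a sketch that assumes the standard Linux sysfs layout and the interface name eth0; the driver names ena (Elastic Network Adapter) and ixgbevf (Intel 82599 VF) are the ones used by enhanced networking:

```python
import os

# Drivers used by enhanced networking; any other driver (e.g. "vif")
# means enhanced networking is not active on that interface.
ENHANCED_DRIVERS = {"ena", "ixgbevf"}

def interface_driver(iface: str = "eth0") -> str:
    """Resolve the kernel driver behind a network interface via sysfs
    (assumes the standard /sys/class/net layout)."""
    link = f"/sys/class/net/{iface}/device/driver"
    return os.path.basename(os.readlink(link))

def enhanced_networking_active(driver: str) -> bool:
    return driver in ENHANCED_DRIVERS

# On an actual instance:
# print(enhanced_networking_active(interface_driver("eth0")))
```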
If you’re using M3, C3, or other older instance types, consider migrating to M4 or similar instance types, as well as using the latest kernel version available for the operating system. For more information, see Amazon EC2 Instance Types.

Avoid small packets whenever possible. If your workload supports it, use larger packets with jumbo frames. For more information, see Network Maximum Transmission Unit (MTU) for Your EC2 Instance. You might see performance benefits from using DPDK-based versions of software to move networking outside the kernel and into userspace. Using DPDK can require a software update that includes DPDK support.

If Kernel Page-Table Isolation (KPTI) is enabled on your instance's operating system, then also enabling PCID can improve CPU performance. You must verify that both the kernel and instance type support PCID.

To improve performance, consider increasing the size of your instances, or increasing the number of instances.

The tsc timer is generally the best-performing timer available to most instances. If you’re using a xen timer, you might see improved performance by moving to the tsc timer. If you’re using an older operating system that’s using the jiffies timer, consider moving to an operating system that preferably supports tsc, or supports xen at minimum. Note: Older instance types, such as M1 or M2, provide an emulated tsc timer. Consider moving to a newer instance type for better tsc timer performance.
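The timer advice above can be checked on a running instance: standard Linux kernels expose the active clocksource through sysfs. The ranking helper below is a hypothetical sketch that simply encodes the tsc > xen > jiffies ordering described in the text:

```python
# Standard sysfs location for the kernel's active clocksource on Linux.
CLOCKSOURCE_PATH = "/sys/devices/system/clocksource/clocksource0/current_clocksource"

def current_clocksource(path: str = CLOCKSOURCE_PATH) -> str:
    """Read the kernel's active clocksource."""
    with open(path) as f:
        return f.read().strip()

def rank_clocksource(name: str) -> str:
    """Map a clocksource name to the guidance given above."""
    ranking = {
        "tsc": "best available: keep tsc",
        "xen": "acceptable: consider moving to tsc if supported",
        "jiffies": "poor: move to an OS that supports tsc, or at least xen",
    }
    return ranking.get(name, "unknown: consult your instance type documentation")

# On an actual instance:
# print(rank_clocksource(current_clocksource()))
print(rank_clocksource("xen"))
```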
Is there a way to use plural and singular words for each part of a multi-number value? E.g. 1 hour 10 minutes, vs 2 hours 1 minute.

X is a room.
A height is a kind of value. 1 foot 11 inch (singular) specifies a height. 5 feet 11 inches (plural) specifies a height.
A person has a height. The height of the player is 1 foot 1 inch.

If I change the height to 0 foot 1 inch then the output is singular. 1 foot 1 inch (singular) specifies a height. This is clearly wrong and I’ve defined a foot as being 2 inches.

X is a room.
A height is a kind of value. 1 foot (singular) 11 inch (singular) specifies a height. 5 feet (plural) 11 inches (plural) specifies a height.
A person has a height. The height of the player is 1 foot 1 inch.

It errors, because the first (singular) is now part of the specification. I tried to look at how time is defined in the standard rules but it seems to be on a lower level.

Hmm, I tried several things (which I included in the code below, in square brackets) and I don’t know why this happens.

X is a room. Lady Gaga is a woman in X.
A height is a kind of value. [1 foot (singular) specifies a height. 2 feet (plural) specifies a height. 1 inch (singular) specifies a height scaled down by 12. 5 feet 11 inches specifies a height.
A person has a height. The height of Lady Gaga is 5 feet 2 inches.
After examining Gaga, say "She is [the height of Lady Gaga] tall."

You see nothing special about Lady Gaga.
She is 5 feet 2 inches tall.

The height of Lady Gaga is 5 feet 1 inch. 5 feet 1 inch specifies a height. The height of Lady Gaga is 5 feet 1 inch.

2 feet 11 inches specifies a height with parts feet and inches.
say "[F] [if F is 1]foot[else]feet[end if] [I] [if I is 1]inch[else]inches[end if]".
After examining Gaga, say "She is [the height of Lady Gaga in feet and inches] tall."

5 feet 11 inches (plural) specifies a height. 1 foot 1 inch (singular) specifies a height.

When written in plural, 1 feet = 12 inches. But when written singularly, 1 foot = 2 inch. 
I just tested, and even with single part values, using a 1 for the singular definition isn’t necessary, so there’s no reason this might theoretically help. “2 inch (singular) specifies a height.” works fine, so this was a bit of a red herring. Yeah, that seems like the best solution if I7 won’t do it itself, thanks!
I am contacted multiple times a day by people seeking information on turmeric. Many of them suffer from arthritis, migraines, fibromyalgia, and other inflammatory and pain conditions and are looking for relief. They have heard whispers about this wonderful herb/spice called turmeric and are interested in learning if it may work for them. I'll begin by saying that turmeric is an amazing herb. It is anti-inflammatory, relieves pain, and is supportive to the detoxing process in the body, offering great support to the liver. It also supports hormone balance, aids in digestion, increases metabolism, and promotes a healthy immune system. It's no wonder why it is often referred to as the "Master Herb". It is important to know how to use turmeric to receive the most benefits. As an herbalist, I tend to avoid most commercial supplements, and I nearly always avoid products promoted as extractions of a specific active ingredient of any herb. Turmeric is a plant and plants have many parts that work synergistically with each other. Simply put, it is going to work best when taken in its natural form, with all of its parts. Commercial supplements often contain fillers such as rice flour, millet, and cellulose. If these ingredients are listed on the product label, there is likely not very much actual turmeric in the product itself. Commercial supplements are also often dated, and herbs, like food, lose their potency as time passes. The best turmeric powder is going to be found from reputable herb companies such as Mountain Rose Herbs, Starwest Botanicals, Pacific Botanicals, etc. It is going to be fresher, void of fillers, and quite a bit less expensive than commercial supplements. You can easily capsule your own if capsules are a form you prefer. It is important to note that turmeric is best taken along with a fat, as fat helps the body absorb its full benefits. This can be any healthy fat, such as is in milk, coconut oil, butter, etc. 
Turmeric absorption is also benefited by the addition of ginger and black pepper, both of which help to increase its effects as well as having their own additional benefits of increasing circulation, supporting healthy digestion, and promoting a well-functioning immune system. Sage Moon Apothecary's Golden Milk Electuary Products combine high-quality organic turmeric with other beneficial herbs, black pepper and ginger, in a raw spun honey and organic raw honey medium with coconut oil, supplying the needed fat for optimal absorption. It is a convenient, easy-to-use, and delicious way to take turmeric in an optimal form, and it is available in several blends to suit different tastes and to support individual needs such as added liver support (Shine Blend, with Milk Thistle), immune support (Rise Blend, with Astragalus), and blood support (Gingerbread Blend, with Black Strap Molasses). Turmeric should be used on a regular basis. When it is used daily, customers have reported that they no longer need prescription or OTC anti-inflammatories and pain relievers, and have shared an overall improvement in their symptoms and general health. That's the wonderful thing about turmeric: it not only helps to relieve inflammatory and pain symptoms fairly quickly after beginning use, it also works to help the body regain balance and may improve underlying issues. Not everyone should take turmeric. Individuals using blood-thinning medications should not use turmeric, because it can increase bleeding potential. Diabetics who are using medications to lower blood sugar should proceed with caution and careful monitoring, as turmeric can, usually in higher doses, lower blood sugar. Pregnant women should not use turmeric. If in doubt about whether turmeric is right for you, it is recommended that you consult with an herbalist as well as your physician or naturopathic doctor.
If you are taking medications, it is recommended that you speak with an herbalist and/or pharmacist about any contraindications or known interactions between turmeric and your medications. Turmeric is a wonderful herb, helpful for health and wellness on many levels.
Is a fallen US spy drone in Iran a sign that Iran and the United States are on a path toward war? The stealth surveillance aircraft that fell to ground in Iran has been called "bat-winged," but it looks more like a boomerang. It remains to be seen if it will boomerang, too, as did the famous U-2 flight over the Soviet Union back in the cold war. It ought to be no surprise that the United States is spying on Iran. Gathering data on Iran's nuclear program is pretty much a core mission of the CIA, the NSA and the other members of the intelligence alphabet. As John Pike points out, satellites are useful for many, many goals, but to maintain a close-up view of what's going in and out of buildings and bunkers, a surveillance drone is better. But the worrisome part of this, admittedly fueled more by the breathless media coverage of the downed drone, which was reportedly 140 miles inside Iranian territory, is the idea that the United States is escalating its covert war against Iran and, indeed, preparing to use the "military option." It is, in fact, highly unlikely that the United States will go to war against Iran. Let us count the reasons: first, it would be catastrophically counterproductive, hardening Iran's hardliners, undermining its Green Movement and other opposition forces and driving its nuclear program deep underground. Second, it might unleash a regional conflagration if Iran decided to strike back, overtly or covertly, against the United States and its regional allies. Third, it would be illegal and contrary to international law to launch an unprovoked attack against Iran, which would leave the United States isolated, bereft of many allies, angering Russia and China and pushing Iran into North Korea-like self-sufficiency and greater authoritarianism. Fourth, it would gravely threaten the world economy, with skyrocketing oil prices, removing Iran's substantial exports from the market. It's possible to add to this list.
In other words, it would be insane and self-defeating. Many, many strategists in Washington have essentially given up trying to block Iran from acquiring nuclear weapons, if that's what it wants to do, and instead are busily developing plans to "contain" or box in Iran if it goes nuclear and to deal with it much as the United States dealt with the Soviet Union and then China when they joined the nuclear club. In recent weeks, there's been a flurry of speculation about covert action against Iran. The explosion that devastated a rocket facility west of Tehran in November and killed a general in charge of Iran's missile program is rumored to have been the result of some US or Israeli covert action, although it's hard to imagine how such a spectacular bombing could have been pulled off. Still, there have been violent, and dangerous, incidents in both directions recently, including the Iranian seizure of the British embassy in Tehran and the reported Iranian plot to assassinate the Saudi ambassador to the United States. There's little doubt that there is a covert campaign against Iran underway, and actions such as the computer worm and the assassination of scientists are undeniable, though with "deniability." But it's ludicrous to think that covert action can halt the Iranian nuclear enrichment program or, even more ridiculous, topple the regime. What it can do is set both countries onto a path along which one spark, say a clash at sea in the Persian Gulf or the seizure of some NATO forces by Iran, could escalate to war. Ever-tougher sanctions, which again aren't likely to slow Iran's nuclear program, will make things worse. And taken as a whole, such a campaign might push Iran to take greater and greater risks in pushing back, using terrorist groups, support for Shiite forces in the Gulf and Iraq, and actions such as the invasion of the UK embassy. It might, in fact, very well succeed in pushing Iran toward war.
Alcoholics Anonymous, also known as AA, is an international mutual-aid movement whose primary purpose is to help its members stay sober and to help other alcoholics achieve sobriety. Alcoholics Anonymous was founded in 1935 by Bill Wilson and Dr. Bob Smith in Akron, Ohio. With other early members, Wilson and Smith developed Alcoholics Anonymous's Twelve Step program of spiritual and character development. Alcoholics Anonymous's Twelve Traditions were introduced in 1946 to help AA stabilize and grow. Alcoholics Anonymous recommends that members and groups remain anonymous in public media, altruistically help other alcoholics and include all who wish to stop drinking. Alcoholics Anonymous also recommends that members acting on behalf of the fellowship steer clear of dogma, governing hierarchies and involvement in public issues. Subsequent fellowships such as Narcotics Anonymous have adopted and adapted the Twelve Steps and the Twelve Traditions to their respective primary purposes. Alcoholics Anonymous generally avoids discussing the medical nature of alcoholism; nonetheless AA is regarded as a proponent and popularizer of the disease theory of alcoholism. The American Psychiatric Association has recommended sustained treatment in conjunction with AA's program, or similar community resources, for chronic alcoholics unresponsive to brief treatment. Alcoholics Anonymous's own data state that 64% of members drop out of Alcoholics Anonymous in their first year, but its program is credited with helping many alcoholics achieve and maintain sobriety. The first non-Protestant member of Alcoholics Anonymous, a Roman Catholic, joined in 1939. Alcoholics Anonymous has more than 2 million members.
Diamonds were created when our planet was born. They were formed deep in the earth, 100 to 200 miles below the surface, some 900 million to 3.3 billion years ago. They are made of pure carbon, crystallized by unimaginable heat and pressure, and were forced toward the earth's surface by volcanic eruptions through "pipes" of host rock such as kimberlite. Though some diamonds found their way into streams, rivers and seas, most settled back into the kimberlite pipes, the primary sources for the world's diamond mines. Diamonds were first mined in India some 4,500 years ago. Modern mining began in South Africa in the mid-19th century. Legend has it that Erasmus Jacobs, an eight-year-old farm boy, found a 21-carat yellow "pebble" that turned out to be a diamond in 1866 near the Orange River, the first of many discovered in South Africa. Today, diamonds are found all over the world, including Canada, Russia, Australia, and throughout Africa. Yet, even with contemporary technology, they remain very difficult to find. Geologists search some of the most remote and inhospitable regions on earth to uncover new diamonds, including the frozen tundra of Siberia and Canada, the arid deserts of South Africa and Australia, and deep below the ocean floor. The ancient Greeks and Romans believed diamonds were tears of the gods and splinters from falling stars. The Hindus believed diamonds were formed by lightning striking rock and attributed them such power that they were placed in the eyes of statues. Kings and queens throughout history have adorned themselves with diamonds and fought bitter battles to gain possession of these unique jewels. To the ancients, diamonds were magical, mystical talismans that could bring luck, wealth and success or bestow power, fearlessness and even invincibility. Roman soldiers wore diamonds in battle for protection and courage. In the Middle Ages, diamonds were used to ward off the effects of poison and illness.
Jewish High Priests believed the stone could determine innocence or guilt. Diamonds have long been the ultimate symbol of love and romance. The word itself is derived from the ancient Greek word adamas, translated as "unconquerable." Some believed Cupid's arrows were tipped with diamonds. Others felt their power made people fall in love or that their brilliance reflected the flame of love. Strength and durability have made diamonds an enduring symbol of matrimony and eternal commitment. The diamond engagement ring is a 500-year tradition started by Austria's Archduke Maximillian, who presented one to his fiancée, Mary of Burgundy, in 1477. He placed the ring on the third finger of her left hand, based on an ancient Egyptian belief that this finger contained a "love vein" running directly to the heart. Ever since, couples around the world have pledged their love and devotion with a diamond. Today, nearly 90% of brides receive a diamond engagement ring. Please call me at: +44 792 4038888 or e-mail me at: david@dulondon.com for more information about any diamonds you want to appraise, buy, sell or trade. We have the best diamond prices in the market. We are fair, honest, and easy to deal with.
Awarded a doctorate (D.Sc.) in 1988 for work on plantation sustainability; honoured by the Queen and appointed OBE for "services to forestry and the third world". Forest plantations are an increasingly important resource worldwide, a trend that is expected to continue strongly. But is growing trees in plantations a technology that can work in the long term? Is plantation silviculture biologically sound, or are there inherent flaws which will eventually lead to insuperable problems? The principal conclusions are as follows. 1. Plantations and plantation forestry operations do impact the sites on which they occur. Under certain conditions nutrient export may threaten sustainability, but usually more important for maintaining site quality are care with harvesting operations, conservation of organic matter, and management of the weed environment. Plantation forestry appears entirely sustainable under conditions of good husbandry, but not where wasteful and damaging practices are permitted. 2. Plantations are at risk from damaging pests and diseases. New threats will inevitably arise and some plantations may become more susceptible owing to climate change factors, but the history of plantation forestry suggests that these risks are containable with vigilance and the underpinning of sound biological research. 3. Measurements of yield in successive rotations of trees suggest that, so far, there is no significant or widespread evidence that plantation forestry is unsustainable in the narrow sense. Where yield decline has been reported, poor silvicultural practices and operations appear to be largely responsible. There is some evidence that recent plantations are more productive than older ones owing to climate change and silvicultural impacts. 4. There are several interventions in plantation silviculture which point to increasing productivity in the future, provided management is holistic and good standards are maintained.
Genetic improvement offers the prospect of substantial long-term gains over several rotations. Overall, plantation forestry is likely to be sustainable on most sites provided good standards of silviculture are maintained. Forest plantations are an increasingly important resource worldwide, a trend that is expected to continue strongly. This study examines the evidence concerning the 'narrow-sense' sustainability of forest plantations. It asks the question: is growing trees in plantations a technology that can work in the long term, or are there inherent biological flaws which will eventually lead to insuperable problems for such silviculture? The question of sustainability in plantation forestry has two components. There are the general or broad issues of whether using land and devoting resources to tree plantations is a sustainable activity in the economic, environmental or social sense. These can be labelled 'broad-sense' sustainability. The second component, 'narrow-sense' sustainability, is largely a biological and silvicultural issue. The question raised is: can tree plantations be grown indefinitely for rotation after rotation on the same site without serious risk to their well-being? More specifically, can their long-term productivity be assured, or will it eventually decline over time? These questions are pertinent owing to the increasing reliance on plantation forestry, but are also scientifically challenging since in previous centuries trees and woodlands were seen as 'soil improvers' and not 'impoverishers'. Are today's silvicultural practices more damaging because of greater intensity and the high timber yields achieved, typically 2-4 times that of natural forest increment?
And, of course, are resources such as genetic improvement, targeted fertiliser application, and sophisticated manipulation of stand density, along with rising atmospheric carbon dioxide, likely to lead to crop yield improvement, or could they disguise evidence of genuine site degradation or increasing risk of damaging pests and diseases? This paper looks at evidence world-wide, but focuses on developing countries, to address four elements of narrow-sense sustainability; a fuller analysis is in Evans (1999b). (a) What changes to a site may plantation forestry induce that could threaten future rotations? (b) What risks are tree plantations exposed to? (c) What factual evidence is there for and against productivity change over time? (d) What silvicultural interventions can help sustain yields? Two important questions are (1) do the silvicultural practices commonly applied, such as exotic species, monocultures, clear felling systems etc., cause site change, and (2) are such changes more or less favourable to the next crop? Does growing one crop influence the potential of its successor? This is a much-researched topic and only the main themes are summarised. Two recent books have presented the science: Dyck et al. (1994) 'Impacts of forest harvesting on long-term site productivity', and Nambiar and Brown (1997) 'Management of soil, nutrients and water in tropical plantation forests'. More dated but still relevant is Chijioke's (1980) review of the impacts of fast-growing species on tropical soils. However, it is important to be cautious: tree rotations are long, even in the tropics, compared with most research projects! That soil changes are caused by forestry practices is usually difficult to establish conclusively, both in fact and in scale. An absence of sound baseline data is common, and it is often unclear whether a reported change was actually induced by plantation silviculture. The second question is whether the observed changes represent degradation or improvement.
There are remarkably few examples of changes supposedly induced by growing trees that lead to less favourable conditions for that species. Equally, the irreversibility of changes has rarely been demonstrated, apart from obvious physical losses such as erosion of topsoil. A gradual trend, perhaps observed over several decades, can be quickly reversed as stand conditions change. As Nambiar (1996) points out, "the most striking impacts on soils and hence productivity of successive crops occur in response to harvesting operations, site preparation, and early silviculture from planting to canopy closure." Most reports of site change in plantation forestry derive from matched plots. Increasingly, long-term observational experiments are being specifically designed to investigate change, e.g. CIFOR's tropics-wide study (Tiarks et al., 1998), the network in the USA (Powers et al., 1994), and those monitoring gross environmental change such as the Europe-wide extensive and intensive forest monitoring plots (Level I and Level II). Modelling is widely used but suffers in precision at site level because of the assumptions made. The observational approach suffers from bias in that investigation is often carried out specifically because there is a problem, which has already revealed itself in poor tree growth or health. It also suffers from soils being notoriously variable, a difficulty exacerbated on many sites by the kind of ground often used for plantations. A second, little known, source of variability is that measured values of many soil parameters can change radically during one year. The above points underline the danger of drawing conclusions from limited investigations covering only a few years of a rotation. Short-term studies can be grossly misleading, especially when extrapolating over whole rotations and successive rotations.
Plantations may have three impacts: nutrient removal from soil as trees grow and then are harvested; changes in the chemistry of the soil surface as the litter layer and organic matter are dominated by one species and hence have uniform composition and decay characteristics; and site preparation practices such as ploughing, drainage and fertilising which directly affect soil physical parameters and in turn nutrient and moisture availability. Soils vary enormously in their role as a nutrient reservoir. Thinking has been conditioned by arable farming, which treats soils as a medium in which to grow crops where nutrient supply is largely maintained by annual fertiliser inputs, and by the fact that in most temperate soils the store of plant nutrients far exceeds that in the above-ground biomass. In forestry, where fertiliser inputs are limited and trees are perennial and generally deep rooting, the focus is less exclusively on soil reserves and more on where the dynamics of nutrient supply are mediated - i.e. largely at the soil surface. Indeed, forests are highly efficient re-cyclers of nutrients and in the tropics, where recycling can be at its most efficient, nutrients in mineral soil often no longer represent the dominant proportion of the ecosystem. The soil often plays only a small part in the nutrient exchange and it is the surface organic, root-bearing zone, especially the annual turnover of fine roots, which is important in concentrating energy flow from decomposing organic matter back into living organic matter. The integrity of this layer and how it is handled in plantation silviculture is critical to sustainability. Nutrient removal in plantation forestry occurs when any product is gathered or harvested. Many studies have been made; Goncalves et al. (1997) alone list 12 tropical examples. Critical to plantation sustainability is what proportion the nutrients lost represent of the whole store.
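This proportion lends itself to a back-of-the-envelope check. A minimal Python sketch follows: the annual removal rates are the Pinus patula figures Lundgren (1978) reported for Tanzania, while the soil-store values, rotation length and function name are illustrative assumptions, not data from any study.

```python
# Sketch: nutrient export over a rotation relative to the soil store.
# Removal rates (kg/ha/yr) follow Lundgren (1978) for P. patula in Tanzania;
# the soil-store figures below are illustrative placeholders, NOT study data.

ANNUAL_REMOVAL = {"N": 40, "P": 4, "K": 23, "Ca": 25, "Mg": 6}   # kg/ha/yr
SOIL_STORE = {"N": 9000, "P": 800, "K": 5000, "Ca": 6000, "Mg": 1500}  # kg/ha (assumed)

def stability_ratio(removal_per_year, store, rotation_years):
    """Nutrient export over one rotation as a fraction of the soil store.

    Ratios well below 1 (e.g. <0.1, as in the Tanzania study) suggest the
    store is not threatened; ratios approaching or exceeding 1 flag risk,
    as reported for the Jari stands discussed below.
    """
    return {nut: removal_per_year[nut] * rotation_years / store[nut]
            for nut in removal_per_year}

ratios = stability_ratio(ANNUAL_REMOVAL, SOIL_STORE, rotation_years=20)
for nutrient, r in sorted(ratios.items()):
    flag = "OK" if r < 0.1 else "watch"
    print(f"{nutrient}: export/store = {r:.2f} ({flag})")
```

With these assumed stores and a 20-year rotation, every ratio stays at or below 0.1, consistent with the kind of result the Tanzania study reported; the point of the sketch is the ratio itself, not the placeholder numbers.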
This ratio of nutrient export : nutrient store is advocated as a key measure of long-term ecosystem stability (though this rather begs the question of what the store is and how it can be measured). For example, Lundgren (1978) found that Pinus patula plantations in Tanzania led to annual removals of 40, 4, 23, 25 and 6 kg ha-1 of nitrogen (N), phosphorus (P), potassium (K), calcium (Ca) and magnesium (Mg) respectively. These rates of removal are about one-third of those of maize (Sanchez, 1976) and in the Tanzania study represented less than 10 per cent of the soil store, i.e. a stability ratio of <0.1. In contrast, Folster and Khanna (1997) report data for Eucalyptus urophylla x grandis hybrid stands with three very different site histories at Jari in NE Amazonia suggesting imminent impoverishment: "most of the previously grown Gmelina, Pinus or Eucalyptus had already extracted their share of base cations from the soil and left it greatly impoverished", with an unsustainable stability ratio of >1. However, caution is needed. Others (e.g. Rennie, 1955; Binns, 1962; Johnson and Todd, 1990) have predicted, from comparison of removals in harvested biomass with available quantities in soil, that calcium nutrition will be a problem; yet trees continue to grow on soil where conventional soil analysis suggests there is virtually no calcium. Understanding these dynamics helps identify at what points, across the world's range of sites, species and productivities, the ratio becomes critical for long-term stability. There appear to be few examples of such limits being reached. It is worth remembering that nutrient removals by forest crops are typically only one-fifth to one-tenth those of arable farming, see Miller (1995). The influence of litter on soil chemical status may be important since leaves of different species decay at different rates. For example, in southern Africa substantial accumulations may develop under P.
patula on certain sites (see Morris, 1993b) while this is unusual beneath the more lightly canopied P. elliottii. In broad-leaved stands, accumulation of litter is uncommon though not unknown. Even under teak and Gmelina, which usually suppress all other vegetation, the large leaves readily decay. Similarly, under the light crowns of eucalypts and the nitrogen-rich foliage of leguminous trees such as Acacia, Leucaena and Prosopis sp. and non-legume N-fixers such as casuarinas, litter build-up is rare owing to rapid decay of the rich organic matter. The above processes indicate that plantation forestry practice could influence soil chemical status, but what has been observed? Most studies have either compared conditions in plantation sites with those before establishment or examined trends as a plantation develops. Few have examined changes over successive rotations. Few consistent trends emerge. In the many tropical studies both increases and decreases in pH, carbon, nitrogen and macro-nutrients under plantations compared with natural forest or pre-existing conditions have been reported - see references in Evans (1999b). Recent investigations have concerned acid rain impacts, though distinguishing these from direct tree effects on soil acidity is difficult. On the whole, tree impacts are relatively small compared with the soil nutrient store. Plantation forestry may impact soil physical conditions, and hence sustainability, through (1) site preparation and establishment operations, (2) the effects of tree growth itself, for example on water uptake, and (3) harvesting practices. They are discussed in Evans (1999a, 2000) and comment is only made here on vegetation suppression. Plantations of teak and Gmelina and also many conifers in both tropical and temperate regions may suppress all ground vegetation. Where this exposes soil, perhaps because litter is burnt or gathered, erosion rates increase.
Under teak, Bell (1973) found soil erosion 2½ to 9 times higher than under natural forest. The protective function of tree cover derives more from the layer of organic matter that accumulates on the soil surface than from interception by the canopy. In India, raindrop erosion was 9 times higher under Shorea robusta plantations where litter had been lost through burning (Ghosh, 1978). Soil erosion beneath Paraserianthes falcataria stands was recorded as 0.8 t ha-1 y-1 where litter and undergrowth were kept intact but an astonishing 79.8 t ha-1 y-1 where they had been removed (Ambar, 1986). Wiersum (1983) found virtually no soil erosion under Acacia auriculiformis plantations with litter and undergrowth intact, but serious erosion where local people gathered the litter. The litter:organic matter:mineral soil interface is the seat of nutrient cycling and microbial activity. Any activity that disturbs these roles in the ecosystem can have large effects, of which perhaps the most serious, still practised in some countries, is regular and frequent litter raking. In commercial plantation forestry, managing debris and preparing sites when restocking plantations is expensive, accounting for a high proportion of establishment costs, but as Nambiar (1996) points out 'one shoddy operation can leave behind lasting problems'. Establishment of plantations greatly affects ground vegetation, with many operations designed directly or indirectly to reduce weed competition to ensure that the planted tree has sufficient access to site resources. A neglected but critical phase is managing the weed problem through crop harvesting and restocking. In subsequent rotations, the weed spectrum often changes. Owing to past weed suppression, exposure of mineral soil in harvesting, and the accumulation of organic matter, conditions for weed species change.
Birds and animals may introduce new weed species, grass seed may be blown into plantations and accumulate over several years, and roads and rides in plantations can become sources of weed seeds. Weed management must be a holistic operation. As with careless handling of organic matter, where yield declines have been reported the significance of weeds on restocked sites in second or third rotations has often been insufficiently recognised. A serious threat to plantations can arise from a massive build-up of a pest or disease. It has been much disputed whether monoculture itself is more susceptible to devastation from these causes. The broadly accepted ecological principle of stability, dating back to the 1950s, is that the stability of a community and its constituent species is positively related to its diversity. Following this reasoning, foresters have stressed that substitution of natural forest by even-aged monoculture plantations may remove many of the natural constraints on local tree pests and pathogens and thus increase the risk of attack. Some evidence supports this, see Gibson and Jones (1977), though these authors point out that increased susceptibility mostly arises from conditions in plantations rather than because only one tree species is present. The relative susceptibility of monocultures to organic damage is ecologically complex. The influence of diversity on the stability of (say) insect populations depends on what population level is deemed acceptable. Often stable equilibrium levels are too damaging, and so artificially low populations are sought through control. Speight and Wainhouse (1989) stress that artificially created diversity, i.e. mixed crops, does not necessarily improve ecological stability and is certainly inferior to naturally occurring diversity; complexity of organisation and structure is as important (Bruenig 1986). It is prudent, nevertheless, to spell out why plantations are perceived to be in danger. 1.
Plantations of one or two species offer an enormous food source and ideal habitat to any pest and pathogen species adapted to them. 2. Uniformity of species and closeness of trees, including branch contact above ground and root lesions in the soil, allow rapid colonization and spread of infection. 3. A narrow genetic base in plantations, e.g. one provenance or no genetic variation (e.g. clones), reduces the inherent variability in resistance to attack. 4. Trees grow on a site for many years and permit pests or diseases to build up over time. 5. Many plantations are of introduced species which, while free of the insect pests and pathogens of their native habitat, also lack the many natural agencies controlling pests and diseases. Thus, many argue that exotic plantations experience a period of relative freedom from organic damage, perhaps for the first one or two rotations. The analysis by Zobel et al. (1987) of the threat to exotics concluded that the evidence does not confirm that such stands are more at risk, other than clonal plantations, and that problems arise mainly when species are ill suited to a site. Examples of devastating outbreaks of fungal disease and insect pests are listed in many publications, see for example Ciesla and Donnaubauer (1994) and Evans (1999a), and they illustrate the scale of the potential threat pests and diseases represent. They have prevented the planting of some species and impaired the productivity of others, but overall have not caused such widespread damage as to seriously question plantation silviculture as a practice. There remain two serious concerns. 1. Environmental change: changing climate and increasing atmospheric pollutants of CO2 and nitrogen compounds will add stress to established plantations, while higher nitrogen inputs may increase insect pest and disease risks (Lonsdale and Gibbs, 1996). 2.
New pests and diseases will emerge: a) from new hybrids or mutations; b) from new introductions arising from increasing global trade e.g. Cryphonectria canker in eucalyptus in S. Africa and new phytophthoras in Britain; and c) from native pests adapting to introduced trees. Many pest and disease problems in plantations arise from the nature of forest operations, and not directly from growing one tree species in a uniform way (monoculture). They are only briefly touched on here. Large amounts of wood residue from felling debris and the presence of stumps are favourable for colonization by insect pests and as sources of infection. Usually modification of silviculture or application of specific protection measures can contain such problems. Extensive planting of one species, whether indigenous or exotic, inevitably results in some areas where trees are ill suited to the site and suffer stress. This may occur where large mono-specific blocks are planted or where exotics are used extensively before sufficient experience has been gained over a whole rotation e.g. Acacia mangium in Malaysia and Indonesia and the discovery of widespread heart rot. Thinning and pruning can damage trees and provide infection courts for disease. Neither practice seriously threatens plantation sustainability. For forest stands (crops) hard evidence of productivity change over successive rotations is meagre with few reliable data. The long cycles in forestry make data collection difficult. Records are rarely maintained from one rotation to the next, funding for long term monitoring is often a low priority, detection of small changes is difficult, and often the exact location of sample plots is poorly recorded (Evans, 1984). Also, few plantations are second rotation, and even fewer third or later rotation, thus the opportunity to collect data is limited. 
The few comparisons of productivity between rotations have mostly arisen because of concern over yields, namely 'second rotation decline', or stand health. Thus the focus has been on problems: the vast extent of plantations where no records are available suggests no great concern and no obvious decline problems. Data in the older literature may therefore be biased towards problem areas, while more recent studies may be less so, such as the European Forest Institute survey (Spiecker et al., 1996) and CIFOR's 'Site management and productivity in tropical forest plantations', which incorporates systematic establishment of sample plots. Review of evidence comparing yields in successive rotations. In the 1920s Wiedemann (1923) reported that significant areas of second and third rotation spruce (Picea abies) in Lower Saxony (Germany) were growing poorly and showed symptoms of ill health. In 8 per cent of plantations there was a fall of two quality classes in second and third rotation stands. It is now clear that this mainly arose from planting spruce on sites to which it was ill suited. Today, young spruce stands in Saxony and Thuringia are growing more vigorously than equivalent stands 50 or 100 years ago (Wenk and Vogel, 1996). Elsewhere in Europe, comparisons between first and second rotations are limited. In Denmark, Holmsgaard et al. (1961) indicated no great change for either Norway spruce or beech, though today second rotation beech is growing significantly better (Skovsgaards and Henriksen, 1996). In the Netherlands second rotation forest generally grows 30 per cent faster than the first (van Goor, 1985) and in Sweden second rotation Norway spruce shows superior growth (Eriksson and Johanssen, 1993; Elfing and Nystrom, 1996). In France, decline was reported in successive rotations of Pinus pinaster in the Landes (Bonneau, et al. 1968). In Britain most second rotation crops are equal to or better than their predecessors and no decrease in growth is expected (Dutch, pers.
comm.) and recent evidence points to UK conifer forests growing faster than they used to (Cannell et al., 1998). In New Zealand, the limited occurrence of yield decline was mostly overcome by cultivation and use of planted stock rather than natural regeneration (Whyte, pers. comm.). On most sites successive rotations gain in productivity. However, Dyck and Skinner (1988) conclude that inherently low quality sites, if managed intensively, will be susceptible to productivity decline. Long-term productivity research by the writer in the Usutu forest, Swaziland, began in 1968 as a direct consequence of second rotation decline reports from South Australia. For 32 years, measurements have been made over three successive rotations of Pinus patula plantations, grown for pulpwood, from a forest-wide network of long-term productivity plots. Plots have not received favoured treatment, but simply record tree growth during each rotation resulting from normal forest operations by SAPPI Usutu. Table 1 Comparison of second and third rotation Pinus patula on granite and gneiss derived soils at 13/14 years of age (means of 38 plots). Tables 1 and 2 summarise results from arguably the most accurate datasets available on narrow-sense sustainability. Over most of the forest, on granite derived soils (Table 1), third rotation height growth is significantly superior to the second and volume per hectare almost so. There had been little difference between first and second rotation (Evans, 1978). In a small part of the forest (about 13 per cent of the area), on phosphate-poor soils derived from slow-weathering gabbro, decline occurred between first and second rotation, but this has not continued into the third rotation, where there is no significant difference between rotations (Table 2). The importance of the Swaziland data, apart from the long run of measurements, is that no ameliorative treatment has ever been applied to any long-term productivity plot. According to Morris (1987), some third rotation P.
patula is probably genetically superior to the second rotation. However, the 1980s, and especially the period 1989-92, were particularly dry, Swaziland suffering a severe drought along with the rest of southern Africa (Hulme, 1996; Morris, 1993a). This will have adversely affected third rotation growth. These data are also of interest because plantation silviculture practised in the Usutu forest over some 62,000 ha is intensive, with pine grown in monoculture, no thinning or fertilising, and on a rotation of 15-17 years, which is close to the age of maximum mean annual increment. Large coupes are clearfelled and all timber suitable for pulpwood extracted. Slash is left scattered (i.e. organic matter conserved) and replanting done through it at the start of the next wet season. These plantations are managed as intensively as anywhere and, so far, there is no evidence to point to declining yield. The limited genetic improvement of some of the third rotation could have disguised a small decline, but the evidence is weak. In addition, it can be strongly argued that without the severe and abnormal drought growth would have been better still. Overall, the evidence suggests no serious threat to narrow-sense sustainability. There are about 6 million ha of Chinese fir (Cunninghamia lanceolata) plantations in subtropical China. Most are monocultures and are worked on short rotations to produce small poles, though foliage, bark and sometimes roots are harvested for local use. Reports of significant yield decline have a long history. Accounts by Li and Chen (1992) and Ding and Chen (1995) report a drop in productivity between first and second rotation of about 10 per cent, and between second and third rotation of up to a further 40 per cent. Ying and Ying (1997) quote higher figures for yield decline. Chinese forest scientists attach much importance to the problem and pursue research into monoculture, allelopathy, and detailed study of soil changes.
Personal observation suggests that the widespread practices of whole tree harvesting, total removal of all organic matter from a site, and intensive soil cultivation that favours bamboo and grass invasion all contribute substantially to the problem. Ding and Chen (op. cit.) conclude that the problem is "not Chinese fir itself, but nutrient losses and soil erosion after burning (of felling debris and slash) were primary factors responsible for the soil deterioration and yield decline . . . application of P fertilizer should be important for maintaining soil fertility, and the most important thing was to avoid slash burning . . . These (practices) . . . would even raise forest productivity of Chinese fir." (words in parentheses added by the writer). In the 1930s, evidence emerged that replanted (second rotation) teak (Tectona grandis) crops were not growing well in India and Java (Griffith and Gupta, 1948). Although soil erosion is widespread under teak and loss of organic matter through burning leaves is commonplace, research into the 'pure teak problem', as it was called in India, did not generally confirm a second rotation problem. However, Chacko (1995) describes site deterioration under teak as still occurring, with yields from plantations below expectation and a decline of site quality with age. Four causes are adduced: poor supervision of establishment; over-intensive taungya (intercropping) cultivation; delayed planting; and poor after-care. Chundamannii (1998) similarly reports decline in site quality over time and blames poor management. Plantations of slash (P. elliottii) and loblolly (P. taeda) pines are extensive in the southern states of the USA. Significant plantings began in the mid-1930s as natural stands were logged out (Schultz, 1997) and, with rotations usually 30 years or more, some restocking (second rotation) commenced in the 1970s. In general, growth of the second crop is variable - see examples in Evans (1999b).
A coordinated series of experiments in the USA is assessing long-term impacts of management practices on site productivity (Powers et al., 1994). Other evidence is limited or confounded. For example, Aracruz Florestal in Brazil has a long history of continually improving eucalypt productivity owing to an imaginative and dedicated tree breeding programme, so that new clones are regularly introduced and less productive ones discontinued (Campinhos and Ikemori, 1988). The same is true of the eucalypt plantations at Pointe Noire, Congo (P. Vigneron, pers. comm.). Thus, recorded yields may reflect genetic improvement and disguise any site degrade. In India, one recent report (Das and Rao, 1999) claims massive yield decline in second rotation clonal eucalypt plantations, which the authors attribute to very poor silviculture. At Jari in the Amazon basin of Brazil, silvicultural practices have evolved with successive rotations since the first plantings between 1968 and 1982. A review of growth data from the early 1970s to the present day suggests that productivity is increasing over successive rotations owing to silvicultural inputs and genetic improvement (McNabb and Wadouski, in press). In Venezuela, despite severe and damaging forest clearance practices, second rotation Pinus caribaea shows much better early growth than the first rotation (Longart and Gonzalez, 1993). For long rotation (>20 years) crops it is usual to estimate yield potential from an early assessment of growth rate to identify the site quality or yield class. A change from predicted to final yield can readily occur where a crop has suffered check in the establishment phase or where fertiliser application corrects a specific deficiency. However, there is some evidence for very long rotation (>40 years) crops in temperate countries that initial prediction of yield or quality class underestimates final outturn, i.e. crops grow better in later life than expected.
Either the yield models used are now inappropriate or growing conditions are 'improving'. Across Europe the latter appears to be the case (Spiecker et al., 1996; Cannell et al., 1998) and is attributed to rises in atmospheric CO2 and nitrogen input in rainfall, better planting stock, and cessation of harmful practices such as litter raking. However, as noted, the opposite is occurring with teak: high initial site quality estimates do not yield the expected outturn and figures are revised downward as the crops get older. Closely related to the above is the observation that date of planting is often positively related to productivity, i.e. more recent crops are more productive than older ones regardless of inherent site fertility. This shift is measurable and can be dramatic; see the example from Australia in Nambiar (1998). Attempts to model productivity in Britain based on site factors have often been forced to include planting date as a variable. Maximum mean annual increment of Sitka spruce increased with planting date in successive decades by 1 m³ ha⁻¹ y⁻¹ (Worrell and Malcolm, 1990), and for Douglas fir, Japanese larch (Larix kaempferi) and Scots pine (Pinus sylvestris) by 1.3, 1.6 and 0.5 m³ ha⁻¹ y⁻¹ respectively in each succeeding decade (Tyler et al., 1996). This suggests that some process is favouring present growing conditions over those in the past, such as the impact of genetic and silvicultural improvements (and again cessation of harmful ones) and possibly the 'signature' of the atmospheric changes mentioned above. Broadmeadow (2000) confidently predicts an increase in productivity for forests in the United Kingdom owing to climate change. The impact of these two related observations is that present forecasts of plantation yields are likely to be underestimates; yields generally appear to be increasing. The steady transition from exploitation and management of natural forest to increasing dependence on plantation forestry is following the path of agriculture.
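As an illustration, the per-decade gains in maximum mean annual increment quoted above can be turned into a simple planting-date predictor. This is a sketch only: the linear-trend assumption, the 1950 baseline year and the baseline MAI of 10 m³ ha⁻¹ y⁻¹ are invented for illustration, not taken from the studies cited; only the per-decade gains are from the text.

```python
# Illustrative sketch of the planting-date effect described above.
# The per-decade gains in maximum MAI (m3/ha/yr) are the figures quoted
# from Worrell and Malcolm (1990) and Tyler et al. (1996); the baseline
# year and baseline MAI are hypothetical.
GAIN_PER_DECADE = {
    "Sitka spruce": 1.0,
    "Douglas fir": 1.3,
    "Japanese larch": 1.6,
    "Scots pine": 0.5,
}

def predicted_max_mai(species: str, planting_year: int,
                      baseline_year: int = 1950, baseline_mai: float = 10.0) -> float:
    """Maximum mean annual increment, assuming a linear trend by decade."""
    decades = (planting_year - baseline_year) / 10.0
    return baseline_mai + decades * GAIN_PER_DECADE[species]

print(predicted_max_mai("Sitka spruce", 1990))  # → 14.0
```

The point of the sketch is simply that, under such a trend, a crop's yield class depends on when it was planted as well as on the site, which is why British productivity models have had to include planting date as a variable.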
Many of the same biological means to enhance yield are available. They are outlined here only briefly. The forester has only one opportunity per rotation to change the chosen crop. Change of species, seed origin, use of new clones, use of genetically improved seed and, in the future, genetically modified trees all offer the prospect of better yields in later rotations. There are surprisingly few examples of wholesale species change from one rotation to the next, which suggests that in most cases foresters have been good silviculturists. Examples of changes are cited in Evans (1999a, b, 2000). All these genetic improvements will affect yield and outturn directly, and indirectly through better survival and greater suitability to the site, which may lead to increased vigour and perhaps greater pest and disease resistance. Countless studies affirm the benefits of careful investment in this phase of tree improvement. Some of the world's most productive tree plantations use clonal material, including both eucalypts and poplars. It is clear that both the potential productivity and the uniformity of product make this silviculture attractive. Although clonal forestry has a narrow genetic base, careful management of clone numbers and the way they are interplanted can minimise pest and disease problems. Roberds and Bishir (1997) suggest that use of 30-40 unrelated clones will generally provide security against catastrophic failure. There are at present no widely planted examples of genetically engineered trees. The expectation is that these techniques will be used to develop disease resistance, modified wood properties, and cold or drought tolerance rather than increases in vigour. 1. Manipulation of stocking levels to achieve greater output of fibre or of a particular product, by fuller site occupancy, less mortality, and greater control of individual tree growth. 2.
Matching rotation length to optimise yield - the rotation of maximum mean annual increment - offers worthwhile yield gain in many cases. 3. In some localities, prolonging the life of stands subject to windthrow by silvicultural means will increase yield over time. 4. Use of mixed crops on a site may aid tree stability and may lower pest and disease threats, but is unlikely to raise productivity over growing the best suited species (FAO, 1992). 5. Silvicultural systems that maintain forest cover at all times - continuous cover forestry - such as shelterwood and selection systems are likely to be neutral to slightly negative in production terms while benefiting tree quality, aesthetics, and probably biodiversity value. 6. Crop rotation, as practised in farming, appears unlikely. There are examples of forest plantations benefiting from a previous crop of a nitrogen fixing species e.g. Acacia mearnsii, but industry is likely to require a similar, not a widely differing, species when replanting. Most forest use of fertiliser is to correct known deficiencies, e.g. micronutrients such as boron in much of the tropics, and macronutrients such as phosphorus on impoverished sites in many parts of both the tropical and the temperate world. In most instances fertiliser is only required once in a rotation. Fertiliser application is likely to be the principal means of compensating for nutrient losses on those sites where plantation forestry practice does cause net nutrient export to the detriment of plant growth. Ground preparation to establish the first plantation crop will normally introduce sufficient site modification for good tree growth in the long term. Substantial site manipulation is unlikely for second and subsequent rotations, unless there was failure first time around, except to alleviate soil compaction after harvesting or measures to reduce infection and pest problems.
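The rotation of maximum mean annual increment mentioned in point 2 can be sketched numerically. The logistic volume curve below is entirely hypothetical (in practice a real yield table or growth model would be used); it simply shows how the biologically optimal rotation is found by maximising cumulative volume divided by age.

```python
import math

# Hypothetical cumulative standing volume (m3/ha) for a fast-grown pulpwood
# crop; the parameters (400, 50, 0.35) are invented for illustration.
def standing_volume(age: float) -> float:
    return 400.0 / (1.0 + 50.0 * math.exp(-0.35 * age))

# The rotation of maximum MAI is the age at which volume/age peaks.
def rotation_of_max_mai(max_age: int = 40) -> int:
    return max(range(1, max_age + 1), key=lambda a: standing_volume(a) / a)

print(rotation_of_max_mai())  # → 15
```

With these invented parameters the MAI peaks at age 15 and declines thereafter; by coincidence this falls within the 15-17 year pulpwood rotations described earlier for the Usutu forest, which the text notes are close to the age of maximum mean annual increment.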
Weed control strategies may change from one rotation to the next owing to a differing weed spectrum and to whether weeds are more or less competitive. The issue is crucial to sustainability since all the main examples of yield decline reflect worsening weed environments, especially competition from grasses and bamboos. It is clear from many investigations that treatment of organic matter, both over the rotation and during felling and replanting, is as critical to sustainability as coping with the weed environment. While avoidance of whole tree harvesting is probably desirable on nutrition grounds, it is now evident that both prevention of systematic litter raking or gathering during the rotation and conservation of organic matter at harvesting are essential. If all the above silvicultural features are brought together a rising trend in productivity can be expected. But if any one is neglected it is likely that the whole will suffer disproportionately. For example, operations should not exclusively minimise harvesting costs, but should examine harvesting, re-establishment and initial weeding collectively, i.e. as an holistic activity, so that future yield is not sacrificed for short term savings. Evidence of a rising trend reflecting the interplay of these gains is reported in Nambiar (1996) for Australia and reproduced in Evans (1999b), along with an example from Swaziland. Holistic management also embraces active monitoring of pest and disease levels; researching pest and disease biology and impacts will aid appropriate responses, such as altering practices (e.g. delayed replanting to allow weevil numbers to fall) and careful re-use of extraction routes to minimise soil compaction and erosion. Four main conclusions can be drawn from this review. 1. Plantations and plantation forestry practices do affect sites and under certain conditions may cause deterioration, but are not inherently unsustainable.
Care with harvesting, conservation of organic matter and management of the weed environment are critical features to minimise nutrient loss and damage to soil conditions. 2. Plantations are at risk from pests and diseases. The history of plantation forestry suggests that most risks are containable with vigilance and the underpinning of sound biological research. 3. Measurements of yield in successive rotations of trees suggest that, so far, there is no widespread evidence that plantation forestry is unsustainable in the narrow-sense. Where yield decline has been reported, poor silvicultural practices appear largely responsible. 4. Several interventions in plantation silviculture point to increasing productivity in the future, providing management is holistic and good standards are maintained. Genetic improvement especially offers the prospect of substantial gains over several rotations. Ambar, S. 1986 Conversion of forest lands to annual crops: an Indonesian perspective. In Land Use, watersheds, and planning in the Asia-Pacific region. FAO RAPA report 1986/3, FAO, Bangkok. 95-111. Bell, T. I. W. 1973 Erosion in Trinidad teak plantations. Commonwealth Forestry Review 52: 223-233. Binns, W. O. 1962 Some aspects of peat as a substrate for tree growth. Irish Forestry 19: 32-55. Boardman, R. 1988 Living on the edge - the development of silviculture in South Australian pine plantations. Australian Forestry 51: 135-156. Bonneau, M., Gelpe, J. and Le Tacon, F. 1968 Influence of mineral nutrition on the dying of Pinus pinaster in the Landes in Gascony. Annales des Sciences Forestieres, Paris. 25: 251-289. Broadmeadow, M. 2000 Climate Change - Implications for Forestry in Britain. Information Note FCIN31, Forestry Commission, Edinburgh, U.K. 8pp. Bruenig, E. F. 1986 Forestry and agroforestry system designs for sustained production in tropical landscapes. Proc. First Symposium on the humid tropics. EMBRAPA/CPATU, Belem, Brazil. vol. 2: 217-228. Cannell, M. G. R., Thornley, J. H.
M., Mobbs, D. C. and Friend, A. D. 1998 UK conifer forests may be growing faster in response to N deposition, atmospheric CO2 and temperature. Forestry 71: 277-296. Chacko, K. C. 1995 Silvicultural problems in management of teak plantations. Proc. 2nd Regional Seminar on Teak 'Teak for the Future', Yangon, Myanmar, May 1995. FAO (Bangkok) 91-98. Chijioke, E. O. 1980 Impact on soils of fast-growing species in the lowland humid tropics. FAO Forestry Paper 21, FAO, Rome. Chundamannii, M. 1998 Teak plantations in Nilambur - an economic review. KFRI Research Report No. 144. Kerala Forest Research Institute, Peechi, Kerala, India. 71pp. Ciesla, W. M. and Donaubauer, E. 1994 Decline and dieback of trees and forests. FAO Forestry Paper 120, FAO, Rome. Cornelius, J. 1994 The effectiveness of plus-tree selection for yield. Forest Ecology and Management 67(1-3): 23-34. Das, S. K. and Rao, C. Muralidhara 1999 High-yield Eucalyptus clonal plantations of A.P. Forest Development Corporation Ltd - a success story? Indian Forester 125: 1073-1081. Ding, Y. X. and Cheng, J. L. 1995 Effect of continuous plantation of Chinese fir on soil fertility. Pedosphere 5: 57-66. Dyck, W. J. and Skinner, M. F. 1988 Potential for productivity decline in New Zealand radiata pine forests. Proc. 7th North American Forest Soils Conference. 318-332. Dyck, W. J., Cole, D. W. and Comerford, N. B. 1994 Impacts of forest harvesting on long-term site productivity. Chapman and Hall, London. 371pp. Elfving, B. and Nystrom, K. 1996 Yield capacity of planted Picea abies in northern Sweden. Scandinavian Journal of Forest Research 11: 38-49. Eriksson, H. and Johansson, U. 1993 Yields of Norway spruce (Picea abies [L.] Karst.) in two consecutive rotations in southwestern Sweden. Plant and Soil 154: 239-247. Evans, J. 1978 A further report on second rotation productivity in the Usutu Forest, Swaziland - results of the 1977 reassessment. Commonwealth Forestry Review 57: 253-261. Evans, J. 1992.
Plantation Forestry in the Tropics. 2nd edn. Clarendon Press, Oxford. 403pp. Evans, J. and Boswell, R. C. 1998 Research on sustainability of plantation forestry: volume estimation of Pinus patula trees in two different rotations. Commonwealth Forestry Review 77: 113-118. Evans, J. 1999a Sustainability of plantation forestry: impact of species change and successive rotations of pine in the Usutu Forest, Swaziland. Southern African Forestry Journal, 63-70. FAO 1992 Mixed and pure forest plantations in the tropics and subtropics. FAO Forestry Paper 103. FAO, Rome. Folster, H. and Khanna, P. K. 1997 Dynamics of nutrient supply in plantation soils. In Nambiar, E. K. S. and Brown, A. G. (eds) Management of soil, nutrients and water in tropical plantation forests. ACIAR (Australian Centre for International Agricultural Research) Monograph No. 43. 339-378. Franklin, E. C. 1989 Selection strategies for eucalyptus tree improvement - four generations of selection in Eucalyptus grandis demonstrate valuable methodology. In Gibson, G. L., Griffin, A. R. and Matheson, A. C. (eds) Breeding Tropical Trees: population structure and genetic improvement strategies in clonal and seedling forestry. Proc. IUFRO Conference, Pattaya, Thailand, November 1988. Oxford Forestry Institute. 197-209. Ghosh, R. C. 1978 Evaluating and analysing environmental impacts of forests in India. Proc. 8th World Forestry Congress, Jakarta. vol. 7A 475-484. Gibson, I. A. S. and Jones, T. 1977 Monoculture as the origin of major forest pests and diseases. In Origins of pest, parasite, disease and weed problems (eds J. M. Cherrett and G. R. Sagar), Blackwell, Oxford. 139-161. Goncalves, J. L. M., Barros, N. F., Nambiar, E. K. S. and Novais, R. F. 1996 Soil and stand management for short-rotation plantations. In Nambiar, E. K. S. and Brown, A. G. (eds) 1997 Management of soil, nutrients and water in tropical plantation forests. ACIAR (Australian Centre for International Agricultural Research) Monog.
No. 43. 379-417. Van Goor, C. P. 1985 The impact of tree species on soil productivity. Netherlands Journal of Agricultural Science 33: 133-140. Griffith, A. L. and Gupta, R. S. 1948 Soils in relation to teak with special reference to laterisation. Indian Forestry Bulletin No. 141. Holmsgaard, E., Holstener-Jorgensen, H. and Yde-Andersen, A. 1961 Bodenbildung, Zuwachs und Gesundheitszustand von Fichtenbestanden erster und zweiter Generation. 1. Nord-Zeeland. Det forstlige Forsogsvaesen i Danmark 27: 1-67. Hulme, M. 1996 Climate change and Southern Africa: an exploration of some potential impacts and implications in the SADC region. Climate Research Unit, University of East Anglia, UK and WWF International. 104pp. Keeves, A. 1966 Some evidence of loss of productivity with successive rotations of Pinus radiata in the south east of S. Australia. Australian Forestry 30: 51-63. Li, Y. and Chen, D. 1992 Fertility degradation and growth responses in Chinese fir plantations. Proc. 2nd International Symposium on Forest Soils. Ciudad, Venezuela. 22-29. Lonsdale, D. and Gibbs, J. N. 1996 Effects of climate change on fungal diseases of trees. In Fungi and Environmental Change (eds Frankland, J. C., Magan, N. and Gadd, G. M.). Symp. British Mycological Society, Cranfield University, U.K., March 1994. Lundgren, B. 1978 Soil conditions and nutrient cycling under natural and plantation forests in Tanzanian highlands. Reports in Forest Ecology and Soils No. 31. Department of Forest Soils, Swedish University of Agricultural Sciences. 428pp. McNabb, K. L. and Wadouski, L. H. (in press) Multiple rotation yield for intensively managed plantations in the Amazon basin. Miller, H. G. 1995 The influence of stand development on nutrient demand, growth and allocation. Plant and Soil 168-169: 225-232. Morris, A. R. 1987 A review of Pinus patula seed sources in the Usutu Forest, 1950-86. Forest Research Document 8/87. Usutu Pulp Company (unpublished). Morris, A. R. 1993a.
Observations of the impact of the 1991/92 drought on the Usutu Forest. Forest Research Document 6/93. Usutu Pulp Company (unpublished). Morris, A. R. 1993b Forest floor accumulation under Pinus patula in the Usutu forest, Swaziland. Commonwealth Forestry Review 72: 114-117. Nambiar, E. K. S. 1996 Sustained productivity of forests is a continuing challenge to soil science. Soil Science Society of America Journal 60: 1629-1642. Perum Perhutani 1992 Teak in Indonesia. In Wood, H. (ed) Teak in Asia. FORSPA Publication 4. Proc. regional seminar, Guangzhou, China, March 1991. FAO (Bangkok). Rennie, P. J. 1955 The uptake of nutrients by mature forest growth. Plant and Soil 7: 49-55. Roberds, J. H. and Bishir, K. W. 1997 Risk analyses in clonal forestry. Canadian Journal of Forest Research 27: 425-432. Sanchez, P. A. 1976 Properties and Management of Soils in the Tropics. Wiley Interscience, New York. Schultz, R. P. 1997 Loblolly Pine: the ecology and culture of loblolly pine (Pinus taeda L.). USDA Forest Service Agricultural Handbook 713. Skovsgaard, J. P. and Henriksen, H. A. 1996 Increasing site productivity during consecutive generations of naturally regenerated and planted beech (Fagus sylvatica L.) in Denmark. In Spiecker et al. (1996) 89-97. Speight, M. R. and Wainhouse, D. 1989 Ecology and management of forest insects. Oxford. Spiecker, H., Mielikainen, K., Kohl, M. and Skovsgaard, J. P. (eds) 1996 Growth trends in European Forests. European Forest Institute Research Report No. 5. Springer-Verlag, Berlin. 372pp. Tiarks, A. E., Nambiar, E. K. S. and Cossalter, C. 1998 Site management and productivity in tropical forest plantations. CIFOR Occasional Paper No. 17. Center for International Forestry Research, Bogor, Indonesia. 11pp. Tyler, A. L., Macmillan, D. C. and Dutch, J. C. 1996 Models to predict the General Yield Class of Douglas fir, Japanese larch, and Scots pine on better quality land in Scotland. Forestry 69: 13-24. Wenk, G. and Vogel, M.
1996 Height growth investigations of Norway spruce (Picea abies [L.] Karst.) in the eastern part of Germany during the last century. In Spiecker, H., Mielikainen, K., Kohl, M. and Skovsgaard, J. P. (eds) 1996 Growth trends in European Forests. European Forest Institute Research Report No. 5. Springer-Verlag, Berlin. 99-106. Whyte, A. G. D. 1973 Productivity of first and second crops of Pinus radiata on the Moutere gravel soils of Nelson. New Zealand Journal of Forestry 18: 87-103. Wiersum, K. F. 1983 Effects of various vegetation layers of an Acacia auriculiformis forest plantation on surface erosion in Java, Indonesia. Proc. International Conference on Soil Erosion and Conservation, Malama Aina, Hawaii, January 1983. Woods, R. V. 1990 Second rotation decline in P. radiata plantations in South Australia has been corrected. Water, Air and Soil Pollution 54: 607-619. Worrell, R. and Malcolm, D. C. 1990 Productivity of Sitka spruce in northern Britain: prediction from site factors. Forestry 63: 119-123. Ying, J. H. 1997 Comparative study on growth and soil properties under different successive rotations of Chinese fir. Journal of Jiangsu Forestry Science and Technology 24: 31-34. Zobel, B. J., van Wyk, G. and Stahl, P. 1987 Growing exotic forests. John Wiley, New York.
Answer: The answer is yes if you are using ESD-sensitive (ESDS) devices on top of your workstations and the tops of the workstations are not dissipative and connected to ground. If the bench top is already dissipative (less than 1 x 10^9 ohms) and grounded, then a separate dissipative mat is not necessary. Typically, a separate ESD static dissipative mat is used on top of a standard (non-ESD) workbench to ensure maximum protection of your ESDS devices when they are being handled (by grounded and monitored personnel) at the workstation. This ESD-safe working area (a grounded dissipative ESD work mat) allows charged surfaces to safely bleed through the mat to ground. It also keeps all conductors placed on its surface grounded, i.e., at ground potential, which should be the same as the operator's, to minimize the possibility of an ESD event. Grounded wrist straps and ESD mats are the first lines of defense when ESDS devices are being handled by human operators.
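The 1 x 10^9 ohm limit quoted above can be expressed as a simple check. This is a sketch only: the category names and the lower bound for a conductive surface (1 x 10^4 ohms) are assumptions drawn from common ESD practice, not from the answer itself, which gives only the upper dissipative limit.

```python
# Illustrative sketch: classify a bench-top surface by its measured
# resistance to ground, using the 1e9-ohm upper limit quoted above.
# The "conductive" lower bound (1e4 ohms) is an assumed figure.
def classify_surface(resistance_to_ground_ohms: float) -> str:
    if resistance_to_ground_ohms < 1e4:
        return "conductive"    # charge moves too quickly; can cause rapid discharge
    elif resistance_to_ground_ohms < 1e9:
        return "dissipative"   # within the limit quoted in the answer; bleeds charge safely
    else:
        return "insulative"    # cannot bleed charge to ground; add a grounded ESD mat

print(classify_surface(1e6))  # a typical ESD mat reading → dissipative
```

In other words, a separate mat is only needed when the measurement falls in the "insulative" case, or when the surface is not actually connected to ground.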
Ferrari unveiled the LaFerrari supercar at the Geneva motor show. It succeeds the Ferrari Enzo. Ferrari said the car's name, which means "The Ferrari" in Italian, was chosen to underline its uniqueness in the brand's history. LaFerrari is Ferrari's first gasoline-electric hybrid model. It is powered by the HY-KERS system, which consists of a 6.3-liter V12 normally aspirated engine that delivers 800 hp, coupled with a 163-hp electric motor, giving the car a combined power output of 963 hp.
So! It's been a week. What's happened? I drove out about an hour (I live in the middle of nowhere) and dropped my laptop at a reputable computer repair shop to get fixed. At first their conclusion was for me to buy a new laptop, but after some time they said they could repair it for only $200, much cheaper than the $600+ a new laptop would cost. I still lost all my files, however. But that can be worked around. Monday, on the way to school, I got a flat tire (strange enough, this is the third flat tire I've gotten while driving my car and they have all happened on the same road). Anyways, I had to buy two new tires as the one was destroyed and the one across from it was badly damaged from being old and an issue I used to have with my car. That cost $130 and I had to pull out of the savings I had built up for the next time my boyfriend visited. My mom, as sweet as she is, offered to help me out when he came over, so no big deal. Plus it's still about 3 months away, so I can still save up a bit more money until then. Well, today I went to pick up my laptop, taking the long trip once again. I was very excited to pick my laptop up finally and distracted by the long ride, resulting in me leaving my keys in my car for the first time. I'll skip the details afterwards and just say that it cost me $85 to unlock my car. Thankfully, a good dergan of mine commissioned me recently, which helped with the costs somewhat, but it was still rather upsetting to lose another large sum of money. Anyways, my mom is still gonna help me out when the time comes, and my part time job still pays enough for me to get by, but I'm kind of not doing my best. Losing all my files and finances in the span of a week has got me a bit down in the dumps. So. While I'll be okay, I do want to say that I still may take some time to get back on the metaphorical art horse and get back to work. TL;DR: I've lost a lot of money in the last week as well as all of the files on my computer. 
This has put me in a bit of a funk, but with time I'll be back to making art again!
Fill in the blank: My ideal bye week would be spent in Wisconsin doing _________. No. 84 Jared Abbrederis – Hunting. Should be a good time for deer hunting around the bye week. No. 42 Morgan Burnett – Hanging out with my 5 year-old son. No. 10 Matt Flynn – Hunting. No. 33 Micah Hyde – Eating some cheese curds. I love those things. Those things are great. No. 8 Tim Masthay – I would spend a few days in Door County doing all the touristy things. I would also go golfing and spend a few days in Milwaukee. No. 12 Aaron Rodgers – Golf at Whistling Straits, The Bull, the River Course, and a round at Green Bay Country Club. And maybe head up to Door County and get a round in at Horseshoe Bay. No. 21 Ha Ha Clinton-Dix – I’d go fishing. Wisconsin offers four seasons of exploring, adventuring, and enjoying all there is to do in Wisconsin. Who knew Wisconsin was a premier destination for lighthouses, waterfalls, caves, world class fishing, re-living (and making) history, professional sports, and the arts at their finest? Each season has something special to offer, but there’s something very magical about witnessing fall in Wisconsin and viewing Mother Nature’s spectacular colors. Wisconsin’s true colors shine September through October. Leaves start to change in Northern Wisconsin first, traditionally mid-September through early October. Earlier fall destinations include scenic Door County, Eagle River, Hayward, and Bayfield. As colors make their way down the rest of the state, it’s truly a sight to see from the Great River Road National Scenic Byway to the beauty of the Wisconsin Dells. Fall festivals celebrate everything from red cranberries to amber ales.
Achievement: Creative Director of Balaji Telefilms; awarded the Ernst & Young (E&Y) Startup Entrepreneur Of The Year award in 2001. Ekta Kapoor can aptly be called the reigning queen of the Indian television industry. The serials produced by her company Balaji Telefilms are a great hit with the masses and dominate all the major TV channels in India. Born on June 7, 1975, Ekta Kapoor is the daughter of former Bollywood superstar Jeetendra and sister of current Bollywood hero Tusshar Kapoor. Ekta Kapoor did her schooling at Bombay Scottish School and later joined Mithibai College. She was not interested in academics and, on the advice of her father, ventured into TV-serial production at the age of 19. She soon changed the face of the Indian television industry and came to dominate it completely. Today, Ekta Kapoor is the creative director of Balaji Telefilms. Her company has produced more than 25 serials, and each one is shown, on average, four times a week on different television channels. Ekta Kapoor's serials have captured the imagination of the masses. She has broken all previous records of TV serial production and popularity in India. Her most famous television venture has been "Kyunki Saas Bhi Kabhi Bahu Thi", which began in 2000 and still leads the TRP ratings in India. Her other famous serials include "Kahaani Ghar Ghar Ki", "Kahiin To Hoga", "Kavyanjali", "Kyaa Hoga Nimmo Kaa", "Kasamh Se", "Kahin Kisii Roz", "Kasautii Zindagi Kay", "Kkusum", "Kutumb", "Kalash", and "Kundali". For her entrepreneurial skills and achievements, Ekta Kapoor was awarded the Ernst & Young (E&Y) Startup Entrepreneur Of The Year award in 2001.
If someone is considering addiction rehabilitation, they should be aware of how the process works and what it can and cannot do. Rehab can help someone overcome their addiction to drugs or alcohol, but it cannot cure an individual, and staying clean is something that a person will have to work at for the rest of their life. Below are answers to some of the most frequently asked questions about addiction rehab treatment. Rehab usually involves a three-part process: detox, therapy and aftercare. Detox is the process where an individual eliminates the influences of drugs or alcohol from their body. The difficulty of the process is generally determined by the type of substance someone is addicted to and how frequently they have been using it. Since withdrawal symptoms can be severe, it is best for this to be a medically assisted process. Following detox is therapy, and there are a variety of types of therapy that people may undergo to help them deal with staying off of drugs or alcohol in the future. Group therapy, cognitive therapy and 12-step programs are just a few of the many options that people have during this stage of rehab. Therapy can help individuals identify and find ways to cope with situations that may make them relapse. Since staying clean and sober is a lifelong process, aftercare deals with helping people figure out ways to stay the course once they leave rehab. This often includes setting up ongoing therapy and ensuring that someone will be in a living situation that will not make staying away from addictive substances difficult. There is no set time period for completing rehab, but most programs are available in 30-, 60- and 90-day increments. For individuals with significant addiction issues, long-term care may also be available.
There are several factors that go into figuring out how long someone will need to participate in a treatment program, including the substance that someone is addicted to, their history of use, how long someone has been addicted and their mental state. Rehab treatment programs provide people with the tools they need to overcome their addiction and help them through the detox process. However, addiction is a lifelong problem, and rehab programs can only help treat addiction; they cannot cure addiction. This is why therapy and aftercare are important parts of rehab since they help people find ways to deal with the temptation as well as how to avoid it. As with many medical treatment programs, the costs of rehab can vary widely. One of the biggest factors in determining the cost of a treatment program is if it is an inpatient program or an outpatient program. Inpatient programs need to provide patients with food, shelter and staff that are available around-the-clock to offer support and assistance, so they are normally more expensive than outpatient programs. Other factors in the cost of rehab programs include where they are located, the quality and types of facilities available and the length of the treatment program. Rehab treatment programs that are located in upscale areas and provide things like private rooms, spas and gyms tend to be much pricier than programs in standard facilities. Will insurance cover the cost of rehab? Many addiction rehab centers will accept insurance, but individuals may need to find a facility that accepts insurance from their provider. Additionally, not all insurance covers rehab programs, and even those that do may not cover 100 percent of the cost of treatment. Many facilities offer payment plans or payments on sliding scales based on a person’s income, so treatment may still be available even if insurance doesn’t cover or completely cover a rehab program.
How do you know your parrot is ill? This is a very hard question to answer, as all birds "mask" any ailments they have; in the wild they would get "picked" on and possibly killed if they showed any signs of weakness. By the time you can physically see that your parrot is ill, it's normally too late, so you must make a mental note of what is "normal" for your parrot so you can spot any sign of things being wrong. GIARDIA: An internal protozoal parasite that resides in the intestinal tract. Symptoms include: diarrhoea, feather picking. CHLAMYDIOSIS (PSITTACOSIS): Symptoms include weight loss, green urates, and lethargy. POLYOMA: Affects young chicks with the following symptoms; daily weight loss, vomiting, depression, lethargy, dehydration, haemorrhage at injection and/or plucked feather sites. BEAK & FEATHER SYNDROME: Feather changes such as retaining the feather sheaths, fractures of the shafts, short clubbed feathers, curled feathers, haemorrhage in the pulp cavity, premature shedding of new feathers. PRO-VENTRICULAR DILATATION SYNDROME: Weight loss accompanied by a good appetite, undigested seeds in the droppings, regurgitation, enlarged pro-ventriculus, seizures. PSITTACINE POX: Upper respiratory tract disease/lesions on the oral, pharyngeal, oesophageal or crop mucosa, depression/anorexia, diarrhoea, bloody stools. PAPILLOMATOSIS: Cloacal masses, smelly faeces, infertility, recurrent prolapses, droppings accumulated on the vent.
The construction worker previously known as Lars has many bricks of height 1 and different lengths, and he is now trying to build a wall of width w and height h. Since the construction worker previously known as Lars knows that the subset sum problem is NP-hard, he does not try to optimize the placement but he just lays the bricks in the order they are in his pile and hopes for the best. First he places the bricks in the first layer, left to right; after the first layer is complete he moves to the second layer and completes it, and so on. He only lays bricks horizontally, without rotating them. If at some point he cannot place a brick and has to leave a layer incomplete, then he gets annoyed and leaves. It does not matter if he has bricks left over after he finishes. Yesterday the construction worker previously known as Lars got really annoyed when he realized that he could not complete the wall only at the last layer, so he tore it down and asked you for help. Can you tell whether the construction worker previously known as Lars will complete the wall with the new pile of bricks he has today? The first line contains three integers h, w, n (1 ≤ h ≤ 100, 1 ≤ w ≤ 100, 1 ≤ n ≤ 10 000), the height of the wall, the width of the wall, and the number of bricks respectively. The second line contains n integers xi (1 ≤ xi ≤ 10), the length of each brick. Output YES if the construction worker previously known as Lars will complete the wall, and NO otherwise.
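Since the laying rule is purely greedy, the question can be settled by direct simulation of the process the statement describes. A minimal sketch in Python (the function name and I/O handling are illustrative choices, not part of the problem statement):

```python
import sys

def can_build_wall(h, w, bricks):
    """Greedily lay bricks layer by layer, left to right."""
    layers_done = 0
    filled = 0  # width filled so far in the current layer
    for x in bricks:
        if layers_done == h:
            break  # wall already complete; leftover bricks are fine
        if filled + x > w:
            return False  # brick sticks out: the layer stays incomplete
        filled += x
        if filled == w:  # layer complete, move up to the next one
            layers_done += 1
            filled = 0
    return layers_done == h

if __name__ == "__main__":
    data = sys.stdin.read().split()
    h, w, n = int(data[0]), int(data[1]), int(data[2])
    bricks = [int(t) for t in data[3:3 + n]]
    print("YES" if can_build_wall(h, w, bricks) else "NO")
```

Note that running out of bricks before the top layer is finished also yields NO, which the final equality check covers.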
Why so long a brumation? Are you brumating the varius that long because it's convenient or because you think it's best for the lizards? I would think actual brumation time for varius in the wild is a good month or more shorter than your planned brumation. I find adult chucks with what looks to be fresh green staining on their mouths into late October/early November - and it stays warmer longer in the gulf. If adult chucks are eating that late in the year, it wouldn't surprise me if they often had partially digested plant material in their system when heading down for a long nap. I wonder if an herbivorous lizard like a chuck is physiologically adapted to cope with some of the toxic byproducts that accompany decay? Or, maybe the answer is even simpler (if we're actually looking for an answer). Maybe under cold conditions plant matter decays slowly (the colder the temps the slower the decay) and isn't prone to rot like live, whole foods, so the lizard isn't harmed by sleeping on a full stomach...or several variations or combinations of the two ideas. Maybe someone needs to study chuckwalla pooping behaviour in the wild. Some interesting stuff might "come out" in that research. PS. I could easily envision a restless chuck emerging from its crack on a warm December day, grabbing the newspaper, and wreaking (reeking) havoc on the neighborhood.
How many US states do you remember? Some people remember 52 US states, while there are only 50 states in the current reality. A few people seem to remember 51 states or even fewer than 50 states. This may have to do with the two states we once had above Maine in the 1800s, which were shown on old maps. That would make 50, but by the time Alaska/Hawaii came in there were 48. Maybe those last two were looked at as 51 and 52. Obama did slip and say 56 once, not counting Alaska and Hawaii. I always remembered 50 myself and studied that a lot. But it could be 52.
Being moderately overweight is not protective against overall mortality. One of the problems in research is when you oversimplify or unintentionally skew your data. When you do this sort of examination of data, you miss the proverbial forest because you are looking at all the trees. You need to look at the right data, not too much nor too little. In other words, you have to have the proper inclusion and exclusion criteria. A few years ago, a study looked at whether being overweight was protective against overall mortality. The study in question was from 2013 and was published in the Journal of the American Medical Association. It was a meta-analysis that looked at 97 prior studies to determine the relationship between body weight and all-cause morbidity and mortality. The researchers were surprised by their data and determined that being overweight, with a BMI of 25-30, was protective against all-cause mortality. The findings were counter to what was expected, and the publication of the results produced an avalanche of articles that touted the protection that excess weight reportedly provides. Since the publication, a growing number of experts have questioned the results and claimed that they were suspect due to poor measurement of data. For example, how do we know that obese subjects did not lose weight when they became ill, moving into the normal-weight range and inflating the mortality of that range? A new study from 2017 looked at three cohorts with 225,000 subjects. Instead of looking only at current weight, this study looked at both current weight and the maximum weight over 16 years. This study revealed that being overweight was not protective against mortality. The bottom line: BMIs at the extremes (low and high) are a risk if you become ill and may predispose you to certain illnesses. A single BMI measurement is not useful for determining risk. And I caution you against concluding that weight loss is not good for you based on the earlier study.
Weight loss, in general, is better for the health of an obese subject, so I would recommend it unless they have a condition that would do better with a higher weight, such as cancer.
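For reference, BMI is weight in kilograms divided by height in metres squared. A quick sketch of that arithmetic, using the conventional WHO cut-offs (the thresholds and labels below are the widely published ones, not figures taken from either study discussed above):

```python
def bmi(weight_kg, height_m):
    """Body mass index: kg / m^2."""
    return weight_kg / height_m ** 2

def bmi_category(value):
    # Conventional WHO cut-offs
    if value < 18.5:
        return "underweight"
    if value < 25:
        return "normal"
    if value < 30:
        return "overweight"  # the 25-30 band discussed above
    return "obese"

b = bmi(85, 1.75)  # 85 / 3.0625, about 27.8
print(round(b, 1), bmi_category(b))  # prints: 27.8 overweight
```

So an 85 kg person at 1.75 m falls squarely in the 25-30 band the 2013 meta-analysis reported on.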
What is Cognitive Behavioural Therapy and what is it for? Cognitive Behavioural Therapy (CBT) is a psychological therapy that helps you to change how you think (cognitive) and what you do (behaviour) in order to feel better. It focuses on the problems you are experiencing now rather than on your past issues. CBT can help with many different problems, and it has a strong research evidence base for its effectiveness across a wide range of issues and behaviours. CBT can be most effective for depression and anxiety and can be as effective as antidepressant medication for many people. Alongside mindfulness practice, CBT is used by many consultants to encourage personal growth and change.
In iOS, in addition to on-device privacy protections such as a passcode, Touch ID or Face ID, users can also use the Restrictions feature to prevent changes to settings, such as a factory reset, signing out of iCloud, or removal of applications. But what do you do if you have forgotten the Restrictions passcode on your iPhone? Often users are advised to log in to their Apple iCloud account to disable this feature, or to restore the device. But what if the iPhone does not use iCloud, or the device contains a lot of personal data that is not backed up? This article will give you some suggestions, as follows. Step 1: Download and install iTunes onto your computer. Step 2: Connect your iPhone to your computer and wait for iTunes to recognize your device. Step 3: Once iTunes has recognized your iOS device, go to your device’s management information page. Find and click the “Back Up Now” option. Wait a few minutes for iTunes to back up the data on the device. Step 4: When the backup process is over, close iTunes and visit this site to download the iBackupBot tool. Then install this tool on the computer. Step 5: Start iBackupBot and wait a few seconds for the tool to identify the backup you just made. Step 6: Now go to “System Files > HomeDomain > Library > Preferences“. Step 7: Enter the search phrase “com.apple.restrictions”, press ENTER, and wait a few seconds for the results to appear. Double click on the result to have iBackupBot display the content in a new window. Step 8: Copy the content as shown, paste it into the web page “ios7hash.derson.us” in the corresponding boxes, and click “Start Searching”. Wait a few minutes and the results will show up for you.
Create a detailed job description that will highlight the important duties and responsibilities of a business analyst. We understand that you’re a busy person and creating your own may take so much of your time. That’s why we recommend you make use of our Business Analyst Job Description template. Just edit, customize, or modify any of its content. Did we also mention that it can be downloaded onto any of your devices so that you can edit it anytime, anywhere? It’s quick, easy, convenient, and gets the job done. Don’t waste any more of your time! Download our Business Analyst Job Description template now and experience convenience first hand.
Learning to float well on the back is the first step in being comfortable with the backstroke. Good spinal alignment and core tension not only improve comfort on the back, but can also contribute to an effective backstroke. The goal of the following drills for body position in backstroke is to experience positive backstroke floatation upon which a good backstroke can be built. Step 1 : - Lay horizontally in the water, face up, without attempting any forward motion, arms at sides, head leading. Step 2 : - Focus on how your spine affects your floatation. First, round your spine so that your knees come up toward your chest, and your hips are low, as if you were sitting in the water. Notice that you have a hard time floating in this position. Your face may even submerge. Step 3 : - Resume your horizontal position in the water, face up. Focus on your spine. Now, arch your back, so your belly button is the highest point of your body. Try to get it above the water. Notice that by doing so your face submerges, or nearly submerges in the water. Your legs may also sink. Step 4 : - Again lay horizontally in the water, face up. Focus on your spine. Make it as absolutely straight as possible. This includes straightening the natural bends in the small of the back and at the neck. To accomplish this, rotate your pelvis forward, and press the back of your head into the water. Notice that in doing so, you contract your abdominal muscles, and actually relax your neck. Notice too that your body floats more horizontally. This is the advantageous floating position which you can build a good backstroke upon. Step 5 : - Practice several times until you achieve the feeling of balancing on your spine. My legs still sink. If you have heavy legs, it is even more important to learn to rotate your pelvis forward. Contract your abdominal muscles, and float on your spine. I'm not sure my spine is straight.
You can check it by standing against a wall and pressing every inch of your spine into the wall. Reach back and try to slide your hand between the wall and the small of your back. If you succeed, you need to rotate your pelvis more forward to close this gap. Check the same way behind your neck. No space should remain. Take time to analyze what muscles you are engaging to achieve your straight spine, then, do it in the water. Doing this makes my hips higher than my belly button. Yes, that is correct. You might feel sort of like an elongated banana in the front of your body. But in the backstroke, it is your spine that you float on, and that is what needs to be straight. Step 1 : - Lay horizontally in the water, face up, arms at sides, spine straight. Do not produce any forward motion. Step 2 : - Now move your chin toward your chest until the surface of the water, or the water-line, is just below your ear lobes. Notice that in this position, the muscles in your neck and shoulders are fully engaged. Your legs may also sink, and it will probably be difficult to maintain your floatation. Step 3 : - Now lay horizontally in the water, face up, spine straight. Lift your chin and rock your head back, until your ears are completely submerged and you can see the water at the back of your head. The water-line will be at your eyebrows and around your throat. Notice that in this position, the muscles in the back of your neck and upper back are engaged. Your float will probably not suffer, although you will probably feel quite stiff. Step 4 : - Again lay horizontally in the water, face up, spine straight. Hold your chin neutral, as if you are looking at someone who is exactly the same height as you. The water-line will surround your face, from your hairline to the bottom of your chin. In this position, your ears will be submerged and your neck, shoulders and upper back will be relaxed, like when your head rests on a pillow. This is the correct head position for the backstroke.
Water gets in my ears. This is a problem that will be resolved when you add forward motion to the float in the next section. The water will then pass by your ears instead of going in them. I seem to float better with my ears out of the water. You might be achieving the needed abdominal contraction by lifting your upper body, rather than rotating your pelvis forward. It is important to use your pelvis to contract your abdominal muscles, because you need your upper body to stay aligned in the direction you are going. I am not able to relax my neck. Lower your shoulders. Float with your palms facing the surface of the water. Breathe in and out deeply. Practice letting the water hold your head. Step 1 : - Lay on your side in the water, with the arm closest to the surface at your side, and the arm closest to the bottom of the pool extended over your head. Achieve a straight spine, and good water-line. Step 2 : - Begin a gentle but continuous flutter kick, which should also be directed side to side. Step 3 : - Although your body is floating on its side, allow your face to float straight up, so it is completely out of the water. Relax your neck so your head feels like an independent object floating in front of your body. In this position, the shoulder of the arm at your side should be out of the water and closer to your cheek than your other arm is to the other cheek. Step 4 : - Kick twelve times (each leg equals one kick). Step 5 : - Just as you finish the last kick, bring the arm laying by your side straight over the water to an extended position over your head, and at the same time, bring the forward reaching arm through the water to your side, as you switch to the opposite side of your body to float. All the time maintain a straight spine and core stability. Step 6 : - Kick twelve more kicks in this position. Repeat the switch with your arms and floating side. Continue to kick twelve times then switch until you reach the far end of the pool.
I get a lot of water in my face. Make sure your chin is neutral and that your spine is straight. But remember, the fact is that even excellent backstrokers get water in their face. However, the more momentum you produce, the more the water will go around you, instead of in your face. My kick is aimed up and down. Let the hip on the side with your arm extended be lower than the other hip. I go crooked. Align your extended arm straight and lock your elbow. Do this drill next to the side of the pool, or by a lane line to keep your bearings. An effective flutter kick is a significant part of an efficient backstroke. Although similar in its alternating motion to the flutter kick of freestyle, there are certain distinctions. Kicking well on the back requires employing leg muscles differently than humans commonly do for most land activities. The major force of the backstroke kick is upward, against gravity, making quick, compact kicking and good foot position extremely important. The goal of the following backstroke kicking drills is to address the unique issues of the backstroke kick that contribute to an effective and sustainable kicking technique. Step 1 : - Stand in the water a bit more than waist deep. Hold one arm outstretched in front of you about twelve inches over the surface of the water. Now, drop your hand down with force into the water. Notice that the splash that occurs goes in all directions. Notice too the sound that your hand makes as it makes a hole in the water. Step 2 : - Now, hold your arm outstretched about twelve inches under the surface of the water. Raise your hand up with force to about three inches under the surface of the water. Notice the water welling up, almost like water coming to a boil. Repeat this action, with more force, as if bringing the water to a full boil, without your hand ever reaching the surface of the water. This is what your feet do in a good backstroke kick. Step 3 : - Try the same thing with your feet. 
Standing with your back supported at the side of the pool, bring your foot about twelve inches over the surface of the water and drop it down into the water. Notice the large splash. Listen to the distinct sound of your foot making a hole in the water. Now hold your foot twelve inches under the water, toes pointed, and force your foot up towards the surface, but don't allow any part of your foot to break the surface of the water. Make the water boil. Notice the feeling of the water against the top of your foot. Step 4 : - Now lay horizontally in the water, face up, straight spine and good water-line, arms at your sides, toes pointed. Drop the heel of your right foot down about twelve inches into the water. Force it up quickly and create a visible boil. Try it with your left foot. Now try it alternating feet. As the one foot forces the water up, the heel of the other foot drops down. Step 5 : - Kick at a tempo that there is no distinction on the surface of the water between the boil from one foot and that of the other. Continue to the other end of the pool, using a more forceful upward motion, and a gentler downward motion. Repeat for several lengths, feeling the difference between the water on your foot as you force it upwards, and as you drop your heel down. I can't make the water boil. Try relaxing your foot and ankle, so that your foot works more like the tail of a fish. See the Floppy Foot drill to work on this. My toes keep breaking the surface. Point your toes more. Control your kick so your feet remain connected to the water. I am not moving. Remember to kick with more force upward than downward. Check that your toes are pointed, and that your knees are not bending too much. Step 1 : - Lay horizontally in the water, face up, straight spine and good water-line, toes pointed. Extend your arms over your head, squeezing your ears between your elbows and clasping your hands, one over the other. Begin the flutter kick, kicking upward with more force. 
Step 2 : - Kick at a tempo that there is no distinction on the surface of the water between the boil from one foot and the other. Maintaining a neutral chin position, use your peripheral vision to see if your knees are breaking the surface of the water, or if the water is moving upward around your knees. If so, this indicates that you are lifting your knees, or doing a bicycling motion during your kick. This is common because the muscles used to lift the knees, the abductors, that connect the legs to the abdomen, are used a great deal by humans on land in actions such as walking, running, cycling and climbing stairs. However, in the backstroke kick, lifting your knees weakens your kick. Step 3 : - To engage the correct backstroke kicking muscles, the quads and the hamstrings, resume your backstroke float, hands leading, spine straight, good water-line, squeezing your ears with your elbows, and begin kicking with absolutely no bend at the hips. It means you have to start the kick lower than the surface of the water, and use your leg as if to kick a ball floating on the surface of the water. Although your knee will bend, it is only as a result of your dropping your heel into the water, not by lifting your knee. Imagine kicking a ball with your foot by raising your knee. It wouldn't work. Using your peripheral vision, check your knees. There should be no water moving around them. Practice for several lengths of the pool. Step 4 : - Now, double check if you are lifting your knees by holding a kickboard with one hand, positioning it the long way over your upper legs and knees as you float and kick, keeping the other arm extended over your head. As you kick, you will be able to feel if your knees bump the kickboard. Practice for several lengths of the pool. Water gets in my face . . . a lot. This is feedback that you are lifting your knees. Doing so produces a wave, which washes over your face.
Work on starting the kick by dropping your heel down, rather than raising your knee up. Try it on land looking in a mirror. Also, in the water try kicking faster to produce more forward momentum. It hurts my lower back when I don't bend my knees. Rotate your pelvis forward and work on your straight spine using the Float on Your Spine drill. You also might be kicking too deep. The backstroke kick is only about twelve inches at its deepest. Additionally, make sure you are not kicking down with force. Simply drop your heel into the water. Kick with force only in the upward direction. I move very well even with my knees breaking the surface. You are probably blessed with excellent ankle flexibility which makes the backstroke kick much easier. But just think how well you would move if you also eliminated the drag that your knees are producing. It is worth your time to work on this. Step 1 : - Lay on your back in the water. Achieve a straight spine, good water-line, and pointed toes. Extend your arms over your head, squeezing your ears between your elbows and clasping your hands, one over the other. Begin the flutter kick. Step 2 : - Point your toes and use a quick kick tempo. Kick upward with more force, creating a boil on the surface of the water. Notice that the part of your foot that comes closest to the surface is your big toe. Kick to the far end of the pool. Step 3 : - Now push off the wall again for the backstroke kick. This time, turn your feet inward to the pigeontoed position, so your big toes are closer together than the rest of your feet. Notice that by turning your feet inward, your knees and hips are also internally rotated. Begin kicking. Step 4 : - Continue kicking with a quick tempo. Feel the top of your foot press upward against the water. Notice that you are engaging more of your foot's surface by positioning your pigeontoed feet. Notice too that your kick is more productive. Kick to the far end of the pool. 
Step 5 : - Push off again for the backstroke kick. Achieve the pigeontoed position and kick at a quick tempo. Kick the water upward. Notice the centralized boil on the surface of the water. Notice too that your knees stay under the surface of the water better. Continue kicking for several lengths of the pool. My big toes bump into each other. An occasional bump is okay. If your big toes are bumping into each other with every kick, this could slow your kick rhythm. Adjust the pitch of your feet very slightly so they don't touch. My feet don't rotate this way. Try it on land first. Turn your knees inward and let your feet follow. You can increase your flexibility by practicing it over time. I still feel the downbeat more. Engage your quads to kick upward. Try not to use your hamstrings to an equal extent. Drop your heel down, then force the top of your foot up quickly for the most effective backstroke kicking action. Step 1 : - Fill a medium plastic cup halfway with water. Gently balance the cup on your forehead as you lay on your back in the water, with a straight spine and good water-line. Once the cup is balanced, position both arms at your sides. Begin kicking, toes pointed, knees underwater, making the water boil. Continue kicking to the other end of the pool, trying to keep the cup of water balanced on your forehead. If it falls off, stand and start again. Practice until you can kick productively without the cup falling off for a full length of the pool. Step 2 : - Now prepare to kick another length with the cup on your forehead. This time, every six kicks, do a quarter roll. Keeping your head still so the cup remains balanced, roll your body clockwise, so that your left shoulder, arm and hip are out of the water, while your right shoulder, arm and hip are low in the water. Your kick should now be oriented to the side. Step 3 : - After six kicks, roll your body a quarter turn counter-clockwise, so you are again flat on your back, and your kick is up and down. 
Do six kicks. Now, maintaining your head position so the cup stays on your forehead, turn your body a quarter turn clockwise again, so that your right shoulder, arm and hip are out of the water, while your left shoulder, arm and hip are low in the water. After six kicks, roll back to the flat back floating position, kicking up and down. Step 4 : - Continue kicking with quarter rolls to the end of the pool, with your head still so that the cup remains balanced on your forehead. If the cup falls off, start again. This is not an easy drill! Practice until you can kick productively, with quarter rolls, while the cup remains on your forehead for several lengths of the pool. The cup falls off right away, even without a quarter roll. Work on your straight spine and neutral chin. Work on the drills in the section called Body Position. Make sure your ears are underwater and your neck is relaxed. When I do a quarter roll, my head turns too, and the cup falls off. Relax your neck and turn your shoulder toward your cheek. Try it on land first, looking in a mirror. My kick still aims up even with a quarter roll to the left or right. Use more core stability, so that your shoulders and hips roll together and don't twist at the waist, leaving your hips flat. Of the many misconceptions about the backstroke arm stroke, the most common is that the arms remain straight throughout the stroke, like a windmill. A straight arm stroke is often associated with shoulder problems. It is also difficult, like trying to lift yourself out of the pool with straight arms. While it is important to begin and finish the stroke with straight, well aligned arms, using a high and firm bent elbow position, during the middle of the stroke, allows the swimmer to access more power, and move forward with an accelerating pull then push action. 
The goal of the following arm stroke drills for backstroke is to learn the most productive path of the underwater stroke for an easier, more effective, shoulder-saving stroke. Step 1 : - Float in the water, face up, spine straight, good water-line, kicking productively, with one arm at your side and the other extended over your head, aligned with the shoulder. Step 2 : - Start your stroke by descending your reaching hand about twelve inches straight down into the water, pinkie finger first, allowing the opposite shoulder and hip to rise at the same time. From twelve inches deep, keeping your elbow absolutely still, begin to move your fingertips and palm upward to press the water toward your feet. Sweep your hand to the height of your shoulder, so that your arm is close to a right angle. Done correctly, it should feel like your forearm is rotating around your elbow. This is the pull portion of the backstroke arms. Step 3 : - From that point, straighten your arm in a quick sweep until your hand stops below your hip, with your fingertips pointing toward your feet. Allow your whole arm to become involved in this sweep. At the same time, roll your same side hip upward to assist in the power of this action. This is the push portion of the backstroke arms. Step 4 : - Recover your arm over the water and repeat the path of the backstroke arm with the same arm. Notice that your hand is tracing a sort of S shape along the side of your body. The top curve is the pull, the bottom curve is the push. This S shape can become more pronounced as you increase your roll into the top curve with your shoulder and hip, and out of the bottom curve with your shoulder and hip. Continue to the end of the pool. Step 5 : - Repeat the drill, quickening your tempo, and accelerating your hand toward your hip. Hold your elbow firm. Roll in and roll out of each stroke. Try to identify the transition between pull and push. Continue for several lengths, then switch arms. 
It hurts my shoulder to get my hand twelve inches deep at the beginning of the stroke. It is very important to allow your same side shoulder and hip to roll down with your hand. Without this roll, it is nearly impossible, and painful for many swimmers, as the shoulder joint is not designed for this range of motion from a flat position. The palm of my hand is facing up when my hand reaches my hip. This indicates that the path of your stroke is circular, instead of S shaped. When you finish with your palm up, you are lifting the water. Doing so actually pulls your body down, not forward. Try tracing the shape of an S in the air, then again in the water. My fingertips come out of the water when my hand and shoulder line up. Your hand might be too close to your shoulder. Keep it at a right angle to your shoulder. Also, be sure that you are rolling into your stroke with your shoulder and hip, giving you water over your fingertips when your fingertips are closest to the water's surface. Step 1 : - Push off the wall for the backstroke, both arms extended over your head. Achieve a straight spine, firm core and good water-line. Establish a productive kick. Begin the backstroke arm stroke with your right arm, leaving your left arm in the extended position. Step 2 : - Lower your right hand about twelve inches down into the water by rolling the same side shoulder and hip down, and the opposite shoulder and hip up. As your hand reaches its deepest point, catch a handful of water. Maintaining a stable elbow position, move your handful of water in an arch up and over your elbow. Step 3 : - At the highest point in the arch, your fingers should be pointing upward toward the surface of the water, but not breaking the surface. At the finish of the arch, your arm should be straight along the side of your body, and your fingertips should be pointing toward your feet.
Step 4 : - As your right arm exits the water by your hip and returns over the water to its starting position, trace the same path with your left arm. Roll your left hip and shoulder down into the water, causing your left hand to come to a depth of about twelve inches. Grab hold of deep water, and sweep your hand up and over your still elbow in an arch that finishes with your arm straight at your side. Step 5 : - Continue stroking with alternating arms. Catch the water deep and keep hold of it as your hand traces an arch up and over your elbow with each stroke. Trace the arch with more speed. Feel your body move forward with each stroke. Practice for several lengths of the pool. My hand breaks the surface at the top of the arch. This indicates you are too flat in the water. Make sure you roll your body with your descending arm to get the depth you need at the top of the arch. I can't seem to catch the water. As your hand reaches its deepest point, move your fingertips and hand upward into position to press against the water. Keep your elbow firm and still at this point. I lose my handful of water at the end of the arch. Accelerate your motion through the arch. Your hand should be moving fastest at the end of the arch, pressing the water down and past your hip. Step 1 : - Push off the wall for the backstroke. Before the first stroke, form closed fists with each hand. Step 2 : - Start to stroke. At first it may seem impossible to make forward progress without the paddle of your open hand. Keep stroking, purposely positioning your arm so your forearm works as your paddle to press against the water. This will require you to initiate the stroke by moving your fists without moving your elbows, and to keep your elbows high and stable as your fist moves past them. Step 3 : - Use the whole length of the stroke, top to bottom. Feel pull and push. Accelerate the underwater stroke. Maintain opposition. Adapt your stroke to the handless paddle.
Continue to the other end of the pool. Step 4 : - Now push off again, this time with open hands. Swim regular backstroke, using your hand as well as your forearm to press against the water. Keep a stable, high elbow. Feel pull and push. Accelerate your stroke. Maintain opposition. Step 5 : - Continue alternating lengths of fist and open hand until you are feeling the water with a paddle that includes both your hand and your forearm. I am not moving. Reposition your forearm so when your arm moves through the stroke, it quickly becomes perpendicular to the surface of the water. I am not feeling my forearm against the water. Keep your elbow still as you move your fist, then move your forearm upward into position to press back on the water. If your elbow drops at this point, your forearm will not engage the water. My elbows keep moving. Try widening the angle of your bent arm through the beginning and middle of the underwater arm stroke. Move your fist before your forearm. Step 1 : - Push off the wall for the backstroke with your arms extended. Achieve a straight spine and good water-line. Kick productively. Step 2 : - Take one stroke with your left arm. As your left arm reaches your side, begin to stroke with your right arm while your left arm starts its recovery over the water. Step 3 : - When your right arm approaches mid-recovery, your left arm should have come to its deepest point underwater. The shoulder and hip of your recovering arm should be at least partially out of the water, and the shoulder and hip of your stroking arm should be low in the water. Step 4 : - At this point, redirect your recovery to cross over toward your opposite shoulder, and lead your body to switch from its side to a front floating position. Your recovering hand should enter the water as a freestyle stroke. Step 5 : - Start to stroke with freestyle as the arm at your side starts to recover as freestyle.
You should again be floating on your side, and by mid-recovery, redirect your over-the-water arm to enter the water as the backstroke. Step 6 : - Continue to move through the water, one stroke as backstroke and the next as freestyle. Feel the depth that your arm achieves to begin each stroke on your back. Notice the similar bent elbow position of your arms in both strokes during the mid-pull. Feel how rolling toward your stroking arm accesses more power for the stroke. I am getting dizzy. Yes, this can happen. Stop frequently. Make the most of each stroke that you are able to do. Be very focused on noticing the depth that your hand achieves, the bent arm position in the middle of the stroke, and your roll until you feel dizzy. It is hard to flip from my back to my front. Allow your body to roll onto its side, using your hips and shoulders. Then simply continue that roll so you end up floating on your front. I can't figure out when to breathe. Catch a breath during the stroke on your back. This drill does not accommodate breathing during the freestyle stroke. The opposition timing of the backstroke makes the arm stroke recovery an active part of the stroke. While resting the muscles of the arm, the recovering arm must serve as a counterbalance to the stroking arm. Accomplishing this requires the recovering arm itself to be correctly aligned in relation to the rest of the body. The following backstroke recovery drills focus on developing a relaxed, aligned and balanced path over the water. Step 1 : - Stand in front of a full-length mirror. Imagine the face of a large clock centered on the mirror in front of you. Raise your right arm over your head as if preparing to enter the water for the backstroke, pinkie first. Position your hand at 12:00. Now, attempt to lower your arm downward as you would into the water to a point about twelve inches below where the water's surface would be.
Notice that from the 12:00 position, your shoulder does not allow this rotation, unless you are double-jointed. Step 2 : - Again, raise your right arm over your head extended as if preparing to enter the water for the backstroke. This time, position your arm at 1:00, and attempt to lower your arm as you would into the water to a point about twelve inches below where the water's surface would be. Notice that your shoulder allows this rotation, moving more freely around the joint. Some swimmers may need to use a 1:30 or 2:00 for a more comfortable entry point. Step 3 : - Once you have found an entry point that allows free shoulder joint rotation, try duplicating this entry point with the other arm. Now try alternating arm action, monitoring your clock arms in the mirror in order to avoid an over-reaching 12:00 entry. Continue for 30 seconds. Step 4 : - Now, try alternating arm action with your eyes closed. Every six or seven strokes, freeze at your entry position, open your eyes, and check if you are properly aligned at 1:00 or 11:00 so your arm can descend freely downward and you are aligned most directly forward. Once you have checked, and made a modification if necessary, resume your stroke action with your eyes closed. Check again. Continue until you are able to maintain a recovery with a well-aligned entry position. Step 5 : - Now try it in the water. If you are swimming in an indoor pool, use the beams, pipes or lines on the ceiling to align your arms at 1:00 and 11:00. If you are swimming in an outdoor pool, keep imagining the clock around you, and align your arms outside 12:00. Practice lowering each arm down into the water to begin the stroke, and notice if your arm is rotating easily around the shoulder joint. Practice aligning your arm precisely in the direction you are heading. Continue until you are achieving a well-aligned entry that both allows your arm to achieve a wider range of motion and moves you most directly forward.
I am comfortably able to enter the water at 12:00. You are very fortunate to be so flexible. However, you still will want to strive for a 1:00 entry, so your whole body will be aligned forward, in the direction you are going. When I think I am at 1:00, I am at 12:00. This is true for most people. So, if you are trying to achieve a 1:00 entry, reach for 2:00. My shoulder doesn't rotate freely even at 1:00. Try reaching farther outside the shoulder. Also, remember to only lower your arm about twelve inches down into the water. Step 1 : - Push off the wall preparing to do the backstroke, both arms extended over your head, straight spine, good water-line, productive kick. Step 2 : - Take one stroke, leaving the other arm extended. When your arm finishes the underwater phase of the stroke past your hip, slide it straight out of the water, upward past the side of your body. Establish the path of your recovery by tracing a half-circle in the air, from the place where your arm exited the water to the point it will enter, extended past your shoulder, aligned in the direction you are heading. Repeat with the other arm, focusing on the forward-aimed, arched path of the recovering arm. Step 3 : - Now, as your arm begins the recovery after the next stroke, begin to trace the arch over the water, but stop your arm at the highest point in the arch and count to five. Notice the alignment of your raised arm. It should be directly above your shoulder, pointing to the sky, not above your face. A misaligned recovery leads to an over-reaching entry that starts the stroke from a position of weakness. Step 4 : - After the count of five, lower your arm back into the water, following the same recovery path in reverse, finishing with your hand past your hip. Count to five, and begin recovery again, this time tracing the whole recovery path and entering the water extended past your shoulder. Check your alignment at the high point as your arm passes.
As your arm begins this second, full recovery, the extended arm begins its underwater stroke. Step 5 : - When the second arm completes the underwater stroke, again do a half recovery, then the complete recovery. Continue to the far end of the pool, alternating arms with a two-step recovery. Practice for several lengths until the path of the recovery is well aligned with your shoulder at the top of the arch, and not over-reaching to the center at entry. I sink when I hold my arm up in the middle of the recovery. It is awkward at first. During the five seconds that your arm is at the top of the arch, kick harder. Focus on your straight spine and core stability. My arm aligns with my face. It is important to work on this because correct alignment begins the stroke from a position of strength. If your arm is over-reached during the recovery, you will enter the water with a shoulder position that does not allow the range of motion you need. You will then not be able to get your hand to the optimal depth to stroke in a high elbow position. I have no momentum. Your momentum is reduced because of the lack of opposition in this drill. Remember that the point of this drill is to work on your recovery alignment. Focus on that rather than how fast you are going. Step 1 : - Holding a piece of chalk in your hand, stand with your back against a wall. Trace around the shape of your shoulders and head. Step 2 : - Still standing within your chalk outline on the wall, extend the arm with the chalk, as if preparing to enter the water from your backstroke recovery. Make a mark on the wall in that position. Return your arm to your side. Step 3 : - Again extend your chalk arm, but as you approach the entry position, push your elbow to the locked position, as you would if trying to reach something on a shelf that is a bit too high. Make another mark on the wall, using more pressure in order to distinguish it from the first mark.
Step 4 : - Step away from the wall and compare the two marks. Notice that by locking your elbow, the second mark aligns above your shoulder, whereas the first mark aligns more closely with your head. In addition, many swimmers will notice that, simply by locking the elbow, the second mark is higher on the wall by several inches, indicating a longer stroke. Try it with the other arm. Step 5 : - Now, get in the water and push off the wall for the backstroke, arms extended over your head, spine straight, good water-line, and productive kick. Take one underwater stroke ending past your hip. Begin your recovery focusing on your elbow. Look for any bend in your recovering arm at any point during the arching path over the water. Swim backstroke to the far end of the pool, watching each recovery, and checking for any bend in your elbow. Step 6 : - Again swim regular backstroke. As each stroke ends past your hip, deliberately push your elbow to the locked position before starting the recovery. Maintain your locked elbow throughout the recovery. Notice that your stroke will feel longer. Notice too that your locked elbow recovering arm will feel more connected to the underwater arm, as it creates an opposition balance while they move in conjunction. Notice as well that you will be able to feel the water better as your recovery arm transitions to the stroking arm. I can't feel a locked elbow when I am swimming. Try it on land until you develop the feeling. Use a mirror. Close your eyes and extend your arm over your head with what you think is a locked elbow. Then open your eyes to check. If it is not locked, make the correction while looking in the mirror. Try again with your eyes closed and re-check. I can't see if my elbow remains locked when it enters the water. This is true, but by achieving a locked elbow during recovery, you have a greater chance of maintaining that position through entry.
Focus on the feeling at entry of reaching for something on a shelf that is a bit too high. When I focus on my recovering arm, both arms end up at my sides. Remember that beyond alignment, the main benefit of a locked elbow is that it increases the opposition balance, or feeling of connection between the arms. Both arms must move at the same time, on opposite sides of the body, and in opposing action. When one arm is at its highest, the other is at its lowest. When one arm is beginning the stroke, the other is finishing the stroke. Focus on this balance. Step 1 : - Push off the wall for the backstroke. Achieve a straight spine, good water-line, opposition balance, and locked recovering elbow. Kick productively. Stroke through and recover. After recovery, your hand should enter the water pinkie first, slicing the water, rather than making a hole in it with your flat hand. This requires positioning your hand with the palm facing outward. Try the palms-out entry for several strokes. Step 2 : - Now, swim backstroke again, maintaining a palms-out entry position at the end of recovery. Freeze in the position when your arm has passed the top of the recovery arch. Observe which way your fingers are pointing. If they are pointing inward toward your face, and you are seeing the back of your hand, you have a collapsed wrist. A collapsed wrist positions your palms upward, as if you were a waiter holding a tray over your shoulder. It is the weakest entry position possible, and a disruption in your alignment. Notice that with a collapsed wrist, the muscles in your forearms are working, rather than resting during recovery. Notice too, that with a collapsed wrist, it is much harder to maintain a locked elbow recovery. Continue swimming, avoiding a collapsed wrist. Step 3 : - Now, with your locked elbow, palms-out recovering arm, deliberately allow that hand to flop to the outside at the wrist, relaxing your fingers as you do. This is the Dog-ears position.
Notice that your forearm becomes more relaxed simply by changing your wrist position. Notice too that with your hand in an outwardly pitched position, when your hand lowers into the water, it is positioned perfectly to grab a handful of water. Step 4 : - Continue swimming backstroke with a dog-eared recovery. Increase your stroke rate. Grab a handful of water with each entry, hold on to it, and move your body past your hand. My wrist isn't relaxed. Try wiggling your fingers during your recovery, and flopping your hand back and forth as if it had no bones. I can't see if my hand is still dog-eared at entry. Try it on land until you develop the feeling. Use a mirror. Close your eyes, extend your arm over your head using what you think is a dog-eared hand. Then open your eyes to check. If it is not, make the correction while looking in the mirror. Take time to feel what muscles are involved. Try again with your eyes closed and re-check. The back of my hand slaps the water in the dog-eared position. Remember to first position your hand so that your pinkie enters the water first, and then add the dog-eared hand position. Without your face in the water, it would seem that breathing drills for backstroke are a low priority. The opposite is true. Because rhythmic breathing is an essential part of sustaining any swimming stroke, learning to develop a good breathing rhythm in the backstroke is a top priority. In addition, because the power of the backstroke kick is upward, against gravity, frequent oxygen exchange is required. Many swimmers dislike backstroke because water gets in their face. Learning to time the breathing to the rhythm of the waves of the stroke makes the backstroke much more enjoyable. The goal of the following backstroke breathing drills is to develop rhythmic breathing and a more comfortable, productive backstroke. Step 1 : - Stand with your hand on your abdomen, between your breastbone and your belly button. Close your eyes and breathe normally.
Feel the rhythm of your breathing as you inhale and exhale. Now, maintaining the same rhythm, breathe as if you were in the water, inhaling through your mouth and exhaling through both nose and mouth. Step 2 : - Now, in the pool, push off the wall preparing to do the backstroke, spine straight, good water-line, productive kick, locked elbows, hands pitched out. Try to duplicate the breathing rhythm you did on land. Swim at a rate so that your inhale happens as one arm recovers, and your exhale happens when the other arm recovers. Continue for several lengths of the pool, resting at the end of each length. Step 3 : - Next, try a quicker stroke rate. Match your breathing to your quicker stroke rate, inhaling on one arm, and exhaling on the other. Notice that, with a quicker stroke rate, this breathing rhythm doesn't work as well. There is not time to fully inhale or exhale. Change your breathing rhythm to inhale with one full stroke cycle (one stroke with each arm), and then exhale on the next stroke cycle. Notice that your breathing rhythm more closely resembles your normal breathing rhythm using this timing. Continue for several lengths of the pool, resting after each length. Step 4 : - Now, try a stroke rate that is very relaxed and easy, perhaps resembling warm-up speed. Inhale with one stroke cycle, exhale with the next. Notice that, at this slower stroke rate, this breathing rhythm doesn't work as well. The exchange of air is not frequent enough. Change your breathing rhythm so you are both inhaling and exhaling with each arm recovery, doing the same with the next arm. Notice that the breathing rhythm is now more normal. Continue for several lengths of the pool, resting after each length. As I get tired, my breathing gets irregular. Maintain a regular breathing rhythm as long as you can, then rest and try it again. As you practice more, you will be able to hold on to the rhythm for longer. I can't maintain the rhythm when water gets in my face.
It is hard to get used to water washing over your face, but with practice, and increased momentum, you will notice a pattern to most of the splashes. This will help with your breathing timing. When I go faster, I need more air, so I breathe more frequently. If inhaling with one arm and exhaling with the other arm isn't frequent enough, try inhaling and exhaling with each arm. If that is not frequent enough, slow down your stroke rate, and try to make more forward progress with each stroke in order to go faster. Step 1 : - Push off the wall preparing to do the backstroke, spine straight, good water-line, productive kick, locked elbow recovery, dog-eared hand. Focus on rolling into and out of each stroke, feeling opposition. Take several strokes, then freeze at the point that one arm is in mid-stroke underwater, and the other arm is in mid-recovery. At this point, the armpit of your recovering arm should be completely out of the water, and the shoulder of the other arm should be at its lowest point under the water. Step 2 : - Observe that in this position and point in the stroke, the raised armpit side forms a barrier to the water that would otherwise be in your face. Resume stroking. Notice that this barrier is only in place for a very brief moment. This is your breathing pocket. It is your chance to inhale without water going in your face. Step 3 : - Swim backstroke again, exaggerating your roll to make the breathing pocket very clear. Try to find the breathing pocket on each side of your body. Continue swimming backstroke inhaling in your breathing pocket with each stroke. I still get water in my face. Work on maintaining your spine straight and a good water-line. Make sure your kick is productive without any bicycling action. Try rolling more. Water still goes over my face when I exhale. That is fine. As long as you are exhaling through your nose and mouth, water won't go in. I have a better breathing pocket on one side.
This indicates that you are rolling more on one side than the other. It would be best to work on developing a symmetrical roll so that you have equal breathing pockets, and have more choices in changing your breathing pattern. Leverage in the backstroke adds potential power to the stroke, increases the range of motion at the beginning of the stroke, and enables the swimmer to sustain the stroke longer. It is achieved, as in freestyle, through the side-to-side rolling action of a unified core. The goal of the following leverage drills for backstroke is to learn to incorporate a productive roll into the stroke. Step 1: Push off the wall preparing to do the backstroke, straight spine and good water-line. Perform three strokes (one arm = one stroke), accompanied by a productive kick. As your arm approaches your hip on the third stroke, float and kick in that position, with the arm that just finished its underwater stroke at your side, the other arm fully extended. You should not be floating flat in the water, but instead you should be mostly on your side with the arm at your side closer to the surface, and the arm extended over your head lower in the water. Your face should remain out of the water the whole time. Do six good kicks in this position (one leg = one kick). Step 2: With your sixth kick switch floating sides, by rolling toward the other hip and shoulder, but remaining face up. This roll should initiate three more strokes by bringing the high hip and shoulder down, and the low hip and shoulder up, while the arm at your side recovers over the water to the front, and the reaching arm begins its underwater stroke. Keep a steady kick through the whole process. Step 3: Continue doing three strokes, kicking the whole time, and then six kicks in the side position until you reach the far end of the pool. Notice that your switch affects both arms at the same time. Step 4: Repeat the drill, this time using three strokes and three kicks.
As you become more comfortable, try to incorporate the switch between each stroke. Continue to the far end of the pool. Step 5: Once you are able to roll productively with each stroke, begin swimming regular backstroke rolling into and out of each stroke, as you did in the drill. Practice for several lengths of the pool. Be sure your leading arm is aligned at 1:00. You can swim next to a lane line, or if you are at an indoor pool, watch the lines on the ceiling to go straight. I don't roll as much in the three strokes as I do changing to the kick only phase. Slow down your strokes. Allow time to roll. Use both hips and shoulders to roll. I am struggling to keep my face up during the kick only phase. Try using a quicker kick rhythm. Kick up towards the surface with more force. It is important to maintain your momentum during the kick only phase. Step 1 : - String a rope from one end of the pool to the other, attaching it securely to the lane rope hooks in the side of the pool. The rope should be fairly tight, floating just below the surface of the water. Step 2 : - Lie in the water on your back, one arm extended over your head, spine straight, good water-line, the rope just outside your extended arm. Grasp the rope with the hand of your extended arm. From this anchor point, pull your body past your hand, with a totally straight arm, moving your arm across the surface of the water, until the hand holding the rope is by your hip. Notice that as you move forward, no matter how much core stability you have, your legs swing toward the rope. Notice as well, that your shoulder is doing all the work. Step 3 : - Grasp the rope again with your arm extended over your head. This time, drop your elbow down, lowering it into your ribs as you attempt to move your body past your anchored hand. Notice that it is very difficult to move your body forward with your elbow low and moving. Step 4 : - Again, grasp the rope with your arm extended over your head.
This time, pull your body up so that your shoulder is at the same level as your hand. Allow your elbow to bend up to a right angle, but remain about parallel to your shoulder and hand. From this point, straighten your arm. Keep your elbow firm as your body moves past your anchored hand. Notice that by keeping your elbow firm you have more leverage to move yourself forward. It is also easier and more productive than the other two ways that you tried. Step 5 : - Continue the one arm rope climb using a high, stable elbow, and an accelerating arm action. When you reach the far end of the pool, switch arms. Practice several times with each arm. Once you are able to clearly feel yourself move past your anchored hand, begin swimming regular backstroke, anchoring your hand in the water, and again move your body past that point with each stroke. I end up under the rope. Align the hand grabbing the rope with the outside of your shoulder, and keep it outside of your shoulder the entire time. It puts strain on my shoulder to pull with my arm straight. Exactly. This part of the drill is designed as a contrast to doing the arm stroke correctly. It shows that the straight arm pull over-taxes the shoulder, whereas the high-elbow, bent-arm stroke allows you to stroke comfortably without pain. I am not producing a glide. Make sure your elbow is positioned high from the start. Move your body past your hand and elbow. Accelerate through to the end of the stroke. Step 1 : - Push off the wall on your back, with both arms at your sides. Kick productively. With your right arm, do one complete backstroke, bringing your arm over the water in a fully aligned arch, then stroking through to your hip. When your right arm returns to its starting point, do the same thing with your left arm. Continue for twelve strokes.
Notice that the forward motion you produce is not continuous, so that when your left arm is at your side and your right arm is in the air recovering, nothing is moving you forward except your kick. Notice too that it is difficult to roll into and out of your stroke, accessing core leverage. This is an example of backstroke without opposition. Step 2 : - Now, push off the wall with both arms extended over your head, preparing to do the backstroke, spine straight, good water-line, productive kick, locked elbows, hands pitched out. Establish opposition by doing an underwater stroke with one arm, while the other remains over your head. Roll toward the side of your body with the arm extended, so that side of your body becomes lower in the water. Now, begin swimming backstroke, focusing on maintaining opposition. Step 3 : - Swim six strokes at a good strong pace. On the next stroke, freeze when your right hand has entered the water. Notice where your left hand is. It should be at the opposite extreme of the stroke, exiting the water past your hip. Step 4 : - Continue swimming backstroke. After six strokes, freeze again as your right hand transitions from pull to push. Notice where your left hand is. It should be opposite, in the middle of the recovery. Step 5 : - Continue swimming backstroke. After each six strokes, freeze at a different point in the stroke, checking your opposition. My arms don't stay opposite. Focus on the position with one arm extended over your head, and the other arm at your side. Use that as your home base. Find that position between each stroke, before starting the next one. If I don't bring both arms down when I push off, I can't get my face out of the water. Use a quicker kick to get you most of the way to the surface, then stroke strongly with one arm to get into a swimming position with your face up. Practice this. It is very important. 
Step 1: Push off the wall preparing to do the backstroke: straight spine, good water-line, productive kick. Establish opposition. Roll into and out of each stroke.

Step 2: As your recovering arm approaches the top of the arch over the water, lift your armpit out of the water by rolling your opposite shoulder down lower into the water. Keep your elbow locked, and your hand aligned above your shoulder. When your recovering arm approaches the entry point beyond your shoulder, allow your armpit to submerge again.

Step 3: Recover with the other arm. Notice that when you lift the armpit at the top of the recovery arch, it affects the underwater stroking arm. Correctly balanced, lifting your armpit helps your other arm to transition from the pull phase to the push phase of the stroke, and to produce important arm speed at the finish of the stroke.

Step 4: Continue stroking, and with each recovery, lift your armpit out of the water during the highest part of the arch. Continue for several lengths of the pool until you feel that the action of lifting your armpit balances the actions of the stroking arm. It should feel that the arms are working together, yet opposite.

I sink when I lift my armpit. It sounds like your arms are not balancing each other. When you are lifting the armpit of your recovering arm, the other arm should be bent at the elbow at approximately a right angle, your elbow high and stable. That hand should be gaining speed and beginning to push towards your feet. In any other relationship to each other, your stroke will not benefit from the armpit lift.

My feet fish-tail. It is important for your recovering arm to remain aligned with its exit point and same side shoulder. Avoid entering the water with your hand at 12:00. Over-reaching like this will disrupt your alignment, causing you to fish-tail.

This seems to make my stroke pause. Think of it as a transition instead of a pause.
Both the recovering arm and the underwater arm are moving at the point of the armpit lift. There should be no pause, just a feeling that the arms are balancing each other and are unified in their action.

Coordinated backstroke unifies the individual actions of the stroke into a seamless effort forward. With each part working together, the backstroke becomes easier, smoother, and more comfortable, as well as more productive. The goal of the following coordination drills is to bring together the many elements that contribute to a good backstroke and to use them in combination for efficient backstroke action.

Step 1: Push off the wall on your back, both arms extended, preparing to do the backstroke: straight spine, good water-line, productive kick. Swim to the other end of the pool focusing on identifying the pull phase and the push phase in your backstroke arm stroke.

Step 2: Swim another length of backstroke, this time focusing on identifying the roll, or switch, from one side of your body to the other.

Step 3: Now, swim a length of backstroke timing the pull phase of the underwater arm stroke to begin as you switch toward the underwater arm. Then, with the same arm, time the push phase of the stroke to begin as you switch away from the underwater arm.

Step 4: Swim several lengths of the pool, focusing on the timing of roll, pull, roll, push. Concentrate on one arm at a time. Once coordinated with the first arm, concentrate on the other arm to see if your timing also follows roll, pull, roll, push.

Step 5: Practice until you can increase your stroke rate and still maintain this timing. This is the ideal backstroke coordination.

This timing helps my pull but not my push. Be sure you are switching with your hips as well as your shoulders in a unified action. Without your hips rolling too, your push will not benefit.

This timing helps my push, but not my pull.
Be sure your recovering arm is aligned with your shoulder so it has the range of motion to descend twelve inches into the water at entry with the roll. Without this depth, the pull will not be of benefit.

I lose the timing when I increase my stroke rate. Return to a slower pace and re-establish the roll, pull, roll, push sequence, then build your stroke rate back up gradually while maintaining the timing.

Step 1: Push off the wall on your back, both arms extended, preparing to do the backstroke: straight spine, good water-line, productive kick. Swim backstroke to the other end of the pool focusing on your hand at the end of the push phase of the arm stroke, just before recovery. Notice that your thumb is closest to your body as it finishes the push, and the pinkie is to the outside. In this position it is the back of the hand that exits first for recovery, creating a large hole in the water and lifting heavy water upward into the recovery.

Step 2: To avoid this, traditionally, most swimmers have been taught to flip their hand to a thumb-up position to exit the water, then flip their hand again during recovery to achieve a pinkie-first entry at the end of recovery. Although the thumb-lead position creates less drag than exiting with the back of your hand, it requires the additional action of flipping your hand over and then back again. This action is done independently from the roll, making the arm work harder. In addition, if not done early enough in the recovery, the second hand flip can contribute to shoulder pain. Practice the thumb-first recovery, initiating the flip to pinkie-leading before your hand reaches the top of the recovery arch.

Step 3: The alternative to flipping your hand before and again during recovery is to allow your roll to position your hand for recovery.
Swim backstroke to the other end of the pool, feeling core stability that unites your shoulders and hips. Focus on the shoulder of the arm that is finishing the push phase of the underwater stroke. As the hand finishes the push, deliberately pop the same side shoulder out of the water just as your hand passes your hip. Notice that as you do this deliberate movement with your shoulder, your same side hip will also rise. Practice to the far end of the pool.

Step 4: Now, swim another length of backstroke, focusing on popping your shoulder out of the water, and allowing your same side hip to rise at the point when your same side hand is finishing the push. Freeze in that position. Observe that, having rolled, it is the pinkie that is positioned to exit the water first for recovery. With the pinkie leading the recovery from start to finish, there is no need to flip the hand over and back, or to make sure the second flip is done early in the recovery to avoid shoulder pain.

Step 5: Repeat this drill, maintaining unified hip and shoulder action. With each stroke, deliberately pop your shoulder out of the water as your hand finishes the push. Feel your hip rise as your hand is finishing its push. Then, as the hip continues its switch, let it lift your hand out of the water in the same position the hand finished the push, pinkie leading. It is just less work than a thumb-lead recovery.

To accomplish a pinkie-leading recovery, I have to roll more. Exactly. Maximize the benefit of your roll. Let it do the work of lifting your hand out of the water. Rest all you can during recovery.

With the pinkie leading, the recovery seems to be opposite to the way I was taught. Yes, most of the teaching theory for swimming is based on using a flatter stroke, which depends more on arm and leg power alone.
Incorporating leverage from the core changes the dynamics of swimming in many ways, including the under-recognized opportunity to let your roll action position your hand for recovery, pinkie leading.

So, a thumb-first recovery is fine as long as I flip my hand over early in recovery. It is a widely accepted practice to exit thumb-first. However, flipping your hand early in the recovery is worth trying, if only as a preventive measure against the shoulder pain that affects many backstrokers.

Step 1: Fill a medium plastic cup half-way with water. Gently balance the cup on your forehead as you lie horizontally in the water, face up, with a straight spine and good water-line. Once the cup is balanced, position both arms extended over your head. Kick productively.

Step 2: Begin the underwater stroke with one arm, rolling smoothly into the stroke, while your head stays still, centered and neutral. Keep the cup of water balanced on your forehead. When the first arm passes your hip, begin stroking with the other arm. If the cup falls off, stand and start again.

Step 3: Practice until you can swim smoothly for several strokes without the cup falling off your forehead.

Step 4: Swim backstroke again, with the cup on your forehead, increasing your stroke and kick rate, and maintaining your stable, neutral head position. Roll smoothly from side to side with your hips and shoulders united. While your arms are opposite, feel the movement of both of them initiated at once by a single action of the core. This is ultimate backstroke.

The cup falls off right away. Work on relaxing your neck muscles, so your head does not move with your shoulders. It should feel like your head is separate, floating ahead of your body.

The cup falls off when my hand enters the water. Be sure you are not doing a wide kick at this point to help your arm enter the water with speed.
You also might be entering the water with the back of your hand, which makes a big splash, rather than slicing the water smoothly by entering pinkie-first.

The cup falls off when I am recovering. If your recovery is aligned over your face, it will drop water onto the cup on your forehead, knocking it off. Practice aligning your recovery over your shoulder, and aim for a 1:00 entry.

Step 1: Push off the wall for backstroke, achieving a straight spine, good water-line, and productive kick. Swim backstroke to the other end of the pool. Focus on the opposition of the arm stroke. When one arm is under the water, the other arm is over the water. When one arm is in mid-stroke under the water, the other arm is in mid-recovery.

Step 2: Now focus on the point when one arm is starting the stroke, and the other arm is finishing the stroke. This sounds opposite. As you swim, feel what is really happening. As one arm is entering the water and descending about twelve inches downward, the other arm is still pushing past the hip. Both arms are in the water at the same time. Both are in the power phase of the stroke at the same time. For a moment, the arms are actually not opposite.

Step 3: Continue swimming backstroke, feeling this moment of overlap. Feel how the roll of the stroke assists both arms, one to start the arm stroke, the other to finish the arm stroke. Use this moment to start your stroke with power, and to finish your stroke with power. Practice until you can feel both arms engaged in the power phase of the stroke at the same time, at opposite ends of the stroke.

Step 4: Kick to assist both the beginning and the end of the stroke. Feel the balance of the stroke shift from the finishing arm to the starting arm. Opposition is reset as the beginning arm sweeps into its anchored position at the same time as the other arm clears the water. Practice for several lengths of the pool.
Develop a continuous stroke, using the moment of stroke overlap to access more power than with one arm at a time.

I don't feel the overlap. Make sure your entry hand is descending down into the water, and not stroking from the surface of the water. Also, make sure you are finishing the stroke by pushing all the way past your hip. The overlap happens when these two actions occur simultaneously.

I am flat at this point. Make sure your arms are in continuous motion, and not stopped, one extended and one at your side. You should be in transition, rolling toward the side with the arm entering, and away from the side with the arm finishing. Practice this transition to make more use of leverage from your core.

My finishing arm is already at my side. Stroke past the hip instead of into the hip. This will give you a longer stroke, establish the overlap, and create a continuous stroke.
Things you should know to build your own Computer?

An assembled PC is always preferred when you want to get the best out of your computer. You can easily customize the computer hardware according to your usage or needs. Assembled computers are available in the market, which leaves you the choice of going with the brands you want. If you have never built a computer, then I would suggest that you use Reboot Computer Repairs to help you. The best part is that an assembled PC always stays within your budget.

PC Case: The cabinet is the case that forms the external body of the CPU. The other computer parts will be housed in this box. You should choose a good, strong body for it; otherwise, the body will have too many dents after one year. Since the PC case protects the other devices that operate the computer, it should be chosen with care.

Power Unit: It manages the power supply to all the devices connected to the CPU (central processing unit). You need to get a suitably powerful SMPS (switched-mode power supply). If you plan to assemble a PC for gaming, or are trying to build a high-end PC, then I would always recommend going for a good-quality SMPS.

Processor: This always depends on the kind of work you will be doing on the system. If you will be working with heavy software, such as playing high-end games or spending a lot of time editing photography with processing-intensive software, then you should go for processors like the Intel i7 or AMD FX. Otherwise I suggest the Intel i5, which is pretty decent when it comes to work like updating social platforms, working on documents or presentations, and browsing.

Motherboard: This should be bought a little carefully, as the motherboard should support the processor to give maximum performance. If you are going for an Intel processor, then go for Intel boards, especially if you have any confusion about the other options.
Look for the ports you wish to have on your computer, like DVI (Digital Visual Interface) or HDMI (High-Definition Multimedia Interface).

RAM (Random Access Memory): RAM provides working space apart from your permanent storage. It gives the dependent or supporting files space when any application or program runs on the computer. So this has to be bought according to your usage and the kind of software you work on. 4GB of RAM is fine for normal work.

Hard Disk: This is your storage drive, so if you have a lot of things to store, choose the hard disk accordingly. Normally 1TB of storage will do, but if you can, get 2 x 1TB, which will double your storage. If you have a low-end PC, then you should go for a maximum of 500GB of HDD, as the computer will be indexing the files and has to show files as you click, which may reduce performance.

Graphic Card: This is preferred if you are going for a gaming PC or your software needs more graphical power. It can be added when you think your processor may not perform well with a lot of graphics involved. A graphic card acts much like RAM, but instead of files it manages the graphical processes. It may be a little costly if you are going for a good one.

Optical Drive: This is needed to play a DVD or CD. You sometimes feel handicapped when you don't have one. I know the world is moving to the era of USBs, but DVDs are still available in the market.

Monitor: Go for an 18.5-inch monitor for normal use, unless your games need good visual effects. You can even go for a bigger screen if you like to work on big screens.

Keyboard and Mouse: Go for normal keyboards rather than being attracted to fancy keys. A wired keyboard and mouse are more durable than wireless ones, and you will have to keep changing the batteries if you go for wireless options.
Speakers: A 2.1 stereo setup is recommended for connecting to computers.

Operating System: I recommend you go for open-source operating systems like Ubuntu, Fedora, Linux Mint, etc., as these may not cost you anything at all. If you go for Microsoft Windows, you need to buy a licensed copy from the market, plus any required Microsoft Office software licences, which will boost the overall price of the machine.

These are a few things you need to consider while building your own computer. If you need to repair, maintain, or upgrade your computer, you can get the best service from PC Revive. I hope you like this article; let me know if you have any doubts.