Over the course of the 20th century, California grew at a rate surpassing even state boosters' most breathless predictions. In the 1920s and 1930s, the oil, agriculture, and entertainment industries attracted millions of people to southern California, which overtook northern California as the economic engine of the thriving state. World War II further transformed California as emerging aerospace and shipping industries brought millions more workers of varied geographical and cultural backgrounds into the state. Migration actually sped up after the war's end. In 1962, California passed New York as the nation's most populous state. By the turn of the 21st century, California laid claim to the world’s fifth largest economy and a population of nearly 34 million. As the state's housing, transportation, health care, and social service infrastructures struggled to keep pace with this phenomenal growth, many California cities had begun to suffer from poverty, pollution, and racial strife.

The state's tremendous ethnic diversity created tensions as well as opportunities for cross-cultural collaboration. Throughout the 20th century, California's millions of Native Americans, African Americans, Asian Americans, and Hispanic Americans strove for economic security, political equality, and social change.

The number of Native Americans living in California rose steadily after 1900, reversing the appalling decline of the previous century. Much of this increase was a result of federal job training and relocation programs that encouraged Indians from other states to move to California. In 1965, fewer than 10 percent of the state’s 75,000 Native Americans lived on rural reservations. Those who did comprised California’s most disadvantaged group, with higher unemployment rates than any other minority. Urban Indians fared better but still experienced limited educational and employment opportunities.
Beginning in the 1960s, Native Americans in California formed pan-Indian organizations such as the American Indian Historical Society, California Rural Indian Health Board, and California Indian Education Association to advocate for native rights. A group of activists called Indians of All Tribes occupied Alcatraz Island in the San Francisco Bay from 1969 to 1971, part of a nationwide Native American social justice movement that continues today. In the early 1980s, the Cabazon and Morongo Bands of Mission Indians began offering card games and bingo on their reservations, setting off a controversy over gaming that would culminate in a 1987 US Supreme Court decision affirming Native Americans' right to build casinos on reservation lands. By 2005, there were 55 Native American casinos in California bringing in a total annual income of more than $3.5 billion. These revenues have dramatically changed the economic, political, and social landscapes of California's native peoples. Because only groups that have been federally recognized as official tribes can build casinos, an enormous financial gulf now separates recognized and unrecognized native groups. Those groups with federal recognition enjoy considerable political clout, while members of unrecognized groups continue to suffer from joblessness, under-education, and poor living conditions. Money from gaming has also generated controversies over Indian identity as federally recognized groups have been forced to reconsider who can and cannot claim membership. Gaming has brought new wealth to some of California's Native Americans, but it has also brought new divisions, contentions, and self-definitions.

African Americans first moved to California in large numbers during World War II to work in shipping and other war industries. Chester Himes's 1945 novel If He Hollers Let Him Go, set in wartime Los Angeles, exposed the racial discrimination many black migrants faced in their new home.
Racist real estate policies limited African Americans' ability to move out of segregated urban neighborhoods, and discrimination restricted their access to skilled and professional jobs as well as higher education. In 1965, anger and desperation turned into violence in the Los Angeles neighborhood of Watts, triggered by rumors of police brutality. The violence in Watts — the most destructive urban uprising in US history at that time — lasted a week, involved more than 10,000 Angelenos, and left at least 34 people dead. The underlying causes of the Watts uprising — including underemployment, poverty, segregation, and police harassment — persisted almost 30 years later. In 1992, violence erupted again in south central Los Angeles after the acquittal of several white police officers for the beating of motorist Rodney King. This recurrence of urban violence surprised many who believed the Civil Rights movement had improved conditions for California's African Americans.

In fact, the 1970s, 1980s, and 1990s did see the emergence of a sizeable black middle class in the state, a result of activism, entrepreneurship, and government programs. A number of African American politicians won important offices, including Thomas Bradley (five-term mayor of Los Angeles), Willie L. Brown, Jr. (two-term mayor of San Francisco), and US representatives Augustus Hawkins, Yvonne Brathwaite Burke, Ronald Dellums, Julian Carey Dixon, Mervyn Dymally, Barbara Lee, Juanita Millender-McDonald, and Maxine Waters. Ironically, as upwardly mobile African Americans have moved from the urban centers of Los Angeles, Oakland, and San Francisco into suburbs and the Central Valley, their ability to elect black politicians has actually weakened. Their voting bloc has scattered, while Asian Americans and Hispanic Americans have grown in numbers.
California's Asian American population remained small until 1965, when federal officials changed immigration policy to allow migration from Asia after 40 years of exclusion. In the decades before 1965, those Asian immigrants in California were considered "aliens" and barred from citizenship due to their race. Furthermore, a 1913 state law forbade Japanese Americans from owning land or leasing it for more than three years. After the 1941 Japanese attack on Pearl Harbor, the federal government rounded up and relocated 93,000 Californians of Japanese descent in the name of national security. Most were confined to relocation camps for more than two years despite never being convicted — or even formally accused — of a crime. Once released, many Japanese Americans found themselves destitute, stripped of their houses and possessions. Recognizing the injustice of the relocation campaigns, the US Congress made partial reparations to Japanese Americans in 1948 and again in 1988. But the stigma of being labeled national enemies simply because of their race lingered.

After 1965, immigration to the United States from Asia and the Pacific skyrocketed, with California as the prime destination. By 1990, 40 percent of all Asian Americans in the country lived in California, numbering about 3 million. California became home to thriving immigrant communities from China, Japan, the Philippines, Korea, Vietnam, Cambodia, Laos, Hong Kong, Thailand, and Pacific islands. Asian Americans were most heavily concentrated in the San Francisco Bay Area, with large numbers also living in Los Angeles, Orange, San Diego, Fresno, Sutter, Yuba, and Sacramento counties. According to 2000 census data, a higher percentage of Asian Americans went to college than any other California group, but their per capita income still lagged significantly behind that of whites ($22,000 versus $31,700), revealing the persistence of anti-Asian prejudice in hiring.
The growing political strength of Asian Americans has only begun to be exercised, thus far manifested in the elections of S. I. Hayakawa, Robert Matsui, and Norman Mineta to the US Congress, and Matt Fong as California State Treasurer.

The largest minority group in California during the 20th century was Hispanic Americans, most prominently Mexican Americans. One-half million Mexicans migrated to the United States during the 1920s, with more than 30 percent settling in California. Mexican Americans soon made up the bulk of the labor force in many unskilled and semi-skilled industries, including agriculture, railroads, manufacturing, and domestic service. Many immigrants lived in segregated urban barrios, such as east Los Angeles, where they forged new identities as Mexican Americans. Two incidents in Los Angeles during World War II revealed the city’s rampant anti-Mexican prejudice. In January 1943, 17 Mexican American youths were convicted of murdering a boy whose body had been found in a reservoir known as Sleepy Lagoon. The racist bias of the judge and prosecution was so blatant that the Sleepy Lagoon case attracted the sympathy of people around the country before being overturned by an appellate court. Meanwhile, long-simmering tensions between white servicemen and Mexican American "zoot suiters" (so named for their jaunty clothes) turned into a week-long race riot in June 1943. Mobs of white sailors, soldiers, and marines assaulted Mexican-American teenagers and tore off their clothes, but local newspapers portrayed the boys as the aggressors. Even the name of the incident — the Zoot Suit Riots — placed the blame on Mexican Americans. In the decades after World War II, Latinos in California grew in numbers and political strength.
In 1966, César Chávez and Dolores Huerta helped found the United Farm Workers, a labor union aimed at organizing migrant farm workers — mostly Mexican Americans — who had for decades endured perilous working conditions, low pay, and no job security. The labor activism of Chávez and Huerta comprised one arm of the La Raza movement, which endeavored to expose and overturn the discrimination in employment, housing, and education that Hispanic Americans faced in California. The La Raza movement included muralists, poets, entrepreneurs, politicians, and labor organizers within its ranks, and people with roots in Cuba, Puerto Rico, and other parts of Latin America in addition to Mexico. Despite this tradition of activism, Hispanic Californians have had trouble winning public office. But the recent elections of Cruz Bustamante (Lieutenant Governor) and Antonio Villaraigosa (Los Angeles mayor) suggest the arrival of a powerful political presence. Ongoing controversies over illegal immigration and bilingual education have mobilized California's Hispanic population even as they have exposed internal class and cultural divisions.

The 2000 census produced a snapshot of California today. The state is more multicultural than ever before, with 32 percent Hispanic Americans, 11 percent Asian Americans, 6.7 percent African Americans, and 1 percent Native Americans (330,657, more than any other state). A full 26 percent of Californians were born outside the United States, and 39.5 percent of adults reported that they spoke a language other than English at home. For the first time, census respondents were allowed to mark more than one race to identify themselves, and more than 1.5 million Californians did so, a small sign of the blending of cultures that has marked California for centuries. Though riven by fault lines and marred by continuing inequality, California continues to symbolize opportunity and hope for millions.
It is a product of the interwoven histories and cultures of its diverse peoples.

"1921-present: Modern California - Migration, Technology, Cities" was written by Joshua Paddison and the University of California in 2005 as part of the California Cultures project.
For table color fills, we can summarize four approaches. First, fill colors according to the hierarchical level of the data: in the figure below, the headings "one, two, three, four" belong to the same level and share a purple fill. Second, use color to emphasize key data: in the figure below, the units and figures for tasks completed beyond target are filled with purple for emphasis. Third, alternate the fill color of rows to improve readability: when a table has many rows, alternating colors make it easier to scan. Fourth, apply a color fill to areas such as the table header and footer, as shown in the figure below.
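The four patterns above also lend themselves to simple automation. As an illustration (not part of the original tutorial, and using arbitrary color values), here is a small stdlib-only Python sketch that renders a table as HTML with a filled header row, alternating body-row fills, and emphasized "key" cells:

```python
# Sketch of the tutorial's fill patterns applied programmatically.
# All color values are arbitrary illustrations, not the tutorial's.

HEADER_FILL = "#7030A0"   # purple header/footer fill (pattern 4)
STRIPE_FILL = "#E6E0F0"   # light stripe for alternating rows (pattern 3)
PLAIN_FILL  = "#FFFFFF"   # un-striped body rows
KEY_FILL    = "#D8BFD8"   # emphasis fill for key cells (pattern 2)

def render_table(rows, key_cells=()):
    """Render rows (a list of lists) as an HTML table with a filled
    header row, alternating body-row fills, and emphasized key cells,
    given as a set of (row, col) index pairs."""
    out = ["<table>"]
    for r, row in enumerate(rows):
        # Header row gets the strong fill; body rows alternate stripes.
        row_fill = HEADER_FILL if r == 0 else (STRIPE_FILL if r % 2 == 0 else PLAIN_FILL)
        cells = []
        for c, val in enumerate(row):
            fill = KEY_FILL if (r, c) in key_cells else row_fill
            tag = "th" if r == 0 else "td"
            cells.append(f'<{tag} style="background:{fill}">{val}</{tag}>')
        out.append("<tr>" + "".join(cells) + "</tr>")
    out.append("</table>")
    return "\n".join(out)

data = [
    ["Unit", "Target", "Actual"],
    ["North", 100, 120],   # over target, emphasize the Actual cell
    ["South", 100, 90],
    ["East", 100, 105],    # over target, emphasize the Actual cell
]
html = render_table(data, key_cells={(1, 2), (3, 2)})
print(html.splitlines()[1])  # header row, filled with HEADER_FILL
```

The same row/cell selection logic carries over directly if you drive PowerPoint itself, e.g. with a library such as python-pptx, by setting each cell's fill instead of emitting inline styles.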
It’s ok to help
Effects of free, cued and modeled reflection on medical students’ diagnostic competence. Ibiapina C et al. Medical Education 2014;48:796-805
Reviewed by Rebecca Tenney Soeiro
What was the study question? Does additional instructional guidance increase the benefits of reflection on medical students’ clinical reasoning? The effects of free, cued, and modeled reflection on learning were compared. Free reflection required students to complete a table of the possible diagnoses for a clinical case, with a column for the findings that support each possibility, the findings that go against each, and the findings that would be expected but were not present. Cued reflection included the same table with the list of possible diagnoses already filled in, and modeled reflection included the table with all columns completed.
How was the study done? The study was done in Brazil. Two groups of students (one that had not yet completed an internal medicine clerkship and one that had) were given 8 different cases to diagnose under different experimental conditions: free reflection, cued reflection, and modeled reflection. Students had a learning phase in which they diagnosed cases under one of the three conditions. Learning was measured through tests requiring students to come up with a prioritized differential diagnosis for cases involving the same diseases at 30 minutes and 7 days after the learning phase. Mental effort was also rated by all students. Repeated-measures ANOVA was performed on mean scores for diagnostic performance, as well as on mental effort ratings in each phase.
What were the results? Students from the modeled-reflection and cued-reflection groups did not differ in their diagnostic accuracy scores, but both performed significantly better than the free-reflection group. Not surprisingly, a significant effect based on year of training was identified.
What are the implications of these findings?
Students learn more with less effort when studying correct structured reflection than when reflecting without any instructional guidance. Modeled and cued reflection may represent a useful educational strategy for clinical teaching. Editor’s comment: In some ways these findings are counter-intuitive; we sometimes think that students will learn more if they have to puzzle through a case on their own. It is useful to know that showing a student a structure for working through a case, with all of the parts filled in, actually helps them in a significant way when they move on to making diagnoses on their own (LL).
Motto: Latin: Fortis et liber ("Strong and free")
Confederation: September 1, 1905 (split from Northwest Territories; 8th/9th, with Saskatchewan)
Largest metro: Calgary Region
Government: Constitutional monarchy
Lieutenant governor: Salma Lakhani
Premier: Jason Kenney (UCP)
Legislature: Legislative Assembly of Alberta
Federal representation: Parliament of Canada
House seats: 34 of 338 (10.1%)
Senate seats: 6 of 105 (5.7%)
Area, total: 661,848 km2 (255,541 sq mi)
Area, land: 640,081 km2 (247,137 sq mi)
Area, water: 19,531 km2 (7,541 sq mi), 3%
Area rank: 6th (6.6% of Canada)
Population, total: 4,067,175
Population rank: 4th
Population density: 6.35/km2 (16.4/sq mi)
GDP, total (2015): CA$326.433 billion
GDP per capita: CA$78,100 (2nd)
HDI (2018): 0.940 — Very high (1st)
Time zone: UTC−07:00 (Mountain)
Summer (DST): UTC−06:00 (Mountain DST)
ISO 3166 code: CA-AB
Bird: Great horned owl
Rankings include all provinces and territories.

Alberta is a province in western Canada. It is bounded by the provinces of British Columbia on the west and Saskatchewan on the east, the US state of Montana on the south, and the Northwest Territories on the north. Alberta is the fourth largest Canadian province, with an area of 661,848 square kilometres (255,541 sq mi). Alberta has a population of about 4,067,175, making it the fourth most populous province in Canada.

History[change | change source]

Canada became a country in 1867. It was much smaller than it is now, and did not include the parts of the country to the west. From 1670 to 1870, parts of Alberta were included in "Rupert's Land," land owned by the Hudson’s Bay Company in support of its trading monopoly over a vast area of Canada and parts of the United States. The northern part of Alberta was part of what was then called the "Northwestern Territories." Alberta was made a province of Canada in 1905, at the same time as Saskatchewan.
The Aboriginal peoples of Canada are referred to as First Nations or by the name of their nation. People of mixed European and Aboriginal ancestry are called Métis.

Weather[change | change source]

Work in Alberta[change | change source]

There is diesel fuel in Alberta.

References[change | change source]

- "Population and dwelling counts, for Canada, provinces and territories, 2016 and 2011 censuses". Statistics Canada. February 2, 2017. Retrieved April 30, 2017.
- "Population by year of Canada and territories". Statistics Canada. September 26, 2014. Archived from the original on June 19, 2016. Retrieved September 29, 2018.
- "Languages Act". Government of Alberta. Archived from the original on May 2, 2021. Retrieved March 7, 2019.
- Dupuis, Serge (5 February 2020). "Francophones of Alberta (Franco-Albertains)". The Canadian Encyclopedia. Retrieved 30 September 2020. In 1988, as a reaction to the Supreme Court’s Mercure case, Alberta passed the Alberta Languages Act, making English the province’s official language and repealing the language rights enjoyed under the North-West Territories Act. However, the Act allowed the use of French in the Legislative Assembly and in court.
- "Gross domestic product, expenditure-based, by province and territory (2015)". Statistics Canada. November 9, 2016. Retrieved January 26, 2017.
- "Sub-national HDI - Subnational HDI - Global Data Lab". globaldatalab.org. Retrieved June 18, 2020.
- There are also two larger parts of Canada called Nunavut and Northwest Territories. However, Nunavut and Northwest Territories are territories, not provinces.
Other websites[change | change source]

Find more about Alberta at Wikipedia's sister projects:
- Definitions from Wiktionary
- Media from Commons
- News stories from Wikinews
- Quotations from Wikiquote
- Source texts from Wikisource
- Textbooks from Wikibooks
- Learning resources from Wikiversity

- Wikimedia Atlas of Alberta
- Government of Alberta website
- Alberta at the Open Directory Project
- Provincial Archives of Alberta website
- Travel Alberta
- Alberta Encyclopedia
- CBC Digital Archives—Striking Oil in Alberta
- CBC Digital Archives—Electing Dynasties: Alberta Campaigns 1935 to 2001
- CBC Digital Archives—Alberta @ 100
Experience America's Best Idea National Park Getaways A New National Park Getaway Every Wednesday Grand Canyon National Park Colder temperatures, shorter days, and snow bring a slower pace to one of the nation's most visited national parks. Winter visitors find paths less travelled throughout the park. Those prepared for ice and snow will find the Bright Angel Trail a bit quieter and scenic drives less congested. Dramatic winter storms, bringing several inches of snow, are contrasted with sunny days, perfect for walking along the rim or into the canyon. Crisp air and a dusting of snow bring a new perspective to the temples and buttes emerging from the canyon floor and provide a perfect backdrop to view the canyon's flora and fauna. Mule deer traipsing through fresh snow and bald eagles soaring above the canyon rims are just some of the wildlife spotted during winter. Many animals slow down for the winter and are seen less frequently, but there is still a chance to see elk, California condors, ravens, and Abert's squirrels along the rim and in nearby ponderosa pine forests. Most animals in the park have developed some sort of adaptation to the cold weather. Rock squirrels, frequently seen along the rim during summer months, spend the fall caching food and preparing for the cold winter. Although they spend much of the winter in their burrows, they can be spotted along the rim during warmer days. Mule deer and elk grow thick winter coats to deal with the low temperatures and the Abert's and Kaibab tree squirrels grow fur tassels on their ears to keep out the cold. Just like the animals who make their home at Grand Canyon, visitors should slow down and bundle up. Winter is a perfect time to enjoy a warm beverage along the rim, or view the canyon from inside the Yavapai Geology Museum, where panoramic windows provide crisp views from inside a warm building. The South Rim of the park is open year round, and roads are drivable except in inclement weather. 
Weather changes quickly at Grand Canyon, and so does visibility. Planning a visit for multiple days allows visitors to experience some of these changes, and provides a good chance for a great view of the canyon. Winter solitude blankets the North Rim of Grand Canyon, which is closed to vehicle traffic during the winter. Hikers prepared for a multi-day canyon adventure can walk from the South Rim to the North Rim for a winter camping experience in one of the most inaccessible locations in the country. Locations inside the canyon, like the Phantom Ranch lodging facility and Bright Angel Campground, offer mild temperatures in winter, and backcountry permits may be more easily obtained during the winter months than during peak hiking seasons. A trip to Grand Canyon can be a great winter getaway, especially with careful planning. The Grand Canyon Trip Planner is a great place to start. Pack your jacket and winter gloves, avoid the crowds, and come experience a Grand Canyon winter wonderland! By Becky Beaman, Interpretive Support Assistant, Grand Canyon National Park
HIV Update: Scientists Find Antibody That Combats 98% of HIV Strains Scientists at the National Institutes of Health (NIH) are getting closer to preventing the spread of HIV, or human immunodeficiency virus. Researchers at the NIH's National Institute of Allergies and Infectious Diseases (NIAID) have successfully identified an antibody from an HIV-infected patient that potently neutralized 98 percent of HIV strains, including 16 of 20 strains that are resistant to other antibodies of the same class. The antibody, named N6, could be further developed to potentially treat or prevent HIV transmission, the researchers said. According to research leader Mark Connors, M.D. of NIAID, the team observed the evolution of N6 over time to find out how it could potently neutralize nearly all HIV strains. By understanding this process, scientists hope to create vaccines that could help protect people from acquiring the virus. Identifying neutralizing antibodies had been challenging, as the virus is capable of rapidly changing its surface proteins to hide from the immune system. In 2010, scientists at NIAID's Vaccine Research Center (VRC) discovered an antibody called VRC01 that is capable of stopping up to 90 percent of HIV strains. Researchers from the California Institute of Technology conducted experiments on mice and found that mice with the neutralizing antibody VRC01 were infected by the virus but did not develop HIV disease. N6 works in the same way as VRC01, where it binds to a part of the HIV envelope called the CD4 binding site, preventing the virus from attaching to the immune cells. But N6 evolved a unique mode of binding, allowing it to tolerate changes in the HIV envelope, including the attachment of sugars in the V5 region of the envelope. This means N6 could provide a greater level of protection than VRC01-class antibodies.
Due to its potency, N6 could also offer stronger and more durable prevention and treatment benefits, and researchers could administer it subcutaneously (into the fatty layer under the skin), rather than intravenously like VRC01. In the United States alone, over 1.2 million people are living with HIV/AIDS, the Centers for Disease Control and Prevention (CDC) said.
Today we are sharing a Past/Present post of engraved memorials to Washington. After Washington’s death in 1799, many engravers and publishers rushed to create prints of Washington’s likeness for the grieving populace. Washington “had played such a central role in the extraordinary events of a quarter century that his death was an event of great emotional consequence in America, affecting the very identity of the nation… In honoring Washington the nation was honoring its own history” (Wick, 70). The memorial prints served to remember Washington for his accomplishments, great vision, and character, and to urge the populace to emulate the traits of their lost leader. The engravers surrounded their portraits with poems, allegorical elements, and symbols of the man’s greatness as well as the rich imagery found in mourning art- the obelisk, urn, angels, and weeping figures. The prints shown in today’s post are beautifully ornate calligraphic portraits of the lost leader. The earlier print is a calligraphic copper engraving, designed, written, and published by Benjamin O. Tyler, professor of penmanship, New York, 1817. It was engraved by Peter Maverick, a copper and steel engraver based in Newark. The print has a stipple oval portrait of Washington in the center of an elegiac poem, angels, and masonic symbols of a shining sun and a book held open by a square and compass. Above the portrait, two ovals encompass the words: ‘Tho’ shrin’d in dust, great WASHINGTON now lies, The Memory of his Deeds shall ever bloom; Twin’d with proud Laurels, shall the Olive rise, And wave unfading o’er his Honor’d Tomb.” and “To him ye Nations yield eternal FAME, First on th’ Heroic list enroll his Name; High on th’ ensculptur’d marble let him stand, The undaunted Hero of his native land.” Below his portrait reads: “Gen. George Washington departed this life Decr. 14th. 1799 AE 67. And the tears of a NATION watered his grave.” The second print is a calligraphic engraving by John I.
Donlevy, intaglio-chromographic and electrographic engraver. Originally issued by Donlevy in 1838, this is a second state published c. 1870. Within an elaborate oval penmanship border is a stipple-engraved portrait of Washington. The portrait is based on Stuart’s Athenaeum painting, with the bust done in swirls. The lettering is executed in the typical variety of styles with cursive flourishes, shadows, italics, and blackletter. Above the portrait is an engraved eagle, with laurel and the American flag in its beak and arrows clutched in its talons. Image on the Top: Eulogium Sacred to the Memory of the Illustrious George Washington, Columbia’s Great and Successful Son: Honored Be His Name. By Benjamin O. Tyler. Published by Benjamin O. Tyler, New York. Engraved by Peter Maverick. Line and stipple copper engraving, 1817. Image size 16 1/4 x 20 1/2″ plus margins. LINK. Image on the Bottom: Sacred to the Memory of the Illustrious Champion of Liberty, General George Washington; First President of the United States of America. By John I. Donlevy. Published by John I. Donlevy. Calligraphic and stipple engraving, 1838, (c.1870). Platemark 16 1/2 x 13 7/8″. LINK.
Daily Typewriter Poetry Daily Writing Inspiration Our writing quote of the day is brought to you by Li-Young Lee. Previous inspirational poetry writing quotes: Inspirational Poetry Quote “People who read poetry have heard about the burning bush, but when you write poetry, you sit inside the burning bush.” “Persimmons” by Li-Young Lee Enjoy one of my many favorite poems by Lee, “Persimmons”: by Li-Young Lee In sixth grade Mrs. Walker slapped the back of my head and made me stand in the corner for not knowing the difference between persimmon and precision. How to choose persimmons. This is precision. Ripe ones are soft and brown-spotted. Sniff the bottoms. The sweet one will be fragrant. How to eat: put the knife away, lay down newspaper. Peel the skin tenderly, not to tear the meat. Chew the skin, suck it, and swallow. Now, eat the meat of the fruit, all of it, to the heart. Donna undresses, her stomach is white. In the yard, dewy and shivering with crickets, we lie naked, I teach her Chinese. Crickets: chiu chiu. Dew: I’ve forgotten. Naked: I’ve forgotten. Ni, wo: you and me. I part her legs, remember to tell her she is beautiful as the moon. Other words that got me into trouble were fight and fright, wren and yarn. Fight was what I did when I was frightened, Fright was what I felt when I was fighting. Wrens are small, plain birds, yarn is what one knits with. Wrens are soft as yarn. My mother made birds out of yarn. I loved to watch her tie the stuff; a bird, a rabbit, a wee man. Mrs. Walker brought a persimmon to class and cut it up so everyone could taste a Chinese apple. Knowing it wasn’t ripe or sweet, I didn’t eat but watched the other faces. My mother said every persimmon has a sun inside, something golden, glowing, warm as my face. Once, in the cellar, I found two wrapped in newspaper, forgotten and not yet ripe. I took them and set both on my bedroom windowsill, where each morning a cardinal sang, The sun, the sun.
Finally understanding he was going blind, my father sat up all one night waiting for a song, a ghost. I gave him the persimmons, swelled, heavy as sadness, and sweet as love. This year, in the muddy lighting of my parents’ cellar, I rummage, looking for something I lost. My father sits on the tired, wooden stairs, black cane between his knees, hand over hand, gripping the handle. He’s so happy that I’ve come home. I ask how his eyes are, a stupid question. All gone, he answers. Under some blankets, I find a box. Inside the box I find three scrolls. I sit beside him and untie three paintings by my father: Hibiscus leaf and a white flower. Two cats preening. Two persimmons, so full they want to drop from the cloth. He raises both hands to touch the cloth, asks, Which is this? This is persimmons, Father. Oh, the feel of the wolftail on the silk, the strength, the tense precision in the wrist. I painted them hundreds of times eyes closed. These I painted blind. Some things never leave a person: scent of the hair of one you love, the texture of persimmons, in your palm, the ripe weight. "When you write poetry, you sit inside the burning bush." (Li-Young Lee) Early start for Day Six, Poem Six 🌞 What will you write? Tag @typewriterpoetry in your #typewriterpoetry posts for #NationalPoetryMonth 💖 #npm16 #npm2016 #typewriter #poetry #quote #dailyquotes #quotes #inspiration #inspirational #inspirationalquotes #writing #writer #writersofinstagram #poet #poetsofinstagram #art #artsy #artistic #words #wordporn #inspire #passion #creative #beautiful #love #artist #igwriters #igpoets Donate to Tupelo Press For #NationalPoetryMonth, I’m fundraising on behalf of Tupelo Press, a small literary publisher. Tupelo’s 30/30 Project is an all-year monthly round of writing a poem a day. Check out their website. You can read the collection of poetry I volunteered to write for the Tupelo Press 30/30 Project. I also encourage you to donate in my name to Tupelo Press. 
100% of proceeds go to their literary press, it’s tax-deductible, and you’ll be a patron of the arts! Enjoying the work I’ve been steadily producing? It’s for my upcoming art book, suck myself out the heart i give it back. Tag along for the ride! You can watch my artistic process unfold at blog.billimarie.com.
<urn:uuid:424f1b10-e75e-488c-88b4-16335dbc0328>
{ "dump": "CC-MAIN-2018-26", "url": "http://www.typewriterpoetry.com/2016/day-6-daily-inspiration-national-poetry-month/", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267861980.33/warc/CC-MAIN-20180619060647-20180619080647-00037.warc.gz", "language": "en", "language_score": 0.9122839570045471, "token_count": 1208, "score": 2.609375, "int_score": 3 }
Manufacturing, as we know it today, dates back to the late 1890s and early 1900s. It is essentially based on the efficient mass production of large volumes of goods, resulting in economies of scale and cheaper products. Return on capital investment is a determining factor. Most modern manufacturing is still firmly anchored in this model. Mass production is often based on subtractive manufacturing, where the production methods dictate the design of products for optimal workflow. Inputs, such as metals or plastics, are processed and shaped into oversized products through moulding, pressing or die-casting before layers of excess material are removed by machining (drilling, trimming, etc.) or other methods to get a finished result. Modern manufacturing, using CNC (computer numerical control) machine tools, has greatly improved the efficiency of such "destructive" processes, but it still produces large amounts of waste material that need recycling, and requires the cleaning and replacement of machining equipment. Fresh solutions drawing on new technologies are needed to face current and future industrial and economic challenges.
From making models to manufacturing
There is a prototype behind every manufactured product. From concept to function, models are needed to define the shape, feasibility, functionality and other parameters of a product, and to test potential customers' reactions. The introduction of CAD (computer-aided design) in the mid-1960s, followed by CAM (computer-aided manufacturing), and their use with CNC machine tools, greatly enhanced and speeded up the design and production of prototypes, and ultimately of manufacturing. A new process for making prototypes comparatively cheaply emerged in the late 1980s. Taking data from three-dimensional CAD drawings, a machine lays down very thin coats of material, usually plastics, in powder, liquid or resin form and hardens them to create a model rapidly, earning this technology the name of rapid prototyping. 
RP enabled companies to send designs of parts to their subsidiaries in different countries and continents instantly and have the parts reproduced locally, rather than having to ship them. Within a few years RT (rapid tooling) was introduced to create moulds quickly or to fabricate tools for a limited volume of prototypes. The first RT machines were expensive but cut down on the cost and time spent on making moulds, preparing tools or finishing incomplete models. This additive method eventually led to AM, enabling objects to be produced following the same process: adding nanometre-thick layers of various materials and using lasers to fuse them (a process also called sintering) or UV (ultraviolet) light to cure certain resins.
International Standards central to future of AM
IEC International Standards will be essential to the expansion of additive manufacturing. One area likely to expand is that of 3D printers: systems that use a wide array of electric and electronic components, including switches, relays, servo motors, ultraviolet lights and different types of lasers. Amongst the many IEC TCs (Technical Committees) preparing International Standards for such components is TC 76: Optical radiation safety and laser equipment. This is the leading body on laser standardization, including for high-power lasers used in industrial and research applications, and it will play an important role in AM's expansion. 3D printing/AM opens up new perspectives in manufacturing, in particular the cost-effective production of high-tech items or very complex products in relatively low volumes, in a single process and without long lead times. Another area lies in the costly manufacture and assembly of different parts which are often very small and made up of different materials. This is an important consideration in new technology sectors, such as aircraft or satellite production, where some parts are not needed in large volumes and have to be manufactured in complex steps. 
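The layer-by-layer deposition described above lends itself to simple sizing arithmetic. A minimal sketch, assuming a purely illustrative 60-micron layer thickness and 10 seconds per layer (hypothetical values for illustration, not figures from this article; real values vary widely by process):

```python
import math

def build_layers(part_height_mm, layer_thickness_um):
    """Number of deposition layers needed to reach a given part height."""
    return math.ceil(part_height_mm * 1000 / layer_thickness_um)

def build_time_hours(layers, seconds_per_layer):
    """Rough build-time estimate from a fixed per-layer cycle time."""
    return layers * seconds_per_layer / 3600

# Hypothetical 120 mm tall part, 60 micron layers, 10 s per layer:
layers = build_layers(120, 60)
print(layers)                                   # 2000 layers
print(round(build_time_hours(layers, 10), 1))   # 5.6 hours
```

Even this crude estimate shows why layer thickness dominates build time: halving the layer thickness doubles both the layer count and the estimated duration.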
"ATKINS: Rapid Manufacturing a Low Carbon Footprint", a Zero Emission Enterprise Feasibility Study from Loughborough University, UK (United Kingdom), gives several examples of potential environmental and economic benefits. For instance, it is not unusual to find 15 kg of expensive alloys being used to produce just 1 kg of high-value aerospace components. Excess material must be recycled, and waste and chemicals, such as shavings, contaminated lubricants and slurries produced in the machining process, have to be treated at substantial additional cost, consuming still more energy. By contrast, in a single procedure and using only the raw material needed, RM can produce highly complex items. These may include latticed microstructures and variable-density surfaces or cavities. When the process is complete, the part is removed and the excess material that has not been sintered is cleaned away; it can be reused almost entirely (in the case of metal sintering) or partially (in the case of polymer sintering). The volume of waste residue is cut drastically. Unlike conventional manufacturing, in which a multitude of machines and processes are required to cast, press, shape, trim and polish products, AM can create a wide assortment of items from the same device when similar material is employed – machines used for metal sintering cannot process plastics, for instance. No retooling is needed between tasks, only new 3D CAD data, resulting in much shorter lead times and significant production savings.
Hi-tech quickly at relatively low cost
Complex parts, such as a swirler (fuel injection nozzle) for gas turbines, have been produced in a single manufacturing step from a cobalt chrome alloy using a DMLS (direct metal laser-sintering) system from EOS (Electro Optical Systems). 
In spite of its complexity, the 10-15 cm swirler was made in a single manufacturing step and did not require complicated and costly machining or the welding of some 10 separate parts, always a potential source of weak spots and cracks. Aircraft manufacturers, such as Boeing or EADS (European Aeronautic Defence and Space), routinely use AM to manufacture more reliable aircraft parts and drastically cut production cost and weight. Scientists and students from Southampton University, UK, designed and made a 1.2 metre wingspan UAV (unmanned aerial vehicle) using 3D printers. SULSA (Southampton University Laser Sintered Aircraft), which has a range of 45 km and can fly at up to 140 km/h, was designed in two days and printed in five. This short lead time between design and manufacturing allows designers to test out new ideas and prototypes quickly. Among the benefits mentioned by the SULSA team are complete structural freedom for the designer, at no cost, since the complexity of the design has no impact on manufacturing costs; a parametric design that can be stretched or resized; and a complete separation of design and construction, with a "print where you need" possibility. AM is still a nascent technology, used mainly in high-tech or niche environments for the low-volume production of small to medium-size, complex parts, for which it is cost-effective. However, AM will find its way into mainstream manufacturing and will lead to mass production, opening the way to mass customization and on-demand production in many domains. AM could herald another industrial revolution. Speaking to the BBC in July 2011, Neil Hopkinson, a senior lecturer in the Additive Manufacturing Research Group at Loughborough University, said that AM "could make off-shore manufacturing half way round the world far less cost effective than doing it at home. 
Rather than stockpile spare parts and components in locations all over the world," he argued, "the designs could be costlessly stored in virtual computer warehouses, waiting to be printed locally when required.” Evidence of the growing success of AM can be found in a May 2011 report by Wohlers Associates, an independent consulting firm on new developments and trends in additive manufacturing. Wohlers indicates that "the compound annual revenue growth rate produced by all AM products and services was an impressive 26.2% for the industry's 23-year history." Wohlers Associates conservatively forecasts industry-wide revenues to grow from USD 1.3 billion in 2010 to USD 3.1 billion in 2016. The report also predicts the industry will ship 15,000 3D systems a year by 2015. The relatively low and falling price of RM equipment (sales are largely driven by machines selling for between USD 5,000 and USD 25,000) means the return on large capital investments is no longer the constraint it is in traditional manufacturing. Although 3D printers are relatively small – EOS' largest DMLS system, the EOSINT M 280, has a footprint of 2.4 m² and weighs 1,250 kg – advanced AM machines are not yet "just round the corner at a 3D print shop on the high street", as Hopkinson forecasts they will be.
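The Wohlers growth figures quoted above can be sanity-checked with the standard compound-growth formula; a minimal sketch using the revenue numbers cited in the article:

```python
def cagr(start_value, end_value, years):
    """Compound annual growth rate between two values."""
    return (end_value / start_value) ** (1.0 / years) - 1.0

# Wohlers' forecast: USD 1.3 billion (2010) to USD 3.1 billion (2016)
forecast = cagr(1.3, 3.1, 6)
print(f"Implied forecast CAGR: {forecast:.1%}")   # 15.6% per year

# The reported historical 26.2% rate, compounded over the industry's
# 23-year history, multiplies starting revenue roughly 211-fold:
print(f"Growth multiple: {(1 + 0.262) ** 23:.0f}x")
```

Note that the forecast's implied 15.6% annual growth is well below the 26.2% historical rate, which is consistent with the report's "conservative" framing.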
<urn:uuid:038ead6a-cde8-4450-be0f-0a900125ff03>
{ "dump": "CC-MAIN-2017-30", "url": "http://iecetech.org/Industry-Spotlight/2011-07/The-3D-printing-manufacturing-revolution", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549436321.71/warc/CC-MAIN-20170728022449-20170728042449-00607.warc.gz", "language": "en", "language_score": 0.9454342722892761, "token_count": 1780, "score": 3.796875, "int_score": 4 }
Most observers are used to thinking that red light is the best way to achieve high-resolution imaging. However, this is not completely true… We are often told in planetary imaging that images are best resolved in red light. Indeed, most of our images look best through longer wavelengths (red or near-infrared). However, the laws of optics also teach us that, in theory, the resolving power of a telescope is, on the contrary, better at short wavelengths (blue light). It is easy to verify this by experimentation. A few years ago, under good seeing, I imaged the Airy pattern of my telescope (then an excellent Mewlon 210 by Takahashi) through a wide range of filters from UV to IR, and here are the results: These are images of a real star (I don’t remember which one, certainly Vega) that show textbook-perfect Airy patterns with the central disk and a first diffraction ring. The evolution of the size of the pattern is dramatic, from the little UV one to the big IR 780 (I made no attempt with my IR 1000 filter, but its pattern would have been almost twice as big as in G). So why don’t we see detailed blue-light images more often? There are several reasons for this:
- First, poor or average seeing affects blue light more than red light.
- Second, optical defects of the telescope can degrade B images more than R ones.
- And finally, the details of a planet can be very different through each colour, so the B image can be better resolved but go unnoticed because the details are less contrasted. This is obvious in the case of Mars.
But sometimes things go well and the B filter beats the R one… If your telescope is good and the seeing is good, the B image should come out more detailed than the R one. I have seen this many times over the years, and it is not a rare situation. But last year, during a very steady late-summer morning, I got a superb example on Jupiter: On this comparison, the greater sharpness of the B image is obvious. 
This is what happens when everything goes right along the whole imaging chain, from the sky to the camera! The comparison with IR light at left is even more spectacular; this is a band in which the resolution of a telescope is noticeably reduced compared with visible light. I did not even resize the image to the same scale as the R and B ones, the level of resolution was too low ;). The images were taken with a 250 mm telescope. This is not the largest aperture available to amateurs today. Differences in resolution would be harder to see with a bigger instrument, yet sometimes we do see them. On the other hand, smaller instruments will show this effect quite often. So when you do planetary imaging, do not forget to give the B filter the place it deserves. Too often we see images where the R layer receives more care than the others.
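The effect described in this article follows directly from the Rayleigh criterion, θ ≈ 1.22 λ/D. A short sketch of the diffraction limits for the author's 250 mm telescope (the filter wavelengths below are illustrative band centres, not the exact filters used):

```python
import math

ARCSEC_PER_RAD = math.degrees(1) * 3600  # ~206265 arcseconds per radian

def rayleigh_limit_arcsec(wavelength_nm, aperture_mm):
    """Diffraction-limited angular resolution (Rayleigh criterion)."""
    theta_rad = 1.22 * (wavelength_nm * 1e-9) / (aperture_mm * 1e-3)
    return theta_rad * ARCSEC_PER_RAD

# Illustrative band centres for a 250 mm aperture:
for band, wl in [("UV", 380), ("B", 450), ("G", 540), ("R", 640), ("IR", 780)]:
    limit = rayleigh_limit_arcsec(wl, 250)
    print(f"{band:>2} ({wl} nm): {limit:.2f} arcsec")

# The B limit (~0.45") is about 30% finer than the R limit (~0.64"),
# which is why a good blue image can out-resolve the red one.
```

The linear scaling with wavelength also explains the article's Airy-pattern photographs: the IR 780 disk is roughly twice the diameter of the UV one.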
<urn:uuid:64ef32f5-1081-4829-b875-17ef944b73b5>
{ "dump": "CC-MAIN-2022-40", "url": "https://www.planetary-astronomy-and-imaging.com/en/optical-resolution-and-the-wavelength/", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335054.79/warc/CC-MAIN-20220927162620-20220927192620-00089.warc.gz", "language": "en", "language_score": 0.9640011191368103, "token_count": 617, "score": 3.03125, "int_score": 3 }
It is 100 years since Roald Dahl and his magnificent imagination were born. To help celebrate, there are lots of events taking place around the country. We have also come across a few extra resources which may prove useful. There are oodles of lesson plans to accompany his titles here: https://www.roalddahl.com/create-and-learn/teach/teach-the-stories?p=1 If it’s colouring sheets that you’re after, then this should help: In September there will be a Puffin Virtually Live event online. Well worth signing up for and popping in your diary. The Revolting Rhymes can be watched here: Finally, if you sign up at http://www.teachitprimary.co.uk there is an extra resource available to print out.
<urn:uuid:0579606d-47cd-4b6a-bd18-bcda99c27934>
{ "dump": "CC-MAIN-2017-09", "url": "https://adventuresinhomeschool.com/free-roald-dahl-resources/", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170925.44/warc/CC-MAIN-20170219104610-00537-ip-10-171-10-108.ec2.internal.warc.gz", "language": "en", "language_score": 0.9460946321487427, "token_count": 178, "score": 3.0625, "int_score": 3 }
The RSPB Big Garden Birdwatch takes place once again this weekend. This year the event, organised by the charity based in Sandy, has been extended to include Monday, making it a long weekend and giving people more opportunity to spend an hour counting the birds in their garden. The birdwatch also has a serious scientific purpose, as over the last 37 years it has provided a snapshot of how the birds and other wildlife using our gardens are doing. More than 70,000 people in Eastern England took part in last year's birdwatch, which this year starts on Saturday, January 28. Numbers of familiar birds like starlings and song thrushes fell again last year. Despite being ranked number two in the table, the number of starlings visiting people's gardens has fallen by 70 per cent since the first Birdwatch in 1979, and fewer than half of us saw them in our gardens in East Anglia during the 2016 Birdwatch. The house sparrow remained top of the rankings in Eastern England in 2016 - but will it hold onto its crown this year? Enjoy an hour watching the birds in your garden. Go online and register to take part, plus download your Big Garden Birdwatch pack from the website at rspb.org.uk/birdwatch
<urn:uuid:55f27758-e1da-47cb-a6f8-6f90b2b6cd03>
{ "dump": "CC-MAIN-2018-30", "url": "https://www.biggleswadetoday.co.uk/news/spend-an-hour-counting-the-birds-in-your-garden-its-birdwatch-weekend-1-7788238", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676596204.93/warc/CC-MAIN-20180723090751-20180723110751-00451.warc.gz", "language": "en", "language_score": 0.956679105758667, "token_count": 261, "score": 2.609375, "int_score": 3 }
The Bolshevik Revolution sparked a protracted civil war of unprecedented violence and brutality. The biggest opponents were the Bolshevik Red Army and the sundry and disunited White Armies, most notably Admiral Kolchak's Siberian Army, General Denikin's Armed Forces of South Russia and General Yudenich's Northwestern Army in the Baltics. Also enmeshed in the conflict were the Czechoslovak Legion, the Cossacks, independence movements in former parts of the Russian Empire, anarchist and socialist military formations of all types, peasant insurrections, local warlords and foreign powers from both sides of World War I.
- von Dreier, Vladimir (1876–1967)
- Крестный путь во имя родины Bearing the Cross in the Name of the Motherland
- Berlin: Neie Tseit, 1921
Major General Vladimir von Dreier, a war correspondent with the Armed Forces of South Russia, focuses his account on southern front campaigns. Despite a rather starry-eyed portrayal of General Wrangel, von Dreier does not omit mistakes of the anti-Bolshevik leadership, nor atrocities committed by some White Army commanders against the civilian population. This publication is also notable for a foreword by the major émigré author Aleksandr Kuprin, full of hope for the inevitable destruction of Bolshevik rule despite White Army defeat.
- Lenin, Vladimir Ilych (1870–1924)
- Lessons of the Revolution
- Petrograd: Bureau of International Revolutionary Propaganda
The Bureau of International Revolutionary Propaganda was an early effort by the Bolsheviks to spread their ideology beyond Russia's borders. Employing sympathetic Western journalists, the Bureau published daily newspapers in German and Hungarian, as well as pamphlets in a variety of languages. This edition includes essays by Lenin, the leader of the Bolshevik Revolution and head of the Soviet government, and, interestingly, a glossary, as neither terms like "Bolshevik" and "Soviet," nor Lenin's name, had yet established themselves in the English language. 
- Сборник народных чтений по социально-политическим вопросам момента A Collection of People's Readings on the Sociopolitical Issues of the Moment
- Novocherkassk: Izd. Donskogo otdela osvedomleniia, 1919
The Cossacks, the peasant-warrior estate numbering four and a half million people, initially attempted to remain neutral during the Civil War. Both the Bolsheviks and the White Armies tried to attract Cossacks to their cause. This pamphlet, targeting the Don Cossacks, the largest of the Cossack communities, expounds on the atrocities of Soviet rule and its incompatibility with Cossack self-governance, and urges them to join the Armed Forces of South Russia in the anti-Bolshevik struggle.
- Suvorov, Mikhail (1877–1948)
- Kuz'min-Karavaev, Vladimir (1859–1927)
- Kartashev, Anton (1875–1960)
- Образование Северо-Западного правительства Formation of the Northwestern Government
- Helsinki: Akts. Obshch. Evlund i Pettersson, 1920
This pamphlet by three political advisors to General Yudenich documents the creation of the anti-Bolshevik Northwestern government with headquarters in Tallinn and explains their refusal to accept cabinet positions. This civil coalition government was formed as a prerequisite for British aid to Yudenich's army. A crucial condition was the recognition of the independence of Estonia, Latvia, and Finland (formerly parts of the Russian Empire), which for many anti-Bolshevik leaders was an unacceptable proposition.
- Последние дни Крыма The Last Days of Crimea
- Constantinople: Izd. gazety Presse du Soir, 1920
Produced mere days after General Wrangel evacuated his Russian Army from Crimea, this urgent account is based chiefly on materials from the final Crimean newspapers. It provides a chronology of the evacuation, reprints Wrangel’s last orders and eleventh-hour appeals for help to foreign governments, includes early refugee testimonies from aboard rescue ships and impressions of first days in Constantinople, and ends on a defiant anti-Bolshevik note. 
- Nemirovich-Danchenko, G.V.
- В Крыму при Врангеле In Crimea under Wrangel
- Berlin: [Tip. R. Ol'denburg], 1922
G.V. Nemirovich-Danchenko headed Wrangel's press department in Crimea, which subsidized local newspapers. His anti-Semitic views and pseudonymous criticism of Wrangel in the right-wing press led to his dismissal. Part memoir, part political diatribe, Nemirovich-Danchenko's account exposes the failures of the Crimean civilian government, condemns Wrangel as a statesman, and offers valuable insight into the organization of the press, propaganda and censorship in the final months of anti-Bolshevik resistance.
<urn:uuid:543ef0d1-9e91-4e31-8c6c-47e1c201cd6e>
{ "dump": "CC-MAIN-2018-51", "url": "https://exhibits.lib.unc.edu/exhibits/show/world-on-fire/revolution-and-civil-war", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376823618.14/warc/CC-MAIN-20181211104429-20181211125929-00033.warc.gz", "language": "en", "language_score": 0.8966615200042725, "token_count": 1186, "score": 3.28125, "int_score": 3 }
While out with friends one evening, you take a bite of ice cream. Suddenly, pain shoots through your teeth. It only lasts a second, but it's enough to ruin your good time. This could be tooth sensitivity, a painful reaction to hot or cold foods. It often occurs when the enamel in prolonged contact with acid has eroded. Acid is a waste product of bacteria found in plaque, a thin film of food particles that builds up on tooth surfaces due to inadequate brushing and flossing. Enamel normally mutes temperature or pressure sensation to the underlying dentin layer and nerves. Loss of enamel exposes the dentin and nerves to the full brunt of these sensations. Sensitivity can also happen if your gums have shrunk back (receded) and exposed dentin below the enamel. Although over-aggressive brushing can often cause it, gum recession also happens because of periodontal (gum) disease, a bacterial infection also arising from plaque. The best way to avoid tooth sensitivity is to prevent enamel erosion or gum recession in the first place. Removing accumulated plaque through daily brushing and flossing is perhaps the most essential part of prevention, along with a nutritious diet low in sugar and regular dental cleanings and checkups. It's also important to treat any dental disease that does occur despite your best hygiene efforts. Gum disease requires aggressive plaque removal, especially around the roots. While receded gum tissues often rebound after treatment, you may need gum grafting surgery to restore lost tissues if the gums have receded more deeply. For enamel erosion and any resulting decay you may need a filling, root canal treatment or a crown, depending on the depth and volume of structural damage. While you're being treated you can also gain some relief from ongoing sensitivity by using a toothpaste with potassium nitrate or similar products designed to desensitize the dentin. Fluoride, a known enamel strengthener, has also been shown to reduce sensitivity. 
We can apply topical fluoride directly to tooth surfaces in the form of gels or varnishes. Don't suffer through bouts of tooth sensitivity any more than you must. Visit us for a full exam and begin treatment to relieve you of the pain and stress. If you would like more information on the causes and treatment of tooth sensitivity, please contact us or schedule an appointment for a consultation. You can also learn more about this topic by reading the Dear Doctor magazine article “Treatment of Tooth Sensitivity.”
<urn:uuid:4acb1481-933f-41e6-a039-109e83ed75aa>
{ "dump": "CC-MAIN-2023-50", "url": "https://www.mydentalcarecenter.com/post/dont-put-off-getting-treatment-for-your-sensitive-teeth", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100184.3/warc/CC-MAIN-20231130094531-20231130124531-00467.warc.gz", "language": "en", "language_score": 0.9474901556968689, "token_count": 512, "score": 3.234375, "int_score": 3 }
a. Oil Processing. All military procurements of shell eggs for oversea shipments and many domestic shipments are oil-treated to seal the pores. This prevents contamination, absorption of offensive odors and flavors, and loss of moisture. (1) Treatment. For oil treatment to be effective, the egg must be dipped or sprayed as soon as possible after the egg is laid. Mineral oil that is tasteless, odorless, and colorless must be used for the processing. Heavy mineral oils are more effective than those with low viscosity, but they must be heated to a temperature higher than the temperature of the egg to flow easily. Vegetable oils are not acceptable because of the oxidation that occurs during storage. (2) Method. Federal specifications state that, when eggs are oil-processed, the oil must be applied by immersion or by spraying, a substantial amount of the shell covered, and the area surrounding the air cell completely covered. The method of application varies from hand-dipped and drained to the complicated wheel arrangement and spraying used to treat large quantities. The spray method has advantages over immersion since it: (a) Reduces the number of Check eggs. (b) Eliminates eggs being broken in the processing oil. (c) Is more sanitary. (d) Requires less labor, thus is less expensive. b. Thermostabilization. Thermostabilization is also used to preserve shell eggs. At a temperature of approximately 40°F (4°C), shell eggs are placed on a movable metal belt and conveyed under a continuous stream of oil at a temperature of 132° to 134°F (56° to 57°C) for approximately 15 minutes. This permits a thin layer of albumen to coagulate immediately adjacent to the shell membranes. The treated, warm shell eggs, which then have an internal temperature of approximately 120°F (49°C), are packed and placed in a cooler at 30° to 35°F (-2° to 2°C) until shipment. Thermostabilization is helpful because it: Seals the pores, supplementing the cuticle. Helps prevent absorption of foreign odors. 
Helps prevent dehydration. Helps prevent loss of gaseous carbon dioxide. Destroys the fertile germ cell.
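The paired Fahrenheit and Celsius figures in this section follow from the standard conversion formula; a quick sketch (the manual's Celsius values are rounded to whole degrees):

```python
def f_to_c(deg_f):
    """Convert degrees Fahrenheit to degrees Celsius."""
    return (deg_f - 32) * 5.0 / 9.0

# Thermostabilization oil bath, 132-134 degrees F:
print(f"{f_to_c(132):.1f} C")   # 55.6 C
print(f"{f_to_c(134):.1f} C")   # 56.7 C
# Storage cooler for treated eggs, 30-35 degrees F:
print(f"{f_to_c(30):.1f} C")    # -1.1 C
print(f"{f_to_c(35):.1f} C")    # 1.7 C
```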
<urn:uuid:f9105e3c-9e4e-4ba6-9534-f51bb66a0f06>
{ "dump": "CC-MAIN-2018-17", "url": "http://armymedical.tpub.com/MD0713/Treating-Eggs-Shell-Eggs-140.htm", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125945317.36/warc/CC-MAIN-20180421184116-20180421204116-00447.warc.gz", "language": "en", "language_score": 0.9331055879592896, "token_count": 494, "score": 2.765625, "int_score": 3 }
Granites make up the main family of deep igneous rocks, at least by the frequency of their occurrence. They only appear at the surface after the erosion of everything that was covering them. Apart from quartz, feldspar and the micas, pyroxene, often with sodium, amphibole and many others are found in granites. Microgranites have very small crystals whereas those of pegmatites are very large; obsidian is a silica glass, and the rhyolites, lavas made of granite rock, are very rare. Chaotic granite at Ameib, Namibia. © C.König Reproduction and use prohibited Description of granite - Name: granite. - Composition: quartz, potassium feldspars, sodium plagioclases and micas. Granite can occur in a range of colours. - Mean chemical composition : SiO2: 73 %, Al2O3: 14 %, (Na2O, K2O) : 9 %, oxides (Fe, Mn, Mg, Ca): 2 %. - Category: Acidic igneous rock, containing more than 66% of silica. - Natural colours: Grey, beige, pink. - Deposits: frequent but not necessarily exploitable. - Associated minerals and rocks: quartz, micas, amethyst, and others. Granitic batholiths can be relatively small and well delimited, but they have often risen as enormous, quite heterogeneous masses within ancient Precambrian shields. It is interesting to note in this connection that nearly all the granitic formations of the secondary era are concentrated around the Pacific. The development of ideas on granites has been the subject of sharp controversy. Today, the variations in geometric data of granites require various different origins and creation mechanisms to be considered. Pink Brittany granite. © DR When not weathered, granite is an excellent building material and can even be decorative after polishing. Nearly all Brittany's religious monuments were built of granite. The naturally abundant granite in Brittany is also used to build Calvary crosses, parish enclosures, funeral chambers and some lighthouses. The use of granite is far from recent. 
Indeed, 5,000 years BC man was already using granite for menhirs and dolmens!
Formation of chaotic granite
A granite formation is always split into parallelepipedal blocks by cracks called diaclases. These form a more or less dense network extending in three perpendicular directions. These diaclases generally exist only at the surface of the formation. However, they play a substantial part in deterioration: they allow water to infiltrate and are "weak spots" from which the deterioration spreads. Rain water infiltrates the diaclases, weathering the granite. This weathering, which results in the disintegration of the rock, is actually a chemical transformation of certain granite minerals. Thus the rock loses its natural cohesion and is transformed into coarse granitic sand composed of grains of quartz (inalterable), weathered crystals of feldspars, flakes of clay (hydrated aluminium silicate) and soluble components. The products of weathering are carried away in the solid state or dissolved by trickling water. A chaotic assemblage of granite blocks is left. The Drus, Mont-Blanc formation, the finest granite peak in the Alps. © DR
<urn:uuid:c79b7955-82fa-45f9-9d6a-d0632e9367cf>
{ "dump": "CC-MAIN-2019-35", "url": "http://www.futura-sciences.us/dico/d/geology-granite-50005389/", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027321351.87/warc/CC-MAIN-20190824172818-20190824194818-00354.warc.gz", "language": "en", "language_score": 0.9184712171554565, "token_count": 699, "score": 3.5625, "int_score": 4 }
Clover Bottom Mansion 2941 Lebanon Road Nashville, TN 37243 Once a self-supporting plantation of 1,500 acres and 60 slaves, Clover Bottom was the site of several encampments and now houses the State Historical Society. John McCline's memoir gives a picture of slave life there. The Clover Bottom plantation was the sprawling antebellum home of Dr. James Hoggatt, his wife, Mary Ann Saunders, and their two nieces. Built in the Italianate style in 1858, the mansion had 23 rooms, 14-foot ceilings, and high doors with elaborate transoms. Each feature reflected the wealth of its owners. In fact, Clover Bottom was just one of three plantations owned or co-owned by Dr. Hoggatt in both Rutherford County and the state of Mississippi. While Dr. Hoggatt was the patriarch of the estate, he seemed to take a backseat in day-to-day operations, leaving much of that work to his wife and his overseer. By the start of the Civil War, Clover Bottom had grown to 1,500 acres operated by 60 slaves, rendering the plantation completely self-sufficient. In the early months of the war, spotting Confederate troops on the pike on their way to Nashville was almost a daily occurrence. After Nashville fell, Federal troops could be seen marching down that same road. Quite often, soldiers encamped on the grounds of Clover Bottom. In July 1862, then-Col. Nathan Bedford Forrest [CSA] arranged for his cavalry to camp on the Hoggatts' property—just a few miles outside of Federally occupied Nashville. The Hoggatts put on a barbecue for their guests, handing out ham slices, corn bread, and hoe cakes. Much of Clover Bottom's prewar and early-war history can be found in John McCline's Slavery in the Clover Bottoms. McCline was born into slavery on the plantation, remaining there until the age of ten or eleven, when he attached himself to a passing Federal regiment. From 1862 to 1865, McCline served the Federal army as an assistant teamster, caring for the mules and horses.
Today the Clover Bottom plantation is reduced to nine acres with the main house, family cemetery, and a few outbuildings from later periods of agricultural use. Clover Bottom mansion is now the home of the Tennessee Historical Commission. The first floor is open for public visitation, but visits must be arranged in advance by calling the Commission.
- Clover Bottom was one of three Hoggatt family plantations
- A memoir written by former slave John McCline tells much of the history, including several troop encampments
- McCline attached himself to a Union regiment and left at age 10 or 11
SATURDAY, OCT. 16th, 1920 The Death of a Social System By S. Macaulay In his book, "The Economic Consequences of the Peace," J. M. Keynes, at one time a representative of the British Treasury at the Paris Peace Conference, has the following passage: "In continental Europe the earth heaves and no one but is aware of the rumblings. There it is not just a matter of extravagance or 'labor troubles'; but of life and death, of starvation and existence, and of the fearful convulsions of a dying civilization." There is a pessimistic note in Mr. Keynes' writing which is perhaps natural to one who has not experienced the keen pleasure of arriving at the solution of the riddles and paradoxes presented by the economic chaos of Europe. Mr. Keynes may have heard of Marx, but he says nothing in his book to indicate that he has considered the application of Marx's theories. The remedies he proposes are of the usual type put forward by the bourgeoisie, and futile. In one point, however, Mr. Keynes is in agreement with Marx; he speaks of the death of a civilization. Socialists have pointed out, have been pointing out for the last half century, that social systems have no permanency, that they rise and fall with the changes produced by the improvement in the methods of production. The case in point is hardly the death of a Civilization; it is the demise of a social system, and the demise is closely connected with the birth of a new system. I propose, therefore, to substitute the expression "social system" for the word Civilization. The History Of Class Struggles. The history of the human race has been a history of class struggles, the manifestations of which have been different in different countries, and under different economic conditions. But there has (since the institution of chattel slavery) always been the confliction of interests upon which the class struggle is based.
Assyrian, Chaldean, Egyptian, Greek, Roman, Aztec economic history all present the same spectacle of rise and fall. What is the canker that lay at the root of them all, that lies at the root of the present system? What is the common factor of them all? It is SLAVERY. The enslavement of one class by another. But why, it will be asked, should slavery be a disintegrating force? Is it not true that thousands of slaves were not only contented, but happy as slaves? It is perfectly true. The position of the chattel slave of the old days was infinitely better than that of the industrial slave of today; his master was also his owner, he had a personal interest in keeping him in good repair; the slave was an investment of so much money, and so must be looked after. But the slavery of the wage-worker is of a different type; it is a concealed slavery; it has the appearance of freedom; the slave is not bound to one master, he is the slave of a CLASS, he belongs to a slave CLASS. The industrial slave of today is driven at a rate unknown in the days of chattel slavery; the cut-throat competition for markets compels an intensification of exploitation which makes the life of the chattel slave appear a holiday in comparison. It is this very intensification which is at the root of the "industrial unrest" which is making itself manifest in all capitalistically developed countries. Consciously or unconsciously, the slaves are beginning to feel the gall-sores. They are becoming class-conscious. In most cases they are in utter darkness as to both the disease and the remedy, and it was not until Marx exposed the disease that the remedy was also made plain. "Concessions" To The Workers. During recent years the various Governments have been frantically making concessions to the workers. Unemployment insurance, old age pensions, profit sharing, etc., etc., have been the sops thrown to the "animals" to pacify them. But they refuse to be pacified. Distrust of Government
and Parliament, National Assemblies and Cabinets is openly expressed, and in some cases they have been overthrown. Disgust at the futility of "parliamentary action" is plainly manifested in the strikes and resorts to industrial mass action which are prevalent. These expressions of discontent on the part
What Is Rosemary? Rosemary is an evergreen aromatic shrub with thick leaves. The herb has small, pale-blue flowers that often bloom in late winter and early spring. Although this plant is native to southern Europe, it is now grown worldwide. Other types of this herb are marsh or wild rosemary (Ledum palustre) and bog rosemary (Andromeda species). If you are looking for the benefits of rosemary for health and beauty, have a look at this article on Effectiveremedies.com in the Superfoods series. What Are The Benefits And Uses Of Rosemary For Health & Beauty? Rosemary has been used widely for centuries in the treatment of many health problems. Studies suggest that this herb has anti-inflammatory, antimicrobial, antibacterial, antiseptic, antioxidant, and anticarcinogenic activity. Therefore, it can aid in treating digestive tract problems, respiratory conditions, headaches, bad breath, baldness, body odor, oedema (fluid retention), cognitive and neurological problems, macular degeneration, arthritis, and poor blood circulation. Some preliminary results also suggest that it can reduce muscle pain, promote urine flow, increase menstrual flow, improve hair growth, and prevent blood clots or thrombosis. 1. Health Benefits Of Rosemary – Boosts Your Immune System Rosemary oil stimulates antioxidant activity, a powerful weapon for combating infection and disease. Massaging with and inhaling rosemary essential oil reduces cortisol levels, which helps to inhibit the activity of free radicals in your body. Rosemary oil is often inhaled or used in aromatherapy sessions to boost the immune system. It also aids in combating diseases caused by free radicals, including heart disease and cancer. 2. Health Benefits Of Rosemary – Improve Memory Rosemary has long been regarded as a memory-enhancing plant. Many researchers have found that this herb contains diterpenes that offer neuroprotective effects.
They believe that these diterpenes can help to protect you from Alzheimer's disease and the loss of memory that comes with aging. Even the smell of this herb is said to improve memory. 3. Beauty Benefits Of Rosemary – Hair Growth The antioxidant properties of this herb boost the growth of your hair. To get the detailed recipe using rosemary for hair growth, see Natural Home Remedies For Hair Growth And Strength Fast (Remedy no. 1). 4. Darken Your Hair Mix 3 teaspoons of dried rosemary leaves with 1 glass of hot water and boil it. After that, steep this mixture and let it cool. Now, pass it through a sieve to collect the water. Next, shampoo your hair and apply this infusion to it. Massage your hair and wait for about 5 minutes. Finally, rinse it off. 5. Health Benefits Of Rosemary – Treat Tendonitis Take 250 cubic centimetres of ethyl alcohol and 25 grams of rosemary and mix them together. Pour the mixture into a bottle with a tight cap. Soak it for 7 days and then strain it. Finally, massage the affected area with this. 6. Treat Scabies The anti-inflammatory and anti-microbial properties of this herb will protect the affected areas from further infection. Furthermore, rosemary can reduce the irritating sensation and speed the healing process. You will need rosemary and water for this treatment. To learn how to use rosemary for scabies, see Natural Home Remedies for Scabies Treatment in Humans (Remedy no. 2). 7. Benefits Of Rosemary – Grow Facial Hair Mix rosemary oil and virgin coconut oil at a ratio of 1:10. Next, use a clean cotton ball to apply it to your skin. Leave the mixture on for 15 to 20 minutes. Finally, rinse the skin with water. Use this mixture twice a day. 8.
Weight Loss Add several drops of rosemary oil to coconut or olive oil and mix them well. Then, massage this mixture into areas where fat has accumulated for about 30 minutes. After that, wash off the oil mixture with clean water. Learn more: Best Natural Home Remedies For Weight Loss Fast 9. Health Benefits Of Rosemary – Treat Depression Rosemary can aid you in calming your nerves, which keeps this issue at bay. There are 2 remedies for depression using rosemary. To get these recipes, see Quick Natural Home Remedies For Depression (Remedy no. 8). 10. Treat Cracked Hands Heat 1/2 cup of olive oil and let it cool down. Next, add some drops of rosemary oil to it. Then, massage this mixture into the affected areas. Wait for 30 minutes and then rinse your hands with tepid water. 11. Benefits Of Rosemary – Remove Foot Odor Firstly, add boiling water to a foot basin. Add 1 teaspoon each of dried rosemary leaves and sage to the water. Now, steep it and let it cool until comfortably warm. After that, soak your feet for at least 30 minutes. Use this remedy 1 or 2 times daily. 12. Tighten Belly Skin Rosemary aids in improving blood circulation throughout the body, which is useful in tightening the skin around the stomach after weight loss. You will need rosemary, honey, and witch hazel for this purpose. To get this recipe in detail, see How to Tighten Belly Skin Naturally & Fast at Home (Tip no. 4). 13. Health Benefits Of Rosemary – Treat Impetigo Infection Simply boil 2 cups of clean water and add 1/4 cup each of rosemary and thyme leaves to it. Continue to simmer for about 15 minutes. Use this solution to clean the affected area for children. 14. Treat Dry Mouth & Bad Breath You can use rosemary to make a mouthwash for treating dry mouth. To make the mouthwash, mix 1 tbsp. of dried rosemary with 1 tsp. each of aniseed and dried mint. Add this mixture to a glass of hot water. Then, cover it and place it in your refrigerator. 15.
Beauty Benefits Of Rosemary – Remove Split Ends Take some rosemary leaves and burdock root for removing split ends. For more information about using this herb for split ends, see Greatest Natural Home Remedies For Split Ends Without Cutting (Remedy no. 22). 16. Treat Leg Cramps Add rosemary leaves to a half-pint jar and pour hot water over the leaves to steep them for 30 minutes. Now, dip a clean washcloth into this water and apply it to the affected areas for about 12 to 15 minutes. When the affected leg improves, remove the washcloth and apply a cold compress. Or, add 1 or 2 tsp. of dried rosemary leaves to 1 cup of boiled water. Let them steep for about 10 minutes and then strain the water. Drink this tea about 3 times daily. 17. Beauty Benefits Of Rosemary – Repair Damaged Hair For this treatment, you will need castor oil, olive oil, rosemary oil, holy basil oil, and nettle oil. To learn how to use these oils for treating damaged hair, see Effective & Natural Home Remedies For Damaged Hair Repair (Remedy no. 27). 18. Treat Vaginal Itching Simply steep some rosemary leaves in boiled water for about 15 minutes. After that, strain the water and let it cool. Now, use this water to rinse your vaginal area a few times a week. 19. Treat Chest Congestion You may steam with essential oils to treat chest congestion. You need to prepare peppermint oil, rosemary oil, eucalyptus oil, and hot water. To get the instructions for using these oils for chest congestion, see Natural Home Remedies For Chest Congestion In Adults (Remedy no. 8). 20. Health Benefits Of Rosemary – Relieve Lower Back Pain Add rosemary to boiling water and cover it. Let it steep for 30 minutes. Drink 1 cup of this tea before going to sleep and 1 cup before breakfast. Have this tea regularly to relieve lower back pain. Read more: Natural Home Remedies For Back Pain Relief 21.
Treat Headaches For this treatment, you need crushed rosemary leaves, crushed sage leaves, and clean water. To get the detailed recipe for using rosemary leaves for a headache, see Best Natural Home Remedies For Headaches Pain (Remedy no. 8). 22. Health Benefits Of Rosemary – Relieve Muscle Cramps Add some rosemary leaves to hot water and steep them for 25 to 30 minutes. Dip a washcloth in this water and apply it to the affected muscles for at least 10 minutes. After that, apply a cold compress to these areas. Or, you may mix 2 teaspoons of dried rosemary leaves with 1 cup of boiling water and drink the solution slowly. 23. Treat Dry Scalp You need to prepare oatmeal powder, rosemary oil, and clean water to make a treatment for dry scalp. See Quick Natural Home Remedies For Dry Scalp In Winter (Remedy no. 26) for the detailed recipe. 24. Health Benefits Of Rosemary – Great For Low Blood Pressure You will need dried rosemary leaves, olive oil, and 2 jars with lids. To get the instructions for using rosemary oil for low blood pressure, see Natural Home Remedies For Low Blood Pressure Symptoms (Remedy no. 2). 25. Cure Numbness Simply add some drops of rosemary oil to your favorite fruits or salads and consume it. Alternatively, you may massage your affected areas with this oil. Repeat this process daily for 2 months to get rid of this problem. 26. Health Benefits Of Rosemary – Relieve Yeast Infection The anti-inflammatory properties of rosemary help to kill the yeast. In addition, it is also effective in relieving the burning or itching. To get the recipe for using rosemary for a yeast infection, see Natural Home Remedies For Yeast Infection Relief On Skin (Remedy no. 14). 27. Treat Cellulitis Combine rosemary and fennel oil and apply the mixture to the affected skin area. Keep using this combination until the infection is cured completely. 28.
Health Benefits Of Rosemary – Relieve Anxiety Attacks You just need rosemary oil and hot water for this treatment. For more information about using rosemary oil for anxiety attacks, see Natural Home Remedies For Anxiety Attacks Relief (Remedy no. 14). 29. Treat Sour Stomach To cure a sour stomach, add some drops of rosemary oil to your bath water and take a bath with it daily to relax and relieve your sour stomach. 30. Benefits Of Rosemary – Treat Itchy Scalp For this treatment, you will prepare neem oil, tea tree oil, rosemary oil, lavender oil, and a carrier oil. To get the detailed recipe for using rosemary oil for an itchy scalp, see Natural And Effective Home Remedies For Itchy Scalp In Summer (Remedy no. 9). 31. Beauty Benefits Of Rosemary – Fade Stretch Marks Rosemary oil is considered one of the best oils for fading stretch marks, particularly when mixed with coconut oil. Simply massage this combination on the stretch marks regularly. 32. Beauty Benefits Of Rosemary – Nail Growth Mix honey with several drops each of rosemary essential oil and coconut oil. Then, place the mixture in the microwave for 15 to 20 seconds. Now, soak your nails in this combination for 15 minutes. Repeat this method 1 or 2 times per week. 33. Grow Thick Hair For growing thick hair, you need rosemary oil, grapeseed oil, jojoba oil, lavender oil, atlas cedarwood oil, and thyme oil. To get the detailed recipe for using rosemary and other essential oils for thick hair, see How To Grow Thick Hair Fast & Naturally at Home (Tip no. 18). 34. Beauty Benefits Of Rosemary – Treat Acne Simply massage your face with rosemary oil to lighten dark spots, acne, and blemishes on your skin. 35. Beauty Benefits Of Rosemary – Stop Oily Hair Add 2 tbsp. of rosemary leaves or a tea bag to a cup of hot water and steep it for about 20 minutes. Allow it to cool and apply this tea to your hair after shampooing. 36.
Health Benefits Of Rosemary – Treat Ringworm You need to prepare rosemary oil, oregano oil, wild thyme oil, and sweet almond oil for this treatment. To get the detailed instructions for using rosemary for ringworm, see Best Natural Home Remedies For Ringworm In Humans (Remedy no. 15). 37. Beauty Benefits Of Rosemary – Rejuvenate The Skin Massage your skin with rosemary oil to rehydrate and tone it. It fades wrinkles and bags on your skin, thus keeping it healthy and beautiful. The cell-regenerating properties of rosemary oil help to treat visible skin problems and replace damaged tissue, which is effective in reducing scars and spots. 38. Health Benefits Of Rosemary – Treat Sprained Ankle Firstly, chop rosemary leaves finely and add the chopped leaves to a jar. Now, pour grain alcohol over them and cover the jar with a lid. Next, place it in a cool place for at least one month. After that, strain out the leaves with cheesecloth. Then, store this tincture in dark bottles. Apply the tincture to your sprained ankle as needed. 39. Remove Head Lice You need to prepare neem oil, tea tree oil, sesame seed oil, eucalyptus oil, rosemary essential oil, and lavender oil for this purpose. To get the instructions for using rosemary for head lice, see Natural Home Remedies For Head Lice In Adults & Toddlers (Remedy no. 10). 40. Beauty Benefits Of Rosemary – Get Straight Hair Take half a cup of gel from an aloe vera leaf. Then, add half a cup of warmed olive oil to it. You may also pour 6 drops each of rosemary and sandalwood oil into it. Now, combine them in a bowl and apply this mask to the scalp. Use a shower cap to wrap your hair and wait for 1 to 2 hours. Finally, wash it thoroughly with cool water and a mild shampoo. Repeat this method regularly once per week for a few weeks to get straight hair. What Are The Side Effects Of Rosemary? Using rosemary is likely safe if consumed in amounts present in foods.
When taken as a medicine, it is possibly safe for people to take by mouth, inhale as aromatherapy, or apply to the skin. But the undiluted oil seems to be unsafe if taken by mouth. Consuming large amounts of this herb can cause uterine bleeding, vomiting, kidney irritation, skin redness, allergic reactions, and increased sun sensitivity. Special Precautions And Warnings: Pregnancy & breastfeeding: Rosemary seems to be unsafe to take by mouth in medicinal amounts. Rosemary may affect the uterus or stimulate menstruation, which can cause a miscarriage. Although there is not enough evidence about the safety of using rosemary on the skin during pregnancy, it's best to keep away from it. Pregnant women should also avoid using this herb in amounts greater than food amounts. If you are breastfeeding, remember to steer clear of using this herb in medicinal amounts. Bleeding disorders: Rosemary may increase the chance of bruising and bleeding in people suffering from bleeding disorders, so use it cautiously. Seizure disorders: This herb might worsen seizure disorders. Avoid using it. Aspirin allergy: This herb contains salicylate, a chemical similar to aspirin, which can cause a reaction in those who are allergic to aspirin. Where And How To Buy Rosemary? You can find rosemary at grocery and herbal food stores in many forms, such as the fresh or dried plant, capsules, essential oil, alcohol tincture, liquid extract, and tea. You may buy fresh rosemary at grocery stores. When purchasing fresh rosemary, ensure that its stems do not have any dark or yellow spots. When purchasing the dried, oil, or other forms, opt for products made with organically grown, non-irradiated, and pesticide-free rosemary leaves. These are a few of the benefits of rosemary for health and beauty that you should know. Hope that this article helps you learn more about this herb as well as its uses and side effects. If you know other benefits of rosemary for health and beauty, please share with us.
Lifestyle diseases: the burden of choice? Christopher Dye Gresham Professor of Physic Notes on the slides, 15 March 2006 These notes and the lecture slides draw on many sources, some of which are quoted directly. I will provide these sources on request. 1. In the last lecture, I talked about the demographic transition and, in one sense, this series of lectures has followed the time trajectory from left to right in this picture. I explained back in February how the fall in deaths and the rise in life expectancy, followed by the fall in births, led to an increase in population. My theme today is about the consequences of that transition for health - the rise of the so-called "lifestyle diseases". 2. But what are "lifestyle diseases"? And what do I mean by the "burden of choice"? Have we made a choice and now suffer the burden of illness as a result? Is there an added burden to making and acting on the choice? 3. "Lifestyle disease" is in fact a poor choice of terminology. For one thing, it implies that we mostly do have a choice about what we suffer from. I will show you in this lecture that while we have choices, many of them are tough choices and limited in scope. In some instances - most obviously with respect to diseases that are due to the genes we inherit - we have no choice at all, at least until gene therapy comes along. The term "lifestyle diseases" has been used more or less synonymously with "diseases of civilization", the "Western disease paradigm", "diseases of affluence", "chronic diseases", "non-communicable diseases" and "diseases of longevity". As a way of defining and focusing on the problem, let's have a quick look at how well these other names work. 4. In so far as obesity is due to overeating, and I'll discuss that later, "diseases of civilization" hardly seems appropriate. 5.
The "Western disease paradigm" has arisen out of thinking about the demographic transition, and rich ("Western") countries are further along that transition. But those public health specialists who are concerned with the distribution of disease globally will point out, for example, that there are more people with type 2 diabetes in low- and middle-income countries. The top 5 countries for diabetes include China, India and Indonesia, in part because of the very large populations of those countries. 6. The same argument applies to the term "diseases of affluence". Because of the distribution of people in the world, 80% of deaths from diseases like diabetes are in low- and middle-income countries. 7. However, the distribution of causes of deaths is very different within rich and poor countries. When we express the number of deaths per head of population, we see that most deaths in the rich world (blue) are from chronic or non-communicable diseases, whereas in the poor world deaths are more evenly distributed among infectious (communicable) and non-communicable diseases. The double burden imposed on the poor world means that the total death rate per million people each year is far greater than in the rich world. 8. The term "chronic disease" comes closer to characterizing the major problems of ill health faced by the rich world. We are talking, for example, about cardiovascular diseases (heart disease, stroke), cancer, chronic respiratory diseases and diabetes. These are among the dominant causes of long-term or chronic illness. But this discussion highlights another key point about health and the demographic transition. I've been speaking about poor countries and rich countries. Increasingly, however, we must make distinctions not only among countries but within them. When we say that poor countries (i.e.
those with low average GDP) suffer a double burden of illness, we are referring to different groups of people within those countries; the gap between rich and poor within countries is widening. 9. Keep that in mind when comparing the causes of death in Asia and Africa, and western Europe and north America. In Africa, the dominant causes of death are infectious and parasitic diseases, followed by cardiovascular disease, with cancers ranked 4th. 10. In western Europe and north America, the largest causes of death are cardiovascular diseases (heart attack, stroke) and malignant cancers, with infectious diseases barely holding their place in the top 12. 11. Diseases of longevity? As I've shown you before, we are certainly living longer, and the rise in lifespan is unrelenting. The life expectancy of women in the leading countries has been increasing by 1 year in every 4 years for 150 years. Chronic diseases are clearly associated with living longer, and the two phenomena are associated for fundamental reasons, which I will come to shortly. 12. Here are the basic facts: in rich countries, it is mostly people 45 years and over that lose years of healthy life from cancer, heart disease and stroke. 13. That said, and to emphasize again that none of these descriptors is perfect, here is the caveat. Obesity is certainly a problem in long-lived adults: very crudely speaking, the longer you over-eat, the greater the chance you will eventually get fat. But obesity is increasingly an affliction of children too - as shown here for the USA. Overweight in children and adolescents was relatively stable from 1960 to 1980, but has sharply increased since then. One of the US national health objectives for 2010 is to reduce the prevalence of overweight from a baseline of 11 percent in 1994. The 2003-2004 overweight estimates suggest that, since 1994, overweight in youths has not levelled off or decreased, but is increasing to even higher levels.
The 2003-2004 findings for children and adolescents suggest that the US is producing another generation of overweight adults, who will be at risk for related illnesses. 14. To summarize the first part: we'll settle on the term "chronic diseases". These are principally heart disease, stroke, cancer, diabetes and chronic respiratory diseases, and they are the major conditions of ill health in long-lived populations, whether these populations are in countries that are rich on average or poor on average. There is, of course, a hierarchy of causes of these conditions, and I want to focus in the next sections of this lecture on those causes that we can fix, and those that we can't easily fix. As a broad generality, we can avoid a number of the causes of cardiovascular disease (CVD). By contrast, the causes of cancer, with one or two exceptions (smoking and lung cancer), are less well-known, less predictable and therefore harder to avoid. 15. Before that, however, I want to bring in a question that is fundamental to any discussion about the origins of chronic disease: Why do we age and die? 16. It's said that life begins at 40. It does if you think that the end of the reproductive period signals the start of a new period of freedom. The sad truth is that, by age 40, we are essentially redundant in evolutionary terms. And it is evolution that has shaped our pattern of survival. These are the ages of conception of women in England and Wales in 2005, peaking in the age class 25-29 years, with hardly any women conceiving after age 40, and the average age of menopause being around 50. 17. This has some unpleasant consequences for the over 40s. Survival depends on bodily maintenance and repair. Because most wild animals die young, including humans for most of their existence, Darwinian natural selection is powerless to eliminate mutations that have their adverse effects at later ages.
This is especially true if these adverse effects are coupled with traits that are beneficial in a youngster. For example, a high testosterone level may aid the drive to reproduce in a young male, but it predisposes towards health problems down the line, such as heart disease. From the perspective of natural selection, the benefit in early life counts much more than the trouble that might occur later. 18. Underlying the aging process is a lifelong accumulation of molecular faults and damage. Such damage is intrinsically random in nature, but its rate of accumulation is regulated by genetic mechanisms for maintenance and repair. As cell defects accumulate, the effects on the body as a whole are eventually revealed as age-related frailty, disability, and disease. This model accommodates genetic, environmental, and intrinsic chance effects on the aging process. Genetic effects are expressed primarily through maintenance functions, while environment (including nutrition and lifestyle) can either increase or help to decrease the accumulation of molecular damage. Cellular defects often cause inflammatory reactions, which can themselves exacerbate existing damage; thus, inflammatory and anti-inflammatory factors can play a part in shaping the outcomes of the aging process. 19. Longevity is regulated by the joint actions of many maintenance and repair pathways, which control components of systems such as those involved in antioxidant defence, DNA repair and protein turnover. Each pathway guards against the age-related accumulation of particular kinds of damage and provides a period of 'assured longevity' before the relevant lesions build up to a level that threatens survival. The plasticity of ageing and its susceptibility to environmental effects arises because many non-genetic factors, such as lifestyle and nutrition, can affect, for better or worse, the response mechanisms that deal with exposure to damage. 
When a mutation reduces the effectiveness of one of these mechanisms, as shown in the figure by the red cross and truncated arrow, the corresponding lesions accumulate sooner, resulting in a shorter period of assured longevity, which may shorten the lifespan of the organism as a whole. 20. Can we fix all the faults? The view of researchers like Doug Wallace makes most sense to me: "...as each life-limiting process is countered, some other process will become limiting". Even if we knew how to fix the faults, the cost of fixing them may not only exceed the benefits, but may simply be unaffordable in absolute terms. The hope must be that, since we can't fix all the faults piecemeal, we will be able to find common and modifiable causes of a wide array of conditions of ill health. 21. What causes of diseases can be identified and modified? 22. The answer for heart disease is that a series of well-known risk factors - to do with diet, smoking and exercise - are strong predictors of illness. Roughly speaking, and taking all these factors together, we can account for 84% of instances of heart disease. Notice that alcohol scores negative in this kind of analysis, i.e. there is evidence that it protects against heart disease. 23. The picture for stroke looks about the same. Same principal risk factors - high cholesterol and blood pressure. And once again alcohol seems to be protective. 24. This composite analysis is based on many specific studies. Here we can look at no more than one or two in detail. Trans fats (short for trans fatty acids) were invented for convenience in cooking. They are made through the hydrogenation of oils. Hydrogenation solidifies liquid oils and increases the shelf life and the flavour stability of oils and foods that contain them. Trans fat is found in vegetable shortenings, some margarines, crackers, biscuits, snack foods and chips. 
Trans fat increases so-called bad cholesterol, low-density lipoprotein (LDL), and decreases so-called good cholesterol, high-density lipoprotein (HDL). The ratio of LDL to HDL determines the risk of heart disease, as in this study of 20,000 women followed over a period of 20 years. 25. Health and life expectancy vary from one part of England to another. In general, people have shorter lives in the north of the country, and in East London. We can't explain all of this variation but we can explain some of it. 26. Smoking deaths are higher in northern England and London. The gap in lung cancer deaths between men and women has narrowed. Although the estimated smoking-attributable mortality rate for men is more than double that for women, in 2002-04 the regional pattern is similar for both. The regional distribution for persons is shown on the map. For men, the West Midlands is also above average and for women, London is around average. Male death rates from lung cancer at ages under 75 have decreased substantially since the late 1960s, whereas those for women increased until the late 1960s, but have fallen slightly in recent years. 27. The evidence that smoking leads to premature death is abundant, but new data express the problem in a different way. There are substantial social inequalities in adult male mortality in many countries. Smoking is often more prevalent among men of lower social class, education, or income. The contribution of smoking to these social inequalities in mortality remains uncertain. In populations in England & Wales and in Poland, most (but not all) of the substantial social inequalities in adult male mortality during the 1990s were due to the effects of smoking. Widespread cessation of smoking could eventually halve the absolute differences between these social strata in the risk of premature death. 28. What about alcohol? 29. Alcohol, in moderation, does lower the risk of coronary artery disease, and many studies have now confirmed that. 
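The trans-fat mechanism described above works through a simple quotient: LDL divided by HDL. As a rough sketch (the concentrations below are illustrative assumptions, not figures from the study of 20,000 women), the ratio and the direction of its shift under a trans-fat-heavy diet can be computed like this:

```python
def ldl_hdl_ratio(ldl_mg_dl: float, hdl_mg_dl: float) -> float:
    """LDL/HDL cholesterol ratio, a common marker of heart-disease risk."""
    if hdl_mg_dl <= 0:
        raise ValueError("HDL concentration must be positive")
    return ldl_mg_dl / hdl_mg_dl

# Illustrative (assumed) values in mg/dL: trans fat raises LDL and lowers HDL,
# so both changes push the ratio in the unfavourable direction.
baseline = ldl_hdl_ratio(ldl_mg_dl=100.0, hdl_mg_dl=50.0)
shifted = ldl_hdl_ratio(ldl_mg_dl=110.0, hdl_mg_dl=45.0)
assert shifted > baseline
```

The point of the sketch is only that the two effects of trans fat compound each other in the ratio; actual risk thresholds vary by clinical guideline and are not taken from the lecture.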
Here, for example, a 16-year study of 50,000 US physicians found that ½ to 2 drinks a day was associated with measurably lower risk (compared with no drinking). 30. Epidemiological data support strong policy initiatives on both primary and secondary prevention. Cardiovascular risk factors can be reduced in entire populations through comprehensive and integrated prevention programmes. In patients with established coronary heart disease, there is extensive evidence on the effectiveness and cost effectiveness of secondary preventive cardiological treatments. In several countries, the application of existing knowledge has led to major improvements in the life expectancy and quality of life of middle aged and older people. For example, deaths from heart disease have fallen by up to 70% in the last three decades in Australia, Canada, the UK and the US. Middle income countries, such as Poland, have also been able to make substantial improvements in recent years. Such gains have been realized largely as a result of the implementation of comprehensive and integrated approaches that encompass interventions directed at both the whole population and individuals, and that focus on the common underlying risk factors, cutting across specific diseases. According to WHO, the cumulative total of lives saved through these reductions is impressive. From 1970 to 2000, WHO has estimated that 14 million cardiovascular disease deaths were averted in the United States alone. The United Kingdom saved an estimated 3 million people during the same period. 31. So there are some clear causes of cardiovascular disease, which can in principle be modified. What causes of ill health cannot be modified? 32. If we do the same exercise for cancer as we did for heart disease, we get a different and more disappointing result. Now, the obvious risk factors for chronic ill health explain a relatively small fraction of all instances of cancer, with the exception of smoking. 
As a rough estimate, all these risk factors, taken together, explain a little more than one third of the instances of cancer. Why is that so? Why is cancer different from heart disease? 33. To explain this, we need to understand what cancer is. We have broadly two kinds of cells in our bodies: germ cells (sperm, eggs) and somatic cells (all other cells that are differentiated into various body parts, from embryonic cells onwards). The germ cells pass genetic information from parent to offspring and are in effect immortal. Somatic cells build bodies, which have a limited life span. Body cells are mortal. Cancer arises when a mutation causes the somatic cells to revert to germ-like cells. For a single celled organism, "cancer" is not a disease, but a competitive advantage. Uncontrolled cell division is only a problem in a multicellular organism where the cells of the body (soma) must cooperate in order for the organism to pass on its genes to the next generation. Thus, the evolution of multicellular organisms has probably been the story of the accumulation of ways of suppressing dividing cells - through the accumulation of tumour suppressor genes in the genome. The tumour suppressor genes enforce the pact between the soma and the germ line. Health is not the absence of tumours but the control of tumours. Occasionally a stem cell will suffer a mutation in a tumour suppressor gene, either because of exposure to an environmental mutagen or from bad luck with a copy error during the cell cycle. If these mutations allow the stem cell to reproduce faster than its neighbours or reduce the chance that it will die, that stem cell's progeny will tend to replace its neighbours and spread its mutation through the local tissue. That is cancer as an outcome of natural selection. Cancer has clearly been a selective force in human evolution. 
The reason cancer is generally a disease of old age is that our ancestors who had genomes susceptible to the early onset of cancer probably did not survive long enough to pass on those vulnerable genomes. Furthermore, some of the common cancers today are probably reflections of a difference between the selective forces that governed our ancestors and the modern environment and lifestyle. This can be seen in the high incidence of breast cancer. Any time cell populations are stimulated to undergo regular bouts of proliferation and cell death, as occurs in the breasts during the female menstrual cycle, those cells are at an added risk of acquiring mutations in their genes. By studying hunter-gatherer societies, anthropologists estimate that until very recently, food shortages and early and frequent pregnancies, along with breast-feeding practices, among our ancestors probably limited the total number of ovulations in a woman's lifetime to approximately 160. Compare this to an average of 450 ovulations in a modern American woman's lifetime. 34. So cancer is caused by genetic defects. If we could reliably identify the causes of the genetic defects, and if these causes could be modified, then we could prevent cancer. The epidemiological evidence I have shown you is not hopeful, in general, about pinning down the causes. Nor are molecular studies of how cancer arises. For example, this study was published in Nature last week. The authors used cell samples from 210 different types of cancer, and searched for mutations in the genes of these cells that are not present in those of non-cancerous cells. They found two things, both of which are bad news for prevention. First, they found many more new cancer genes than expected. Ideally we want to see few causes, not many causes. Second, they found that cancer genomes carry many unique abnormalities; not all mutations contribute equally. 
Taken together, these two observations suggest that cancer is unpredictable, and therefore not easily preventable. 35. And to return briefly to the epidemiology, cancer trends are not as positive as those for heart disease. These are the trends in the USA for the last (nearly) 30 years for the top 10 cancers. Red is up, white is stable, and green represents a decline. For men, the picture is largely red, with the exception of the lung and oral cancers that are linked to smoking. 36. To the extent that the genes we get from our parents are responsible for disease (as distinct from mutations in somatic cells), there is nothing we can do about that personally. Some good news here is that twin studies, such as those done by Kaare Christensen, suggest that genes account for only up to ¼ of the variation in human lifespan. 37. Another way to see that is in scatter diagrams. The wide scatter of points for matched pairs of monozygotic and dizygotic twins shows that their life spans are poorly correlated. The determinants of lifespan are complex, and not down to just a few genes. 38. To bring together the last two sections: I've examined the two principal kinds of chronic illness, some of which (especially heart disease) have known and modifiable causes, and others of which do not (especially cancers). In that light, how much choice do we really have? In this assessment, I'm going to remain sober, but hopefully not too pessimistic. 39. Some of the advice you will find in books like Kurzweil's "Fantastic Voyage" is correct, but implausibly hopeful. Kurzweil sees three bridges to immortality: Bridge One, in which current knowledge is used to slow down the aging process; Bridge Two, in which advances in biotechnology stop disease and reverse aging; and Bridge Three, in which (nano)technology creates a man-machine interface, expanding physical and mental capabilities. 40. This programme is truly optimistic. Take the case of antioxidants and free radicals. 
The "Fantastic Voyage" takes the view that more supplements are better. The more balanced, if conservative, scientific view is that the evidence in favour of antioxidant supplements is ambiguous, and the mode of action is not fully understood. Therefore, "stick to tea, fruit, vegetables, wine in moderation - until there is more evidence." 41. What can we do about obesity? England has the second most obese population in the European Union. Are the English obese by choice? 42. Obesity in England is more common in the north and midlands than in the south. In Boston, they smoke, they don't eat healthily, and men can expect to die five years earlier than their counterparts in Saffron Walden. Almost one in three of the population in Boston is clinically obese - making it, officially, the fattest place in England. The difference between the two market towns, both surrounded by fresh fruit and vegetables, cannot be put down simply to poverty. There are plenty of places far more deprived than Boston which do not share its obesity problems. In England in 2003, women living in the West Midlands were most likely to be obese, while those living in London, the South East and the South West showed the lowest prevalence. For men, the prevalence of obesity was greatest among those living in Yorkshire and the Humber, while those living in London showed the lowest prevalence. 43. It's partly down to following the advice that you can find on an abundance of websites. Here, for example, is the advice to me from MyPyramid.com. The basic advice given here on diet - on leading a healthy lifestyle to avoid chronic diseases - is about right. 44. At the same time, it is clear that the obesity epidemic will not be curbed just by tackling the "big two". You can't beat the 2nd law of thermodynamics, but the evidence that they are the main, direct cause of the epidemic - or that halting them would reverse it - is largely circumstantial. 
According to David Allison (University of Alabama), "we threw tens of millions of dollars at the best investigators in the world - and they found absolutely no effect." 45. A recent review in the International Journal of Obesity found 10 other possible explanations, listed here. 46. Some things that affect the risk of chronic illness are simply beyond our personal control. While the genes we inherit may not affect our longevity very much, there is still reason to choose your parents carefully. David Barker is famous for his "Early Origins Hypothesis" (1986), which links low birth weight to increased risk of chronic disease in later life. Trans-generational effects explain why we can expect the prevalence of chronic disease to change slowly in populations. 47. Also hard to fix on a personal level is our social predicament. It is widely recognized that if you're born poor, you will often spend your life trapped in poverty, and your children will be poor. 48. In some parts of Africa, from which these local sayings come, it is easy to recognize harsh poverty, and even partly to understand why it is perpetuated. But there are other, more subtle, long-term effects on health that arise from having little control over one's life, coupled with low social participation. This is what Michael Marmot has called the "status syndrome". 49. In summary, we do have choices to reduce the burden of chronic diseases, for example by following standard advice on diet, exercise and smoking. Some of the personal choices are tough, and will require a change to our lifestyles. Some are beyond personal control because, for example, they are about where we sit in the social pecking order. Some are unchangeable because of who our parents are. This is partly in our genes, but perhaps more importantly about the environment in which our parents lived. As the term chronic disease correctly implies, there are unlikely to be quick fixes - there's no elixir for the "Fantastic Voyage".
Each year when we start talking about Lent and Easter, we hear a familiar question from some Bible Gateway visitors: What is Lent? Is it an official Christian holiday? Was it instituted in the Bible? What does it mean to observe Lent, and are Christians “required” to do so? For the interested, we’ll try to answer those questions here. Lent is the span of time in the church calendar that starts with Ash Wednesday and ends with Easter Sunday. Ash Wednesday commemorates the beginning of Jesus’ 40-day fasting and temptation in the desert, and Easter Sunday commemorates Jesus’ resurrection from the grave after his crucifixion. Lent, then, is generally observed as a time for Christians to reflect, repent, and pray as a way of preparing their hearts for Easter. It is commonly observed by many Christian denominations—Catholic, Anglican, Lutheran, and others—although not every Christian church or denomination does so. Because Lent is not officially instituted in Scripture, observing it isn’t in any way a “requirement” of Christianity. However, Christians from many different theological persuasions choose to observe it as a way of focusing their thoughts on Jesus Christ during the Easter season. How Do You Observe Lent? It differs from person to person and church to church, but some of the things Christians opt to do to observe Lent include: - On the first day of Lent (Ash Wednesday), some Christians mark their foreheads with ash as a symbol of sorrow and mourning over their sin. (See Job 42 for an example of ash used as a symbol of repentance.) - Special worship services, or additions to regular worship services, that focus in various ways on man’s need for repentance. This often takes the form of extra Scripture readings and prayer. - Some Christians choose to give up a habit or behavior during Lent as an exercise in prayerful self-denial. This might range from something as simple as not drinking soda during Lent to a full-blown program of fasting. 
- Some Christians commit to a special devotional activity during Lent—for example, daily Scripture reading, regular prayer for a specific person or topic throughout Lent, or volunteer work in their community. The choice to observe Lent is a personal one—the whole point is to focus your heart and mind on Jesus during the journey to Easter. There’s no requirement to observe it, nor should you feel guilted into participating. However, millions of Christians around the world do observe Lent each year; if you’ve never done so, why not give it a try? Whether you observe Lent in a small or major way, you’ll be amazed at what happens when you devote a part of each day to reflecting on Jesus Christ and God’s Word. We invite you to take a look at our own Lent resources, and to consider other ways that you can deepen your relationship with Jesus over the coming weeks. Whether you call it “Lent observance” or “daily devotions” or anything else, time spent reflecting on Jesus Christ is time well spent!
Lima, A.L., Farrington, J.W., and Reddy, C.M., Combustion-derived polycyclic aromatic hydrocarbons in the environment - A review. Environmental Forensics, 2005; v. 6, 109-131.
Combustion processes are responsible for the vast majority of the polycyclic aromatic hydrocarbons (PAHs) that enter the environment. This review presents and discusses some of the factors that affect the production (type of fuel, amount of oxygen, and temperature) and environmental fate (physicochemical properties, biodegradation, photodegradation, and chemical oxidation) of combustion-derived PAHs. Because different combustion processes can yield similar assemblages of PAHs, apportionment of sources is often a difficult task. Several of the frequently applied methods for apportioning sources of PAHs in the environment are also discussed.
Ulcerative colitis is a chronic inflammatory bowel disease. The inflammation triggers the appearance of small boil-like bumps that initially appear in the rectum, then spread through the wall of the colon. If not treated promptly, these bumps can burst and cause bleeding. Ulcerative colitis can affect anyone. However, most sufferers are men in the 15-to-35 age range. This condition can interfere with daily activities and even lead to life-threatening complications. It is recommended to seek immediate treatment to relieve symptoms.
Causes of Ulcerative Colitis
The exact cause of ulcerative colitis is unknown. However, an immune system malfunction is thought to be one of the factors. When your immune system tries to fight off a virus or bacteria, an abnormal immune response triggers your immune system to attack healthy cells in the digestive tract. Apart from immune system malfunction, several other factors are also thought to increase a person's risk of developing ulcerative colitis, such as:
- Genetic factors.
- Age.
- Environmental factors.
- Lifestyle factors, such as poor diet and stress.
- Other health problems, such as lupus.
When to See a Doctor for Ulcerative Colitis?
Immediately consult a gastroenterologist if you experience ulcerative colitis symptoms. A gastroenterologist will examine your medical history and perform a physical examination. Ulcerative colitis may have symptoms similar to other digestive disorders (for example, Crohn's disease). Thus, the specialist may conduct additional tests to confirm the diagnosis. Additional tests may include:
- Blood tests, performed to detect signs of infection and anemia.
- Stool tests, performed to find out if your colitis is caused by bacteria, viruses, or parasites.
- X-rays, performed to check the abdominal area and detect possible perforation in the intestine. 
- CT scan, performed to obtain a detailed picture of the abdominal and pelvic area and determine the extent of inflammation in the intestine.
- Colonoscopy, performed to check the condition of the inside of the large intestine.
- Endoscopy, performed to check the condition of the inside of the stomach, esophagus and small intestine.
- Biopsy, performed to check for the cause of intestinal inflammation through analysis of a sample of colon tissue.
Ulcerative colitis is a chronic disease, and its symptoms can appear and disappear over time. Consultation with a doctor is necessary to provide you with an accurate diagnosis as well as the right treatment. Use Smarter Health's free services to get the following benefits:
- Recommendation for a specialist doctor
- Check schedules & make doctor's appointments
- Calculate estimated treatment costs
Symptoms of Ulcerative Colitis
The symptoms of ulcerative colitis vary from patient to patient, depending on the severity and location of inflammation in the intestine. Patients may experience mild symptoms or none at all. Some of the common symptoms of ulcerative colitis are:
- Pain in the abdomen and rectum.
- Blood in stools.
- Drastic weight loss.
- Painful and swollen joints.
- Itchy skin and a rash.
- Swollen eyes.
Treatment for Ulcerative Colitis
Treatment for ulcerative colitis aims to relieve the inflammation that triggers symptoms. One treatment method is drug therapy. The type of medication given depends on the severity of the intestinal inflammation. Medications to treat colitis include:
In addition, the doctor may also prescribe immunosuppressants to suppress the immune system and stop the production of antibodies that can trigger inflammation. The types of immunosuppressants may include:
To manage other symptoms due to intestinal inflammation, your doctor may prescribe the following medications:
- Antibiotics to prevent and control infections.
- Anti-diarrheal drugs to treat diarrhea. 
For example, loperamide.
- Pain relievers. For example, paracetamol.
- Iron supplements to prevent anemia or iron deficiency.
If medications are not effective in relieving the symptoms, the doctor may recommend surgery. Surgery involves removing the entire colon and rectum; this procedure is called a proctocolectomy. During this procedure, the doctor will create a new digestive pathway by connecting the end of the small intestine to the anus, allowing you to expel waste relatively normally. If that is not possible, the doctor will create an opening in your abdomen so that you can pass stool directly into a small bag outside your body.
Treatment Cost for Ulcerative Colitis
The cost of ulcerative colitis treatment varies, depending on the treatment method and the medications prescribed by the doctor. Consult your doctor first to find out which treatment method is suitable for you. For more information regarding the estimated costs of ulcerative colitis treatment, contact Smarter Health.
Prevention of Ulcerative Colitis
The most effective way to prevent intestinal inflammation is to maintain a healthy lifestyle and a healthy diet. Preventative measures include:
- Limit intake of foods that can trigger intestinal inflammation, such as spicy foods.
- Avoid fatty foods.
- Consume adequate fluids every day.
- Limit consumption of dairy products.
- Exercise regularly.
- Control stress with relaxation techniques such as meditation or yoga.
Home Remedies for Patients Diagnosed with Ulcerative Colitis
Patients who undergo proctocolectomy surgery are also advised to undergo therapy. Your doctor will prescribe medications to prevent symptoms of recurring infection. This can be a lifelong therapy, depending on your overall health condition after treatment. Patients are also required to undergo regular medical check-ups. This allows the doctor to monitor your progress during the recovery period. 
Living a healthy lifestyle and reducing stress can also speed up the recovery process.
Washington, D.C., United States of America (38.89366 -76.98788)
The highest natural elevation in the District is 409 feet (125 m) above sea level, at Fort Reno Park in upper northwest Washington. The lowest point is sea level, at the Potomac River. The geographic center of Washington is near the intersection of 4th and L Streets NW.
Washington, D.C., 20500, United States of America (38.89489 -77.03655)
Coordinates: 38.73489 -77.19655 to 39.05489 -76.87655
- Minimum elevation: -1 m
- Maximum elevation: 182 m
- Average elevation: 72 m
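As a quick sanity check on the numbers above, the second listed coordinate (38.89489, -77.03655) is exactly the midpoint of the stated bounding box. A short sketch (the function name is mine, not from the page):

```python
def bbox_center(lat_min: float, lon_min: float,
                lat_max: float, lon_max: float) -> tuple:
    """Midpoint of a latitude/longitude bounding box.

    A plain arithmetic mean is fine at this small scale; it ignores
    the curvature of the Earth, which only matters for large boxes.
    """
    return ((lat_min + lat_max) / 2.0, (lon_min + lon_max) / 2.0)

# Bounding box listed for the Washington, D.C. elevation map.
center = bbox_center(38.73489, -77.19655, 39.05489, -76.87655)
# center matches the page's reference point (38.89489, -77.03655).
```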
Contrails are clouds with relatively sharp boundaries and hence may cast a sharp shadow on lower-altitude clouds. This shadow is then visible from the underside of the cloud, if the optical thickness of the cloud (the opacity) is not too large. If the lower-altitude cloud is cirrostratus, which is translucent, a three-dimensional shadow will form. This shadow is a plane defined by the sun and the (line-shaped) contrail. As a result, such a shadow is usually only visible if the contrail is in front of the sun for the observer. A contrail shadow usually appears in an odd direction with respect to the sun, and it takes a moment to realize that it is being cast on a lower-level cloud: perspective can be really deceptive.
Definition of value:
- a numerical quantity measured or assigned or computed; "the value assigned was 16 milliseconds"
- the quality (positive or negative) that renders something desirable or valuable; "the Shakespearean Shylock is of dubious value in the modern world"
- the amount (of money or goods or services) that is considered to be a fair equivalent for something else; "he tried to estimate the value of the produce at normal prices"
- relative darkness or lightness of a color; "I establish the colors and principal values by organizing the painting into three values--dark, medium...and light"--Joe Hing Lowe
- (music) the relative duration of a musical note
- an ideal accepted by some individual or group; "he has old-fashioned values"
- fix or determine the value of; assign a value to; "value the jewelry and art work in the estate"
- hold dear; "I prize these old photographs"
- regard highly; think much of; "I respect his judgement"; "We prize his creativity"
- evaluate or estimate the nature, quality, ability, extent, or significance of; "I will have the family jewels appraised by a professional"; "assess all the factors when taking a risk"
- estimate the value of; "How would you rate his chances to become President?"; "Gold was rated highly among the Romans"
Fires on Cape York Peninsula
Dozens of fires were scattered across central and southern Cape York Peninsula in northeastern Australia on December 14, 2005. When this image was acquired, the sensor detected actively burning areas (marked in red) across a broad region. Winds were pushing smoke plumes toward the west at the time of the image. Natural resource managers in northern Australia use fire to help maintain native grassland and control exotic species. Many of the fires in this image may be land-management fires set by public or private land managers; this hot and dry region is prone to frequent fires. Cape York is tropical savanna: grassland with sparse trees and shrubs. Most of the world's savannas lie near the equator, sandwiched between tropical rainforests and deserts. The Serengeti of Africa is probably the best-known example of a savanna, but savannas also exist in South America, large portions of Africa to the north and south of the Serengeti, Asia (mostly confined to India) and northern Australia. Australia's savanna is home to many of its more famous wildlife species, including kangaroos, wallabies, echidnas, and saltwater crocodiles. Many of these species are under pressure because of the harmful impacts of introduced species, such as cane toads, pigs, cats, and even horses.
<urn:uuid:ec89b3e4-ffd6-4766-b3de-3375be866c15>
{ "dump": "CC-MAIN-2014-52", "url": "http://www.redorbit.com/images/pic/9830/fires-on-cape-york-peninsula/", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1419447548611.3/warc/CC-MAIN-20141224185908-00062-ip-10-231-17-201.ec2.internal.warc.gz", "language": "en", "language_score": 0.9578163623809814, "token_count": 289, "score": 3.53125, "int_score": 4 }
What is Alopecia Areata?

Alopecia areata, also known as spot baldness, is an autoimmune disease in which hair is lost from some or all areas of the body. Usually the hair loss occurs on the scalp: the body fails to recognize its own cells and destroys its own tissue as if it were an invader. Often it causes bald spots on the scalp, especially in the first stages. In 1–2% of cases, the condition can spread to the entire scalp (alopecia totalis) or to the entire epidermis (alopecia universalis). Conditions resembling alopecia areata, with a similar cause, also occur in other species, not only in humans. There are two broad types of alopecia:
- scarring alopecia, where there is fibrosis, inflammation, and loss of hair follicles, and
- nonscarring alopecia, where the hair shafts are gone but the hair follicles are preserved, making this type of alopecia reversible.

Commonly, alopecia areata involves hair loss in one or more round spots on the scalp.
- Hair may also be lost more diffusely over the whole scalp, in which case the condition is called diffuse alopecia areata.
- Alopecia areata monolocularis describes baldness in only one spot. It may occur anywhere on the head.
- Alopecia areata multilocularis refers to multiple areas of hair loss.
- Ophiasis refers to hair loss in the shape of a wave at the circumference of the head.
- The disease may be limited only to the beard, in which case it is called alopecia areata barbae.
- If the patient loses all the hair on the scalp, the disease is then called alopecia totalis.
- If all body hair, including pubic hair, is lost, the diagnosis then becomes alopecia universalis.

Alopecia areata totalis and universalis are very rare.

Signs and symptoms

Typical first symptoms of alopecia areata are small bald patches. The underlying skin is unscarred and looks superficially normal. These patches can take many shapes, but are most usually round or oval.
Alopecia areata most often affects the scalp and beard, but may occur on any part of the body with hair. Different areas of the skin may exhibit hair loss and regrowth at the same time. The disease may also go into remission for a time, or may be permanent. It is common in children. The area of hair loss may tingle or be painful. The hair tends to fall out over a short period of time, with the loss commonly occurring more on one side of the scalp than the other. When healthy hair is pulled, at most a few strands should come out, and the pulled-out hairs should not be distributed evenly across the tugged portion of the scalp. In cases of alopecia areata, hair will tend to pull out more easily along the edge of the patch, where the follicles are already being attacked by the body's immune system, than away from the patch, where they are still healthy.

Alopecia areata is thought to be a systemic autoimmune disorder in which the body attacks its own anagen hair follicles and suppresses or stops hair growth. Alopecia areata is not contagious. It occurs more frequently in people who have affected family members, suggesting heredity may be a factor. Strong evidence of genetic association with increased risk for alopecia areata was found by studying families with two or more affected members. This study identified at least four regions in the genome that are likely to contain these genes. In addition, it is slightly more likely to occur in people who have relatives with autoimmune diseases.

Treatment with Hair Growth Activation and Ecuri Anti Hair Loss Lotion

If the affected region is small, it is reasonable simply to observe the progression of the illness, as the problem often regresses spontaneously and the hair may grow back. In most cases which begin with a small number of patches of hair loss, hair grows back after a few months to a year. These photographs were taken during the treatment of alopecia areata in a 16-year-old girl.
The whole treatment took three sessions to achieve the results shown in the last picture. The client was treated with the Hair Growth Activation mix and the Ecuri Anti Hair Loss Lotion over three months. New hair growth started to show within several weeks after the first treatment with the Hair Growth Activator.

There is no loss of body function, and the effects of alopecia areata are mainly psychological (loss of self-image due to hair loss), although these can be severe. Loss of hair also means the scalp burns more easily in the sun. Hair may grow back and then fall out again later. This may not indicate a recurrence of the condition, but rather a natural cycle of growth and shedding from a relatively synchronised start; such a pattern will fade over time. Episodes of alopecia areata before puberty predispose to chronic recurrence of the condition. Alopecia can be a cause of psychological stress. Because hair loss can lead to significant changes in appearance, individuals with it may experience social phobia, anxiety, and depression.
<urn:uuid:540a2ed5-1a15-4ac5-99e5-f912bc1b8c27>
{ "dump": "CC-MAIN-2019-18", "url": "https://www.hairlosssolutions.eu/alopecia-areata/", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578529813.23/warc/CC-MAIN-20190420120902-20190420142902-00185.warc.gz", "language": "en", "language_score": 0.9558553695678711, "token_count": 1177, "score": 3.46875, "int_score": 3 }
Old Growth Forest Management
- Wachusett Mountain has the largest known stand of old growth forest in Massachusetts. It created an Old Growth Management Policy in conjunction with DCR in 1998 to help educate about, protect and study the areas of old growth forest located on the mountain.
- This includes monitoring all snowmaking to prevent excessive ice on trees within the old growth; annual inspection with DCR of hazard trees that may need removal; and constant monitoring and patrolling of old growth areas with DCR to prevent unauthorized access.

Land Conservation and Protection
- Wachusett placed more than 100 acres of private adjacent forest land owned by the mountain into a forest protection program monitored by DCR. Wachusett has pledged to help fund a full-time ecologist for DCR for an ongoing program of ecological monitoring, research and management of the biological resources of the state reservation.
- Studies have shown Massachusetts has a shortage of open meadow land (other state parks have cleared trees to create more open space). Ski trails create perfect open meadow areas to foster biological diversity.
- Wachusett has developed a program of rotational mowing to foster a variety of meadow species, including some rare naturally occurring plants. Wachusett also adjusted the design layout of a trail in its new Vickery Bowl to afford more protection for the mountain laurel.
- As part of its ongoing commitment to creating a balance between environmental protection and recreation, Wachusett Mountain Associates proudly supports the Massachusetts Forest Stewardship Program (a program of the Massachusetts Department of Conservation & Recreation with funds from the USDA Forest Service).
Wachusett privately owns three parcels of land - totaling approximately 56 acres - adjacent to the ski area that are registered in the state's Forest Management Plan.
- Approximately 8 acres of parcel, located on the town line of Princeton and Westminster, have been forested. The program requires forest management practices to be completed within 10 years of registering the land in the program. This parcel of land includes a stand of predominantly Northern Red Oak, Black Oak, White Ash, Red Maple and Eastern White Pine. Other species include Black Birch, White Birch, Sugar Maple and Eastern Hemlock. Approximately 45% of the trees are acceptable growing stock. This species mix is the result of past harvesting practices, which removed most of the merchantable oaks and pines, leaving less desirable species of poor quality.
- The two main objectives of the Forest Management Program are to grow forest products and enhance the recreational and aesthetic values of the property. Through good forest management practices, it is our goal to protect the soil and water values of the property and develop trails for cross country skiing, hiking and viewing of wildlife. There are several intermittent streams that flow across the property, and these streams will be protected during harvesting activity, as Wachusett Lake, a public water supply, is located immediately downstream. The Mid-State Trail crosses through this stand and offers an excellent opportunity to view wildlife and cascading brooks during the winter and early spring. Patch clear cuts in this stand would open great views of Wachusett Lake. These patch cuts will also create valuable browse for wildlife.
- Recreational Use and Habitat Protection -- There are numerous stone walls on the upper slopes of this stand and an excellent view of Wachusett Lake from many areas in the stand. There is an old camp located high on the slope at a site which could be developed into a cross country ski lodge.
The Mid-State Trail passes through this stand from east to west, and several logging roads provide hiking and skiing opportunities. There is a small patch of spruce saplings located at the wall corner in the middle of this stand which is surrounded by several cull pines. This property offers an excellent opportunity for developing recreational uses. It is the landowner's goal to open up a network of trails, create vistas and manage wildlife habitat for the enjoyment of skiers and hikers. This parcel, in conjunction with two adjacent parcels under the same ownership, offers a unique opportunity to develop cross country ski and hiking trails throughout the property.
- "We welcome and applaud your commitment to the stewardship of your woodlands," said Steve Anderson, forest stewardship coordinator. "In large measure the continued good health of the state's forests is up to the 235,000 woodland owners like you who choose to act. In joining the Stewardship Program, you join landowners across the state who are balancing both the ecological and social values of our rich and diverse forests. Much is changing in the arena of land use, property rights and environmental health. We're aiming toward solutions that keep ecosystems whole, yet still allow people to benefit from forest resources. On behalf of the State Stewardship Committee, I thank you for your commitment."

Energy Conservation and Use of Alternative Fuels
- Wachusett is one of only three ski resorts in the Northeast to be converting 100% of its waste cooking oil into environmentally friendly biodiesel to fuel its five grooming vehicles, converting 2,500 gallons of waste cooking oil from the restaurant and cafeteria (plus the mountain-owned Wachusett Village Inn) into biofuel. In addition to helping run the mountain's five snow cats, the biodiesel product will also be utilized by the 4 diesel-powered backup lift engines and the area's 4 on-premise snow removal vehicles.
- Princeton Municipal Light Department and Community Energy, Inc. partnered in 2007 to build and operate two 1.5-megawatt wind turbines on Wachusett Mountain in an area on the back side below the summit. Scheduled for completion in 2007, these new windmills will assist Wachusett Mountain Ski Area in reducing overall energy costs.
- Installed a state-of-the-art snowmaking compressor system which utilizes re-circulated heat from the air compressors to supply the base lodge with 100% of its heat, significantly reducing electrical consumption.
- Wachusett constructed a new snowmaking compressor building in 1997 utilizing this technology with even more energy-efficient compressors.
- The mountain also uses specially designed HKD tower snow guns for snowmaking, which are eight times more efficient than conventional water guns and significantly reduce noise levels.
- Automated energy management system to control conditions in the base lodge
- Bio-Fuel
- Wind Power

Water Conservation and Protection
<urn:uuid:4e8c1a28-521c-4313-ae4a-1acbf49233e9>
{ "dump": "CC-MAIN-2015-18", "url": "http://www.wachusett.com/TheMountain/GreenInitiatives/ConservationProtection/tabid/86/Default.aspx", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246633972.52/warc/CC-MAIN-20150417045713-00005-ip-10-235-10-82.ec2.internal.warc.gz", "language": "en", "language_score": 0.9348552227020264, "token_count": 1333, "score": 2.515625, "int_score": 3 }
Every day Americans experience the horror of fire. But most people don't understand fire. Only when we know the true nature of fire can we prepare ourselves and our families. Each year more than 4,000 Americans die and approximately 25,000 are injured in fires, many of which could be prevented. The United States Fire Administration (USFA), a division of the Federal Emergency Management Agency (FEMA), believes that fire deaths can be reduced by teaching people the basic facts about fire. Below are some simple facts that explain the particular characteristics of fire.

Fire is FAST! There is little time! In less than 30 seconds a small flame can get completely out of control and turn into a major fire. It only takes minutes for thick black smoke to fill a house. In minutes, a house can be engulfed in flames. Most fires occur in the home when people are asleep. If you wake up to a fire, you won't have time to grab valuables because fire spreads too quickly and the smoke is too thick. There is only time to escape.

Fire is HOT! Heat is more threatening than flames. A fire's heat alone can kill. Room temperatures in a fire can be 100 degrees at floor level and rise to 600 degrees at eye level. Inhaling this super-hot air will scorch your lungs. This heat can melt clothes to your skin. In five minutes a room can get so hot that everything in it ignites at once: this is called flashover.

Fire is DARK! Fire isn't bright, it's pitch black. Fire starts bright, but quickly produces black smoke and complete darkness. If you wake up to a fire you may be blinded, disoriented and unable to find your way around the home you've lived in for years.

Fire is DEADLY! Smoke and toxic gases kill more people than flames do. Fire uses up the oxygen you need and produces smoke and poisonous gases that kill. Breathing even small amounts of smoke and toxic gases can make you drowsy, disoriented and short of breath.
The odorless, colorless fumes can lull you into a deep sleep before the flames reach your door. You may not wake up in time to escape.

Fire Safety Tips

In the event of a fire, remember time is the biggest enemy and every second counts! Escape first, then call for help. Develop a home fire escape plan and designate a meeting place outside. Make sure everyone in the family knows two ways to escape from every room. Practice feeling your way out with your eyes closed. Never stand up in a fire, always crawl low under the smoke and try to keep your mouth covered. Never return to a burning building for any reason; it may cost you your life. Finally, having a working smoke alarm dramatically increases your chances of surviving a fire. And remember to practice a home escape plan frequently with your family.

Working Together for Home Fire Safety

More than 4,000 Americans die each year in fires and approximately 25,000 are injured. An overwhelming number of fires occur in the home. There are time-tested ways to prevent and survive a fire. It's not a question of luck. It's a matter of planning ahead.

Every Home Should Have at Least One Working Smoke Alarm

Buy a smoke alarm at any hardware or discount store. It's inexpensive protection for you and your family. Install a smoke alarm on every level of your home. A working smoke alarm can double your chances of survival. Test it monthly, keep it free of dust and replace the battery at least once a year. Smoke alarms themselves should be replaced after ten years of service, or as recommended by the manufacturer.

Prevent Electrical Fires

Never overload circuits or extension cords. Do not place cords and wires under rugs, over nails or in high traffic areas. Immediately shut off and unplug appliances that sputter, spark or emit an unusual smell. Have them professionally repaired or replaced.

STOP, DROP, AND ROLL

Each year, hundreds of people are killed or seriously injured from burns received when their clothing catches fire.
Many of these fatalities and serious injuries could be prevented if the proper procedures were followed. The "Stop, Drop & Roll" technique is relatively simple to follow. It is important that our senior citizens and small children be instructed in this technique, as these two groups are statistically most susceptible to clothing-related fires.

STOP. If your clothing catches fire, immediately stop. DO NOT RUN, or try to pat out the flames with your hands.

DROP. Immediately drop to the floor or ground and lie out flat.

ROLL. Once on the ground or floor, roll over and over, smothering the flames as you roll. If a blanket, rug or large jacket is available it can be used to wrap the body, also smothering the flames.

STOP, DROP and ROLL can save your life!

When to Call 911

Learning what is an emergency goes hand in hand with learning what isn't. A fire, an intruder in the home, an unconscious family member - these are all things that would require a call to 911. A skinned knee, a stolen bicycle, or a lost pet wouldn't. Still, teach your child that if ever in doubt and there's no adult around to ask, to always make the call. It's much better to be safe than sorry. Make sure your child understands that calling 911 as a joke is a crime in many places. In some cities, officials estimate that as much as 75% of the calls made to 911 are non-emergency calls. These are not all pranks. Some people accidentally push the emergency button on their cell phones. Others don't realize that 911 is for true emergencies only. That means it's not for such things as a flat tire or even about a theft that occurred the week before. Stress to your child that whenever an unnecessary call is made to 911, it can delay a response to someone who actually needs it. Most areas now have what is called enhanced 911, which enables a call to be traced to the location from which it was made. So if someone dials 911 as a prank, emergency personnel could be dispatched directly to that location.
Not only could this mean life or death for someone having a real emergency on the other side of town, it also means that it's very likely the prank caller will be caught and punished.

How to Use 911

Although most 911 calls are now traced, it's still important for your child to have your street address and phone number memorized. Your child will need to give that information to the operator as a confirmation so time isn't lost sending emergency workers to the wrong address. Make sure your child knows that even though he or she shouldn't give personal information to strangers, it's OK to trust the 911 operator. Walk him or her through some of the questions the operator will ask, including:
- Where are you calling from? (Where do you live?)
- What type of emergency is this?
- Who needs help?
- Is the person awake and breathing?

Explain to your child that it's OK to be frightened in an emergency, but that it's important to stay calm, speak slowly and clearly, and give as much detail to the 911 operator as possible. If your child is old enough to understand, also explain that the emergency dispatcher may give first-aid instructions before emergency workers arrive at the scene. Make it clear that your child should not hang up until the person on the other end says it's OK, otherwise important instructions or information could be missed.

More Safety Tips

Here are some additional safety tips to keep in mind. Always refer to the emergency number as "nine-one-one," not "nine-eleven." In an emergency, your child may not know how to dial the number correctly because of trying to find the "eleven" button on the phone. Make sure your house number is clearly visible from the street so that police, fire, or ambulance workers can easily locate your address. If you live in an apartment building, make sure your child knows the apartment number and floor you live on. Keep a list of emergency phone numbers handy near each phone for your children or baby sitter.
This should include police, fire, and medical numbers (this is particularly important if you live in one of the few areas where 911 is not in effect), as well as a number where you can be reached, such as your cell phone, pager, or work number. In the confusion of an emergency, calling from a printed list is simpler than looking in the phone book or figuring out which is the correct speed-dial number. The list should also include known allergies, especially to any medication, medical conditions, and insurance information. If you have special circumstances in your house, such as an elderly grandparent or a person with a heart condition, epilepsy, or diabetes living in your home, prepare your child by discussing specific emergencies that could occur and how to spot them. Keep a first-aid kit handy and make sure your child and baby sitters know where to find it. When your child is old enough, teach him or her basic first aid.

The Impact of Smoke Alarms

In the 1960s, the average U.S. citizen had never heard of a smoke alarm. By 1995, an estimated 93 percent of all American homes - single- and multi-family homes, apartments, nursing homes, dormitories, etc. - were equipped with alarms. By the mid-1980s, smoke alarm laws - requiring that alarms be placed in all new and existing residences - existed in 38 states and thousands of municipalities nationwide. And smoke alarm provisions have been adopted by all of the model building code organizations. Fire services across the country have played a major and influential public education role in alerting the public to the benefits of smoke alarms. Another key factor in this huge and rapid penetration of both the marketplace and the builder community has been the development and marketing of low-cost alarms by commercial companies.
In the early 1970s, the cost of protecting a three-bedroom home with professionally installed alarms was approximately $1,000; today the cost of owner-installed alarms in the same house has come down to as little as $10 per alarm, or less than $50 for the entire home. This cost structure, combined with effective public education (including key private-public partnerships), has caused a huge percentage of America's consumers, whether they are renting or buying, to demand smoke alarm protection.

The impact of smoke alarms on fire safety and protection is dramatic and can be simply stated. When fire breaks out, the smoke alarm, functioning as an early warning system, reduces the risk of dying by nearly 50 percent. Alarms are most people's first line of defense against fire. In the event of a fire, properly installed and maintained smoke alarms will provide an early warning signal to your household. This alarm could save your own life and those of your loved ones by providing the chance to escape. Smoke alarms are the single most important means of preventing house and apartment fire fatalities, and one of the best safety features you can buy and install to protect yourself, your family and your home.

Install smoke alarms on every level of your home, including the basement. Many fatal fires begin late at night or in the early morning. For extra safety, install smoke alarms both inside and outside the sleeping area. Also, smoke alarms should be installed on the ceiling or 6 to 8 inches below the ceiling on side walls. Since smoke and many deadly gases rise, installing your smoke alarms at the proper level will provide you with the earliest warning possible. Always follow the manufacturer's installation instructions. Many hardware, home supply or general merchandise stores carry smoke alarms.
Make sure the alarm you buy is UL-listed. If you are unsure where to buy one in your community, call your local fire department (on a non-emergency telephone number) and they will provide you with some suggestions. Some fire departments offer smoke alarms for little or no cost.

Installation is not difficult at all. In most cases, all you will need is a screwdriver. Many brands are self-adhesive and will automatically stick to the wall or ceiling where they are placed. However, be sure to follow the directions from the manufacturer because each brand is different. If you are uncomfortable standing on a ladder, ask a relative or friend for help. Some fire departments will actually install a smoke alarm in your home for you. Call your local fire department (again, on a non-emergency telephone number) if you have problems installing a smoke alarm.

Smoke alarms are very easy to take care of. There are two steps to remember.
- Replace the batteries at least once a year. Tip: Pick a holiday or your birthday and replace the batteries each year on that day. Some smoke alarms now on the market come with a ten-year battery. These alarms are designed to be replaced as a whole unit, thus avoiding the need for battery replacement. If your smoke alarm starts making a "chirping" noise, replace the batteries and reset it.
- Keep them clean. Dust and debris can interfere with their operation, so vacuum over and around your smoke alarm regularly.

If the alarm goes off while you are cooking, then it's doing its job. Do not disable your smoke alarm if it alarms due to cooking or other non-fire causes. You may not remember to put the batteries back in the alarm after cooking. Instead, clear the air by waving a towel near the alarm, leaving the batteries in place. The alarm may have to be moved to a new location.

A smoke alarm will last about eight to ten years, after which it should be replaced. Like most electrical devices, smoke alarms wear out. You may want to write the purchase date with a marker on the inside of your unit. That way, you'll know when to replace it.
Always follow the manufacturer's instructions for replacement. Some smoke alarms are considered to be "hard wired." This means they are connected to the household electrical system and may or may not have battery back-up. It's important to test every smoke alarm monthly. And always use new batteries when replacing old ones.

When used properly, portable fire extinguishers can save lives and property by putting out a small fire or containing it until the fire department arrives. Portable fire extinguishers for home use, however, are not designed to fight large or spreading fires. Even for small fires they are useful only under certain conditions:
- The operator must know how to use the extinguisher. There is no time to read directions during an emergency.
- The extinguisher must be within easy reach and in working order, fully charged.
- The operator must have a clear escape route that will not be blocked by fire.
- The extinguisher must match the type of fire being fought. Extinguishers that contain water are unsuitable for use on grease and electrical fires.
- The extinguisher must be large enough to put out the fire. Many portable extinguishers discharge completely in as few as 8 to 10 seconds.

What Type of Extinguisher Should I Use?

There are three basic classes of fires, and all extinguishers are labeled as to what type of fire they can put out. They will have standard symbols on them, and a red slash through a symbol tells you the extinguisher cannot be used on that kind of fire. The fire extinguisher must be appropriate for the type of fire being fought. If you use the wrong kind of fire extinguisher, you can make the fire worse and endanger yourself (for example, if you use a water extinguisher on an electrical fire, you'll find that to be quite a shocking experience ... using a pressurized extinguishing agent on a grease fire will spread the fire rather than extinguishing it). Multipurpose fire extinguishers can be used on all three classes of fires.
Class A Ordinary Combustibles

Extinguish ordinary combustibles by cooling the material below its ignition temperature and soaking the fibers to prevent re-ignition. Use pressurized water, foam or multipurpose (ABC-rated) dry chemical extinguishers. DO NOT USE carbon dioxide or ordinary (BC-rated) dry chemical extinguishers on Class A fires.

Class B Flammable Liquids, Greases, or Gases

Extinguish flammable liquids, greases or gases by removing the oxygen, preventing the vapors from reaching the ignition source or inhibiting the chemical chain reaction. Foam, carbon dioxide, ordinary (BC-rated) dry chemical, multipurpose dry chemical, and halon extinguishers may be used to fight Class B fires.

Class C Energized Electrical Equipment

Extinguish energized electrical equipment by using an extinguishing agent that is not capable of conducting electrical currents. Carbon dioxide, ordinary (BC-rated) dry chemical, multipurpose dry chemical and halon fire extinguishers may be used to fight Class C fires. DO NOT USE water extinguishers on energized electrical equipment.

What Size Extinguisher Should I Buy?

Portable fire extinguishers are also rated for the size of fire they can handle. This rating will appear on the label - for example, 2A:10B:C. The larger the numbers, the larger the fire that the extinguisher can put out ... but the higher-rated models are often much heavier. Make sure you can hold and operate an extinguisher before you buy it.

What You Need to Know About Installing and Maintaining Extinguishers

Fire extinguishers should be installed in plain view, above the reach of children, near an escape route and away from stoves and heating appliances. Fire extinguishers require some routine care. Make sure you read your operator's manual to learn how to inspect your fire extinguisher. Follow the manufacturer's instructions on maintaining the extinguisher.
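The class and rating rules above lend themselves to a small lookup. The following is a rough sketch, not an official standard: the agent names and the suitability table are distilled from the class descriptions in this guide, and `parse_rating` is a hypothetical helper for splitting a label such as "2A:10B:C" into its parts.

```python
import re

# Suitability table distilled from the Class A/B/C descriptions above
# (an illustrative sketch, not a substitute for the extinguisher's label).
SUITABLE_AGENTS = {
    "A": {"pressurized water", "foam", "multipurpose dry chemical"},
    "B": {"foam", "carbon dioxide", "ordinary dry chemical",
          "multipurpose dry chemical", "halon"},
    "C": {"carbon dioxide", "ordinary dry chemical",
          "multipurpose dry chemical", "halon"},
}

def can_use(agent: str, fire_class: str) -> bool:
    """True if the agent is listed as suitable for the given fire class."""
    return agent in SUITABLE_AGENTS.get(fire_class, set())

def parse_rating(label: str) -> dict:
    """Split a label such as '2A:10B:C' into {class: size number or None}."""
    rating = {}
    for part in label.split(":"):
        m = re.fullmatch(r"(\d*)([ABC])", part)
        if m:
            rating[m.group(2)] = int(m.group(1)) if m.group(1) else None
    return rating

print(can_use("pressurized water", "C"))  # False: never water on electrical
print(parse_rating("2A:10B:C"))           # {'A': 2, 'B': 10, 'C': None}
```

Note how the table encodes the "DO NOT USE" warnings by omission: water simply never appears in the B or C sets.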
Rechargeable models must be serviced after every use (look in the Yellow Pages of your telephone directory under "Fire Extinguishers" for local companies that service them). Disposable fire extinguishers can be used only one time and must be replaced after use.

How To Use Portable Fire Extinguishers

Remember the PASS system:
- Pull the pin
- Aim the extinguisher nozzle at the base of the flames
- Squeeze the trigger while holding the extinguisher upright
- Sweep the extinguisher from side to side

ALWAYS make sure the fire department is called and inspects the fire site, even if you think you have extinguished the fire!

Should You Try to Fight the Fire?

Before you begin to fight a fire:
- Make sure everyone has left or is leaving the building
- Make sure the fire department has been called
- Make sure the fire is confined to a small area and is not spreading
- Make sure you have an unobstructed escape route to which the fire will not spread
- Make sure you have read the instructions and know how to use the extinguisher

It is reckless to fight a fire in any other circumstances. Instead, leave immediately and close off the area.

Exposing an Invisible Killer: The Dangers of Carbon Monoxide

Each year in America, carbon monoxide poisoning claims approximately 480 lives and sends another 15,200 people to hospital emergency rooms for treatment. The United States Fire Administration (USFA) and the National Association of Home Builders (NAHB) would like you to know that there are simple steps you can take to protect yourself from deadly carbon monoxide fumes.

UNDERSTANDING THE RISK

What is carbon monoxide? Carbon monoxide is an odorless, colorless and toxic gas. Because it is impossible to see, taste or smell the toxic fumes, CO can kill you before you are aware it is in your home. At lower levels of exposure, CO causes mild effects that are often mistaken for the flu. These symptoms include headaches, dizziness, disorientation, nausea and fatigue.
The effects of CO exposure can vary greatly from person to person depending on age, overall health and the concentration and length of exposure.

Where does carbon monoxide come from? CO gas can come from several sources: gas-fired appliances, charcoal grills, wood-burning furnaces or fireplaces and motor vehicles.

Who is at risk? Everyone is at risk for CO poisoning. Medical experts believe that unborn babies, infants, children, senior citizens and people with heart or lung problems are at even greater risk for CO poisoning.

WHAT ACTIONS DO I TAKE IF MY CARBON MONOXIDE ALARM GOES OFF?

What you need to do if your carbon monoxide alarm goes off depends on whether anyone is feeling ill or not.

If no one is feeling ill:
1. Silence the alarm.
2. Turn off all appliances and sources of combustion (e.g. furnace and fireplace).
3. Ventilate the house with fresh air by opening doors and windows.
4. Call a qualified professional to investigate the source of the possible CO buildup.

If illness is a factor:
1. Evacuate all occupants immediately.
2. Determine how many occupants are ill and determine their symptoms.
3. Call your local emergency number and, when relaying information to the dispatcher, include the number of people feeling ill.
4. Do not re-enter the home without the approval of a fire department representative.
5. Call a qualified professional to repair the source of the CO.

PROTECT YOURSELF AND YOUR FAMILY FROM CO POISONING

Install at least one UL (Underwriters Laboratories) listed carbon monoxide alarm with an audible warning signal near the sleeping areas and outside individual bedrooms. Carbon monoxide alarms measure levels of CO over time and are designed to sound an alarm before an average, healthy adult would experience symptoms. It is very possible that you may not be experiencing symptoms when you hear the alarm. This does not mean that CO is not present.
Have a qualified professional check all fuel burning appliances, furnaces, venting and chimney systems at least once a year. Never use your range or oven to help heat your home and never use a charcoal grill or hibachi in your home or garage. Never keep a car running in a garage. Even if the garage doors are open, normal circulation will not provide enough fresh air to reliably prevent a dangerous buildup of CO. When purchasing an existing home, have a qualified technician evaluate the integrity of the heating and cooking systems, as well as the sealed spaces between the garage and house. The presence of a carbon monoxide alarm in your home can save your life in the event of CO buildup.
– Bangkok, Thailand. The IUCN World Conservation Congress has adopted a resolution calling for United Nations (UN) action to protect the world’s oceans from high seas bottom trawl fishing. The resolution, sponsored by the Costa Rican government and 11 organisations, passed with clear support in both chambers of the congress, achieving a majority of 62 votes in favour to 35 against by governments and 281 to 5 by non-governmental organisations. The resolution calls for the UN General Assembly to impose a moratorium on the destructive practice of high seas bottom trawling. “This is an important breakthrough in the campaign to preserve the fragile habitats of the high seas,” said Matthew Gianni, political advisor to the Deep Sea Conservation Coalition. “The UN General Assembly failed to take decisive action on this issue earlier this month but the support of the IUCN and its government members will bring additional strength to the campaign for a moratorium next year.” The resolution calling for a moratorium was most vociferously opposed by Canada, Japan and Spain. Spain has been the most active nation opposing any limitations on high seas bottom trawling and is responsible for around 40% of the high seas catch. “A few hundred bottom trawl vessels from a handful of countries are roaming the high seas, fishing as they please,” said Randall Arauz, of the Costa Rican Sea Turtle Restoration Program, who spoke in favour of the resolution during the plenary debate on Wednesday afternoon (23rd November). “The high seas are the world’s global commons and yet these few fleets benefit from the fisheries while destroying deep sea biodiversity to the detriment of all humankind.” Notes to Editors: The Deep Sea Conservation Coalition is an alliance of 30 international organisations, representing millions of people around the world, which is calling for a moratorium on high seas bottom trawling.
Scientists increasingly recognise the deep oceans as one of the earth’s great reservoirs of biodiversity. Estimates for the number of species inhabiting the ocean’s depths run as high as 10 million. For further information, please contact Mirella von Lindenfels on ++ 44 20 8882 5041 or mobile ++ 44 (0) 7717 844 352
Fashion has evolved from a pure styling perspective into a more sustainable way of living. Patrons of sustainable living have come up with innovative ideas and alternatives to conventional toxic materials. To balance humanity's symbiotic relationship with nature, experts have started to come up with pioneering ideas to deal with this urgent situation, and the demand for expertise in this domain is increasing at a drastic rate. People are adopting a way of living in greater harmony with nature. Fashion is not limited to the way we dress, drape, or carry ourselves; it is a broader perspective for shaping our personalities. If you are interested in pursuing a career in the fashion industry and exploring its different subject areas, you should consider an undergraduate BA fashion course. Read ahead to learn more about the different types of fashion degrees available at world-class educational institutions:

- Fashion and Apparel Designing: Experts in this domain design, manufacture, and supervise manufacturing plants to maximize efficiency and production. They oversee multiple departments, including the shipping of the final product, troubleshooting mechanical problems, the smooth flow of the numerous production stages, and meeting shipment deadlines.
- Fashion Business and Retail Management: These professionals employ trend analysis techniques to develop products and determine the factors that appeal to consumers.
- Lifestyle and Accessory Design: This program focuses on advising people on making big fashion decisions. Graduates recommend colors, outfits, styles, palettes, and fabrics in line with the latest trends and fashion conventions. They understand the client’s needs and personalize their suggestions to the client’s aesthetic preferences, designing products around body type, occasion, and price range.
- Fashion marketing: This course is designed to give students a grounding in advertising campaigns for managing fashion-related brands, businesses, and stores. These professionals research and travel to new places looking for potential areas to launch new stores, supervise teams of marketing professionals, and evaluate products using their knowledge of garment quality and fashion trends. They are trained to carry out detailed analyses of advertisements and campaigns to monitor brand quality, and they develop strategies based on feedback from customers.
- Fashion communication: Students who love to write about new fashion trends and give their insight into the exciting world of fashion become fashion journalists. They write everything from online blogs, print magazines, and trade publications to e-commerce websites and PR copy. Fashion communication programs are designed to train the workforce to critically evaluate the latest trends and designs.

Experts in this domain are also trying to find ethical substances that have a low impact on the environment and can act as alternatives that reduce the use of pesticides, polyester, and chemical processes in the production houses of the fashion industry. So, if you are interested in joining the future leaders, develop your skills and knowledge with fashion design courses and be part of this dynamic industry.
Very simple, versatile modular design. No limits to the number of modules used in the ring.

Circuit diagram using LEDs:
Circuit diagram using Lamps:

Parts list:
R1 - 1K5 1/4W Resistor
R2 - 680R 1/4W Resistor (Optional, see text)
C1 - 47µF 25V Electrolytic Capacitor
D1 - LED, any type
Q1 - BC337 45V 800mA NPN Transistor
P1 - SPST Pushbutton
LP1 - Filament Lamp, 12 or 24V (See text)

The purpose of this circuit was to create a ring in which LEDs or lamps illuminate sequentially. Its main feature is high versatility: you can build a loop containing any number of LEDs or lamps, as each illuminating device has its own small circuit. The diagrams show three-stage circuits for simplicity: you can add an unlimited number of stages (shown in dashed boxes), provided the last stage output is returned to the first stage input, as shown.

The purpose of pushbutton P1 is to allow a sure start of the sequence at power-on but, when a high number of stages is used, it also allows illumination of more than one LED or lamp at a time, e.g. one device illuminated and three out, and so on. After power-on, P1 should be held closed until only the LED or lamp related to the module to which the pushbutton is connected remains steadily illuminated. When P1 is released the sequencer starts; if P1 is pushed briefly after the sequence is started, several types of sequence can be obtained, depending on the total number of stages.

If one LED per module is used, the voltage supply can range from 6 to 15V. You can use several LEDs per module; they must be wired in series and the supply voltage must be matched to their number. Using a 24V supply (the maximum permitted voltage), about 10 LEDs wired in series can be connected to each module, about 7 at 15V and no more than 5 at 12V. The right number of LEDs can vary, as it depends on their color and the brightness required. Using lamps, the voltage supply can range from 9 to 24V. Obviously, the lamp voltage must be the same as the supply voltage. In any case, lamps may also be wired in series, e.g.
four 6V lamps wired in series can be connected to each module and powered by a 24V supply. If you intend to use lamps drawing more than 400mA, the BC337 transistors should be replaced by Darlington types like BD677, BD679, BD681, 2N6037, 2N6038, 2N6039, etc. As Darlington transistors usually have a built-in base-emitter resistor, R1 may be omitted, further reducing the parts count. Sequencer speed can be varied by changing the value of C1. A similar design appeared in print about forty years ago. It used germanium transistors and low voltage lamps. I think the use of LEDs, silicon transistors, Darlington transistors, and a 24V supply is an interesting improvement.
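Since sequencer speed is set by C1, the scaling is easy to sketch. The snippet below is illustrative only: it assumes, hypothetically, that the per-stage delay is roughly proportional to the R·C time constant formed by C1 and the base resistor R1; the actual delay also depends on supply voltage and transistor switching thresholds.

```python
# Illustrative only: assumes per-stage delay scales with the R*C time
# constant of C1 (47 uF) and R1 (1.5 kOhm). Real delays also depend on
# supply voltage and transistor switching thresholds.
def stage_delay(c1_farads, r1_ohms=1500):
    """Rough relative per-stage delay, proportional to R*C (seconds)."""
    return r1_ohms * c1_farads

base = stage_delay(47e-6)      # C1 = 47 uF, the value in the parts list
slower = stage_delay(100e-6)   # a larger C1 slows the sequence down
faster = stage_delay(10e-6)    # a smaller C1 speeds it up
```

Under this assumption, doubling C1 roughly doubles the per-stage delay, which matches the advice that sequencer speed is varied by changing C1.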
Why it is important to measure the pupils

The interpupillary distance refers to the distance between the centers of the two pupils when the visual axes are straight ahead and parallel. In general, the optical center of the glasses should coincide with the visual axis of the eye when looking at eye level. That is, the optical center distance of the glasses should be the same as the interpupillary distance. Otherwise, visual discomfort will occur, causing visual fatigue and deepening the degree of myopia. Therefore, we must improve consumers' awareness of self-protection and their understanding of glasses; otherwise, even with only mild myopia, long-term wearing of unqualified glasses will cause the degree to deepen into high myopia. So be sure to go to a regular optician to get glasses.

Tungsten carbon glasses frames

One obvious advantage of tungsten carbon material is that it has good toughness and is ultra-lightweight. People feel no pressure when wearing such glasses, making them a good choice for sportspeople.

Choose glasses with the right materials for kids

A critical point when choosing frames for children is weight: frames that are too heavy become uncomfortable and unstable over long wear and slip easily. It is recommended to choose an ultra-light plate or TR90 frame, which has good chemical properties, good thermal stability, no deformation, good recovery, and no side effects, and is a lightweight, convenient material.

Titanium alloys have low thermal conductivity and elastic modulus

The thermal conductivity of titanium is about 1/4 that of nickel, 1/5 that of iron, and 1/14 that of aluminum, and the thermal conductivity of titanium alloys is about 50% lower than that of titanium. The elastic modulus of titanium alloy is about 1/2 that of steel, so its rigidity is poor and it is easy to deform; it is not suitable for making slender rods and thin-walled parts.
When cutting, the spring-back of the machined surface is very large, about 2 to 3 times that of stainless steel, resulting in severe friction, adhesion, and bonding wear on the tool surface.

Can you adjust your glasses by yourself?

If your frames are not suitable, adjusting your glasses at home may be easier than you think. You can solve the most common frame problems by yourself, but sometimes taking your glasses to an optician is the best thing to do. Of course, if the problem is your lenses, that's something you can't usually fix at home.

Plate glasses frames

Plate glasses frames are generally made of high-tech plastic memory sheets. Their appearance is beautiful and fashionable, and their styles are diverse. Therefore, plate glasses are easier to match with clothes, which can show the personality and style of the wearer.

What is a progressive lens?

Progressive multifocal lenses have different upper and lower powers: the upper part is for seeing far and the lower part for seeing near. The power does not change suddenly between the distance power above and the near power below; instead, there is a gradual transition between the two through a gradual change in refractive power. Compared with ordinary lenses, this brings many advantages. The lens looks like a single-vision lens, and the dividing line of the power change is not visible. Not only is the appearance beautiful, but more importantly, it protects the age privacy of the wearer: there is no need to worry about revealing one's age by wearing glasses. Since the change of lens power is gradual, there is no image jump. The lenses are comfortable to wear and easy to adapt to, so they are easily accepted. Because the power changes gradually, the accommodative support increases gradually as the viewing distance shortens; there is no fluctuation in accommodation, and visual fatigue is unlikely.
Clear vision can be obtained at all distances in the visual range. Such a pair of glasses meets the needs of long-distance, near, and various intermediate distances at the same time. It is especially good news for teachers, doctors, musicians, and computer operators, because these people not only need to see far and near objects clearly, but most of the time they also need to be able to see middle-distance objects such as blackboards, piano scores, and computer screens. This is not possible with any lens other than progressive lenses.
Description: This detail of a map of Florida indicates Dade County current to 1874. It shows drainage, township and county boundaries, cities and towns, battlefields, and submarine cables to Havana. It also lists operating and newly chartered railroads of the time. Some of the features shown are Boca Raton and Lake Worth.

Place Names: Dade, Lake Okeechobee, Fort Jupiter, Jupiter Inlet, Lake Worth, Boca Raton, Fort Lauderdale, New River Inlet, Fort Dallas, Everglades, Long Key, Chatham Bay, Ponce De Leon, White Water Bay, Mangrove Swamps, Key Biscayne, Turtle Harbor, Palm Point

ISO Topic Categories: inlandWaters, location, oceans, transportation

Keywords: Dade County, political, transportation, historical, county borders, roads, railroads, other military, inlandWaters, location, oceans, transportation, Unknown, 1874

Source: Columbus Drew, LC Railroad Maps (Jacksonville, FL: Columbus Drew, 1874) 195

Map Credit: Courtesy of the Library of Congress, Geography and Map Division.
This report argues that Comprehensive Sexuality Education (CSE) leads to improved sexual and reproductive health, promotes gender equality and equitable social norms, and has a positive impact on safer sexual behaviours, delaying sexual debut and increasing condom use.

- A clearly supported strategy is needed against homophobia and sexism in educational policies and the national curriculum.
- There is a need to articulate and strengthen the intersectionality between educational policies against homophobia and other public policies, such as poverty reduction, work, health and others.
- Long-term policies against homophobia should be developed.
- There is a need to acknowledge and develop strategies to tackle local resistance to policy implementation.
- Resources are required to support staff promoting equality (information, workshops, protection from abuse, permanent forums).

This report presents an analysis of public education policies in Brazil and considers where these policies intersect with programmes aimed at preventing and reducing discrimination and violence against LGBT people. This audit analyses key aspects of public policies in education and sexuality in Brazil, which have been designed as part of the wider programme Brazil Without Homophobia (BWH – Programa Brasil sem Homofobia), launched in 2004. Tackling homophobia and its cultural and social effects has been highlighted by a number of authors as an important policy strategy. This report shares the findings of a sexuality and gender audit of a national government programme to strengthen secondary school education in India.

Work on homophobic and transphobic bullying in schools

This discussion guide has been produced for use by organisations working with young people. It aims to support work aimed at preventing teenagers from becoming perpetrators and victims in abusive relationships.
Ice Hockey Drill Demonstration

Blue player 1 skates with the puck and shoots on goal. After the shot, they skate and touch the goal post. Once the blue player has touched the post, the second player can set off, skating with the puck towards the goal, with the blue player behind them putting them under pressure. Players then skate and collect their puck (or a new puck) and join the back of the opposite line. The drill then continues.
Saturday, September 25, 2010

iPod adventures - Clifford's Be Big With Words

We have been using the Clifford's Be Big with Words app on our iPods. Students can make three-letter words by choosing letters from a palette and dragging them onto a line. They do this three times to make a word. Regardless of which letters they choose, they will still make a word each time. I added an extension by having students write down the words they made on a chart. Our kindergarten students were excited to be making words, and it was a good exercise in using fine motor skills as well. After only two exposures of about 8 minutes apiece, our students are becoming very comfortable with the iPod touch. For most of our class, this is the only experience they have had with this device. I have a colleague who guides me to always ask "Why are we doing this?" before trying something new. The point of the question is to make sure every activity has a sound educational reason behind it. Using this app helped our class review letters and sounds and work on stretching out words in order to decode. We shot a video of one of our students saying "L-O-G". He was pretty excited to be able to recognize this word.
In 1993, in response to great debate and political controversy, Congress passed and President Clinton signed new legislation holding that “[t]he presence in the armed forces of persons who demonstrate a propensity or intent to engage in homosexual acts would create an unacceptable risk to the high standards of morale, good order and discipline, and unit cohesion which are the essence of military capability.” The new policy, colloquially known as "Don't Ask, Don't Tell," established that service members not be asked about, nor allowed to discuss, their sexual orientation.

This guide contains primary materials on the U.S. military’s policy on sexual orientation, from World War I to 2013, with Professor Janet E. Halley’s book, Don’t: A Reader’s Guide to the Military’s Anti-Gay Policy (Duke University Press, 1999) as the site’s foundation. The site includes legislation; regulations; internal directives of military service branches; materials on particular service members’ proceedings (from hearing board transcripts to litigation filings/papers and court decisions); policy documents generated by the military, Congress, the Department of Defense and other offices of the Executive branch of the U.S. Government; and advocacy documents submitted to government entities.

NOTE: This guide is no longer being updated! President Barack Obama signed the Don’t Ask, Don’t Tell Repeal Act of 2010 into law at the Department of the Interior in Washington, DC, on December 22, 2010. The repeal became official on September 20, 2011.

©Stanford University, Stanford, California 94305.
Have you ever dreamt of flying Mach 2 on the edge of the stratosphere from New York to London in around 3 hours? With the possible return of Concorde in 2019, you may yet get the chance. 12 years after the last commercial flight, there is increasing talk of a resurrection. I cannot wait! I’ve always wanted to fly Concorde and was so disappointed to miss out last time. I’m determined not to let that happen again 🙂 Here’s our tribute to this icon of aviation, and a brief history of the supersonic jet. May she fly again.

A Brief History of Concorde

Concorde was a collaboration between Aérospatiale and BAC – the British Aircraft Corporation, under a special Anglo-French treaty. Concorde could carry only somewhere between 92 and 128 passengers, and normally had seating for 100 passengers and 9 crew. Only 20 Concorde aircraft were ever built, at a huge cost. Six of these were prototypes. British Airways and Air France bought the remaining 14, 7 each. The purchase was heavily subsidised by both the British and French governments.

Concorde’s very first flight was in 1969, and regular test flights began in Britain in December 1971. Commercial flights began in 1976 and ended in 2003. There were regular transatlantic flights from Heathrow (British Airways) and Charles de Gaulle (Air France) to JFK (New York), flying at record speeds, with the maximum being twice the speed of sound, reducing flight time by half: three hours between London Heathrow and New York JFK. There were also flights to Washington Dulles International Airport and Barbados.

Concorde had a droopy nose that could drop down for taxiing, take-off and landing, so as to allow the pilots to see the runway, and be raised up during flight to make the supersonic jet more streamlined and reduce drag.
With a downturn in the aviation industry in the late 90s and early 2000s, the crash of a Concorde flight in France in July 2000, killing 113 people, and the 9/11 attacks, Concorde flew its last commercial flight in 2003. For even more in-depth info on the history of Concorde, watch the following YouTube video.

Our Tribute to an Icon and Brief Concorde History – by David John

cover image via

This concludes our Tribute to an Icon and Brief Concorde History. We hope you find this information useful. If you are looking for a specific piece of information, please do comment below as we may have just forgotten to mention it. Was our Tribute to an Icon and Brief Concorde History helpful to you? Let us know your thoughts in the comments below.
A new geophysical model shines some light on the Tibetan Plateau’s unique geology.

Some 50 million years ago, India was a huge hit in Asia — quite literally, as the peninsula smashed into the continent after breaking up with Gondwana, creating the Himalayas of today. We don’t know very much about the specifics of this collision, as the Tibetan Plateau — an area at the epicenter of this collision — is quite inhospitable and hard to reach, for earth scientists and laymen alike. New research, led by scientists from the University of Illinois at Urbana-Champaign, sheds more light on the event. The findings not only help patch our understanding of the area’s geology; they also help explain the highly peculiar, and very violent, seismic activity in this area.

Shaking things up

“The continental collision between the Indian and Asian tectonic plates shaped the landscape of East Asia, producing some of the deadliest earthquakes in the world,” said Xiaodong Song, a geology professor at the University of Illinois and co-author of the new study. “However, the vast, high plateau is largely inaccessible to geological and geophysical studies.”

Song and his team drew on high-resolution seismic (earthquake) data to generate the clearest model of the Tibetan Plateau’s geology to date. They pooled together geophysical data from various studies and other sources, and collated them to generate seismic tomography images of Tibet — think of them as ultrasound imaging for geology — that peer down to about 160 kilometers under the surface. Their work reveals that the upper mantle layer of the Indian tectonic plate is broken into four distinct pieces that push under the Eurasian plate. Each of these four fragments lies at a different distance from the origin of the tear and moves at a different angle relative to the surface than its peers.
The new data match well with recorded earthquake activity and geological and geochemical observations in the area, the team writes, which helps improve confidence in the results.

“The presence of these tears helps give a unified explanation as to why mantle-deep earthquakes occur in some parts of southern and central Tibet and not others,” Song said. While the Indian plate was definitely shredded after the impact, the bodies of intact crust between the tears (the four fragments themselves) are still strong enough to accumulate strain — and such strain, when released, is what causes earthquakes. At the same time, heat upwelling from the deeper mantle can pass through the torn areas more readily. Areas of crust directly above the tears become more ductile and less susceptible to earthquakes as they warm.

This last tidbit of information helps explain the “unusual locations” of some of the earthquakes in the plateau’s southern reaches, according to co-author Jiangtao Li, who adds that “there is a striking correlation with the location of the earthquakes and the orientation of the fragmented Indian upper mantle”. The model also helps us get a better idea of the local geology as a whole, explaining some of the area’s more peculiar surface deformation patterns, such as a series of unusual north-south rifts along the plateau. Such deformation patterns, together with the location of most earthquakes in the area, further suggest that the crust and upper mantle are strongly coupled in southern Tibet — i.e. surface rocks are very well ‘glued’ to deeper formations. Overall, the findings offer a clearer picture of the state of the crust and upper mantle in the Tibetan Plateau. The findings will also help us better assess areas that are at risk from earthquakes, the team adds, with the potential to safeguard lives and property from their devastating effects.
The paper “Tearing of Indian mantle lithosphere from high-resolution seismic images and its implications for lithosphere coupling in southern Tibet” has been published in the journal Proceedings of the National Academy of Sciences.
Updated: December 22, 2022

Tomatoes are the number one choice for backyard gardeners. But new gardeners often spend more money on growing tomatoes than experienced gardeners do. There is also a myth that "home gardening is expensive." It only seems expensive when information is lacking. With some extra effort and attention, you can cut your costs by up to 50% by making your own homemade natural fertilizer and growing your tomatoes from seed. In this article, I will explain how to make a nutrient-rich organic fertilizer at home that rivals commercial products. Your garden plants will thank you if you follow my recipe for homemade fertilizer. It is easy, cheap, and good for your plants' health. So let's start making your homemade organic fertilizer.

Benefits of homemade fertilizers:
- It's cheaper
- No risk of over-fertilization
- Develops soil structure
- Releases nutrients slowly
- Supplies nutrients over the whole season
- Helps grow more nutritious and tasty fruits

Can my homemade fertilizer provide all the necessary nutrients for tomatoes? Yes, it can. But it depends mostly on your present soil condition and the ingredients you use to make the compost. Tomato plants need essential nutrients like nitrogen, phosphorus, and potassium to grow and produce a harvest. They also need micronutrients such as calcium, magnesium, and sulfur to produce higher-quality fruits. If your compost pile contains all the essential nutrients, including micronutrients, it will meet the demands of your tomato plants. So do a soil test first to learn your present soil condition. After getting the soil test result, make your compost by adding the ingredients that supply the nutrients you are missing.

What are the essential nutrients of fertilizer, and how do they work for plants? Essential nutrients means NPK, which stands for nitrogen (N), phosphorus (P), and potassium (K). Plants can't grow without these basic nutrients.
Besides these, some supplementary nutrients, like calcium, magnesium, and sulfur, also play an important role in developing healthy plants. Nitrogen helps your plants grow larger and develops foliage; it plays a vital role from early growth through fruit set. Phosphorus develops plant roots and healthy blooming, and helps set a lot of fruit. Potassium improves plant growth and plant hardiness, fights diseases, and protects plants from insects. Magnesium plays a vital role in the process of photosynthesis, which keeps the leaves green. Sulfur provides necessary proteins to plants; it is needed only in a small amount, but its deficiency significantly threatens the plants. Calcium develops plants' stems and prevents the blossom-end rot of tomatoes.

Which ingredients should I use, and what nutrients do they contain? To make the best quality fertilizer for tomatoes, you must ensure all the necessary soil nutrients are present in your homemade fertilizer. Here is a list of ingredients grouped by the nutrients they contain.

Ingredients that contain a high level of nitrogen:

Several common garden and kitchen wastes contain enough nitrogen for your plants. One balanced source has an NPK proportion of about 2.4-1.4-0.6 and therefore never burns the plants. Aged and composted cow, chicken, or horse manure are among the best nitrogen sources for homemade fertilizer.

Corn gluten meal: It comes from corn as a byproduct, contains 10% nitrogen, and is a good source of soil nutrients. However, it is also used as an herbicide, so wait until after seed germination to apply it.

Ingredients that contain a high level of phosphorus:

One option seems weird, but it includes an adequate amount of nitrogen, phosphorus, and potassium compared to store-bought fertilizer, and it is a good source of phosphorus.

Soybeans: One cup of soybeans contains 1,309 milligrams of phosphorus.

Kelp: This seaweed is used as an organic fertilizer and contains a good amount of phosphorus. It performs best when the tomatoes begin to flower.
Sesame seeds: One cup of sesame seeds contains 906 milligrams of phosphorus.

Pumpkin or squash seeds: One cup of pumpkin seeds contains 676 milligrams of phosphorus. Crush them thoroughly to break up the shells before adding them to your compost pile.

Lentils: One cup of lentils contains around 866 milligrams of phosphorus. One cup contains around 380 milligrams of phosphorus. Navy beans, oats, dried peas, pinto beans, peanuts, and barley are also excellent sources of phosphorus.

Ingredients that contain a good level of potassium:

Banana peels: An excellent source of potassium and other micronutrients. You can apply them directly to the soil or toss them into the compost pile. You can also make a liquid fertilizer from them, as well as liquid fertilizer sprays and insect traps.

Citrus peels: Another good source of potassium. They also contain vitamin C (tomatoes like acidic soil) plus other micronutrients and trace minerals.

Potato peels: Potato peels contain a large amount of potassium; the skin of a medium potato holds 600-700 milligrams.

Cucumber peels: Cucumber peel contains a moderate level of potassium.

Sweet potato skins: Sweet potato skins contain a higher level of potassium than regular potato skins.

Wood ashes: Your fireplace can be a decent source of potassium and other trace minerals for organic fertilizer.

Cantaloupe, honeydew, apricots, grapefruit, mushrooms, and raisins are also good sources of potassium. I suggest using fresh food for homemade fertilizer only if it would otherwise go to waste.

Ingredients that contain micronutrients, organic matter, and trace minerals:

Compost: Your compost pile is the number-one nutrient provider for your tomatoes and all other plants. It contains all the basic nutrients, including micronutrients and macronutrients, essential for tomato plants.

Eggshells: They contain about 1% nitrogen and 93% calcium carbonate. They help develop plant cells, support plant growth, and encourage microbes and other helpful bacteria.
It also contains many of the nutrients your plants need, like calcium, manganese, copper, sulfur, carbon, iron, potassium, and magnesium, and it increases plants' ability to take up food from the soil, making it a suitable additive for organic fertilizer.

Coffee grounds: Contain 2% nitrogen, 1% potash, and 3% phosphoric acid. They are particularly suitable for acid-loving plants such as tomatoes, blueberries, azaleas, roses, camellias, avocados, and evergreens.

Fish waste liquid makes an excellent nutrient source for your plants, as long as it is not salty.

Cooking water: The water left over from boiling fruits, vegetables, potatoes, eggs, food grains, or even pasta can be used as plant nutrition. Let the water cool first, then apply it directly to the plants or the compost pile.

Epsom salt: It contains magnesium, which helps produce more vigorous plants, plenty of blooms, a healthy harvest, and a sweeter taste in tomatoes.

Hair: Pet and human hair works well as tomato fertilizer because it contains keratin, nitrogen, sulfate, and other trace minerals.

Alfalfa: This is a kind of herb commonly used to feed livestock. Dried alfalfa is an excellent source of organic tomato fertilizer.

Liquid fertilizer: This form contains the basic NPK nutrients along with other micronutrients such as magnesium, calcium, and sulfur. It works quickly compared to compost and can be applied directly to your plants.

Fish heads: The fish head is an excellent source of nutrients for any plant. It contains calcium, nitrogen, phosphorus, potassium, and other essential micronutrients. You can put one directly into the transplanting hole or toss it into the compost pile.

Organic cottonseed meal: This additive works slowly and performs best at transplanting time. It contains the basic soil nutrients NPK in a ratio of 2-1-6, plus other micronutrients like calcium, magnesium, sulfur, and other trace elements.

Can I use any combination of the ingredients for my homemade fertilizer? Yes!
You can use whatever suitable scraps come out of your trash can, and homemade fertilizer in your garden soil will never be 100% precise. But I recommend using a balanced combination of ingredients that together contain all the necessary nutrients, adding extra micronutrients if needed. This little effort helps you in various ways:
- Provides an adequate, balanced supply of nutrients if your soil quality is poor
- Protects your plants from blossom-end rot, yellowing leaves, and other nutrient deficiencies
- Produces good quality fruits
- Removes the need for additional nutrient supplements
- Builds your confidence to make your own fertilizer in the future

How do I determine which additives I need most? No matter what you grow in your backyard, do a soil test first. It will tell you the present soil condition and which ingredients you need most. You can get a cheap soil test kit from your local garden centers or online stores. If you run into difficulties, go to the nearest agricultural extension center for help; they will do it for you. Based on your soil test report, you can add the right ingredients to your compost pile for the best soil amendment.

What proportion of the ingredients should I use in my fertilizer recipe? Your soil test result will show the present soil condition, nutrient deficiencies, and soil pH level. If your soil holds less nitrogen, phosphorus, potassium, or other nutrients, add more of the ingredients that contain those nutrients. For example, suppose your soil test report shows a higher potassium deficiency than other nutrients. In that case, add ingredients with a high potassium level, such as banana peels or wood ashes, to keep your fertilizer balanced.

How do I make a high-nutrient organic fertilizer for tomatoes at home? Follow your soil test report when making your homemade fertilizer.
Add the ingredients your soil test report calls for to your compost pile. This little effort will bring you the best results.

Basic understanding of fertilizer measurement

Typically, a 50-lb commercial fertilizer bag carries a nutrient (NPK) grade like 5-10-15, which means it contains 5% nitrogen, 10% phosphate, and 15% potash. To calculate the pounds of each nutrient, multiply 50 by each percentage:

Measuring nitrogen: 50 multiplied by 0.05 = 2.5 lb
Measuring phosphorus: 50 multiplied by 0.10 = 5 lb
Measuring potassium: 50 multiplied by 0.15 = 7.5 lb

In other words, the bag holds 2.5 lb of nitrogen, 5 lb of phosphate, and 7.5 lb of potash, for a total of 15 lb of nutrients out of 50 lb. The remaining weight may be filler such as sand or granular limestone.

How do homemade fertilizers match the nutrient values of commercial fertilizers? When measuring your homemade fertilizer nutrients, a 6-gallon container or bucket stands in for a 50-lb fertilizer bag. For example, what if your soil test recommends a nutrient (NPK) ratio of 6-12-12 for your garden soil? First, get a 6-gallon container (equivalent to a 50-lb fertilizer bag) for better measurement. Calculate the pounds of nutrients by multiplying 50 by each value:

Measuring nitrogen: 50 multiplied by 0.06 = 3 lb
Measuring phosphorus: 50 multiplied by 0.12 = 6 lb
Measuring potassium: 50 multiplied by 0.12 = 6 lb

Then fill the container with 6 lb of nitrogen-rich ingredients, 12 lb of phosphorus-rich ingredients, and 12 lb of potassium-rich ingredients. The amounts are doubled because the ingredients lose half or more of their weight as they break down into nutrients. In addition, add around 10 lb of micronutrient ingredients containing calcium, magnesium, and sulfur, as your plants require. Finally, top up the container with fresh, disease-free garden soil.
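The arithmetic above is simple enough to script. The following is a sketch in Python; the function names are my own, and the doubling rule follows the article's assumption that raw ingredients lose about half their weight as they break down:

```python
# Nutrient math from the article: a 50-lb bag (or 6-gallon container)
# with an N-P-K grade given in percentages.

def nutrient_pounds(bag_weight_lb, grade):
    """Pounds of N, P, K in a bag of the given weight and N-P-K grade.

    grade is a tuple of percentages, e.g. (5, 10, 15) for a 5-10-15 bag.
    """
    return tuple(bag_weight_lb * pct / 100 for pct in grade)

def ingredient_pounds(bag_weight_lb, grade):
    """Pounds of raw ingredients to add to the compost container.

    Each nutrient weight is doubled, per the article's assumption that
    ingredients lose roughly half their weight while breaking down.
    """
    return tuple(2 * lb for lb in nutrient_pounds(bag_weight_lb, grade))

# The article's 50-lb, 5-10-15 example:
print(nutrient_pounds(50, (5, 10, 15)))    # (2.5, 5.0, 7.5)

# The 6-12-12 recommendation for a 6-gallon container:
print(ingredient_pounds(50, (6, 12, 12)))  # (6.0, 12.0, 12.0)
```

Swapping in any grade from your soil test report gives the ingredient weights for your own mix.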
Note: The nutrient value of your homemade fertilizer will be less accurate than commercial fertilizer. But this process ensures none of the nutrients is missing from your homemade fertilizer and gets you to roughly 80% accuracy or better. Besides, you can add other essential micronutrients as your plants demand, which is not possible with store-bought fertilizers.

Some typical grades and their nutrient measurements for your home garden (each for a 50-lb fertilizer bag or 6-gallon container):

N-P-K = 5-10-5: nitrogen 50 × 0.05 = 2.5 lb; phosphorus 50 × 0.10 = 5 lb; potassium 50 × 0.05 = 2.5 lb
N-P-K = 5-10-10: nitrogen 50 × 0.05 = 2.5 lb; phosphorus 50 × 0.10 = 5 lb; potassium 50 × 0.10 = 5 lb
N-P-K = 10-10-10: nitrogen 50 × 0.10 = 5 lb; phosphorus 50 × 0.10 = 5 lb; potassium 50 × 0.10 = 5 lb
N-P-K = 8-0-24: nitrogen 50 × 0.08 = 4 lb; phosphorus 50 × 0.00 = 0 lb; potassium 50 × 0.24 = 12 lb
N-P-K = 6-6-18: nitrogen 50 × 0.06 = 3 lb; phosphorus 50 × 0.06 = 3 lb; potassium 50 × 0.18 = 9 lb

How much compost mix does each tomato plant need? The best time to apply organic compost is when transplanting your tomato plants. Dig a deep hole for each plant and fill it with your homemade organic fertilizer mix. Water well to settle the plants into the ground, and let them do the rest. Just keep watering your plants regularly.
Challenges of homemade fertilizer
- Measuring the necessary nutrients properly
- Finding the required nutrient ingredients
- Needing a suitable place to do the job
- Needing the right tools
- Needing patience

Difference between chemical fertilizer and homemade organic fertilizer: You can compare organic and chemical fertilizers on several points.

Cost: Homemade fertilizers are cost-effective and almost free. Commercial chemical fertilizers, on the other hand, are more expensive.

Environment: Homemade fertilizers are environment-friendly. This traditional way of enriching soil has continued for thousands of years without any harmful impact on the environment. On the contrary, continuous use of chemical fertilizers can reduce soil fertility, because they feed the plants directly and don't develop the soil structure. Moreover, chemical fertilizers pollute water sources through rainwater or irrigation runoff. As a result, long-term use of chemical fertilizers can damage the ecosystem.

Soil health: Organic fertilizers are soil-friendly and release nutrients slowly into the ground. They don't feed the plants directly; instead, they develop the soil structure, which helps plants take up nutrients from the soil easily. Chemical fertilizers, conversely, feed the plants but don't develop the soil structure.

Duration of effectiveness: Organic fertilizer is released slowly and lasts longer in the soil, so you don't need to fertilize regularly. Chemical fertilizers feed your plants directly and don't last long in the soil, so you must keep to a fertilizing schedule.

Quality of produce: Organic fertilizers are the best choice for every home gardener. They are a gift to your garden soil and environment, and good for your health. They help produce the nutritious, tasty, juicy fruits every gardener dreams of. Chemical fertilizer can also produce healthy crops for a certain period.
But they are less tasty compared with fruits grown on homemade fertilizer.

Which ingredients should I avoid in my homemade fertilizer?
- Pet manure from cats and dogs, because it can carry harmful pathogens
- Human manure
- Manure from any animal that is not purely vegetarian
- Bones, meat, cheese, and all dairy products (they attract scavengers)
- Chemical products
- Biohazards (animal blood, tissues, animal waste and certain body fluids, human waste, animal or plant pathogens, pathological waste, etc.)

How long does it take to make my homemade fertilizer? If you want to make your own fertilizer based on your soil test report, you need to start a little early. Begin composting in the fall to be ready for the following spring season. Typically, aged compost works better than new compost. Depending on the ingredients in your pile, it takes 45 to 180 days to get good quality compost. If you are using only kitchen waste, it will take 45 to 90 days to break down. If you want highly nutritious, best-quality compost, use manures, kitchen scraps, fish heads, and other micronutrient ingredients, and give the pile a little more time (up to 6 months) to break them down into good-quality fertilizer.

How much does it cost to make my organic fertilizer? Honestly, producing your homemade organic fertilizer is essentially free. Sort and compile the required ingredients from your daily household garbage according to your soil test report, put them in your compost pile, and let them break down. That's all you need. Making homemade fertilizer takes more patience and knowledge than money. If you are thinking of growing tomatoes in your backyard, I recommend making your own fertilizer. After all, when you apply chemical fertilizers to feed your tomato plants, some of those chemicals can ultimately end up in your body through the fruits you produce in your garden.
So I always recommend you use organic fertilizer for your home garden. Happy tomato gardening!!!
Alcoholism - Nature or Nurture?

The nature-nurture debate is one of the most enduring in the field of psychology. Nature is innate behaviour that has evolved over many generations under the influence of natural selection. The behaviour is adapted to our way of life and is shown by all members of the human species. Nurture is behaviour that is learned by the individual throughout his or her life. There is often great variation amongst people, as it depends on the environment and experiences of the individual. In previous decades, the two opposing views were that behaviour was determined by either nature or nurture. Nowadays the essence of the debate is: what is the ratio of genetic to environmental influences in understanding the source and expression of various biological and behavioural characteristics? The relevance of this debate to the psychological study of alcohol addiction is that many researchers feel that alcoholism is hereditary, and that the children of alcoholic parents may be more likely than others to develop into alcoholics. However, others feel that genetics plays only a small part in the development of alcoholism, while the environment plays a larger part in shaping the individual's alcoholic behaviour. …
In our Universe, quantum fluctuations have been expanded into the largest structures we observe, and clouds of hydrogen have collapsed to form kangaroos. The larger end of this hierarchical range of structure - the range controlled by gravity, not chemistry - is what inflation is supposed to explain. Inflation produces structure because quantum mechanics, not classical mechanics, describes the Universe in which we live. The seeds of structure, quantum fluctuations, do not exist in a classical world. If the world were classical, there would be no clumps or balls to populate classical mechanics textbooks. Inflation dilutes everything - all preexisting structure. It empties the Universe of anything that may have existed before, except quantum fluctuations. These it can't dilute. These then become the seeds of who we are. One of the most important questions in cosmology is: what is the origin of all the galaxies, clusters, great walls, filaments, and voids we see around us? The inflationary scenario provides the most popular explanation for the origin of these structures: they used to be quantum fluctuations. During the metamorphosis of quantum fluctuations into CMB anisotropies and then into galaxies, primordial quantum fluctuations of a scalar field get amplified and evolve to become classical seed perturbations and eventually large-scale structure. Primordial quantum fluctuations are initial conditions. Like radioactive decay or quantum tunneling, they are not caused by any preceding event. "Although introduced to resolve problems associated with the initial conditions needed for the Big Bang cosmology, inflation's lasting prominence is owed to a property discovered soon after its introduction: It provides a possible explanation for the initial inhomogeneities in the Universe that are believed to have led to all the structures we see, from the earliest objects formed to the clustering of galaxies to the observed irregularities in the microwave background."
- Liddle & Lyth (2000)

In early versions of inflation, it was hoped that the GUT-scale Higgs potential could be used to inflate. But the GUT theories had 1st order phase transitions. All the energy was dumped into the bubble walls, and the observed structure in the Universe was supposed to come from bubble wall collisions. But the energy had to be spread out evenly; percolation was a problem, and so too was a graceful exit from inflation. New Inflation involves 2nd order phase transitions (slow roll approximations). The whole universe is one bubble, and structure cannot come from collisions. It comes from quantum fluctuations of the fields. There is one bubble rather than billions, and the energy gets dumped everywhere, not just at the bubble wall.

One way to understand how quantum fluctuations become real fluctuations is this. Quantum fluctuations, i.e., virtual particle pairs of borrowed energy ΔE, get separated during the interval Δt ~ ħ/ΔE. The Δx in Δx ~ ħ/Δp is a measure of their separation. If during Δt the physical size Δx leaves the event horizon, the virtual particles cannot reconnect; they become real, and the energy debt must be paid by the driver of inflation, the energy of the false vacuum - the ρ_inf associated with the inflaton potential V(φ) (see Fig. 8).

Figure 8. Model of the Inflaton Potential. A potential V(φ) of a scalar field φ with a flat part and a valley. The rate of expansion H during inflation is related to the amplitude of the potential during inflation. In the slow roll approximation H² = V(φ)/m_pl² (where m_pl is the Planck mass). Thus, from Eq. 22 we have ρ_inf = 3V(φ). Thus, the height of the potential during inflation determines the rate of expansion during inflation. And the rate at which the ball rolls (the star rolls in this case) is determined by how steep the slope is: φ̇ = -V′(φ)/3H.
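Collected in one place, the slow-roll relations sketched above can be written as follows. This is a sketch in the text's shorthand, where order-unity numerical factors such as 8π/3 are absorbed into the definitions; conventions for the Planck mass differ between references:

```latex
H^2 \simeq \frac{V(\phi)}{m_{\mathrm{pl}}^2}, \qquad
\rho_{\mathrm{inf}} \simeq 3 H^2 m_{\mathrm{pl}}^2 \simeq 3\,V(\phi), \qquad
\dot{\phi} \simeq -\frac{V'(\phi)}{3H}.
```

Read this way, a higher potential V(φ) means faster expansion, while a flatter potential (smaller V′) means a slower roll and longer-lasting inflation.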
The non-zero value of V(φ) is false vacuum - a temporary state of lowest possible energy density. The only difference between false vacuum and the cosmological constant is the stability of the energy density - how slow the roll is. Inflation lasts for ~10^-35 seconds, while the cosmological constant lasts ~10^17 seconds. What kind of choices does the false vacuum have when it decays? If there are many pocket universes, what are they like? Do they have the same value for the speed of light? Are their true vacua the same as ours? Do the Higgs fields give the particles and forces the same values that reign in our Universe? Is the baryon asymmetry the same as in our Universe?
Grade 5 families,

This past week (if we could call it that), students have begun studying non-fiction writing in Language Arts. In Math, students learned how to create graphs from collected data on Google Sheets. In Social Studies, we discussed what it means to be influential and studied a few influential people from the Middle Ages.

Number the Stars

In Language Arts, we have begun reading the book Number the Stars. Students have been tasked with keeping up with daily reading, chapter summaries, and reflection questions. A schedule for these readings, summaries, and questions has been shared with students and can be found on their Google Classroom account. Here is this week's schedule:

Mar. 3: Ch. 2 Summary and Reflection

Assignments and Assessments
- Wednesday – Math Test
- Thursday – Spelling List 22 Quiz

Dates to Remember:
- Monday, March 8th: Spring Photo Day
- March 15th-19th: March Break (No School)

That's all for now. See you next week.
Regional anesthesia involves blocking sensation to one part of the body. By injecting local anesthetic (numbing medicine) around a group of nerves, the anesthesia provider can block sensation from one part of the body, such as the arm, the hand, or the foot. Most of the time the patient is given sedation before and during the procedure. A regional block can often give the patient several hours of pain relief after surgery. Regional anesthesia can be given alone, with sedation, or in combination with general anesthesia. Your anesthesia provider may use this technique to provide anesthesia for your surgery or simply to provide pain relief afterwards. Its use will depend on the type and length of your surgery, your medical history, and your anesthesiologist's and surgeon's preference. Many patients, and even some physicians, automatically assume that surgery requires general anesthesia and that the patient should be asleep during surgery. This is not true. Many procedures can be performed on awake patients using local or regional anesthesia. This not only avoids the risks and unpleasantness sometimes associated with general anesthesia, but may also provide specific benefits such as reduced blood loss and better postoperative analgesia. Nowadays most patients come to the hospital on the day of their operation and are seen by the anesthesiologist in the pre-anesthetic assessment clinic or on the day of their scheduled surgical procedure. Before meeting the anesthesiologist, you may be asked to fill out a questionnaire, which will help you organize and provide important information for your anesthesiologist. You can choose your anesthesiologist, though the multiple duties and assignments shared by the anesthetic staff may make it logistically difficult to have a given member of the staff available on the day of your operation. Please check with your surgeon about your local situation.
He or she will review your medical history, examine you, and order any necessary laboratory tests, electrocardiogram (EKG), and chest X-ray. He or she will make sure that any medical conditions which might complicate your anesthetic are being treated as well as possible. The history should include past and current medical problems, current and recent drug therapy, unusual reactions or responses to drugs, and any problems or complications associated with previous anesthetics. A family history of adverse reactions associated with anesthesia should also be obtained. Information about the anesthetic that the patient considers relevant should also be documented. Occasionally, the anesthesiologist will request an opinion from another specialist, such as a cardiologist, to help in your assessment. The different types of anesthesia appropriate for you will be explained, along with the relative risks according to the American Society of Anesthesiologists Physical Status Classification and the New York Heart Association Classification. Very rarely, your operation may be postponed or cancelled because of the risks involved. The anesthesiologist who is assigned to look after you on the day of your operation will review this information and make the final decision with you about the details of your anesthetic. Your anesthesiologist will inform you about NPO (nothing per os) rules: on the morning of surgery, you should have nothing to eat or drink. You may brush your teeth or rinse your mouth, but you should not eat or drink anything. If, however, you routinely take medications for your blood pressure or heart in the morning, you should take them with a sip of water. When the right amount of the right drug is injected in the right place, it will eventually work and provide good pain relief. In some cases, the correct spot is easy to identify (e.g. spinal anesthesia), while in other cases (e.g. epidural, sciatic nerve block) it is harder to find.
Most blocks take 5-30 minutes to work. Commonly used blocks are usually 85-99% likely to work, depending on the type of block and the skill of the anesthetist. What if it does not work? Depending on the circumstances, there are a variety of options available.

There are different types of regional anesthesia:

Spinal anesthesia involves putting local anesthetic in the patient's back to "freeze" the lower part of the body. It is usually very safe and effective. It may be associated with less blood loss, and less risk of dangerous blood clots, than general anesthesia. Spinal anesthesia is suitable for many procedures in the lower half of the body, and in general it provides excellent pain relief during all of them. Patients may feel some stretching or tugging during delivery of the baby by Caesarean section, or during handling of the bowels in a hernia repair. Major orthopedic surgery may include cutting bone and hammering to insert artificial joints, and some patients dislike the noise and/or vibration this causes.

Epidural or peridural anesthesia uses a larger volume of anesthetic, positioned in the fat and veins further away from the spinal cord. This block takes effect more slowly, which can be an advantage in some cases. For example, an epidural is less likely than a true spinal block to produce a severe drop in blood pressure. The other major advantage is that a small catheter can be placed in the epidural space, allowing the block to be continued over a period of hours or days with dedicated infusion pumps for postoperative pain control, whereas a true spinal block only lasts a few hours.
Various types of plexus block: there is a wide variety of other nerve blocks, including blocks at the ankle, around the groin, in the buttocks, at the arm, and at the leg. Some examples can be watched in video clips taken from the site Peripheral Regional Anesthesia (best viewed with an ADSL connection and the QuickTime plug-in). After regional anesthesia, your arm or leg may be numb and weak for up to 24 hours. Your limb may also feel warmer or colder than the other; this can last for 24 hours. You should not use your arm or leg until the numbness and weakness are gone, and you should be careful around hot, cold, and sharp objects until the numbness is gone. If these symptoms persist longer than 24 hours, please call your anesthesiologist. In general, regional anesthesia is very safe, and usually safer than a general anesthetic. However, the potential for side effects or complications exists with any form of anesthesia. The most common side effect of a block is a temporary weakness or paralysis of the affected area. This is often useful to the surgeon, and it wears off after a while. The complications that may arise depend on the specific block. They usually occur when the local anesthetic is injected in the wrong place. If a large volume (10-20 mL) of local anesthetic is injected into a vein by mistake, it may cause convulsions and even cardiac arrest. This is why physicians always inject local anesthetic slowly, drawing back on the syringe to check that it is not going into a vein. Major nerve blocks are safe when performed by physicians trained in the technique. After spinal or epidural anesthesia, the probability of headache is less than 1 in 100, of transient neurological symptoms less than 1 in 5,000, and of permanent neurological damage less than 1 in 100,000 to 150,000. Peridural hematoma with compression of the nervous structures is very rare.
Regional anesthesia related complications: incidence of serious events related to upper limb blocks; incidence of serious events related to lower limb blocks. The National Medical Library offers a tutorial about epidural anesthesia. The tutorial is for informational purposes only; its content is general information and not medical advice. Content is not intended to be a substitute for professional medical advice. Always seek the advice of your anesthesiologist with any questions you may have regarding a medical condition or medical treatment. © Copyright 2003-2008 Luigi Brandi. All rights reserved. Online since January 22, 2003.
Introduction To Two Basic Rhythms in Middle Eastern Music
Students are introduced to two basic Middle Eastern rhythms: Maqsoum and Chiftitelli. They notate, count, and perform these rhythms using drums and other percussion instruments.

Similar resources:

Two Sides to Every Story: Day 4 and 5 (Part 2)
There really are two sides to every story! Seventh graders analyze various texts and media types to compare and contrast how perspective is conveyed. The fourth and fifth days of the unit focus on learners writing first drafts of a... (7th, English Language Arts, CCSS: Designed)

Math Masterpieces: Two Women on the Beach by Paul Gauguin
Why not add a little additional knowledge to your drill and practice math problems? This cute worksheet provides young mathematicians with a little exposure to art and art history as they practice subtracting 3-digit numbers. As students... (2nd-4th, Math)

KWHL Strategy for A Tale of Two Cities
Students activate prior knowledge about French and English history. In this historical fiction literacy instructional activity, students list information they know about the French Revolution and the history of England. Students... (8th-11th, Social Studies & History)

Introduction to World of Geography Test
Assess your learners on the five themes of geography and the most important key terms and concepts from an introductory geography unit. Here you'll find an assessment with 15 fill-in-the-blank and 14 multiple-choice questions, sections... (6th-8th, Social Studies & History, CCSS: Adaptable)

David Walker vs. John Day: Two Nineteenth-Century Free Black Men
What was the most beneficial policy for nineteenth-century African Americans: to stay in the United States and work for freedom, or to immigrate to a new place and build a society elsewhere? Your young historians will construct an... (6th-11th, Social Studies & History, CCSS: Designed)
I’ve often said that world governments work in close cooperation with the world’s banking industry. It’s a natural partnership; one creates the money for the other to spend. Of course, our government was founded to be something different. It says right in Article I, Section 8 of the US Constitution that the Congress shall have the power, “To coin money, regulate the value thereof, and of foreign coin, and fix the standard of weights and measures.” Although Section 8 of the Constitution uses the word “coin” rather than “print,” it is not the blanket prohibition against issuing paper money that many of my Libertarian friends believe it to be. The same section of the US Constitution specifies that Congress does have the power “To borrow Money on the credit of the United States”. During the War of 1812, the US Treasury issued interest-bearing paper notes to finance the war and to circulate in lieu of bank notes, which were greatly reduced in circulation due to the expiration of the charter of the Bank of the United States in 1811. In a letter to the celebrated French author Say, dated March 2, 1815, Thomas Jefferson wrote: The government is now issuing treasury notes for circulation, bottomed on solid funds and bearing interest. The banking confederacy (and the merchants bound to them by their debts) will endeavor to crush the credit of these notes; but the country is eager for them as something they can trust to, and so soon as a convenient quantity of them can get into circulation the bank notes die. So it is not that the US government was unable to issue paper notes that would circulate, but that it could not “print” money. That is to say, it could not issue fiat money that was legal tender. The treasury notes that the government issued were a paper money, but the notes were promises to pay gold and silver at a future date. More importantly, no one was under any obligation to take them.
Compare that to our present-day paper money, which must, by law, be accepted for “all debts public and private.” The constitutionality of the government issuing fiat money as legal tender has often been questioned, as I discussed in my book. The Supreme Court of the United States declared in 1870 that the United States government had no authority to declare fiat paper money legal tender, but President Grant appointed two justices to the Supreme Court who favored fiat money, and that decision was soon overturned in Knox v. Lee. The problem is that, as Thomas Jefferson himself wrote as Vice President in 1798, the US Government lacks the Constitutional power “of making paper money or anything else a legal tender,” and yet that is what it has done and continues to do. Our modern arrangement does not have the US Government itself issuing the fiat money; instead, it is issued by the Federal Reserve, a private bank. As I mentioned at the start of this post, this arrangement, of the government giving legal tender powers to a private bank that then controls the issuance of money, is an old one. It was first tried in England, with the Bank of England acting as the monetary extension of the policies enacted by Parliament. The reason the government uses its powers to endow a separate body with one of the most fundamental and abusive powers of the state is simple: it garners the support of the monied interests by allowing them the power to control the issuance of the currency. As long as the government is then allowed to monetize its debt by having a central bank use some of its money creation power to buy government bonds, both parties are happy; the government has de facto use of a printing press, and the private bankers give the legal tender the aura of respectability in return for using the printing press for their own private purposes in the meanwhile. But what should happen if the bankers go broke?
It might seem a paradoxical question, given that they have the power of the printing press, but what would happen if they made so many bad loans that using the printing press to inflate away the losses would pose too large a threat to the system? I don’t ask this question lightly, because it describes the situation we’re in today. As the New York Times reported in a January 16th story, “Rescue of U.S. banks hints at nationalization.” With mounting losses, the government has had to step in and absorb bank losses as well as give banks additional funding by way of acquiring equity stakes in these banks. But we are now at the point where the government is basically the largest shareholder of banks such as Citigroup, and, by also being in the position to decide who has to bear what losses and how much of those losses to absorb, the government is now in the position to decide whether a bank shall remain in business and how much profit it will be allowed. This is, of course, in addition to its powers as the single largest shareholder of these banks. In essence, the puppet of banking has slipped off the hand of government in this puppet show, and any casual observer can tell that the entire banking industry has really collapsed down to the government’s printing press. Furthermore, the government is now engaging in seemingly capricious decisions about who gets the benefit of the printing press and who doesn’t. If the banking system, why not the auto industry? If the auto industry, why not, as Hustler publisher Larry Flynt asks, the porn industry? After all, pornography has fallen on hard times too. This situation is dangerous for the government, because it presents the spectacle of modern finance to the public in a way that makes it easy for the common man to understand how the powers of government are being used in a very arbitrary way for the benefit of certain groups.
They will also quickly come to understand that if they are not among the groups who are benefiting, then they are among the groups being made to pay for it; in this case, the payment will be made by way of inflation. The printing press will run and bail out the banking and auto industries, but the rest of us can do nothing but watch as our money loses value. This will expose the simple truth of Frédéric Bastiat, who wrote that “Government is the great fiction by which everyone endeavors to live at the expense of everyone else.”
Napoleon not murdered, say physicists Feb 14, 2008 The idea that Napoleon Bonaparte was murdered by arsenic poisoning appears to have been ruled out by new research by nuclear physicists in Italy. The team analysed samples of the French emperor’s hair that they had irradiated with neutrons and found that it contains about the same amount of arsenic as hair from several of his contemporaries — suggesting that the poison probably came from environmental sources such as wallpaper dyes, rather than from a malicious poisoner. The official cause of Napoleon's death in 1821 is stomach cancer, but the idea that he was murdered gained scientific credibility in 2001 when forensic experts in France found levels of arsenic in samples of the emperor’s hair about 40 times higher than found in modern hair. This seemed to support theories that had emerged in the 1950s that Napoleon had been poisoned either to prevent him from regaining control of France, or to make him so ill that the British allowed him to return to France from exile on the island of St Helena in the South Atlantic Ocean. Ettore Fiorini of the Milano-Bicocca University and colleagues at Milano-Bicocca, Pavia University and the laboratories of the Italian National Institute of Nuclear Physics (INFN) in Milan and Pavia analysed a range of hair samples using a research reactor at Pavia. These samples included Napoleon’s own hair from: when he was a child in Corsica in around 1770; when he was exiled on the island of Elba in 1814; on the day of his death on St Helena; and on the day after his death. The researchers also analysed several strands of hair from Napoleon’s son and his first wife, the Empress Josephine — as well as hair from people living today. Each individual hair was placed inside a container and subject to a large neutron flux inside the reactor.
In this way, arsenic-75 nuclei in the hair could capture a neutron, becoming unstable arsenic-76, which undergoes beta decay and leaves an excited nucleus that emits high-energy gamma rays. Ultra-sensitive germanium detectors: the challenge for the researchers was to pick out these gamma rays from a host of environmental gamma radiation, and they did so by using ultra-sensitive germanium detectors based on technology also used in the Cuore nuclear physics experiment under construction at the Gran Sasso underground laboratory in central Italy. The work is described in a forthcoming paper in the journal Il Nuovo Saggiatore. The analysis allowed the team to conclude that arsenic was not administered maliciously to Napoleon. The scientists found that all of the hair samples from 200 years ago contained arsenic at levels (around ten parts per million) about 100 times greater than those in the hair of people living today. The Italian researchers do not know exactly where the arsenic came from, but they believe their results clearly indicate that Napoleon absorbed arsenic throughout his life rather than being administered a fatal dose. For example, while on St Helena, Napoleon may have absorbed some of the substance from green colouring in wallpaper, say the researchers. “Our conclusion is that the death of Napoleon was probably natural,” says Ezio Previtali of INFN Milano-Bicocca. “However,” he adds, “this is unlikely to be the end of the story. There are plenty of others who will still believe he was murdered.” About the author: Edwin Cartlidge is a science journalist based in Rome.
Mold can be both dangerous and toxic, and although it can grow on any organic surface, it typically thrives in dark and damp environments. As mentioned in previous posts, it’s important to know what mold looks like, where in your home it commonly grows, and the potential health effects it can have on you and your family. Mold can spread quickly, so it is important to hire a professional company such as Paradise Cleaning & Restoration to help mitigate the situation and restore your home to a clean and healthy environment. It is estimated that there are over ten thousand different species of mold, and although some of these molds are not harmful to human health, some types can cause severe health issues, so it’s important to understand the difference between harmful and non-harmful molds. Below are 8 different species of mold to be aware of in your home (other forms of mold can also grow in your home; if you think you may have mold in your home, please reach out to us): The first, and probably the most well known, is Stachybotrys, often referred to as “black mold.” Like most molds, this toxic mold thrives in damp, wet areas with higher humidity levels and can often be found on wood, paper, hay, cardboard, and wicker. Black mold can cause symptoms including difficulty breathing, fatigue, and sinusitis. Acremonium is another toxic mold that is typically found in humidifiers, drain pans, window sealants, and cooling coils. This mold is often grey, pink, orange, or white, and its appearance changes over time; it usually starts as a small moist mold that turns into a dusty material. Next is an allergenic mold that can grow in both warm and cold environments: Cladosporium. This mold is most commonly found in indoor materials such as couches, carpets, curtains, and upholsteries. It is brown or green in color with a smoother texture. Cladosporium causes allergy-like symptoms affecting the nose, eyes, throat, and skin.
Alternaria is the most common species of allergenic mold; it spreads quickly and is the mold you are most likely finding in your showers, tubs, and sinks. When there has been water damage to your home, this is one of the types of mold that appears, and it can cause health issues such as asthma-like symptoms and respiratory problems. Chaetomium is another form of mold that may appear in a water-damaged roof, basement, or sink. A good indicator that you may have Chaetomium in your home is a musty, “basement”-like smell. This mold has a cotton-like texture that can change from white or grey to brown or black over time. Chaetomium can be harmful to your health, causing skin or nail infections. Fusarium, like Cladosporium, is another mold that can grow and spread in colder temperatures. This pink or red allergenic and toxigenic mold can be found in wallpaper, fabrics, and household materials, and it grows naturally on food. Health issues such as allergy-like symptoms and skin infections can occur with exposure. Penicillium, a blue or green textured mold, is easy to spot and is again often found in water-damaged homes in carpets, mattresses, wallpapers, and ducting. Although this mold is responsible for important antibiotic production, it can also cause respiratory issues when it grows indoors. Lastly is Aureobasidium, a pink allergenic mold that is often found growing behind wallpaper or on wooden or painted surfaces. The main health risks if exposed to this mold are skin, eye, or nail infections. If you think you have any sort of mold in your home, it is important to call in the professionals. Paradise Cleaning & Restoration specializes in mold mitigation, and we can help you get your home feeling like paradise again! Give us a call at 401-849-6644 to schedule an appointment.
There have always been supermoons. They just didn’t used to be called that. In fact the origin of the term is, shall we say, unconventional. Here’s how they work: The moon’s orbit around Earth is not a perfect circle. The moon is, on average, 238,855 miles (384,400 km) from us, but the distance varies by 26,465 miles. When a full moon happens while the moon is at or near its closest point to Earth (called perigee) it’s called a supermoon. The exact timing and proximity dictates just how super. A supermoon can be 14 percent bigger and 30 percent brighter than a full moon occurring at its farthest point from Earth (apogee). Supermoon is a catchy term, and a supermoon can be notably bigger to a casual observer, but don’t expect the moon to be dramatically gigantic (photos that make the moon look positively gargantuan involve telephoto lenses and strategically located foreground objects). But do take the opportunity to go out and look up. Any full moon, hanging huge on the horizon, is a worthy sight—a supermoon is just a little worthier. And while you’re out there—during any rise of the full moon—check out the moon illusion. The moon will look larger on the horizon than when it’s higher in the sky. This is, in fact, all in your head (really!). Oh, and about the origin of the term. According to NASA, the term “supermoon” was coined in 1979 by astrologer Richard Nolle. Astrologers and astronomers don’t usually see eye-to-eye on the meanings of heavenly objects and occurrences, but on this one they’ve come to agree, at least on terms.
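The "14 percent bigger and 30 percent brighter" figures follow from simple geometry: the moon's apparent diameter scales inversely with its distance, and its apparent brightness roughly with the inverse square of distance. A minimal sketch in Python — the perigee and apogee distances below are typical published extremes, assumed for illustration rather than taken from this article:

```python
# Compare a full moon at perigee (closest) with one at apogee (farthest).
# Distances in km are assumed near-extreme values, not figures from the article.
PERIGEE_KM = 356_500   # assumed near-minimum Earth-moon distance
APOGEE_KM = 406_700    # assumed near-maximum Earth-moon distance

# Angular diameter scales as 1/distance, so the size ratio is apogee/perigee.
size_ratio = APOGEE_KM / PERIGEE_KM

# Brightness scales roughly as 1/distance^2, so it grows as the ratio squared.
brightness_ratio = size_ratio ** 2

print(f"~{(size_ratio - 1) * 100:.0f}% bigger")        # ~14% bigger
print(f"~{(brightness_ratio - 1) * 100:.0f}% brighter")  # ~30% brighter
```

Because brightness grows as the square of the size ratio, a moon that looks about 14 percent wider delivers roughly 30 percent more light, matching the figures quoted above.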
Screen time is an integral part of the lives of children and young adults. Young people use screens as a medium for communication, access, work, pleasure, entertainment, and education. Given the amount of activities that are accomplished on a screen, it seems inevitable that children and young adults are constantly engaged with their phones, computers, iPads or video game consoles. The Kaiser Family Foundation published statistics stating that “8- to 18-year-olds consume an average of 7 hours and 11 minutes of screen media per day.”* Upon hearing this startling statistic, many parents panic about their child’s screen usage. This panic manifests in polarized ways. Parents may seek answers online to questions such as, “Is screen time linked to increased psychological difficulties?” or “How much time should my child be spending in front of his or her screen?” While searching for clear answers, parents will be met with highly debated research that implies correlation rather than causation. Lack of clear and concise research prompts parents to take control into their own hands; parents become the regulators of screen time. Consequently, when they ask their child to put down their phone or console, parents are often met with: (1) unsuccessful control whereby their child finds it difficult to stop using their screen; (2) preoccupation as their child thinks exclusively about screen time; (3) withdrawal when their child acts frustrated or aggressive when he/she can’t use the screen; (4) tolerance whereby the amount of time needed for satisfaction in screen time continues to increase; and even (5) deception when he or she sneaks screen time.** These emotional and behavioral responses often prompt intense arguments. Thus parents and children are pitted against one another in a battle that is often brought into therapy appointments. 
Through psychoeducation, parents begin to recognize that these behaviors are not simple incidents of their child behaving poorly; rather, these behaviors are unhealthy responses identified by researchers at the University of Michigan as “behavioral consequences of screen addiction.”** Once parents become aware of the “emotional or social problems connected with screen addiction,” their tone changes drastically.** The question of “why is my child acting out?” shifts to “how do I teach my child to develop a thoughtful and healthy relationship with their screens?” The answer to the aforementioned question is multi-faceted and depends upon who is behind the screen and how they use their device. Other variables that affect screen usage may include age, gender, race, and psychological well-being. On the first Monday of each month, we will post a thoughtful review of the topics listed below in a series entitled “Digital Detox”:
- Current state of research
- Screen time in the news
- Screen time per platform
- Screen time versus age, gender, and race
- Negative consequences of excessive screen usage (screen time as an “addiction”?)
- Positive implications of screen use
- Management and parental controls
- Device distraction in parents and teens alike
As clinicians who are extremely passionate about the complex world of screen media activity, we hope that our series will provide you with clearer answers to these highly contended issues. We intend to keep up with the current state of research and the quickly evolving pace of technology, all while living in this ongoing digital experiment, in order to provide answers to the digital questions we are often asked in sessions.
The winter moth (Operophtera brumata) feeds on deciduous plants including Maple, Oak, Cherry, Basswood, Ash, White Elm, Apple, Blueberry, and other perennials. It is commonly observed in late fall/early winter as a white-ish adult moth and, in spring, as a tiny green caterpillar. Winter moth caterpillars feed on the leaves of their host species. High populations of the pest can completely defoliate the host plant, inhibiting its ability to grow and leaving it vulnerable to other insects, diseases, and eventual mortality. See image gallery above for signs and symptoms. The winter moth is native to Europe. It was transported to the United States via Nova Scotia in the 1950s, most likely within the soil of ornamental landscape trees. The invasive insect has since spread into Eastern and Western Canada and the United States. Adult winter moths emerge from the ground in November and December. The fully developed adult moths immediately begin the reproductive process. The female winter moth, unable to fly, climbs up trees or buildings and attracts males to her with pheromones. After mating, females lay a cluster of approximately 150 bright orange eggs under tree bark and in crevices, dying immediately following the dispersal of the eggs. In March and April, when temperatures reach approximately 55°F, the eggs hatch into smooth green inchworms. The newly hatched caterpillars are approximately a half inch to an inch in length and have a white stripe lengthwise on each side of their bodies. The young caterpillars disperse up into the plant's canopy by spinning a strand of silk and riding wind currents up, a method known as “ballooning”. Once in the canopy the caterpillars feed on the foliage, both open leaves and within tree buds. In mid-June, the caterpillars migrate into the soil to pupate. Fully grown winter moths will emerge from these pupae in November and December.
The winter moth is a species of particular concern because it damages a wide variety of tree and agricultural crop species. High populations of the pest can cause complete defoliation. Repeated years of defoliation may contribute to tree decline. Winter moth has never been detected in Vermont.
Image credits: Dimitrios Avtzis, NAGREF-Forest Research Institute, Bugwood.org; Fabio Sterguic, Universita di Udine, Bugwood.org; Gyorgy Csoka, Hungary Forest Research Institute, Bugwood.org; Hannes Lemme, Bavarian State Research Center for Agriculture, Bugwood.org; Milan Zubrik, Forest Research Institute - Slovakia, Bugwood.org
Discharging paper from Canada into a lighter in the Royal Docks. There was considerable emigration from Britain to Canada in the 19th century, leading to the increasing exploitation of Canada's vast forests and huge areas of farmland. Canada's main exports to London were wheat, timber and timber products. Most of these goods came into the Surrey and Millwall Docks. Mineral exports to Britain included asbestos and gold (after the Klondyke strike of 1896). Find out more: The hub of empire: Imperial trade
Credit for this article goes to Danika Worthington and The Denver Post. A green thumb may lower your risk of cancer. Don’t believe it? You’re not the only one. Which is why a University of Colorado Boulder researcher is setting out to find hard evidence during a three-year clinical trial that will measure a variety of health factors in 312 participants who will be introduced to community gardening for the first time. “We tend to intervene from the top down,” CU Boulder professor Jill Litt said of programs to improve physical inactivity and poor diets. “You need solutions from the ground up to meet people where they’re at.” Litt said that throughout more than a decade of researching community gardening, people regularly say there’s something about it that makes them feel better. Her previous observational surveys found that gardeners eat 5.7 servings of fruits and vegetables on average per day compared to 3.9 for non-gardeners. They tend to have lower body mass index. They also report an average of 2.6 days of poor mental or physical health in the past month compared to the national average of 6.2 days. She said it’s likely because gardeners are less sedentary, often spending an average of two hours not sitting during the activity, and because they’re eating the organic food they produce. There’s some research to suggest that exposure to the microorganisms in the soil also benefits mental health, she said. Additionally, the best ways to create a permanent shift to healthier behaviors are to get people in contact with nature and to have them build meaningful relationships, she said. Both of those are intrinsic to community gardens. But it’s unclear if people who are involved in community gardening are more likely to already have those traits or if community gardening itself facilitates a change in people.
To put some data behind it, Litt is measuring the body mass index, consumption of fruits and vegetables, stress and anxiety levels and other health measures of participants before planting, during harvesting and the following spring at community gardens mainly located in Denver and Aurora. The study will also use accelerometers worn by participants to measure physical activity levels. There's been growing interest in this type of research, with a handful of similar studies also receiving grants this year, she said. But Litt said her study is distinguished from the rest by its gold-standard measurements, such as the accelerometers. The results of these studies could have a great impact. First, the data could be used to help convince clinicians of the benefits of community gardens. They could also help the gardens themselves. In 2005, when Litt first started researching gardens, there were about 42 gardens in the Denver area. Now there are 165 gardens, with 10-12 being built a year, she said. The demand to build more gardens and the necessity to update old gardens are both pulling on already limited funds, she said. More people may be willing to put money toward community gardens if they realized the health benefits, she said. The study is looking for people to join its second wave. Interested people can reach out to project coordinator Angel Villalobos at (303) 724-1235 or at email@example.com.
Volume 8: No. 6, November 2011
Angela F. Leone, MS, RD; Samantha Rigby, MS; Connie Betterley, MS, RD; Sohyun Park, PhD; Hilda Kurtz, PhD; Mary Ann Johnson, MA, PhD; Jung Sun Lee, PhD, RD
Suggested citation for this article: Leone AF, Rigby S, Betterley C, Park S, Kurtz H, Johnson MA, Lee JS. Store type and demographics influence on the availability and price of healthful foods, Leon County, Florida, 2008. Prev Chronic Dis 2011;8(6):A140. http://www.cdc.gov/pcd/issues/2011/nov/10_0231.htm. Accessed [date].
The availability of healthful foods varies by neighborhood. We examined the availability and price of more healthful foods by store type, neighborhood income level, and racial composition in a community with high rates of diet-related illness and death. We used the modified Nutrition Environment Measures Survey in Stores to conduct this cross-sectional study in 2008. We surveyed 73 stores (29% supermarkets, 11% grocery stores, and 60% convenience stores) in Leon County, Florida. We analyzed the price and availability of foods defined by the 2005 Dietary Guidelines for Americans as “food groups to encourage.” We used descriptive statistics, t tests, analysis of variance, and χ2 tests in the analysis. Measures of availability for all more healthful foods differed by store type (P < .001). Overall, supermarkets provided the lowest price for most fresh fruits and vegetables, low-fat milk, and whole-wheat bread. Availability of 10 of the 20 fruits and vegetables surveyed, shelf space devoted to low-fat milk, and varieties of whole-wheat bread differed by neighborhood income level (P < .05), but no trends were seen for the availability or price of more healthful foods by neighborhood racial composition. Store type affects the availability and price of more healthful foods. In particular, people without access to supermarkets may have limited ability to purchase healthful foods.
Nutrition environment studies such as this one can be used to encourage improvements in neighborhoods that lack adequate access to affordable, healthful food, such as advocating for large retail stores, farmer’s markets, and community gardens in disadvantaged neighborhoods.
The 2010 Dietary Guidelines for Americans define more healthful foods as “food groups to encourage,” including fruits, vegetables, low-fat milk products, and whole-grain products (1). Although consumption of these foods is recommended to promote health and prevent chronic diseases, studies have found that most Americans do not meet these recommendations (2-4). People who do not consume a nutritious diet are more likely to develop diabetes, cardiovascular disease, obesity, and certain cancers. The decision to purchase and consume more healthful foods is influenced by personal and environmental factors. The community or consumer nutrition environment has been identified as a priority area of research. The community nutrition environment is the type, location, and accessibility of food stores, and it is described by the availability, price, and quality of food in food stores (8). Previous studies have suggested that understanding the community nutrition environment could provide insight into barriers that may influence dietary behavior (9-11). The relationship of the consumer nutrition environment to more healthful foods is poorly understood. Despite significant interest in consumer nutrition environment research, little progress has been made to devise a reliable and valid tool to be used across all consumer nutrition environment studies (12). The purpose of this study was to evaluate the consumer nutrition environment of Leon County, Florida, a community with high rates of diet-related illness and death. We used the validated Nutrition Environment Measures Survey in Stores (NEMS-S) to identify potential barriers that some residents may have to accessing healthy, affordable food.
We examined the availability and price of more healthful foods by store type, neighborhood income level, and racial composition.
We analyzed the price and availability of foods defined by the 2005 Dietary Guidelines for Americans as “food groups to encourage.” We used data from the Leon County Nutrition Environment Measures Project, which was initiated, designed, and implemented in 2008 by Florida Department of Health administrators. The study used the most reliable and valid consumer environment measuring tool available to conduct this research. Institutional review board approval was not required for this study because human subjects were not involved. Leon County is in the panhandle area of Florida. According to 2000 US Census data, the county population was 239,452. The racial composition of the county was 66.4% white, 29.1% black, and 4.5% other race. Approximately 18.2% of residents lived below the federal poverty level, which was higher than the state average of 12.5% (13). Leon County residents have higher rates of childhood obesity and diet-related deaths than do residents of most Florida counties (14). Census tracts were used as proxies for neighborhoods (15). Using methods similar to those used in previous studies, we dichotomized the 48 census tracts into high- (n = 24) and low-income (n = 24) groups based on the percentage of households living below the federal poverty level in each census tract (16,17) and classified the census tracts into 3 racial groups based on criteria of a previous food environment study (16): predominantly white (<20% of the population was black, n = 11), predominantly black (>80% of the population was black, n = 6), or racially mixed (20%-80% of the population was black, n = 31). We obtained a list of supermarkets, grocery stores, and convenience stores from the Florida Department of Agriculture and Consumer Services.
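As a rough illustration, the tract-classification rules described above (a median split on poverty for the 24/24 income dichotomy, and the 20%/80% thresholds for racial composition) can be sketched in Python. The tract values below are invented for the example, not the study's data:

```python
from statistics import median

def classify_tract(pct_poverty, pct_black, poverty_cutoff):
    """Apply the study's two classification rules to one census tract (sketch)."""
    # Income dichotomy: at or above the cutoff counts as low-income
    income = "low-income" if pct_poverty >= poverty_cutoff else "high-income"
    # Racial-composition groups per the 20% / 80% thresholds
    if pct_black < 20:
        race = "predominantly white"
    elif pct_black > 80:
        race = "predominantly black"
    else:
        race = "racially mixed"
    return income, race

# Hypothetical tracts: (% households below poverty, % black population)
tracts = [(8.0, 5.0), (31.0, 85.0), (15.0, 40.0), (22.0, 12.0)]

# Median split on poverty rate yields two equal halves, as in the 48-tract study
cutoff = median(p for p, _ in tracts)
labels = [classify_tract(p, b, cutoff) for p, b in tracts]
```

With an even number of tracts, the median split places exactly half of them in each income group, mirroring the study's 24/24 dichotomy.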
Because no standardized definition and classification of food stores has been consistently used in previous nutrition environment studies, we classified stores according to the Department of Agriculture and Consumer Services Florida Administrative Code (18). Food stores are defined on the basis of total square footage, the number of cash registers, and the amount of food processing and service. Stores were then geocoded to census tracts. We used the Florida Community Health Assessment Resource Tool Set to obtain poverty data for each census tract (19). Convenience sampling was used to select various store types from each census tract. If possible, a supermarket in each census tract was surveyed. If more than 1 supermarket was available, a chain other than Publix Super Markets was selected to increase the diversity of the sample. If no supermarkets were present, 1 grocery store and 1 convenience store were surveyed. If no supermarkets or grocery stores were found, 2 convenience stores were surveyed. If more than 1 grocery or convenience store was available, stores were randomly selected. This process yielded 65 stores. Additional supermarket, grocery store, and convenience store sampling was conducted to ensure that stores from the highest- and lowest-income neighborhoods were included. Starting with the census tract with the highest poverty rate, the first census tract that had at least 1 supermarket and more than 2 convenience stores was identified, and all stores within that census tract were surveyed. The same procedure was followed for the census tract with the lowest poverty rate. This criterion yielded 13 more stores, for a total of 78 stores. Five stores were excluded because they did not meet food store definitions or because the store was in an unsafe neighborhood, which yielded a final sample size of 73 stores (29% supermarkets, 11% grocery stores, and 60% convenience stores). Safety was subjectively assessed by 2 observers.
The stores sampled represent 25% of all stores in Leon County: 100% of all supermarkets, 18% of all grocery stores, and 59% of all convenience stores.
Food store survey tool
A modified NEMS-S was used to collect data for this study (8). The NEMS-S tool surveys 11 different measures: milk, fruit, vegetables, ground beef and meat alternatives, hot dogs, frozen dinners, baked goods, convenience store and grocery store beverages, bread, baked chips, and cereal. We modified the survey’s fruit and vegetable measures to add items that may be more commonly purchased by low-income people (eg, items commonly found on the Thrifty Food Plan, 1 of 4 USDA plans specifying foods and amounts of foods to provide adequate nutrition, and items available in convenience stores). The availability and price of canned fruit cocktail and canned carrots were added to the fruit and vegetable measures, as were canned and frozen produce. Florida health department administrators attended a 3-day NEMS-S training course conducted by the Emory University researchers who developed and validated the original NEMS-S. The administrators used the modified NEMS-S to conduct a pilot test in 4 stores. After pilot testing, health administrators revised the modified tool and consulted Emory researchers before finalizing the modified tool. Two trained raters surveyed each store between January and March 2008. All store survey protocols followed the original NEMS-S protocol. After the 2 raters completed surveying stores, they compared their scores. When there were discrepancies, raters went back to the store to verify the information. Availability of all items was recorded by bubbling in yes or no on the survey next to the preferred brand of each item. If the preferred item to be surveyed was unavailable (eg, Red Delicious apples), a similar alternate item was written in (eg, Granny Smith apples).
Availability of fresh fruits and vegetables was also measured by counting the total number of types of fruits and vegetables in a store and assigning each a maximum score of 10. For milk availability, shelf space for low-fat milk (skim and 1%) and whole milk in pint, quart, half-gallon, and gallon sizes was measured. Shelf space was measured by counting the total number of available columns of low-fat and whole-fat milk for each carton size. These numbers were used to calculate the total inches of shelf space devoted to low-fat and whole-fat milk. The availability of whole-wheat bread was also measured by recording the number of different brands and types of whole-wheat and whole-grain bread in a store. The lowest price was recorded for all food items. Sale prices were recorded if they were the only prices available and the regular price could not be calculated from the sale price. The price of fruits and vegetables was recorded by piece or by pound. To minimize potential bias, price data for each fruit and vegetable were converted to the unit that was most commonly recorded for that item. The US Department of Agriculture (USDA) nutrient database was used to convert the price of produce from 1 unit to another (eg, 3 medium apples equals 1 pound) (20). The price of milk was recorded for quart and half-gallon sizes. The price of bread was recorded by loaf size (weight in ounces). For the purposes of this study, we report the availability and price of all 10 fruits and vegetables on the original NEMS-S, low-fat milk, and whole-wheat bread. Data for each store were entered into the NEMS-S database. Analysis was conducted using Stata data analysis and statistical software, version 10.1 (StataCorp LP, College Station, Texas). Descriptive analysis was conducted to describe the availability and price of each of the food measures.
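The per-unit price conversion described above can be sketched as follows. The 3-apples-per-pound factor comes from the example in the text; the banana value is a made-up placeholder, not an actual USDA nutrient database figure:

```python
# Approximate pieces-per-pound conversion factors.
# Apples: "3 medium apples equals 1 pound" (from the text).
# Banana: hypothetical placeholder for illustration only.
PIECES_PER_POUND = {"apple": 3.0, "banana": 3.5}

def to_price_per_pound(price_per_piece, item):
    """Convert a recorded per-piece produce price to a per-pound price."""
    return price_per_piece * PIECES_PER_POUND[item]

def to_price_per_piece(price_per_pound, item):
    """Convert a recorded per-pound produce price to a per-piece price."""
    return price_per_pound / PIECES_PER_POUND[item]
```

Converting every record for a given item to its most commonly recorded unit, as the study did, lets prices be compared on a single scale.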
We used t tests and analysis of variance tests to compare the continuous availability and price measures of more healthful foods between the 2 neighborhood income-level groups and between the 3 store types and neighborhood race groups, respectively. Fisher’s exact tests or χ2 tests were used to compare the categorical availability measures by store type and neighborhood characteristics. Nonparametric tests were also used to examine the differences in the price measures by store type and neighborhood characteristics, but the tests provided similar results. Potential interactions between store type and neighborhood characteristics on the continuous availability and price measures were examined by using analysis of variance tests. Significance was set at P < .05.
We analyzed the distribution and percentage of store types included in this study by neighborhood income level and racial composition (Table). Nearly twice as many convenience stores and 7 times more grocery stores were surveyed in low-income neighborhoods than in high-income neighborhoods. Most stores surveyed (64%) were in a mixed-race neighborhood (n = 47). The smallest number of stores was selected from predominantly black neighborhoods; 75% of them were convenience stores. More than three-quarters of stores surveyed in predominantly white neighborhoods were in high-income neighborhoods. All 8 stores surveyed in predominantly black neighborhoods were in a low-income neighborhood.
Availability by store type
The availability of all 10 fruits and all 10 vegetables was significantly different by store type (P < .001). Four of the fruits were not available in grocery stores, and 6 were not available in convenience stores (Figure). All 10 vegetables were available in supermarkets, but none were available in convenience stores.
Figure. Availability of fruits and vegetables by store type, Leon County, Florida, 2008. [A tabular version of this figure is also available.]
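The statistical comparisons described in the analysis section (one-way ANOVA across the 3 store types, t tests between the 2 income groups, and χ2 tests for categorical availability) can be sketched with SciPy. All data below are simulated, loosely following the store counts and score means reported in the article, not the actual survey data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=0)

# Simulated fruit availability scores (0-10) for 21 supermarkets,
# 8 grocery stores, and 44 convenience stores (means follow the text roughly)
supermarket = rng.normal(9.6, 0.9, 21).clip(0, 10)
grocery = rng.normal(3.1, 1.4, 8).clip(0, 10)
convenience = rng.normal(1.3, 1.3, 44).clip(0, 10)

# One-way ANOVA: do mean scores differ across the 3 store types?
f_stat, p_anova = stats.f_oneway(supermarket, grocery, convenience)

# t test: scores in high- vs low-income neighborhoods (simulated groups)
high_income = rng.normal(4.5, 3.0, 30)
low_income = rng.normal(2.5, 2.5, 43)
t_stat, p_t = stats.ttest_ind(high_income, low_income)

# Chi-square test: whole-wheat bread availability (yes/no) by store type
counts = np.array([[21, 3, 3],     # carries whole-wheat bread
                   [0, 5, 41]])    # does not
chi2, p_chi, dof, expected = stats.chi2_contingency(counts)
```

For small expected cell counts like some of those here, the article's fallback to Fisher's exact test is the more defensible choice; the χ2 call is shown only to illustrate the workflow.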
Supermarkets had the highest fruit availability score (mean [SD], 9.6 [0.93]), on average, compared with grocery stores (3.1 [1.4]) and convenience stores (1.3 [1.3]) (F = 325.7, P < .001). Supermarkets also had higher vegetable availability scores (10.0), on average, compared with grocery stores (5.1 [3.0]) and convenience stores (0) (F = 805.5, P < .001). The availability of low-fat milk differed by store type for both quart and half-gallon sizes (χ2 = 23.0 and 23.7, respectively, both P < .001). Half-gallon size milk was most commonly available in all 3 store types. All supermarkets (100%) carried low-fat half-gallon milk, compared with 63% of grocery stores and 36% of convenience stores. Supermarkets devoted 52% of shelf space to low-fat milk, compared with 24% in grocery stores and 11% in convenience stores (F = 41.6, P < .001). All supermarkets carried whole-wheat bread, compared with 38% of grocery stores and 7% of convenience stores (χ2 = 53.0, P < .001). Among stores that carried whole-wheat bread, on average, supermarkets carried more varieties (6.0 [0.22]) than did grocery stores (1.1 [2.1]) and convenience stores (0.07 [0.25]) (F = 502.7, P < .001).
Price by store type
The prices of 3 fruits (apples, bananas, and oranges; F = 25.3, 189.0, and 15.7, respectively, comparing supermarkets, grocery stores, and convenience stores) and 3 vegetables (cucumbers, lettuce, and peppers; F = 8.4, 14.7, and 10.8, respectively, comparing supermarkets and grocery stores) differed by store type: supermarkets provided the lowest price for these fresh fruits and vegetables (P < .001). We were able to compare prices for vegetables at supermarkets and grocery stores only, due to the absence of vegetables in convenience stores. The price of low-fat half-gallon size milk differed by store type (F = 26.6, P < .001), with a wide price range ($2.22-$5.09). On average, half-gallon and quart size low-fat milk were least expensive in supermarkets and most expensive in convenience stores.
When available, whole-wheat bread was least expensive in supermarkets ($2.45 [$0.17]), compared with grocery stores ($2.68 [$0.42]) and convenience stores ($2.62 [$0.12]), but the difference was not significant (F = 2.36, P = .11).
Availability by neighborhood characteristics
Availability of each of the 20 fresh produce items was greater in high-income than low-income neighborhoods. Fruit availability scores were significantly higher in high-income than in low-income neighborhood stores (t = 2.3, P = .02) but did not differ by neighborhood racial composition. Stores in high-income neighborhoods had a larger percentage of shelf space devoted to low-fat milk (33%) than did those in low-income neighborhoods (19%) (t = 2.4, P = .02). High-income neighborhood stores had more varieties of whole-wheat bread, on average (2.7 [3.0]), than did low-income neighborhood stores (1.3 [2.4]) (t = 2.1, P = .04). Availability was not significantly different by neighborhood racial composition. Neighborhood characteristics were not significantly related to the price of more healthful foods. No significant interactions were found between store type and neighborhood characteristics on the availability and price of healthier food items.
This study documented the availability and price of more healthful foods to better understand the consumer nutrition environment of Leon County, Florida. Our findings suggest that store type is associated with the availability and price of more healthful foods. Neighborhood income level was related to the availability but not the price of some more healthful foods. Greater availability of foods in supermarkets compared with other food stores has been shown (10,21,22). As expected, among surveyed stores, supermarkets had the greatest availability of all more healthful foods, followed by grocery stores and convenience stores. Several studies have found price differences between store types for various foods (11,23).
Few studies have analyzed the price of more healthful foods, but previous studies have consistently found that supermarkets offer more healthful foods at a lower price compared with other food stores (10,24,25). We found significant differences in price by store type for low-fat half-gallon size milk and 6 of 20 fresh fruits and vegetables. These items were significantly less expensive in supermarkets than in grocery stores and convenience stores. Some of the nonsignificant differences in the price of produce may be due to the time of year that the study was conducted, the absence of produce items in grocery and convenience stores, and the use of the USDA nutrient database to roughly estimate the price of produce. Previous consumer nutrition environment studies (16,21,23,25,26) have focused on examining whether poor and minority neighborhoods have less access to affordable foods and have found inconsistent results. We found no significant differences in the availability and price of more healthful foods by neighborhood racial composition, but neighborhood wealth was associated with significant differences in the availability of 10 of the 20 fruit and vegetable items surveyed, shelf space devoted to low-fat milk, and the availability of varieties of whole-wheat bread. Although it appears that stores in predominantly black and low-income neighborhoods in Leon County provide similar availability and prices of more healthful foods compared with stores in predominantly white and high-income neighborhoods, poor and minority neighborhoods have fewer supermarkets. Supermarkets are distributed fairly equally between high- and low-income neighborhoods in Leon County, but the distribution of supermarkets by neighborhood racial composition is disproportionate (27). This finding is similar to those of other studies, and this disproportionate distribution may influence the purchasing and consumption of more healthful foods among these populations (24,28).
This study is unique because it was designed and conducted by health department administrators with the intent to make policy changes and interventions in their community. Administrators adopted the best available method by using the validated NEMS-S. Although the study was conducted in a single county at 1 point in time by using a convenience sample of a small number of stores, research has suggested that 1 observation of an area’s consumer nutrition environment is sufficient to provide accurate data (29). An additional strength of this study is its focus on the availability and price of more healthful foods that are essential to promoting health and preventing disease and that are the focus of federal dietary guidelines. This study has many limitations. As with any consumer nutrition environment study, the findings of this study cannot be easily generalized or compared with those of previous studies for the following reasons. First, this study used census tracts to define neighborhoods. Although research has found that residents’ definition of a neighborhood is comparable to a census tract, most neighborhoods include parts of at least 2 census tracts (30). Making associations between neighborhoods and their accompanying nutrition environment relies on assumptions about where people shop for food and about the various contexts in which a person’s food-related behaviors occur. Second, this study used census tract-level demographic characteristics to determine neighborhood wealth and racial composition. Similar to Morland and Filomena (16), we defined predominantly black as more than 80% of the population. Other studies have defined predominantly black as more than 75% (31) or used more than 50% nonwhite or Hispanic populations or both to define a minority population (26). Third, a range of store type definitions have been used in previous studies. Depending on which source is used, the results of a study may vary.
Other potential limitations of the study include the use of the USDA nutrient database to estimate produce prices, inaccurate or unclear data collection resulting from human error, and the convenience sampling that was used to select stores to be audited. This study suggests that access to supermarkets and more healthful foods varies by neighborhood, which may negatively influence people’s eating behavior. By employing the best available tools and method, nutrition environment studies can be used to provide convincing evidence to policy makers, administrators, and consumers that will encourage improvements in neighborhoods that lack adequate access to affordable, healthful food. Examples of such improvements include advocating for large retail stores, farmer’s markets, and community gardens in disadvantaged neighborhoods.
We thank the Florida Department of Health employees who conducted all of the store audits.
Corresponding Author: Jung Sun Lee, PhD, RD, 280 Dawson Hall, Department of Foods and Nutrition, University of Georgia, Athens, GA 30602. Telephone: 706-542-6783. E-mail:
Author Affiliations: Angela F. Leone, Samantha Rigby, Hilda Kurtz, Mary Ann Johnson, University of Georgia, Athens, Georgia. Connie Betterley, Florida Department of Health, Tallahassee, Florida; Sohyun Park, Centers for Disease Control and Prevention, Atlanta, Georgia.
- Dietary guidelines for Americans 2010. 7th edition. Washington (DC): US Department of Health and Human Services and US Department of Agriculture; 2011.
- Cleveland LE, Moshfegh AJ, Albertson AM, Goldman JD. Dietary intake of whole grains. J Am Coll Nutr 2000;19(3 Suppl):331S-8S.
- Guenther PM, Dodd KW, Reedy J, Krebs-Smith SM. Most Americans eat much less than recommended amounts of fruits and vegetables. J Am Diet Assoc 2006;106(9):1371-9.
- Ranganathan R, Nicklas T, Yang SJ, Berenson GS.
The nutritional impact of dairy product consumption on the dietary intakes of adults (1995-1996): the Bogalusa Heart Study. J Am Diet Assoc 2005;105(9):1391-400.
- Krauss RM, Eckel RH, Howard B, Appel LJ, Daniels SR, Deckelbaum RJ, et al. AHA Dietary Guidelines: revision 2000: a statement for healthcare professionals from the Nutrition Committee of the American Heart Association. Circulation 2000;102(18):2284-99.
- Byers T, Nestle M, McTiernan A, Doyle C, Currie-Williams A, Gansler T, Thun M; American Cancer Society 2001 Nutrition and Physical Activity Guidelines Advisory Committee. American Cancer Society guidelines on nutrition and physical activity for cancer prevention: reducing the risk of cancer with healthy food choices and physical activity. CA Cancer J Clin 2002;52(2):92-119.
- American Diabetes Association. How to prevent prediabetes. Accessed July 10, 2011.
- Glanz K, Sallis JF, Saelens BE, Frank LD. Nutrition Environment Measures Survey in Stores (NEMS-S): development and evaluation. Am J Prev Med 2007;32(4):282-9.
- Powell LM, Slater S, Mirtcheva D, Bao Y, Chaloupka FJ. Food store availability and neighborhood characteristics in the United States. Prev Med 2007;44(3):189-95.
- Jetter KM, Cassady DL. The availability and cost of healthier food alternatives. Am J Prev Med 2006;30(1):38-44.
- Liese AD, Weis KE, Pluto D, Smith E, Lawson A. Food store types, availability, and cost of foods in a rural environment. J Am Diet Assoc 2007;107(11):1916-23.
- Glanz K, Sallis JF, Saelens BE, Frank LD. Healthy nutrition environments: concepts and measures. Am J Health Promot 2005;19(5):330-3, ii.
- US Census Bureau: Leon County, FL fact sheet, 2000. http://factfinder.census.gov/servlet/SAFFFacts?_event=&geo_id=05000US12073&_geoContext=01000US|04000US12|05000US12073&_street=&_county=leon+county&_cityTown=leon+county&_state=04000US12&_zip=&_lang=en&_sse=on&ActiveGeoDiv=&_useEV=&pctxt=fph&pgsl=050&_submenuId=factsheet_1&ds_name=ACS_2007_3YR_SAFF&_ci_nbr=null&qr_name=null&reg=&_keyword=&_industry=.
Accessed April 23, 2009.
- Florida Department of Health. Leon County chronic disease profile; 2008. Office of Health Statistics and Assessment. Accessed September 3, 2009.
- US Census Bureau: census tracts and block numbering areas, 2000. http://www.census.gov/geo/www/cen_tract.html. Accessed April 23, 2009.
- Morland K, Filomena S. Disparities in the availability of fruits and vegetables between racially segregated urban neighborhoods. Public Health Nutr 2007;10(12):1481-9.
- Zenk SN, Schulz AJ, Israel BA, James SA, Bao S, Wilson ML. Fruit and vegetable access differs by community racial composition and socioeconomic position in Detroit, Michigan. Ethn Dis 2006;16(1):275-80.
- Florida Department of Agriculture and Consumer Services: Florida Administrative Code food permits, 2009. https://www.flrules.org/gateway/ruleno.asp?ID=5K-4.020. Accessed April 20, 2009.
- Florida Department of Health. Percent of the total population below poverty level, 2000. Office of Health Statistics and Assessment. http://www.floridacharts.com/charts/chart.aspx. Accessed April 20, 2009.
- US Department of Agriculture nutrient database. www.nal.usda.gov/fnic/foodcomp/search/. Accessed April 25, 2009.
- Block D, Kouba J. A comparison of the availability and affordability of a market basket in two communities in the Chicago area. Public Health Nutr 2006;9(7):837-45.
- Connell CL, Yadrick MK, Simpson P, Gossett J, McGee BB, Bogle ML. Food supply adequacy in the Lower Mississippi Delta. J Nutr Educ Behav 2007;39(2):77-83.
- Crockett EJ, Clancy KL, Bowering J. Comparing the cost of a thrifty food plan market basket in three areas of New York state. J Nutr Educ 1992;24(1):71S-8S.
- Horowitz CR, Colson KA, Hebert PL, Lancaster K. Barriers to buying healthy foods for people with diabetes: evidence of environmental disparities. Am J Public Health 2004;94(9):1549-54.
- Cassady D, Jetter KM, Culp J. Is price a barrier to eating more fruits and vegetables for low-income families?
J Am Diet Assoc 2007;107:1909-15.
- Hosler AS, Varadarajulu D, Ronsani AE, Fredrick BL, Fisher BD. Low-fat milk and high-fiber bread availability in food stores in urban and rural communities. J Public Health Manag Pract 2006;12(6):556-62.
- Rigby S, Leone AF, Kim H, Betterley C, Johnson MA, Kurtz H, Lee JS. Food deserts in Leon County, FL: disparate distribution of Supplemental Nutrition Assistance Program accepting stores by neighborhood characteristics. J Nutr Educ Behav
- Zenk SN, Schulz AJ, Israel BA, James SA, Bao S, Wilson ML. Neighborhood racial composition, neighborhood poverty, and the spatial accessibility of supermarkets in metropolitan Detroit. Am J Public Health 2005;95(4):660-7.
- Zenk SN, Grigsby-Toussaint DS, Curry SJ, Berbaum M, Schneider L. Short-term temporal stability in observed retail food characteristics. J Nutr Educ Behav 2010;42(1):26-32.
- Coulton CJ, Korbin J, Chan T, Su M. Mapping residents’ perceptions of neighborhood boundaries: a methodological note. Am J Community Psychol 2001;29(2):371-83.
- Baker EA, Schootman M, Barnidge E, Kelly C. The role of race and poverty in access to foods that enable individuals to adhere to dietary guidelines. Prev Chronic Dis 2006;3(3):A76. Accessed July 29, 2011.
Historically, multiple sclerosis (MS) has been associated with many different viruses, including several members of the Herpesviridae family. However, no human or animal virus has been identified as a true “cause” of MS; rather, the epidemiologic and diagnostic data suggest that viral infection may be a cofactor affecting the pathogenesis of MS. Human herpesvirus-6 (HHV-6) is a ubiquitous herpesvirus associated with a common childhood illness, roseola, and this virus is one of those most recently associated with MS. During the past decade, a number of investigations have examined anti–HHV-6-specific antibody responses, HHV-6 viral DNA, or HHV-6 presence in central nervous system (CNS) tissue in both MS patients and controls. There is a growing body of evidence associating HHV-6 infection of the CNS with MS in at least a subpopulation of patients, although the specific factors that define the vulnerable subpopulation(s) of MS patients have not been elucidated. This evidence is provocative but not definitive, and it does not distinguish between HHV-6 as a causal agent in MS versus HHV-6 as a cofactor. Although more clinically based data are needed, the controversy surrounding HHV-6 and MS has again focused attention on the role of viral infection in the clinical and pathologic course of MS.
Suggested citation: Delgado S, McCarthy M. Is there a role for human herpesvirus-6 in the course of multiple sclerosis? Int J MS Care [serial online]. March 2002;4(1).
Multiple sclerosis (MS) is a chronic, demyelinating disorder in which the immune system attacks the white matter of the central nervous system (CNS).
Mechanisms involved in the causes of this disease are complex and multifactorial.1 Evidence based on animal disease models, such as that derived from Theiler’s murine encephalomyelitis virus infection, supports the theory that infectious agents, including human viruses, may play an important role in the pathogenesis of MS in genetically susceptible individuals. Historically, MS has been associated with many different viruses, including several members of the Herpesviridae family.5 However, no human or animal virus has been identified as a true “cause” of MS; rather, the epidemiologic and diagnostic data suggest that viral infection may be a cofactor affecting the pathogenesis of MS.6 Among human viruses, herpesviruses have certain features that are potentially relevant to the pathogenesis of MS.6 Herpesviruses typically cause primary infection in childhood and then follow a lifelong course of latent and recurrent infection, which can persist in the CNS. Several herpesviruses can induce demyelination as well. Human herpesvirus-6 (HHV-6) is a ubiquitous herpesvirus associated with a common childhood illness, roseola (Table 1). This virus is one of those most recently associated with MS.7,8 The controversy surrounding HHV-6 and MS has again focused attention on the role of viral infection in the clinical and pathologic course of MS. A viral infection can cause tissue damage within the CNS by different mechanisms. First, a virus may have a direct cytopathic effect on the cell it infects. In this instance, viral replication and the production of progeny virus can destroy the cell’s specialized function(s). An example of this is the CNS demyelination seen in progressive multifocal leukoencephalopathy, a demyelinating disorder in which a polyomavirus (JC) infects astrocytes and oligodendrocytes. Because oligodendrocytes are the principal cells responsible for myelin production in the CNS, their destruction by the JC virus results in loss of myelin.
Second, viral infection can act to “trigger” MS attacks by stimulating antiviral immune responses that may not be properly regulated. Increased risk of clinical MS exacerbations following viral infection has been described for upper respiratory tract infections.9,10 Viruses may act to trigger a local immune response to viral proteins juxtaposed to normal myelin antigens in viral-infected brain or spinal cord, causing secondary inflammation and demyelination.11 Alternatively, viral infection may trigger an autoimmune reaction by “molecular mimicry,” whereby a virus can express antigens with structural homology to normal myelin antigen and thus elicit a mistaken autoimmune attack.12 In this instance, a viral infection can stimulate antigen-specific clones of T cells, which, after being activated, could enter the CNS and stimulate immune attack on normal “self” antigens expressed within myelinated tissues, again causing damage to myelin. Some authors have suggested that the pathogenesis of demyelination in MS may be heterogeneous within different plaques and among different MS patients.13 This concept raises the possibility that different causal agents could be associated with MS lesions, and thus both viral and autoimmune mechanisms could be involved in selected subpopulations of patients.14

Table 1. Overview of Human Herpesvirus-6
- Strain variants: HHV-6A, HHV-6B
- Target CNS cells: astrocytes, oligodendrocytes
- Mechanisms of cell destruction: cell death (apoptosis); induction of inflammatory cytokines (eg, TNF-α = tumor necrosis factor alpha)

HHV-6 was first isolated in 1986 from lymphocytes of patients with lymphoproliferative disorders.15 It is a member of the β-herpesvirus subgroup, closely related to cytomegalovirus (CMV) and human herpesvirus-7 (HHV-7).16 The viral genome is arranged as a linear segment of double-stranded DNA.17 HHV-6 can be separated into two strain variants, HHV-6A and HHV-6B, according to DNA restriction analysis, in vitro cellular tropism, and antigenic differences detected by monoclonal antibodies.18,19 These two HHV-6 variants share a genomic homology of about 94% to 96%.20 Variant HHV-6A is considered more neurotropic than is HHV-6B in that HHV-6A is more frequently identified in cerebrospinal fluid (CSF) than HHV-6B is. However, HHV-6B predominates as the persistent variant in peripheral blood mononuclear cells from both children and adults.21 The two variants may have distinct immunologic properties; HHV-6A can infect individuals with prior exposure to HHV-6B. Thus, immunity to HHV-6B does not necessarily prevent infection by HHV-6A. HHV-6 grows most productively in CD4-expressing T-lymphocytes.22 Efficient HHV-6 replication in primary lymphocyte culture in the laboratory setting requires activation of T-lymphocytes with a mitogen such as phytohemagglutinin and interleukin 2 to stimulate cell proliferation.15,23,24 HHV-6 also infects other immune system cells such as CD8-expressing T-lymphocytes, natural killer cells, and macrophages.25,26 The cell surface receptor molecule for HHV-6 has been reported to be CD46.27 CD46 is a glycoprotein present on the surface of all nucleated cells. It protects against spontaneous complement activation on autologous cells.
Cermelli and Jacobson recently postulated that virus-induced down-regulation or inactivation of CD46 in genetically predisposed individuals could secondarily allow complement activation, which may lead to myelin damage and increased immune activity against myelin antigens.5 HHV-6 infection can induce apoptosis of CD4-expressing T-lymphocytes in vitro and in vivo.28,29 HHV-6–mediated apoptosis may be caused by cytokines, inflammatory molecules secreted by virus-infected immune cells.30 This could explain the reported susceptibility to apoptosis of uninfected CD4-expressing T-lymphocytes in human HHV-6 infection in vivo.29 The induction of apoptosis in CD4-expressing cells in vivo could cause a dysregulation of the immune system and have important significance in the course of autoimmune disorders such as MS. HHV-6 also has demonstrated neurotropism, that is, the capacity to grow in the cell types that are found in the brain and spinal cord, including embryonic glia, glioblastoma, and neuroblastoma cell lines, and human fetal astrocytes.25,31-33 HHV-6 infection in human fetal astrocytes can activate the HIV-1 major gene promoter34; therefore, HHV-6 may act as a cofactor in AIDS-related neurologic disorders and play a similar role in the pathogenesis of MS in association with other viruses. HHV-6 can also infect cultured adult oligodendroglia and microglia,35 which establishes an experimental basis for predicting that HHV-6 can infect oligodendrocytes surrounding MS plaques. HHV-6 has also been reported to infect human endothelial cells in vitro.36 Infection of cerebral vascular endothelial cells could give rise to a chronic inflammation of blood vessels (vasculitis) induced by HHV-6, which may be a mechanism of CNS complications of HHV-6 infection.37 HHV-6 vasculitis would lead to breakdown of the blood-brain barrier and to increased passage of potentially autoreactive immune system cells into the CNS. 
Therefore, antigens expressed within myelin would be more accessible to immunologic attack.

Infection of the CNS

Epidemiologic studies indicate that more than 90% of all individuals are exposed to HHV-6 during early childhood, probably before age 2.38,39 The presence of HHV-6 DNA in the CSF has been reported in children with repeated febrile seizures after primary infection40 and in the CSF of seven of 10 children with exanthem subitum.41 HHV-6 DNA has been detected by polymerase chain reaction (PCR) technique in 43% to 85% of specimens of normal human brain tissue.42-44 Studies have shown an association of HHV-6 with meningitis or encephalitis45,46 and with increased intrathecal production of anti–HHV-6 early antigen immunoglobulin M (IgM), a possible indicator of viral reactivation in patients with meningitis or encephalitis.47 Knox and colleagues directly demonstrated by immunohistochemistry the presence of HHV-6–infected astrocytes, oligodendrocytes, and neurons in autopsy brain tissue from an HIV-infected child with fulminant encephalitis.48 These findings suggest that HHV-6 can infect the CNS directly and remain in a latent state in normal or immune-suppressed individuals. Thus, the CNS could be a reservoir of latent or chronic HHV-6 infection, and reactivation could occur locally.40 There is additional neuropathologic evidence linking HHV-6 infection with demyelination in the CNS.
Yanagihara et al described HHV-6–infected neurons and astrocytes in areas of white matter demyelination found in an adult bone marrow transplant patient with encephalitis.49 HHV-6 viral particles were demonstrated within oligodendrocytes in multifocal demyelinating white matter lesions of a patient with fulminant encephalomyelitis.50 HHV-6–infected cells have been found in association with areas of active demyelination in postmortem tissues from a patient with subacute demyelinating leukoencephalitis who was previously diagnosed as having MS.51 In a case report, HHV-6 antigens were demonstrated in astrocytes from a patient with chronic myelopathy and progressive spastic paraparesis associated with demyelination, axonal loss, chronic inflammation, and gliosis.52

HHV-6 and MS: Are They Linked?

Several considerations make HHV-6 a strong candidate to play a role in the course of MS, either as a causal agent or as a cofactor. HHV-6 is a ubiquitous, as well as lymphotropic and neurotropic, virus. It can cause common primary infection in early childhood, and it may remain latent in CNS tissue until optimal conditions permit its reactivation and replication. HHV-6 may induce secretion of inflammatory cytokines such as tumor necrosis factor alpha (TNF-α), which could promote immune-mediated demyelination in MS.53,54 As previously described, there is also neuropathologic evidence of an association of HHV-6 with active demyelinating lesions within the CNS. During the past decade, a number of studies have suggested an association between HHV-6 and MS. These investigations have examined anti–HHV-6-specific antibody responses, HHV-6 viral DNA, or HHV-6 presence in CNS tissue.
In 1993, Sola et al reported significantly higher titers of serum anti–HHV-6 antibodies in MS patients compared to normal controls.55 In 1997, Soldan et al documented increased serum IgM response to HHV-6 early antigen (p41/38) in relapsing-remitting MS patients compared with chronic progressive MS patients or patients with other neurologic or autoimmune diseases or normal controls.56 These researchers also detected HHV-6 DNA in the serum of 15 out of 50 MS patients but in none of 47 non-MS cases studied by PCR. Ablashi et al also found increased serum IgM and IgG responses to HHV-6 early antigen p41/38 in MS patients compared with patients with other neurologic diseases and normal controls.57 However, in more recent studies, comparison of serum HHV-6 antibody between MS patients and normal controls showed no significant differences. Studies examining HHV-6 DNA have yielded conflicting results. HHV-6 DNA has been detected in the CSF of MS patients but in none of the controls in two reports.57,59 Other studies have failed to find significant differences in HHV-6 DNA between MS and control groups60 or failed to detect serum or CSF HHV-6 DNA in MS patients.61,62 Fillet et al examined a group of newly diagnosed MS patients (before treatment), but they did not find differences in either the serum or CSF HHV-6 DNA between these MS patients and patients with other neurologic diseases or normal individuals.63 In a recent analysis of HHV-6 viral DNA transcription, Rotola et al concluded that HHV-6 and HHV-7 gene sequences are similarly prevalent in peripheral blood mononuclear cells of MS patients and controls.64 However, these viral gene sequences were maintained in a “non-transcriptional state,” typical of latency.
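As a back-of-envelope illustration of why the Soldan et al serum DNA finding (15 of 50 MS patients positive versus 0 of 47 controls) is striking, one can ask how likely such a split would be if DNA positivity were unrelated to diagnosis. The calculation below is not from the article; it is a simple hypergeometric (Fisher-type) probability computed with the Python standard library.

```python
from math import comb

# Counts reported by Soldan et al.: 15/50 MS patients had serum HHV-6 DNA,
# versus 0/47 non-MS controls (97 subjects, 15 positives in total).
# If positivity were independent of diagnosis, the chance that all 15
# positives would land in the MS group is a hypergeometric probability:
p = comb(50, 15) * comb(47, 0) / comb(97, 15)
print(f"p = {p:.1e}")  # on the order of 1e-5
```

A probability this small is one reason the finding drew attention, although, as the conflicting studies above show, a statistical association alone cannot distinguish a causal agent from a cofactor or bystander.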
Using immunohistochemical techniques, Challoner et al found expression of HHV-6 antigens in the nuclei of oligodendrocytes associated with demyelinated plaques in brain sections from MS patients but not from controls.43 Knox et al also described HHV-6–infected cells in 90% of CNS tissue sections showing active demyelination, ie, sections associated with inflammatory cells that were obtained from patients with definite MS.65 This compared with 13% of tissue sections without active disease, suggesting that HHV-6 infection is prevalent in areas of active demyelination. Using PCR to study postmortem brain samples of MS patients, Sanders et al reported a higher frequency of gene sequences from multiple herpesviruses, including HHV-6, herpes simplex virus (HSV), and varicella zoster virus (VZV), in active demyelinated plaques compared with inactive plaques.44 However, the researchers did not find statistically significant differences between MS cases and controls. Thus HHV-6 can associate with other viruses, such as HSV and VZV, that can cause persistent or latent infection of nervous tissues and reactivate in demyelinating plaques. This association may have significance for the pathologic course of MS. HHV-6 could function as a cofactor to facilitate concurrent chronic replication of these other viruses within neural cells. Thus, HHV-6 would indirectly support or promote viral-induced mechanisms of neurologic damage. Conflicting results among these studies of HHV-6 in MS patients may be due to multiple causes, such as variation in technical protocols, detection of different variants of HHV-6 with different antigenic reactivity, and differences among MS subpopulations studied.5 Active HHV-6 viral infection in some MS patients may fluctuate over time during disease progression,56 causing inconsistency in viral detection. 
One recent study found an increased prevalence of HHV-6A in patients with MS,66 and earlier studies reported an increased association of MS with HHV-6B.43,57,67 It remains to be determined whether both HHV-6 strain variants could be involved in the development and course of MS in different subpopulations of patients, which possibly could be distinguished by clinical, immune, or genetic parameters. Given the marked heterogeneity of MS lesions and possible involvement of different pathogenic mechanisms (recently reviewed by Noseworthy et al68), HHV-6 could be a more or less significant factor in specific, genetically susceptible subpopulations of patients. It is not possible at this time to conclude that HHV-6 “causes” MS. There is a growing body of evidence associating HHV-6 infection of the CNS with MS in at least a subpopulation of patients, although the specific factors that define the vulnerable subpopulation(s) of MS patients have not been elucidated. This evidence is provocative but not definitive, and it does not distinguish between HHV-6 as a causal agent in MS versus HHV-6 as a cofactor. The pathogenesis and course of HHV-6 infection have interesting parallels to those of MS, and the viral cycles of latency and reactivation could contribute to clinical remissions and exacerbations in susceptible MS patients. Whether or not HHV-6 is the causal agent of the initial MS “attack,” a local inflammatory reaction in the CNS could stimulate HHV-6 replication in oligodendrocytes by recruiting immunologically activated, CD4-expressing T-lymphocytes that secrete inflammatory molecules. Viral replication in these oligodendrocytes and T-lymphocytes could cause further inflammation and both immune and viral-associated demyelination. This could translate clinically into more severe relapses and higher risk for progressive or permanent disability. Given the provocative but inconclusive evidence for the role of HHV-6 in the course of MS, more clinically based data are needed.
Do HHV-6 latency and reactivation parallel clinical remissions and exacerbations in HHV-6–infected MS patients? Should MS patients at risk for HHV-6 reactivation receive anti-viral medication during MS exacerbations or even during remissions? A useful approach to these questions would be a longitudinal clinical and virologic study of HHV-6 gene expression and specific antibody responses in MS patients to determine whether these markers of HHV-6 infection correlate over time with MS-related clinical and MRI findings. Subpopulations of MS patients with evidence of HHV-6 infection could then participate in clinical trials of effective anti–HHV-6 drugs. A placebo-controlled, double-blind trial of the anti-herpesvirus agent acyclovir in 60 MS patients has already been reported.69 Acyclovir tended to reduce the frequency of MS exacerbations and significantly reduced titers of anti-HSV IgG antibodies in treated patients. However, acyclovir is not the optimal anti-viral agent to use against HHV-6. It is commonly used to treat HSV and VZV infections associated with fever blisters or shingles. Ganciclovir and foscarnet, used in the treatment of CMV infection, are more potent anti-viral drugs against beta-herpesviruses such as CMV and HHV-6.6 In addition to specific anti-herpesvirus drugs, the beta-interferons are known to have broad anti-viral effects,6 so these MS medications may have some adjunctive effects on HHV-6 infection. There may also be synergy or cooperativity between interferons and specific anti-herpesvirus drugs,6 and this possibility could be studied specifically with respect to HHV-6 in MS patients taking interferons. Ultimately, well-controlled clinical trials regarding this issue are the best means to establish whether MS patients could benefit from anti-viral medications and to establish effective doses of these agents and proper protocols for using them as either therapeutic or prophylactic agents.
- Lucchinetti CF, Brueck W, Rodriguez M, Lassman H. Multiple sclerosis: lessons from neuropathology. Semin Neurol. - Johnson RT. The virology of demyelinating diseases. Ann Neurol. - Miller SD, Vanderlugt CL, Begolka WS, et al. Persistent infection with Theiler’s virus leads to CNS autoimmunity via epitope spreading. Nat Med. 1997;3:1133-1136. - Dalgleish AG. Viruses and multiple sclerosis. Acta Neurol Scand Suppl. - Cermelli C, Jacobson S. Viruses and multiple sclerosis. Viral - Bergstrom T. Herpesviruses—a rationale for antiviral treatment in multiple sclerosis. Antiviral Res. 1999;41:1-19. - Jacobson S. Association of human herpesvirus-6 and multiple sclerosis: here we go again? J Neurovirol. 1998;4:471-473. - Friedman JE, Lyons MJ, Cu G, et al. The association of the human herpesvirus-6 and MS. Mult Scler. 1999;5:355-362. - Panitch HS. Influence of infection on exacerbations of multiple sclerosis. Ann Neurol. 1994;36:S25-S28. - Edwards S, Zvartau M, Clarke H, et al. Clinical relapses and disease activity on magnetic resonance imaging associated with viral upper respiratory tract infections in multiple sclerosis. J Neurol Neurosurg Psychiatry. 1998;64:736-741. - Meinl E. Concepts of viral pathogenesis of multiple sclerosis. Curr Opin Neurol. 1999;12:303-307. - Wucherpfennig KW, Strominger JL. Molecular mimicry in T cell-mediated autoimmunity: viral peptides activate human T cell clones specific for myelin basic protein. Cell. - Lucchinetti CF, Bruck W, Rodriguez M, Lassman H. Distinct patterns of multiple sclerosis pathology indicates heterogeneity on pathogenesis. Brain Pathol. 1996; 6:259-274. - Monteyne P, Bureau JF, Brahic M. Viruses and multiple sclerosis. Curr Opin Neurol. 1998;11:287-291. - Salahuddin SZ, Ablashi DV, Markham PD, et al. Isolation of a new virus, HBLV, in patients with lymphoproliferative disorders. Science. - Kimberlin DW, Whitley RJ. Human herpesvirus-6: neurologic implications of a newly-described viral pathogen. J Neurovirol. 
- Josephs SF, Salahuddin SZ, Ablashi DV, et al. Genomic analysis of the human B-lymphotropic virus (HBLV). Science. - Ablashi D, Agut H, Berneman Z, et al. Human herpesvirus-6 strain groups: a nomenclature. Arch Virol. 1993;129:363-366. - Dockrell DH, Smith TF, Paya CV. Human herpesvirus 6. Mayo Clin - Aubin JT, Agut H, Collandre H, et al. Antigenic and genetic differentiation of two putative types of human herpes virus 6. J Virol Methods. 1993;41:223-234. - Hall CB, Caserta MT, Schnabel KC, et al. Persistence of human herpesvirus 6 according to site and variant: possible greater neurotropism of variant A. Clin Infect Dis. 1998;26:132-137. - Lusso P, Markham PD, Tschachler E, et al. In vitro cellular tropism of human B-lymphotropic virus (human herpesvirus-6). J Exp Med. - Lopez C, Pellet P, Stewart J, et al. Characteristics of human herpesvirus-6. J Infect Dis. 1988;157:1271-1273. - Black JB, Sanderlin KC, Goldsmith CS, et al. Growth properties of human herpesvirus-6 strain Z29. J Virol Methods. - Ablashi DV, Lusso P, Hung CL, et al. Utilization of human hematopoietic cell lines for the propagation and characterization of HBLV (human herpesvirus-6). Int J Cancer. 1988;42:787-791. - Lusso P. Human herpesvirus 6 (HHV-6). Antiviral Res. - Santoro F, Kennedy PE, Locatelli G, et al. CD46 is a cellular receptor for human herpesvirus-6. Cell. 1999;99:817-827. - Inoue Y, Yasukawa M, Fujita S. Induction of T-cell apoptosis by human herpesvirus 6. J Virol. 1997;71:3751-3759. - Yasukawa M, Inoue Y, Ohminami H, et al. Apoptosis of CD4+ T lymphocytes in human herpesvirus-6 infection. J Gen Virol. - Flamand L, Gosselin J, D’Addario M, et al. Human herpesvirus 6 induces interleukin-1b and tumor necrosis factor alpha, but not interleukin-6, in peripheral blood mononuclear cell cultures. J - Tedder RS, Briggs M, Cameron CH, et al. A novel lymphotropic herpesvirus. Lancet. 1987;2:390-392. - Levy JA, Ferro F, Lennette ET, et al. 
Characterization of a new strain of HHV-6 (HHV-6SF) recovered from the saliva of an HIV-infected individual. Virology. 1990;178:113-121. - He J, McCarthy M, Zhou Y, et al. Infection of primary human fetal astrocytes by human herpesvirus 6. J Virol. 1996;70:1296-1300. - McCarthy M, Auger D, He J, Wood C. Cytomegalovirus and human herpesvirus-6 trans-activate the HIV-1 long terminal repeat via multiple response regions in human fetal astrocytes. J Neurovirol. - Albright AV, Lavi E, Black JB, et al. The effect of human herpesvirus-6 (HHV-6) on cultured human neural cells: oligodendrocytes and microglia. J Neurovirol. 1998; 4:486-494. - Wu CA, Shanley JD. Chronic infection of human umbilical vein endothelial cells by human herpesvirus-6. J Gen Virol. - Yoshikawa T, Asano Y. Central nervous system complications in human herpesvirus-6 infection. Brain Dev. 2000;22:307-314. - Yoshikawa T, Suga S, Asano Y, et al. Distribution of antibodies to a causative agent of exanthem subitum (human herpesvirus-6) in healthy individuals. Pediatrics. 1989; 84:675-677. - Robinson WS. Human herpesvirus 6. Curr Clin Top Infect Dis. - Caserta MT, Hall CB, Schnabel K, et al. Neuroinvasion and persistence of human herpesvirus 6 in children. J Infect Dis. - Yamanishi K, Kondo K, Mukai T, et al. Human herpesvirus 6 (HHV-6) infection in the central nervous system. Acta Paediatr Jpn. - Luppi M, Barozzi P, Maiorana A, et al. Human herpesvirus 6 infection in normal human brain tissue. J Infect Dis. 1994;169:943-944. - Challoner PB, Smith KT, Parker JD, et al. Plaque-associated expression of human herpesvirus 6 in multiple sclerosis. Proc Natl Acad Sci U S A. 1995;92:7440-7444. - Sanders VJ, Felisan S, Waddell A, Tourtellotte WW. Detection of herpesviridae in postmortem multiple sclerosis brain tissue and controls by polymerase chain reaction. J Neurovirol. - Yoshikawa T, Nakashima T, Suga S, et al. 
Human herpesvirus-6 DNA in cerebrospinal fluid of a child with exanthem subitum and meningoencephalitis. Pediatrics. 1992;89:888-890. - McCullers JA, Lakeman FD, Whitley RJ. Human herpesvirus 6 is associated with focal encephalitis. Clin Infect Dis. - Patnaik M, Peter JB. Intrathecal synthesis of antibodies to human herpesvirus 6 early antigen in patients with meningitis/encephalitis. Clin Infect Dis. 1995;21:715-716. - Knox KK, Harrington DP, Carrigan DR. Fulminant human herpesvirus six encephalitis in a human immunodeficiency virus–infected infant. J Med Virol. 1995; 45:288-292. - Yanagihara K, Tanaka-Taya K, Itagaki Y, et al. Human herpesvirus 6 meningoencephalitis with sequelae. Pediatr Infect Dis J. - Novoa LJ, Nagra RM, Nakawatase T, et al. Fulminant demyelinating encephalomyelitis associated with productive HHV-6 infection in an immunocompetent adult. J Med Virol. 1997;52:301-308. - Carrigan DR, Harrington D, Knox KK. Subacute leukoencephalitis caused by CNS infection with human herpesvirus-6 manifesting as acute multiple sclerosis. Neurology. 1996;47:145-148. - Mackenzie IR, Carrigan DR, Wiley CA. Chronic myelopathy associated with human herpesvirus-6. Neurology. 1995;45:2015-2017. - Selmaj KW, Raine CS. Tumor necrosis factor mediates myelin and oligodendrocyte damage in vitro. Ann Neurol. 1988;23:339-346. - Hofman FM, Hinton DR, Johnson K, Merrill JE. Tumor necrosis factor identified in multiple sclerosis brain. J Exp Med. - Sola P, Merelli E, Marasca R, et al. Human herpesvirus 6 and multiple sclerosis: survey of anti–HHV-6 antibodies by immunofluorescence analysis and of viral sequences by polymerase chain reaction. J Neurol Neurosurg Psychiatry. 1993;56:917-919. - Soldan SS, Berti R, Salem N, et al. Association of human herpes virus 6 (HHV-6) with multiple sclerosis: increased IgM response to HHV-6 early antigen and detection of serum HHV-6 DNA. Nat Med. - Ablashi DV, Lapps W, Kaplan M, et al. 
Human herpesvirus-6 (HHV-6) infection in multiple sclerosis: a preliminary report. Mult Scler. - Enbom M, Wang FZ, Fredrikson S, et al. Similar humoral and cellular immunological reactivities to human herpesvirus 6 in patients with multiple sclerosis and controls. Clin Diagn Lab Immunol. - Wilborn F, Schmidt CA, Brinkmann V, et al. A potential role for human herpesvirus type 6 in nervous system disease. J Neuroimmunol. - Liedtke W, Malessa R, Faustmann PM, Eis-Hubinger AM. Human herpesvirus 6 polymerase chain reaction findings in human immunodeficiency virus associated neurological disease and multiple sclerosis. J Neurovirol. 1995;1:253-258. - Martin C, Enbom M, Soderstrom M, et al. Absence of seven human herpesviruses, including HHV-6, by polymerase chain reaction in CSF and blood from patients with multiple sclerosis and optic neuritis. Acta Neurol Scand. 1997;95:280-283. - Goldberg SH, Albright AV, Lisak RP, Gonzalez-Scarano F. Polymerase chain reaction analysis of human herpesvirus-6 sequences in the sera and cerebrospinal fluid of patients with multiple sclerosis. J - Fillet AM, Lozeron P, Agut H, et al. HHV-6 and multiple sclerosis [letter]. Nat Med. 1998;4:537-538. - Rotola A, Caselli E, Cassai E, et al. Novel human herpesviruses and multiple sclerosis. J Neurovirol. 2000;6(suppl 2):S88-S91. - Knox KK, Brewer JH, Henry JM, et al. Human herpesvirus 6 and multiple sclerosis: systemic active infections in patients with early disease. Clin Infect Dis. 2000;31:894-903. - Akhyani N, Berti R, Brennan MB, et al. Tissue distribution and variant characterization of human herpesvirus (HHV)-6: increased prevalence of HHV-6A in patients with multiple sclerosis. J Infect - Ongradi J, Rajda C, Marodi CL, et al. A pilot study on the antibodies to HHV-6 variants and HHV-7 in CSF of MS patients. J - Noseworthy JH, Lucchinetti C, Rodriguez M, Weinshenker BG. Multiple sclerosis. N Engl J Med. 2000;343:938-952. - Lycke J, Svennerholm B, Hjelmquist E, et al. 
Acyclovir treatment of relapsing-remitting multiple sclerosis: a randomized, placebo-controlled, double-blind study. J Neurol. 1996;243:214-224.
Subdivision of Journal

Learning Objectives:
- What is subdivision of journal?
- Define and explain cash book and bank reconciliation statement.
- What are the different types of cash book?
- Write single, double, and three column cash book.
- Prepare a bank reconciliation statement.

Bills Receivable Book

The bills receivable book is used to record the bills received from debtors. When a bill is received, its details are recorded in the bills receivable book. The bills receivable book is ruled according to the requirements of a particular account.
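One of the objectives above, preparing a bank reconciliation statement, follows a fixed sequence of adjustments that can be sketched in code. The item names and figures below are invented for illustration; they are not taken from the text.

```python
def reconcile(cash_book_balance, unpresented_cheques, uncredited_deposits,
              unrecorded_bank_charges=0.0):
    """Minimal bank reconciliation sketch (illustrative figures only).

    Starting from the cash book balance, first correct the cash book for
    items on the bank statement not yet entered in it (e.g. bank charges),
    then adjust for timing differences to arrive at the balance the bank
    statement should show.
    """
    adjusted_cash_book = cash_book_balance - unrecorded_bank_charges
    balance_per_bank = (adjusted_cash_book
                        + sum(unpresented_cheques)   # cheques issued, not yet presented
                        - sum(uncredited_deposits))  # deposits lodged, not yet credited
    return adjusted_cash_book, balance_per_bank

# Example: cash book shows 5,000; cheques of 300 and 200 are unpresented;
# a 450 deposit is not yet credited; 50 of bank charges are not yet entered.
print(reconcile(5000.0, [300.0, 200.0], [450.0], unrecorded_bank_charges=50.0))
# → (4950.0, 5000.0)
```

If the computed balance per bank does not equal the figure on the actual bank statement, the difference points to an error or an unrecorded item in one of the two books.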
SENSORS Journal, July 2019

Injuries caused by the overstraining of muscles could be prevented by means of a system that detects muscle fatigue. Most of the equipment used to detect this is expensive, which raises the question of whether a low-cost surface electromyography (sEMG) system can reliably detect muscle fatigue. With this main goal, the contribution of this work is the design of a low-cost sEMG system that allows assessing when fatigue appears in a muscle. To that aim, low-cost sEMG sensors, an Arduino board and a PC were used, and their validity was then checked by means of an experiment with 28 volunteers. This experiment collected information from the volunteers, such as their level of physical activity, and invited them to perform an isometric contraction while an sEMG signal of their quadriceps was recorded by the low-cost equipment. After a wavelet filtering of the signal, root mean square (RMS), mean absolute value (MAV) and mean frequency (MNF) were chosen as representative features to evaluate fatigue. Results show that the behaviour of these parameters across time coincides with that reported in the literature (RMS and MAV increase while MNF decreases when fatigue appears). Thus, this work proves the feasibility of a low-cost system to reliably detect muscle fatigue. This system could be implemented in several fields, such as sport, ergonomics, rehabilitation or human-computer interaction.

Keywords: electromyography; low-cost hardware; validation
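The three fatigue features named in the abstract are standard time- and frequency-domain measures that are straightforward to compute from a windowed signal. The sketch below (using NumPy and a synthetic test signal, not the authors' data or their exact wavelet-filtering pipeline) shows one common way to obtain them.

```python
import numpy as np

def fatigue_features(x, fs):
    """RMS, MAV and mean frequency (MNF) of one sEMG window.

    x  : 1-D signal window
    fs : sampling rate in Hz
    """
    rms = np.sqrt(np.mean(x ** 2))          # root mean square
    mav = np.mean(np.abs(x))                # mean absolute value
    spectrum = np.abs(np.fft.rfft(x)) ** 2  # power spectrum of the window
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    mnf = np.sum(freqs * spectrum) / np.sum(spectrum)  # spectral centroid
    return rms, mav, mnf

# Synthetic check: a 50 Hz sine sampled at 1 kHz for 1 s should give
# RMS near 1/sqrt(2), MAV near 2/pi and MNF near 50 Hz.
fs = 1000
t = np.arange(0, 1, 1 / fs)
x = np.sin(2 * np.pi * 50 * t)
print(fatigue_features(x, fs))
```

In a fatigue study these features would be computed per sliding window and tracked over the contraction; the trend described in the abstract (rising RMS and MAV, falling MNF) is what signals fatigue.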
Resource Rebels: Native Challenges to Mining and Oil Corporations (2001, 241 pages) is a detailed and informative account of modern-day genocide that reveals the dark side of the global economic system. By exposing the continuing genocide of indigenous peoples throughout the world, Gedicks reveals the hidden cost of western “progress.” Uncovering the connections between genocide in remote corners of the globe and western demand for natural resources, Resource Rebels makes a compelling argument that the price of our oil is not paid at the pump but with the lives of indigenous people. Sitting on top of coveted natural resources, indigenous people find themselves on the frontline of the global economic war for energy supplies. Yet, they are fighting back. Al Gedicks relates how indigenous organizations opposing oil and mining on their ancestral lands are not only multiplying, but also reaching out from far-flung regions to forge political alliances with western sympathizers. Resource Rebels charts the emergence of a transnational movement of environmentalists, human rights workers, lawyers and indigenous organizations, rallying in opposition to corporate abuses. A detailed account is given of the networks of international supporters working to defend indigenous rights. The greater international recognition now granted to indigenous rights issues has allowed indigenous organizations to gain legitimacy in their own countries. The book’s forte is its fascinating study of the global systems of power driving the genocide of indigenous peoples. Resource Rebels documents the networks of multinational companies, international financiers, government officials, and international trade institutions that come together to profit from development. Gedicks relates how demand for fossil fuels and minerals has subjected indigenous communities to a new style of predatory global economics that breathes new life into historical structures of colonial oppression.
The book explores how the global elite sanction environmental racism with the belief that indigenous cultures are “an obstacle to progress” that must be assimilated into the “modern world.” Gedicks investigates the legal mechanisms—domestic laws, mining codes, environmental regulations—that are used and abused in the interests of politicians and corporations. He masterfully describes how the collusion between corporate and political interests breeds corruption and repression. Insight is given into how the lack of accountability and transparency in the activities of corporations and politicians eats away at the democratic foundations of society. In one of the strongest chapters of the book, Gedicks reveals that the arms industry is a primary consumer of minerals. To feed the need for arms, corporations drive deeper into resource frontiers to seek raw materials. Gedicks makes the observation that the fragile ecosystems of the resource frontiers will collapse long before their mineral wealth is exhausted. When the environmental impacts of mineral extraction are added to pollution caused by weapons production, a vicious cycle of environmental destruction emerges. Indigenous people sit in the eye of this storm. Rebels details the consequences for indigenous people of large-scale natural resource extraction projects: forced resettlement, toxic pollution, disease, loss of food and water supplies and the collapse of traditional ways of life. The “divide and conquer” tactics employed by corporations to break indigenous opposition are laid bare. Gedicks recounts how intimate connections between oil, mining, and militarization result in a familiar pattern of violent conflict in indigenous ancestral lands. Plentiful evidence is given of the involvement of corporations in human rights abuses, from oil companies in Nigeria to mining companies in West Papua. 
Given the odds facing indigenous communities in the path of large-scale development projects, Al Gedicks reasons that the presence of transnational supporters is vital to the continuation of indigenous resistance. Excluded from decision-making procedures, indigenous peoples face the key challenge of making their voice heard. Allies in Europe and the United States can speak out about human rights abuses against indigenous peoples without fear of retaliation. Gedicks states that the strategic use of information by transnational allies can expose the otherwise hidden activities of governments and corporations in remote areas. A key role for international allies lies in questioning current western energy consumption, and Gedicks makes a series of recommendations for more efficient energy use. Broad in its geographical scope, Resource Rebels tells of resource battles in Colombia, Ecuador, Nigeria, West Papua, the United States, and Canada. Different battles take different forms. In Ecuador alone, indigenous organizations and international allies have produced remarkable results using various tactics: massive civil disobedience, international legal action, constitutional reform, and international campaigns. The wealth of detail provided on key case studies makes Resource Rebels a valuable tool for international and indigenous activists. Readable and engrossing, this book will engage a wide audience. Gedicks concludes that indigenous people are the modern world’s equivalent of the miner’s canary. Their David-and-Goliath struggle against corporate power contains a warning to the world: our unsustainable natural resource consumption habits are not only a threat to the lives of indigenous peoples. They are a threat to the planet’s future.
Docetism was a heresy about Jesus that gained in popularity in the third century among those committed to Greek philosophy. Docetism is a term for a set of beliefs that were found in a number of heresies, including Marcionism and Gnosticism.

“Jesus Felt No Pain”

Unlike many early heresies that denied the divinity of Jesus, Docetism eliminates his humanity. Suggesting that Jesus only appeared to be human though he was in fact not, Docetism derives its name from the Greek word dokeo, which means “to seem or appear.” Those holding to Docetism believed that there was one eternal father who was eternally transcendent and therefore unable to experience any sort of human emotion or suffering. The idea that Jesus became human flesh (John 1:14) and experienced life as a human was unthinkable and offensive to this philosophy. The Gospel of Peter, an apocryphal book, illustrates a Docetic view. It says that during his crucifixion, Jesus “kept silence, as one feeling no pain,” which implied, as church historian J.N.D. Kelly notes, “that His bodily make-up was illusory.”

Jesus Truly Suffered

The orthodox early church was strongly opposed to Docetism. Irenaeus thought the teaching was so dangerous that he wrote a five-volume work (Against Heresies) against one of Docetism’s prominent teachers, Valentinus (c. 136–c. 165). Ignatius said that it would have been foolish for him to have been imprisoned for proclaiming one who merely appeared to suffer for his sake:

- Turn a deaf ear therefore when any one speaks to you apart from Jesus Christ, who was of the family of David, the child of Mary, who was truly born, who ate and drank, who was truly persecuted under Pontius Pilate, was truly crucified and truly died…. But if, as some godless men, that is, unbelievers, say, he suffered in mere appearance (being themselves mere appearances), why am I in bonds?
Polycarp makes the strongest possible charge against the Docetists by saying that “everyone who does not confess that Jesus Christ has come in the flesh is an anti-Christ,” echoing 1 John 4:2-3.

Jesus Came in the Flesh

As theologian Stephen Nichols points out, much contemporary popular theology tends to “view Jesus as sort of floating six inches off the ground as he walked upon the earth.” Downplaying or rejecting the true humanity of Jesus is common today, but it does not fit with the biblical picture of Jesus given to us in the Gospels. While on earth, Jesus experienced hunger (Matt. 4:2) and thirst (John 19:28), showed compassion (Matt. 9:36), was tired (John 4:6), felt sorrow to the point of weeping (John 11:35), and grew in wisdom (Luke 2:52). Yet, in all of his humanness, Jesus never sinned (Heb. 4:15).

Like Us in Every Way, Yet Without Sin

Avoiding Docetism is important because, as the author of Hebrews writes, Jesus “had to be made like his brothers in every respect, so that he might become a merciful and faithful high priest in the service of God, to make propitiation for the sins of the people” (Heb. 2:17). It is because Jesus was tempted as we are that he is able to sympathize with us in our weakness. Put bluntly, the whole of the atonement rests on Docetism being false. On this point, T. F. Torrance writes: “Any docetic view of the humanity of Christ snaps the lifeline between God and man, and destroys the relevance of the divine acts in Jesus for men and women of flesh and blood.” If Docetism is true and he was so heavenly that he only appeared human, then we can no longer place our confidence in Jesus Christ, who as truly God and truly man serves as the mediator between God and men.
- Describe, using a diagram, the circular flow of income between households and firms in a closed economy with no government. - Identify the four factors of production and their respective payments (rent, wages, interest and profit) and explain that these constitute the income flow in the model. - Outline that the income flow is numerically equivalent to the expenditure flow and the value of output flow. - Describe, using a diagram, the circular flow of income in an open economy with government and financial markets, referring to leakages/withdrawals (savings, taxes and import expenditure) and injections (investment, government expenditure and export revenue). - Explain how the size of the circular flow will change depending on the relative size of injections and leakages. We will then build up a model of the economy on the board and you will be given a worksheet to complete. Check out this physical model of the circular flow as well...
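The final bullet, on how injections and leakages change the size of the circular flow, can be sketched numerically. The toy model below is purely illustrative (the function name, parameters, and figures are invented for a worksheet-style example, not taken from the syllabus): leakages are treated as a fixed share of each period's income, injections as a fixed sum added back in, so the flow grows when injections exceed leakages and settles where the two balance.

```python
def circular_flow(initial_income, injections, leakage_rate, periods):
    """Toy circular-flow simulation.

    Each period, a fixed share of income leaks out (savings + taxes +
    import expenditure) while a fixed sum is injected back in
    (investment + government expenditure + export revenue).  The flow
    expands when injections exceed leakages, contracts when leakages
    dominate, and settles where injections = leakages, i.e. at
    income = injections / leakage_rate.
    """
    income = initial_income
    history = [income]
    for _ in range(periods):
        income = income * (1 - leakage_rate) + injections
        history.append(income)
    return history

# Injections of 20 per period against a 20% leakage rate settle at 100,
# so a flow starting below that level climbs toward it:
print(circular_flow(80, 20, 0.2, 5))
```

Running it from an initial income of 100 instead leaves the flow unchanged, which is the equilibrium condition the syllabus point describes.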
There is much more to digital currency than the mainstream media's treatment of it suggests. Digital currency may be defined as a form of non-traditional currency: an internet-based unit of money. It is not so much a virtual gold or silver as the freedom to create money within the confines of your computer. It is not just the cheapest way to store value; it has features that are a remarkable asset to society. Using a computer connected to the internet is the easiest way for small businesses to accept payments online. Payments can be sent in multiple ways, such as PayPal, credit cards, and wire transfers, and those with more money can use prepaid credit cards or check the balances on their accounts. Many complain about fractional reserve banking because it reduces the ability of banks to lend money, and many can see what a great advantage it would be if everyone were able to create their own money without the danger of someone turning around and selling it at a profit. Currently, cash is the most important asset class keeping most consumers going: without cash, there would be no way to buy food, gas, and clothes, or to pay the other bills you might have. Consumers are buying more personal property than they can afford, because they need it for their lifestyles or to make it through difficult times. Of course, most people would rather have extra money in their bank account than risk buying something with a personal credit card. People can purchase things with a credit card, or even run up a big debt to the bank, for items they could easily afford; when they buy something, they pay the merchant through the card and then pay the interest on the account attached to it. This makes computer technology a boon to many consumers.
They are able to connect to the internet with a single device and print out a check for something they are buying. For example, to purchase food they could print a check and either send it over the internet or deposit it into their bank account. The consumer does not have to worry about losing a checkbook, because they can still make the payment with a credit card; and those without a credit card can still be accepted if they mail the check. Imagine how convenient it would be to have a credit card that can send you money you never have to wait for. The reason the credit card avoids waiting times for transactions that don't involve actual cash is that the card has already paid for the transaction. Those with a checking account can also use the checkbook feature, while those with no checking account can still shop and be paid from their bank account. It seems that digital currency is about a lot more than the average person getting a job or making a large purchase. It is about eliminating the headache of waiting for credit card payments to be processed and having a way to keep track of all the purchases you have made on your computer.
New catalyst could lead to less expensive fuel cells
February 20, 2015

Research team develops a new catalyst for fuel cells comprised of iron and carbon

Researchers from Finland's Aalto University have successfully developed a new catalyst that can be used in fuel cells. The catalyst is meant to serve as an alternative to the platinum catalysts that are common in fuel cell energy systems. Conventional catalysts are notoriously expensive because they are made from platinum, and this affects the total cost of a fuel cell system. By replacing platinum, fuel cells can become more affordable and more attractive to those interested in renewable energy.

New catalyst could be a suitable replacement for those that use platinum

The new catalyst developed by the research team is comprised of iron and carbon, which makes it significantly less expensive than catalysts made from platinum. According to the researchers, the new catalyst is as efficient as those that use platinum, which is a major success in the field of fuel cell research. In order to produce the catalyst, the research team developed a new manufacturing process that allowed them to make use of inexpensive materials.

Carbon nanotubes, graphene, and iron make up the catalyst

Carbon nanotubes are used for much of the catalyst's structure, providing much-needed support. This material also conducts electricity extremely well, making it an ideal tool for catalyst development. The nanotubes are applied to the catalyst's iron structure, which is covered with graphene. The manufacturing process can be completed in one stage, making it highly efficient and significantly less expensive than conventional catalyst manufacturing.

Better catalysts could reduce the costs associated with fuel cells

Finding ways to reduce the costs associated with fuel cells has become quite important. These energy systems are gaining popularity throughout the world and are being used to power clean vehicles and even homes.
The problem, however, is that they are very expensive, and their high cost has limited their adoption. Less expensive catalysts could solve this problem, to some degree, and help make fuel cells more attractive to consumers and businesses that want to embrace renewable energy.
UPDATE: Since the publication of Brad Osborn's Kid Algebra (2014), I'm going to switch to his category of Euclidean rhythms (in their 4 types) to describe the patterns below. In summary, Euclidean rhythms (ER) are rhythms in which k onsets are distributed over n divisions as evenly as possible, which essentially means that the note groups differ by at most one subdivision each. So in an ER the groups are as similar as possible, but the term maximally even we will reserve for ER rhythms where the smaller note groups are separated as much as possible. For example, (2,2,3,3) and (2,3,2,3) are both ER, but only the latter is maximally even.

This is a library of all the maximally even (including strictly even) rhythms for 2-7 rhythmic onsets within 6, 8, 12 and 16 beat cycles. Maximal evenness (M.E.) describes a rhythm which is as evenly spread out as possible given both a number of events (rhythmic onsets) and a number of available slots (beats). Strict evenness (marked with a º) is a subset of M.E. and occurs when the hits are equally spaced. M.E. rhythms are intrinsic to much music making in a wide range of cultures, from Sub-Saharan Africa and South America to EDM and much in between. The parenthesised number shows the number of displacements (or ‘rotations’) available for the rhythm in the beat-cycle, and allows for starting on rests. When the number of rotations equals the number of beats in the cycle, this is marked with an * and represents maximal independence (MI), a common trait of African timelines and clave patterns. Note that 5, 6 and 7 in 12 also represent maximally even pentatonic, hexatonic and heptatonic scale sets, e.g. 3,3,2,2,2 represents all the modes of the major pentatonic as well as a 5-in-12 set of M.E. rhythms. As another example, 2,2,1,2,2,1,2 (a rotation of 2,2,2,1,2,2,1) represents both the African standard time-line and the Mixolydian mode. Enjoy.
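As an aside (my sketch, not part of the original library): such patterns can be generated in a few lines. The floor-difference (Bresenham-style) formulation below places an onset in slot i whenever floor(i·k/n) increments, which yields note groups differing by at most one subdivision, equivalent to the Euclidean (Bjorklund) rhythms described above up to rotation.

```python
def euclidean_rhythm(k, n, rotation=0):
    """Spread k onsets over n beat slots as evenly as possible.

    Slot i gets an onset whenever floor(i*k/n) increments, so the
    inter-onset groups differ in size by at most one subdivision.
    The rotation argument returns a displaced version of the same
    timeline (including ones that start on a rest).
    """
    if not 0 < k <= n:
        raise ValueError("need 0 < k <= n")
    pattern = [1 if (i * k) // n != ((i - 1) * k) // n else 0
               for i in range(n)]
    return pattern[rotation:] + pattern[:rotation]

# The 3-in-8 tresillo (groups 3,3,2) and a maximally even 5-in-8:
print(euclidean_rhythm(3, 8))   # [1, 0, 0, 1, 0, 0, 1, 0]
print(euclidean_rhythm(5, 8))   # [1, 0, 1, 0, 1, 1, 0, 1]
# 7 in 12 gives groups 2,2,2,1,2,2,1 - the African standard
# time-line / Mixolydian set mentioned above:
print(euclidean_rhythm(7, 12))  # [1, 0, 1, 0, 1, 0, 1, 1, 0, 1, 0, 1]
```

Enumerating rotations of these lists is one way to tabulate the displacement counts the library records.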
Announcing a 2-day symposium (November 15-16 2013 at the University of Notre Dame in Central London) examining the process, philosophy and products of collaborations between scientists, musicians and performing artists. It's hosted and organised by me and my sister Dr. Alex Mermikides, and is an output of the Chimera Network, an AHRC-supported project promoting Art/Sci research.

I have been thinking about ‘nutshell’ images for a range of musical concepts. Here's a work in progress on common progressions in a major key in 17th-19th century tonal harmony. Yes, caveats, WIP etc., but it seems quite useful. Comments and suggestions welcome! Minor key next; please don't let me turn them into 3D models for mode mixtures and modulations.

For Part 2 of the Beatless series let's look at a Beatles rework by one Joe Connor. Here the motivic and harmonic elements of the piece are extracted and examined through repetition with gentle timbral variation, techniques borrowed from minimalist and process music. This, together with non-quantised rhythmic elements, creates a compelling atmosphere.

Electronic music has been refreshed of late with such artists as Mount Kimbie rejecting the dominance that strict grid-based (‘quantized’) time has had on the genre. ‘Loose’ (but not sloppy) timing has a huge effect on musical expression, and this latest trend in IDM is heartening.

I thought I'd use this platform to share with you some of the great work my students are doing, together with some commentary addressing compositional technique. Here's the first of many to come. As part of one of my coursework portfolios, I offered my talented and creative students at Surrey the option to rework a Beatles track. Beyond a cover or remix, the brief was to reinterpret and/or electronically deconstruct/reconstruct musical materials from any Beatles track.
There was some great work, such as Em Bollon's modal reinterpretation of ‘Here Comes The Sun’. Remodalisation (an invented term) is the technique of translating melodic and/or harmonic material into a parallel mode (set of notes or scale). The original track's major tonality (with some modal interchange and secondary dominants) is really effectively (and intuitively) reinterpreted into Mixolydian and Dorian ideas, blended with electronic japery. Quite lovely.

A diagram demonstrating a section of the huge modal universe. You may see how mirroring modes (turning them upside down) can organize them into levels of brightness. It can also identify those modes that are identical in mirror form. These include Dorian (used in a thousand tunes, from Scarborough Fair, Shine On You Crazy Diamond and Brick House to The Hitchhiker's Guide To The Galaxy), Aeolian Dominant (Babooshka) and Double Harmonic (Miserlou from Pulp Fiction). These are just 3 of the heptatonic even-tempered modes with mirror symmetry; there are many others, scales with 2-12 notes, as well as scales with ‘twin’ mode systems. Regardless, this technique can be applied widely and is a rich resource for composers and improvisers alike.

Today's musical legends had their fair share of criticism. Match the composer to their bad review, and in the process don't take any criticism of your own work too seriously; you'll only avoid negative feedback if you don't write anything.

Just returned from 4 days at Peter Gabriel's Real World Studios. Wow, what a place! High-end technology and a Pavlovian saliva-inducing stash of microphones & gear. All that you would expect from a good studio; what makes Real World a uniquely wonderful residential studio is the great effort made to make the artists feel relaxed with wonderful surroundings. The studios are perfectly soundproofed yet awash with natural light.
Cast your gaze up from the mixing desk and you are treated to a delightful view of a bridge over a stream amidst gorgeous English countryside. Stroll through the picturesque Wiltshire grounds, coffee in hand between takes. Or why not, mid-recording, gaze at a running stream through the glass floor of the studio. A team of helpful people are on hand to keep you in excellent home-cooked food & endless hot beverages. The ideal place to work.

So what did the precocious Peter Gregson and I achieve in those 4 short days? A superb recording of Steve Reich's 2003 work ‘Cello Counterpoint’ for 8 cellos. What a piece: labyrinthine rhythmic ideas, stunning ensemble interplay & a deep jazz-influenced harmonic language. Our production really is something special. But we earned it: 4 looong days of intense work. We go back to mix in the next few weeks (together with some original compositions) and look forward to sharing the album with the world.

Funded by the Wellcome Trust, Microcosmos is a sound installation project in collaboration with microbiologist Dr. Simon Park and cameraman Steve Downer (Blue Planet, Life on Earth etc.). The DNA codes, colour & shape of microbacterial colonies are translated into sound design using a complex automated mapping system. The resulting soundscape reveals the hidden music of this spectacularly tiny world. Microcosmos has been performed internationally, including at the Science Museum, the Royal Academy of Music and Art Researches Science Belgium, and disseminated in international conferences. Here is an extract of the piece. Some images from the installation below, as well as a representation of the mapping system employed (from the Hidden Music liner notes).
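The post does not spell out the Microcosmos mapping system, but the general idea of parameter-mapping sonification can be sketched. The toy example below is purely hypothetical (the base-to-scale-degree table, octave rule and pentatonic choice are my invention, not the actual system): it translates a DNA string into MIDI pitches, one note per base.

```python
# Purely hypothetical sketch of parameter-mapping sonification; the
# real Microcosmos mapping is far more elaborate than this.
BASE_TO_DEGREE = {"A": 0, "C": 1, "G": 2, "T": 3}
MAJOR_PENTATONIC = [0, 2, 4, 7, 9]  # semitones above the root

def dna_to_pitches(sequence, root=60):
    """Translate a DNA string into MIDI note numbers, one per base,
    alternating octaves so repeated bases still create contour."""
    pitches = []
    for i, base in enumerate(sequence.upper()):
        degree = BASE_TO_DEGREE[base]
        octave = i % 2
        pitches.append(root + MAJOR_PENTATONIC[degree] + 12 * octave)
    return pitches

print(dna_to_pitches("ACGT"))  # [60, 74, 64, 79]
```

Any data stream (colour values, colony radii) can be routed through a table like this to a musical parameter, which is the essence of the mapping approach described above.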
Eager for every pollen-sized bit of information, second-grade students at Payson Elementary School buzzed eagerly around beekeeper David Jones, bedecked in his white suit and headgear that made him look like an astronaut. “If I didn’t wear all of this, I’d be stung all over,” said Jones. Jones, of Sunflower Honey fame, came to help the students understand insects as they study the topic for their science unit. “He’s come for the last few years,” said second-grade teacher Brianne DeWitt. Jones brought a hive, minus the bees, filled with slats riddled with honeycomb imprints on the plastic sheets that line the inside of the individual combs to illustrate how he keeps his bees. Beekeepers in modern times use plastic sheets with the start of a honeycomb imprinted to help the bees form the beeswax honeycombs. Jones brought honeycombs in various states of completeness — from empty to full of honey. He told the children to carefully hold the honeycombs at the edges so as not to destroy the bees’ work. When the children asked how he collected the honey, he launched into an explanation. First, he uses smoke to cause the bees to panic into believing their hive is on fire. In their state of panic, they gorge on honey, filling their bellies so full they cannot curl up to sting. After pulling the honeycomb out of the hive, Jones uses a brush to remove the gorged bees from the honeycomb. A hot knife melts the wax off of the honeycombs so Jones can collect the honey. The children stared in awe at the knife. Jones dumps the honeycombs into a large drum that separates the wax from the sweet, golden honey. “In the olden days, they called this ‘robbing the bees’,” said Jones, “I call it charging rent because I built the apartment.” He went on to explain that he only takes honey from the top two levels of the hive, because the queen tends to lay her eggs on the bottom two layers. “Have you seen a queen?” asked one of the children. 
“Yes...(but) Queen will hide from me because she knows she’s the future of that colony,” said Jones. In a year with normal rainfall, Jones said he has enough hives to collect more than 20 gallons of honey — enough to sell at the local Farmers Market and health food stores. This year, however, because of the lack of rainfall over the winter and spring, the wildflower crop left much to be desired, especially for the bees. He only collected four gallons, barely enough to cover his family’s personal use. Recent studies of the Southwest region show that as temperatures rise, rainfall and water tables will decline. That’s bad news not only for the forest and trees, but for the bees and beekeepers too. Still, Jones said he will continue to keep bees. He’s loved them since childhood when he watched a bee collect pollen from a flower, even though his investigations earned him a sting. “Do they ever get into your mask?” asked a second-grader. “Yes, and then I go home with my face swollen up and my wife will laugh at me,” he said.
Presentation on theme: "Lesson 1: Points, Lines and Planes"— Presentation transcript:

Lesson 1: Points, Lines and Planes

» Point- represents a location
» Line- made up of points; has no thickness or width and extends infinitely at both ends (cannot be measured)
» Collinear- points on the same line
» Plane- a flat surface made from points that has no depth and extends infinitely in all directions
» Coplanar- points or lines on the same plane
» Space- the boundless, 3-D set of all points; it contains lines and planes

» Step 1- Fold the construction paper in half both by width and length (hamburger and hotdog)
» Step 2- Unfold the paper and hold it widthwise; fold in the ends until they meet at the center crease
» Step 3- Cut the folded flaps along the crease so that there are now 4 flaps
» Label the outside of the flap with the lesson number and title.
» Inside the flap, create a grid with 7 columns and 4 rows.

Model: Point
» Drawn: as a dot
» Named by: a capital letter
» Facts: a point has neither size nor shape
» Words/Symbols: point P

Model: Line
» Drawn: with an arrowhead at both ends
» Named by: two letters representing points on the line, or a script letter
» Facts: there is exactly 1 line through any two points
» Words/Symbols: line n, line AB, line BA

Model: Plane
» Drawn: as a shaded, slanted, 4-sided figure
» Named by: a capital script letter, or by any three non-collinear points
» Facts: there is exactly 1 plane through any three non-collinear points
» Words/Symbols: plane S; plane XYZ, XZY, ZXY, ZYX, YXZ, YZX

A. Use the figure to name a line containing point K.
B. Use the figure to name a plane containing point L.
C. Use the figure to name the plane two different ways.

A. Name the geometric shape modeled by a 10 × 12 patio.
B. Name the geometric shape modeled by a water glass on a table.
C. Name the geometric shape modeled by a colored dot on a map used to mark the location of a city.
D. Name the geometric shape modeled by the ceiling of your classroom.

A. How many planes appear in this figure?
B. Name three points that are collinear.
C. Are points A, B, C, and D coplanar? Explain.
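As an illustrative aside (not part of the lesson plan), the collinearity and coplanarity questions above have a simple computational analogue: three points are collinear when the triangle they form has zero area, and four points are coplanar when the box spanned by their edge vectors has zero volume. A sketch with exact integer arithmetic:

```python
def collinear(p, q, r):
    """Three points in the plane are collinear exactly when the
    cross product of (q - p) and (r - p) vanishes, i.e. the
    triangle they form has zero area."""
    (x1, y1), (x2, y2), (x3, y3) = p, q, r
    return (x2 - x1) * (y3 - y1) == (x3 - x1) * (y2 - y1)

def coplanar(p, q, r, s):
    """Four points in space are coplanar when the scalar triple
    product of the three edge vectors from p is zero."""
    def sub(a, b):
        return tuple(ai - bi for ai, bi in zip(a, b))
    u, v, w = sub(q, p), sub(r, p), sub(s, p)
    det = (u[0] * (v[1] * w[2] - v[2] * w[1])
           - u[1] * (v[0] * w[2] - v[2] * w[0])
           + u[2] * (v[0] * w[1] - v[1] * w[0]))
    return det == 0

print(collinear((0, 0), (1, 1), (2, 2)))                     # True
print(coplanar((0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)))  # False
```

With integer coordinates both tests are exact, mirroring the "exactly 1 line / exactly 1 plane" facts in the table above.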
The text of the bill below is as of Jul 12, 1994 (Introduced). HR 4720 IH H. R. 4720 To establish the Hudson River Valley American Heritage Area. IN THE HOUSE OF REPRESENTATIVES July 12, 1994 Mr. HINCHEY (for himself, Mr. MCNULTY, Mr. FISH, Mr. GILMAN, and Mrs. LOWEY) introduced the following bill; which was referred to the Committee on Natural Resources To establish the Hudson River Valley American Heritage Area. Be it enacted by the Senate and House of Representatives of the United States of America in Congress assembled, SECTION 1. SHORT TITLE. - This Act may be cited as the ‘Hudson River Valley American Heritage Area Act of 1994’. SEC. 2. FINDINGS. - The Congress finds the following: - (1) The Hudson River Valley between Yonkers, New York, and Troy, New York, possesses important historical, cultural, and natural resources, representing themes of settlement and migration, transportation, and commerce. - (2) The Hudson River Valley played an important role in the military history of the American Revolution. - (3) The Hudson River Valley gave birth to important movements in American art and architecture through the work of Andrew Jackson Downing, Alexander Jackson Davis, Thomas Cole, and their associates, and played a central role in the recognition of the esthetic value of the landscape and the development of an American esthetic ideal. - (4) The Hudson River Valley played an important role in the development of the iron, textile, and collar and cuff industries in the 19th century, exemplified in surviving structures such as the Harmony Mills complex at Cohoes, and in the development of early men’s and women’s labor and cooperative organizations, and is the home of the first women’s labor union and the first women’s secondary school. 
- (5) The Hudson River Valley, in its cities and towns and in its rural landscapes-- - (A) displays exceptional surviving physical resources illustrating these themes and the social, industrial, and cultural history of the 19th and early 20th centuries; and - (B) includes many National Historic Sites and Landmarks. - (6) The Hudson River Valley is the home of traditions associated with Dutch and Huguenot settlements dating to the 17th and 18th centuries, was the locus of characteristic American stories such as ‘Rip Van Winkle’ and the ‘Legend of Sleepy Hollow’, and retains physical social, and cultural evidence of these traditions and the traditions of other more recent ethnic and social groups. - (7) New York State has established a structure for the Hudson River Valley communities to join together to preserve, conserve, and manage these resources, and to link them through trails and other means, in the Hudson River Greenway Communities Council and the Greenway Conservancy. SEC. 3. PURPOSES. - The purposes of this Act are the following: - (1) To recognize the importance of the history and the resources of the Hudson River Valley to the Nation. - (2) To assist the State of New York and the communities of the Hudson River Valley in preserving, protecting, and interpreting these resources for the benefit of the Nation. - (3) To authorize Federal financial and technical assistance to serve these purposes. SEC. 4. HUDSON RIVER VALLEY AMERICAN HERITAGE AREA. - (a) ESTABLISHMENT- There is hereby established a Hudson River Valley American Heritage Area (in this Act referred to as the ‘Heritage Area’). - (b) BOUNDARIES- The Heritage Area shall be comprised of the counties of Albany, Rensselaer, Columbia, Greene, Ulster, Dutchess, Orange, Putnam, Westchester, and Rockland, New York, and the Village of Waterford in Saratoga County, New York. 
(c) MANAGEMENT ENTITIES- The management entities for the Heritage Area shall be the Hudson River Valley Greenway Communities Council and the Greenway Conservancy (agencies established by the State of New York in its Hudson River Greenway Act of 1991, in this Act referred to as the ‘management entities’). The management entities shall jointly establish a Heritage Area Committee to manage the Heritage Area.

SEC. 5. COMPACT.

To carry out the purposes of this Act, the Secretary of the Interior (in this Act referred to as the ‘Secretary’) shall enter into a compact with the management entities. The compact shall include information relating to the objectives and management of the area, including the following:

(1) A discussion of the goals and objectives of the Heritage Area, including an explanation of a proposed approach to conservation and interpretation, and a general outline of the protection measures committed to by the parties to the compact.

(2) A description of the respective roles of the management entities.

(3) A list of the initial partners to be involved in developing and implementing a management plan for the Heritage Area, and a statement of the financial commitment of such partners.

(4) A description of the role of the State of New York.

SEC. 6. MANAGEMENT PLAN.

The management entities shall develop a management plan for the Heritage Area that presents comprehensive recommendations for the Heritage Area’s conservation, funding, management and development. Such plan shall take into consideration existing State, county, and local plans and involve residents, public agencies, and private organizations working in the Heritage Area. It shall include actions to be undertaken by units of government and private organizations to protect the resources of the Heritage Area. It shall specify the existing and potential sources of funding to protect, manage and develop the Heritage Area.
Such plan shall include specifically as appropriate the following:

(1) An inventory of the resources contained in the Heritage Area, including a list of any property in the Heritage Area that is related to the themes of the Heritage Area and that should be preserved, restored, managed, developed, or maintained because of its natural, cultural, historic, recreational, or scenic significance.

(2) A recommendation of policies for resource management which consider and detail application of appropriate land and water management techniques, including but not limited to, the development of intergovernmental cooperative agreements to protect the Heritage Area’s historical, cultural, recreational, and natural resources in a manner consistent with supporting appropriate and compatible economic viability.

(3) A program for implementation of the management plan by the management entities, including plans for restoration and construction, and specific commitments of the identified partners for the first 5 years of operation.

(4) An analysis of ways in which local, State, and Federal programs may best be coordinated to promote the purposes of the Act.

(5) An interpretation plan for the Heritage Area.

SEC. 7. AUTHORITIES AND DUTIES OF MANAGEMENT ENTITIES.

(a) AUTHORITIES OF THE MANAGEMENT ENTITIES- The management entities may, for purposes of preparing and implementing the management plan under section 6, use Federal funds made available through this Act--

(1) to make loans and grants to, and enter into cooperative agreements with, States and their political subdivisions, private organizations, or any person; and

(2) to hire and compensate staff.
(b) DUTIES OF THE MANAGEMENT ENTITIES- The management entities shall--

(1) develop and submit to the Secretary for approval a management plan as described in section 6 within 5 years after the date of the enactment of this Act;

(2) give priority to implementing actions as set forth in the compact and the management plan, including taking steps to--

(A) assist units of government, regional planning organizations, and nonprofit organizations in preserving the Heritage Area;

(B) assist units of government, regional planning organizations, and nonprofit organizations in establishing and maintaining interpretive exhibits in the Heritage Area;

(C) assist units of government, regional planning organizations, and nonprofit organizations in developing recreational resources in the Heritage Area;

(D) assist units of government, regional planning organizations, and nonprofit organizations in increasing public awareness of and appreciation for the natural, historical and architectural resources and sites in the Heritage Area;

(E) assist units of government, regional planning organizations and nonprofit organizations in the restoration of any historic building relating to the themes of the Heritage Area;

(F) encourage by appropriate means economic viability in the corridor consistent with the goals of the Plan;

(G) encourage local governments to adopt land use policies consistent with the management of the Heritage Area and the goals of the plan; and

(H) assist units of government, regional planning organizations and nonprofit organizations to ensure that clear, consistent, and environmentally appropriate signs identifying access points and sites of interest are put in place throughout the Heritage Area;

(3) consider the interests of diverse governmental, business, and nonprofit groups within the Heritage Area;

(4) conduct public meetings at least quarterly regarding the implementation of the management plan;

(5) submit substantial changes (including any increase of more than 20 percent in the cost estimates for implementation) to the management plan to the Secretary for the Secretary’s approval;

(6) for any year in which Federal funds have been received under this Act, submit an annual report to the Secretary setting forth its accomplishments, its expenses and income, and the entities to which any loans and grants were made during the year for which the report is made; and

(7) for any year in which Federal funds have been received under this Act, make available for audit all records pertaining to the expenditure of such funds and any matching funds, and require, for all agreements authorizing expenditure of Federal funds by other organizations, that the receiving organizations make available for audit all records pertaining to the expenditure of such funds.

If a management plan is not submitted to the Secretary as required under paragraph (1) within the specified time, the Heritage Area shall no longer qualify for Federal funding.

(c) PROHIBITION ON THE ACQUISITION OF REAL PROPERTY- The management entities may not use Federal funds received under this Act to acquire real property or an interest in real property. Nothing in this Act shall preclude any management entity from using Federal funds from other sources for their permitted purposes.

(d) ELIGIBILITY FOR RECEIVING FINANCIAL ASSISTANCE-

(1) ELIGIBILITY- The management entities shall be eligible to receive funds appropriated through this Act for a period of 10 years after the day on which the compact under section 5 is signed by the Secretary and the management entities, except as provided in paragraph (2).
(2) EXCEPTION- The management entities’ eligibility for funding under this Act may be extended for a period of not more than 5 additional years, if--

(A) the management entities determine such extension is necessary in order to carry out the purposes of this Act and notify the Secretary not later than 180 days prior to the termination date;

(B) the management entities, not later than 180 days prior to the termination date, present to the Secretary a plan of their activities for the period of the extension, including provisions for becoming independent of the funds made available through this Act; and

(C) the Secretary, with the advice of the Governor of New York, approves such extension of funding.

SEC. 8. DUTIES AND AUTHORITIES OF FEDERAL AGENCIES.

(a) DUTIES AND AUTHORITIES OF THE SECRETARY-

(1) TECHNICAL AND FINANCIAL ASSISTANCE-

(A) IN GENERAL- The Secretary may, upon request of the management entities, provide technical and financial assistance to the Heritage Area to develop and implement the management plan. In assisting the Heritage Area, the Secretary shall give priority to actions that in general assist in--

(i) conserving the significant natural, historic, and cultural resources which support its themes; and

(ii) providing educational, interpretive, and recreational opportunities consistent with its resources and associated values.

(B) SPENDING FOR NON-FEDERALLY OWNED PROPERTY- The Secretary may spend Federal funds directly on non-federally owned property to further the purposes of this Act, especially in assisting units of government in appropriate treatment of districts, sites, buildings, structures, and objects listed or eligible for listing on the National Register of Historic Places.
(2) APPROVAL AND DISAPPROVAL OF COMPACTS AND MANAGEMENT PLANS-

(A) IN GENERAL- The Secretary, in consultation with the Governor of New York, shall approve or disapprove a compact or management plan submitted under this Act not later than 90 days after receiving such compact or management plan.

(B) ACTION FOLLOWING DISAPPROVAL- If the Secretary disapproves a submitted compact or management plan, the Secretary shall advise the management entities in writing of the reasons therefor and shall make recommendations for revisions in the compact or plan. The Secretary shall approve or disapprove a proposed revision within 90 days after the date it is submitted.

(3) APPROVING AMENDMENTS- The Secretary shall review substantial amendments to the management plan for the Heritage Area. Funds appropriated pursuant to this Act may not be expended to implement the changes until the Secretary approves the amendments.

(4) PROMULGATING REGULATIONS- The Secretary shall promulgate such regulations as are necessary to carry out the purposes of this Act.

(b) DUTIES OF FEDERAL ENTITIES- Any Federal entity conducting or supporting activities directly affecting the Heritage Area, and any unit of government acting pursuant to a grant of Federal funds or a Federal permit or agreement conducting or supporting such activities, shall to the maximum extent practicable--

(1) consult with the Secretary and the management entities with respect to such activities;

(2) cooperate with the Secretary and the management entities in carrying out their duties under this Act and coordinate such activities with the carrying out of such duties; and

(3) conduct or support such activities in a manner consistent with the management plan unless the Federal entity, after consultation with the management entities, determines there is no practicable alternative.

SEC. 9. AUTHORIZATION OF APPROPRIATIONS.
(a) COMPACTS AND MANAGEMENT PLAN- From the amounts made available to carry out the National Historic Preservation Act, there is authorized to be appropriated to the Secretary, for grants for developing a compact under section 5 and providing assistance for a management plan under section 6, not more than $300,000, to remain available until expended, subject to the following conditions:

(1) No grant for a compact or management plan may exceed 75 percent of the grantee’s cost for such study, plan, or early action.

(2) The total amount of Federal funding for the compact for the Heritage Area may not exceed $150,000.

(3) The total amount of Federal funding for a management plan for the Heritage Area may not exceed $150,000.

(b) MANAGEMENT ENTITY OPERATIONS- From the amounts made available to carry out the National Historic Preservation Act, there is authorized to be appropriated to the Secretary for the management entities, amounts as follows:

(1) For the operating costs of each management entity, pursuant to section 7, not more than $250,000 annually.

(2) For technical assistance pursuant to section 8, not more than $50,000 annually.

The Federal contribution to the operations of the management entities shall not exceed 50 percent of the annual operating costs of the entities.

(c) IMPLEMENTATION- From the amounts made available to carry out the National Historic Preservation Act, there is authorized to be appropriated to the Secretary, for grants and the administration thereof for the implementation of the management plans for the Heritage Area pursuant to section 8, not more than $10,000,000, to remain available until expended, subject to the following conditions:

(1) No grant for implementation may exceed 50 percent of the grantee’s cost of implementation.
(2) Any payment made shall be subject to an agreement that conversion, use, or disposal of the project so assisted for purposes contrary to the purposes of this Act, as determined by the Secretary, shall result in a right of the United States of reimbursement of all funds made available to such project or the proportion of the increased value of the project attributable to such funds as determined at the time of such conversion, use, or disposal, whichever is greater.
SiC power devices have emerged as a solution to the growing desire to reduce energy consumption at high switching frequencies. SiC devices can withstand high working temperatures, making them good contenders for industrial settings. The availability of wide bandgap semiconductor devices has enabled engineers to construct power electronic systems to fulfil specific application needs. Along with Silicon Carbide (SiC), Gallium Nitride (GaN) is a wide bandgap material. Cost, efficiency, power density, complexity, and dependability are all important considerations when constructing any power system. Due to their quick switching rates and low on-state resistance, SiC MOSFETs are frequently damaged as a result of short-circuit events. A group of researchers from Anhui University of Technology’s School of Electrical and Information Engineering presented failure models for two commonly used SiC power devices under short-circuit conditions: a SiC MOSFET (an n-channel enhancement SiC MOSFET from Cree) and a SiC JFET (a normally-on SiC JFET from Infineon).

Constructing SiC Transistor Failure Models

The investigation conducted by Z. Wang et al. demonstrates that under short-circuit conditions, the failure current is greater than the rated current of the power device. In both SiC transistors, MOSFET and JFET alike, the critical current component is the hole current density. From the simulation results, we may deduce that the high-density hole current passes through the PN junction between the transistors’ N− drift region and P base region. “TCAD simulation has also shown that, for SiC MOSFET, high-concentration carriers collect on the top of the JFET area, a percentage of which inject into the gate oxide and create the gate leakage current under the stress of high temperature and high electric-field strength,” the researchers say. When developing the SiC JFET and SiC MOSFET failure models, this injected leakage current was an important parameter.
The structure in the dotted boxes is typical of conventional circuit models; the supplementary current component IDS_LK is connected in parallel with the channel current ICH. IDS_LK is the leakage current across the PN junction between the transistors’ N− drift region and P base region. According to the researchers, when there is no voltage bias on the gate to switch the device on, the leakage current through the PN junction is formulated as the sum of three components: the thermal generation current Ith, the avalanche current Iav, and the diffusion current Idiff. The gate-oxide leakage, in turn, is modelled through two conduction mechanisms, Fowler-Nordheim (FN) tunnelling and Poole-Frenkel (PF) emission; the two corresponding currents, IPF and IFN, together make up the SiC MOSFET gate-oxide leakage current. The Shichman-Hodges physical model (the SPICE level-1 model) was used for the SiC MOSFET circuit simulation, while SiC JFET circuit modelling uses the Shockley physical model. Under short-circuit conditions, the higher current stress on the charge carriers in the channel leads to a much higher temperature than in the normal switching condition. In particular, SiC JFETs have a lower saturation current than SiC MOSFETs, which gives them a substantially longer failure time.

Validation of Failure Models of SiC Devices

The created SiC JFET and SiC MOSFET failure models are validated under short-circuit fault conditions, with the figure illustrating a comparison of the failure currents produced from the models and the data provided in the study [6,7]. The results reveal a short-circuit failure time (tSC) of 150 μs for the SiC JFET under 400 V DC voltage and 13 μs for the SiC MOSFET under 600 V DC voltage. “The carrier mobility based on temperature and electric-field strength are required to appropriately create the SiC power device failure model,” the team writes.
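As a rough illustration of the circuit-level structure described above, the sketch below combines a SPICE level-1 (Shichman-Hodges) channel current with a PN-junction leakage term placed in parallel, as in the researchers' failure model. All parameter values here are illustrative placeholders, not the Cree or Infineon device parameters used in the study.

```python
def channel_current(vgs, vds, vth=2.5, kp=0.8, lam=0.01):
    """SPICE level-1 (Shichman-Hodges) channel current ICH.

    vth (threshold voltage), kp (transconductance), and lam
    (channel-length modulation) are placeholder values.
    """
    if vgs <= vth:
        return 0.0  # cut-off: no channel conduction
    vov = vgs - vth
    if vds < vov:  # linear (triode) region
        return kp * (vov - vds / 2.0) * vds * (1.0 + lam * vds)
    return 0.5 * kp * vov ** 2 * (1.0 + lam * vds)  # saturation

def pn_junction_leakage(i_th, i_av, i_diff):
    """IDS_LK: leakage through the P-base / N-drift PN junction,
    taken as the sum of thermal-generation, avalanche, and
    diffusion components."""
    return i_th + i_av + i_diff

def drain_current(vgs, vds, i_th=0.0, i_av=0.0, i_diff=0.0):
    # The leakage path sits in parallel with the channel, so the
    # total drain-source current is simply their sum.
    return channel_current(vgs, vds) + pn_junction_leakage(i_th, i_av, i_diff)
```

In a full failure model each leakage component would itself be a strong function of junction temperature, which is what allows the thermal-generation term Ith to dominate during a short circuit.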
“Furthermore, by varying the combination mode of three current components of IDS_LK, a definite conclusion can be obtained that Ith affects whether or not the constructed model can mimic device failure.” In other words, the heat-generating current during short-circuit dominates the failure behaviour. The schematic figure depicts the verification under short-circuit fault conditions, with VDC representing the DC-link voltage, RS the stray resistance of the circuit loop, RG the gate resistance, and DUT the device under test (SiC JFET or SiC MOSFET). In the figure, the red curves represent the first failure mode and the blue curves the second. Two factors stand out: the SiC JFET’s lower saturation current and its much longer failure time compared with the SiC MOSFET. The temperature-dependent coefficient of carrier mobility is responsible for these differences. “SiC JFET has more short-circuit capacity than SiC MOSFET for instantaneous failure, but the failure duration and critical failure energy are both greater. The failure time of SiC JFET is much longer than that of SiC MOSFET for delayed failure at lower DC-link voltage, although the difference in failure time looks moderate at higher DC-link voltage,” the team finds.

References

A. K. Agarwal, "An overview of SiC power devices," 2010 International Conference on Power, Control and Embedded Systems, 2010, pp. 1-4, doi: 10.1109/ICPCES.2010.5698670.

H. Qin, B. Zhao, X. Nie, J. Wen and Y. Yan, "Overview of SiC power devices and its applications in power electronic converters," 2013 IEEE 8th Conference on Industrial Electronics and Applications (ICIEA), 2013, pp. 466-471, doi: 10.1109/ICIEA.2013.6566414.

B. J. Nel and S. Perinpanayagam, "A Brief Overview of SiC MOSFET Failure Modes and Design Reliability," Procedia CIRP, vol. 59, pp. 280-285, 2017.

Y. Zhou, T. Yang, H. Liu and B. Wang, "Failure Models and Comparison on Short-circuit Performances for SiC JFET and SiC MOSFET," 2018 1st Workshop on Wide Bandgap Power Devices and Applications in Asia (WiPDA Asia), 2018, pp. 123-129, doi: 10.1109/WiPDAAsia.2018.8734661.

Z. Wang et al., "Temperature-Dependent Short-Circuit Capability of Silicon Carbide Power MOSFETs," in IEEE Transactions on Power Electronics, vol. 31, no. 2, pp. 1555-1566, Feb. 2016, doi: 10.1109/TPEL.2015.2416358.
Little is known about the three “Magi” who pay the “Holy Family” a visit in scripture. They are an integral part of the story in Matthew’s account, which tells of their encounter with Herod as they journey from the East, following a star, to find the baby. In Bethlehem there are graves that date back to around the time of Jesus. One grave holds the body of the Catholic saint St. Jerome, who translated the Bible into Latin. Buried nearby are graves of babies and children. Some historians have stated that this is factual evidence that the “slaughter of the innocents” recorded in Matthew actually occurred. Upon hearing of the birth of Jesus from the magi, Herod orders the death of babies; the Jewish king was concerned that this child would overthrow him, thus the call for these deaths. These graves can be found at the Church of the Nativity, the site that is believed to be the birthplace of Jesus. It is written in Matthew that the magi follow the star to Bethlehem, where they find the baby with his mother Mary and offer him gifts of gold, frankincense and myrrh. The gifts are significant, as they represent offerings to a royal, a holy person, and a prophet who scripture says will die for the sake of humanity to bring new awareness to the world. Isaiah notes that he will be like a “lamb who is led to slaughter”, a suffering servant who will be sacrificed in the name of God (Isaiah 53). Magi is a significant title that some misrender as “kings”; they were in fact mystics, astrologers, and interpreters of dreams. Some scholars believe they hailed from the regions of Syria/Lebanon and Babylon or Arabia. There is uncertainty about the number of magi present, as there may have been more than three. Some early Christian writers, like Clement of Alexandria in the Stromata, written in the late second century AD, assert that they came from Persia. Tradition gives the figures the names Balthazar, Melchior and Caspar. This information about them comes later, so the facts surrounding them are speculative.
One speculation is that one of the magi came from the Zoroastrian faith. This is significant, as some scholars have asserted that their importance in Jesus’ life is that they became his teachers and trained him in theological understanding and the ways of the world. Coming from the East, these teachings would have played a part in the spiritual development of this Messiah. Christians do not always realize this, but Jesus’ message is appealing because it appeals to the whole world, East and West; the teachings are universal because of their spiritual nature. If one of the teachers was Zoroastrian, this presents an interesting spin on how Zoroastrianism and the battle of good versus evil play out in Jewish scripture. It would also have played a role in Jesus’ thought. Although the Zoroastrian faith is secretive and its truths are explained and held by those who follow it, the common knowledge points of this religion are significant and reveal how it may have been a linchpin religion of the East and West. Zoroastrians speak of a “holy spirit” called Spenta Mainyu, a creative force for breathing new light into the world, and of an opposing evil spirit, Angra Mainyu, who chose the lie over truth. The first Nativity scene was depicted at the site of Jesus’ birth. Three images of the magi were depicted there, and when invaders later destroyed churches in the region, tradition holds that they spared this church because of the Eastern figures depicted here. There is even a dedicated altar to the magi, where the gifts were offered, across from the star that supposedly marks the spot of Jesus’ birth. In reading scripture one must consider what the author is trying to convey. Matthew was speaking to a Hebrew audience and attempting to convey Messianic images, thus presenting Jesus as the Messiah. Some scholars suggest that this information was handed down from generation to generation, as it is written in Luke that “Mary held these things in her heart”.
Whatever the case may be, the tenor of the magi offers an interesting introduction to Jesus and the mysterious events that occurred around his birth.
In an ongoing effort to help you take fantastic images of your kids (and everyone else), today we are going to talk about giving space in child photography. What does this mean, exactly? It’s pretty simple: give some space! Okay, maybe not that simple. In essence, you never (okay, not never, but most of the time) want to put your subject in the center of an image; it’s boring (again, most of the time). So, you put them off to one side or the other. Giving space means leaving room on the side where your subject is facing. This gives the viewer something to think about – where are they looking, what are they looking at, etc. – which in turn creates a more dynamic image. Is there a particular side you should put the child on, left or right? Not really, although as a culture we are used to “reading” from left to right and take the same approach with images. Just try to put the space on the side where they are looking. So why can’t you have your subject, for example, facing the right side of the image and put the space on the left? You can, of course, but what happens here is that the person looking at the image follows the subject’s eyes out of the frame, and off they go to the next photo. The trick to creating really good images is to give viewers a reason to keep looking; adding space on the side they are looking at tends to help with this goal. This tip works with all sorts of photography, not just with portraits. For example, if you were taking photos of a race car you would typically leave more space in front of the race car, rather than behind it, for the exact same reason: it’s more interesting to wonder where it’s going than where it’s been. So the next time you are taking portraits, think of this rule and try to apply it. In fact, take a few photos 1) employing the rule (space to move into), 2) breaking the rule (space behind where they are looking) and 3) as a basic portrait shot (face in the center). See which image you feel works the best.
Likely you’ll find that you prefer to use this rule to create your images going forward. Tip for busy kids: If you can’t get them to hold still long enough to compose your image using this rule, pull back a bit and take the photo, even if it means that they are now closer to center. You can always crop the photo later to leave more space on the side where they are gazing. Let me know how this tip worked for you!
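For the scripting-inclined, the recrop from the busy-kids tip can even be automated. Below is a minimal sketch; it assumes the Pillow imaging library is installed, and the 80 percent crop fraction is just one reasonable default, not a hard rule.

```python
def lead_room_box(width, height, facing="right", keep=0.8):
    """Return a crop box (left, upper, right, lower) that trims the edge
    *behind* the subject, leaving lead room on the side they are facing.

    `keep` is the fraction of the original width to retain.
    """
    if not 0 < keep <= 1:
        raise ValueError("keep must be in (0, 1]")
    new_w = int(width * keep)
    if facing == "right":
        # Subject looks right, so trim the left edge (behind them).
        return (width - new_w, 0, width, height)
    # Subject looks left, so trim the right edge.
    return (0, 0, new_w, height)

def crop_with_lead_room(path, out_path, facing="right", keep=0.8):
    # Requires the Pillow library (one common choice, not the only one).
    from PIL import Image
    img = Image.open(path)
    img.crop(lead_room_box(*img.size, facing=facing, keep=keep)).save(out_path)
```

For example, `crop_with_lead_room("kid.jpg", "kid_cropped.jpg", facing="left")` would trim the right-hand edge, shifting the child toward the right of the frame with space opening to the left.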
Parents often pour large amounts of time, money, and effort into raising their children (as they should); it’s one of life’s highest callings to bring infants to maturity, and to nurture children into responsible adults. But even after the kids grow up, move out, and get married, there may soon be more work to do; it’s called grandchildren! Perhaps work is the wrong word to use; it is a privilege and a pleasure to be able to spend quality time with the youngest members of the family; but the role is now different from that of a direct parent. Grandparents have a special function in the lives of their grandkids that ought to be recognized and appreciated. The role of a grandparent (in contrast with a mother or father) is one of lavishing love without the responsibility of deciding and administering discipline. It is the role of telling stories to preserve family heritage. It is the role of creating birthday and holiday memories. But one of the most important aspects, is the role of providing wise counsel to the mother and father who are now the ones trying to raise infants to maturity, and children into responsible adults. The grandparent has graduated from parenthood, to parental counselor. It is time to give advice when a worried parent calls about their baby’s high fever, or when they need to know a good method for potty-training. It’s time to reminisce and share tips, warnings, and encouragements that were learned by experience. Transitioning from being a parent to a grandparent is a beautiful thing! It should be celebrated, and held in honor, because the next generation will always need the wisdom from those who came before them.
It’s rare that historical findings cause a rethinking of a mainstream record, and rarer still that historians admit that much about their discoveries results from chance rather than ratiocinative work. However, in the hands of professionals, there are no dead ends in research, and so-called happy accidents can make a significant contribution to published accounts. In this regard, all those involved in the Sylvester Manor Archive exhibition of primary documents and artifacts that opens on April 10 at the Fales Library and Special Collection at New York University have reason to be proud of what they have assembled from the Shelter Island manor house and grounds. Sylvester Manor preservation coordinator Maura Doyle credits friends and family of the Manor with the NYU connection, and she is delighted that the university, with its well-regarded interdisciplinary programs, is the recipient of this “fascinating hodge podge” of loaned materials, which will surely modify what is known about early East End social and political history. Called Sylvester Manor: Land, Food, and Power on a New York Plantation, and curated by Stony Brook University professor of history Jennifer Anderson, the display documents the complex and surprising relationship of Europeans, Native Americans and Africans on Shelter Island that began in the colonial period and continued for 300 years. Relationship? Aren’t we talking here about slaves provisioning sugar plantations in the West Indies and about the Atlantic Triangle Trade? Of course, but the findings in effect argue that the relationship in the colonial and antebellum periods was more nuanced and mutually influential than originally thought. Over 10,000 primary documents—among them a first edition of Thomas Paine’s January 1776 Common Sense—and material evidence were unearthed by Stephen Mrozowski, director of The Andrew Fiske Memorial Center for Archaeological Research, and his team at UMass, Boston.
The findings suggest that the original Sylvester family (Nathaniel Sylvester, d. 1680) and descendants housed both enslaved and indentured “diverse inhabitants” who probably served as domestics, some living in the house and producing “food for home use, regional consumption and overseas export.” The exhibit thus sheds new light on “the politics of food and changing land uses,” not to mention interactions of Europeans, Native Americans and Africans. The documents were discovered by chance at the manor house, many of the artifacts by the UMass team, who were first investigating pig and cow remains found in a large slaughtering pit dating to 1660–1670. Subsequent digs yielded evidence of a structure that was probably a workhouse for Native Americans. Although Sylvester Manor is now known for sustainable farming, the plantation was largely devoted to provisioning livestock. Evidence on site, however, also indicates home use that signals a “Creolized diet” and “hybrid cultural forms”—foodstuffs such as a mix of corn and turtle. Ceramic mugs, for example, clearly styled by slaves, contained handles, a decidedly European touch. Kettles with ceramic lining instead of iron reflect Native American hands at work, because slaves from Barbados did not like the taste iron gave food. Moreover, Mrozowski speculates, on the basis of “negative evidence,” that both Native Americans and Africans lived together with Europeans and not in separate quarters. Among other inferences, Mrozowski suggests that such findings may prove that Native Americans on Shelter Island were not decimated by The Contact (the euphemistic phrase used for the arrival of Europeans). In short, these remarkable documents and archeological recoveries imply a new narrative about a critical time in American cultural history. The NYU exhibition is free and open to the public through July 23. sylvestermanor.org
Teachers know better than anyone how nice it feels to receive gifts of appreciation. Throughout each school year, teachers receive cards or gifts from students and their families; but how often do teachers show appreciation for one another? The opportunities are certainly there. An environment in which team teachers and coworkers take these opportunities is likely to be one where morale is high and productivity soars. If you have worked with, or continually work with, any teacher or even office staff member who goes out of their way to provide assistance to you, your students, or other people, you have the opportunity to show appreciation. In doing so, you can encourage this person and brighten their day. Here are a few ways you can show that you appreciate the job someone else is doing. - ID badge lanyards are widely used in school settings, and therefore make ideal gifts for any teacher or school employee. Most of the time, teachers wear ID badge lanyards that were either given to them by the school office or that they picked up at an educational conference. This means they probably lack that certain panache that adds polish to work attire. When you want to offer a special gift to someone at your school, consider the lanyard. Today, ID badge lanyards are available in many different styles and themes that teachers love. Providing all the same convenience for carrying keys or badges, lanyards today can also look very polished and professional. - When you work closely with someone, you can get to know their likes and dislikes. Perhaps you know of a special hobby or interest a fellow teacher has. This makes giving them a small gift easier, as you can tailor it to their personal interests. - Teachers can always use helping hands, as you are well aware. Showing appreciation for a fellow teacher doesn’t have to be done with gifts that you purchase; it could be done by helping them where you can.
Perhaps you can take their lunch duty for a day to give them time away. Acts of service are kind and thoughtful. There are many ways you can provide service to a fellow teacher that won’t interfere with your own duties. This is a gift that costs nothing and leaves you both feeling great. - Another gift that costs nothing is that of a note. Teachers like to hear that they have done well at something; we all do. If you know another teacher who has gone out of their way, taught you something, inspired you or somehow helped you, tell them. It doesn’t get any simpler than that! Keep blank note cards in your desk so you can write notes to others when the opportunity arises. It pays to just say thanks. Receiving accolades from peers is an honor any professional appreciates. In the school setting, there is no need to leave the praise to students and families. Teachers can and should take it upon themselves to continually lift one another up and encourage those around them. Doing so can only result in a more positive environment.
February 24, 2015 In 1938, Our Town premiered on the American stage. Written by Thornton Wilder, it gave a universal outlook to a few lives in Grover’s Corners. The rituals of birth, marriage, and death were commented on by a matter-of-fact Stage Manager, played by Frank Craven. Suddenly, small town life gained a cosmic significance. Oscar Hammerstein was influenced by this play, and tried to create a musical that would follow a man from his birth to his death. He chose for his character the son of a rural doctor, and called him Joseph Taylor, Jr. He hoped to show how significant and miraculous one human life was by tracing its early influences, obstacles, loves, career struggles, and, ultimately, decline and death. However, in the middle of his second act, he began to lose focus, so he could not fulfill his goal completely. Yet his failure resulted in a dynamic, dramatic, innovative, unforgettable musical that nonetheless accomplished some of Oscar’s goals. The musical begins with the celebration of Joseph Taylor, Jr.’s birthday. The mayor has declared the event a legal holiday, so there is no school. The opening chorus of celebration involves the whole town, from a church choir to drunks, stumbling along to somewhat grotesque rhythms. And even the children cry out: “Look what Marjorie Taylor’s done… Hail him, hail him, everyone! Joseph Taylor, Jr.!” So, the simple birth is magnified in importance, and we feel that the country doctor, Joseph Taylor, is quite an important man in the minds of the town’s citizens. Grandma Taylor introduces the theme of time flow that is so critical to the first act: “The winters go by. The summers fly. And, all of a sudden you’re a man. I have seen it happen before, so I know it can happen again.” Growth is as much a human ritual as birth. In her generalizing, Grandma Taylor sounds like Wilder’s Stage Manager. Growth does occur when Joseph Taylor, Jr.
takes his first steps in “One Foot, Other Foot.” Richard Rodgers uses the music from this song as a “growing up” motif, indicating the steps Joe will take throughout the show. Once Joe can walk, he emerges as a truly living character. The play concludes with him taking another major step in his life to the same motif. During Joe’s childhood, Grandma Taylor dies, though she will appear together with his mother as ghosts during his wedding and the final scene of Act 2. So, now it is time for another childhood rite: the encounter with the opposite sex. In an eerie, often grotesque Children’s Dance, punctuated with fragments of nursery rhymes, Jennie Brinker and Joseph Taylor, Jr. are pushed into each other through a children’s game, displaying the inexorability of fate. Jennie Brinker, the winsome daughter of the wealthy lumberman Ned Brinker, is a beautiful blonde with insouciant charm, and Joe is smitten immediately. Soon, it is time for Joe to go to college to study medicine as his father did. The 1920s college atmosphere is evoked through dance music, college cheers, and excerpts from professors’ lectures. But Jennie continues to haunt Joe in his thoughts and desires: “You are lovelier by far, my darling, than I dreamed you could be!” Joe is obsessed with Jennie’s external beauty, but of her deeper motivations he hasn’t a clue. While in college, Joe meets fellow student Charlie Townsend, a more worldly hedonist without Joe’s hometown values. All Charlie can think about is girls, and when Jennie is seeing another boy, Bertram, Joe decides to go out on a date with Charlie’s acquaintance Beulah. She sings “So Far”, a song about the romantic possibilities of their beginning friendship. However, upon finishing her song, Beulah notices that “the little louse is asleep!” No competition for Jennie! When Joe returns home, he and Jennie become engaged.
Shortly before the wedding, Marjorie Taylor and Jennie Brinker have a major argument, and Jennie reveals her true intent by telling Marjorie the plans she has for Joe’s success, and by demonstrating her indomitable will and determination to achieve them. Marjorie is shattered, realizing that Joe has fallen into the arms of a ruthless schemer, but is helpless to change matters. In fact, she dies soon after their confrontation. With her major antagonist out of the way, Jennie is free to control Joe the way she wants. In Act 2, she does just that. Another ritual: the wedding. “What A Lovely Day For A Wedding!” is a satirical song, showing how the Brinker relatives and Taylor relatives despise one another, as they come from different social milieus and have completely different values and expectations, but the wedding must go on. Act 1 ends with a choir wishing the newlyweds well in almost daring, shrieking tones, and the orchestra ends in total discord, hinting at what disasters lie ahead. Grandma Taylor and Marjorie appear on stage, shaking their heads in horror at what the future holds. And we, as the audience, can only wait for Act 2!
The Sun Signs – Zodiac Dvādasa Āditya – The twelve Sun Signs: For one reason or the other, whether it be the conquest of the Normans or the birth of Christ, the starting date of the year has been varying as calendars come and go. In the scheme of Vedic astrology, the solar calendar consists of twelve houses of 30 degrees each, covering the total span of 360 degrees. These are called the twelve Sun Signs (dvādasa āditya). These signs are fixed (unlike those used in western astrology), although the names and other significations, nature etc. are similar to those used in Western Astrology.
Table 1: The Characteristics of the Signs (rows reconstructed from the rules given below; only rows 8 and 11 survived in the source)
|No.||Sign||Element||Mobility||Guṇa||Sex||Lord|
|1||Aries||Fire||Movable||Rajas||Male||Mars|
|2||Taurus||Earth||Fixed||Tamas||Female||Venus|
|3||Gemini||Air||Dual||Sattva||Male||Mercury|
|4||Cancer||Water||Movable||Rajas||Female||Moon|
|5||Leo||Fire||Fixed||Tamas||Male||Sun|
|6||Virgo||Earth||Dual||Sattva||Female||Mercury|
|7||Libra||Air||Movable||Rajas||Male||Venus|
|8||Scorpio||Water||Fixed||Tamas||Female||Mars & Ketu|
|9||Sagittarius||Fire||Dual||Sattva||Male||Jupiter|
|10||Capricorn||Earth||Movable||Rajas||Female||Saturn|
|11||Aquarius||Air||Fixed||Tamas||Male||Saturn & Rāhu|
|12||Pisces||Water||Dual||Sattva||Female||Jupiter|
(a) Sex: The sign is either positive “masculine” or negative “feminine”. The odd numbered signs (as reckoned from Aries) are the Male or Odd signs, while the even numbered signs are Female or Even signs. (b) Ruling Element: Each sign belongs to one of the triplicity of Fire, Air, Earth and Water (refer tattva below). “Triplicity” means triplicate or three of a kind, and there are three signs of each of the four types of elemental forms. The seers called these the Fire triplicity (Aries, Leo and Sagittarius), because there are three zodiac signs for each element. We will stick to this terminology instead of using the more refined term ‘energy’. Since these signs are similar, this triplicity, trine or trikoṇa (jyotiṣa terminology) represents harmony or similarity of nature/interest. These signs are 120° apart. (c) Mobility: Each sign is either cardinal (movable or Chara), fixed (sthira) or mutable (dual or dvisvabhāva). Thus, every fourth sign reckoned from Aries is movable, every fourth reckoned from Taurus is fixed, and every fourth reckoned from Gemini is Dual in nature. This similarity of every fourth is called the quadruplicity of the sign.
The movable signs have excessive energy and are capable of easy movement, showing the predominance of rajas guṇa. The fixed signs have low energy and an inability to move, thereby showing a predominance of tamas guṇa. The Dual signs are a balance between the excessive mobility of the movable signs and the immobility of the fixed signs, thereby showing a predominance of sattva guṇa. Guṇa is the inner attribute of the sign, and this inner nature of the sign manifests externally in different ways, mobility being one of them. This difference between the tropical and sidereal zodiacs of the western astrologer and the Vedic astrologer arises because Vedic astrology takes into account the astronomical fact of the precession of the solar system around another point called a “nābhi” or Navel, whereby the system precesses (moves slowly backward) at the rate of about 50.18 seconds per year (others take an average varying from 50″ to 54″ per year, based on a period of 26,000 or 24,000 years for the precession to complete one circle of 360 degrees). This precession results in a mathematical correction called Ayanāṁśa. Viscount Cheiro writes, “We must not forget that it was the Hindus who discovered what is known as the precession of the Equinoxes, and in their calculation such an occurrence takes place every 25,827 years. Our modern science, after labors of hundreds of years, has simply proved them to be correct.” The dates assigned to the signs of the zodiac are based on the solar ingress (i.e. entry of the Sun) into the signs. Depending on the value of Ayanāṁśa used, this date can vary by a few days, and different “astrologers” assign slightly different dates based on their belief about the date of conjunction of the zero point of the precession and Aries (called the beginning of Kali Yuga) and the rate of the precession. The Government set up the Calendar Reform Committee to correct the anomaly between the beliefs of different Vedic calendars.
The result was what is popularly called the Rashtriya Pañcāṅga (national calendar) and the Lahiri ayanāṁśa. The date at which the Sun enters a sign is called the Saṅkrānti. Thus, we have 12 Saṅkrānti based on the date of the Sun’s entry into each of the 12 signs from Aries to Pisces. Good Vedic astrologers will date events from the days calculated from Saṅkrānti and also the Tithi. The Vedic Sun signs have a profound influence on the desires of the soul, which is the real individual, and if charts are matched based on the Sun signs in addition to the Moon, then real compatibility can be ascertained. Thus, in a way, people having the same Sun sign can be called “soul mates”. Tithi is the Vedic date of the Lunar calendar and is a measure of the distance between the Sun and the Moon, starting from Pratipada, when they conjoin, to Pūrnima, when they oppose at 180 degrees. There are 15 Tithi in the Śukla Pakṣa (waxing phase) and 15 Tithi in the waning phase (Kṛṣṇa Pakṣa) (refer Table 2). Each Tithi is an angle of 12 degrees. This angle is mathematically represented as: Angle = Longitude of the Moon – Longitude of the Sun, and Tithi = Angle / 12°.
Table 2: Tithi or the Vedic date (all angles in degrees)
|Śukla Pakṣa||Pratipad-1||0–12||Dwiteeya-2||12–24||Truteeya-3||24–36|
Āditya is the name of the Sun God as born from Aditi [the mother of the Gods or Deva (Deva is derived from Diva, meaning the giver of light or enlightenment)]. There are 12 Āditya or Sun Gods, one for each of the 12 months of solar transit through the 12 signs. To find your Sun God/Āditya, refer to Vedic Remedies in Astrology by Sanjay Rath. Ketu, the descending node, is the co-lord of Scorpio. Rāhu, the ascending node, is the co-lord of Aquarius. Ayanāṁśa: this is the precession of the solar system and is to be added or subtracted from the zero point of Aries in the western chart to arrive at the Vedic horoscope.
For example, the solar ingress into Aries resulting in the start of the Aries Month in western astrology is March 21. However, the Ayanāṁśa at present (2000 AD) is about 23 degrees and adding 23 days to March 21 we get April 14 as the date for Solar ingress into Aries in the Vedic Calendar. Based on the traditional period of 25,827 years to cover 360 Degrees of the zodiac. Cheiro Book of numbers, Page 19.
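The ayanāṁśa correction and the tithi formula described above reduce to a few lines of arithmetic. The sketch below is illustrative only: the function names are mine, the 23° ayanāṁśa is the approximate value the text cites for 2000 AD (not a precise ephemeris quantity), and longitudes are assumed to be ecliptic longitudes in degrees.

```python
def sidereal_longitude(tropical_longitude, ayanamsa=23.0):
    """Convert a tropical (western) longitude to a sidereal (Vedic) one
    by subtracting the ayanamsa, roughly 23 degrees around 2000 AD."""
    return (tropical_longitude - ayanamsa) % 360.0

def tithi(moon_longitude, sun_longitude):
    """Return the tithi number (1-30) and paksha for the given longitudes.
    Each tithi spans 12 degrees of Sun-Moon elongation, so 30 tithis
    cover the full 360-degree lunar month."""
    angle = (moon_longitude - sun_longitude) % 360.0
    number = int(angle // 12) + 1  # 1..30
    paksha = "Shukla (waxing)" if number <= 15 else "Krishna (waning)"
    return number, paksha
```

For example, a Moon 100° ahead of the Sun falls in the 9th tithi of the waxing fortnight, while an elongation of 250° falls in the waning fortnight.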
Animal Welfare Officer Enforce law and educate public on prevention of cruelty to animals. - Investigate complaints of animal cruelty - Rescue animals that have been treated badly - Undertake inspections of properties and commercial operations - Educate people about the proper treatment of animals - Prepare cases for court hearings - Care for and treat neglected animals - Re-home animals once they have recovered - like working with animals - communication skills - ability to work alone or as part of a team Explore a career as an Animal Welfare Officer here. A Certificate II or III, or at least 1 year of relevant experience, is usually needed to work in this job. Around one in three workers have a university degree. Even with a qualification, sometimes experience or on-the-job training is necessary. Registration or licensing may be required. VET Course Providers: New South Wales Australian Capital Territory For more information on required qualifications, click here.
Newsroom Formatted Articles - Life Stages & Populations Mat releases (also known as matte releases or formatted releases) are formatted, ready-to-print articles that are free to use in any publication. CDC’s Formatted Release Library has articles on a variety of important health topics. Please call (404) 639-3286 or e-mail firstname.lastname@example.org with the title of the mat release you would like to use and the name of your publication. We will get back to you within one business day with a watermark-free copy. Remember to check back for new articles or e-mail email@example.com to get on our distribution list and receive updates when articles are added. First Look: U.S. Youth and Seizures Nearly 1 in 100 children and adolescents in the United States have seizures, according to the first national study to look at seizures, co-occurring conditions, household income, and access to healthcare for children and teens between the ages of 6 and 17 years. Protect your children from environmental hazards As parents and kids get ready to head back to school, it's a great time for parents to update their child's vaccine records. It's also a good idea to be aware of your child's environment and how it might be affecting their health. The environment affects children differently than adults. Because their bodies are still growing, children are at greater risk if they are exposed to environmental contaminants. Simple Steps to Reduce Fall Risks Every year, one in three adults over age 64 falls. Thousands of older adults die from fall injuries every year and about two million are treated for nonfatal fall injuries in emergency departments. But simple home modifications and exercises that improve strength and balance can help reduce the risk of falling. Diabetes Among American Indians and Alaska Natives American Indian and Alaska Native adults are twice as likely as non-Hispanic whites to have diagnosed type 2 diabetes.
Rates of diagnosed diabetes among American Indians and Alaska Natives younger than 35 doubled from 1994 to 2004. Teen Sleep Habits: What Should You Do? Almost 70 percent of high school students are not getting the recommended hours of sleep on school nights, according to a study by the Centers for Disease Control and Prevention. Researchers found insufficient sleep to be associated with a number of unhealthy activities. Positive Parenting Tips: Babies, Toddlers & Preschoolers Positive Parenting Tips: Childhood & Adolescence As children grow, they experience physical, mental, social, and emotional changes. Learning about each of these stages can help prepare you for the challenges and opportunities of parenting teenagers. Learning the Signs of Autism and the Importance of Acting Early To raise awareness about developmental milestones and the importance of identifying them and getting help early, the Centers for Disease Control and Prevention (CDC) offers free information and tools for parents, health care professionals, and early educators through its "Learn the Signs. Act Early." campaign (www.cdc.gov/actearly). Research has shown that early intervention is key to helping a child reach his or her full potential. That's why CDC wants all parents to "learn the signs" and "act early," even if a problem is only suspected. Keys to Healthy Aging What is longevity without health? Adults today are looking not only to extend their lives, but to enjoy their extra years. By 2030, the proportion of the U.S. population aged 65 and older will double to about 71 million older adults, or one in every five Americans. The far-reaching implications of the increasing number of older Americans and their growing diversity will include unprecedented demands on public health, aging services, and the nation's health care system.
The Centers for Disease Control and Prevention (CDC) works hard to protect health and promote quality of life through the prevention and control of disease, injury, and disability. CDC has developed some keys to preventing some of the most common health issues facing older adults. Latino teens happier, healthier if families embrace biculturalism Parents of adolescents know that it can be challenging to make sure their teens are making healthy choices. Latino parents who have immigrated to the United States face an additional and unique challenge: raising adolescents in a new country and culture. African-American Women and Their Babies at a Higher Risk for Pregnancy and Birth Complications Preterm, or premature, delivery is the most frequent cause of infant mortality, accounting for more than one third of all infant deaths during the first year of life. The infant mortality rate among black infants is 2.4 times higher than that of white infants, primarily due to preterm birth. In the United States, the risk of preterm birth for Non-Hispanic black women is approximately 1.5 times the rate seen in white women. Breathe Easier When You Know More About Asthma Did you know that 1 in 10 Americans has, or has had, asthma at some point in their lives? Most people don’t die from asthma, but there is concern for African Americans because asthma is more likely to cause death. The reason for this disparity is not known. But there are asthma control techniques to help people manage their condition successfully. The Centers for Disease Control and Prevention (CDC) offers this important advice to everyone with asthma – have an asthma action plan and exercise it. The CDC has a variety of information that patients and health-care providers can use to control asthma.
Help Seniors Live Better, Longer: Prevent Brain Injury Anyone who cares for or just cares about an older adult—a parent, grandparent, other family member, or even a close friend—will say they are concerned about keeping their loved one healthy and independent. But few will say they are worried about a traumatic brain injury (TBI) robbing their loved one of his or her independence. That's because many people simply are unaware that TBI is a serious health concern for older adults. Hearing Screening for Newborns Important for Development Babies begin to develop speech and language from the time they are born. They learn by listening and interacting with the sounds and voices around them. But, when a baby is born with hearing loss, many sounds and voices are not heard, and the child's speech and language development can be delayed. Most Parents Unaware of Possible Brain Damage from Untreated Jaundice A majority of Americans are not aware of the serious potential risks associated with newborn jaundice, according to a recent survey. This national survey of nearly 5,000 Americans found that more than 70 percent (71.9 percent) of respondents polled had never heard of kernicterus, a condition that results from brain damage caused when bilirubin levels get too high and go untreated. Why It’s Important To Learn About Cerebral Palsy Today We all know the importance of making sure a child is healthy, but parents may not be aware of the signs and symptoms of major developmental disabilities, such as cerebral palsy (CP). CP, the most common cause of motor disability in childhood, is a group of disorders that affect a person’s ability to move and keep their balance and posture. Cerebral means having to do with the brain. Palsy means weakness or problems with using the muscles. The symptoms of CP vary from person to person. A child may simply be a little clumsy or awkward or unable to walk at all.
Why Alcohol and Pregnancy Do Not Mix
Doubled Die Classification This page and the following pages are compliments of Coppercoins.com. I want to personally thank Chuck - (dblddies) and Coppercoins.com for the use of their text, links and support of Coin Varieties Club and the Club's website. Please visit Coppercoins.com by clicking on the link above. Thanks The author's system of assigning die numbers to doubled dies as they are added to the system would be complete in itself if all doubled dies occurred in the same manner. Being that this is not the case, we turn to a system of "classes" of doubling to further describe the doubling on a given coin. Doubled dies are classified using a system originally published by John A. Wexler in his 1984 book, The Lincoln Cent Doubled Die. Each of the eight different classes of doubling is unique and readily discernible if one knows what to look for. The classes of doubling are used throughout this site to help visitors visualize what a doubled die might look like just by looking at the die number and classification. Each is explained below as it is currently used by the Combined Organizations of Numismatic Error Collectors of America (CONECA), which purchased the system from John Wexler and still uses it today. Courtesy of Coppercoins.com Doubled Die Classification Class I - Rotated Hub Doubling Used to describe doubling that occurs when a hub is rotated about an axis near the center of the design between hubbings. The rotation between hubbings is described as being either counterclockwise (CCW) or clockwise (CW) in direction. Class II - Distorted Hub Doubling This class of doubling occurs when two separate hubs are used to make a die, and one of the hubs is warped or distorted. This style of doubling is spread toward the edge (E) or center (C) of the die. Class III - Design Hub Doubling Occurs when the design is changed between the hub used for the first hubbing and the second hubbing.
A good example of this would be the different 1960 small date over large date (or vice versa) varieties. Class IV - Offset Hub Doubling This type of doubling, instead of having a clock-like turn in the hubbings, has a shift in direction between hubbings. Spread is in a cardinal direction, usually described simply with north, south, east, or west. A good example in this case is the famous 1983P cent DDR. All design elements are spread to the north. Class V - Pivoted Hub Doubling Another of the clock-like hub rotations, but the main difference between this class and class I is that this type of doubling has a pivot point near the rim of the die instead of near the center of the die. This causes the doubling to be stronger on one side of the coin than the other. A good example here is the well known 1995 cent DDO. The pivot point is near the date on the coin, so hardly any doubling is noticeable there, while LIBERTY on the opposite side of the design is heavily doubled. Class VI - Distended Hub Doubling In this style of doubling the devices on the hub actually flatten on the hub between hubbings on the die. The result is fatter than normal design elements or skewed letters and numbers. This is usually more prevalent toward the edge of the design than in the middle. A good example to see is a 1943 doubled die in which the date is nearly twice as thick as normal due to a distorted hub. This type of doubling is often not associated with the presence of separation lines or notching at the corners of the design elements. Class VII - Modified Hub Doubling This type of doubling occurs when a design element is ground off of a hub (although not completely) and replaced. An example would be a known 1963D cent doubled die in which the 3 in the date was too low on a hub. This doubling only shows in the 3 of the date since that was the only design element in which a modification took place.
Class VIII - Tilted Hub Doubling This class of doubling occurs when a hubbing is not aligned properly, but also is not aligned squarely with the die. The result is doubling only on one area of the die, usually somewhere along the rim. This is the newest recognized class of doubled die.
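For cataloging purposes, the eight classes lend themselves to a simple lookup table. The sketch below is my own illustration of the Wexler/CONECA system described above, not part of any official listing; the one-line summaries are paraphrased from the class descriptions.

```python
# Wexler/CONECA doubled die classes, keyed by class number (paraphrased).
DOUBLING_CLASSES = {
    1: ("Rotated Hub", "clock-like rotation (CW or CCW) about an axis near the die center"),
    2: ("Distorted Hub", "warped hub; spread toward the edge (E) or center (C)"),
    3: ("Design Hub", "different designs used for the first and second hubbings"),
    4: ("Offset Hub", "straight shift between hubbings in a cardinal direction"),
    5: ("Pivoted Hub", "rotation about a pivot near the rim; strongest opposite the pivot"),
    6: ("Distended Hub", "flattened hub devices; thickened elements, usually no notching"),
    7: ("Modified Hub", "design element partly ground off the hub and replaced"),
    8: ("Tilted Hub", "non-square hubbing; doubling confined to one area near the rim"),
}

ROMAN = ["I", "II", "III", "IV", "V", "VI", "VII", "VIII"]

def describe(class_number):
    """Format a class entry the way the text cites it, e.g. 'Class V - Pivoted Hub Doubling'."""
    name, cause = DOUBLING_CLASSES[class_number]
    return f"Class {ROMAN[class_number - 1]} - {name} Doubling: {cause}"
```

A die listing that reads "class V" can then be expanded to its full description with `describe(5)`.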
As the urban population increases, so does the area of irrigated urban landscape. Summer water use in urban areas can be 2-3x winter baseline water use due to increased demand for landscape irrigation. Improper irrigation practices and large rainfall events can result in runoff from urban landscapes, which has the potential to carry nutrients and sediments into local streams and lakes where they may contribute to eutrophication. A 1,000 m2 facility was constructed which consists of 24 individual 33.6 m2 field plots, each equipped for measuring total runoff volumes with time and collection of runoff subsamples at selected intervals for quantification of chemical constituents in the runoff water from simulated urban landscapes. Runoff volumes from the first and second trials had coefficient of variability (CV) values of 38.2 and 28.7%, respectively. CV values for runoff pH, EC, and Na concentration for both trials were all under 10%. Concentrations of DOC, TDN, DON, PO4-P, K+, Mg2+, and Ca2+ had CV values less than 50% in both trials. Overall, the results of testing performed after sod installation at the facility indicated good uniformity between plots for runoff volumes and chemical constituents. The large plot size is sufficient to include much of the natural variability and therefore provides better simulation of urban landscape ecosystems.
Experimental Protocol for Manipulating Plant-induced Soil Heterogeneity Institutions: Case Western Reserve University. Coexistence theory has often treated environmental heterogeneity as being independent of the community composition; however, biotic feedbacks such as plant-soil feedbacks (PSF) have large effects on plant performance, and create environmental heterogeneity that depends on the community composition. Understanding the importance of PSF for plant community assembly necessitates understanding of the role of heterogeneity in PSF, in addition to mean PSF effects.
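The coefficient of variability (CV) reported for the runoff plots above is simply the sample standard deviation expressed as a percentage of the mean; lower CV between replicate plots indicates better uniformity. A minimal sketch (the function name is mine, not from the study):

```python
from statistics import mean, stdev

def coefficient_of_variability(values):
    """CV (%) = sample standard deviation / mean * 100.
    Used to compare uniformity of runoff volumes or chemical
    concentrations across replicate plots."""
    return stdev(values) / mean(values) * 100.0
```

For example, hypothetical runoff volumes of 8, 10, and 12 L from three plots give a CV of 20%.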
Here, we describe a protocol for manipulating plant-induced soil heterogeneity. Two example experiments are presented: (1) a field experiment with a 6-patch grid of soils to measure plant population responses and (2) a greenhouse experiment with 2-patch soils to measure individual plant responses. Soils can be collected from the zone of root influence (soils from the rhizosphere and directly adjacent to the rhizosphere) of plants in the field from conspecific and heterospecific plant species. Replicate collections are used to avoid pseudoreplicating soil samples. These soils are then placed into separate patches for heterogeneous treatments or mixed for a homogenized treatment. Care should be taken to ensure that heterogeneous and homogenized treatments experience the same degree of soil disturbance. Plants can then be placed in these soil treatments to determine the effect of plant-induced soil heterogeneity on plant performance. We demonstrate that plant-induced heterogeneity results in different outcomes than predicted by traditional coexistence models, perhaps because of the dynamic nature of these feedbacks. Theory that incorporates environmental heterogeneity influenced by the assembling community and additional empirical work is needed to determine when heterogeneity intrinsic to the assembling community will result in different assembly outcomes compared with heterogeneity extrinsic to the community composition.
Environmental Sciences, Issue 85, Coexistence, community assembly, environmental drivers, plant-soil feedback, soil heterogeneity, soil microbial communities, soil patch
Design and Operation of a Continuous 13C and 15N Labeling Chamber for Uniform or Differential, Metabolic and Structural, Plant Isotope Labeling Institutions: Colorado State University, USDA-ARS, Colorado State University.
Tracing rare stable isotopes from plant material through the ecosystem provides the most sensitive information about ecosystem processes, from CO2 fluxes and soil organic matter formation to small-scale stable-isotope biomarker probing. Coupling multiple stable isotopes such as 13C with 15N or 2H has the potential to reveal even more information about complex stoichiometric relationships during biogeochemical transformations. Isotope-labeled plant material has been used in various studies of litter decomposition and soil organic matter formation1-4. From these and other studies, however, it has become apparent that structural components of plant material behave differently than metabolic components (i.e. leachable low-molecular-weight compounds) in terms of microbial utilization and long-term carbon storage5-7. The ability to study structural and metabolic components separately provides a powerful new tool for advancing the forefront of ecosystem biogeochemical studies. Here we describe a method for producing 13C- and 15N-labeled plant material that is either uniformly labeled throughout the plant or differentially labeled in structural and metabolic plant components, and we present the construction and operation of a continuous 13C and 15N labeling chamber that can be modified to meet various research needs. Uniformly labeled plant material is produced by continuous labeling from seedling to harvest, while differential labeling is achieved by removing the growing plants from the chamber weeks prior to harvest. Representative results from growing Andropogon gerardii Kaw demonstrate the system's ability to efficiently label plant material at the targeted levels.
Through this method we have produced plant material with a uniform label of 4.4 atom% 13C and 6.7 atom% 15N, or material that is differentially labeled by up to 1.29 atom% 13C and 0.56 atom% 15N in its metabolic and structural components (hot water extractable and hot water residual components, respectively). Challenges lie in maintaining proper temperature, humidity, CO2 concentration, and light levels in an airtight atmosphere for successful plant production. This chamber description represents a useful research tool to effectively produce uniformly or differentially multi-isotope labeled plant material for use in experiments on ecosystem biogeochemical cycling.
Environmental Sciences, Issue 83, 13C, 15N, plant, stable isotope labeling, Andropogon gerardii, metabolic compounds, structural compounds, hot water extraction

Environmentally Induced Heritable Changes in Flax
Institutions: Case Western Reserve University.
Some flax varieties respond to nutrient stress by modifying their genome, and these modifications can be inherited through many generations. Heritable phenotypic variations are also associated with these genomic changes1,2. The flax variety Stormont Cirrus (Pl), when grown under three different nutrient conditions, can either remain inducible (under the control conditions) or become stably modified to either the large or small genotroph by growth under high or low nutrient conditions, respectively. The lines resulting from the initial growth under each of these conditions appear to grow better when grown under the same conditions in subsequent generations; notably, the Pl line grows best under the control treatment, indicating that the plants growing under both the high and low nutrients are under stress. One of the genomic changes associated with the induction of heritable changes is the appearance of an insertion element (LIS-1)3,4 while the plants are growing under the nutrient stress.
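The isotope labeling levels in the chamber study earlier are reported as atom% 13C and 15N; what matters for downstream tracing is the enrichment above natural background, the atom% excess (APE). A small worked example; the natural-abundance figures (~1.11 atom% 13C, ~0.366 atom% 15N) are standard reference values, not taken from this text:

```python
# Approximate natural abundances (atom%) of the rare isotopes,
# from standard isotope-ecology references (assumed values).
NAT_13C = 1.11
NAT_15N = 0.366

def atom_percent_excess(measured, natural):
    """Atom% excess (APE): enrichment above the natural background."""
    return measured - natural

# Uniform label reported in the text: 4.4 atom% 13C and 6.7 atom% 15N.
ape_c = atom_percent_excess(4.4, NAT_13C)
ape_n = atom_percent_excess(6.7, NAT_15N)
```

APE, rather than raw atom%, is what propagates through mixing models when labeled material is traced into soil or respired CO2 pools.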
With respect to this insertion event, the flax variety Stormont Cirrus (Pl), when grown under three different nutrient conditions, can either remain unchanged (under the control conditions), have the insertion appear in all the plants (under low nutrients) and have it transmitted to the next generation, or have the insertion (or parts of it) appear but not be transmitted through generations (under high nutrients)4. The frequency of the appearance of this insertion indicates that it is under positive selection, which is also consistent with the growth response in subsequent generations. Leaves or meristems harvested at various stages of growth are used for DNA and RNA isolation. The RNA is used to identify variation in expression associated with the various growth environments and/or the presence/absence of LIS-1. The isolated DNA is used to identify those plants in which the insertion has occurred.
Plant Biology, Issue 47, Flax, genome variation, environmental stress, small RNAs, altered gene expression

Characterization of Complex Systems Using the Design of Experiments Approach: Transient Protein Expression in Tobacco as a Case Study
Institutions: RWTH Aachen University, Fraunhofer Gesellschaft.
Plants provide multiple benefits for the production of biopharmaceuticals, including low costs, scalability, and safety. Transient expression offers the additional advantage of short development and production times, but expression levels can vary significantly between batches, giving rise to regulatory concerns in the context of good manufacturing practice. We used a design of experiments (DoE) approach to determine the impact of major factors, such as regulatory elements in the expression construct, plant growth and development parameters, and the incubation conditions during expression, on the variability of expression between batches. We tested plants expressing a model anti-HIV monoclonal antibody (2G12) and a fluorescent marker protein (DsRed).
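A DoE screen like the one just described starts from a design matrix enumerating combinations of factor levels. A minimal sketch of a two-level full factorial design; the factor names and levels here are hypothetical stand-ins, not the factors actually tested in the study:

```python
from itertools import product

# Hypothetical two-level factors for a transient-expression screen.
factors = {
    "promoter":        ["35S", "double35S"],
    "plant_age_weeks": [4, 6],
    "incubation_days": [3, 5],
}

# Full 2^3 factorial: one run per combination of factor levels.
design = [dict(zip(factors, combo)) for combo in product(*factors.values())]
```

In practice, DoE software replaces the full factorial with a fractional or optimal design and augments it step-wise, as the abstract notes, so that far fewer runs are needed as the number of factors grows.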
We discuss the rationale for selecting certain properties of the model and identify its potential limitations. The general approach can easily be transferred to other problems because the principles of the model are broadly applicable: knowledge-based parameter selection, complexity reduction by splitting the initial problem into smaller modules, software-guided setup of optimal experiment combinations, and step-wise design augmentation. The methodology is therefore useful not only for characterizing protein expression in plants but also for the investigation of other complex systems lacking a mechanistic description. The predictive equations describing the interconnectivity between parameters can be used to establish mechanistic models for other complex systems.
Bioengineering, Issue 83, design of experiments (DoE), transient protein expression, plant-derived biopharmaceuticals, promoter, 5'UTR, fluorescent reporter protein, model building, incubation conditions, monoclonal antibody

Direct Pressure Monitoring Accurately Predicts Pulmonary Vein Occlusion During Cryoballoon Ablation
Institutions: Piedmont Heart Institute, Medtronic Inc.
Cryoballoon ablation (CBA) is an established therapy for atrial fibrillation (AF). Pulmonary vein (PV) occlusion is essential for achieving antral contact and PV isolation and is typically assessed by contrast injection. We present a novel method of direct pressure monitoring for assessment of PV occlusion. Transcatheter pressure is monitored during balloon advancement to the PV antrum. Pressure is recorded via a single pressure transducer connected to the inner lumen of the cryoballoon. Pressure curve characteristics are used to assess occlusion in conjunction with fluoroscopic or intracardiac echocardiography (ICE) guidance. PV occlusion is confirmed when loss of the typical left atrial (LA) pressure waveform is observed, with recording of PA pressure characteristics (no A wave and a rapid V wave upstroke).
Complete pulmonary vein occlusion as assessed with this technique was confirmed with concurrent contrast utilization during initial testing and has been shown to be highly accurate and readily reproducible. We evaluated the efficacy of this novel technique in 35 patients. A total of 128 veins were assessed for occlusion with the cryoballoon using the pressure monitoring technique; occlusive pressure was demonstrated in 113 veins, with resultant successful pulmonary vein isolation in 111 veins (98.2%). Occlusion was confirmed with subsequent contrast injection during the initial ten procedures, after which contrast utilization was rapidly reduced or eliminated given the highly accurate identification of the occlusive pressure waveform with limited initial training. Verification of PV occlusive pressure during CBA is a novel approach to assessing effective PV occlusion, and it accurately predicts electrical isolation. Utilization of this method results in a significant decrease in fluoroscopy time and volume of contrast.
Medicine, Issue 72, Anatomy, Physiology, Cardiology, Biomedical Engineering, Surgery, Cardiovascular System, Cardiovascular Diseases, Surgical Procedures, Operative, Investigative Techniques, Atrial fibrillation, Cryoballoon Ablation, Pulmonary Vein Occlusion, Pulmonary Vein Isolation, electrophysiology, catheterization, heart, vein, clinical, surgical device, surgical techniques

Magnetic Tweezers for the Measurement of Twist and Torque
Institutions: Delft University of Technology.
Single-molecule techniques make it possible to investigate the behavior of individual biological molecules in solution in real time. These techniques include so-called force spectroscopy approaches such as atomic force microscopy, optical tweezers, flow stretching, and magnetic tweezers. Amongst these approaches, magnetic tweezers have distinguished themselves by their ability to apply torque while maintaining a constant stretching force.
Here, it is illustrated how such a “conventional” magnetic tweezers experimental configuration can, through a straightforward modification of its field configuration to minimize the magnitude of the transverse field, be adapted to measure the degree of twist in a biological molecule. The resulting configuration is termed the freely-orbiting magnetic tweezers. Additionally, it is shown how a further modification of the field configuration can yield a transverse field with a magnitude intermediate between that of the “conventional” magnetic tweezers and the freely-orbiting magnetic tweezers, which makes it possible to directly measure the torque stored in a biological molecule. This configuration is termed the magnetic torque tweezers. The accompanying video explains in detail how the conversion of conventional magnetic tweezers into freely-orbiting magnetic tweezers and magnetic torque tweezers can be accomplished, and demonstrates the use of these techniques. These adaptations maintain all the strengths of conventional magnetic tweezers while greatly expanding the versatility of this powerful instrument.
Bioengineering, Issue 87, magnetic tweezers, magnetic torque tweezers, freely-orbiting magnetic tweezers, twist, torque, DNA, single-molecule techniques

Optimization and Utilization of Agrobacterium-mediated Transient Protein Production in Nicotiana
Institutions: Fraunhofer USA Center for Molecular Biotechnology.
Agrobacterium-mediated transient protein production in plants is a promising approach to produce vaccine antigens and therapeutic proteins within a short period of time. However, this technology is only just beginning to be applied to large-scale production as many technological obstacles to scale-up are now being overcome. Here, we demonstrate a simple and reproducible method for industrial-scale transient protein production based on vacuum infiltration of Nicotiana plants with Agrobacteria carrying launch vectors.
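Magnetic torque tweezers, as described above, infer the torque stored in a molecule from the bead's rotation angle: angular trap stiffness follows from equipartition of thermal angle fluctuations, and torque from the shift in the mean angle. A minimal sketch of that analysis with illustrative angle traces; the numbers and trace lengths are hypothetical:

```python
import statistics

KBT_PN_NM = 4.1  # thermal energy at room temperature, ~4.1 pN*nm

def angular_stiffness(theta_rad):
    """Equipartition: k_theta = kB*T / <delta theta^2>  (pN*nm per rad^2)."""
    return KBT_PN_NM / statistics.pvariance(theta_rad)

def stored_torque(theta_reference, theta_loaded):
    """Torque (pN*nm) from the shift in mean equilibrium angle
    between a reference trace and a trace after twist is applied."""
    k = angular_stiffness(theta_reference)
    return k * (statistics.mean(theta_loaded) - statistics.mean(theta_reference))

# Illustrative angle fluctuation traces (radians).
theta_ref  = [0.2, -0.2, 0.2, -0.2]
theta_load = [0.3, -0.1, 0.3, -0.1]
tau = stored_torque(theta_ref, theta_load)
```

Real traces contain thousands of camera frames rather than four points; the equipartition estimate of stiffness is only as good as the sampling of the angular fluctuations.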
Optimization of Agrobacterium cultivation in AB medium allows direct dilution of the bacterial culture in Milli-Q water, simplifying the infiltration process. Among three tested species of Nicotiana, N. excelsiana (N. benthamiana × N. excelsior) was selected as the most promising host due to the ease of infiltration, the high level of reporter protein production, and about two-fold higher biomass production under controlled environmental conditions. Induction of Agrobacterium harboring pBID4-GFP (Tobacco mosaic virus-based) using chemicals such as acetosyringone and monosaccharide had no effect on the protein production level. Infiltrating plants at 50 to 100 mbar for 30 or 60 sec resulted in about 95% infiltration of plant leaf tissues. Infiltration with the Agrobacterium laboratory strain GV3101 showed the highest protein production compared to laboratory strains LBA4404 and C58C1 and the wild-type strains at6, at10, at77, and A4. Co-expression of a viral RNA silencing suppressor, p23 or p19, in N. benthamiana resulted in earlier accumulation and increased production (15-25%) of the target protein (influenza virus hemagglutinin).
Plant Biology, Issue 86, Agroinfiltration, Nicotiana benthamiana, transient protein production, plant-based expression, viral vector, Agrobacteria

Efficient Agroinfiltration of Plants for High-level Transient Expression of Recombinant Proteins
Institutions: Arizona State University.
Mammalian cell culture is the major platform for commercial production of human vaccines and therapeutic proteins. However, it cannot meet the increasing worldwide demand for pharmaceuticals due to its limited scalability and high cost. Plants have been shown to be one of the most promising alternative pharmaceutical production platforms: robust, scalable, low-cost, and safe. The recent development of virus-based vectors has allowed rapid and high-level transient expression of recombinant proteins in plants.
To further optimize the utility of the transient expression system, we demonstrate in this study a simple, efficient, and scalable methodology to introduce target-gene-containing Agrobacterium into plant tissue. Our results indicate that agroinfiltration with both syringe and vacuum methods resulted in the efficient introduction of Agrobacterium into leaves and robust production of two fluorescent proteins, GFP and DsRed. Furthermore, we demonstrate the unique advantages offered by each method. Syringe infiltration is simple and does not need expensive equipment. It also allows the flexibility either to infiltrate an entire leaf with one target gene or to introduce genes of multiple targets on one leaf. Thus, it can be used for laboratory-scale expression of recombinant proteins as well as for comparing different proteins or vectors for yield or expression kinetics. The simplicity of syringe infiltration also suggests its utility in high school and college education for the subject of biotechnology. In contrast, vacuum infiltration is more robust and can be scaled up for commercial manufacture of pharmaceutical proteins. It also offers the advantage of being able to agroinfiltrate plant species that are not amenable to syringe infiltration, such as lettuce and Arabidopsis. Overall, the combination of syringe and vacuum agroinfiltration provides researchers and educators with a simple, efficient, and robust methodology for transient protein expression. It will greatly facilitate the development of pharmaceutical proteins and promote science education.
Plant Biology, Issue 77, Genetics, Molecular Biology, Cellular Biology, Virology, Microbiology, Bioengineering, Plant Viruses, Antibodies, Monoclonal, Green Fluorescent Proteins, Plant Proteins, Recombinant Proteins, Vaccines, Synthetic, Virus-Like Particle, Gene Transfer Techniques, Gene Expression, Agroinfiltration, plant infiltration, plant-made pharmaceuticals, syringe agroinfiltration, vacuum agroinfiltration, monoclonal antibody, Agrobacterium tumefaciens, Nicotiana benthamiana, GFP, DsRed, geminiviral vectors, imaging, plant model

Mapping Bacterial Functional Networks and Pathways in Escherichia coli using Synthetic Genetic Arrays
Institutions: University of Toronto, University of Regina.
Phenotypes are determined by a complex series of physical (e.g. protein-protein) and functional (e.g. gene-gene or genetic) interactions (GI)1. While physical interactions can indicate which bacterial proteins are associated as complexes, they do not necessarily reveal pathway-level functional relationships1. GI screens, in which the growth of double mutants bearing two deleted or inactivated genes is measured and compared to the corresponding single mutants, can illuminate epistatic dependencies between loci and hence provide a means to query and discover novel functional relationships2. Large-scale GI maps have been reported for eukaryotic organisms like yeast3-7, but GI information remains sparse for prokaryotes8, which hinders the functional annotation of bacterial genomes. To this end, we and others have developed high-throughput quantitative bacterial GI screening methods9,10. Here, we present the key steps required to perform the quantitative E. coli Synthetic Genetic Array (eSGA) screening procedure on a genome scale9, using natural bacterial conjugation and homologous recombination to systematically generate and measure the fitness of large numbers of double mutants in a colony array format.
Briefly, a robot is used to transfer, through conjugation, chloramphenicol (Cm)-marked mutant alleles from engineered Hfr (high frequency of recombination) 'donor strains' into an ordered array of kanamycin (Kan)-marked F- recipient strains. Typically, we use loss-of-function single mutants bearing non-essential gene deletions (e.g. the 'Keio' collection11) and essential gene hypomorphic mutations (i.e. alleles conferring reduced protein expression, stability, or activity9,12,13) to query the functional associations of non-essential and essential genes, respectively. After conjugation and the ensuing genetic exchange mediated by homologous recombination, the resulting double mutants are selected on solid medium containing both antibiotics. After outgrowth, the plates are digitally imaged and colony sizes are quantitatively scored using an in-house automated image processing system14. GIs are revealed when the growth rate of a double mutant is either significantly better or worse than expected9. Aggravating (or negative) GIs often result between loss-of-function mutations in pairs of genes from compensatory pathways that impinge on the same essential process2. Here, the loss of a single gene is buffered, such that either single mutant is viable. However, the loss of both pathways is deleterious and results in synthetic lethality or sickness (i.e. slow growth). Conversely, alleviating (or positive) interactions can occur between genes in the same pathway or protein complex2, as the deletion of either gene alone is often sufficient to perturb the normal function of the pathway or complex such that additional perturbations do not reduce activity, and hence growth, further.
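Scoring GIs as described above compares double-mutant fitness against the expectation from the two single mutants; a common choice is the multiplicative model, under which the interaction score is the deviation from the product of single-mutant fitnesses. A minimal sketch with hypothetical normalized colony-size fitness values:

```python
def gi_score(w_ab, w_a, w_b):
    """Epistasis under the multiplicative model: eps = W_ab - W_a * W_b.
    eps < 0: aggravating (synthetic sick/lethal); eps > 0: alleviating."""
    return w_ab - w_a * w_b

# Hypothetical colony-size fitness values normalized to wild type.
eps_aggravating = gi_score(0.10, 0.9, 0.8)  # far below the expected 0.72
eps_alleviating = gi_score(0.85, 0.9, 0.8)  # above the expected 0.72
```

In an eSGA screen the fitness values come from the quantified colony sizes, and significance thresholds on eps separate genuine interactions from plate-to-plate noise.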
Overall, systematically identifying and analyzing GI networks can provide unbiased, global maps of the functional relationships between large numbers of genes, from which pathway-level information missed by other approaches can be inferred9.
Genetics, Issue 69, Molecular Biology, Medicine, Biochemistry, Microbiology, Aggravating, alleviating, conjugation, double mutant, Escherichia coli, genetic interaction, Gram-negative bacteria, homologous recombination, network, synthetic lethality or sickness, suppression

Linking Predation Risk, Herbivore Physiological Stress and Microbial Decomposition of Plant Litter
Institutions: Yale University, Virginia Tech, The Hebrew University of Jerusalem.
The quantity and quality of detritus entering the soil determine the rate of decomposition by microbial communities as well as rates of nitrogen (N) recycling and carbon (C) sequestration1,2. Plant litter comprises the majority of detritus3, and so it is assumed that decomposition is only marginally influenced by biomass inputs from animals such as herbivores and carnivores4,5. However, carnivores may influence microbial decomposition of plant litter via a chain of interactions in which predation risk alters the physiology of their herbivore prey, which in turn alters soil microbial functioning when the herbivore carcasses are decomposed6. A physiological stress response by herbivores to the risk of predation can change the C:N elemental composition of herbivore biomass7,8,9, because stress from predation risk increases herbivore basal energy demands, which in nutrient-limited systems forces herbivores to shift their consumption from N-rich resources that support growth and reproduction to C-rich carbohydrate resources that support heightened metabolism6. Herbivores have limited ability to store excess nutrients, so stressed herbivores excrete N as they increase carbohydrate-C consumption7.
Ultimately, prey stressed by predation risk increase their body C:N ratio7,10, making them poorer-quality resources for the soil microbial pool, likely due to lower availability of labile N for microbial enzyme production6. Thus, decomposition of carcasses of stressed herbivores has a priming effect on the functioning of microbial communities that decreases the subsequent ability of microbes to decompose plant litter6,10,11. We present the methodology to evaluate linkages between predation risk and litter decomposition by soil microbes. We describe how to induce stress in herbivores from predation risk, measure those stress responses, and measure the consequences for microbial decomposition. We use insights from a model grassland ecosystem comprising a hunting spider predator (Pisaurina mira), a dominant grasshopper herbivore (Melanoplus femurrubrum), and a variety of grass and forb plants9.
Environmental Sciences, Issue 73, Microbiology, Plant Biology, Entomology, Organisms, Investigative Techniques, Biological Phenomena, Chemical Phenomena, Metabolic Phenomena, Microbiological Phenomena, Earth Resources and Remote Sensing, Life Sciences (General), Litter Decomposition, Ecological Stoichiometry, Physiological Stress and Ecosystem Function, Predation Risk, Soil Respiration, Carbon Sequestration, Soil Science, respiration, spider, grasshopper, model system

A Venturi Effect Can Help Cure Our Trees
Institutions: University of Padova.
In woody plants, xylem sap moves upwards through the vessels due to a decreasing gradient of water potential from the groundwater to the foliage. According to these factors and their dynamics, small amounts of sap-compatible liquids (i.e. pesticides) can be injected into the xylem system, reaching their target from inside.
This endotherapic method, called "trunk injection" or "trunk infusion" (depending on whether or not the user supplies an external pressure), confines the applied chemicals within the target tree only, making it particularly useful in urban situations. The main factors limiting wider use of the traditional drilling methods are the negative side effects of the holes that must be drilled around the trunk circumference to gain access to the xylem vessels beneath the bark. The University of Padova (Italy) recently developed a manual, drill-free instrument with a small, perforated blade that enters the trunk by separating the woody fibers with minimal friction. Furthermore, the lenticular-shaped blade reduces the vessels' cross section, increasing sap velocity and allowing the natural uptake of an external liquid up to the leaves when the transpiration rate is substantial. Ports partially close soon after removal of the blade due to the natural elasticity and turgidity of the plant tissues, and cambial activity completes the healing process in a few weeks.
Environmental Sciences, Issue 80, Trunk injection, systemic injection, xylematic injection, endotherapy, sap flow, Bernoulli principle, plant diseases, pesticides, desiccants

Extraction of High Molecular Weight Genomic DNA from Soils and Sediments
Institutions: University of British Columbia - UBC.
The soil microbiome is a vast and relatively unexplored reservoir of genomic diversity and metabolic innovation that is intimately associated with nutrient and energy flow within terrestrial ecosystems. Cultivation-independent environmental genomic (also known as metagenomic) approaches promise unprecedented access to this genetic information with respect to pathway reconstruction and functional screening for high-value therapeutic and biomass conversion processes.
However, the soil microbiome remains a challenge, largely due to the difficulty of obtaining high molecular weight DNA of sufficient quality for large-insert library production. Here we introduce a protocol for extracting high molecular weight, microbial community genomic DNA from soils and sediments. The quality of the isolated genomic DNA is ideal for constructing large-insert environmental genomic libraries for downstream sequencing and screening applications. The procedure starts with cell lysis: cell walls and membranes of microbes are lysed by both mechanical (grinding) and chemical (β-mercaptoethanol) forces. Genomic DNA is then isolated using extraction buffer, chloroform-isoamyl alcohol, and isopropyl alcohol. The buffers employed for the lysis and extraction steps include guanidine isothiocyanate and hexadecyltrimethylammonium bromide (CTAB) to preserve the integrity of the high molecular weight genomic DNA. Depending on the downstream application, the isolated genomic DNA can be further purified using cesium chloride (CsCl) gradient ultracentrifugation, which reduces impurities, including humic acids. The extraction takes approximately 8 hours, excluding the DNA quantification step; the CsCl gradient ultracentrifugation is a two-day process. During the entire procedure, genomic DNA should be treated gently to prevent shearing: avoid vigorous vortexing and repetitive harsh pipetting.
Microbiology, Issue 33, Environmental DNA, high molecular weight genomic DNA, DNA extraction, soil, sediments

A Microplate Assay to Assess Chemical Effects on RBL-2H3 Mast Cell Degranulation: Effects of Triclosan without Use of an Organic Solvent
Institutions: University of Maine, Orono.
Mast cells play important roles in allergic disease and immune defense against parasites. Once activated (e.g. by an allergen), they degranulate, a process that results in the exocytosis of allergic mediators.
Modulation of mast cell degranulation by drugs and toxicants may have positive or adverse effects on human health. Mast cell function has been dissected in detail using rat basophilic leukemia mast cells (RBL-2H3), a widely accepted model of human mucosal mast cells3-5. The mast cell granule component and allergic mediator β-hexosaminidase, which is released linearly in tandem with histamine from mast cells6, can easily and reliably be measured through reaction with a fluorogenic substrate, yielding measurable fluorescence intensity in a microplate assay that is amenable to high-throughput studies1. We have adapted this degranulation assay, originally published by Naal et al.1, for the screening of drugs and toxicants and demonstrate its use here. Triclosan is a broad-spectrum antibacterial agent that is present in many consumer products and has been found to be a therapeutic aid in human allergic skin disease7-11, although the mechanism for this effect is unknown. Here we demonstrate an assay for the effect of triclosan on mast cell degranulation. We recently showed that triclosan strongly affects mast cell function2. In an effort to avoid use of an organic solvent, triclosan is dissolved directly into aqueous buffer with heat and stirring, and the resultant concentration is confirmed using UV-Vis spectrophotometry (using ε280 = 4,200 L/M/cm)12. This protocol has the potential to be used with a variety of chemicals to determine their effects on mast cell degranulation and, more broadly, their allergic potential.
Immunology, Issue 81, mast cell, basophil, degranulation, RBL-2H3, triclosan, irgasan, antibacterial, β-hexosaminidase, allergy, Asthma, toxicants, ionophore, antigen, fluorescence, microplate, UV-Vis

Unraveling the Unseen Players in the Ocean - A Field Guide to Water Chemistry and Marine Microbiology
Institutions: San Diego State University, University of California San Diego.
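In the triclosan protocol above, the aqueous concentration is confirmed by UV-Vis using ε280 = 4,200 L/M/cm, a direct application of the Beer-Lambert law. A minimal sketch of that back-calculation; the absorbance reading is hypothetical:

```python
EPSILON_280 = 4200.0  # triclosan molar absorptivity at 280 nm (L/mol/cm), from the text

def concentration_molar(absorbance, path_cm=1.0, epsilon=EPSILON_280):
    """Beer-Lambert law: A = epsilon * c * l  =>  c = A / (epsilon * l)."""
    return absorbance / (epsilon * path_cm)

# e.g. a hypothetical A280 of 0.042 in a 1 cm cuvette.
c_mol_per_l = concentration_molar(0.042)
```

This only holds in the linear range of the spectrophotometer and assumes triclosan is the sole species absorbing at 280 nm in the buffer.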
Here we introduce a series of thoroughly tested and well-standardized research protocols adapted for use in remote marine environments. The sampling protocols include the assessment of resources available to the microbial community (dissolved organic carbon, particulate organic matter, inorganic nutrients) and a comprehensive description of the viral and bacterial communities (via direct viral and microbial counts, enumeration of autofluorescent microbes, and construction of viral and microbial metagenomes). We use a combination of methods drawn from a dispersed field of scientific disciplines, comprising both well-established protocols and some of the most recently developed techniques. Metagenomic sequencing techniques for viral and bacterial community characterization, in particular, have been established only in recent years and are thus still subject to constant improvement, which has led to a variety of sampling and sample processing procedures currently in use. The set of methods presented here provides an up-to-date approach to collecting and processing environmental samples. The parameters addressed by these protocols yield the minimum information essential to characterize and understand the underlying mechanisms of viral and microbial community dynamics. The guide offers easy-to-follow instructions for conducting comprehensive surveys and discusses critical steps and potential caveats pertinent to each technique.
Environmental Sciences, Issue 93, dissolved organic carbon, particulate organic matter, nutrients, DAPI, SYBR, microbial metagenomics, viral metagenomics, marine environment

Physical, Chemical and Biological Characterization of Six Biochars Produced for the Remediation of Contaminated Sites
Institutions: Royal Military College of Canada, Queen's University.
The physical and chemical properties of biochar vary based on feedstock sources and production conditions, making it possible to engineer biochars with specific functions (e.g.
carbon sequestration, soil quality improvements, or contaminant sorption). In 2013, the International Biochar Initiative (IBI) made publicly available its Standardized Product Definition and Product Testing Guidelines (Version 1.1), which set standards for the physical and chemical characteristics of biochar. Six biochars made from three different feedstocks at two temperatures were analyzed for characteristics related to their use as a soil amendment. The protocol describes analyses of the feedstocks and biochars and includes: cation exchange capacity (CEC), specific surface area (SSA), organic carbon (OC) and moisture percentage, pH, particle size distribution, and proximate and ultimate analysis. Also described are the analyses of the feedstocks and biochars for contaminants, including polycyclic aromatic hydrocarbons (PAHs), polychlorinated biphenyls (PCBs), metals, and mercury, as well as nutrients (phosphorus, and nitrite, nitrate, and ammonium as nitrogen). The protocol also includes the biological testing procedures: earthworm avoidance and germination assays. Based on the quality assurance/quality control (QA/QC) results of blanks, duplicates, standards, and reference materials, all methods were determined to be adequate for use with biochar and feedstock materials. All biochars and feedstocks were well within the criteria set by the IBI, and there were few differences among biochars, except in the case of the biochar produced from construction waste materials. This biochar (referred to as Old biochar) was determined to have elevated levels of arsenic, chromium, copper, and lead, and failed the earthworm avoidance and germination assays. Based on these results, Old biochar would not be appropriate for use as a soil amendment for carbon sequestration, substrate quality improvements, or remediation.
Environmental Sciences, Issue 93, biochar, characterization, carbon sequestration, remediation, International Biochar Initiative (IBI), soil amendment

High-throughput Fluorometric Measurement of Potential Soil Extracellular Enzyme Activities
Institutions: Colorado State University, Oak Ridge National Laboratory, University of Colorado.
Microbes in soils and other environments produce extracellular enzymes to depolymerize and hydrolyze organic macromolecules so that they can be assimilated for energy and nutrients. Measuring soil microbial enzyme activity is crucial to understanding soil ecosystem functional dynamics. The general concept of the fluorescence enzyme assay is that synthetic C-, N-, or P-rich substrates bound with a fluorescent dye are added to soil samples. When intact, the labeled substrates do not fluoresce. Enzyme activity is measured as the increase in fluorescence as the fluorescent dyes are cleaved from their substrates, which allows them to fluoresce. Enzyme measurements can be expressed in units of molarity or activity. To perform this assay, soil slurries are prepared by combining soil with a pH buffer. The pH buffer (typically a 50 mM sodium acetate or 50 mM Tris buffer) is chosen so that its acid dissociation constant (pKa) best matches the soil sample pH. The soil slurries are inoculated with a nonlimiting amount of fluorescently labeled (i.e. C-, N-, or P-rich) substrate. Using soil slurries in the assay serves to minimize limitations on enzyme and substrate diffusion. Therefore, this assay controls for differences in substrate limitation, diffusion rates, and soil pH conditions, thus detecting potential enzyme activity rates as a function of the difference in enzyme concentrations (per sample). Fluorescence enzyme assays are typically more sensitive than spectrophotometric (i.e.
colorimetric) assays, but they can suffer from interference caused by impurities and from the instability of many fluorescent compounds when exposed to light, so caution is required when handling fluorescent substrates. Likewise, this method only assesses potential enzyme activities under laboratory conditions when substrates are not limiting. Caution should be used when interpreting data representing cross-site comparisons with differing temperatures or soil types, as in situ soil type and temperature can influence enzyme kinetics.
Environmental Sciences, Issue 81, Ecological and Environmental Phenomena, Environment, Biochemistry, Environmental Microbiology, Soil Microbiology, Ecology, Eukaryota, Archaea, Bacteria, Soil extracellular enzyme activities (EEAs), fluorometric enzyme assays, substrate degradation, 4-methylumbelliferone (MUB), 7-amino-4-methylcoumarin (MUC), enzyme temperature kinetics, soil

Laboratory-determined Phosphorus Flux from Lake Sediments as a Measure of Internal Phosphorus Loading
Institutions: Grand Valley State University.
Eutrophication is a water quality issue in lakes worldwide, and there is a critical need to identify and control nutrient sources. Internal phosphorus (P) loading from lake sediments can account for a substantial portion of the total P load in eutrophic, and some mesotrophic, lakes. Laboratory determination of P release rates from sediment cores is one approach for determining the role of internal P loading and guiding management decisions. Two principal alternatives to experimental determination of sediment P release exist for estimating internal load: in situ measurements of changes in hypolimnetic P over time and P mass balance. The experimental approach using laboratory-based sediment incubations to quantify internal P load is a direct method, making it a valuable tool for lake management and restoration. Laboratory incubations of sediment cores can help determine the relative importance of internal vs.
external P loads, as well as answer a variety of lake management and research questions. We illustrate the use of sediment core incubations to assess the effectiveness of an aluminum sulfate (alum) treatment for reducing sediment P release. Other research questions that can be investigated using this approach include the effects of sediment resuspension and bioturbation on P release. The approach also has limitations. Assumptions must be made with respect to: extrapolating results from sediment cores to the entire lake; deciding over what time periods to measure nutrient release; and addressing possible core tube artifacts. A comprehensive dissolved oxygen monitoring strategy to assess temporal and spatial redox status in the lake provides greater confidence in annual P loads estimated from sediment core incubations.
Environmental Sciences, Issue 85, Limnology, internal loading, eutrophication, nutrient flux, sediment coring, phosphorus, lakes

A Technique to Screen American Beech for Resistance to the Beech Scale Insect (Cryptococcus fagisuga Lind.)
Institutions: US Forest Service.
Beech bark disease (BBD) results in high levels of initial mortality, leaving behind survivor trees that are greatly weakened and deformed. The disease is initiated by the feeding activities of the invasive beech scale insect, Cryptococcus fagisuga, which creates entry points for infection by one of the Neonectria species of fungus. Without scale infestation, there is little opportunity for fungal infection. Using scale eggs to artificially infest healthy trees in heavily BBD-impacted stands demonstrated that these trees were resistant to the scale insect portion of the disease complex [1]. Here we present a protocol that we have developed, based on the artificial infestation technique of Houston [2], which can be used to screen for scale-resistant trees in the field and in smaller potted seedlings and grafts.
The identification of scale-resistant trees is an important component of managing BBD through tree improvement programs and silvicultural manipulation.
Environmental Sciences, Issue 87, Forestry, Insects, Disease Resistance, American beech, Fagus grandifolia, beech scale, Cryptococcus fagisuga, resistance, screen, bioassay

Determination of Microbial Extracellular Enzyme Activity in Waters, Soils, and Sediments using High Throughput Microplate Assays
Institutions: The University of Mississippi.
Much of the nutrient cycling and carbon processing in natural environments occurs through the activity of extracellular enzymes released by microorganisms. Thus, measurement of the activity of these extracellular enzymes can give insights into the rates of ecosystem-level processes, such as organic matter decomposition or nitrogen and phosphorus mineralization. Assays of extracellular enzyme activity in environmental samples typically involve exposing the samples to artificial colorimetric or fluorometric substrates and tracking the rate of substrate hydrolysis. Here we describe microplate-based methods for these procedures that allow the analysis of large numbers of samples within a short time frame. Samples are allowed to react with artificial substrates within 96-well microplates or deep-well microplate blocks, and enzyme activity is subsequently determined by absorption or fluorescence of the resulting end product using a typical microplate reader or fluorometer. Such high-throughput procedures not only facilitate comparisons between spatially separate sites or ecosystems, but also substantially reduce the cost of such assays by reducing the overall reagent volumes needed per sample.
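The slope-to-activity arithmetic behind fluorometric assays like these can be sketched in a few lines. This is an illustrative sketch only: the unit conventions are one common choice, and all numbers (standard-curve slope, plate readings, soil mass, incubation time) are hypothetical rather than values from any protocol. Real workflows also subtract substrate and soil background controls and correct for quenching.

```python
# Hypothetical sketch: convert microplate fluorescence readings into a
# potential extracellular enzyme activity. All numbers are illustrative.

def enzyme_activity(fluor_start, fluor_end, hours, slope_std_curve,
                    soil_dry_g):
    """Potential activity in nmol substrate cleaved per g dry soil per hour.

    slope_std_curve: fluorescence units per nmol of free fluorophore
    (e.g. 4-methylumbelliferone, MUB), taken from a standard curve run
    in the same soil slurry to account for quenching.
    """
    delta_fluor = fluor_end - fluor_start          # net fluorescence gain
    nmol_cleaved = delta_fluor / slope_std_curve   # convert to nmol product
    return nmol_cleaved / (soil_dry_g * hours)

# Illustrative reading from one well over a 2 h incubation:
activity = enzyme_activity(fluor_start=150.0, fluor_end=950.0, hours=2.0,
                           slope_std_curve=40.0, soil_dry_g=0.05)
print(round(activity, 1))  # 200.0 nmol g^-1 h^-1
```

In practice the same conversion is applied well by well across the plate, which is where the throughput advantage of the 96-well format comes from.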
Environmental Sciences, Issue 80, Environmental Monitoring, Ecological and Environmental Processes, Environmental Microbiology, Ecology, extracellular enzymes, freshwater microbiology, soil microbiology, microbial activity, enzyme activity

Measurement of Greenhouse Gas Flux from Agricultural Soils Using Static Chambers
Institutions: University of Wisconsin-Madison, USDA-ARS Dairy Forage Research Center, USDA-ARS Pasture Systems Watershed Management Research Unit.
Measurement of greenhouse gas (GHG) fluxes between the soil and the atmosphere, in both managed and unmanaged ecosystems, is critical to understanding the biogeochemical drivers of climate change and to the development and evaluation of GHG mitigation strategies based on modulation of landscape management practices. The static chamber-based method described here is based on trapping gases emitted from the soil surface within a chamber and collecting samples from the chamber headspace at regular intervals for analysis by gas chromatography. The change in gas concentration over time is used to calculate flux. This method can be utilized to measure landscape-based fluxes of carbon dioxide, nitrous oxide, and methane, and to estimate differences between treatments or explore system dynamics over seasons or years. Infrastructure requirements are modest, but a comprehensive experimental design is essential. This method is easily deployed in the field, conforms to established guidelines, and produces data suitable for large-scale GHG emissions studies.
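The "change in gas concentration over time" calculation that the static-chamber method relies on can be sketched as follows: fit a line to the headspace concentrations, then scale the slope by chamber geometry and an ideal-gas conversion. The chamber dimensions, sample readings, and conversion defaults below are illustrative assumptions, not values from the protocol.

```python
# Hedged sketch of the core static-chamber flux calculation.
# Chamber geometry and readings are invented for the example.

def linear_slope(times, concs):
    """Ordinary least-squares slope of concentration vs. time."""
    n = len(times)
    t_mean = sum(times) / n
    c_mean = sum(concs) / n
    num = sum((t - t_mean) * (c - c_mean) for t, c in zip(times, concs))
    den = sum((t - t_mean) ** 2 for t in times)
    return num / den

def chamber_flux(times_min, concs_ppm, volume_m3, area_m2,
                 molar_mass_g=44.01, temp_k=298.15, pressure_kpa=101.325):
    """Flux in mg gas m^-2 h^-1, assuming ideal-gas behavior.

    ppm (umol gas per mol air) is converted to a mass concentration via
    n/V = P/(RT); the defaults describe CO2 at about 25 degC.
    """
    R = 8.314  # J mol^-1 K^-1
    slope_ppm_per_h = linear_slope(times_min, concs_ppm) * 60.0
    mol_air_per_m3 = pressure_kpa * 1000.0 / (R * temp_k)
    mg_per_m3_per_h = slope_ppm_per_h * mol_air_per_m3 * molar_mass_g / 1000.0
    return mg_per_m3_per_h * volume_m3 / area_m2

# Four headspace samples taken at 0, 10, 20, 30 minutes:
flux = chamber_flux([0, 10, 20, 30], [400.0, 412.0, 424.0, 436.0],
                    volume_m3=0.015, area_m2=0.05)
```

Field protocols typically also screen each fit (e.g. discard deployments with nonlinear concentration changes) before accepting the slope, which is part of why the comprehensive experimental design mentioned above matters.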
When visiting Venice, it is hard not to notice the many lions adorning the doors and facades of the buildings, especially around Saint Mark’s Square. Let’s take a closer look at this symbol of Venice to discover some of the lesser-known aspects of the history of the Serenissima.

Saint Mark’s Lion
The Venetian lion normally has wings, very often holds a book below its paw, and is sometimes completed by a halo around its head. These three elements (wings, book, halo) reveal it as a symbol of Saint Mark the Evangelist, patron saint of the city. According to a tradition started in the 2nd century AD, each of the four Evangelists is represented by a winged creature: lion, bull, eagle, human. This set of four creatures was also used in relation to the divine presence in the Old Testament. The lion has also been associated with power, courage and strength since ancient times. What better symbol for the prestigious Venetian Republic?

“Pax tibi Marce”
So, now we know that the book the lion is holding stands for the gospel of Saint Mark. Have you tried to read the Latin sentence written in it? It says “Pax tibi Marce, evangelista meus”, which translates as “Peace to you Mark, my evangelist”. According to legend, while Saint Mark was visiting the Venetian lagoon in the 1st century AD, a storm put him in danger, but an angel appeared to him and reassured the saint with those words. Sometimes a different phrase is written in the book: for example, the paintings of lions displayed in some government offices contained credos specific to the work of the magistrates there. If you are strolling through the Rialto market, on the other hand, you will notice a banner with an angry lion holding a book with the inscription “Rialto no se toca” (“Don’t touch the Rialto market!”). It was placed there by the fishmongers in 2011 when the local municipality proposed moving the wholesale fish market farther from the city.
This would have made it too time-consuming and expensive for them to carry the fish to Rialto, and thus caused the death of the local market. Luckily, the lion and the protests were strong enough to make the authorities change their minds!

The Venetian Flag
The winged lion also appears in gold on a red background on the flag of Venice. The relics of Saint Mark arrived in the lagoon in the early 9th century, and the first representations of the lion as his symbol in Venice date to the 12th century. Yet it was not until the 1260s that a lion appeared on the Venetian flag. Why did it take so long? An interesting hypothesis is that Venetians decided to use the lion after the fall of the Latin Empire (1261), when Egypt became their main commercial partner and the striding lion was on the shield of Sultan Baybars. It was therefore a symbol of new alliances after the war with the Byzantine Empire.

It is definitely impossible to count how many lions there are in Venice: you can find them everywhere, from the tops of bell towers to doorknockers! Some of them, however, have become real icons of the city, like the one on top of the column next to the Doge’s Palace. This strange bronze sculpture, originally gilded, might date back to the 4th century BC. It arrived in Venice from Constantinople or another Middle Eastern region, and has stood on the column since the 12th century. So old, and yet so modern: its unique profile has become the symbol of the Venice Biennale and the shape of the Golden Lion Prize for the Venice Film Festival! Other lions have arrived in Venice from distant lands, carrying with them stories that remained hidden for centuries. If you get to the Arsenale, the former shipyard, have a look at the big statue of a lion on the left side of the gate. If you look carefully at the two sides of the lion, you can see some long inscriptions carved in the white stone: the words are in the Runic alphabet used by ancient Scandinavian peoples.
The statue of the lion was originally in Athens and was there when, in the 11th century, an army of Norwegian mercenaries was sent by the Byzantine emperor to put down an insurrection. The successful deeds of the soldiers were recorded on the surface of the lion. The statue was taken from Athens and placed in front of the gate of the Arsenale in 1687, following the Venetian conquest of the Peloponnesus. Are you curious to find out more about Venice and its lions? Let’s take a stroll together!
A “power of attorney” is a written document that authorizes someone (referred to as the agent) to make decisions or take actions on behalf of someone else (known as the principal). In Texas, there are several kinds of powers of attorney that will grant the agent the right to accomplish different things on the principal's behalf. This guide will present information about the various kinds, help you understand which may be best for your situation, and link to forms when available. A "general power of attorney" is a document that grants the agent very broad rights to act on behalf of the principal. A general power of attorney ends when the principal dies, becomes incapacitated, or revokes it. General powers of attorney are used to allow someone to act for you in a wide variety of matters. For example, general powers of attorney are often used in business dealings to allow an employee to enter into contracts, sell property, spend money, and take other actions on behalf of their client. You may wish to create a general power of attorney if you are still capable of managing your own affairs but would like to have someone else take care of them for you. Because general powers of attorney terminate when someone is incapacitated, they are not ideal for end-of-life planning or medical directives. Medical powers of attorney and durable powers of attorney (ones that last after or begin upon the incapacitation of the principal) are better alternatives for these situations. Limited, or special, powers of attorney grant someone else the right to perform very specific actions for you. These e-books contain information on powers of attorney. They can be viewed by those who have signed up for a free library account with the State Law Library. Only Texas residents are eligible to sign up.
Perhaps you are looking for an effective method to treat a spine condition. However, the mention of back surgery is not that appealing, is it? The good news is, with current technological advancements in the field of medicine, you can undergo a non-invasive spinal procedure to alleviate your pain. In a non-invasive spinal procedure, micro-sized instruments are used to access the affected area and are viewed in real-time X-ray images by the surgeon performing the procedure with an operating microscope. There are different options for this procedure available, and here is a list of non-invasive spinal procedures that our spinal specialists at SEPSC can recommend if you don’t want invasive surgery.

Non-Invasive Spinal Procedures
1. Kyphoplasty
The kyphoplasty procedure allows treatment of compression fractures, which typically result from the collapse of part or all of a vertebra due to conditions such as osteoporosis, injury, or cancer. In this non-invasive procedure, the surgeon applies bone cement to the fractured area, which imparts stability, ensures it does not collapse, and ultimately strengthens the vertebrae.
2. Non-invasive Spinal Fusion
During the spinal fusion procedure, the surgeon will remove spinal discs between vertebrae and use metal plates and screws or grafted bone to fuse the two adjacent vertebrae. It effectively treats several painful conditions such as scoliosis, traumatic fractures, persistent neck and back pain, spinal instability, and more. Since the bone grafts have to grow and fuse, this procedure has a long recovery period and is thus used as a last resort.
3. Artificial Disc Replacement
Instead of spinal fusion, the surgeon can use an artificial disc replacement procedure to treat severely damaged discs. It involves removing discs and replacing them with artificial ones to restore the vertebrae’s height and motion.
4. Spinal Stenosis Decompression
Spinal stenosis typically occurs when obstructions such as bone spurs lead to a narrowing of the spinal canal, causing pain, weakness, or numbness. In spinal decompression, your surgeon aims to expand the spinal column to eliminate any obstructions and release pressure on the nerves.
5. Discectomy
A herniated disc is usually the leading cause of compressed nerves. Discectomy and the spinal decompression described above go together to eliminate herniated discs. In a non-invasive discectomy, the surgeon removes the herniated disc, which presses on the spinal cord or a nerve root in the spinal column.
6. Spinal Canal Enlargement
In some cases, the disc bulges into the wall of the spinal canal instead of becoming herniated. Failure to treat the resulting compression may lead to a natural thickening of the vertebrae as the body tries to provide extra protection to the nerves. If you have any of these conditions, the surgeon can perform a bony hole enlargement at the affected nerve root region, which relieves pressure and pain.

Seek Assistance Today!
Non-invasive procedures are new and more effective ways to treat spine conditions while reducing risk and allowing quicker recovery. Contact Southeast Pain & Spine Care today to schedule a consultation with our experienced spinal specialists.
2 – 3 years
Children at this age need time to explore and experiment but still need the cuddles and attention they had as babies. In this class, they are introduced to the Practical Life exercises in the Montessori materials. We offer a varied, fun-filled, child-responsive daily activity routine. We aim to excite your child’s curiosity and to engage him or her in activities that inform and influence development and learning. The activities are kept short and changed frequently during the day to match the abilities of the children.
Our Montessori Curriculum includes:
Practical Life: Exercises based on tasks performed by adults every day. They allow the child to gain independence in everyday tasks.
Sensorial: Exercises to organise objects by size, shape, colour, touch, smell, sound, weight and temperature.
Maths: Exercises that help the child to gain an understanding of sequences, order, addition and subtraction.
Language: Materials promote the child’s language abilities by introducing new words through word cards and Sandpaper Letters.
Culture: Culture is taught to the child through the use of jigsaw maps, sandpaper globes, 3-part cards and continent folders. Children learn about geography, history and the natural sciences.
The Montessori Method of Education is complemented in our school by the following activities:
- Arts and Crafts
- Outdoor Play
- Ballet/Hip Hop
- Table Top Activities
- Imaginary Play
- Circle Time
- Stretch-n-Grow
A weekly diary recording your child’s activities is sent home every Friday. Your child’s art work is sent home at the end of the month. Our qualified staff encourage language development and good behaviour. We will also work alongside parents to aid in your child’s transition to toilet training.
Who knew delicious bananas, pineapples and coconuts could potentially be used to build stronger and lighter cars? A team of Brazilian scientists has figured out a way to use fruit fibers to build extremely strong plastics that are up to four times stronger than petroleum-based plastic. According to the team's leader, researcher Alcides Leão, the "nanocellulose fibers" possess properties that can supposedly rival Kevlar, the tough stuff found in bulletproof vests. Nanocellulose fibers aren't just lighter and stronger; they're also "more resistant to heat, gasoline and water." Can you see a car that doesn't instantly catch on fire and explode in a car crash? We can. In addition to being great for building dashboards, bumpers and body panels, the fiber is also entirely biodegradable, meaning the material won't sit in a landfill for hundreds of years, but will decompose. While the goal of the fiber is to replace regular plastics, Leão hopes it can eventually be an adequate substitute for steel and aluminum car parts as well. For the time being, Leão says that nanocellulose fibers are way too expensive to mass-produce, but there's hope. He says that if the auto industry stands by its potential for creating greener cars, it's entirely possible that the price could come down in the future. We're already seeing a rise in plastics made from corn, and with fruits, the benefits are clear: it's sweeter for you, me, the cars — and Earth.
Growth or stagnation? The role of public education

Abstract
This paper presents a political-economic theory of growth and human capital accumulation. Age heterogeneity is put forth as the primary source of disagreement between individuals over various levels of public education expenditures. An overlapping generations model with two-period-lived agents is constructed to capture the heterogeneity. With a growing population, the equilibrium quantity of public education reflects the preferences of youth and is therefore forward-looking. As such, policy preferences are a function of intertemporal elasticities, utility discounting and population growth. Despite forward-looking behavior, it is shown that sufficiently rapid population growth can trigger stagnation (zero growth) in the form of a corner solution to the public policy problem. The model therefore complements existing models that associate slow per capita output growth with high population growth.

Bibliographic Info
Article provided by Elsevier in its journal Journal of Development Economics. Volume (Year): 64 (2001); Issue (Month): 2 (April).
Other versions of this item:
- Kenneth Beauchemin, 1999. "Growth or Stagnation? The Role of Public Education," Discussion Papers 99-06, University at Albany, SUNY, Department of Economics.
- O40 - Economic Development, Technological Change, and Growth - Economic Growth and Aggregate Productivity - General
- E62 - Macroeconomics and Monetary Economics - Macroeconomic Policy, Macroeconomic Aspects of Public Finance, and General Outlook - Fiscal Policy
Understanding the Gastrointestinal Tract

Understanding Your Body’s Digestive System
To better understand the treatment options available, it is best to know how your body digests, processes, and stores food.
1. Food enters the esophagus.
2. Once through the esophagus, food travels to the stomach for processing. Here it is reduced to small particles and liquefied by the stomach’s acids.
3. Slowly, in small amounts, the food travels to the small intestine, where it is digested further and absorbed into the body.
4. Vitamins and minerals are absorbed.
5. Waste products and unused remains are passed into the colon and eliminated.

Digestion and Absorption
Calories are absorbed and used for the body’s energy. If the number of calories absorbed exceeds the number of calories used, then the body converts the remaining calories to fat for storage. Individuals who are obese often have lower calorie needs compared to those at an ideal body weight and are prone to weight gain as excess calories go unused.
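The calorie-balance idea described above amounts to simple arithmetic, which the sketch below illustrates. It is illustrative only: the figure of roughly 7,700 kcal per kilogram of body fat is a common rule-of-thumb approximation, not a clinical constant, and real weight change depends on many other factors.

```python
# Rough illustration of the energy-balance idea: a sustained calorie
# surplus is stored as fat. The 7700 kcal/kg figure is a common
# rule-of-thumb approximation, not a clinical constant.

KCAL_PER_KG_FAT = 7700.0  # approximate energy content of body fat

def weekly_fat_change_kg(daily_intake_kcal, daily_expenditure_kcal):
    """Estimated fat gained (+) or lost (-) over one week."""
    daily_surplus = daily_intake_kcal - daily_expenditure_kcal
    return 7 * daily_surplus / KCAL_PER_KG_FAT

# A sustained 500 kcal/day surplus:
print(round(weekly_fat_change_kg(2500, 2000), 2))  # 0.45 (kg per week)
```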
I have considered the data from Treviso airport (N.E. Italy) to sum up the monthly analysis (Figure 1) for the N.E. of Italy. The mean minimum and maximum temperatures (11.84/21.17 °C) have been considerably above the 1971-2000 averages (8.8/18.9 °C). This is due to the continuously mild conditions, especially between the first and second decades, with no cold spells during the whole month (Figure 1) and numerous days with maximum temperatures above 20 °C and minimum temperatures above 10 °C. In fact, the number of days with maximum temperatures above 20 °C is 22, with 15 consecutive days (in the first two decades) when the maximum values never fell below 20 °C. Moreover, during this period, for 3 consecutive days the maximum values reached 24 °C, almost 6 °C above the climatological average, and, during the whole month, this value was reached 6 times. During the long period of very mild maximum temperatures, the minimum values were also well above the average, with 18 consecutive days observing minimum temperatures above 10 °C and 5 consecutive days above 13 °C (almost 6 °C above the climatological average). Note that, even if the maximum values decreased during the intense rainfall event that occurred in the last week of October, the minimum values stayed high due to the strong southerly winds blowing from the Adriatic Sea, which maintained mild conditions, and due to the continuous overcast conditions, which limited the radiative cooling at the surface during the night. The observed rainfall (135 mm), instead, has been above the average (92 mm). This is due to the intense event that occurred in Italy during the last week of October, which caused floods in several areas. Thus, though the total observed rainfall is above the climatological average, it was not evenly distributed during the month, with 20 consecutive days (between the 6th and 26th) with no rain (anticyclonic conditions), and only 9 days with observed rainfall, 6 of them consecutive during the last week of the month.
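The anomalies quoted above follow directly from the reported figures; a quick check, using only the numbers given in the text:

```python
# Anomaly check using only the figures quoted in the analysis:
# observed October means vs. the 1971-2000 climatology for Treviso.

obs = {"t_min": 11.84, "t_max": 21.17}   # observed monthly means, degC
clim = {"t_min": 8.8, "t_max": 18.9}     # 1971-2000 climatology, degC

anomalies = {k: round(obs[k] - clim[k], 2) for k in obs}
print(anomalies)  # {'t_min': 3.04, 't_max': 2.27}

rain_anomaly_mm = 135 - 92  # observed minus climatological rainfall
print(rain_anomaly_mm)      # 43
```

So the minimum temperatures were roughly 3 °C above climatology, the maxima a little over 2 °C above, and the month finished with a 43 mm rainfall surplus despite the 20-day dry spell.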
A database audit is a database security control involving several monitoring aspects. It allows administrators to control access and to know who is using the database and what those users are doing with it. Auditing is done to prevent data theft and to keep users from corrupting the database. Some of the monitoring aspects involved in a database audit include identifying users, logging actions performed on the database, and checking database changes. A database audit is rarely performed by a person; it is more often handled by a program. A variety of users access databases associated with businesses or large websites on a daily basis. These users are able to see the data, perform high- or low-level changes to the information based on their access level, and store the data in other programs. Without any form of protection, the risk of data theft is very high, because no user could be implicated if any information were stolen. When a database audit program is installed, it creates a trail that watches all the users. One basic form of protection is that the audit identifies every user and watches what each user does. Low-level functions normally are not monitored, both because they do not present a threat and because they are performed so regularly that the auditing program would be overwhelmed by the amount of data it had to monitor. Along with knowing what the user is doing, the auditing program will log actions performed on the database. For example, whenever a user performs a large database change, the auditing program will record that the user made the change. The database audit may be set to activate whenever a high-level action is performed, so there is no chance the action is missed. Unless the database is especially small, with only a few users accessing it, such audits are rarely performed by a person. This is because a person cannot check all of the changes or identify all the users without a high potential for inaccuracy. A program also ensures that only potentially threatening or damaging changes are logged. While theft is the main reason for performing a database audit, it is not the only one. When the database is changed, an incorrectly coded section can corrupt all the database information. With high-level actions such as this logged, the auditor can assign responsibility to the user who performed the change, and appropriate actions can be taken.
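The behavior described above — identifying the user, timestamping each change, and logging only high-level actions so routine operations do not overwhelm the log — can be sketched as a small audit-trail class. This is an illustrative sketch only, not the audit facility of any particular DBMS; the action names and the set of "high-level" actions are assumptions:

```python
import datetime

# Assumed set of actions worth auditing; low-level reads are excluded on purpose.
HIGH_LEVEL_ACTIONS = {"DROP_TABLE", "ALTER_SCHEMA", "BULK_UPDATE", "GRANT"}

class AuditTrail:
    """Records who did what, and when, but only for high-level actions."""

    def __init__(self):
        self.log = []

    def record(self, user, action, detail=""):
        # Low-level actions (e.g. SELECT) are deliberately not logged,
        # mirroring the filtering described in the article.
        if action not in HIGH_LEVEL_ACTIONS:
            return False
        self.log.append({
            "user": user,
            "action": action,
            "detail": detail,
            "at": datetime.datetime.utcnow().isoformat(),
        })
        return True

audit = AuditTrail()
audit.record("alice", "SELECT", "customers")        # ignored: low-level
audit.record("bob", "DROP_TABLE", "orders_backup")  # logged with user and timestamp
print(len(audit.log))         # → 1
print(audit.log[0]["user"])   # → bob
```

In a real deployment this filtering and logging would typically live inside the database engine itself (via triggers or a built-in audit extension) rather than in application code, so that no change can bypass the trail.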
William F. Buckley Jr.'s death yesterday marks the passing of a champion of liberty. By founding his magazine "National Review," Buckley provided the country's first widely read platform for articles promoting free-market economic ideas, love of freedom and opposition to big-government solutions. Buckley, along with Michigan author Russell Kirk, rose to prominence in the hazy moment of history following Lionel Trilling's 1950 pronouncement: "In the United States at this time, liberalism is not only the dominant but even the sole intellectual tradition. For it is the plain fact that nowadays there are no conservative or reactionary ideas in general circulation…. [The] conservative impulse and the reactionary impulse do not, with some isolated and some ecclesiastical exceptions, express themselves in ideas but only in action or in irritable mental gestures which seek to resemble ideas." In the wake of Trilling's statement, government began to grow exponentially. New programs funded by taxpayer dollars were added on what seemed a daily basis. By the 1960s, the Johnson administration initiated the Great Society programs that put the big-government New Deal programs of the Great Depression to shame. Those programs continued to expand under subsequent administrations, regardless of party affiliation. Buckley was among the first to plead the case for smaller government. Armed with the philosophical ideas captured in Kirk's "The Conservative Mind," Buckley defined a conservative as one who "stands athwart history, yelling Stop, at a time when no one is inclined to do so, or to have much patience with those who so urge it." In the first issue of "National Review," published in 1955, Buckley wrote: "'National Review' is out of place, in the sense that the United Nations and the League of Women Voters and the New York Times and Henry Steele Commager are in place. It is out of place because, in its maturity, literate America rejected conservatism in favor of radical social experimentation. Instead of covetously consolidating its premises, the United States seems tormented by its tradition of fixed postulates having to do with the meaning of existence, with the relationship of the state to the individual, of the individual to his neighbor, so clearly enunciated in the enabling documents of our Republic." Henceforth, Buckley's magazine became the go-to publication for readers parched for intellectual nourishment in the anabasis of failed ideas marching toward oblivion. In its pages, one could read Milton Friedman, Thomas Sowell and, for its first 25 years, Kirk's "From the Academy" column. The magazine also managed to embrace culture, featuring as it did for many years the sometimes truculent film criticism of John Simon. Miraculously, many of these views took hold. Buckley himself became a cultural institution, appearing on his "Firing Line" television program and even making appearances on "Laugh-In." "He was the person who could make free-market ideas understandable," said Annette Kirk in a telephone interview yesterday afternoon from the Russell Kirk Center for Cultural Renewal in Mecosta, Mich. Mrs. Kirk, a member of the Mackinac Center's Board of Scholars as was her late husband, also said Buckley was able to explain why free markets are important and why readers should care. "He made the case for published writers who could make the case in his magazine," she said. "For Russell and Buckley, economic freedom is the hallmark of the conservative movement." Mrs. Kirk added that Buckley "also shared with Russell the idea that a free market should always be bound up within a moral context. He was a tireless champion of ordered freedom."
To the last, Buckley was an independent thinker who would take gutsy stands or change his mind once he became convinced such stands were wrong, as he did with his eventual disavowal of the rationales given for the Iraq War. Whether one agreed with either his initial or eventual position, one can at least admire the principled manner by which he expressed himself in both instances. Buckley died working at his desk yesterday morning, more than likely composing another highly literate piece that would send millions scrambling for their Webster’s. Mrs. Kirk remembers him as "not only personally kind, but as an enormous influence. All lovers of freedom and liberty should grieve his passing." Bruce Edward Walker is communications manager for the Property Rights Network at the Mackinac Center for Public Policy, a research and educational institute headquartered in Midland, Mich. Permission to reprint in whole or in part is hereby granted, provided that the author and the Center are properly cited.
Demography is an important component of socioeconomic development, because population trends shape development paths. Population size and composition are related to a wide variety of factors in social organization and economic vitality, such as employment, taxation, consumption, housing, environmental pressure, transportation, business location, voting patterns, education, law enforcement, and healthcare. Population change can be a response to, as well as an agent for, changing social organization and economic structure (Brown, 2002). It is therefore very important to understand these trends, including their interrelationships, for successful development planning and policy implementation, especially since, due to increasing global integration, rural areas and economies are no longer isolated from mainstream economic, political and societal processes (Summers, 1993). Sociologists and demographers have long been aware of prolonged population decline in the Great Plains (Rathge and Highman, 1998; Johnson and Rathge, 2006), caused partly by the "great agricultural transition" discussed by Lobao and Meyer (2001). Population decline, however, is a relative term, and it covers different dynamics for different places across the Plains. The aggregate population of the Great Plains is not shrinking yet, but its growth rate is well below the national level. Trends at the county level differ greatly between metropolitan and nonmetropolitan places. Many rural areas experience actual population decrease, and their age structure and migration patterns suggest prolonged decline for the future as well. Localized, positive net migration in the Great Plains is usually associated with either suburbanization or the availability of natural amenities (Cromartie, 1998). Present-day Kansas has three main demographic challenges: increasing population concentration in urban areas, increasing population diversity, and the aging population.
None of these three are new phenomena, but rather slowly unfolding macro-level trends corresponding to general demographic dynamics in the United States. This study investigates these trends in detail and discusses the related development implications and possible future paths for Kansas. In the 20th century, the population of Kansas increased from 1.5 to about 2.7 million people, growing approximately 8% per decade. In the decade before the last decennial census, Kansas grew at 8.5%, compared to the national average of 13.2%. Historically, when comparing two decennial censuses, Kansas has experienced 5 to 10% less growth than the nation (Figure 1), and 67 of the 105 counties reached their peak total population by 1930. In the 1990s, only 9 counties experienced growth equal to or greater than the national average growth rate. Even this slow growth occurs unevenly in space. The population of Kansas is much more concentrated today than at the beginning of the 20th century (Figure 2). On average, most rural counties account for less than 0.5% of the state's population. But neither the slow population growth nor the population concentration should be surprising. The Great Plains had very similar population dynamics over the 20th century (Johnson and Rathge, 2006). The Depression and the Dust Bowl caused many people to leave rural areas, while the post-war mechanization of agriculture, farm consolidation, and the industrial boom were also responsible for population concentration (Table 1). Farm consolidation in Kansas was a process inherently linked to urban concentration, embedded in the general transformation of rural America. In fifty years, the number of farms declined by more than 50%, while their average size doubled. The farm population of the state declined from almost half a million people to below a hundred thousand. While in 1950 about one in every four persons in Kansas lived on a farm, now this proportion is less than one in every thirty (Table 1).
Despite the image of Kansas as part of the nation's breadbasket, urbanization has been one of the most profound changes of the 20th century. The proportion of the urban population of Kansas reached 71% in 2000, up from 52% in 1950. This population concentration occurred in and around the counties that host the three large urban centers: Kansas City, Topeka, and Wichita. Applying the 2000 metropolitan status definition, the nine metropolitan counties gained more than 130,000 people on average over the 20th century. At the same time, the average population growth in the 96 non-metropolitan counties was only 152 people. The average county population increased from 15,000 to 25,000 over the 20th century, but this increase was exclusively the population boom of the existing or would-be metropolitan areas. The average population of a rural Kansas county remained around 12,000 people over the course of the 20th century. There are six counties in Kansas that lost population in each decade since 1900, and 37 that had a negative net migration rate in each decade since 1950. While metropolitan counties rapidly gained population, and most rural areas faced slow population decline, some non-metropolitan counties were able to turn this declining trend around. Of the nine counties that experienced growth between 1990 and 2000 equal to or greater than the national average, three are not metropolitan hinterlands, but destinations for immigrant laborers who come to work in the food processing industry in Southwest Kansas. These workers contributed to increasing population diversity in the state. Population diversity refers to the ethnic and racial composition of the population as well as the proportion of foreign-born people. Kansas, like much of the rural Midwest, has been ethnically homogeneous and predominantly white for most of the 20th century. Until the 1960s, more than 95% of the state's population was white.
However, this proportion declined to 86% by 2000, with most of the change taking place in the 1990s. Similarly, the foreign-born population of Kansas also increased, from 1% in 1970 to 5% in 2000. There are two causes for increasing population diversity in Kansas: one general and one specific. The general cause is that Kansas experiences the same trend as the United States as a whole. Increasing immigration to the U.S. after the 1970s caused the population to become more diverse. At the same time, increasing social tolerance for racial diversity resulted in a geographically less concentrated minority population. This process was supported by increasing urban concentration, since urban areas are traditionally more diverse than rural areas. Hence, population concentration in Kansas was one driving force for increasing population diversity. In addition to this general cause, Kansas experienced a particular phenomenon which contributed to increasing population diversity. The most remarkable contemporary migration trend in non-metropolitan Kansas was the influx of workers into the food processing industry. As a result, three southwestern Kansas counties that were primary meat processing areas experienced changing population trends: Finney County (Garden City), Ford County (Dodge City) and Seward County (Liberal). The food processing industry changed the demographic trends for a number of communities, both in terms of population size and composition. Corporate recruitment strategies have a large impact in developing these new migration streams (Krissman, 2000). Once migration networks develop, they provide linkages between origin and destination; they not only help to overcome obstacles by diminishing risks, but also increase the volume of migration over time by providing positive feedback for further migrants (Massey, 1990).
In some cases, firms rely more on such informal networks than on traditional recruitment strategies, since they can obtain a steady supply of unorganized, low-skilled, low-wage workers (Kandel and Parrado, 2006). The net migration rate of the foreign-born population in Kansas between 1995 and 2000 was 47.6, compared to a rate of –5.2 for the native population. About 35 thousand (approximately one-fourth) of the foreign-born population living in Kansas in 2000 were abroad in 1995. More than half of the 114 percent increase in the foreign-born population between 1990 and 2000 was a result of newcomers arriving in the U.S. in the 1990s. In other words, the growth of the foreign-born population was not a result of the redistribution of long-term foreign-born residents, but the emergence of a new migration flow attributed to the presence of the food processing industry concentrated in Southwest Kansas (Figure 3). Many of the workers are from Latin America, making Hispanics the largest ethnic group in Garden City, Dodge City and Liberal, all meat industry boomtowns (Figure 4). This corresponds to a larger structural redistribution trend of the Hispanic population, which has two basic characteristics. First, there is an unprecedented Hispanic population boom outside urban areas, and second, there is a regional shift in which Hispanics no longer live only in the southwestern states of Arizona, New Mexico, Colorado, Texas, and California (Kandel and Parrado, 2006). While increasing diversity helped to stabilize the population decline, many aspects of community development remained a challenge for rural places with insufficient resources to accommodate the needs of their new populations (Broadway and Stull, 2006). Healthcare, education, and housing were the most urgent issues to address, given the linguistic isolation of the new immigrants.
Moreover, the high turnover in the meat processing industry created mixed results in terms of economic multiplier effects, due to the presence of a transient population that is less integrated into the local community. Migration is age-selective, drawing mostly people in their 20s and 30s, so the new immigrants helped slow the aging trend in Southwest Kansas. Elsewhere in the state, especially in rural areas, population aging is one of the most significant challenges for community development. The demographic dynamics behind the aging population reflect a complex web of societal processes, albeit with relatively simple demographic root causes. First, declining mortality resulted in high life expectancy at birth, increasing the number of people who survive to old age. Second, declining fertility changed the overall age composition: with fewer children born, the younger population cannot balance out the increase of the older population. The third factor is migration, and this made most of the difference for Kansas. Until the late 1980s, Kansas was characterized by the out-migration of the younger generation, who left the state for job opportunities elsewhere. At the local level there are various dynamics in age composition. Urban places resemble the average U.S. age distribution, while places without significant labor attraction are slowly slipping into a vicious cycle of disappearing businesses, diminishing capacity to retain younger generations, and a shrinking population dominated by elderly cohorts. The national context of this trend is the aging of the Baby Boomer generation. Those who were born during the postwar fertility boom are close to retirement, and this transition alone will increase the number and proportion of the elderly population nationwide. In this context, population decline in the Midwest, perpetuated by age-selective out-migration from the region, will pose a significant economic development challenge for many rural communities.
Historically, Kansas mirrored the aging trends of the United States (Kulcsár and Bolender, 2006), although the percentage of the population 65 and older has always been higher in our state. During the 20th century, the 15-or-younger population group declined from 35 to 22 percent (Figure 5). Most of this decline occurred before 1950. Shortly after the Second World War, the period of high birth rates known as the Baby Boom resulted in a short period of proportionate increase of younger people, helping to create a population rise. At the same time, the proportion of those 65 and above increased from 4 to 13 percent. Interestingly, this process was not affected by the Baby Boom. One could expect a temporary decline in the proportion of those 65 and above when the proportion of those 15 and younger increases. Since this did not occur in Kansas, we can conclude that the proportionate decline was concentrated in the working-age population (between 15 and 65), especially in the 1960s, when Kansas lost more than 130,000 people (about six percent of its population) to out-migration. Contemporary demographic dynamics are good predictors of future trends. According to Census Bureau projections, the population of Kansas will increase by approximately 252,000 people by 2030. This population increase, however, is very unevenly distributed across age groups. Most of the increase (237,000 people) will occur in the 65+ age category. In other words, of every ten people Kansas gains in the next 25 years, nine will be 65 or above. This projection does not assume retirement migration streams to Kansas; these people are already here. They are the active Baby Boomers who will retire by 2030. This will change the age composition of the state and, due to its uneven geographic distribution, will mean significant problems for many communities. What are some examples of social change and community development challenges regarding population aging?
One of the most important challenges communities face is institutional care and the provision of related community services. These services have functions that exceed healthcare needs and maintain a social network for older people (Leutz et al., 1993). The integration of the elderly into community life is vital for long-term community development. Also, while Social Security provides a basic income for older people, it makes a significant difference in the status of the elderly whether they can accumulate personal savings or have the opportunity for part-time work. Urban and suburban communities have a better chance of providing these opportunities, while rural communities are at a disadvantage. The issue of the family network is closely related to the living arrangements of the elderly. While the basic preference is to live in an independent household as long as one can, such independence is strongly contingent on supportive family and community networks, as well as transportation options, physically accessible housing, and local social services. Research indicates that with regard to housing and transportation especially, older people in rural areas are at a traditional disadvantage (Coward, 1988). Population aging in Kansas, as in other Midwest states, is ahead of the United States as a whole. This aging is very different from what one sees in popular retirement destinations such as Florida or Arizona. Aging in Kansas is, first of all, aging in place. The Baby Boom cohort, which had a mitigating impact on aging in the mid-20th century, will soon have an accelerating impact on aging. In Kansas, this means that community challenges with respect to aging will intensify, with a spatially less mobile and socially and economically more disadvantaged elderly population. Demographic trends in Kansas include increasing population concentration, slow population growth, increasing population diversity, and aging in place.
These trends are similar to what is experienced across the Midwest. The increasing global integration of rural America will result in gradual demographic convergence, in which general trends become more and more applicable to Kansas as well. This demographic convergence, however, occurs in the context of spatial heterogeneity, due to varying levels of community capacity to address traditional and new challenges. Spatial inequalities in Kansas will probably increase in the future, and this will result in increasing socioeconomic inequalities as well. Immigration, for instance, will have a substantive impact on general social change in the state, but it will be concentrated in Southwest Kansas and in the large metropolitan areas. Population growth will occur mostly in metropolitan places and their outlying areas, which will accelerate the population aging trend in most rural counties. Aging at the county level has a strong negative impact on income, and there is a certain path dependency, since the aging situation in 1950 is a relatively good predictor of income in 1999. Furthermore, the process of population aging is very difficult to change. The fact that it can be a persistent problem in certain counties for 50 years indicates that in many cases, local communities are ill-prepared to address development challenges that arise from population aging. Aging itself is not the problem; the difficulties result from insufficient community capacity to address new challenges. The population concentration in Kansas has important implications for policy-making and representation in the state legislature. Since population dynamics in Kansas are driven by urban population processes, rural places are disadvantaged because urban population dynamics can mask rural problems. This means that sparsely populated rural areas might have difficulty receiving statewide attention.
We also have to ask whether policy makers have sufficient information to assess these trends at both the state and local levels. Detailed knowledge about past demographic trends and contemporary dynamics can help Kansas communities prepare for today's challenges. And in the midst of an increasing diversity of interests and agendas, state policy should enhance local capacity to help communities make better decisions. 1. This study was presented at the 2006 Kansas Economic Policy Conference at the University of Kansas. Broadway, M. and D. Stull (2006). "Meat Processing and Garden City, KS: Boom and Bust," Journal of Rural Studies 22, pp. 55-66. Brown, David L. (2002). "Migration and Community: Social Networks in a Multilevel World," Rural Sociology 67, pp. 1-23. Coward, R. T. (1988). "Aging in the rural United States," in North American Elders: United States and Canadian Perspectives, by E. Rathborne-McCuan and B. Havens (eds.). New York: Greenwood. Cromartie, John (1998). "Net Migration in the Great Plains Increasingly Linked to Natural Amenities and Suburbanization," Rural Development Perspectives 13:1. Johnson, Kenneth and Richard Rathge (2006). "Agricultural dependence and changing population in the Great Plains," in Population Change and Rural Society, by William Kandel and David L. Brown (eds.). Dordrecht: Springer. Kandel, William and Emilio Parrado (2006). "Rural Hispanic Population Growth: Public Policy Impacts in Nonmetro Counties," in Population Change and Rural Society, by William Kandel and David L. Brown (eds.). Dordrecht: Springer. Krissman, F. (2000). "Immigrant labor recruitment: U.S. agribusiness and undocumented migration from Mexico," in Immigration Research for a New Century, by N. Foner, R. Rumbaut, and S. Gold (eds.). New York: Russell Sage. Kulcsár, László J. and Benjamin C. Bolender (2006). "Home on the Range: Aging in Place in Rural Kansas," Online Journal of Rural Research and Policy 2006.3.
http://www.ojrrp.org/issues/2006/03/index.html Leutz, W., R. Abrahams and J. Capitman (1993). "Administration of eligibility for community long-term care," The Gerontologist 33, pp. 92-104. Lobao, Linda and K. Meyer (2001). "The Great Agricultural Transition: Crisis, Change and Social Consequences of Twentieth Century US Farming," Annual Review of Sociology 27, pp. 103-24. Massey, Douglas (1990). "Social Structure, Household Strategies and the Cumulative Causation of Migration," Population Index 56, pp. 3-26. Rathge, Richard and Paula Highman (1998). "Population Change in the Great Plains: A History of Prolonged Decline," Rural Development Perspectives 13, pp. 19-26. Summers, G. F. (1993). "Rural Development Policy Options," in Economic Adaptation: Alternatives for Nonmetropolitan Areas, by D. L. Barkley (ed.). Boulder: Westview Press. László J. Kulcsár is Assistant Professor of Sociology in the Department of Sociology, Anthropology, and Social Work and Director of the Kansas Population Center, Kansas State University.
An Early History of Natural Wine, 1639-1906. I was reading David White's Terroirist website on July 10, 2013 when I came across the latest news about the natural wine movement in the form of posts by Lettie Teague, The Actual Facts Behind the Rise of Natural Wine, and Alice Feiring, Natural wine, believe and desire. I have stayed on the sidelines of this subject, though I did once express my frustrations with awful and unstable bottles of un-sulphured natural wine. For the record, I have also enjoyed bottles of un-sulphured natural wine. In such books as Jamie Goode's Authentic Wine, the modern natural wine movement is traced back to Jules Chauvet in 1970. During my research for the Murder and Thieves posts I came across early references to natural wine. Curious about the early use of this term, I decided to conduct broad research. For this post I have relied on Google eBooks and a few other digital archives. I have focused on the period marked by the first English search result for natural wine, dated 1639, to 1906, when the House of Representatives passed the Pure Wine Bill. Part 1 – "naturall wine" The First Epistle of John speaks to the sacramental wine transubstantiating into the blood of Christ. In early writings I find references to the terms natural, bread, wine, and the blood of Christ. Martin Luther wrote in 1520, "it is an article of faith that in the natural bread and wine the true natural body and blood of Christ are present". The Saxon Visitation Articles of 1584 describe the "true and natural body…and natural blood…of Christ" and state that the body and blood of Christ are received "by the mouth, with the bread and wine; yet in an inscrutable and supernatural manner." John Knox writes "of Wine into his natural Blood". The early use of the term natural wine dates back to at least 1639.
Henry Ainsworth, writing in Annotations Upon the Five Books of Moses, the Book of the Psalmes and the Song of Songs, states: "For by the Beloved, usually in this Song is meant Christ: by going to righteousnesses (or according to righteousnesses) that is, going aright, straightly or directly, is signified the nature of pure wine, manifesting the goodnesse by the moving and springing in the cup, where by it is discerned to be the right and naturall wine, and is pleasing to them that drink it." In writing about communion, Richard Mason comments in 1670 that one may receive just the "Species of Bread" or the "Species of Wine" and still receive both the "Body and Blood of the Saviour". He notes that the Greeks and most of the Orientals give the sick "only the consecrated Host, in natural wine", for the sick receive the Species of Bread on Maundy Thursday. This association between natural wine and the blood of Christ carries forward. In 1813, Paul Holbach differentiates between the miraculous wine that Christ transmuted from water and the natural wine which had previously just been drunk, the former being even better. Both pure wine and natural wine were used in the same sentence by Henry Ainsworth. A very early reference to pure wine may be found in the Calendar of the plea and memoranda rolls of the city of London. Dated 8 November 1327, it notes: "The King is given to understand that vintners and their taverners, selling wine by retail in the City and suburbs, mix weak and corrupt wine with other wine and sell the mixture at the same price as good and pure wine, not allowing their customers to see whether the wine is drawn in measures from casks or otherwise, to the great scandal of the City and in corruption of the bodily health of the purchasers." Thus a mixture of wines was not a pure wine. Later references to pure wine allude to its strength.
"Which who so drinkes, although his draught be small, Stumbles as if pure Wine had made him fall." In a "Cure of a moist distemper" published in 1624, both the roasted flesh of middle aged beasts and "pure wine, that is mightte to drinke" are prescribed, though administered seldom. Shortly after these publications a refinement of the term pure wine starts to appear. "They may drinke water alone, but not wine mingled therewith, unlesse they have a dispensation; that which is pure wine they call wine of the Law; this perhaps was one among other reasons, why they were of old, mistaken to have worshipped Bacchus". Pure wine was also neither sophisticated nor imbased. Sophisticated and adulterated are often used interchangeably, but a sophisticated wine is one mixed only with other wines, while an adulterated wine may be mixed with other ingredients. [cite] The terms were used with distinction, as in the 1688 Act for Prohibiting all Trade and Commerce with France, where it was illegal to sell "corrupted sophisticated or adulterated" wine. This distinction between pure wine and wine mixed with water returns in an English and Dutch dictionary which defines "Pure Wine, or Wine unmixed with water." There was even a test to determine if wine had been mixed with water: if an egg sank in new wine, then the wine contained water.

Part 2 – "they squeeze Bordeaux out of the sloe and draw Champagne from an apple"

Bishop Gilbert Burnet, in writing about his travels through Italy, provides an early definition of natural wine. In describing "Aromatick-wine" he notes "its strength being equal to a weak Brandy, disposes one to believe that it cannot be a natural Wine, and yet it is the pure juice of the Grape without any mixture." Upon tasting a 40 year old sample, Madam Salis assured him that "there was not one grain of Spice in it, nor of any other mixture whatsoever." This wine was made from extremely ripe grapes which were then stored in the garrets for one to three months.
Then only the sound berries were picked, pressed, and stored in hogsheads. Not all natural wine was good for decades. In 1698, Guy Miege discusses how wine is best in the spring, when it is natural and cheaper, and that despite the propensity of English vintners to mix their wines, "one may have plenty of good natural Wine." Thus at the end of the 17th century, pure wine referred to the absence of any subsequent mixture of water with wine, while natural wine referred both to the manufacture of wine solely from grapes and to the prohibition on subsequent mixture. In Genoa, Italy, extensive steps were taken to ensure a supply of natural wine. In 1701, Ellis Veryard comments that a two year provision of wine was kept in governmental cellars throughout the city. These cellars were known as Fondequa and were managed by an intendant. All innkeepers and private people could purchase wine from the cellars for retail. These wines were never adulterated, for the punishment was transportation to the galleys. The citizens of Genoa boasted "that pure and natural Wine is only drank in Genua." In the treating of "hysterick fits," natural wine could be mixed with water and drops of Sal volatile oleosum and Spirit of Lavender. For those not used to drinking wine, the presumably natural wine, drunk alone, could cure them. Some natural wine might have been strong, as Germans learning English could discover. In one exchange, "This wine is mighty strong, we should put some water in it. It is not too strong: It is a natural wine. But sometimes I put wine into my beer. That's cunningly done: wine does not spoil beer, but water spoils wine." The English author S.J. takes an interesting approach to the use of natural wine: a reference to what is commonly produced in a region.
Thus the "natural wine of Champaign" is "Oiel de Perdix" and "Again, the natural Wine of Burgundy is Red." Though red wine was made in Champagne using the Burgundian manner, he did "not call that, the natural Wine of the Province, because By Champaign we are to understand the Wine most commonly made there." Four years later, in 1731, a definition of natural wine appears in an English dictionary:

Natural WINE, is such as it comes from the grape without any mixture or sophistication. Adultered WINE, is that wherein some drug is added to give it strength, fineness, flavor, briskness, or some other qualification. Sulphur'd WINE, is that put in casks wherein sulphur has been burnt, in order to fit it for keeping, or for carriage by sea.

This definition appears to mark the beginning of texts commenting on the manufacture of fake natural wine and investigations into the properties of a natural wine. In the Infusion section of the works of Francis Bacon, flavor may be given to any wine to "imitate and exceed the Colour, Flavours, and Richeness of any natural Wines of foreign Growth." A suggested experiment includes adding fresh and green leaves of Balsam to simulate Frontignac. Descriptions of wine production illuminate the expected differences between natural and adulterated wine. A 1622 recommendation for producing wine in Virginia starts with placing a little bundle of vine branches with a brick on top near the tap hole. This will be used to filter out the bits later on. The ripe grapes are then placed in the cask or tub, trodden by bare feet, then left to work for five or six days. This wine is then drawn out into a hogshead. Greener grapes may then be trodden with the leftover skins to produce a smaller wine. These leftover skins may be pressed, with a tenth-part of water added to the juice, to produce an even smaller wine. Once in barrel, the wine must purge itself and be kept topped off before sealing.
In 1665 William Hughes provides a detailed description of keeping wine. He starts with several different methods for bruising and pressing the grapes, including what is done in Germany. The juice is to be run off into firm and new vessels which are well bound with iron. Fermentation is to occur in a warmer area and to have been completed before the wine is transferred into tuns in a deep and cool cellar. A full and well bunged cask will keep the best. When tapping a cask, the best way to prevent decay is to draw all of the wine into bottles which are then stored in sand. If this is not done, then a linen cloth steeped in melted brimstone may be burnt inside of the cask. With the sulphurous fumes in the cask, it may be stopped up. William Hughes comments that Vintners and Wine-Coopers are quick to make mixtures and that if they stopped doing so "it would be much better both for their houses and health of their Customers." Customers enjoyed a pleasing and brisk wine from a drawn down cask. As an alternative to adding sugar, vinegar, vitriol, and "many other ingredients which must not be mentioned," he suggests combining boiled and concentrated wine with regular wine. If wine is too sharp, it should be drawn into bottles where it is combined with one or two spoonfuls of refined sugar. After some time this will be a "pleasant and good Wine." In 1727 S. J. recommended that the gathered grapes be immediately pressed while still cool and without bruising. This method best preserves the spirituous parts. The first juice that runs out of the press, from its weight alone, is pleasant "but has not Body enough to keep a long time without Mixture." Wine of the second and third pressing will last four or five years without mixture. Wine of the fourth pressing is tolerable to drink without mixture. For white wines he recommends running the juice into new casks to prevent coloring, but the red wines may be run into old casks provided they were sweet and clean.
In Burgundy and Champagne the casks are rinsed with water infused with peach leaves and flowers, which gives a "delicious Flavour to those Wines." Once the wine begins to ferment, the froth from the first casks may be put into the other casks so that the yeast will encourage the start of fermentation. After fermentation is complete, the casks must be kept topped off. Once they are stored in the cellar, this should be done once per month. To fine the wine, one ounce of isinglass should be mixed with white wine or brandy, and the mixture then added for every fifty gallons of wine. He then notes that in Burgundy and Champagne they burn brimstone in the casks during the first and second racking. This method provides "a more lively, brisk, and sparkling Colour" in the bottled wines. S. J. notes he cannot determine whether the English Vintners and Wine-Coopers were aware of this method. S. J. notes that the demand for frothy wines has caused dealers to experiment with "sundry sort of Drugs, and Chymical Preparations" which include mixing "Allum, Spirit of Wine, and Pidgeons Dung." He continues that the adulteration of wine sometimes takes place within the country supplying the wine. Often the English vintners and wine coopers were simply performing an act of necessity: they were correcting wines which had turned "eager and sower" or "sweet and ropey" due to the original adulteration of the wine. Published in 1747, The London Tradesman lists several honest tasks of a wine cooper, such as mixing various wines to produce the "Flavour and Taste required by the different Palates of his Customers", removing the lees, curing wines of disease, recovering pricked wine, preserving them, and recovering flavor and color lost to age. R.
Campbell then claims the wine coopers have lately converted cider and "several more noxious Materials" to resemble Port, Sack, Canary, and other wines and that "few People know when they drink the Juice of the Grape, or some sophisticated Stuff brewed by the Wine-Cooper." One text claims, "some fall Sick upon it, as many have lately done and dye, by no great quantity moderately drank of it". Fabian Philipps claimed that merchants, wine coopers, and vintners mixed their wines with "Stum, Molosse or Scum of Sugar, Perry, Sider, Lime, Milk, Whites of Eggs, Elder berries, putting in raw flesh". Recipes were published for artificial wines which resembled natural wines. Some of these artificial wines did not even include any wine. In 1661 one could make claret from cinnamon, "Galanga", Ginger, "Paradise", pepper, cloves, honey, sugar, and white wine, the mixture of which was fined with egg whites. The London and Country Cook includes several wine recipes, including one "To make a wine like claret." The ingredients include cider, Malaga raisins, barberries, raspberries, black cherries, mustard seed, and dough to cover the mixture. Apparently this "will be like common claret." Augmenting English wine with raisins was recommended by William Hughes in 1667. In recommending the "best way to help our English wines" he suggests adding one pound of raisins per gallon of English wine. The Jesuit F. Balthazar Tellez writes in 1710 that communion in Ethiopia was taken with a cup of water mixed with four or six raisins, for they had no wine and decided not to use liquor. In 1766 an argument was made to expand the definition of natural wine to cover a new method of wine production for Great Britain. This involved the addition of water to dried fruit to produce a superior "natural wine" because it only involved the restoration of moisture naturally lost during the drying and ripening of the fruit.
However, the recommended recipe is for an "artificial wine" which also includes sugar and yeast. This recipe involved 30 gallons of rain or river water combined with 100 weight of Malaga raisins picked from their stalks. This mixture would eventually ferment and had to be stirred twice per day for 12 or 14 days. The fluid and pressed raisins were then run off into a good wine cask, to which eight pounds of Lisbon sugar and a little yeast were added. This would ferment for one month, after which the cask would be sealed for at least a year-long rest. After that the wine might improve for four or five years, and then it could be flavored and colored to match other wines. The lack of wine in Ethiopia continued to be an issue towards the end of the century. Producing wine from raisins and water was practiced by Father du Bernet so that he could say mass in Ethiopia. Mr. Poncet, who informed Father du Bernet of this method, argued that a natural wine was produced whether water penetrated the raisin through the skin or through the roots of the vine. Father du Bernet remained skeptical. Natural wines were infrequently listed for sale in London newspapers. In 1711, Brooke and Helliar gave notice that their "new natural Portugal Wines" were in the City and that they were "like new natural Wine." The Viana sold for 14L and the Port for 16L per hogshead. In 1714, Brooke's Natural Wines sold White and Red French wine, Port wine, Malmsey wine, Carcavalla, Sherry, Mountain, Canary, and Rhenish. In 1740 one could purchase Vidonia Madeira wine at the Wine Vaults under Widow Lowe's. This parcel of the 1737 vintage was brought round Barbadoes and included no additional ingredients. A "Person, who values his Health, and on that Account chuseth to drink natural Wine from the Grape, rather than adulterated; will [not] in the least grudge the aforementioned Price." In 1799, an article described how the Chancellor of the Exchequer thought that sophisticated wines should be taxed as much as Port, the favorite foreign wine.
This "would check their consumption and operate at once in favour of health and revenue, by creating a preference for wholesome Natural wine." Consideration of which types of wines were natural wines increased. Vermouth was not a natural wine because there were no vineyards of that name, nor were there any natural wines which had the flavor of wormwood mixed with St. Georges wine. James Busby considered Sherry the furthest thing from natural wine because it was a mixture of wines of various ages. He continues that a natural wine is "a wine as it comes from the press without a mixture of other qualities." Several years later Ralph Barnes Grindrod quotes this same definition and goes on to write that "brandy is almost universally used in the fictitious preparation of wines." James Warre writes that Manzanilla is a natural wine, not made up for the English taste. Port, on the other hand, must "possess certain qualities not found in the natural wine, – deep colour, great body, and much richness." For him a natural wine is "limpid, white, and fragrant." The detrimental effect on the aroma and flavor of natural wine which has had brandy added is well expressed by Cyrus Redding. For the more delicate wines he found "the aroma and perfume perish, together with that peculiar freshness which renders pure wine so estimable beyond every other potable." He writes that the English relish the strength of wine with brandy, but in places where spirits are drunk, such as Sweden and Petersburgh, they prefer unadulterated natural wine, as in France. Cyrus Redding uses a combination of the pure wine and natural wine terms. He defines pure wine as "it be the pure juice of the grape alone, after due fermentation". He considered the wines of France, Germany, and Hungary the purest. It is around this period that pure wine and natural wine were used interchangeably.

Part 3 – "The juice of the grapes fermented, preserved, or fortified for use as a beverage or medicine."

The techniques of Dr.
Chaptal, Mr. Petiot, and Dr. Ludwig Gall turned attention from the post-fermentation addition of mixtures to additions made during fermentation. Dr. Chaptal added a grape sugar syrup or sugar to the fermenting must to increase the alcohol of the finished wine. In the 1850s Mr. Petiot ran the juice off of crushed grapes, then added a sugar solution to the grape must to cause an additional fermentation. He then ran off the new infusion and repeated this process until the must was exhausted. He could then obtain several times the amount of wine as compared to normal production. Dr. Gall proposed the adjustment of the must acidity followed by the addition of a sugar solution, much in the manner fruit wines had been made. Both John Louis William Thudichum and William J. Flagg considered the wine made from the first pressed grapes, to which nothing was added, natural wine. The "chemical article" produced subsequently did not sacrifice the previously obtained natural wine. In the United States the expansion of vineyards led to concerns about the varying quality of wine produced due to vintage conditions and the spread of disease. In the 1859 Report of the Commissioner of Patents on Agriculture it was proposed that wine produced from wild vines would yield 50% more if it was produced according to Dr. Gall and Mr. Petiot. Using these methods would provide profitable employment. The quality of natural wine varied, but that of Dr. Gall was always in harmony and generally preferred. David M. Balch, in writing about Dr. Gall and Mr. Petiot, opens "Yet mistaken and narrow views have led to much opposition to these methods; and have even caused them to be decried as specious forms of adulteration, by those who stand forth as champions of what they are pleased to call 'natural wines.'" He believes it was acceptable to assist nature and quotes a chapter from Dr. Mohr's Verbesserung des Weines. Dr.
Mohr believed there was no natural wine where grapes are not naturally found and that it is man who cultivates the grape on the best hillside. He viewed the natural wine debate as one between those with superior vineyards, who want to retain a monopoly on trade, and everyone else, who employ the methods of Dr. Chaptal, Dr. Gall, and Mr. Petiot to make wine just as good but cheaper. Dr. Mohr was consistent in distinguishing between "well-prepared sugar wine" and natural wine, as well as "imitated" from natural wine. In describing the definition of adulteration taken up in the debate, Dr. Mohr writes that "Selecting, pressing, racking, clarifying, sulphuring" were all natural processes. Dr. J. Bersch's lecture on Artificial Wine defines natural wine as "the must should be left just as it flows from the press without anything being added to it." He goes on, however, to argue that there is no such thing as natural wine, because wine is not a natural product: left to nature, it would turn into vinegar. Because man intervenes to promote the production of wine, it is an artificial product. With no such thing as natural wine, he defines artificial wines as those made from the residue of wine, or from actual wine to which certain substances are added. A Chaptalized wine was not artificial, but ones made by the methods of Dr. Gall and Mr. Petiot were artificial wines and should be sold as such. In 1869 James Lemoine Denman published a booklet centered on the apparent "Pure Wine Controversy" in which he concludes that Greek wines are the best. James Denman was a London wine importer and merchant who published a number of booklets which, while self-serving, strongly supported pure and natural wines against adulterated wines. He even badgered the commissioners of the 1874 London International Exhibition about what they meant by their Exhibition of Pure Wine.
For this exhibition pure wine was defined as "the exclusive produce of the country in which they are stated to be manufactured." The wines of Spain and Portugal continued to be viewed as adulterated because spirit was added before shipping to England. However, the wines of Southern France, Sicily, Hungary, America, and Australia did not have spirit added and survived the journey. Robert Druitt considered France, Germany, and Hungary natural wine producing countries, whereas the wines of Spain, Portugal, and Sicily were often fortified. He believed one reason for fortification was to help poorly made wines travel well. Of the three types of wine made in Hungary, samorodny was considered the natural wine because whole clusters are pressed. In describing the commercial relations of Greece and the United States in 1892, it was noted that the Hamburger & Co. winery was selling Greek natural wine similar to French claret and German table wines. Prompted by the introduction of a sparkling brut Saumur, the Dublin Journal of Medical Sciences considered the production of Champagne. The addition of the "highly saccharine" dosage made Champagne not purely a natural wine but an "artificial compound." However, the brut Saumur received a small dosage which was just sufficient to produce "carbonic acid gas"; thus it was a natural wine. Despite the philosophical debate about whether natural wines even exist, the term continued to be used as its scientific characteristics were studied. The composition of wine continued to appear in numerous publications detailing the various acids, sugars, and other constituents. It appears that alcoholic strength was of prime interest in England. In 1862 the Government Wine Commission measured international samples to determine the greatest natural strength of wine. They determined that it was 33.3 percent.
At the time the Chancellor of the Exchequer delivered a speech in which he stated, "When I speak of natural wine, I meant wine with only so much spirit added as is necessary to make it a merchantable commodity for the general markets of the world." In 1875 A. Dupre lectured that natural wine could very rarely reach 15.8% alcohol by volume. In the Report from the Select Committee on Wine Duties of 1879, Parliament accepted the conclusion of the Customs authorities that no natural wine contained more than 26 per cent of natural spirit and redefined natural wine "as one in which the spirit is produced entirely by natural fermentation, there being no added spirit." This categorization of wine as natural or fortified with spirits was aimed at capturing revenue on all imported spirits, because natural wine would have a low rate of duty. It also reflects that England was primarily a wine importing country rather than a wine producing country. Natural wine developed a different meaning in the United States, where there were significant quantities of wine produced. In 1877 several American states began to pass laws against the adulteration of food and beverages. In 1882 the California State Board of Viticulture started a campaign for a federal pure wine law. There were significantly different vineyard and winemaking practices throughout the country. The bill had to balance a strict definition against the practical methods employed by vintners. In 1886 the California State Viticultural Commission tried to pass a National Pure Wine Bill through Congress. It was believed that such standards would improve the marketability of Californian wine.
The bill failed, so in 1887 California passed the California Wine Adulteration Law, in which pure wine was defined as "The juice of the grapes fermented, preserved, or fortified for use as a beverage or medicine." During fermentation pure grape products could be added, as could water to decrease the strength of the must, but not products such as aniline dyes, salicylic acid, glycerine, or alum. Sulphur fumes were allowed to disinfect and prevent disease, while gelatinous and albuminous materials could be used for fining and clarification. This act appears in the appendix of the book Food Adulteration and its Detection. This book set out to present the most important facts about food and beverage adulteration. In describing the history of adulteration, the authors cite the Parisian Conseil de Salubrite, who found 15.18% of wine, spirits, and beer were adulterated between 1875 and 1880. The most common adulterations of American wines from the 1870s included water, spirits, coal tar, vegetable color, and imitation wine. For example, ambergris and sugar would add bouquet to claret. Imitation claret was still manufactured, with one recipe requiring white sugar, raisins, sodium chloride, tartaric acid, brandy, water, gall nuts, and brewer's yeast. It was reported that in 1881, 52 million gallons of imitation claret were made in France. In New York City alone, two manufacturers produced over 30,000 gallons of fake Californian wine per month. The spread of phylloxera and its subsequent destruction of vineyards significantly reduced the supply of natural wine. This was believed to be the stimulus for the large increase in adulterated wines. On Thursday, February 1, 1906 the Committee on Ways and Means in the House of Representatives considered a Pure Wine Bill.
This bill defined natural wine as "the product of alcoholic fermentation of the juice of the grape, with such additions as are necessary in usual cellar treatment." These cellar treatments were defined as the correction of the must through the addition of pure cane or beet sugar solution, the use of sulphites before and after fermentation, the use of clarifying agents, the use of tartaric or citric acid, the normal manipulations to produce sparkling, sweet, and fortified wines, and finally the blending of natural wines. Later that same year, in August 1906, the Governor of the Cape of Good Hope passed the act To Prohibit the use of certain foreign substances in Wine, Brandy, Whiskey and spirits. This act defined natural wine: "'Pure Natural Wine' means the product solely of the alcoholic fermentation of the juice or must of fresh grapes without the addition of any foreign substance as hereinafter defined, before, during, or after the making of the same." Foreign substances included ethers, oils, barium, fluorine, magnesium, arsenic, lead, etc. Foreign substances did not include yeast, isinglass and other agents for clarification, common salt, sulphate of lime, tartaric acid, natural products of grape leaves and flowers, and pure wine spirit. The natural wine term is centuries old. It was used in religious, medical, historical, and scientific books and journals. It has been used in newspaper advertisements and legally defined. The use of the term appears to have developed in three stages. First, it was used in reference to wine for communion. Second, it was used as a clear distinction between natural and artificial wines during a time when wine production yielded varying qualities of wine, not all wines were successfully transported, and not all wines kept well. To some degree it was acceptable to add some ingredients or act upon the wine to correct faults.
Exceeding that intention by turning the wine into something it was not involved a wider variety of ingredients and was considered adulteration. Vineyard practices and specific winemaking techniques were not defined by the common usage. Finally, with improvements in production, transportation, and storage, attention turned to the methods of Dr. Chaptal, Mr. Petiot, and Dr. Gall. The methods of Mr. Petiot and Dr. Gall allowed for the artificially increased production of wine, and with the advance of phylloxera a new wave of adulterated and artificial wine appeared. England, the great wine importing country, defined natural wine with the intent to capture all revenue from imported spirits. The wine producing countries of the United States, New South Wales, and the Cape Colony defined natural wine both for consumer protection and for trade purposes, allowing for practical variations in production techniques. The conflict over the term natural wine is not surprising given its historical meaning. Perhaps the conflict may be avoided if this modern category of wine is given a new name.
Top 10 Nursery Rhymes

Written by: Catalogs.com Editorial Staff, November 14, 2011. Filed under Books. Contributed by Jennifer Andrews, Catalogs.com Top 10 Guru.

Nursery rhymes have been used for hundreds of years as a soothing and light-hearted way to help babies and children fall asleep at night. Most rhymes have a simple rhythm and soothing melody to help kids drift off to sleep with peaceful thoughts and visions. Here is a list of the top ten nursery rhymes as suggested by The Guardian and the A-Rhyme-A-Week program from Webbing Into Literacy (WIL).

10. Little Miss Muffet
Famously illustrated by Arthur Rackham, Little Miss Muffet is a popular nursery rhyme that has kids running away from imaginary spiders.

9. Incey Wincey Spider
The Incey Wincey Spider is a common nursery rhyme that reveals that even after the rain, the sun can come out and life goes on.

8. Baa, Baa, Black Sheep
A fun nursery rhyme, these lyrics are enjoyed by many a kid for the pronunciation of "baa" as evoked from a sheep. Sing this one to kids at night as they count sheep to drift off to sleep.

7. Twinkle, Twinkle, Little Star
One of the most well-known nursery rhymes, Twinkle, Twinkle, Little Star is set to a soothing and soft melody that will conjure up images of hope and wistfulness for kids and adults alike.

6. Jack and Jill
Jack and Jill tells the story of a little boy and girl who go up a hill to fetch water from a well, only to fall and come tumbling down.

5. Humpty Dumpty
This rhyme has conjured up many vivid images and illustrations of a big-headed, egg-shaped king that falls from a wall and is unable to be put together again.

4. Rain, Rain, Go Away
This rhyme is ideal for rainy days when kids are stuck inside and longingly looking outdoors to play.

3. Star Light, Star Bright
This rhyme has kids everywhere wishing on stars for something big or something small to come true for them.

2. Jack Be Nimble
A cute rhyme, this poem conjures up images of a little boy jumping over a candlestick. But quickly! Otherwise, he risks getting burned.

1. Hickory Dickory Dock
This fun nursery rhyme tells the tale of a mouse that runs up a clock, only to run back down scared when the clock strikes one and the bells start chiming.
What is a Bachelor's Degree? A bachelor’s degree is a four-year degree meaning it typically takes four years of full-time study to complete your bachelor’s degree. In these four years, you will complete 120 semester credits or around 40 college courses. If your college uses a quarter system rather than a semester system, you’ll need to complete a minimum of 180 quarter credits to earn an accredited bachelor’s degree. A bachelor’s is a post-secondary undergraduate degree. Historically, the term “college degree” meant a bachelor’s or traditional four-year degree. Bachelor degrees are also sometimes called baccalaureate degrees. Regionally accredited liberal arts colleges award most of the bachelor degrees in the United States. Liberal arts classes are required for all types of bachelor degrees. In most cases, more than half of a bachelor’s degree consists of general education or liberal arts courses in areas such as English, critical thinking, psychology, history and mathematics. Typically only 30 to 36 credits—10 to 12 courses—will be in your major area of study. The bachelor’s degree remains the standard for entry into many professional careers. Getting a bachelor’s degree can be the ticket to a more promising career. In most cases, you cannot attend a professional graduate school in law, medicine, or teacher education unless you hold a bachelor’s degree. That means you will almost always need a bachelor’s before enrolling in a master’s program to open the door to even more career opportunities. Interested in pursuing a bachelor degree? These schools offer an excellent variety of options, many of which are affordable, flexible and/or accelerated. - Western Governors University of Utah is a competency-based university, founded by the governors of 19 western states. You earn college credits by demonstrating your knowledge or “competency” in specific subject areas. - was approved by The U.S. 
Department of Education to offer an innovative "FlexPath" direct-assessment set of programs. Capella's FlexPath programs offer the potential to significantly reduce the cost of a degree, accelerate the time required for degree completion, and better align learning to the needs of the student.
- is transfer-friendly, offers accelerated programs and provides a dedicated academic advisor and student services designed for the adult learner.
- Grand Canyon University is a premier Christian university offering over 50 bachelor degree programs.
When to Seek a Bachelor's Degree
When you:
- Know that a bachelor's degree is required for your career
- Have already earned more than 60 semester college credits or hold at least one associate degree
- Know that a graduate or professional degree will be required for your career
Associate vs. Bachelor Degree
While a bachelor's degree is a four-year degree, an associate's degree takes two years to complete. A bachelor's degree program aims to round out a student not only as a potential worker, but as a whole person. It equips graduates with skills and knowledge in a particular field that will lead them to professional and middle-management jobs. Courses needed to get a bachelor's degree include general courses in the liberal arts and specific required courses in a major concentration. Associate degrees, on the other hand, typically prepare graduates for entry-level work with the basic skills and knowledge needed in a field. Associate's degrees can also allow students to complete general education requirements through a two-year program, for later transfer into a four-year degree. Many traditional and online colleges, universities, community colleges, and junior colleges have what are called 2+2 programs. After a student completes the first two years of their four-year bachelor's degree, they have earned their associate's degree.
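As a quick aside, the credit arithmetic quoted earlier (120 semester credits, roughly 40 courses, or 180 quarter credits) can be sketched in a few lines of Python. The 1.5 quarter-credits-per-semester-credit conversion factor and the typical 3-credit course are assumptions consistent with those figures, not rules stated by any particular school.

```python
# Sketch of the standard bachelor's-degree credit arithmetic.
# Assumptions (not from any one school's policy):
#   - 1 semester credit is equivalent to 1.5 quarter credits
#   - a typical course is worth 3 semester credits

SEMESTER_CREDITS_REQUIRED = 120
QUARTER_CREDITS_PER_SEMESTER_CREDIT = 1.5
TYPICAL_COURSE_CREDITS = 3

quarter_credits_required = SEMESTER_CREDITS_REQUIRED * QUARTER_CREDITS_PER_SEMESTER_CREDIT
approx_courses = SEMESTER_CREDITS_REQUIRED // TYPICAL_COURSE_CREDITS

print(quarter_credits_required)  # 180.0
print(approx_courses)            # 40
```

Both results line up with the figures in the text: 180 quarter credits and about 40 courses.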
A student can continue their education post-associate at a larger university or college through an articulation agreement. This plan can be an easy and affordable route to a bachelor's degree.
Types of Bachelor Degrees
A list of bachelor degrees and their specific majors and concentrations would be almost infinite. The three most popular types of bachelor degrees are:
- Bachelor of Arts (BA degree)
- Bachelor of Science (BS degree)
- Bachelor of Fine Arts (BFA degree)
What is a BA Degree?
A BA degree generally requires students to take fewer concentration courses and to focus more on exploring the liberal arts. These students have a little more freedom when it comes to customizing their education to fulfill their career goals and aspirations. The most common majors include English, Art, Theatre, Communications, Modern Languages and Music.
What is a BS Degree?
The BS degree, on the other hand, is less focused on exploration and more targeted to a specific concentration. Bachelor of Science students, more often than not, focus specifically on the field of their major and tend to be more career-focused. Bachelor degrees in the medical field, for example, are more likely to be Bachelor of Science degrees. Popular majors on the Bachelor of Science degree list include:
- Computer Science
- Chemical Engineering
- Information Technology
What is a BFA?
The BFA is another vocational or professional degree. The goal of a BFA program is for its graduates to go on to become professionals in the creative arts world. This includes dancers, singers, actors, painters, and sculptors, just to name a few. Like the BS degree, a BFA program differs from a BA program mainly in its tendency to focus more on the major concentration than on general studies. The Savannah College of Art and Design's BFA in Graphic Design is one example.
TIP: Should you earn a second bachelor's degree? In most cases, the answer is NO.
If you have a bachelor's in one area (say, art history) and are trying to re-tool to work in another area, such as human resources, consider adding a certificate to your resume rather than trying to earn a second bachelor's degree. By earning a certificate you'll essentially be adding a new "major" area of study to the general education studies of your original bachelor's degree.
Accelerated Bachelor Degree Programs
The length of time it takes to earn a bachelor's degree depends largely on the degree program you choose and the college in which you enroll. Options vary from full-time, traditional four-year programs to accelerated online bachelor degree programs that can be completed in just two years. Others may pursue their degree part-time, in which case it will take longer. If you have previously completed a number of post-secondary courses, these courses may be approved for transfer credit, reducing the time it takes to complete a four-year bachelor's degree. If you have an associate's degree, you may also be eligible to enroll in an accelerated, 90-credit online bachelor's degree program. In addition, adult students may have earned prior higher education credits that can be transferred, or have completed workforce trainings and gained professional experience that also qualify for earned credits. Many higher education institutions allow students to test out of courses through recognized assessments, including the College Level Examination Program (CLEP) and DANTES Credit by Examination. Finding a distance education program that offers year-round courses may offer another alternative, if you have the time commitment and motivation.
Tip: If time is of the essence and you need a bachelor's degree as fast as possible, consider attending an online school that has flexible enrollment periods.
This allows students to take their courses on their own time instead of within the confines of a traditional semester or quarter.
Bachelor's Degree Salary
In terms of academic respect, BA, BS and BFA degrees are all valued equally. The financial benefits, however, vary with the field a person enters. BS degree jobs, as in the field of engineering, often pay more than their BA counterparts in education or the arts. Some of the highest paying jobs, such as physician and lawyer, require not only a bachelor's degree but also additional schooling. Does a bachelor's degree guarantee steady employment? No. But it does help your chances significantly. Even when unemployment is high, the unemployment rate for people with bachelor's degrees is lower by at least a few percentage points. On average, according to a Georgetown report, college graduates (those with a bachelor's) make 84 percent more over a lifetime than those who have earned only a high school diploma. According to the National Center for Education Statistics, 72 percent of young adults (aged 25-34) who had earned a bachelor's degree worked full time, year round in 2013. According to that same report, young adults with bachelor's degrees made more than twice as much money as young adults with only a high school education: an average salary of $48,500 compared to $23,900. Payscale reports salaries for professionals with bachelor's degrees. Potential early-career earnings include annual salaries of up to:
- Engineering: $101,000
- Computer Science: $69,100
- Dental Hygiene: $65,800
- Mathematics: $58,800
- Nursing: $56,600
- Finance: $55,700
- Management: $54,200
Average Cost of a Bachelor's Degree
Tuition for a bachelor's degree varies significantly from school to school. The College Board published a report stating that the median tuition for a full-time student in a single year at a private, not-for-profit four-year institution is about $11,000.
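As a sanity check on the earnings figures quoted above (the arithmetic below is illustrative, not taken from the NCES or Georgetown reports themselves), the $48,500 versus $23,900 averages do work out to slightly more than double:

```python
# NCES average salaries quoted in the text (young adults aged 25-34, 2013).
bachelors_salary = 48_500
high_school_salary = 23_900

ratio = bachelors_salary / high_school_salary
print(round(ratio, 2))  # 2.03 -> consistent with "more than twice as much"
```

The same kind of quick check can be applied to any of the salary figures cited by a school before you take them at face value.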
Affordability factors can include, but are not limited to: public vs. private institutions, the state in which you enroll, available aid, and your status as an in-state or out-of-state resident. Online bachelor degree programs have set rates that do not base tuition on in-state or out-of-state status. Still, these rates vary widely from school to school and from program to program. Financial aid tremendously affects the total cost of a bachelor's degree. For example, in the same study, the College Board found that while the average tuition and fees at a public college are about $8,900, the actual net price when factoring in grants and tax credits equaled about $3,100.
- Be proactive in your search for the right degree program, and the right school.
- Choose your major according to your interests and career goals, then explore cost rankings and look for the best financial options.
- View GetEducated's list of the cheapest colleges for an online bachelor's degree and see if they offer your intended major.
How to Choose a Bachelor's Degree Program
Before applying, answer these important questions.
- Does this particular degree program fulfill criteria for my intended profession?
- Will my profession require licensure? Is this degree program approved for licensure?
- Will this bachelor's degree transfer into a master's degree if I decide to further my educational goals in the future?
- How much will it cost to obtain my degree?
- Is financial aid available?
- Is coursework semester-based? Year round? Accelerated?
- Does online mean completely online? Or are there on-campus requirements?
- How much flexibility do I need? Do I prefer asynchronous courses that I complete on my own time, or would I enjoy synchronous classes that meet at set times?
Application requirements vary widely among universities. Most colleges will require that you have a high school diploma or a GED equivalency.
You will most likely need to complete an application, and may need to submit additional documentation, such as official transcripts or assessments. If a four-year program seems daunting, consider a two-year program that will transfer into a bachelor's program.
TIP: Some careers may require a very specific type of bachelor's degree. For example, if your goal is to become a public school teacher, your state Board of Education will require, at minimum, a bachelor's degree in education, and that degree will need to include some very specific courses. Check with your state licensing board before enrolling in any bachelor's degree program in accounting, education, nursing, counseling or engineering, in particular.
Show Me an Online Bachelor's Degree
Basically, a bachelor's degree looks like an associate degree doubled. Below is a sample online bachelor's degree from Western Governors University so you can see the type of curriculum commonly required. Colleges vary in their exact degree requirements, so compare them carefully on the courses they will require you to take to earn your bachelor's degree in any one major area.
Western Governors University Bachelor of Science in Business Management
Full-time undergraduate students must be enrolled in at least 12 competency units (CUs) per term. WGU recognizes prior learning and experience to accelerate your degree program, and for this reason measures its courses in CUs, which are similar to traditional credit units. Below is a traditional route for individuals who enroll at WGU with no transfer credits.
Course Description and Competency Units (CUs):
- Organizational Behavior and Leadership (3 CUs)
- English Composition I (3 CUs)
- Introduction to Geography (3 CUs)
- Principles of Management (4 CUs)
- English Composition II (3 CUs)
- Fundamentals of Business Law and Ethics (6 CUs)
- Intermediate Algebra (3 CUs)
- College Algebra (4 CUs)
- Integrated Natural Science (4 CUs)
- Integrated Natural Science Applications (4 CUs)
- Legal Issues for Business Organizations (3 CUs)
- Elements of Effective Communication (3 CUs)
- Introduction to Probability and Statistics (3 CUs)
- Information Systems Management (3 CUs)
- Principles of Accounting (4 CUs)
- Critical Thinking and Logic (3 CUs)
- Introduction to Humanities (3 CUs)
- Microeconomics (3 CUs)
- Ethical Situations in Business (3 CUs)
- Macroeconomics (3 CUs)
- Global Business (3 CUs)
- Quantitative Analysis for Business (6 CUs)
- Fundamentals of Marketing and Business Communication (6 CUs)
- Marketing Applications (3 CUs)
- Managerial Accounting (3 CUs)
- Project Management (6 CUs)
- Strategy, Change and Organizational Behavior Concepts (7 CUs)
- Finance (3 CUs)
- Quality, Operations and Decision Science Concepts (8 CUs)
- Business Management Tasks (3 CUs)
- Term ten: Business Management Capstone Written Project (4 CUs)
Popular Online Bachelor Degrees
- Business Administration
- Business Management
- Construction Management
- Hospitality Management
- Human Resources
- International Business
- Management Information Systems
- Nonprofit Management
- Operations & Logistics
- Organizational Leadership
- Project Management
- Public Administration
- Real Estate
- Sports Management
- Technology Management
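As a check on the sample WGU plan above, the competency units listed sum to exactly 120, the standard bachelor's credit total, and at the full-time minimum of 12 CUs per term they fit into ten terms, matching the capstone in term ten. A small sketch (the CU values are copied from the course list above):

```python
import math

# Competency units (CUs) for each course in the sample WGU plan above.
cus = [3, 3, 3, 4, 3, 6, 3, 4, 4, 4,
       3, 3, 3, 3, 4, 3, 3, 3, 3, 3,
       3, 6, 6, 3, 3, 6, 7, 3, 8, 3, 4]

total = sum(cus)
print(total)  # 120, the standard bachelor's credit total

# Full-time students must take at least 12 CUs per term.
min_terms = math.ceil(total / 12)
print(min_terms)  # 10, matching the capstone in term ten
```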
It may seem odd that Santa is smoking and touching his nose in this illustration from an 1888 version of Clement Clarke Moore's "A Visit from St. Nicholas," later called "The Night Before Christmas." But the verses of Moore's poem, first published in 1823, mention both habits. "The stump of a pipe he held tight in his teeth," writes Moore. In a later stanza, he adds, "And laying his finger aside of his nose, And giving a nod, up the chimney he rose!" Moore borrowed these traits from an earlier description: Washington Irving wrote about St. Nicholas smoking a clay pipe and "laying his finger beside his nose" to magically disappear in A History of New York from the Beginning of the World to the End of the Dutch Dynasty in 1809.
String Games: Cat's in the Cradle
Source: Babycenter Community
Ages: 8+ years
Instructions: Do you remember doing this by the HOUR with your friends? Don't let your kids miss it. To do it, you need a piece of string tied at the ends to make a circle. In the directions, 'you' is the first person and 'she' is the second person. You and she take turns: when it's HER turn, the directions will switch sides.
1. You put your hands through the string. Keep your thumb out of it.
2. Then you loop the string around each hand. Keep your thumb out of the loop.
3. Put the middle finger of one hand through the loop on the other hand and pull.
4. Put the middle finger of the OTHER hand through the loop. This is "the Cat's Cradle." Now comes the first hard thing. Find the two places where the string makes an X.
5. She takes her thumb and forefinger and pinches those X-shaped parts.
6. Still pinching them, she moves her hands farther apart, until the string is taut.
7. This is hard, too, so there are two pictures (SEE WEBSITE): a) she kind of points her fingers DOWN (through the sides) and then b) scoops them up through the middle and pulls, very gently. As she does the last part, you should let the cat's cradle slide out of YOUR hands.
8. She ends up with the cat's cradle on HER hands.
For detailed PICTURES explaining the steps, go to the website.
Similar activities: Heart Strings Game, Cat and Mouse, Cat in the Hat Birthday Party, Egg Carton Cat Nose, Hanukkah Fishing Game, Stringing Macaroni, A Sew And Sew Hanukkah Game, Shamrock Catch Game, Halloween Cat Recipes, Cat Puppet, The Name Game, Bored with Board Games?, The Valentine And Game, The Opposites Game
The Birmingham Civil Rights Institute in Alabama does a good job of showing what blacks endured before the civil rights victories of the 1960s. I visited there last fall and was especially struck by one particular image — a 1926 map of the small and isolated patches of Birmingham where city zoning regulations allowed blacks to live. What struck me was the similarity of this map to maps of the isolated patches of the West Bank including East Jerusalem where Palestinians are allowed to live. The map then made me think about other similarities between the oppression of blacks in the Jim Crow South and Israel’s present-day oppression of Palestinians. The methods for keeping blacks within their enclaves in Birmingham were more direct and brutal than the redlining agreements among banks and realtors that maintained a de facto segregation in the North. Municipal zoning laws in Birmingham prevented sales to blacks outside designated areas, and if a black person somehow acquired a house outside the designated area, even if just across the street, the house would be blown up. Similarly, the Israeli legal system keeps Palestinians within restricted areas of East Jerusalem and elsewhere in the West Bank. Palestinians living outside those areas have been evicted and their homes destroyed or occupied by Jewish settlers. Eighteen thousand Palestinian homes have been destroyed by Israel since 1967, according to the Israeli Committee Against House Demolitions. The black areas and white areas of Birmingham were very different physically. The black areas often lacked municipal amenities or services such as street lighting, paved streets, sidewalks, garbage collection and sewers that the white areas had. Similarly, the Palestinian areas of East Jerusalem often lack these same basic facilities and services, and the differences between Palestinian areas and those reserved for Israeli settlers are clear to all. 
Suppression of the human rights of blacks in the South was maintained by both “legal” and extralegal means. State and municipal Jim Crow laws restricted residence, use of public facilities, use of public transport, interracial marriage and other aspects of life in the South. White courts and police forces enforced these laws and the whole system of segregation. Arbitrary arrests under vagrancy laws yielded large numbers of black prisoners (who were often forced to do hard labor). Nonviolent civil rights marches and protests were met with police and state National Guard violence. Similarly, Israeli control over the lives of Palestinians is maintained by a system of laws, courts, police and Israeli military that discriminates against Palestinians. Laws restrict where Palestinians can live, where they can travel, what roads they can travel on, and whether they can live with their spouse in another part of the country. Permits to travel from the West Bank to East Jerusalem for work are tightly controlled and dependent on “good” behavior. “Administrative detentions” have led to the indefinite incarceration of thousands of Palestinians without trials. The Israeli military meets unarmed protests against the separation wall and the taking of Palestinian land with violence. Black compliance with the system of segregation in the South was ensured by extralegal as well as legal means, including economic threats, harassment of various sorts, and extreme violence. More than 5,000 lynchings were recorded between 1882 and 1959, and many beatings and killings went unrecorded. Violence against blacks increased as the civil rights movement grew in strength during the 1950s and 1960s. In one year alone 30 black homes and churches were bombed in Birmingham. The white-controlled legal system only rarely prosecuted white-on-black violence. Similarly, harassment and violence against Palestinians by Israeli settlers in the West Bank including East Jerusalem occurs almost every day. 
The settlers try to force Palestinians off their land or to leave the region entirely. The settlers threaten or attack children on their way to school and shepherds in the fields. Palestinian land, wells and olive groves are occupied. The Israeli military protects the settlers, and the Israeli legal system only rarely prosecutes settler harassment or violence. Blacks in the Jim Crow South had no control over the governments that oppressed them and denied them their share of common resources. The 15th Amendment of 1870 gave blacks the right to vote, but that right was progressively taken away in Southern states following the failure of reconstruction. Discriminatory registration procedures were introduced and were enforced by violence. As late as the 1960s, many counties in the South, even those with black majorities, had no registered black voters. The Voting Rights Act of 1965 finally changed that. Similarly, the four million or so Palestinians in Gaza and the West Bank, including East Jerusalem, have no say in the government that in fact controls them. They cannot vote in the Israeli elections. Palestinians did vote for a virtually powerless Palestinian government in 2006 in which a majority of seats in the parliament went to Hamas, a political party. The Hamas legislators were immediately arrested and jailed by Israel. Many were kept in prison for more than five years and the elected parliament has never been able to meet. Even if the parliament could meet, it would have only limited control over limited enclaves of the West Bank. Israel controls the water, electricity, borders, airspace, exports and imports of the enclaves, and the Israeli military enters the enclaves and arrests Palestinians at will. Nonviolent methods such as marches, boycotts and direct actions are a critical tool for the success of any human rights movement, such as the American civil rights movement, that confronts a power structure with a monopoly on physical force. 
The civil rights movement in the United States maintained the practice of nonviolence to a heroic degree over many years, even in the face of violent repression from the Southern white power structure. Participants aroused the conscience of the rest of the nation and the world.
Tactics of resistance
Similar methods are now of central importance for the Palestinian rights movement. Protest marches against the separation wall, "Freedom Rides" on Israeli-only public transit, and "camp-ins" on land illegally expropriated for Israeli settlements are becoming common now in Palestine. Internationally, boycotts of all sorts and divestment from companies that maintain and profit from the occupation of Palestinian land are taking hold. The blacks in the American civil rights movement made their appeal to the federal government for redress of wrongs committed at the lower levels of state and local governments. The federal government was already formally committed to the rights of blacks through the 14th and 15th amendments as well as various Supreme Court decisions. It also had authority and power over local governments. The aroused conscience of the nation and of the world finally forced the United States federal government to act. Presidents John F. Kennedy and Lyndon Johnson could not continue to present the United States to the world as the land of freedom and democracy when its own citizens were being beaten for asserting their freedom and their right to vote. Here too there are parallels between the civil rights movement in the American South and today's movement for Palestinian rights. Israel cannot indefinitely present itself as a law-abiding, humane and democratic state when it denies the human rights of the four million or so Palestinians in Gaza and the West Bank.
The federal government of the United States shares responsibility for the continuing denial of Palestinian human rights, just as for many decades it shared responsibility for the denial of human rights to blacks in the Jim Crow South by not enforcing federal law. Now, and for many decades, United States diplomatic support has allowed Israel to violate international law with impunity. The United States has blocked United Nations sanctions against Israel for such violations of international law as the occupation of Palestinian land, the colonization of the West Bank by placing settlers on that land, and the annexation of East Jerusalem, the historic home of Christian and Muslim Palestinians.
America breaks own law
In addition, the United States federal government provides about $3 billion in military aid to Israel every year, and may be violating its own laws in doing so, as pointed out by a recent letter to Congress from 15 leaders of major American Christian churches ("Religious leaders ask Congress to condition military aid to Israel on human rights compliance," Presbyterian Church USA, 5 October 2012). The letter urged an "investigation into possible violations by Israel of the US Foreign Assistance Act and the US Arms Export Control Act, which respectively prohibit assistance to any country which engages in a consistent pattern of human rights violations and limit the use of US weapons to 'internal security' or 'legitimate self-defense.'" The letter cited evidence for human rights violations on the part of Israel and for Israel's use of US arms against Palestinian civilians. The tactics for resisting segregation brought significant changes for blacks in the South. Hopefully, with commitment and perseverance, similar methods may someday accomplish the same for Palestinians.
Curtis Bell is a peace activist in Portland, Oregon.
He is a member of the board of Unitarian Universalists for Justice in the Middle East, an organization that works for Palestinian rights within the Unitarian Universalist denomination.
Allergic asthma is a condition that causes coughing, wheezing, shortness of breath, rapid breathing and tightness in the chest. These symptoms are the result of the immune system overreacting to harmless antigens (allergens), such as tree pollen. Allergic asthma is characterized by early and late asthmatic responses (EARs and LARs) following the introduction of an allergen. An EAR is an asthma response that occurs immediately and usually resolves after a couple of hours; a LAR is a delayed asthma response, which can occur hours after exposure to an allergen. LARs are followed by airway hyperresponsiveness (AHR): increased airway sensitivity to a bronchoconstrictor stimulus. Among humans, the time point at which the response occurs is highly variable. Researchers are learning more about asthmatic responses to allergens by using guinea pigs as a model for functional parameters of the condition. Guinea pigs are a valuable animal model because they have a distribution of mast cells similar to that of humans, and mast cells play a key role in inflammation. Also, compared to mouse models, guinea pig EAR bronchoconstriction is pronounced and responsive to more inflammatory mediators. However, studies in recent years have shown that guinea pigs have become less sensitive to a standard protocol used for eliciting EAR, LAR, AHR and airway inflammation. Thus, to re-establish the conditions necessary to use guinea pigs in asthma research, this study [1] assessed lung function and inflammation in relation to several different protocols. The study used the antigen ovalbumin (Ova), the main protein found in egg white, to elicit asthmatic responses. (Ova is commonly used in allergen challenges, and an Ova protocol had been previously developed and successfully used in guinea pig studies.) Six male guinea pigs were sensitized with Ova injections. Aluminum hydroxide (Al(OH)3) was administered with the Ova to enhance immune response.
After allowing time for antibodies and immune responses to develop, the guinea pigs underwent an allergen challenge: exposure to inhaled Ova for one hour. A control group was sensitized in the same way, but exposed to aerosolized saline. Six different sensitization and challenge conditions were performed, with variable Ova challenge doses, numbers of sensitizations, Ova sensitization doses, Al(OH)3 doses, and challenge days. Following the allergen challenges, specific airway conductance (sGaw) measurements were used to assess EAR and LAR. Airway response to histamine was measured to assess AHR. After lung function tests, the guinea pigs were euthanized. Lung tissue was sampled and eosinophils, macrophages, lymphocytes, and neutrophils were counted to determine the effect of the sensitization and challenges on pulmonary inflammation. For this study, functional asthmatic responses were assessed using DSI's Buxco® respiratory solutions. Using the patented Allay™ restraint in a double-chamber plethysmograph, Buxco Non-Invasive Airway Mechanics (NAM) technology measured sGaw in conscious, spontaneously breathing guinea pigs. Airway responses to aerosolized histamine before and after the Ova challenge were also measured using the same technology. Results confirmed that the protocol used in previous studies no longer achieved the full range of desired effects. Ultimately, the authors found that increasing the Ova sensitization and challenge concentrations restored AHR, increased the peak of the EAR, and increased eosinophils. From that point, either increasing the Al(OH)3 concentration during sensitization or extending the duration between Ova sensitization and challenge induced EAR, LAR, AHR and pulmonary inflammation. In addition, allowing more time for the immune response to develop before the challenge prolonged the EAR and LAR.
Interestingly, there was also a dissociation between AHR and the influx of inflammatory cells, highlighting the importance of assessing asthmatic responses directly, rather than relying on cell counts alone.
[1] Lowe, A. P. P., Broadley, K. J., Nials, A. T., Ford, W. R. & Kidd, E. J. (2015). Adjustment of sensitisation and inflammatory responses to ovalbumin in guinea-pigs. Journal of Pharmacological and Toxicological Methods, 72: 85-93. doi: 10.1016/j.vascn.2014.10.007
To read the complete article, visit: http://www.ncbi.nlm.nih.gov/pubmed/25450500
Bile duct cancer (cholangiocarcinoma)
The bile ducts are the tubes connecting the liver and gall bladder to the small intestine (small bowel). Bile is a fluid made by the liver and stored in the gall bladder. Its main function is to break down fats during their digestion in the small bowel. In people who have had their gall bladder removed, bile flows directly into the small intestine. The bile ducts and gall bladder are known as the biliary system.
[Diagram showing the position of the bile duct]
Cancer is classified according to the type of cell from which it starts. Cancer of the biliary system almost always starts in a type of tissue called glandular tissue and is then known as adenocarcinoma. If the cancer starts in the part of the bile ducts contained within the liver it is known as intra-hepatic. If it starts in the area of the bile ducts outside the liver it is known as extra-hepatic. This information concentrates mainly on extra-hepatic bile duct cancers. Intra-hepatic bile duct cancers may be treated like primary liver cancer.
Causes and risk factors
The cause of most bile duct cancers is unknown. A number of risk factors can increase your risk of developing bile duct cancer. These are:
- Inflammatory bowel disease: People who have a chronic inflammatory bowel condition, known as ulcerative colitis, are at an increased risk of developing this type of cancer.
- Abnormal bile ducts: People who are born with (congenital) abnormalities of the bile ducts, such as choledochal cysts, are more at risk of developing cholangiocarcinoma.
- Infection: In Africa and Asia, infection with a parasite known as the liver fluke is thought to cause a large number of bile duct cancers.
Bile duct cancer, like other cancers, is not infectious and cannot be passed on to other people.
Signs and symptoms

If cancer develops in the bile ducts it may block the flow of bile from the liver to the intestine. This causes bile to flow back into the blood and body tissues, leading to the skin and whites of the eyes becoming yellow (jaundice). The urine also becomes dark yellow and the stools (bowel motions) are pale. The skin may become itchy. Mild discomfort in the abdomen, loss of appetite, high temperatures (fevers) and weight loss may also occur. These symptoms can be caused by many things other than bile duct cancer, but any jaundice, or any symptoms that get worse or last for a few weeks, should always be checked by your doctor.

How it is diagnosed

Usually you begin by seeing your GP, who will examine you and refer you to a hospital specialist for any necessary tests and for expert advice and treatment. At the hospital the doctor will ask about your general health and any previous medical problems. They will also examine you and take blood samples to check your general health and that your liver is working properly. The following tests are commonly used to diagnose bile duct cancer:
- Ultrasound scan. Sound waves are used to build up a picture of the bile ducts and surrounding organs. These scans are done in the hospital's scanning department. You will be asked not to eat, and to drink clear fluids only (nothing fizzy or milky), for four to six hours before the scan. Once you are lying comfortably on your back, a gel is spread onto your abdomen and a small device, like a microphone, is rubbed over the area. The sound waves are converted into a picture by a computer. The test is completely painless and takes 15-20 minutes.
- CT (computerised tomography) scan. A CT scan takes a series of x-rays which are fed into a computer to build up a detailed picture of your bile ducts and surrounding organs.
On the day of the scan you will be asked not to eat or drink anything for at least four hours before your appointment. You will be given a special liquid to drink an hour before the test and again immediately before the scan; the liquid shows up on x-ray to ensure that a clear picture is obtained. Once you are comfortably positioned on your back on the couch, the scan can be taken. About half-way through the scan a special dye will be injected into a vein to show up the blood vessels. This may make you feel warm or 'flushed' for up to half an hour. The test itself is completely painless, but you will have to lie still for about 10-30 minutes. If you had little to drink before the scan, you may be advised to drink plenty afterwards to make up for this.
- MRI (magnetic resonance imaging) scan. This test is similar to a CT scan, but uses magnetism instead of x-rays to build up cross-sectional pictures of your body. During the test you will be asked to lie very still on a couch inside a large metal cylinder which is open at both ends. The whole test may take up to an hour. It can be slightly uncomfortable, and some people feel a bit claustrophobic during the scan, which is also very noisy; you will be given earplugs or headphones to wear. A two-way intercom enables you to talk with the people controlling the scanner.
- ERCP (endoscopic retrograde cholangiopancreatography). This procedure produces an x-ray picture of the pancreatic duct and the bile duct. It may also be used to unblock the bile duct if necessary. You will be asked not to eat or drink anything for about six hours before the test so that the stomach and duodenum (the first part of the small bowel) are empty. You will be given an injection to make you relax (a sedative) and a local anaesthetic spray will be used to numb your throat. The doctor will then pass a thin flexible tube known as an endoscope through your mouth into your stomach and into the duodenum just beyond it.
Looking down the endoscope, the doctor can find the opening through which the bile duct and the duct of the pancreas drain into the duodenum. A dye which shows up on x-ray can be injected into these ducts, and the doctor will be able to see whether there is any abnormality or blockage in the ducts. If there is a blockage, it may be possible for the doctor to insert a small tube known as a stent. You may have some discomfort during this procedure; if you do, it is important to let your doctor know. You will be given antibiotics beforehand (to help prevent any infection) and will probably stay in hospital for one night afterwards.
- PTC (percutaneous transhepatic cholangiography). This is another procedure by which your doctor can obtain an x-ray picture of the bile duct. You will be asked not to eat or drink anything for about six hours before the test and will be given a sedative, as for the ERCP. An area on the right side of your abdomen will be numbed with a local anaesthetic injection and a thin needle will be passed through the skin into the liver. A dye will be injected through the needle into the bile duct within the liver, and x-rays will then be taken to see if there is any abnormality or blockage of the duct. You may feel some discomfort as the needle enters the liver. You will be given antibiotics before and after this procedure (to help prevent infection) and will stay in hospital for at least one night afterwards.
- Angiography. As the bile duct is very close to the major blood vessels of the liver, a test called an angiogram may be done to check whether the blood vessels are affected by the tumour. A fine tube is inserted into an artery in your groin and a dye is injected through the tube. The dye circulates in the arteries to make them show up on x-ray. An angiogram is carried out in a room within the x-ray department.
Sometimes an MRI scan can be used to show up the blood vessels of the liver, in which case an angiogram will not be necessary.
- Biopsy. The results of the previous tests may make your doctor strongly suspect a diagnosis of cancer of the bile duct, but the only way to be sure is to take some cells or a small piece of tissue from the affected area of the bile duct to look at under a microscope. This is called a biopsy and may be carried out during an ERCP or PTC. A fine needle is passed into the tumour through the skin after the area has been numbed with a local anaesthetic injection. CT or ultrasound may be used at the same time to make sure that the biopsy is taken from the right place.
- Endoscopic ultrasound scan (EUS). This scan is similar to an ERCP but involves an ultrasound probe being passed down the endoscope to take an ultrasound scan of the pancreas and surrounding structures.
If the doctor cannot make the diagnosis from the above tests, a procedure called a laparotomy may be done under a general anaesthetic. This involves making a cut (incision) into your abdomen so that the surgeon can examine the bile duct and the tissue around it for cancer. Sometimes this examination can be done through a tiny cut using a camera called a laparoscope; this procedure is known as keyhole surgery. If a cancer is found but looks as though it has not spread to surrounding tissues, the surgeon may be able to remove it or relieve any blockage it is causing.

Staging and grading

The stage of a cancer describes its size and whether it has spread beyond its original site. Knowing the particular type and stage of the cancer helps the doctors decide on the most appropriate treatment. Cancer can spread in the body either in the blood stream or through the lymphatic system. The lymphatic system is part of the body's defence against infection and disease.
The system is made up of a network of lymph glands (also known as lymph nodes) linked by fine ducts containing lymph fluid. Your doctors will usually look at the lymph nodes close to the biliary system in order to find the stage of your cancer.
- Stage 1A: The cancer is contained within the bile duct.
- Stage 1B: The cancer has spread through the wall of the bile duct but has not spread into nearby lymph nodes or other structures.
- Stage 2A: The cancer has spread into the liver, pancreas or gall bladder, or to the nearby blood vessels, but not the lymph nodes.
- Stage 2B: The cancer has spread into nearby lymph nodes.
- Stage 3: The cancer is affecting the main blood vessels that take blood to and from the liver, or it has spread into the small or large bowel, the stomach or the abdominal wall. Lymph nodes in the abdomen may also be affected.
- Stage 4: The cancer has spread to distant parts of the body, such as the lungs.
If the cancer comes back after initial treatment, this is known as recurrent cancer. Grading refers to the appearance of the cancer cells under the microscope and gives an idea of how quickly the cancer may develop. Low-grade means that the cancer cells look very like normal cells; they are usually slow-growing and less likely to spread. In high-grade tumours the cells look very abnormal, are likely to grow more quickly and are more likely to spread. The type of treatment you are given will depend on a number of factors, including your general health, the position and size of the cancer in the bile duct and whether the cancer has spread beyond the bile duct. Before you have any treatment, your doctor will give you full information about what it involves and explain its aims. They will usually ask you to sign a form saying that you give permission (consent) for the hospital staff to give you the treatment. No medical treatment can be given without your consent.
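The stage labels above follow a fixed precedence: the most advanced finding determines the stage. As a rough illustrative sketch only (a boolean checklist, not a clinical staging tool; real staging involves many more criteria and clinical judgement), the logic can be expressed as:

```python
# Illustrative only: a highly simplified encoding of the staging
# descriptions above, checked from most advanced to least advanced.
# Not a clinical tool.

def simplified_stage(through_duct_wall=False, nearby_organs_or_vessels=False,
                     nearby_nodes=False, major_vessels_or_gi_spread=False,
                     distant_spread=False):
    """Return a stage label from simplified extent-of-spread flags."""
    if distant_spread:
        return "4"   # spread to distant parts of the body
    if major_vessels_or_gi_spread:
        return "3"   # main liver vessels, bowel, stomach or abdominal wall
    if nearby_nodes:
        return "2B"  # nearby lymph nodes involved
    if nearby_organs_or_vessels:
        return "2A"  # liver, pancreas, gall bladder or nearby vessels
    if through_duct_wall:
        return "1B"  # through the duct wall, nothing else involved
    return "1A"      # contained within the bile duct

print(simplified_stage())                        # → 1A
print(simplified_stage(nearby_nodes=True))       # → 2B
print(simplified_stage(distant_spread=True))     # → 4
```

The ordering matters: a cancer that has reached distant organs is stage 4 regardless of which local structures are also involved.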
Benefits and disadvantages of treatment

Treatment can be given for different reasons and the potential benefits will vary for each person. If you have been offered treatment that aims to cure your cancer, deciding whether to have it may not be difficult. However, if a cure is not possible and the treatment is intended to control the cancer for a period of time, it may be more difficult to decide whether or not to go ahead. If you feel that you can't make a decision about treatment when it is first explained to you, you can always ask for more time. You are free to choose not to have the treatment, and the staff can explain what may happen if you don't. You don't have to give a reason for not wanting treatment, but it can be helpful to let the staff know your concerns so that they can give you the best advice.

Surgery may be used to remove the cancer if it has not spread beyond the bile duct. It is not always possible to carry out surgery, as the bile duct is in a difficult position and it may be impossible to remove the cancer completely. The decision about whether surgery is possible depends on the results of the tests described above. If surgery is recommended, you will be referred to a surgeon with a special interest in this rare cancer. There are different operations, depending on how big the cancer is and whether it has begun to spread into nearby tissues:
- Removal of the bile ducts. If the cancer is small and contained within the ducts, just the bile ducts containing the cancer are removed and the remaining ducts in the liver are joined to the small bowel, allowing the bile to flow again.
- Partial liver resection. If the cancer has begun to spread into the liver, the affected part of the liver is removed, along with the bile ducts.
- Whipple's operation. If the cancer is larger and has spread into nearby structures, the bile ducts, part of the stomach, part of the duodenum (small bowel), the pancreas, the gall bladder and the surrounding lymph nodes are all removed. After your operation you may stay in an intensive-care ward for the first couple of days, and will then be moved to a general ward until you recover. Most people need to be in hospital for up to two weeks after this type of operation.
- Bypass surgery. Sometimes it isn't possible to remove the tumour, and other procedures may be performed to relieve the blockage (obstruction) and allow the bile to go into the intestine; the jaundice will then clear up. The surgical method of dealing with blockage of the bile duct involves joining the gall bladder (or the bile duct) to part of your small bowel. This bypasses the blocked part of the bile duct and allows the bile to flow from the liver into the intestine. The operation is called a cholecysto-jejunostomy or cholecysto-duodenostomy if the gall bladder is used, and a hepatico-jejunostomy if the bile duct is used. Another type of operation may be necessary if the duodenum is also blocked. This is called a gastrojejunostomy and involves connecting a piece of the small bowel (the jejunum) to the stomach to bypass the duodenum. It will stop the persistent vomiting (being sick) that can occasionally happen if the cancer blocks the duodenum.
There are two ways in which it may be possible to relieve jaundice without a surgical operation, using the ERCP or PTC procedures described above. The ERCP method involves the insertion of a tube, called a stent, into the blocked bile duct. The stent is about as thick as a ball-point pen refill and about 5-10 cm (two to four inches) long. The stent clears a passage through the bile duct to allow the bile to drain away. The preparation and procedure are the same as for the ERCP described above.
By looking at the x-ray image the doctor will be able to see the narrowing in the bile duct. The narrowing can be stretched using dilators (small inflatable balloons), and the stent can then be inserted through the endoscope to enable the bile to drain. The tube usually needs to be replaced every three to four months to prevent it becoming blocked. If the tube does block, recurrent high temperatures and/or a return of the jaundice will occur. It is important to tell your specialist about these symptoms as early as possible: antibiotic treatment may be needed, and your specialist may advise that the stent is exchanged for a new one. This procedure can be done relatively easily for most people. With the PTC method, the procedure and the preparation are as described in the section about PTC. A temporary wire is passed to the area of blockage and the stent is guided along the wire. Sometimes a drainage tube (catheter) is left in the bile duct, with one end in the bile duct and the other outside the body connected to a bag which collects the bile. This helps with the insertion of the stent or, sometimes, enables x-rays to be taken to check the position of the stent after it has been put in place. It is usually left in for a few days; once the catheter is removed, the hole heals over within two days. You will be given antibiotics before and after the procedure to help prevent any infection, and it is likely that you will stay in hospital for a few days. Sometimes, if the bile duct cannot be opened easily from the small intestine during ERCP, a combination of ERCP and PTC may be used.

Radiotherapy is occasionally used to treat bile duct cancer. It treats cancer by using high-energy x-rays to destroy cancer cells while doing as little harm as possible to normal cells. It may be given either externally from a radiotherapy machine, or internally by placing radioactive material close to the tumour.
Chemotherapy is the use of anti-cancer (cytotoxic) drugs to destroy cancer cells; they work by disrupting the growth of cancer cells. Occasionally, chemotherapy may be given in combination with radiotherapy for cancers that cannot be removed surgically. Researchers are still looking into how effective chemotherapy is for the treatment of bile duct cancer.

Photodynamic therapy (PDT)

PDT uses a combination of laser light of a specific wavelength and a light-sensitive drug to destroy cancer cells. In bile duct cancer it is used to help relieve symptoms. The light-sensitive drug (a photosensitising agent) is injected into a vein. It circulates in the bloodstream and enters cells throughout the body, entering more cancer cells than healthy cells. It does nothing until it is exposed to laser light of a particular wavelength: when a laser is shone on to the cancer, the drug becomes active and destroys the cancer cells.

Research into treatments for bile duct cancer is ongoing and advances are being made. Cancer doctors use clinical trials to assess new treatments, and you may be asked to take part in one. Your doctor must discuss the treatment with you so that you have a full understanding of the trial and what it means to take part.
It is easy to get excited or frightened about the predictive powers of DNA phenotyping, depending on your perspective. Knowing which genes led to higher intelligence and athletic ability was the first step towards the designer babies of GATTACA. Is this knowledge worth having, given the potential for misuse? Going to such extremes with genetic selection makes for a captivating movie, but it can lead to a flawed understanding of the science. The reality of DNA phenotyping is not so scary.

How does DNA phenotyping work?

DNA phenotyping is our attempt at replicating what our bodies do naturally: translating DNA into our physical appearances. It is only an attempt because there is rarely a direct correlation between a single gene and a single physical feature. Forensic scientists are currently focusing on determining facial features. Much of our understanding has been gleaned from whole-genome studies in which scientists compare data from over 7,000 points on participants' faces to sections of their DNA that contain single nucleotide polymorphisms (SNPs)—that is, sections of DNA that differ by a single letter of the genetic code. Comparing facial maps to genes allows scientists to calculate the probability of physical traits based on the presence of particular SNPs. Predictive algorithms are then used to render an image of a face based on those probabilities. There is one question that really matters to most people: how well does this all work?

What can DNA phenotyping currently predict?
- Eye color – 77 genes identified
- Hair color – 32 genes identified
- Skin color – 31 genes identified
Dr. Manfred Kayser neatly summarized the specific genes and their corresponding references in a single table in his 2015 paper. These three pigment traits are a good start, but they are a far cry from generating an accurate image of a face. Determining ethnicity is currently accurate only at broader levels, like European, African and so on. Dr. David Ballard has more to say in this video:

What will DNA phenotyping be able to predict in the future?
- Age – we can broadly detect 20-year age groups, but more precise measurement is more than a few years away. DNA methylation may be one short-term option to narrow the predictions to 3–5-year age groups.
- Height – height is complicated; the more SNPs we look at, the more of the variance in height we can explain. Currently up to 29% of the variance can be explained. Improvement beyond identifying someone as extremely tall or short is likely in the distant future.
- Baldness – early-onset baldness has been linked to 12 genes, most significantly to one region of the X chromosome. Late-onset baldness, which is more common, is harder to identify, but there haven't been many studies of it. This could improve in the near future.
- Facial structure – many DNA variants determine facial morphology. Five genes have been found to play a role, but it will take considerably more research to improve facial reconstruction. This is likely something for the distant future.
Intelligence, athletic ability and beyond are just that—beyond. These are not coming anytime soon. The same knowledge that could lead to an understanding of these complex traits could also lead to a better understanding of complex disorders like obesity, high blood pressure, diabetes and cancer (unlike "simple" diseases caused by a single gene, like cystic fibrosis). It is important for skeptics to stay vigilant about potential misuses of scientific knowledge, but is it really worthwhile to halt progress on these important fronts when we already have numerous tests for intelligence and athletic ability at schools and playgrounds?

Learn more by attending the DNA phenotyping workshop at ISHI

Much progress is being made in the way of converting possibilities into realities with DNA phenotyping.
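The prediction step described above — turning SNP genotypes into trait probabilities — can be sketched as a simple additive model with a logistic link. This is a toy illustration only: the SNP identifiers, weights, and intercept below are invented for the example and do not come from any published phenotyping model (real systems, such as the IrisPlex eye-colour test, use multinomial models fitted to large cohorts).

```python
import math

# Toy sketch of SNP-based trait prediction. The SNP IDs and weights
# are hypothetical placeholders, not real association results.
TOY_WEIGHTS = {           # per-allele log-odds contributions (assumed)
    "rs0000001": 1.8,
    "rs0000002": 0.4,
    "rs0000003": -0.6,
}
INTERCEPT = -1.0          # baseline log-odds (assumed)

def predict_trait_probability(genotype):
    """genotype maps SNP id -> count of the effect allele (0, 1 or 2).

    Sums the weighted allele counts into a log-odds score, then maps
    it to a probability with the logistic function.
    """
    score = INTERCEPT + sum(TOY_WEIGHTS[snp] * genotype.get(snp, 0)
                            for snp in TOY_WEIGHTS)
    return 1.0 / (1.0 + math.exp(-score))

p = predict_trait_probability({"rs0000001": 2, "rs0000002": 1})
print(round(p, 3))   # → 0.953
```

Each additional copy of an effect allele shifts the log-odds by that SNP's weight, which is why traits driven by a handful of strong-effect SNPs (eye colour) are far easier to predict than traits like height, where thousands of tiny effects must be summed.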
The International Symposium on Human Identification is holding a workshop about DNA phenotyping on Sunday, September 25, 2016, from 1–5 pm. Dr. Manfred Kayser and Dr. Eran Elhaik will be the featured speakers during the workshop.