Here's one way you could charge electric cars in the future. A South Korean city is testing electric buses that draw their charge from cables buried beneath the road. The cables create magnetic fields that a device on the underside of each bus converts into electricity. (In principle, they're enormous versions of the induction chargers that power toothbrushes and smartphones wirelessly.) The charging works both while the buses are driving and while they're sitting still.

Right now there are two of the buses, and they run back and forth along a central city route that's about seven and a half miles long, or 15 miles round-trip. The Gumi government plans to add 10 more so-called OLEV (Online Electric Vehicle) buses by 2015, according to the Korea Advanced Institute of Science and Technology, whose electrical engineers developed the OLEVs. The institute says the electromagnetic field the cables create is weak enough to be safe for pedestrians, and the cables switch on only when they detect OLEV buses passing over them. Only 5 to 15 percent of a roadway needs cables embedded in it for the buses to run.
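That 5-to-15-percent figure is easy to put in perspective. The sketch below is illustrative arithmetic only; the 15-mile round trip is the article's number, everything else is derived from it.

```python
# Rough sketch: how much of the Gumi route would actually need
# powered cable, using the 5-15 percent range from the article.

def powered_cable_miles(route_miles: float, fraction: float) -> float:
    """Length of roadway that needs embedded charging cable."""
    return route_miles * fraction

round_trip = 15.0  # miles, per the article
low = powered_cable_miles(round_trip, 0.05)
high = powered_cable_miles(round_trip, 0.15)
print(f"Powered cable needed: {low:.2f}-{high:.2f} miles of {round_trip} total")
# -> Powered cable needed: 0.75-2.25 miles of 15.0 total
```

In other words, well under a sixth of the route carries any cable at all, which is what keeps the infrastructure cost plausible.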
Definitions of disaster
n. - An unpropitious or baleful aspect of a planet or star; malevolent influence of a heavenly body; hence, an ill portent.
n. - An adverse or unfortunate event, esp. a sudden and extraordinary misfortune; a calamity; a serious mishap.
v. t. - To blast by the influence of a baleful star.
v. t. - To bring harm upon; to injure.
The word "disaster" uses 8 letters: A D E I R S S T.
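The letter inventory above can be reproduced mechanically; a quick illustrative check in Python (not from the source page):

```python
# Sort the word's letters to recover the "8 letters: A D E I R S S T" line.
word = "disaster"
letters = sorted(word.upper())
print(len(letters), " ".join(letters))
# -> 8 A D E I R S S T
```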
At the low end of the environmental impact scale, a no-till corn-soybean rotation where 38 percent of the stover was removed emitted 2.7 tons of carbon dioxide per acre and yielded 2 tons of stover.

"Combining no-till with continuous corn cultivation when stover is removed was capable of slightly lower sediment loss than the baseline today without any stover removal," Gramig said. "Introducing cover crops or replacing nitrogen that is removed with stover at lower rates was not considered in our study but should further reduce environmental impacts. These practices require additional study and would involve offsetting costs and savings."

Perhaps not surprisingly, researchers found that removing stover increased production costs over the predominant corn-soybean rotation in place today. Most of that cost is attributed to replacing the nitrogen contained in stover. Stover removal was found to have the lowest cost when collected from corn grown in rotation with soybeans.

"For a given crop rotation and tillage system, as we simulated an increase in the rate of stover removal we found an increase in loss of sediment from crop fields, an increase in greenhouse gas flux to the atmosphere and a reduction in nitrate and total phosphorus delivered to waterways," Gramig said. "While optimizing production to maximize stover harvest at the lowest possible cost may lead to a reduction in nutrients delivered to rivers and streams, this comes at the expense of increased soil erosion and greenhouse gas emissions."

More study is needed to identify less environmentally harmful stover removal practices, Gramig said. "In the meantime, farmers can use no-till to reduce the amount of sediment loss," he said. "Additional practices, such as the use of cover crops, are going to be necessary if we want to try to reduce greenhouse gas loss.
We also need to determine what the correct nitrogen replacement rate is to maintain long-term soil productivity while minimizing nitrogen loss, whether to the atmosphere or to waterways."

"Environmental and Economic Trade-Offs in a Watershed When Using Corn Stover for Bioenergy" appears in the January 2013 issue of Environmental Science & Technology. It is available online at http://dx.doi.org/10.1021/es303459h.
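The low-end figures quoted earlier (2.7 tons of CO2 per acre against 2 tons of stover per acre) imply a simple emissions intensity per ton of stover harvested; this back-of-the-envelope check is illustrative only and not a figure from the study itself.

```python
# Back-of-the-envelope check on the article's low-end scenario:
# 2.7 tons CO2 emitted per acre, 2 tons of stover yielded per acre.
co2_tons_per_acre = 2.7
stover_tons_per_acre = 2.0

intensity = co2_tons_per_acre / stover_tons_per_acre
print(f"{intensity:.2f} tons CO2 per ton of stover removed")
# -> 1.35 tons CO2 per ton of stover removed
```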
Radar Sensor Being Developed for Intersection Management
December 23, 2021

Researchers at the University of Arizona are developing a high-resolution radar sensor that can reliably distinguish between cars and pedestrians at intersections. The sensors developed by that research – funded in part by the National Institute for Transportation and Communities, or NITC, one of seven U.S. Department of Transportation national university transportation centers – also supply the counts, speed, and direction of each moving target, no matter the lighting and weather conditions.

According to a new report – entitled Development of Intelligent Multimodal Traffic Monitoring using Radar Sensor at Intersections – the researchers developed a prototype based on a high-resolution millimeter-wave or "mmWave" radar sensor that outperforms cameras in low-visibility conditions and beats conventional radar by providing a richer picture.

"The key problem in multimodal traffic monitoring is finding the speed and volume of each mode," explained Siyang Cao, one of the university's researchers on this project, in a statement. "A sensor must be able to detect, track, classify, and measure the speed of an object, while also being low-cost and having low power consumption. With real-time traffic statistics we hope to improve traffic efficiency and also reduce the incidence of crashes."

Cao noted that mmWave radar also differs from other sensors in that it provides relatively stable radial-velocity measurements, which are very helpful for identifying vehicle speeds. That offers advantages over Light Detection and Ranging, or LiDAR-based, sensors, Cao added. LiDAR systems are able to "see" in greater detail, he said, making it easy to determine what an object is; however, they have difficulty with movement and speed.
By contrast, mmWave radar can resolve the speed of a moving target much more reliably than LiDAR, Cao said, and it works on a lower frequency band than LiDAR, making it more stable in weather conditions such as rain, snow, fog, and smoke.

"We realize that sensor technology is moving to a stage that's going to have a lot of new applications. On one hand, the cost of sensors is dropping and their performance is improving significantly. Meanwhile the surrounding technology – for example battery technology, communications, and computation enabled by artificial intelligence – is also improving," Cao pointed out. "For multimodal traffic monitoring, a sensor can collect information that can be shared with drivers via a next-generation communication network to improve mobility and safety at intersections," Cao said.

In the future, the university's researchers hope to refine the model further so that such radar sensors can identify additional modes such as motorcycles, bikes, trucks, and buses.
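The "stable radial velocity" property Cao describes comes from the Doppler relation: a moving target shifts the reflected carrier frequency in proportion to its radial speed. The sketch below assumes a 77 GHz carrier, a common automotive mmWave band; the article does not state the actual frequency used.

```python
# Sketch of the Doppler relation behind mmWave radial-velocity
# measurement. The 77 GHz carrier is an assumption (a common
# automotive mmWave band), not a figure from the article.
C = 3.0e8          # speed of light, m/s
F_CARRIER = 77e9   # assumed carrier frequency, Hz

def doppler_shift_hz(radial_velocity_mps: float) -> float:
    """Two-way Doppler shift for a target at the given radial speed."""
    return 2.0 * radial_velocity_mps * F_CARRIER / C

def radial_velocity_mps(doppler_hz: float) -> float:
    """Invert the measured shift back to a radial speed."""
    return doppler_hz * C / (2.0 * F_CARRIER)

shift = doppler_shift_hz(10.0)   # a target closing at 10 m/s (36 km/h)
print(f"{shift:.0f} Hz -> {radial_velocity_mps(shift):.1f} m/s")
```

A speed of a few meters per second thus maps to a kilohertz-scale shift, which is straightforward to measure even when the optical scene is degraded by rain or fog.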
In the modern day, when we see our environment depleting and natural resources being consumed with every passing minute, we must all do what we can to preserve nature and save the environment. One of the easier things we can do is solar power our houses: rather than using electricity for heating water or heating the house, we can use the heat of the sun. The following are the eight most effective ways to solar power your house.
- The first thing you can do is insulate the house in the best possible way, so that maximum heat stays trapped inside. To do this, you can add blown-in cellulose to the attic, renovate insulated walls that are not performing properly, and replace leaking windows.
- For maximum solar heat to reach the house, the house should face south (in the northern hemisphere). If this is not possible, you can remove windows that face north.
- Another way to solar power your house is to add large south-facing windows made with insulated glass. Try to reduce the passage of heat from both the outside and the inside; windows with a vacuum trapped between the glass panes help keep the most heat in.
- To make the most of the sun's heat rather than relying on electric heating, have solar panels installed on your roof. The panels should face south so that they capture the maximum solar heat.
- You can also install a pump that circulates water from the panels through pipes in the floor and back to the panels.
- You can insulate those pipes with tubular foam insulation, which keeps the heat from escaping before the water reaches the slabs.
- You can also plant deciduous trees around the house so that they provide shade from the heat in the summer months.
And in winter, those trees will let sunlight into the house once their leaves fall off.
- Finally, you can pour a concrete slab in your basement and embed in it the pipes that carry water from the solar panels; the slab stores the heat and releases it slowly.
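The insulation and window advice above comes down to one relation: steady-state heat loss through a surface is roughly Q = U × A × ΔT. The U-values in this sketch are typical illustrative figures for single and insulated double glazing, not measurements from this article.

```python
# Why the window advice matters: heat loss through a window is
# roughly Q = U * A * deltaT. U-values below are assumed typical
# figures (single pane ~5.8, insulated double pane ~1.1 W/m^2K).
def heat_loss_watts(u_value: float, area_m2: float, delta_t: float) -> float:
    """U in W/(m^2*K), area in m^2, indoor-outdoor difference in K."""
    return u_value * area_m2 * delta_t

AREA, DELTA_T = 2.0, 20.0  # one 2 m^2 window, 20 K temperature difference
single = heat_loss_watts(5.8, AREA, DELTA_T)
double = heat_loss_watts(1.1, AREA, DELTA_T)
print(f"single pane: {single:.0f} W, insulated double pane: {double:.0f} W")
# -> single pane: 232 W, insulated double pane: 44 W
```

Swapping one leaky single-pane window for insulated glass cuts the loss through that window by roughly a factor of five under these assumptions, which is why the list leads with insulation.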
Nursing Case Study Proof Reading Services

The analysis of the case study shows that the patient is highly vulnerable to the development of cerebrovascular disorders. Heart disease ranges over a number of conditions that affect the heart and is mainly an umbrella term that can include blood-vessel disorders such as coronary artery disease, as well as heart-rhythm problems called arrhythmias, among others. Health care professionals often use the terms heart disorder and cardiovascular disorder interchangeably. Cardiovascular disorders mainly refer to conditions involving blocked or narrowed blood vessels, which can lead to heart attack, chest pain, and stroke. The patient is exposed to the risk of developing any of these disorders. A number of risk factors are highly associated with their occurrence, and many of those risk factors are present in this patient. High blood pressure is one of the most important risk factors for cardiovascular disorder; the patient has high blood pressure, or hypertension, and is therefore vulnerable to developing it. Poor lifestyle factors such as a lack of physical exercise, smoking, and alcohol consumption can raise blood pressure, and these factors increase the chance of heart disorders in different patients (Chen et al., 2015). The patient is also physically inactive: she only walks her dog for 15 minutes every day, which is not a sufficient amount of exercise. Researchers are of the opinion that 30 minutes of moderate to vigorous exercise is very important to remain fit and healthy. Therefore, the lack of an adequate amount of physical exercise may act as one of the contributing factors for the disorder.
Physical exercise helps in maintaining blood pressure and controlling body weight. It helps reduce waist circumference and thereby the risk of cardiovascular disorders. Researchers are of the opinion that smoking even a few cigarettes per day can increase a patient's chance of developing heart disorders. Smoking clogs the arteries and is known to contribute to the development of atherosclerosis. It narrows and clogs the arteries, which can reduce the blood supply and the amount of oxygen available throughout the body (Mandviwala, Khalid & Deswal, 2016). The chemicals in the smoke can harm the blood vessels, damage the structure and function of the blood vessels, and affect the functions of the heart. Consumption of alcohol can result in weight gain, high triglycerides, high blood pressure, stroke, and other problems; therefore, consuming too much alcohol plays an important role in the development of cardiovascular disorders. Although the patient takes a restricted amount of alcohol each week, she needs to be careful about the amount so that these lifestyle and health behaviours do not result in such disorders. The patient also has a body weight of 87 kg and a height of 165 cm; she therefore has a body mass index, or BMI, of about 32, which places her in the category of obese individuals. She does not possess a healthy body weight, which is yet another contributor to the risk of developing cardiovascular disorders. Excess weight, especially around the abdomen, increases the risk of high blood pressure, high blood cholesterol, and diabetes and, in turn, indirectly increases the risk of heart disorders. She is obese, and this factor increases her risk of developing cardiovascular disorders. She also has a high level of cholesterol.
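The BMI figure quoted in the case study (87 kg at 165 cm giving a BMI of about 32) can be checked directly from the standard formula, BMI = weight / height²:

```python
# Check of the BMI figure quoted in the case study:
# 87 kg at 165 cm (1.65 m).
def bmi(weight_kg: float, height_m: float) -> float:
    return weight_kg / (height_m ** 2)

value = bmi(87.0, 1.65)
print(f"BMI = {value:.1f}")
# -> BMI = 32.0
```

A BMI of 30 or above falls in the obese range under the usual WHO classification, which is consistent with the case study's assessment.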
Studies have shown that high cholesterol levels increase the risk of plaque building up on the walls of the arteries, causing atherosclerosis. This narrows the arteries, affects the flow of blood to the heart, and disrupts heart function, which causes heart disorders. Another important factor that contributes to the development of CVD is the patient's ethnic background. South Asian people are more prone to heart disorders and are more likely to be affected by high blood pressure as well as type 2 diabetes. As the patient comes from such a South Asian background, there is a higher chance that she might also develop the disorder. Stress can also contribute to the disease, as stress increases the release of stress hormones. Persistent exposure of the body to unhealthy, elevated levels of stress hormones such as adrenaline and cortisol has been related by researchers to changes in the way blood clots (Carlisle et al., 2018), and this factor increases the risk of heart attack. The patient was stressed about her financial situation, and this stress might have contributed to the disorder. Therefore, all of these factors show that she is at higher risk of developing cardiovascular disorders.

Primary health care services encompass a large range of providers and care services in Australia. This service domain comprises care provided across the public, private, and governmental sectors. At the clinical level, it involves the first, or primary, layer of services encountered in health care and comprises different types of experts. Teams of health care professionals providing primary care include different types of nurses as well as allied health professionals, midwives, dentists, pharmacists, and Aboriginal health workers.
The nurses working in primary care services may include general practice nurses, community nurses, and nurse practitioners. All of them work together to provide comprehensive, continuous, person-centred care. Different types of care services are provided, including health promotion, prevention and screening, early intervention, treatment, and care management. Such services mainly target specific health and lifestyle conditions. Primary health care professionals handle various types of disorders and services, including sexual health issues, cardiovascular disorders, drug and alcohol services, oral health, diabetes, asthma, obesity, cancer, and mental health. Services are provided in homes and in community-based settings such as general practices and other private practices, community health, and local government and non-government settings. The patient was identified as highly vulnerable to the development of cardiovascular disorders in acute care, when she was admitted to the ward with a transient ischaemic attack. The nursing professionals should refer her to the primary health care centre in the community where she resides. The general practitioner, or GP, along with the primary health care nurses, would undertake a screening procedure to identify the risks she possesses and accordingly help her understand her vulnerability to heart disorders. Following the GP's advice, the primary health care nurse would develop a care plan covering the medication she would need to initiate. The nurse would also help her identify the health behaviours she needs to change to overcome these risks, educate her about the disorder, and make her understand the risks associated with it.
The nurse would also refer her to other community services where she would get further support regarding her self-care strategies and management. Community-based health care centres have experts and nurses who provide community-based rehabilitation, or CBR, services to patients in Australia. The World Health Organization supports these services, believing that they enhance the quality of life of people with disabilities and their families, helping them meet their basic needs and ensuring participation and inclusion in society. The community-based experts from whom the patient can get services include dieticians, who would help her develop a diet plan to decrease her weight. Another service she can access is social work. She has already had a stroke and has been identified with a large number of risk factors, including high cholesterol, high blood pressure, and stress; social workers would therefore help her with her everyday activities, take extra care with her medications, and motivate her constantly towards healthy lifestyle behaviours. She could also take occupational therapy sessions to help her cope with her present situation and return to normal life, which would allow her to join Uber and continue her livelihood. The services discussed act as frontline care services. Patients who attend these care centres can not only identify the disorders that have occurred but also understand the disorders they are vulnerable to and may develop in future. Therefore, the best aspect of these forms of treatment is that they can provide preventive services alongside curative services, and these preventive services save patients from suffering by warning them beforehand.
The advice they provide helps not only patients' physical health but also their mental, emotional, and financial conditions, as care follows the bio-psycho-social model. These services provide holistic care covering the physiological, social, and psychological determinants of health, and hence bring about the best outcomes for patients. The challenges the patient might face are financial constraints and poor health literacy, which could affect her access to care from primary health care and community services. One challenge is her financial situation: she is very concerned about it, as is evident from the stress she experienced when health care professionals advised her not to drive for Uber. Her poor financial condition might leave her unable to afford all the services she is referred to, which could affect her health and well-being. Another challenge that can create issues is her poor health literacy. She believes that she does not require treatment and that she would have recovered even without admission to the emergency ward, which shows her poor health literacy. Health literacy helps individuals make correct health care decisions (Lyles & Sarkar, 2015). These two factors can act as challenges. One health care gap that can be identified in the service is the lack of culturally competent care. Culturally and linguistically diverse communities often report that they cannot get care services that align with their cultural traditions, expectations, and inhibitions (Truong et al., 2017). There is therefore a high chance that the patient might not receive culturally competent care from health care professionals and might feel disrespected and dishonoured. Hence, care services need to be improved to meet the cultural needs of diverse communities.

1. Carlisle, K., Farmer, J., Taylor, J., Larkins, S., & Evans, R. (2018).
Evaluating community participation: A comparison of participatory approaches in the planning and implementation of new primary health-care services in northern Australia. The International Journal of Health Planning and Management. https://doi.org/10.1002/hpm.2523
2. Chen, W., Thomas, J., Sadatsafavi, M., & FitzGerald, J. M. (2015). Risk of cardiovascular comorbidity in patients with chronic obstructive pulmonary disease: a systematic review and meta-analysis. The Lancet Respiratory Medicine, 3(8), 631-639. https://doi.org/10.1016/S2213-2600(15)00241-6
3. Lyles, C. R., & Sarkar, U. (2015). Health literacy, vulnerable patients, and health information technology use: where do we go from here?. Journal of General Internal Medicine. https://doi.org/10.1007/s11606-014-3166-5
4. Mandviwala, T., Khalid, U., & Deswal, A. (2016). Obesity and cardiovascular disease: a risk factor or a risk marker?. Current Atherosclerosis Reports, 18(5), 21. https://doi.org/10.1007/s11883-016-0575-4
5. McKittrick, R., & McKenzie, R. (2018). A narrative review and synthesis to inform health workforce preparation for the Health Care Homes model in primary healthcare in Australia. Australian Journal of Primary Health, 24(4), 317-329. https://doi.org/10.1071/PY18045
6. Truong, M., Gibbs, L., Paradies, Y., Priest, N., & Tadic, M. (2017). Cultural competence in the community health context: 'we don't have to reinvent the wheel'. Australian Journal of Primary Health, 23(4), 342-347. https://doi.org/10.1071/PY16073
NFTs, or non-fungible tokens, are a type of cryptographic token that represents ownership of a unique asset, such as a piece of art, a collectible item, or a piece of digital content. They can be used in a variety of ways by brands, intellectual property holders, and clothing merchants to create new opportunities for customer engagement and loyalty.

Brands can use NFTs to create loyalty programs that reward customers with unique, collectible tokens. For example, the fashion brand Gucci has launched a loyalty program that rewards customers with NFTs for their purchases, which can be used to unlock exclusive content and discounts within the brand's ecosystem (CoinDesk, 2021).
• Pros: NFTs can add value and exclusivity to loyalty programs, and can be used to reward customers in a way that is more engaging and interactive than traditional loyalty rewards.
• Cons: NFTs may not be accessible to everyone, as they require a certain level of familiarity with cryptocurrency and blockchain technology.

Clothing merchants can use NFTs to create unique, limited-edition items that are authenticated and verified as genuine through blockchain technology. This can add value to the clothing and create a sense of exclusivity for the customers who own them. For example, the streetwear brand Supreme has released a series of limited-edition NFTs featuring art by famous artists, which have sold for as much as $100,000 (Decrypt, 2021).
• Pros: NFTs can add value and exclusivity to clothing items, and can be used to authenticate the authenticity and provenance of the items.
• Cons: NFTs may not appeal to all customers, and the high price of some NFT clothing items may make them inaccessible to some consumers.

Brands and merchants can also use NFTs to offer premium, one-of-a-kind items to customers. These items could be anything from exclusive pieces of art to rare collectibles.
For example, the artist Beeple has sold a single NFT for over $69 million at Christie's auction house, making it the most expensive NFT ever sold (Forbes, 2021).
• Pros: NFTs can be used to offer truly unique and exclusive items to customers, which can be a powerful marketing and engagement tool.
• Cons: NFTs may not appeal to all customers, and the high price of some NFT items may make them inaccessible to some consumers.

NFTs can also be used to create immersive experiences for customers, such as virtual-reality events or interactive installations. Customers can use their NFTs to access these experiences and engage with the brand in a new and exciting way. For example, the music festival Electric Daisy Carnival has announced plans to use NFTs to offer virtual-reality experiences to fans, allowing them to attend the festival from anywhere in the world (EDM.com, 2021).
• Pros: NFTs can be used to offer immersive and interactive experiences to customers, creating new opportunities for engagement and loyalty.
• Cons: NFTs may not be accessible to all customers, as they require a certain level of familiarity with cryptocurrency and blockchain technology.

Overall, NFTs offer a wide range of possibilities for brands and merchants looking to engage and reward their customers in new and innovative ways. However, it is important to carefully consider the pros and cons of using NFTs, and to ensure that they are accessible and appealing to a wide range of customers.

In order to successfully implement NFTs into a loyalty program, brand, or retail business, it is important to consider the following factors:

As mentioned, NFTs may not be accessible to everyone due to the technical knowledge and resources required to use them. It is important to consider how to make NFTs accessible to a broad range of customers, such as through educational resources or partnerships with companies that can provide the necessary infrastructure.
It is important to clearly define the value that NFTs will offer to customers. This could be in the form of exclusive content, discounts, or other perks that are only available through the use of NFTs.

In order to fully leverage the benefits of NFTs, it is important to integrate them seamlessly into a brand's existing systems and processes. This could include integrating NFTs into a loyalty program, or using them as a way to authenticate and verify the authenticity of products.

It is important to effectively communicate the value and benefits of NFTs to customers in order to drive adoption and engagement. This could include educational resources, marketing campaigns, or partnerships with influencers or industry leaders.

Overall, NFTs have the potential to offer a wide range of benefits to brands, intellectual property holders, and clothing merchants looking to engage and reward their customers. By carefully considering the pros and cons, and taking steps to ensure accessibility and a strong value proposition, businesses can successfully implement NFTs into their operations and drive customer loyalty.
Artificial Intelligence (AI) is solving problems that seemed well beyond our reach just a few years back. Using deep learning, the fastest-growing segment of AI, computers are now able to learn and recognize patterns from data that were considered too complex for expert-written software. Today, deep learning is transforming every industry, including automotive, healthcare, retail, and financial services. This introduction to deep learning will explore key fundamentals and opportunities, as well as current challenges and how to address them.
- Demystifying Artificial Intelligence, Machine Learning and Deep Learning
- Key challenges organizations face in adopting this new approach
- How GPU deep learning and software, along with training resources, can deliver breakthrough results

Will Ramey, Director, Developer Marketing, NVIDIA
Will Ramey is NVIDIA's director of developer marketing. Prior to joining NVIDIA in 2003, he managed an independent game studio and developed advanced technology for the entertainment industry as a product manager and software engineer. He holds a BA in computer science from Willamette University and completed the Japan Studies Program at Tokyo International University. Outside of work, Will learns something new every day, usually from his two kids. He enjoys hiking, camping, open-water swimming, and playing The Game.
Oexmann, Mary Joan
Submitted to: American Dietetic Association Annual Meeting
Publication Type: Book / Chapter
Publication Acceptance Date: 3/18/1999
Publication Date: N/A

Technical Abstract: Balance studies are a classic technique used to determine human nutrient requirements. The fundamental components of a balance study are the measurement of nutrient intake and of nutrient output and losses. An additional technique that allows broader interpretation of balance studies is compartmental modeling with the use of stable isotopes, which are naturally occurring, non-radioactive isotopes useful for investigating the metabolism of energy, water, macronutrients, and micronutrients. When partnered with up-to-date analytical techniques and mathematical modeling methods, stable isotopes provide new research approaches for understanding nutrient transfer among different tissues and organs and in vivo metabolic processes. This chapter discusses several types of compartmental modeling studies and the kinds of research questions such studies can answer. Practical issues are addressed through examples and in details of the techniques that can help to enhance studies with a controlled-diet component. The focus of the information is on those aspects which must be considered in the design and implementation of such studies. This information will be useful to dieticians and study coordinators designing balance studies and collaborating with compartmental modelers.
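The simplest compartmental model of the kind the abstract alludes to is a one-compartment tracer washout, where isotope enrichment decays exponentially: C(t) = C₀·exp(−k·t). The sketch below is a minimal illustration with hypothetical parameters, not a model from the chapter itself.

```python
# Minimal one-compartment tracer washout model:
# enrichment decays as C(t) = C0 * exp(-k * t).
# The rate constant and starting enrichment are hypothetical.
import math

def enrichment(c0: float, k: float, t: float) -> float:
    """Tracer enrichment at time t (h) for elimination rate k (1/h)."""
    return c0 * math.exp(-k * t)

K = math.log(2) / 4.0   # rate constant chosen for a 4-hour half-life
c0 = 100.0              # initial enrichment (arbitrary units)
print(f"after 4 h: {enrichment(c0, K, 4.0):.1f}")   # half of c0
print(f"after 8 h: {enrichment(c0, K, 8.0):.1f}")   # a quarter of c0
```

Fitting k to measured enrichment over time is what lets a modeler turn a series of isotope samples into pool sizes and flux rates, which is the payoff the abstract describes.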
<urn:uuid:edc9711f-dece-4d9a-8b3d-e4a1c2c33274>
CC-MAIN-2023-23
https://www.ars.usda.gov/research/publications/publication/?seqNo115=96211
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224652569.73/warc/CC-MAIN-20230606114156-20230606144156-00404.warc.gz
en
0.881289
311
2.515625
3
All right, class. Who can tell me the answer to this question? If Susie has two lesbian "mommies" and Sam has two transgender "daddies," how many adults who have contributed to California's glorious history are in the two groups? It may come to that, if Golden State teachers want to squeeze education into the curriculum at public schools. Basic education may have to be squeezed in to more politically correct lessons. California Gov. Jerry Brown has just signed into law a bill requiring that public schools include in social studies lessons the contributions of gays, lesbians, transgender people and bisexuals. Controversy over the measure has focused on the moral and social aspects of the law. That misses a critical point - and before you laugh (or grit your teeth, as the case may be) about the Californians, consider the new law is merely a symptom of a big problem facing schools in every state, including West Virginia and Ohio. It is micro-managing of public school curriculums by politicians. Think about this: How much time and effort will California teachers have to put into finding ways to work gays, lesbians and the transgendered into history lessons? Heaven help them when they have to do the research to find prominent bisexuals to talk about. How many classroom hours will the teachers have to devote to so-called LGBT lessons to meet the state law? How many state department of education bureaucrats will be paid to monitor compliance? Now, let's have a multiplication exercise: How much more time, effort and money goes into meeting California's other laws demanding specific instruction on the contributions of women, African-Americans, Mexican-Americans, Asian-Americans, European-Americans, American Indians, entrepreneurs and organized labor? Whatever happened to just teaching history? Is there any time left in the school year for that? Every state has detailed requirements for what public schools must teach. 
West Virginia and Ohio educators have to keep their eyes on hundreds of specific "learning outcomes." Teachers' lesson plans are reviewed to ensure they comply. Standardized tests check whether students grasp the material. Most educators hate the system. They have a point. But, defenders argue, how else can we guarantee our children will be taught the basics? There lies the contradiction that may well be the most serious threat to public education today. If curriculum laws indeed were limited to the basics, I doubt there would be a problem. They are not, however. In what universe is LGBT information basic? Instead of passing new laws requiring specific - and idiotic - lessons in schools, why not really get back to the basics? Here in West Virginia and Ohio, legislators could craft a good model for the rest of the country by mandating state departments of education simplify curriculum rules. If it's not truly basic, no matter how many special interests demand it, eliminate it. That would never happen in California, of course. But here in West Virginia and Ohio, we're better than that. Myer can be reached at: Myer@news-register.net.
<urn:uuid:66961d7b-bc90-4d78-82f0-db2766300a42>
CC-MAIN-2014-23
http://www.theintelligencer.net/page/content.detail/id/563735/Really-Get-Back-To-Basics.html?nav=509
s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510274979.56/warc/CC-MAIN-20140728011754-00176-ip-10-146-231-18.ec2.internal.warc.gz
en
0.957964
626
2.515625
3
Engineers who know a little about computer networks or Kubernetes networking have probably heard of the Overlay network. It is not a new technology: an Overlay network is a computer network built on top of another network, a form of network virtualization that the evolution of cloud computing and virtualization has pushed forward in recent years. Because an Overlay network is a virtual network built on top of another computer network, it cannot stand alone; the network the Overlay layer depends on is the Underlay network, and the two concepts usually appear in pairs. The Underlay network is the infrastructure layer that carries user IP traffic, and its relationship to the Overlay network is somewhat like that of a physical machine to a virtual machine: both Overlay networks and virtual machines are virtualized layers implemented in software on top of lower-level entities.

Before analyzing the role of the Overlay network, we need a general understanding of its most common implementation. In practice, we usually use Virtual Extensible LAN (VxLAN) to form an Overlay network. In the following figure, two physical machines can reach each other over a Layer 3 IP network. VxLAN uses VxLAN Tunnel End Point (VTEP) devices to encapsulate and decapsulate the packets that servers send and receive. For example, the VTEP in Server 1 needs to know that to reach the 10.0.0.2 virtual machine in the green network, it must send traffic to Server 2’s physical IP address. These mappings can be configured manually by the network administrator, learned automatically, or distributed by an upper-level manager. When the green 10.0.0.1 virtual machine wants to send data to the green 10.0.0.2, the data goes through the following steps.

- The green 10.0.0.1 sends IP packets to the VTEP.
- The VTEP of Server 1 receives the packet sent by 10.0.0.1.
- It obtains the MAC address of the destination virtual machine from the received IP packet.
- It looks up in its local forwarding table the physical IP address of the server where this MAC address lives, i.e. Server 2.
- It constructs a new UDP packet whose payload is the virtual network identifier (VxLAN Network Identifier, VNI) of the green VM’s network plus the original IP packet.
- It sends the new UDP packet onto the network.
- The VTEP on Server 2 receives the UDP packet.
- It removes the protocol header from the UDP packet.
- It checks the VNI in the packet.
- It forwards the inner IP packet to the target green VM 10.0.0.2.
- The green 10.0.0.2 receives the packet from the green 10.0.0.1.

During this transmission, the two communicating parties are unaware of the transformations made by the underlying network; they believe they can reach each other over a Layer 2 network, while in fact they are relayed through a Layer 3 IP network and connected by the tunnel established between the VTEPs. Overlay networks can use the underlying network to form a Layer 2 network spanning multiple data centers, but the packet encapsulation and decapsulation also add overhead, so why do our clusters need Overlay networks? Overlay networks solve three problems:

- Migration of VMs and instances within clusters, across clusters, or between data centers is common in cloud computing.
- The number of VMs in a single cluster can be very large, and the resulting volume of MAC addresses and ARP requests puts tremendous pressure on network devices.
- The traditional network isolation technology, VLAN, can only create 4096 virtual networks, while public clouds and large-scale virtualized clusters need more virtual networks to meet the demand for network isolation.
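The header manipulation at the center of these steps is compact enough to sketch. The snippet below is an illustrative sketch, not part of the original article and not a working VTEP: the 8-byte header layout (a flag bit marking the VNI as valid, plus a 24-bit VNI) follows the VxLAN specification (RFC 7348), while the frame bytes and VNI value are invented for the example.

```python
import struct

VXLAN_UDP_PORT = 4789  # IANA-assigned destination port for VxLAN


def vxlan_encapsulate(vni: int, inner_frame: bytes) -> bytes:
    """Prepend the 8-byte VxLAN header to an inner Layer 2 frame.

    The result becomes the payload of a UDP packet exchanged
    between VTEPs over the Underlay IP network.
    """
    if not 0 <= vni < 2 ** 24:
        raise ValueError("VNI must fit in 24 bits")
    # Header: flags byte (0x08 = VNI field is valid), 3 reserved
    # bytes, then the 24-bit VNI followed by 1 reserved byte.
    header = struct.pack("!B3xI", 0x08, vni << 8)
    return header + inner_frame


def vxlan_decapsulate(payload: bytes) -> tuple:
    """Strip the VxLAN header; return (vni, inner_frame)."""
    flags, vni_field = struct.unpack("!B3xI", payload[:8])
    if not flags & 0x08:
        raise ValueError("VNI-valid flag not set")
    return vni_field >> 8, payload[8:]


# Round trip with a made-up frame and VNI:
frame = bytes.fromhex("aabbccddeeff") + b"example inner packet"
packet = vxlan_encapsulate(vni=5001, inner_frame=frame)
print(vxlan_decapsulate(packet) == (5001, frame))  # True
```

A real VTEP would additionally build the outer IP and UDP headers and consult a forwarding table mapping inner MAC addresses to remote VTEP IPs, as the steps above describe.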
Virtual Machine Migration

Kubernetes is now the de facto standard in container orchestration, and while many traditional industries still deploy services on physical machines, more and more computing tasks will run on virtual machines in the future. VM migration is the process of moving a VM from one physical hardware device to another. Large-scale VM migration within clusters is relatively common because of routine maintenance and updates, and large clusters of thousands of physical machines make it easier to schedule resources: we can use VM migration to improve resource utilization, tolerate VM errors, and improve node portability.

When the host where a virtual machine is located goes down for maintenance or other reasons, the current instance needs to be migrated to another host. To ensure that the business is not interrupted, the IP address must remain unchanged during migration. Because the Overlay implements a Layer 2 network on top of the network layer, multiple physical machines can form a virtual LAN as long as they are reachable at the network layer, and the virtual machines or containers remain in the same Layer 2 network after migration, so there is no need to change their IP addresses.

As shown in the figure above, although a migrated VM and the other VMs are located in different data centers, the migrated VM can still form a Layer 2 network with the VMs in the original cluster through the Overlay network, because the two data centers can be connected over IP. The hosts inside the cluster are not aware of, and do not care about, the underlying network architecture; they only know that the different VMs can reach each other.

Virtual Machine Scale

A traditional Layer 2 network relies on MAC addresses for communication and requires network devices to store forwarding tables mapping IP addresses to MAC addresses. Kubernetes, for example, officially supports clusters of up to 5,000 nodes.
If each of these 5,000 nodes contained only one container, the pressure on the internal network equipment would not be too great. In reality, however, a 5,000-node cluster contains tens or even hundreds of thousands of containers, and when one container sends an ARP request, every container in the cluster receives it, which creates a very high network load.

In an Overlay network built with VxLAN, the network re-encapsulates the data sent by virtual machines into IP packets, so the physical network only needs to know the MAC addresses of the different VTEPs. This shrinks the MAC address table from hundreds of thousands of entries down to a few thousand; ARP requests spread only among the VTEPs in the cluster, and a remote VTEP broadcasts the data only locally after unpacking it, without affecting the rest of the network. This still demands a lot from the network equipment in the cluster, but it greatly reduces the pressure on the core network devices.

Overlay networks are closely related to Software-Defined Networking (SDN), which introduces a data plane and a control plane: the data plane is responsible for forwarding data, and the control plane is responsible for computing and distributing forwarding tables. A network built with this technology can learn the MAC and ARP table entries in the traditional self-learning mode, but in a large-scale cluster we still need to introduce a control plane to distribute the routing and forwarding tables.

Network Isolation

Large-scale data centers often provide cloud computing services to the outside world, and the same physical cluster may be split into multiple small pieces assigned to different tenants. Because the data frames of a Layer 2 network may be broadcast, network isolation is needed between these tenants for security reasons, to prevent tenants’ traffic from affecting each other or being used for malicious attacks.
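The isolation limits in this discussion follow directly from identifier widths: each tagging scheme can express two to the power of its tag width distinct networks. A quick check (a sketch added for illustration, restating the figures this article uses):

```python
# Number of isolated networks each tagging scheme can express.
vlan_ids = 2 ** 12         # 802.1Q VLAN tag: 12-bit network ID
vni_ids = 2 ** 24          # VxLAN header: 24-bit VNI
qinq_ids = 2 ** (12 + 12)  # IEEE 802.1ad (Q-in-Q): two stacked 12-bit tags

print(vlan_ids)   # 4096
print(vni_ids)    # 16777216
print(qinq_ids)   # 16777216, the same space as a single 24-bit VNI
```

As the next paragraphs note, the fact that two stacked VLAN tags match the VNI space is why a larger ID space alone is not a sufficient reason to adopt VxLAN.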
Traditional network isolation uses virtual LAN (VLAN) technology. A VLAN uses 12 bits to represent the virtual network ID, so the maximum number of virtual networks is 4096, which is not enough for large-scale data centers. VxLAN uses a 24-bit VNI to identify virtual networks, for a total of 16,777,216, which is enough to meet the multi-tenant network isolation needs of a data center.

A larger number of virtual networks is really a benefit that comes hand-in-hand with VxLAN, though, and it should not be the decisive factor for using it. The VLAN extension IEEE 802.1ad allows us to include two 802.1Q protocol headers in an Ethernet frame, and two 12-bit VLAN IDs, 24 bits in total, can likewise represent 16,777,216 virtual networks, so trying to address network isolation is not by itself a sufficient reason to use VxLAN or Overlay networks.

Today’s data centers contain multiple clusters and a large number of physical machines. The Overlay network is an intermediate layer between the virtual machines and the underlying network devices; by using it, we can solve the migration problem of virtual machines, reduce the pressure on the Layer 2 core network devices, and provide a larger number of virtual networks.

- In a Layer 2 network using VxLAN, VMs remain reachable on the Layer 2 network after migration across clusters, availability zones, and data centers, which helps us ensure the availability of online services, improve the resource utilization of clusters, and tolerate VM and node failures.
- The number of virtual machines in a cluster may be tens of times that of the physical machines, so the number of MAC addresses may be one or two orders of magnitude larger than in a traditional cluster of physical machines, and it is difficult for network equipment to bear Layer 2 traffic at that scale; Overlay networks can reduce the MAC address table entries and ARP requests in a cluster through IP encapsulation and a control plane.
- The VxLAN protocol header uses a 24-bit VNI to identify virtual networks, which allows about 16 million virtual networks in total, and we can allocate network bandwidth separately to different virtual networks to meet the isolation needs of multiple tenants.

It should be noted that an Overlay network is only a virtual network on top of the physical network; using this technology does not by itself solve problems such as cluster scale. Nor is VxLAN the only way to form an Overlay network; we can consider different technologies in different scenarios, for example NVGRE, GRE, etc.

Finally, let’s look at some related open issues that interested readers can think through carefully:

- VxLAN encapsulates raw packets into UDP for distribution across the network; what methods do NVGRE and STT use to transmit data, respectively?
- What technologies or software should be used to deploy Overlay networks in Kubernetes?
<urn:uuid:344f2221-7e0b-43ab-85d1-81caba2d3bc3>
CC-MAIN-2023-23
https://www.sobyte.net/post/2021-12/whys-the-design-overlay-network/
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224643663.27/warc/CC-MAIN-20230528083025-20230528113025-00679.warc.gz
en
0.908388
2,232
3.71875
4
I’ve been reading two very different sorts of reading materials recently. The first are journal articles and scholarly books on the Late Antique and Early Medieval period in Western Europe as part of my preparation for my PhD exams that are coming up this Spring. The second are the works of Terry Pratchett as a way to relax my brain after the 10-12 hour work day of straight academic reading. Oddly enough, the two sets of readings have led me to ask an interesting question. Can we actually excavate ethnicity? Is that even the right question? In a number of Pratchett’s novels, there is a character named Carrot Ironfoundersson. He is a Lance-Constable in the Ankh-Morpork City Watch, is approximately 6’6″ and is a dwarf. How is an extremely tall individual a dwarf? He was adopted. However, he follows dwarfish rules throughout the book, is offended by dwarfish jokes, pines for the traditional dwarf foods, and has a crush on a dwarf girl in his first novel appearance. He is considered part of their ethnic group throughout the stories. If he were to die, he would be buried in the traditional dwarf way of having your axe with you. Hypothetically, if an Ankh-Morpork archaeologist were to excavate his grave a hundred years later, how would he interpret this? Why would a human be buried with a dwarf axe? Ethnicity in the discipline can be defined in a number of ways, though it is often implicit. It can be biologically determined, based primarily on genetic relatedness, functional based on shared cultural behavior, or interpreted solely on similar types of artifacts as was done in early archaeological studies. Ethnicity, as defined by Jones (1997) is “identification with a broader group in opposition to others on the basis of perceived cultural differentiation and/or common descent”. Determining this archaeologically is tricky since it requires finding material correlates and biological indicators that suffice. 
It is further complicated since individuals can have layers of ethnicity expressed in different contexts, and ethnic affiliations can vary in strength of expression over time. Early studies assumed that cultural groups were bounded and homogeneous entities that would correlate with distinct archaeological typologies and artifacts. While certain types of artifacts may signal ethnic affiliations, these must be separated from those that signal other group associations or identities. A modern example would be the ethnic ties to Ireland. Individuals of strong Irish descent in the USA tend to display this through the hanging of flags, maintenance of certain homeland artifacts like Celtic crosses, maps, and photos. There may also be a tendency to buy certain kinds of food and drink due to having an Irish upbringing. However, this ethnic identity becomes more widely adopted on St. Patrick’s Day. It could be difficult to determine archaeologically whether a household was actually of Irish descent and had that specific ethnic identity, versus a household that loved to celebrate St. Patrick’s Day and simply had the flags and food for this specific day. Obviously the amount of material, quality, and dispersal could be used to determine the difference, but it is a good example of why this concept is tricky. In Western Europe during the post-Roman Empire period, there was a major change in cultural material and funerary practices associated with the migration of the barbarians into this region. Goths, Angles, Saxons, and Franks became the dominant cultural groups across Britain and the continent following the collapse of the Roman Empire in this region. One of the major ways of identifying this change and the spread of these groups is the use of specific types of brooches in graves. This single artifact was seen as an indicator of ethnic identity. While this direct association between artifact and ethnicity has been proved false, it is still used due to a lack of alternative methods.
Often the artifacts we are looking at are given designations that cause us to have preconceived notions of what they mean, such as Danube style brooches or Frankish style swords. Halsall (2011) argues that “assigning any ethnic name to archaeological evidence is quite impossible on archaeological grounds alone… In other words, the ethnic interpretation of material cultural data can never, ever result from looking at archaeology alone, and taking it on its own terms” (Halsall 2011:18). He proposes instead that we need multiple lines of evidence as well as textual information before we can begin to interpret ethnicity. Garcia (2011) suggests that instead of questioning what ethnicity the artifacts signify, we should be questioning what the presence of these artifacts means and what their use was meant to portray to the audience. By focusing on what it meant in that context, we may be able to more successfully identify group differences defined by culture than by forcing preconceived notions of groups onto the artifacts.

The problem isn’t solved. It is still difficult to determine the ethnic identity of an individual even if one has text and a range of artifacts. As Hakenbeck (2011) found, differences in biological indicators of migration such as cranial features or stable isotope ratios don’t necessarily match the differences in artifacts found with the individual. She studied a number of Bavarian cemeteries dating to the early medieval migration period and found no correlation between biological distance and artifact similarity. This means that foreigners were just as likely to adopt local culture and ethnic customs. Perhaps the more important question isn’t trying to identify barbarian groups, but rather to look at the internal changes in cemeteries and how individuals expressed identity on a more local scale.

Halsall, Guy (2011). Ethnicity and early medieval cemeteries. Arqueología y Territorio Medieval, 18.
Garcia, Carlos (2011). Ethnicity in early middle age cemeteries: the case of the “visigothic” burials. Arqueología y Territorio Medieval, 18.
<urn:uuid:1398aff1-03ae-44c0-973d-f4a5daef6361>
CC-MAIN-2014-15
http://bonesdontlie.wordpress.com/2013/01/04/can-we-excavate-ethnicity/
s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00650-ip-10-147-4-33.ec2.internal.warc.gz
en
0.963439
1,215
2.859375
3
New research suggests that increasing your consumption of fatty fish may boost the efficacy of your antidepressant. Most people are aware that there are benefits to be had from increasing consumption of fatty fish. This has led millions of people to take “fish oil” pills in order to reap various health benefits without eating fish. In any case, most people consider fish to be a very healthy source of food as long as it is free of contaminants like mercury. Studies have shown that only 42% of individuals who try antidepressants have a positive response. This means that a whopping 58% of people who try traditional antidepressant medications receive no positive effect. This is significant because it means that more than half of the people trying “clinically proven” medications for depression aren’t getting any better.

New Study (2014): Eating Fish May Make Antidepressants More Effective

Instead of bashing antidepressants and realizing that psychotropic medications are highly problematic, researchers decided to focus on determining why some people respond and others don’t. Although I would tell you that responses likely have a lot to do with genetic predisposition, researchers weren’t looking at genetics. Instead, they were looking at other factors, such as diet, that may have influenced a person’s response to antidepressants. Roel Mocking (the lead researcher) was quoted saying:

“We were looking for biological alterations that could explain depression and antidepressant non-response, so we combined two apparently unrelated measures: metabolism of fatty acids and stress hormone regulation. Interestingly, we saw that depressed patients had an altered metabolism of fatty acids, and that this changed metabolism was regulated in a different way by stress hormones.”

The study was pretty simple: a group of 70 patients with depression were given a 20 mg dose of an SSRI medication every day for a 6-week period.
Those who didn’t respond to the 20 mg SSRI treatment were titrated upwards in dosing to 50 mg per day. Researchers then took measurements of both cortisol and fatty acid levels in the 70 participants with depression. They compared these 70 individuals to a sample of 51 non-depressed, healthy controls.

Antidepressant Responders vs. Non-Responders: Abnormal Fatty Acid Metabolism

By documenting cortisol and fatty acid levels during the study, researchers noticed that depressed individuals who didn’t respond to antidepressants had abnormal fatty acid metabolism. Because fish is full of fatty acids (e.g. omega-3 EPA / DHA), researchers noted the amount of fish consumed by participants. They discovered that individuals who ate little or no fish typically didn’t respond well to antidepressants, while those who ate a lot of fish in their diets generally had a stronger response to the medication. Researchers of this particular study reported that those who ate fatty fish at least once per week had a 75% chance of responding positively to antidepressants, while those who never ate any fatty fish had less than a 25% chance of responding. The differences between the two groups were striking: could fish really have this big of an impact on how well an SSRI works? According to this study, the answer is a resounding yes. The study’s lead researcher said, “The alterations in fatty acid metabolism (and their relationship with stress hormone regulation) were associated with future antidepressant response.” He continued with: “Importantly, this association was associated with eating fatty fish, which is an important dietary source of omega-3 fatty acids. These findings suggest that measures of fatty acid metabolism, and their association with stress hormone regulation, might be of use in the clinic as an early indicator of future antidepressant response.
Moreover, fatty acid metabolism could be influenced by eating fish, which may be a way to improve antidepressant response rates.”

Fish consumption influences the efficacy of antidepressant medications

They eventually concluded that fish consumption can influence the efficacy of antidepressants. Specifically, the more fish a person ate, the more likely they were to respond to their SSRI antidepressant medication. On the other end of the spectrum, people who ate minimal amounts of fish had the poorest responses to antidepressants. As a result of these findings, the researchers gave a preliminary suggestion: if you aren’t responding to an antidepressant medication, try increasing your fish consumption. As the study reported, the amount of fish that a person consumed influenced the strength of their response to an antidepressant.

Does fish really make antidepressants more effective? Not necessarily. Even Roel Mocking (the lead researcher) said that the association between fatty acids in the blood and antidepressant response is not necessarily causal. In other words, just because there was a correlation between fish consumption and antidepressant response does not mean that the fish was directly responsible for the increased efficacy of the medication. It is easy to assume that the fish was boosting the efficacy of the antidepressants, but there may be a number of other reasons why the fish made people feel happier. Eating fish is considered healthy for the brain, and societies with higher rates of fish consumption tend to have lower rates of mental illness. Below are some other factors to consider with regard to fish consumption and mood.

1. Fish consumption improves mood

There are a variety of studies that support the idea that fish consumption helps improve mood. Those who eat more fish may be more likely to have a better mood than those who don’t. In the study, fish consumption was associated with the efficacy of an antidepressant.
The fact that fish consumption is capable of improving mood by itself is something that needs to be taken into consideration. Individuals who are eating fish may therefore be getting an antidepressant-like response from the fish itself.
- Source: http://www.ncbi.nlm.nih.gov/pubmed/24737638

2. Omega-3 Fatty Acids: EPA / DHA

Omega-3 fatty acids such as EPA and DHA are obtained through fish. The brain consists of a significant amount of fatty tissue, a lot of which is DHA. In order to achieve optimal mental health, many people consume fish and/or fish oil pills, which contain these omega-3 fatty acids. Another fatty acid, EPA, affects the way cells interact with one another. Both of these fatty acids are thought to help minimize depressive symptoms.
- Source: http://www.ncbi.nlm.nih.gov/pubmed/19499625

3. Healthier diet improves depression

It is well documented that diets high in processed foods and simple sugars tend to be poor for mental health. On the contrary, diets that consist of vegetables, fruits, whole grains, fish, and lean meats tend to be optimal for mental health. Countless studies show that balanced diets low in processed ingredients and simple sugars tend to be better. Individuals eating more fish may not be as hungry or crave other foods that are poor for mental health. In other cases, eating fish may fill them up at a time when they would normally turn to junk food. The fact that a person is eating fish, which is good for the brain, as opposed to candy or simple carbohydrates, could itself result in mood improvement.

4. Fish may be a standalone antidepressant

Let’s say that someone who is taking an antidepressant is getting benefit from their drug. They’ve taken it for a while and their mood has improved significantly. Now let’s say they start making healthy changes to their diet and consuming more fish instead of unhealthy carbohydrates.
Not only is the person’s brain improving because they are getting vital nutrients, they are also getting the fatty acids necessary for optimal brain health. The body and brain are getting a benefit from both the EPA and DHA found in the fish. This is hypothesized to contribute to a standalone antidepressant effect in the individual. The fish may be giving the body and the brain more nutrients to function optimally. The fish and the antidepressant may therefore be contributing to separate antidepressant responses in the individual.

5. Synergistic effect: Fish + SSRI

Finally, it is possible that fish consumption works synergistically with an antidepressant to achieve a supramaximal antidepressant response. In other words, the antidepressant response from consuming fish while taking an SSRI may be significantly greater than that of each standalone option. There may be some sort of symbiotic relationship between fish consumption, fatty acid metabolism, and ultimately the efficacy of an antidepressant. Among certain individuals, it is possible that the cumulative antidepressant effect resulting from the fish and their medication may make them feel better than either option as a standalone treatment. More investigation should be conducted as to whether fish consumption or certain fish oils should be recommended as an antidepressant augmentation strategy.

6. Other classes of antidepressants

Since most antidepressants achieve their effect by inhibiting the reuptake of serotonin, fish may enhance this process. It is unknown whether these findings apply to other classes of antidepressants that are not as serotonin-oriented. One example would be Wellbutrin, an atypical antidepressant that affects norepinephrine and dopamine without any impact on serotonin. It could be hypothesized that those taking certain tricyclics and MAOIs may differ in their responses to adding fish.
In future studies, researchers should take into consideration the specific type of drug that a person takes. It may also be worth comparing different SSRIs to determine whether a particular medication combined with fish consumption results in better responses than others.

7. Fish oil pills vs. eating fish

Another thing that researchers could consider is conducting a similar study in a depressed population and testing a group given an antidepressant with a particular dose of fish oil. This would test whether the benefit comes specifically from eating fish or whether it may also be attainable through fish oil supplementation. It is possible that fish oil pills have the same influence over fatty acid metabolism that fish consumption provides.

Further research regarding fish intake and antidepressants is necessary

The next step for this particular team of researchers is to determine whether changes in fatty acid metabolism and cortisol activity are specifically related to depression, or whether they also apply to those with other mental illnesses like schizophrenia and PTSD. The findings from the above study will be presented at the European College of Neuropsychopharmacology (ECNP) congress in Berlin, Germany. There is a need for larger-scale studies with bigger samples over a longer timetable to confirm the findings presented by Mocking et al. It is already fairly well documented that eating fish is healthy for a person’s mental health. If you are taking an antidepressant and want to make sure you are doing everything possible to respond to treatment, consider increasing your intake of fish. Fish is considered healthy, and there is substantial evidence linking its consumption to a reduction of mental illness.
- Source: http://www.europeanneuropsychopharmacology.com/article/S0924-977X(14)70632-7/abstract
Abrasive waterjets employ a jet stream of water mixed with millions of tiny abrasive particles to cut through a variety of materials. Depending on factors like the shape of the cut, cutting conditions, and the material being cut, the kerf width can vary from the top to the bottom of a cut. The different shapes that the cutting edge can take on are referred to as taper. There are three main types of taper:
- V-Shaped Taper. This is the most common form of taper and involves a greater kerf width at the top of the material being cut than at the bottom. V-shaped taper occurs when some of the cutting energy disperses as the jet stream cuts deeper into the material. The stream may not have fully penetrated the material, causing buildup and removing more material from the top than the bottom. This type of taper is typically associated with rapid cutting.
- Reverse Taper. In contrast to v-shaped taper, this form of taper is caused by slow cutting speeds. The slow speed causes the jet stream to remove more material at the bottom of the cut than at the top. This can also occur when cutting softer materials, as the hardness of a material affects how well the jet stream's energy stays focused.
- Barrel Taper. This type of taper consists of kerf width that is greatest in the middle of a cut. Barrel taper occurs when cutting thicker materials, because it takes longer for the jet stream to penetrate through to the bottom. It can also take place in laminated materials in which the outer layers are harder than the core: after piercing through the top layer, the energy of the jet stream disperses through the core before continuing through the bottom layer.

The Precision of the Waterjet

As technology has advanced over time, precision has grown increasingly important in cutting. Over the years waterjet cutting has seen a tremendous amount of growth as researchers and scientists constantly work to improve upon the cutting method.
One of the reasons waterjets lead the cutting industry is their ability to cut with an extremely high level of precision. Some cutting jobs allow or even prefer some degree of taper, but with precise cutting the objective is to achieve zero taper, which occurs when the width of a cut is maintained from top to bottom. To compensate for taper, the required cutting speed can be slower than what is considered ideal for quick production times. In 1997 Dr. Axel Henning introduced a revolutionary way of eliminating taper: tilting the cutting head to produce a high-precision cut while maintaining high speeds.

What are the primary causes of taper?

Taper can arise from a number of circumstances, including:
- The thickness or hardness of the material (soft and/or thinner materials are more likely to see taper)
- Cutting speed
- Type of abrasive used in the waterjet stream
- Distance of the waterjet nozzle from the material (the farther the material is from the nozzle, the more likely taper is to occur)
- Focus and design of the nozzle

How do you minimize taper?

Tilting heads are key to eliminating taper in waterjet cutting. The tilting head angles the nozzle of the abrasive waterjet as it cuts through a material, ensuring a clean, taper-free cut all the way through.
There are several strategies that can be used to control taper if you don't have a tilting head:
- Use a high-quality abrasive and a large grit size, but not so large that it clogs the nozzle
- Use a small nozzle and mixing tube
- Cut slowly, but not so slowly that you risk winding up with reverse taper
- Use the smallest nozzle stand-off that you can; the closer you can bring the nozzle to the material, the less taper you will get
- Make sure the Z-axis is perpendicular to the material in both the X-axis and Y-axis directions
- If using thin materials, stack them, as taper is most evident in materials that are less than 3 mm thick
- Rotate the mixing tube 90 degrees for every 8-10 hours of use so that it wears more evenly, enabling it to last longer and prevent taper

Some waterjet cutting machines have taper compensation mechanisms in place, such as tilting heads, to help ensure more precise cuts. ICS Cuts is dedicated to producing the highest quality of waterjet cutting, so only the best tools are used by our team of experts. All Intelligent Cutting Solutions equipment features automatic taper compensation consisting of an articulating wrist that eliminates kerf taper errors and stream lag, which results in the most precise cuts at higher speeds and guarantees the best quality of service.
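Taper is commonly quantified as a per-side angle between the cut wall and vertical, computed from the kerf widths measured at the top and bottom of the cut. The sketch below is an illustration with made-up numbers, not an ICS specification:

```python
import math

def taper_angle_deg(top_kerf_mm: float, bottom_kerf_mm: float,
                    thickness_mm: float) -> float:
    """Per-side taper angle in degrees.

    Positive values indicate V-shaped taper (wider at the top);
    negative values indicate reverse taper (wider at the bottom);
    zero indicates the ideal taper-free cut.
    """
    # Each wall deviates by half the difference in kerf widths
    half_width_delta = (top_kerf_mm - bottom_kerf_mm) / 2.0
    return math.degrees(math.atan2(half_width_delta, thickness_mm))

# Example: 1.2 mm kerf at the top, 0.9 mm at the bottom, 10 mm thick plate
print(f"{taper_angle_deg(1.2, 0.9, 10.0):.2f} degrees per side")  # 0.86
```

A fraction of a degree per side sounds small, but on thick stock it translates into a visible width difference between the two faces, which is why slower speeds or a tilting head are used when tolerances are tight.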
Technorationality (behaviorist discourse that focuses on learning outcomes, criticized by MacDonald)

Ralph W. Tyler (the father of curriculum reform)
Tyler (1949): framework for deliberately designing and evaluating curriculum (the Tyler rationale)
1. What educational purposes should the school seek to attain?
2. What educational experiences can be provided that are likely to attain these purposes?
3. How can these educational experiences be effectively organized?
4. How can we determine whether these purposes are being attained (evaluation)?

A combination of existentialism & neo-Marxism

1947 Chicago Curriculum Conference (hosted by Ralph Tyler) - birthplace of curriculum theory.

Social Efficiency Educators: Ross, Bobbitt, Gilbreth, Taylor, Thorndike

Reconceptualists: in opposition to the Tylerian status quo. The decade of the 1960s marks a critical moment of reconceptualization, impacted by the 1957 launch of Sputnik and social unrest (the civil rights movement & the Vietnam War protests). 1969 marks the beginning of the "decade of the Reconceptualization" (Pinar, Reynolds, Slattery, and Taubman).

Creator of the word "Reconceptualists"; uses "post-critical" to describe the existential/phenomenological work

James MacDonald (reconceptualist) (teacher of Steve Mann, who mentored William Doll and George Willis): from scientism to person-centered humanism to sociopolitical humanism to transcendentalism. Curricularists could construct discourses and structures to allow for an environment more conducive to personal freedom.

Dwayne Huebner (reconceptualist) (teacher of Michael Apple at Teachers College): Curriculum as an educative environment. Rejected technorationality. The purpose of education is "transcendence" - teachers are to help students grow in their capacity for personal evolution and change. Objected to learning theory because it conceived of education as "doing something to an individual," which is in opposition to his Heideggerean notion of a person as "being-in-the-world."

Power/knowledge connections.

Paul Klohr (reconceptualist): teacher of Pinar and Greene at OSU

Maxine Greene (reconceptualist)

Joseph J. Schwab: "The field is moribund. It is unable, by its present methods and principles, to continue and contribute significantly to the advancement of education" (1970).

Vice-president of AERA Division B (1997-1999); Managing Editor, JCT (the official journal of the "Reconceptualists") (1978-1998). Emphasizes the importance of a diverse curriculum field, advocating controversy, but with mutual respect.

Three curricula that all schools teach (1985):
1. Explicit curriculum: the learning and interaction that is explicitly announced in school programs
2. Implicit (hidden) curriculum: the learning and interaction that occurs but is not explicitly announced in school programs
3. Null (nonexistent) curriculum: what is systematically excluded, neglected, or not considered

Four categories of curricula (1993):
1. Official: in the curriculum guide and conforming with state-mandated assessment
2. Taught: what individual teachers focus on and choose to emphasize (the teacher's knowledge)
3. Learned: all that students learn
4. Tested: tests represent only part of what is taught or learned

Neo-Marxism & critical pedagogy; refuses the notion of reconceptualization

Tanner and Tanner

All understanding is essentially dialogue (truth is not method but simply what happens in dialogue). What we must never forget is that we are always part of what it is we seek to understand. Truth as experience (without method). Being that can be understood is language. Language is about negotiating and making sense of a human world largely of our own construction. Hermeneutics is a more general procedure for understanding itself. Gadamer is not usually thought of as "postmodern," but he has this in common with postmodern theorists - questioning the foundations of philosophical modernism. Against Descartes' man as a thinking machine capable of arriving at the kind of certainty to be found in geometry. Interpretation of the world is impossible without pre-understanding. Language is the house of being.

Triple root of power, knowledge, and self. The overarching view of history is one more metanarrative, even though it is dominant. Not dissolving differences may be a good thing.

Patrick Slattery (student of Pinar)
The effect of alcohol on social behavior is no mystery: Go to any bar on a Saturday night to witness the posturing, giggling, and flirting that drinking encourages. You may also notice differences in the way males and females relate to their partners while under the influence. Now a new study sheds light on the brain chemistry underlying the gender differences you're witnessing—especially if you've walked into a bar full of prairie voles.

After being paired for 24 hours and consuming alcohol during that time—yes, prairie voles will have a wee drink or two when given the chance—the males in the study often chose to spend time with a stranger rather than their partners in a subsequent three-hour "partner preference" test. In contrast, the majority of females who met a male while tipsy wanted more "huddling time" with him, behavior that can signal commitment in these animals. The sexes also showed contrasting changes in the neural systems regulating social behaviors.

"It's the first time we've shown that alcohol drinking can directly affect social bonding and that these effects are paralleled by changes in neuropeptides," says Andrey Ryabinin of Oregon Health and Science University in Portland, who led the study. Neuropeptides are small molecules that brain cells use to communicate with each other.

Models of Monogamy

Prairie voles have long been a lab model for studies of human pair bonding; like people, the animals are socially monogamous, meaning they'll stick with a single partner long term. Human and vole brains also process social encounters and drug-related altered states as similarly rewarding, and our bodies and minds handle stress in similar ways. Underlying these behaviors held in common are shared hormones and neuropeptides associated with all of these experiences. "And it just happens that they drink alcohol readily," says Ryabinin.
In a previous study the authors showed that heavy-drinking or teetotaling prairie voles can even influence how much a "peer" drinks, sometimes encouraging and other times discouraging more consumption—another parallel between the rodents and humans. In the current study, the animals preferred to lap up water laced with 10 percent ethanol rather than pure water. "It's very convenient—it eliminates the stress that comes with administering it to the animals," which would make it harder to interpret the results, he says. Contrary to what you might expect between well-lubricated vole couples, the effects on pair bonding were not due to the alcohol's influence on mating behavior, aggression, or motor abilities. In fact, alcohol had no significant influence on how much mating or aggression went on between the paired males and females. "The effects on bonding were happening independently," Ryabinin says. While initially a surprise, the contrasting changes in the neuropeptides in males and females "could reflect the different ways the animals handle stress," he says. The neural systems that the alcohol affected in the voles are the same ones that regulate levels of anxiety in these animals. The correlation between bonding and stress needs to be studied further, Ryabinin says, but he notes there's a certain logic to it: Males, very generally, deal with anxiety with a fight-or-flight response. While both fighting and fleeing are actions that are likely to disrupt social bonds, fleeing is in a sense what they're doing in leaving their partners. Females, in contrast, more often lean toward actions that "tend and befriend"—not a bad descriptor of their cuddly behavior after drinking. What motivates a vole under the influence is, of course, not exactly the same as the tangled interplay of biology, experience, and culture that affects the behavior of a similarly impaired human. That's one reason the rodents make better experimental subjects. 
"In humans," says Ryabinin, "there are so many other factors to consider—for instance, the influence of another drinker or a history of drinking-related economic pressures—that might lead to broken marriages." Voles don't carry such baggage. "This means that we can use prairie voles to model not just our alcohol-related behavior but [also] the underlying molecular influences on that behavior," he says. "More studies are required, but separating biological effects from purely cultural ones could lead to better treatments for both problem drinking and the resulting interpersonal conflicts." Mark Egli, program director for the Division of Neuroscience and Behavior at the National Institute of Alcohol Abuse and Alcoholism, agrees that there's potential for broad benefit down the line. "It's novel and exciting in alcohol research to bring together these perspectives—the social and the biological," he says. "And these neural systems are potentially some that we'd like to target for alcohol-dependence medications." The vole study is just a first step, he says, "but it could lay the groundwork for pharmacological therapy that influences a recovering addict's ability to form social relationships." And having good relationships, he says, can be a major factor in a successful recovery.
How does the tax system affect US competitiveness?

The international tax policies that best encourage firms to invest in the United States are not necessarily the policies that best help US multinational companies compete with foreign-based multinationals. Policymakers face a trade-off among goals.

What is Competitiveness?

Many—really all—politicians favor "international competitiveness," but the term means different things to different people. To some, it is the ability of domestic firms or industries to compete with their foreign counterparts in a global marketplace. For them, this translates into support for "mercantilist" policies that seek to increase exports, reduce imports, or promote more US activity in certain sectors, such as manufacturing. An alternative form of mercantilism seeks to promote the growth of a country's resident multinational corporations without regard to whether they produce at home or overseas. Concerns about the competitiveness of US multinationals often follow from an assumption that these firms generate spillover benefits for the economy in which they are headquartered. For example, the knowledge created by the research and development (R&D) that these firms conduct (typically at headquarters) often gets diffused to other domestic producers, boosting their competitiveness.

By contrast, many economists view free trade and capital movements as mutually beneficial because they tend to raise living standards in all countries. These economists define "competitive" policies as those that increase the standard of living of Americans over the long run, without regard to their effects on the balance of trade, the net direction of international capital flows, or success in expanding specific activities, such as manufacturing or R&D. Global international tax practices seek to promote free capital movements by preventing double taxation of international capital flows.
These same practices assign rights to tax profits to the capital-importing countries (i.e., the countries where production facilities are located). The capital-exporting country has two ways to avoid double taxation. The first is simply to exempt from taxation the foreign-source income of its residents. The second is to tax the worldwide income of its residents but to allow credits for foreign income taxes they pay, so that their income is taxed at the home-country rate rather than the rate in the country where the income is earned. These two approaches have very different implications for a country's attractiveness as a location for productive investment or as a place for multinational corporations to establish residence.

Although the promise of beneficial spillovers from R&D and other headquarters activities is a strong argument for using the tax code to promote them, lower taxes on such activities might lead to a shortchanging of other activities in the economy (such as education, health, and infrastructure) that also provide beneficial external effects. More direct incentives, such as subsidies for R&D, might better encourage the desired spillovers.

Tax Policies to Attract Investment

The US corporate tax system discourages investment in the United States by both US- and foreign-based corporations because the top corporate tax rate in the United States (if state-level taxes are included) is higher than the top corporate tax rate in all of our major trading partners. However, this disadvantage to US investment is partially offset by capital recovery provisions that are more generous in the United States than in many other countries and by provisions that make it easier in the United States than in most other countries to establish businesses whose owners benefit from limited liability without being subject to corporate-level taxation.
The US tax system also encourages US-based multinationals to invest overseas instead of at home because US multinationals can defer US tax on the income of their foreign-owned subsidiaries in low-tax countries until that income is repatriated to the US parent firm. The effects of this incentive for foreign investment are partially offset, though, if the shift of investment overseas by US multinationals raises pre-tax returns on investment in the United States and thereby encourages an inflow of capital from foreign-based firms.

Tax Policies to Attract Corporate Headquarters

The US tax system arguably places US multinationals at a competitive disadvantage with foreign-based multinationals with income from low-tax countries because US companies must pay the difference between the US tax rate and foreign tax rates when they repatriate profits from their foreign affiliates. In contrast, most countries in the Organization for Economic Co-operation and Development and all the other countries in the G7 (Canada, France, Germany, Italy, Japan, and the United Kingdom) have exemption systems that allow their resident multinationals to pay only the foreign tax rate on their overseas profits.

In addition, the US controlled foreign corporation (CFC) rules tax some forms of foreign-source income of US multinationals as it accrues in their foreign subsidiaries. The goal is to prevent schemes that strip reported profits from US tax jurisdiction to low-tax foreign countries. The CFC rules, however, only apply to US-resident multinationals and do not prevent similar schemes by foreign-resident multinationals to strip profits from their US affiliates.

Others argue that US multinationals are not, on balance, put at a disadvantage by the US tax system. They point to the ability of US companies, especially those with significant assets in intellectual property, such as firms in the high-tech and pharmaceutical sectors, to shift reported profits to low-tax jurisdictions.
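The repatriation mechanics described above reduce to simple arithmetic. The sketch below is a stylized toy model for exposition only; the rates are invented, and real rules (credit pooling, deferral, expense allocation) are far more complex:

```python
def residual_tax_on_repatriation(foreign_profit: float,
                                 foreign_rate: float,
                                 home_rate: float) -> float:
    """Worldwide system with a foreign tax credit: on repatriation, the
    parent owes home-country tax minus a credit for foreign tax already
    paid, never less than zero. Under an exemption (territorial) system
    the residual home-country tax is simply zero."""
    foreign_tax = foreign_profit * foreign_rate
    home_tax_before_credit = foreign_profit * home_rate
    return max(0.0, home_tax_before_credit - foreign_tax)

# $100 of profit earned in a 15% country, repatriated to a 35% home country:
print(residual_tax_on_repatriation(100.0, 0.15, 0.35))  # 20.0
# Foreign rate above the home rate: the credit wipes out the home tax.
print(residual_tax_on_repatriation(100.0, 0.40, 0.35))  # 0.0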
They also note that, since 1997, "check-the-box" regulations have effectively enabled US multinationals to shift reported profits from production in high-tax foreign jurisdictions to tax havens without being subject to US CFC rules.

Would a Value-Added Tax Increase US Competitiveness?

Some commentators argue that substituting a value-added tax (VAT) for all or part of the corporate income tax would improve the US trade balance because, unlike the corporate income tax and other levies imposed on income earned in the United States, VATs typically exempt exports and tax imports. But most economists dispute the claim that a VAT would improve the trade balance, arguing that any benefit to net exports from a VAT would be offset by a resulting appreciation of the US dollar relative to other currencies. In fact, some research suggests that countries that rely heavily on VATs for revenue have lower net exports than those that don't.

Replacing some or all of the corporate income tax with a VAT would, however, affect the trade position of some industries relative to others. Exemptions and lower rates within a VAT affect the relative prices consumers pay for different goods and services but do not distort trade patterns, because VAT burdens do not depend on where goods and services are produced. In contrast, preferences within the corporate income tax do affect production location, improving the competitiveness of some US producers while worsening the competitiveness of others, because the tax does affect relative costs of production.
Artist and Athens School of Fine Arts Professor Marios Spiliopoulos was faced with something of a quandary when approached to hold an exhibition at the Mosque of the Janissaries (Yiali Tzami) in Hania, Crete. How, he thought, could he create a show that was not overwhelmed by the historical importance of the site of worship? The installation, “The 700 Names of God,” which runs to Sunday, is based on an earlier work by Spiliopoulos and consists of hundreds of pairs of shoes donated by Hania residents in response to an invitation from the city’s municipal gallery, which lead the visitor to the entrance of the monument. The mosque’s entrance hall does indeed display 700 names for God printed in blue in an arched recess, while the floor has been covered in sand and a few pairs of shoes lead the way into the recess. The aim of the installation is to encourage viewers to think about the divine as a force that offers support, but also as the standard by which we live our lives. The central idea behind the project was inspired by the 700 names given to God that were recorded by Byzantine Emperor Theodore II in 1254. The journey of self-awareness on which Spiliopoulos invites the public is enhanced by a video project inside the monument showing waves meeting the sand, and more shoes. Here, visitors are also treated to phrases from Heraclitus, the 6th century BC Greek philosopher who was the first to shed intellectual light onto the then black-and-white understanding of the divine. “Now, especially, when lifejackets and shoes wash up on our shores, from the sea, together with the bodies of small children, when ISIS is blowing up Palmyra, the mind appears to go silent and art seems too small to have a voice,” says Spiliopoulos. The exhibition is on display at the Mosque of the Janissaries on Akti Tombazi through September 25. Opening hours are 11 a.m. to 11 p.m. daily.
This object is on display in the Boeing Aviation Hangar at the Steven F. Udvar-Hazy Center in Chantilly, VA.

Collection Item Summary:

The first supersonic airliner to enter service, the Concorde flew thousands of passengers across the Atlantic at twice the speed of sound for over 25 years. Designed and built by Aérospatiale of France and the British Aircraft Corporation, the graceful Concorde was a stunning technological achievement that could not overcome serious economic problems.

In 1976 Air France and British Airways jointly inaugurated Concorde service to destinations around the globe. Carrying up to 100 passengers in great comfort, the Concorde catered to first-class passengers for whom speed was critical. It could cross the Atlantic in fewer than four hours - half the time of a conventional jet airliner. However, its high operating costs resulted in very high fares that limited the number of passengers who could afford to fly it. These problems and a shrinking market eventually forced the reduction of service until all Concordes were retired in 2003.

In 1989, Air France signed a letter of agreement to donate a Concorde to the National Air and Space Museum upon the aircraft's retirement. On June 12, 2003, Air France honored that agreement, donating Concorde F-BVFA to the Museum upon the completion of its last flight. This aircraft was the first Air France Concorde to open service to Rio de Janeiro, Washington, D.C., and New York, and it had flown 17,824 hours.

Collection Item Long Description:

It began with a dream - a dream of a new age in air travel in which the boundaries of time and distance would be shattered forever. The dream of supersonic passenger air travel was first conceived in the 1950s, developed in the 1960s, and came to fruition in the mid-1970s. For 27 years, the graceful Anglo-French Concorde carried world travelers across the Atlantic Ocean in great comfort at twice the speed of sound.
While the dream was real, it was so only for the world's privileged elites. It was not a machine for the average citizen. High development costs and high operating costs prevented the Concorde from achieving the dream of practical supersonic flight for the public. But for a while, the Concorde looked promising - it looked like the future.

In the 1950s air travel was revolutionized by the advent of jet propulsion. First the de Havilland Comet and, later, the Boeing 707 greatly increased the speed of travel from 350 to over 600 miles per hour. Airlines and customers flocked to the new jet airliners as travel times were cut dramatically and the seat-mile costs to the airlines dropped. The conclusion drawn by engineers, managers, and politicians seemed clear: the faster the better.

In Europe, enterprising designers in Great Britain and France were independently outlining their plans for a supersonic transport (SST). In November 1962, in a move reminiscent of the Entente Cordiale of 1904, the two nations agreed to pool their resources and share the risks in building this new aircraft. They also hoped to highlight Europe's growing economic unity as well as its aerospace expertise in a dramatic and risky bid to supplant the United States as the leader in commercial aviation. The aircraft's name reflected the shared hopes of each nation for success through cooperation - Concorde.

Quickly the designers at the British Aircraft Corporation and Sud Aviation, later reorganized as Aérospatiale, settled on a slim, graceful form featuring an ogival delta wing that possessed excellent low-speed and high-speed handling characteristics. Power was to be provided by four massive Olympus turbojet engines built by Rolls-Royce and SNECMA. Realizing that this first-generation SST would cater to the wealthier passenger, Concorde's designers created an aircraft that carried only 100 seats in tight four-across rows.
They assumed that first-class passengers would flock to the Concorde to save valuable time while economy-class passengers would remain in larger, but slower, subsonic airliners. Despite mounting costs that constantly threatened the program, construction continued, with exactly 50 percent of each aircraft built in each country. The first Concorde was ready for flight in 1969. With famed French test pilot André Turcat at the controls, Concorde 001, which was assembled at Toulouse, took to the air on March 2, 1969. Although the Soviets had flown their version of the SST first, the Tupolev Tu-144 had been rushed into production and suffered from technological problems that could never be solved. Following the successful first flight, a total of four prototype and preproduction Concordes were built and thoroughly tested, and by 1976 the first of the 16 production Concordes was ready for service. Twenty were built in all.

But all was not rosy. During this time America sought to produce its own bigger and faster SST. After a contentious political debate, the federal government refused to back the project in 1971, citing environmental problems, particularly noise, the sonic boom, and engine emissions that were thought to harm the upper atmosphere. Anti-SST political activity in the United States delayed the granting of landing rights, particularly into New York City, causing further delays. More ominously for Concorde, no airlines placed orders for this advanced SST. Despite initial enthusiasm, the airlines dropped their purchase options once they calculated the operating costs of the Concorde. Consequently only Air France and British Airways - the national airlines of their respective countries - flew the 16 production aircraft, and only after purchasing them from their governments at virtually no cost. Nevertheless, in January 1976, Concorde service began and, by November, these graceful SSTs were flying to the United States.
A technological masterpiece, each Concorde smoothly transitioned to supersonic flight with no discernible disturbance to the passenger. In service, the Concorde would cruise at twice the speed of sound between 55,000 and 60,000 feet - so high that passengers could actually see the curvature of the Earth. The Concorde was so fast that, despite an outside temperature of less than -56 degrees Celsius, the aircraft's aluminum skin would heat up to over 120 degrees Celsius; the airframe actually expanded 8 inches in length in flight, and the interior surfaces of the windows gradually grew quite warm to the touch. And all the while each passenger was carefully attended to while enjoying a magnificent meal and superb service. Transatlantic flight time was cut in half, with the average flight taking less than four hours.

For the next 27 years supersonic travel was the norm for the world's business and entertainment elite. But eventually the harsh reality of the economic marketplace forced Air France and British Airways to cut back their already limited service. Routes from London and Paris to Washington, Rio de Janeiro, Caracas, Miami, Singapore, and other locations were cut, leaving only the transatlantic service to New York. And even on most of these flights, the Concorde flew half full, with many of the passengers flying as guests of the airlines or as upgrades. With the average round-trip ticket costing more than $12,000, few could afford to fly this magnificent aircraft. Operating costs escalated as parts became more difficult to acquire and, with an average of one ton of fuel consumed per seat, the already small market for the Concorde gradually grew smaller. Despite the excellence of the Concorde's design, its operators realized that its days were numbered because of its high costs.
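The time savings quoted above are easy to sanity-check with simple arithmetic. The sketch below (Python) compares pure cruise times; the great-circle distance and the subsonic cruise speed are round-number assumptions, not figures from the article, and real flights averaged under four hours once subsonic overland segments, climb, and descent are included.

```python
# Back-of-the-envelope comparison of transatlantic cruise times.
# DISTANCE_KM and SUBSONIC_KMH are illustrative assumptions.

def cruise_hours(distance_km: float, speed_kmh: float) -> float:
    """Time to cover a distance at a constant cruise speed."""
    return distance_km / speed_kmh

DISTANCE_KM = 5_850    # rough Paris-New York great-circle distance (assumed)
CONCORDE_KMH = 2_179   # Concorde's top speed, as quoted in the specs below
SUBSONIC_KMH = 900     # typical subsonic airliner cruise speed (assumed)

print(f"Concorde: {cruise_hours(DISTANCE_KM, CONCORDE_KMH):.1f} h")  # ~2.7 h
print(f"Subsonic: {cruise_hours(DISTANCE_KM, SUBSONIC_KMH):.1f} h")  # ~6.5 h
```

The ratio of the two speeds, not the absolute distance, is what halves the journey - which is why the savings held on any transatlantic route.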
In 1989, in commemoration of the 200th anniversary of the French Revolution and the 200th anniversary of the ratification of the Constitution of the United States, the French government sent a copy of the Declaration of the Rights of Man to the U.S. Appropriately, this famous document was delivered on the Concorde and with it a promise from Air France to give one of these aircraft to the people of the United States through its eventual inclusion in the collection of the Smithsonian Institution's National Air and Space Museum. Fourteen years later that promise was fulfilled. In April 2003 Air France president Jean-Cyril Spinetta informed the Museum that Concorde service would end on May 31 following the decision by the aircraft's manufacturer to stop supporting the fleet. As planned, on June 12 Air France delivered its most treasured Concorde, F-BVFA, to Washington Dulles International Airport on its last supersonic flight for the airline. This aircraft was the first production Concorde delivered to Air France and the first Concorde to open service between Paris and New York, Washington, and Rio de Janeiro, and it had amassed 17,824 hours in the air. Onboard were 60 passengers including Gilles de Robien, the French Minister for Capital Works, Transport, Housing, Tourism, and Marine Affairs, Mr. Spinetta, and several past Air France presidents as well as former Concorde pilots and crew members. In a dignified yet bittersweet ceremony Mr. Spinetta signed over Concorde "Fox Alpha" to the Museum for permanent safekeeping. The Concorde is now prominently displayed at the Museum's Steven F. Udvar-Hazy Center.
- Wingspan: 25.56 m (83 ft 10 in)
- Length: 61.66 m (202 ft 3 in)
- Height: 11.3 m (37 ft 1 in)
- Weight, empty: 79,265 kg (174,750 lb)
- Weight, gross: 181,435 kg (400,000 lb)
- Top speed: 2,179 km/h (1,350 mph)
- Engine: Four Rolls-Royce/SNECMA Olympus 593 Mk 602, 17,259 kg (38,050 lb) thrust each
- Manufacturer: Société Nationale Industrielle Aérospatiale, Paris, France, and British Aircraft Corporation, London, United Kingdom
- Aircraft Serial Number: 205, including four (4) engines bearing respectively the serial numbers CBE066, CBE062, CBE086 and CBE085.
- Also included, aircraft plaque: "AIR FRANCE Lorsque viendra le jour d'exposer Concorde dans un musée, la Smithsonian Institution a d'ores et déjà choisi, pour le Musée de l'Air et de l'Espace de Washington, un appareil portant les couleurs d'Air France." ("When the day comes to display Concorde in a museum, the Smithsonian Institution has already chosen, for the Air and Space Museum in Washington, an aircraft wearing the colors of Air France.")
A battery electric vehicle is a car that runs solely on electricity. These vehicles typically travel less far on a single charge than a conventional gas-powered car can on a tank of fuel, and their overall effect on the environment depends on where they're driven, because the cleanliness of the local electricity grid varies from region to region. The batteries in these cars store chemical energy and convert it to electricity. Typically, these are lithium-ion batteries.

What is a BEV? A battery electric vehicle (BEV) is a car that uses electricity to power its motor and drive the wheels. BEVs don't produce tailpipe pollution; the electricity they use comes from the grid or renewable sources like wind and solar. The electric motor in a BEV runs on the electricity stored in the battery pack, which is a big pack of batteries containing hundreds of individual cells. These are scaled-up versions of the lithium-ion (Li-ion) batteries found in mobile phones and laptops. These Li-ion batteries are rated in kilowatt-hours, which tell you how much energy they can hold before running out. An EV with a larger kWh rating can go further than one with a smaller kWh rating.

How does a BEV work? Battery electric vehicles (BEVs) are cars that are powered solely by a large lithium-ion battery. This big pack consists of hundreds or thousands of individual lithium-ion cells working together, each one storing a small amount of energy that adds up across the pack. These EV batteries are vastly different from the heavy lead-acid batteries in most conventional combustion-engine cars and offer a massive increase in lifespan. The pack itself stores direct current; in most BEVs an inverter converts it to alternating current to drive the motor efficiently. The energy in a BEV battery is pumped to an electric motor, which spins the wheels and gives the car its power. The EV drivetrain has far fewer moving parts than an internal combustion engine (ICE) and produces no tailpipe emissions. Charging a BEV is easy - just plug the vehicle into a wall box or public charging point and you're good to go. Many DC fast chargers can bring the battery up to 80% in around 30 minutes.

What are the benefits of owning a BEV? There are several benefits to owning a battery electric vehicle (BEV). They are clean, simple and cost-effective to run. One benefit is that there are no oil changes or other engine fluids to replace, which saves on maintenance expenses. Another benefit is that EV drivetrains generally last much longer than those of conventional gas-powered vehicles. EVs have a higher energy efficiency than gasoline-powered cars, meaning they use less energy to travel the same distance. And that savings adds up to a substantial amount of money. Besides the financial benefits, owning an EV is a great way to help reduce your environmental impact. Moreover, if you are concerned about range anxiety, Level 2 chargers at home and DC fast chargers on public streets - and pretty much anywhere there is an outlet - make topping up the battery part of the daily routine.

What are the drawbacks of owning a BEV? A battery electric vehicle (BEV) is an alternative to a traditional gas-powered car. They are more energy efficient, they produce no tailpipe emissions and they require little maintenance. However, they have a few drawbacks, which are worth considering before you decide to purchase one. First, they typically have a higher initial cost than a gas-powered vehicle. Second, they may need a home charging station. Luckily, there are many public stations across the country, but many people prefer to have one at home. Third, BEVs often have shorter ranges than a gas-powered vehicle. This can be a big concern for some drivers, especially if they enjoy long road trips. Additionally, the batteries of modern EVs are expected to last for about a decade, and replacing them can be expensive. Fortunately, there are federal and state incentives available to help offset these costs. Also, their cost is expected to go down as technology advances.
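The kilowatt-hour rating discussed above translates directly into driving range. Here is a minimal sketch (Python); the pack size and consumption figures are illustrative assumptions, not specs of any particular model:

```python
# Rough driving-range estimate from battery capacity and average consumption.
# The 60 kWh pack and 17 kWh/100 km figures below are illustrative assumptions.

def estimate_range_km(battery_kwh: float, consumption_kwh_per_100km: float) -> float:
    """Distance a fully charged pack can cover at the given average consumption."""
    return battery_kwh / consumption_kwh_per_100km * 100.0

# e.g. a mid-size BEV: 60 kWh pack, 17 kWh per 100 km of mixed driving
print(round(estimate_range_km(60, 17)))  # ~353 km
```

The same relationship explains why a bigger pack or gentler driving (lower kWh per 100 km) both stretch the range.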
WEST LAFAYETTE, Ind. -- Researchers have discovered a tube-shaped structure that forms temporarily in a certain type of virus to deliver its DNA during the infection process and then dissolves after its job is completed. The researchers discovered the mechanism in the phiX174 virus, which attacks E. coli bacteria. The virus, called a bacteriophage because it infects bacteria, is in a class of viruses that do not contain an obvious tail section for the transfer of its DNA into host cells. "But, lo and behold, it appears to make its own tail," said Michael Rossmann, Purdue University's Hanley Distinguished Professor of Biological Sciences. "It doesn't carry its tail around with it, but when it is about to infect the host it makes a tail." Researchers were surprised to discover the short-lived tail. "This structure was completely unexpected," said Bentley A. Fane, a professor in the BIO5 Institute at the University of Arizona. "No one had seen it before because it quickly emerges and then disappears afterward, so it's very ephemeral." Although this behavior had not been seen before, another phage called T7 has a short tail that becomes longer when it is time to infect the host, said Purdue postdoctoral research associate Lei Sun, lead author of a research paper to appear in the journal Nature on Dec. 15. The paper's other authors are University of Arizona research technician Lindsey N. Young; Purdue postdoctoral research associate Xinzheng Zhang and former Purdue research associate Sergei P. Boudko; Purdue assistant research scientist Andrei Fokine; Purdue graduate student Erica Zbornik; Aaron P. Roznowski, a University of Arizona graduate student; Ian Molineux, a professor of molecular genetics and microbiology at the University of Texas at Austin; Rossmann; and Fane. Researchers at the BIO5 institute mutated the virus so that it could not form the tube. The mutated viruses were unable to infect host cells, Fane said. 
The virus's outer shell, or capsid, is made of four proteins, labeled H, J, F and G. The structures of all but the H protein had been determined previously. The new findings show that the H protein assembles into a tube-shaped structure. The E. coli cells have a double membrane, and the researchers discovered that the two ends of the virus's H-protein tube attach to the host cell's inner and outer membranes. Images created with a technique called cryo-electron tomography show this attachment. The H-protein tube was shown to consist of 10 "alpha-helical" molecules coiled around each other. Findings also showed that the inside of the tube contains a lining of amino acids that could be ideal for the transfer of DNA into the host. "This may be a general property found in viral-DNA conduits and could be critical for efficient genome translocation into the host," Rossmann said. Like many other viruses, the phiX174 capsid has icosahedral symmetry, a roughly spherical shape containing 20 triangular faces.

Note to Journalists: An electronic or hard copy of the research paper is available by contacting Nature at email@example.com or calling (212) 726-9231. The research has been funded by the National Science Foundation, U.S. Department of Energy, and the U.S. Department of Agriculture.

Source: Bentley A. Fane. Related Web site: Michael Rossmann: http://www.

Researchers have discovered a tube-shaped structure that forms temporarily in a certain type of virus to deliver its DNA during the infection process and then dissolves after its job is completed. The virus is pictured here infecting an E. coli cell. The tube attaches to the cell's inner and outer membranes, bridging the "periplasmic space" in between. (Purdue University image/Lei Sun) A publication-quality graphic is available at http://www.

Icosahedral bacteriophage ΦX174 forms a tail for DNA transport during infection

Lei Sun1,*, Lindsey N. Young2,*, Xinzheng Zhang1,*, Sergei P. Boudko1,†, Andrei Fokine1, Erica Zbornik1, Aaron P. Roznowski2, Ian Molineux3, Michael G. Rossmann1, and Bentley A. Fane2

1Department of Biological Sciences, Purdue University
2School of Plant Sciences and the BIO5 Institute, University of Arizona
3Molecular Genetics and Microbiology, Institute for Cell and Molecular Biology, The University of Texas at Austin
*These authors have contributed equally
†Current address: The Research Department, Shriner's Hospital for Children, Portland, OR

Prokaryotic viruses have evolved various mechanisms to transport their genomes across bacterial cell walls: barriers that can contain two lipid bilayers and a peptidoglycan layer1. Many bacteriophages utilize a tail to perform this function, whereas tail-less phages rely on host organelles, such as plasmid-encoded receptor complexes and pili2-5. However, the tail-less, icosahedral, single-stranded (ss) DNA ΦX174-like coliphages do not fall into these well-defined infection paradigms. For these phages DNA delivery requires a DNA pilot protein6. Here we show that the ΦX174 pilot protein H oligomerizes to form a tube whose function is most probably to deliver the DNA genome across the host's periplasmic space to the cytoplasm. The 2.4 Å resolution crystal structure of the in vitro assembled H protein's central domain consists of a 170 Å-long α-helical barrel. The tube is constructed of 10 α-helices with their N-termini arrayed in a right-handed super-helical coiled-coil and their C-termini arrayed in a left-handed super-helical coiled-coil. Genetic and biochemical studies demonstrated that the tube is essential for infectivity but does not affect in vivo virus assembly. Cryo-electron tomograms have shown that tubes span the periplasmic space and are present while the genome is being delivered into the host cell's cytoplasm. Both ends of the H protein contain trans-membrane domains, which anchor the assembled tubes into the inner and outer cell membranes.
The central channel of the H protein tube is lined with amide and guanidinium side chains. This may be a general property of viral DNA conduits and is likely to be critical for efficient genome translocation into the host.
Pachyderm Web is an HTML-based system for learning about biological diversity. The following screens allow you to select a group to focus on. The system will then guide you through a series of images and ask you questions based on each image. "Pachyderm - because an elephant never forgets....."

There are several different types of pages:

About the Taxonomy

The taxonomy used in Pachyderm is a conservative reflection of what is being used in general biology textbooks. The goal is to help students learn the basics of taxonomy. All you taxonomists out there itching to write in and say that this or that group has been renamed, please hold your advice, as we will get around to changing it when the consensus has reached the point that the textbooks WE use have changed. Remember, taxonomy is constantly changing, and we are trying to give our students an exposure to the taxonomy that will help them as they search the literature. That being said, if anyone sees an image which we've misidentified, we'd be happy to hear about it.

Pachyderm Web is an outgrowth of the Pachyderm Program, written by Dave McShaffrey. All images copyrighted by Dave McShaffrey and Tanya Jarrell. The original Pachyderm software was supported by NSF Grant DUE-9750553, and development of Pachyderm Web was supported by a TIG grant from Marietta College, in turn supported by a US Department of Education Title III grant.

For best results, maximize your browser and set your screen resolution to 1280 x 1024 pixels or better. Because some of the pictures are large, it may take a while for them to load over a slow internet connection.
Battle of Natural Bridge

The Battle of Natural Bridge was a battle during the American Civil War, fought in what is now Woodville, Florida, near Tallahassee, on March 6, 1865. A small band of Confederate troops and volunteers, mostly composed of teenagers from the nearby Florida Military and Collegiate Institute that would later become Florida State University, and the elderly, protected by breastworks, prevented Union forces (consisting of African-American soldiers of the United States Colored Troops) from crossing the Natural Bridge on the St. Marks River. This action prevented the Union from capturing the Florida capital and made Tallahassee the only Confederate capital east of the Mississippi River not to be captured by Union forces during the war.

The Union's Brig. Gen. John Newton had undertaken a joint force expedition to engage and destroy Confederate troops that had attacked at Cedar Keys, Florida and Fort Myers and were allegedly encamped somewhere around St. Marks. The Union Navy had trouble getting its ships up the St. Marks River. The Army force, however, had advanced and, after finding one bridge destroyed, started before dawn on March 6 to attempt to cross the river at Natural Bridge. The troops initially pushed Rebel forces back, but not away from the bridge. Confederate forces under Brig. Gen. William Miller, protected by breastworks, guarded all of the approaches and the bridge itself. The action at Natural Bridge lasted most of the day, but, unable to take the bridge in three separate charges, the Union troops retreated to the protection of the fleet.

The monument at the battlefield reads: "This monument erected under authority of an act of the legislature of Florida of 1921 as a just tribute of the people of Florida to commemorate the victory of the battle of Natural Bridge. March 6, 1865. And to keep in cherished memory those brave men and boys who, in the hour of sudden danger, rushed from home desk and field and from the West Florida Seminary and joining a few disciplined troops by their united valor and patriotism saved their capital from the invaders. Tallahassee being the only capital of the South not captured by the enemy during the War between the States."

Annual Memorial Service and Battle Reenactment

A ceremony honoring the combatants on both sides of the Battle of Natural Bridge, followed by a reenactment of the battle featuring Union, Confederate, and civilian reenactors, is held at the park the first weekend of March every year. The event is free and open to the public. The site is now called Natural Bridge Battlefield Historic State Park.

- Military history of African Americans in the U.S. Civil War
- United States Colored Troops
- Historical reenactment
- List of Florida state parks
- List of Registered Historic Places in Leon County, Florida
- The other three programs are: the Virginia Military Institute (VMI) for the Battle of New Market, The Citadel, The Military College of South Carolina, for the defense of Charleston and other engagements, and The University of Mississippi for the defense of Vicksburg.
- State of Florida official website for Natural Bridge Battlefield Historic State Park http://www.floridastateparks.org/naturalbridge/default.cfm
- Natural Bridge Battlefield Historic State Park - Florida State Parks
- Photos of the annual Battle of Natural Bridge reenactment
- Natural Bridge Historical Society
- Union Account by Captain Thos. Chatfield
Hello and welcome to this GCSE RE podcast on Religion and Medical Ethics.

This topic asks you to explore Christian beliefs about a range of ethical issues arising from medical science. Remember not all Christians believe the same thing: some Christians believe one thing and other Christians will believe something different.

Specifically this unit focuses on 4 topics:
1. Different Christian attitudes towards abortion;
2. Christian responses to issues raised by fertility treatment and cloning;
3. Christian attitudes towards euthanasia and suicide;
4. Christian beliefs about the use of animals in medical research.

An introductory comment - Christian beliefs about the issues in all of the topics in this unit are influenced by their belief in the Sanctity of Life. This is a crucial concept that you have to understand. All Christians believe that life is a gift from God and it is therefore special and holy - this is what the word sanctity means: special and holy. Christians believe that if you end a life you are going against God's plan. God has a plan for every person and if you end someone's life you are interfering with that plan.

Topic 1 – Different Christian attitudes towards abortion

The Sanctity of Life is important when you look at different Christian attitudes to abortion. Not all Christians have the same attitude towards abortion because different Christians think life begins at different times. For example, Roman Catholics and Fundamentalist Christians believe that life begins as soon as the sperm fertilises the egg.
Roman Catholics believe that as soon as the egg is fertilised it has the potential to be a human person and so has to be given the same rights as a born person - this means that if you terminate a pregnancy you are killing a person and this is murder, therefore abortion is wrong.

Other Christians think life begins later on in the pregnancy, perhaps when the heart starts to beat, and these Christians think that as long as the abortion happens before life begins it is acceptable.

Some other Christians (like the Quakers) always try to do the most loving thing. They think abortion is wrong but believe that if someone really feels they have to have an abortion (perhaps because they were raped, the child is deformed or if there is danger to the mother) the most loving thing to do would be to help and support that person. They believe it is important not to judge that person because in the New Testament Jesus said 'Judge not lest you be judged'.

Some fundamentalist Christians in America, who strongly oppose abortion, have taken their beliefs to the extreme - thinking that in order to protect innocent unborn foetuses, it is acceptable to kill medical staff at abortion clinics. Virtually all other Christians would see this as an unacceptable action.

Finally, you need to remember the principle of double effect: All Christians believe that saving the life of the pregnant woman is the most important action if continuing a pregnancy will kill the woman. Even Roman Catholics believe that in these circumstances the pregnancy should be terminated. However, they would not call this an abortion because the purpose of the medical procedure is not to abort the foetus, it is to save the woman's life, e.g. an ectopic pregnancy. Unfortunately the operation to save the woman's life also terminates the pregnancy and this is what is meant by the principle of double effect - it (1) saves the mother but (2) terminates the pregnancy. No Christian church will bury an aborted foetus.
Topic 2 – Christian responses to issues raised by fertility treatment

There are a number of different types of fertility treatment and you need to know what these involve. Some Christians believe all forms of fertility treatment are wrong because if you cannot have children it is all part of God's plan - in the Old Testament God made Abraham's wife Sarah infertile for a long time in order to test Abraham's faith, before finally allowing her to become pregnant when she was more than 80 years old. Other Christians only disagree with certain types of fertility treatment.

Artificial Insemination involves the sperm being placed inside the woman's uterus artificially - this procedure can help a lot of women become pregnant. Most Christians accept this form of fertility treatment for married couples (referred to as AIH - Artificial Insemination using the Husband's sperm) because all it does is help people get pregnant naturally. However, sometimes donor sperm is used (referred to as AID - Artificial Insemination using Donor sperm) and a lot of Christians (e.g. Roman Catholics) think that this is wrong because it introduces a third person into the relationship between the husband and the wife and can be seen as adultery; it can also cause tension in a family once the child grows up because they will have a different biological father.

IVF or In Vitro Fertilisation, which makes a 'test tube' baby, involves the removal of a number of eggs from the woman. These are then fertilised in a Petri dish using either the husband's or a donor's sperm before two or three of the best embryos get implanted in the woman's uterus. (The use of a donor's sperm creates the same 'third person in the relationship' problems referred to above.) Any other embryos that were fertilised but not implanted are then destroyed. It is because of this last step that Roman Catholics and Fundamentalist Christians think that IVF is wrong.
Remember that they believe that life begins when the sperm fertilises the egg, so they believe that the spare embryos that are destroyed are already alive and destroying them is murder (this is the same as their argument against abortion and again refers to the sanctity of life). Christians who do not believe that life starts when the egg is fertilised by the sperm do not disagree with IVF treatment, although most Christians only believe that IVF treatment with the husband's sperm is acceptable. As Jesus was a healer, to heal childless couples is the loving thing to do.

For women who cannot produce eggs, they can use eggs provided by a donor but this again is wrong for some Christians as it involves a third person. A surrogate mother will agree to be made pregnant by the husband's sperm and carry the baby to term, then hand it over once it is born. Again - three in a relationship, and also the surrogate mother may not want to hand the baby over.

Cloning is making an exact copy of a living thing using live cells from the original. The first cloned animal was Dolly the Sheep - an exact copy of the original. Some Christians oppose cloning as it is not natural and wrong that humans should play God. Other Christians think that as Jesus was a healer, it is good that science can help heal people, e.g. with cloned organs.

Topic 3 – Christian attitudes towards euthanasia and suicide

Euthanasia is a Greek word that means 'a good death' - it is used to describe 'assisted suicide'. Because they believe in the sanctity of life most Christians are against euthanasia - if you remember, they believe that God has a plan for everyone and we shouldn't interfere with this plan. Some Christians believe that in some circumstances euthanasia can be the most loving thing to do (e.g. if someone is dying in a lot of pain). There is an organisation called Dignitas, based in Switzerland, that carries out assisted suicides.
You may wish to refer to the story of Ann Turner, the doctor who was assisted to die by Dignitas, or Daniel James, the 23-year-old rugby player who was paralysed. Most Christians, however, believe that human life is sacred and people should not take life away. These Christians would argue that people who are dying could go to a Hospice. Hospices are often Christian organisations, which take care of people who are dying by providing them with pain relief and support so that they can die in dignity.

In general Christians think that suicide is wrong. In the past it used to be seen as a very serious sin and people that committed suicide were not allowed to be buried in the church cemetery. Today most Christians take a much more sympathetic approach to people who try to commit suicide. They think that it is important to show love and support to these people so that they no longer wish to kill themselves.

Topic 4 – Christian beliefs about the use of animals in medical research

This topic links with the work you did on Science and Religion. The things you learnt in the topic on Humans and Animals in that Unit will be useful for this topic. If you remember, the key points from this were that:

Some Christians believe that as we are the superior race and have dominion over all creatures, we may use animals for medical research.

Other Christians believe we are all God's creatures and so we have a responsibility to care for other animals.

So, in relation to medical research, because some Christians believe that humans are God's most important creation, they believe it is acceptable to use animals for medical research if that will help find cures for human diseases.

However, because a lot of Christians also believe that they have 'stewardship' over the earth, they should take care of animals and not treat them cruelly. This means that even though they may be used for medical research, they should be treated well and not made to suffer unnecessarily.
That covers the 4 topics in this Unit - just to remind you, here are the key points for each topic.

Topic 1 – Abortion

Catholics and Fundamentalist Christians believe life begins at conception and life is sacred. As God gave life only he can take it away. Abortion is therefore murder. Some Christians think that life begins later in pregnancy and so an early abortion is acceptable. Other Christians believe that abortion can be justified if the mother has been raped, the baby is deformed or there is a danger to the mother. The only reason Catholics might accept an abortion is the double effect, where there is danger to the mother and baby.

Topic 2 – Fertility treatment

Some Christians do not accept fertility treatment as a baby is a gift from God and it is God's will if the woman falls pregnant. Other Christians may accept fertility treatment as it is the loving thing to do to enable a couple to have a baby. Different types of fertility treatment include AIH - artificial insemination with the husband's sperm - and AID - artificial insemination with a donor's sperm. Some Christians do not approve of AID as it introduces a third person into the relationship.

IVF involves fertilising eggs outside the womb and storing them in a glass dish. Several fertilised eggs or embryos are implanted back into the woman. The remaining eggs are destroyed - which some Christians view as murder. Egg donation enables some women to become pregnant - but there is still a third person in the relationship. Surrogate mothers will become pregnant through AI or IVF and carry the baby to term, handing it over to the parents when it is born. Cloning enables an exact copy to be made from living tissue. The possibilities are to help people with replacement organs or other body parts. Some Christians oppose cloning as it is not natural.

Topic 3 – Euthanasia and suicide

Euthanasia is another name for assisted suicide. Most Christians do not agree with euthanasia as life is sacred and a gift from God.
People who are ill can go into a Hospice where they will be looked after and cared for. Some Christians would support euthanasia as it would be the loving thing to do to help someone who is in great pain or who has no quality of life. Christians disagree with suicide - only God has the right to take life. However, today people are more sympathetic to attempted suicides because they recognise the suffering these people are enduring.

Topic 4 – Animals and medical research

Some Christians believe we are the superior species, made by God in his image and given a soul. Humans have dominion (control and power) over other creatures so it is OK to use them for medical research for the benefit of humans. Most Christians also believe we have stewardship over the world and its animals so we should not cause unnecessary suffering to animals.

The sorts of questions you might get in this unit include:

For 1 mark: a definition question - 'What is ...?'

For 2 marks: state 2 facts about the topic.

For 3 marks: describe Christian beliefs about the topic. (Either give 3 separate points or give 2 points with explanation.)

For 6 marks: explain different Christian attitudes about the topic. This asks you to detail the different Christian attitudes and why they have them. When you plan to answer this 6-mark question, imagine 2 columns headed 'Some Christians' and 'Other Christians'.

The last question is for 12 marks. It will give a statement and ask you to discuss it, giving opposite viewpoints and also referring to Christianity. This should be answered in 3 sections. First give your own opinion with reasons for your beliefs, i.e. give the what and the why. Then follow on giving an opposite point of view with reasons for it. Secondly, write about what some Christians believe, with reasons and if possible a reference from the Bible or a quote. Thirdly, state what other Christians believe as a different viewpoint, giving reasons and also a reference from the Bible or a quote.
I hope you find this podcast useful to help you with your revision. Remember, ask your teacher for any extra help if there are some things you still do not understand.
On June 17, 2013 the U.S. Department of Health and Human Services (HHS) released the 2013 Alzheimer's disease plan update. The initial "National Plan to Address Alzheimer's Disease" was released in May 2012 under the 2011 National Alzheimer's Project Act (NAPA). President Obama signed the National Alzheimer's Project Act to support Alzheimer's research and help individuals and families affected by Alzheimer's disease. The National Plan to Address Alzheimer's Disease was developed by experts in Alzheimer's disease and aging to discover techniques to prevent and treat Alzheimer's disease by 2025, improve care for patients, enhance public awareness, and increase support for caregivers. The 2013 update to the National Plan highlights goals completed over the past year in addition to recommendations for additional action steps.

Highlights from the past year in the fight against Alzheimer's disease include the Alzheimer's Disease Research Summit, organized by the National Institutes of Health in May 2012 to bring together national and international experts, researchers, and advocacy groups to develop suggestions on how best to advance research. Several new Alzheimer's research projects were funded in areas including clinical trials, genetic sequencing, and development of new cellular models for Alzheimer's disease. These projects can be reviewed in the 2011-2012 Alzheimer's Disease Progress Report. The U.S. Department of Health and Human Services launched a website, www.alzheimers.gov, to spread public awareness and provide information and resources to people with Alzheimer's and their caregivers. The website reached a wide audience, with more than 200,000 visits in the first ten months.

The 2013 update to the National Plan addresses the various challenges presented by Alzheimer's disease and identifies actions to overcome them.
Specific additional actions recommended in the update include a cohesive Alzheimer’s disease training curriculum for primary care providers, assistance for families and communities affected by Alzheimer’s disease through legal services, and improvement in dementia services within state and local health networks.
Getting the perfect shot in wartime is not only about weapons. With over 30 countries involved in World War II and the loss of over 50 million lives, war photography captured the destruction and victories of the deadliest war in history. Led by Nazi leader Adolf Hitler, over one million German troops invaded Poland on September 1, 1939. Just two days later, Britain and France declared war on Germany—and the world was once again at war. Photographers were there every step of the way to capture the heroic triumphs and devastating losses. Here is a look at some of the most poignant moments captured.

After German soldiers swept through Belgium and Northern France in a blitzkrieg in May of 1940, all communication and transport between Allied forces were cut, leaving thousands of troops stranded. Operation Dynamo was quickly put in place to evacuate the Allies stuck along the beaches of Dunkirk, France. Soldiers waded through the water hoping to escape by rescue vessels, military ships, or civilian ships. More than 338,000 soldiers were saved during what would later be called the "Miracle of Dunkirk."

On December 7, 1941, the U.S. naval base Pearl Harbor was the scene of a devastating surprise attack by Japanese forces. Japanese fighter planes destroyed nearly 20 American naval vessels, including eight battleships, and over 300 airplanes. More than 2,400 Americans (including civilians) died in the attack, with another 1,000 Americans wounded. This event was the tipping point for the U.S. The next day, December 8, 1941, Congress approved Roosevelt's declaration of war on Japan. Two years after its bloody start, the U.S. had officially entered World War II. Just three days later, Japan's allies, Germany and Italy, declared war against the United States, which Congress reciprocated by declaring war on the European powers. The world was once again at war.
With the United States now involved in the war, men were joining the fight by the millions. Women stepped in to fill the empty civilian and military jobs once seen only as jobs for men. They replaced men in assembly lines, factories and defense plants, leading to iconic images like Rosie the Riveter that inspired strength, patriotism and liberation for women. Women also took part in the war effort abroad, even taking on leading roles behind the camera. This photograph was taken by photojournalist Margaret Bourke-White, one of the first four photographers hired for Life Magazine. She later became the first female war correspondent and the first woman allowed to work in combat zones during the war.

This photograph, taken in 1942 by Life Magazine photographer Gabriel Benzur, shows cadets in training for the U.S. Army Air Corps, who would later become the famous Tuskegee Airmen. The Tuskegee Airmen were the first black military aviators and helped encourage the eventual integration of the U.S. armed forces. With racial segregation still in place in the U.S. armed forces during this time, it was believed that black soldiers were incapable of learning to fly and operate military aircraft. As U.S. involvement in World War II increased, however, civilian pilot training programs expanded across the country, forcing inclusion.

After Hitler's invasion of Poland, more than 400,000 Jewish Poles were confined within a square mile of the capital city, Warsaw. By the end of 1940 the ghetto was sealed off by brick walls, barbed wire and armed guards as other Nazi-occupied Jewish ghettos sprang up throughout Eastern Europe. In April 1943, residents of the Warsaw ghetto staged a revolt to prevent deportation to extermination camps. The Jewish residents were able to stave off the Nazis for an impressive four weeks. In the end, however, the Nazi forces destroyed many of the bunkers the residents were hiding in, killing nearly 7,000 people.
The 50,000 ghetto captives who survived, like the group pictured here, were sent to labor and extermination camps. This photograph was found among others in a report by the SS General Stroop titled, "The Jewish Quarter of Warsaw is No More!"

The photographs that emerged from the Nazi-led concentration camps are among the most horrifying ever produced, let alone during World War II. The images remain clear in one's mind: families being captured and separated, emaciated bodies in barracks. This 1944 photograph shows a pile of remaining bones at the Nazi concentration camp of Majdanek, the second largest death camp in Poland after Auschwitz.

This photograph, titled "Taxis to Hell – and Back – Into the Jaws of Death," was taken on June 6, 1944 during Operation Overlord by Robert F. Sargent, United States Coast Guard chief petty officer and "photographer's mate." The photograph was originally captioned, "American invaders spring from the ramp of a Coast Guard-manned landing barge to wade those last perilous yards to the beach of Normandy. Enemy fire will cut some of them down. Their 'taxi' will pull itself off the sands and dash back to a Coast Guard manned transport for more passengers." The D-Day military invasion was an enormous coordinated effort with the goal of ending World War II. Today, it is regarded by historians as one of the greatest military achievements.

On January 27, 1945, the Soviet army entered Auschwitz and found approximately 7,600 Jewish detainees who had been left behind. Here, a doctor of the 322nd Rifle Division of the Red Army helps take survivors out of Auschwitz. They stand at the entrance, where its iconic sign reads "Arbeit Macht Frei" ("Work Brings Freedom"). The Soviet army also discovered mounds of corpses and hundreds of thousands of personal belongings. Prior to the liberation of the camps by the Allies, Nazi guards forced prisoners on what became known as death marches.
Throughout the month of January, over 60,000 detainees were forced to march some 30 miles in their frail, emaciated state, leading to the deaths of many prisoners. Those who survived were sent on to other concentration camps in Germany.

This Pulitzer Prize-winning photo has become synonymous with American victory. Taken during the Battle of Iwo Jima by Associated Press photographer Joe Rosenthal, it is one of the most reproduced and copied photographs in history. During the battle, marines took an American flag to the highest point on the island: Mount Suribachi. U.S. Marine photographer Louis Lowery captured the original shot, but several hours later more Marines headed to the crest with a larger flag. It was on this second attempt that the iconic image was snapped. Three of the six soldiers seen raising the flag in the famous Rosenthal photo were killed during the Battle of Iwo Jima.

The Battle of Iwo Jima image was so powerful in its time that it even caused copycats to stage similar images. This photograph was taken on April 30, 1945, during the Battle of Berlin. Soviet soldiers took their flag in victory and raised it over the rooftops of the bombed-out Reichstag. The photograph was also manipulated: the photographer concealed the wrists of the soldiers, which were covered in wristwatches looted from the Germans. Stalin had given his soldiers strict instructions not to loot, so the photo manipulation was meant to avoid harsh consequences, discipline and possibly even death.

On August 6, 1945, the Enola Gay dropped the world's first atom bomb over the city of Hiroshima. Prior to the outbreak of the war, American scientists had been considering the development of atomic weapons to defend against fascist regimes. Once the U.S. joined the war, "The Manhattan Project" began creating the bomb that would cause this mass destruction.
Oddly enough, the bomb was nicknamed "Little Boy." It exploded 2,000 feet above Hiroshima with an impact equal to 12,000-15,000 tons of TNT. This photograph captured the mushroom cloud. Approximately 80,000 people died immediately, with tens of thousands more dying later due to radiation exposure. In the end, the bomb wiped out 90 percent of the city.

Photographer Alfred Eisenstaedt captured this photo in Times Square on Victory over Japan Day ("V-J Day"), August 14, 1945. Sailor George Mendonsa saw dental assistant Greta Zimmer Friedman for the first time amid the V-J Day celebration. He grabbed and kissed her. This photograph would go on to become one of the most well-known in history, while also stirring up controversy. Many women have claimed to be the nurse over the years, and some say it depicts a nonconsensual moment, even sexual harassment.
Monday, June 11, 2012 Recently, I have had a few parents of kindergarten going kids ask for different ways to teach their kids to recognize letters. I always find that kids, especially little boys, like gross motor activities. So, a few suggestions: play hop scotch with letters, but tell the kids which letter to jump onto (good for numbers, as well). Make paper "targets" with letters on them and have the child throw a ball at them. You can put them on your wall, garage or even on the driveway and have them throw at the letter you state. I recommend focusing on a few letters at a time. Once these are mastered, add some more (maybe 2-3). It's a good idea to teach both upper and lower case letter recognition at the same time. Learning is easier when the kids are having fun!
Influencing the Brain Through Deep Brain Stimulation

Before we continue, we'll need to review a few facts about the brain. You may already know that the brain is divided into many specialized areas, each responsible for different tasks. There are separate regions of your brain that play a role in controlling muscle movements, memory and even emotions. These separate regions of the brain work together to accomplish larger goals. When injury or disease prevents any one brain region from performing its role, the larger goals might not be met.

A good example of this is the basal ganglia, a group of brain structures that work together to help control body motions. As movements are planned and coordinated in the brain, information in the form of electrical brain activity flows between the structures of the basal ganglia. Each structure plays a role in modifying and refining the information to help fine-tune muscle movements. When any part of the basal ganglia is impaired, the normal flow of information is altered. Widespread movement control problems are often the result, as in the case of Parkinson's disease.

To find out where deep brain stimulation comes in, let's stick with the example of the basal ganglia. As mentioned above, the normal electrical flow of brain activity throughout the basal ganglia is disrupted by the effects of Parkinson's disease. The purpose of an implanted DBS electrode is to counteract this abnormal brain activity, altering it in a way that decreases the disease symptoms. The electrode accomplishes this by targeting one of several possible structures within the basal ganglia. For Parkinson's disease, this is most commonly the subthalamic nucleus (STN). A deep brain stimulation electrode implanted in the STN sends out pulses of electricity, modifying its behavior. By altering the behavior of the STN, the electrode is ultimately altering all of the brain activity that the STN normally affects. This makes the DBS electrode very influential, since the STN is one of several structures in the basal ganglia that all work together.

Sounds simple enough, right? Well, what the experts haven't fully worked out yet is exactly how DBS influences the brain structures it stimulates -- although there are several likely possibilities. For example, the quickly repeating electrical signals emitted by the DBS electrode may act to block irregular brain activity. In this scenario, the effects of the electrical stimulation can be thought of as a gate blocking certain pathways of corrupted information. Another possibility is that the regular pattern of electrical pulses from the implanted DBS electrode acts to override irregular flows of information. In other words, the electrical stimulation of the DBS device drowns out the abnormal patterns of brain activity.

The complete story of how DBS achieves its effects is probably much more complex. It's likely that the same pattern of deep brain stimulation affects different parts of the same brain structure in completely opposite ways. Although the mechanisms of DBS aren't yet fully worked out, doctors have enough experience using DBS to feel confident of its safety and effectiveness. Now that you have an idea of how a DBS device works, let's take a look at how it's implanted in the brain.
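The "override" hypothesis turns on one contrast: DBS pulses are perfectly periodic, while diseased activity is not. Here is a toy sketch of that contrast — purely illustrative, not a physiological model; the 130 Hz stimulation rate and the jittered 20 Hz "diseased" spiking are assumed values for the example:

```python
# Toy comparison of a DBS-style pulse train (perfectly regular) with
# irregular "diseased" spiking (a base rate plus random jitter).
import random

def pulse_times(rate_hz, duration_s):
    """Perfectly regular pulse train, like a DBS electrode's output."""
    period = 1.0 / rate_hz
    return [i * period for i in range(int(duration_s * rate_hz))]

def jittered_times(rate_hz, duration_s, jitter_s, seed=0):
    """Irregular spiking: a regular base rate plus random timing jitter."""
    rng = random.Random(seed)
    return [t + rng.uniform(-jitter_s, jitter_s)
            for t in pulse_times(rate_hz, duration_s)]

def interval_std(times):
    """Std. deviation of inter-spike intervals (0 means perfectly regular)."""
    gaps = [b - a for a, b in zip(times, times[1:])]
    mean = sum(gaps) / len(gaps)
    return (sum((g - mean) ** 2 for g in gaps) / len(gaps)) ** 0.5

dbs = pulse_times(130, 1.0)                 # DBS is often cited near 130 Hz
diseased = jittered_times(20, 1.0, 0.015)   # irregular low-rate spiking

print(interval_std(dbs))       # ~0: the stimulation is strictly periodic
print(interval_std(diseased))  # clearly > 0: the spiking is irregular
```

The point of the sketch is only that a strictly periodic input is statistically very different from the irregular activity it is thought to gate or drown out.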
According to EcoWatch, Americans use "roughly 500 billion plastic bags and 35 billion plastic water bottles annually. On average, an American discards approximately 185 pounds of plastic every year." Much of this plastic waste ends up in our oceans. Scientists, in a research study published in 2014, estimate that more than 5 trillion pieces of plastic pollution are floating in our oceans. From water bottles to flip-flops, this plastic waste poses a serious threat to marine animals. These animals often confuse the plastic waste for food and either starve to death or get caught in plastic packaging and suffocate. If you're curious about the study, titled Plastic Pollution in the World's Oceans: More than 5 Trillion Plastic Pieces Weighing over 250,000 Tons Afloat at Sea, visit http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0111913#s3.

Here are 4 simple ways you can help save the ocean:

1. Buy a bracelet. When you buy one bracelet from 4Ocean, you are funding the collection of one pound of trash from the ocean. The bracelets are made from 100 percent recycled materials: the beads are made from recycled glass and the cord from recycled water bottles. 4Ocean facilitates beach and offshore cleanups. To date, they have removed almost 48,000 pounds of waste from the oceans.

2. Purchase some hand soap. The cleaning products company Method offers 2-in-1 dish and hand soap in an ocean plastic bottle. Method recovers plastic from the ocean and makes soap containers from it. By using ocean plastic, Method is preventing the creation of new plastic for these products. They have also removed over one ton of plastics from beaches. To find out more about Method's ocean plastic, visit http://methodhome.com/beyond-the-bottle/ocean-plastic/.

3. Pick up some new sunglasses. Norton Point is an eyewear brand based on the island of Martha's Vineyard, MA.
The focus of the company is to "produce sustainable and environmentally conscious products that stand the test of time." They have pledged to remove a pound of ocean waste for every pair of eyewear purchased. They also reinvest 5 percent of net profits into research, education and development efforts aimed at stemming the impact of ocean plastic. To find out more about Norton Point eyewear, visit https://www.nortonpoint.com/.

4. Rock a new graphic tee. For each purchase made, United By Blue removes a pound of trash from waterways and oceans. To date, they have hosted 177 cleanups in 26 different states, removing 995,291 pounds of trash from oceans and waterways. To sign up for a cleanup in your area, visit https://unitedbyblue.com/pages/cleanups. United By Blue started its business with "a handful of organic cotton tees" and has since expanded to include apparel, bags and accessories "for people who care about the outdoors." United By Blue uses sustainable materials like organic cotton, recycled polyester and bison fiber. To find out more about United By Blue and their products, visit https://unitedbyblue.com/.
The pictures hanging up are from my niece and nephew. My kids don't draw. They don't color. The last time I recall one of my kids holding a pen was when my son wrote his Seven Games password on his shorts because he couldn't find a piece of paper. My kids type.

This isn't what I imagined. I grew up with art supplies everywhere. I assumed all kids like painting and drawing and writing loopy letters to pass time. But now I understand that cursive is an anachronism, and maybe printing is next, and kids pass time on iPhones. Yet there are benefits to handwriting that you don't get when you only use a keyboard:

Handwriting connects our brain to our body. This is especially important for kids with vestibular issues. Kids on the autism spectrum often have bruises and odd gaits because they don't connect their brain and their body very well. These issues are not small: New York City paid for four hours a week of occupational therapy for three years to help my older son overcome these issues. So teaching kids on the spectrum to write will be helpful. (But not if it's a huge uphill battle. They can learn to color in the lines or play a musical instrument with many of the same results.)

Handwriting helps with memory and recall. This was especially important before the Internet, when the academic class, residing in the Ivory Tower, differentiated itself by memorizing facts. (Please, someone quiz me on big dates in the history of individualism in western thought.) In the age of the Internet, people who are great at memorizing slipped quickly from the realm of Einstein to the realm of special ed. (Topic for some enterprising blogger: is Aspergers a function of the brain or of our time in history?) At any rate, we actually synthesize information faster on the computer, as we move ideas around (think: copy-and-paste is the killer app for intellectuals). So almost no one needs to write to memorize anymore.

Handwriting allows us to mind map.
To-do lists are linear, but the way we think (when we create lists of ideas) is not so linear; it's more like interrelated, overlapping maps. We think in related chunks. My first novel was actually my attempt to answer the question: how can someone write a linear story when our thoughts are non-linear, repetitive, and incomplete? And I confess that when I want to get a handle on how my brain is thinking about big ideas, I'm more apt to make a picture with bubbles and arrows and squares than a list. There's plenty of software to make mind maps; I just don't know how to use it.

So none of that is a great argument for teaching handwriting to homeschoolers. That said, I let my kids type everything. The truth is that some people will benefit from writing by hand, and they will write. And some people will benefit from a keyboard, and they'll use it. I find myself writing by hand a lot even though I write on a keyboard for my job.

There is no perfect answer for everyone when it comes to writing by hand or not, just as there are benefits to learning to play an instrument but no right answer to whether your kid should play one. Or long-distance running. Or cooking. All skills have benefits. The trick is to let your kids decide which skills and which benefits mean the most in their own life.
After the Munich agreement and the Czech surrender of the Sudetenland to Germany, German authorities expelled these Jewish residents of Pohorelice from the Sudetenland to Czechoslovakia. The Czech government, fearing a flood of refugees, refused to admit them. The Jewish refugees were then forced to camp in the no-man's-land between Brno and Bratislava on the Czech frontier with Germany.

We would like to thank The Crown and Goodman Family and the Abe and Ida Cooper Foundation for supporting the ongoing work to create content and resources for the Holocaust Encyclopedia.
During the winter, the days get short and electricity bills become hard to afford for many users. Additionally, some indoor spaces stay gloomy regardless of what kind of lighting you are using. If you want to add natural light to your home, solar tubes are a simple, cost-effective solution. They provide soft, gentle, natural sunlight; however, despite their advantages, solar tube lighting has certain drawbacks. Here is a review of solar tube lighting that explores those drawbacks in detail.

Solar tube lighting

Solar tubes, also called sun tunnels and light tubes, are narrow tubes placed on the roof of a house to channel natural daylight indoors. Solar tube lights can be as long as required and are available in different sizes, usually varying in diameter from 10 to 22 inches. It is generally assumed that a 10-inch solar tube delivers light equal to three 100-watt bulbs. Solar tubes are popular in areas where electric light is used in the daytime; they help light up dark hallways, closets, laundry rooms, kitchens, bathrooms, and staircases. If a mechanical chase or a closet is available, solar tubes can run through them and provide light even to lower floor levels.

How does solar tube light work?

A solar tube is usually cylindrical and is capped with a clear, weather-resistant glass dome or an acrylic cap. This dome catches the sunlight and bounces it onto the inner surface of the tube, which is lined with a highly polished material so that the sunlight is amplified as it passes down from the dome. After bouncing off many surfaces, the light reaches the inside of your home, where an acrylic dome installed in the ceiling emits the natural light.
Solar tube lights also have a diffuser that helps spread the light evenly in the room, providing bright light and preventing the formation of hotspots from the sunlight. Because light bounces repeatedly inside the tube, there is some risk of UV rays entering your home. Even so, solar tube lighting has emerged as an easy way to use renewable, clean energy without paying the cost of energy.

Solar tube lights vs. skylights

Solar tube lights

Solar tube lights are a cost-effective, affordable way to get natural sunlight. Their total cost, including installation, is around $100, and they are installed on the roof. Furthermore, they come with an installation assistance kit for extra support. Compared to skylights, solar tubes are more energy efficient and gain less heat. Solar tubes also require less interior finishing during setup. These lights can be a great addition to your home and will also prevent stale air from entering.

Skylights

Skylights are windows, similar in concept to solar tubes, that are placed into the roofs of houses. However, they take longer to install, are comparatively expensive, and require more materials and labor since they are large. The installation process is difficult and costs around $2,000. Skylights are very efficient in terms of lighting, but they can also warm the room since they allow solar energy to pass through, and they only partially filter UV light. These lights are very popular since they provide two benefits: lighting and a complete sky view right from your home. Solar tubes, on the other hand, do not provide a view of the sky; since they are narrow and set deeply into the roof, you cannot use them for sky gazing. That said, skylights may be a bit expensive, but they add to the resale value of your home.
They are a true definition of grace and elegance and give your home a new appearance, adding to the value of your house.

Benefits of solar tube lighting

Although the benefits of solar tube lights have been mentioned above, here is a list that highlights their inherent advantages:

- Solar tube lights are bright and convenient.
- They are easy to install.
- They are comparatively less expensive than skylights.
- They offer great effectiveness and a reduced risk of leakage.
- They allow the design to be flexible.
- They do not heat the room and retain heat in winter.
- Solar tube lights help avert seasonal affective disorder, since they are a constant source of daylight.

Drawbacks of solar tube lighting

High initial cost

Solar tube lighting has a high initial cost, which includes the cost of installing the system; however, the overall cost is lower than other options. Moving from regular power to solar energy will still be an expensive step, since generating the same amount of energy as a conventional power system requires a high level of investment.

Not suitable for all types of houses

A common assumption is that innovative devices such as solar tubes are compatible with all types of houses, but this is not always true; some consumers face a lack of usability. Solar tubes need to be installed on houses with conventional roofs. Think about it for a moment: it would be difficult to mount a solar tube on a roof with an A-shaped frame.

Requires big space

Another factor to consider when installing a solar tube light is space. It can be a daunting task to install a solar tube light in a house with no room for it; the more space you have, the more energy you will be able to collect.
However, if you have to light up a significant area, there will be additional requirements for installing solar tube lighting beyond space, which is a prerequisite.

No ventilation

It is not possible to install ventilation with solar tube lighting. This means that you cannot open solar tube lights to release heat or let in fresh air, as you can with a standard skylight. One should also note that the longer the solar tube, the less light it delivers, so it can be ineffective in some specific areas.

No control of lighting

Solar tubes do not have a power switch, so the light cannot be controlled the way other light sources, such as light bulbs, can. You can only control the amount of light a solar tube provides: the product comes in two options, 10 and 14 inches in diameter, and a diffuser or a window film can be used to control the lighting. This single factor can be a deal-breaker, as there might be days when you have extreme exposure to sunlight.

Solar tube lighting is a single source of energy

One basic problem with solar energy is that sunlight is the primary source. To provide a sufficient amount of light, you need a constant supply of sunlight; on days when the weather is bad or there is not enough sun, you will have to rely on stored solar energy. Generally, these lights are used to complement artificial lights in homes that might have some structural defects. They supplement the lighting of the house, but they do not enrich its interior design.

Water of condensation

A very common problem reported by consumers using solar tube lights is condensation. This problem occurs in humid areas, where moist air gets into the solar tube, leading to water condensing and accumulating on the dome of the light.
Accumulated water can become a problem, since it can reduce the life span of the light. Moreover, it can also leak into your rooms, ruining your paint and walls.

Before making an investment, it is always good to weigh the benefits and drawbacks. Knowing the drawbacks means preparing yourself for what to expect from a product. We have tried to highlight all the things you should know before buying solar tube lighting, including its inherent drawbacks. If you have an indoor space that needs natural daylight, then solar tubes can be a good option: they provide a convenient, cost-effective solution. However, like all products, solar tubes have certain drawbacks, so considering all this information, we hope you will be able to make a sound decision.
Learning to give and receive feedback is an important part of growing as a writer. Anyone can learn to give constructive feedback! Here are some tips to get started:

There are many different types of feedback. Students may comment on grammar, punctuation or spelling, or they may offer big-picture ideas about the characters or story. It’s difficult to receive feedback when it comes in a form you’re not expecting (especially if it’s an area you haven’t worked on yet). Make sure both the writer and the reviewer know what kind of feedback is expected.

Help your students frame their feedback with sentence starters.

Too often, when we’re sharing our ideas and suggestions, we skip directly to what we don’t like. One useful activity helps students offer well-rounded feedback: have them say two things they liked (stars) followed by one thing they didn’t like (wish).

Constructive feedback is descriptive feedback. If your students liked something, they should try to explain why. If something didn’t work for them, saying why may help the author fix the problem more readily.

If your students struggle with providing only the requested type of feedback, try putting them in groups and assigning each person a different job. For example, one reviewer might focus on punctuation, while others focus on story or spelling.

Novice reviewers can practice these ideas on books they read in class or at home. As writers and readers gain experience, encourage students to help each other. The ability to give and receive constructive feedback is a skill that will serve young writers throughout their lives.
Eyes on the Sky, iPhone In Hand: New App Helps Skywatchers Count Meteors, Log Data, Aid NASA Research

Washington – A new NASA application for mobile devices enables skywatchers to better track, count and record data about sporadic meteors and meteor showers anywhere in the world. The "Meteor Counter" app enables astronomers -- laypersons and experienced meteor hunters alike -- to easily capture meteor observations with the software's innovative, piano-key interface. As the user taps the keys, the app records critical data for each meteor, including time and brightness. Once each observing session ends, that data is automatically uploaded, along with observer information, to NASA researchers for analysis.

The new app was developed by Dr. Bill Cooke, the head of NASA's Meteoroid Environment Office at NASA's Marshall Space Flight Center in Huntsville, Ala., and Dr. Tony Phillips of spaceweather.com.

"We developed the iPhone app to be fun and informative, but also to encourage going outside to observe the sky," said Cooke. "Our hope is the app will be useful for amateur and professional astronomers – we want to include their observations in NASA’s discoveries – and have them share in the excitement of building a knowledge base about meteor showers."

A recorded audio track is optional. Users can record commentary as they input data, to be sent to NASA along with the numerical information. Researchers suggest this function will be ideal for identifying shower meteors or one-time events.

The Meteor Counter is designed for all kinds of observers, ranging from experts with experience in science-grade meteor observations to first-time skywatchers who might never have seen a meteor before. "The beauty is that it gradually transforms novices into experts," says Cooke. "As an observer gains experience, we weight their data accordingly in our analyses."
The Meteor Counter app also provides a newsfeed and event calendar -- both updated by professional NASA and meteor scientists -- to keep users informed of the latest meteor sightings and upcoming showers. The app is currently available for iPhone, iPad and iPod Touch. To download the free app, visit: http://itunes.apple.com/us/app/meteor-counter/id466896415

A version for other mobile devices will be available in the near future. Complete instructions for using the Meteor Counter app are available at: http://meteorcounter.com/

For more information about NASA's Meteoroid Environment Office, visit: http://www.nasa.gov/offices/meo/home/index.html
X chromosomes, retrogenes and their role in male reproduction
Trends in Endocrinology & Metabolism, Volume 15, Issue 2, 1 March 2004, Pages 79-83
P. Jeremy Wang

Abstract: Retrogenes originate from their progenitor genes by retroposition. Several retrogenes reported in recent studies are autosomal, originating from X-linked progenitor genes, and have evolved a testis-specific expression pattern. During male meiosis, sex chromosomes are segregated into a so-called ‘XY’ body and are silenced transcriptionally. It has been widely hypothesized that the silencing of the X chromosome during male meiosis is the driving force behind the retroposition of X-linked genes to autosomes during evolution. With the advent of sequenced genomes of many species, many retrogenes can be identified and characterized. The testis-specific retrogenes might be associated with human male infertility. My goal here is to integrate recent findings, highlight controversies in the field and identify areas for further study.

Processed pseudogenes: the ‘fossilized footprints’ of past gene expression
Trends in Genetics, Volume 25, Issue 10, 1 October 2009, Pages 429-434
Ondrej Podlaha and Jianzhi Zhang

Abstract: Although our knowledge of the genes and genomes of extinct organisms is improving as a result of progress in sequencing ancient DNA, the transcriptomes of extinct organisms remain inaccessible, owing to the rapid degradation of messenger RNA after death. We provide empirical evidence that gene expression levels in the reproductive tissues of mice and during early mouse development correlate highly with the rate of inherited retroposition: the source of processed pseudogenes in the genome.
Thus, processed pseudogenes might serve as fossilized footprints of the expression of their parent genes, shedding light on ancient transcriptomes that could provide significant insights into the evolution of gene expression.

X for intersection: retrotransposition both on and off the X chromosome is more frequent
Trends in Genetics, Volume 21, Issue 1, 1 January 2005, Pages 3-7
Pavel P. Khil, Brian Oliver and R. Daniel Camerini-Otero

Abstract: As the heteromorphic sex chromosomes evolved from a pair of autosomes, the sex chromosomes became increasingly different in gene content and structure from each other and from the autosomes. Although recently there has been progress in documenting and understanding these differences, the molecular mechanisms that have fashioned some of these changes remain unclear. A new study addresses the differential distribution of retroposed genes in human and mouse genomes. Surprisingly, chromosome X is a major source and a preferred target for retrotransposition.

Trends in Genetics, Volume 22, Issue 2, 69-73, 1 February 2006
Retroposition of processed pseudogenes: the impact of RNA stability and translational control
a Genetic Information Research Institute, 1925 Landings Drive, Mountain View, CA 94043, USA
b Institute of Molecular Genetics, Academy of Sciences of the Czech Republic, Flemingovo 2, Prague CZ-16637, Czech Republic

Human processed pseudogenes are copies of cellular RNAs reverse transcribed and inserted into the nuclear genome by the enzymatic machinery of L1 (LINE1) non-LTR retrotransposons. Although it is generally accepted that germline expression is crucial for the heritable retroposition of cellular mRNAs, little is known about the influences of RNA stability, mRNA quality control and compartmentalization of translation on the retroposition of processed pseudogenes.
We found that frequently retroposed human mRNAs are derived from stable transcripts with translation-competent functional reading frames that are resistant to nonsense-mediated RNA decay. They are preferentially translated on free cytoplasmic ribosomes and encode soluble proteins. Our results indicate that interactions between mRNAs and L1 proteins seem to occur at free cytoplasmic ribosomes.
Smoking is bad for many reasons, though most people generally associate it with illnesses like emphysema and lung cancer. It’s not just your lungs that are affected, however. Your heart feels the strain of smoking significantly, and the nasty habit can seriously increase your risk of heart disease. With cigarette smoking causing one in five deaths in America, it’s important to understand the damage it can do to your heart — and how quitting can make you healthier almost immediately.

What Smoking Harms
Smoking cigarettes directly affects your heart in several ways, and can easily be extremely harmful. One of the biggest threats smoking poses is to your blood cells. The chemicals found in tobacco damage blood cells as well as your blood vessels, and can lead to diseases like Atherosclerosis — a disease that develops when plaque builds up in the arteries. Plaque buildup leads to a lack of blood flow to the organs, which can be devastating to your health. If plaque builds up in the coronary arteries, that can lead to a heart disease known as Coronary Heart Disease. Coronary Heart Disease causes heart attack and heart failure, and can also potentially kill those who suffer from it.

When it comes to the heart, smoking can not only be a sole cause of several types of heart disease, it can also add to the risk of heart disease when combined with high blood pressure, obesity and high cholesterol levels. It is widely accepted that there is no safe number of cigarettes to smoke. Even one can affect your health greatly, and smoking more just heightens those risks. Even secondhand smoke, which occurs when you inhale cigarette smoke from somebody smoking nearby, can lead to heart problems. In fact, almost 40,000 people die each year from heart disease caused by secondhand smoke.

The biggest piece of advice that any doctor would give when it comes to smoking is that you should quit, and quit sooner rather than later. Why does it matter how soon you give up the habit?
That’s because studies have shown that giving up smoking can improve your health almost immediately. A University of Alabama study found that it can take as little as eight years for a smoker’s risk of heart disease to drop to the level of a non-smoker’s, whereas previous research indicated that it would take around 15 years to achieve that much of a decrease in risk.

“It’s good news,” the study’s author, Ali Ahmed, MD, MPH, said. “Now there’s a chance for even less of a waiting period to get a cleaner bill of cardiovascular health.”

Even better, studies have shown that your heart health, as well as your overall health, can actually begin to improve within days of quitting. Your ability to taste and smell increases after just 48 hours, your breathing improves within 72 hours and your risk of a heart attack drops to half that of a continuing smoker in just one year.

“These findings [in the new research] underscore what we already know, but give doctors more power to encourage people to stop smoking,” Merle Myerson, MD, said of the findings.

If you’re a smoker, and especially one with prior risk for heart disease, knowing what a threat you’re posing to your heart health is vitally important. Quitting sooner rather than later could be the difference between life and death in some cases, but it’s never too late to give up the habit in favor of a healthy lifestyle.
According to a recent Pew study, about 90% of those ages 12-17 use the Internet, but only 66% of adults do so. However, one of Jakob Nielsen’s usability studies seems to show that adults make more proficient use of the Internet, in contrast to persistent stereotypes: Many people think teens are technowizards who surf the Web with abandon. It’s also commonly assumed that the best way to appeal to teens is to load up on heavy, glitzy, blinking graphics. Our study refuted these stereotypes. Teenagers are not in fact superior Web geniuses who can use anything a site throws at them. We measured a success rate of only 55 percent for the teenage users in this study, which is substantially lower than the 66 percent success rate we found for adult users in our latest broad test of a wide range of websites. (The success rate indicates the proportion of times users were able to complete a representative and perfectly feasible task on the target site. Thus, anything less than 100 percent represents a design failure and lost business for the site.) Teens’ poor performance is caused by three factors: insufficient reading skills, less sophisticated research strategies, and a dramatically lower patience level. I’ve wondered if the growing popularity of blogs would help to counteract the declining literacy among young people: In fact, fewer kids are reading for pleasure. According to data released last week from the National Center for Educational Statistics’s long-term trend assessment, the number of 17-year-olds who reported never or hardly ever reading for fun rose from 9 percent in 1984 to 19 percent in 2004. At the same time, the percentage of 17-year-olds who read daily dropped from 31 to 22. On the one hand, blogs require their readers to read and interpret information; on the other hand, they’re often undemanding both in length and content. We’ll see.
1) a conjunction of planets (two planets coming close together in the sky)
2) a conjunction of a planet with a bright star
3) an “occultation” in which the Moon passes in front of a planet
4) a comet
5) a supernova

More imaginative suggestions include a flying saucer, an angel, and the Shekinah glory (the light or radiance of God occasionally made visible to humans). Although we see aster in Revelation 1 as the symbol for a messenger, or angel, nothing in the Matthew 2 passage indicates a symbolic or metaphoric usage. Likewise, though New Testament references to Shekinah can be found (Matthew 17:1–3; Luke 2:9, Revelation 1:12–16), none is associated with the word aster. The “glory of the Lord” mentioned in Luke 2:9 refers to the radiance that surrounded the shepherds outside of Bethlehem, apparently seen by no one other than the shepherds. Thus, it seems reasonable to propose that the aster followed by the magi refers to an astronomical object or phenomenon.

One challenge to the supernova explanation is that such a phenomenon can be so spectacular as to be visible in broad daylight. Nearly all sky watchers everywhere would have seen and recorded it. Observers in China, India, and Egypt kept meticulous records of supernova events, and yet the Christmas star received no mention in their extensive documentation. King Herod and the Jewish religious leaders in Jerusalem seemed oblivious to the star (Matthew 2:1–3). The shepherds outside of Bethlehem “keeping watch over their flocks at night” on the eve of the Messiah’s birth made no note of any astonishingly brilliant star (Luke 2:8–20). Perhaps they would have been less startled and terrified by the angels’ visit (Luke 2:9–10) had a dazzling stellar object presaged that visit.

The explanation offered by lawyer Rick Larson in his DVD presentation encounters a similar challenge.
Larson asserts that the star is a conjunction of Jupiter and Venus (the two brightest planets in the sky), a meeting so close that they merged in the sky to appear as a single object. Such an event, while brief, would have been so bright as to be visible in the daytime. Close conjunctions of Jupiter and Venus did occur in 2 BC (a separation of 1 arc minute at its closest moment, or one-thirtieth of the Moon’s diameter in the sky) and also in 3 BC (closest separation = 4 arc minutes, or one-seventh of the Moon’s diameter in the sky). However, such events would have made an indelible impression on the shepherds as well as on King Herod and the Jewish religious leaders. Further, they would have been observed as two objects, rather than one aster, and as two events, rather than as one and the same aster indicated by the text. Another difficulty for Larson is that the dates for these two conjunctions by most scholars’ calculations come too late. The best historical scholarship places the date of Herod’s death at 4 BC. Further, the two conjunctions occurred only ten months apart. Herod’s command to kill boys “two years old and under in accordance with the time he had learned from the magi” (Matthew 2:16) seems out of alignment with this explanation. Comets, too, seem unlikely candidates. They are typically so familiar as to warrant no special response from the magi. Further, comets are so well documented throughout history that if one did occur, especially an unusually bright one, at the time of Christ’s coming, it would likely show up in the records of Chinese, Indian, Egyptian, and Greek astronomers. The lunar occultation explanation meets with the same difficulty. The Moon frequently passes in front of, or occults, a planet. In such an event the planet disappears from view only briefly—ranging from a few seconds up to 55 minutes. Such events seem too common and unspectacular to create a stir among the magi. 
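The angular comparisons above are easy to verify. Here is a quick sketch, assuming a mean apparent lunar diameter of about 31 arc minutes — a figure not given in the post; the true value varies between roughly 29 and 34 arc minutes over the Moon's orbit:

```python
# Mean apparent diameter of the Moon in arc minutes (assumed value; the
# actual figure varies between ~29 and ~34 arcmin over the lunar orbit).
MOON_DIAMETER_ARCMIN = 31.0

def fraction_of_moon(separation_arcmin):
    """Express an angular separation as a fraction of the Moon's apparent diameter."""
    return separation_arcmin / MOON_DIAMETER_ARCMIN

# 2 BC conjunction: 1 arcmin separation -> about one-thirtieth of the Moon's diameter
print(round(fraction_of_moon(1.0), 3))
# 3 BC conjunction: 4 arcmin separation -> roughly one-seventh to one-eighth
print(round(fraction_of_moon(4.0), 3))
```

Both fractions agree with those quoted above to within the Moon's natural variation in apparent size.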
(Tomorrow: "Does any other option seem plausible?")

[This series is taken from Hugh Ross on "Reasons To Believe"]

If this post got you to thinking, please leave a comment and join the conversation.
Imagine a Fresh Start | Turn Student Struggles into School Success

Why Is My Child Struggling In School?
“My teacher hates me!” Jamie exclaimed as she threw her backpack onto the kitchen floor. “She always calls me out for talking or not paying attention, but I am paying attention!”

It can be disheartening for parents to hear that your child is having a hard time with a teacher, or that they may have difficulty staying on task. And when their grades are poor, too, you may wonder, “How do I help my child improve in school?”

Intervention Ideas for Struggling Students

Talk to Your Student About Why School is Hard
Start by gathering more information about what specifically is challenging for your child at school. Ask your child for examples of what happens at school while acknowledging their feelings. “It sounds like you had a rough day at school. I’m sorry to hear that. Help me picture what happened. What did your teacher say? What was happening right before that?” Take notes on activities or subjects that cause difficulty in school. Make note of the emotion your student experiences: is she stressed at school or bored, frustrated or uninterested in learning?

Develop Strategies for Promoting Positive Behavior in the Classroom
Use imagery language to help your child picture what they could do differently, based on what the issue may be. “I know you were so excited about going to the movies over the weekend. When do you picture is the best time to tell your friends? During homeroom announcements or during recess?” Offering choices can make it easier for students who have difficulty verbalizing their thoughts or are hesitant to talk about how school is going. Picturing the right choice to make can help a child with behavior problems in school.

Communicate with the Teacher about Supporting Your Struggling Learner
Start a conversation with the teacher about how to help your struggling student. Because tone can often be misinterpreted, it’s usually best to meet in person.
Sending a brief email to set up a time to chat may be helpful. Keeping a positive and respectful tone may help keep things productive: “Jamie seems to be having a tough time meeting the classroom expectations lately. I would love to meet one day to discuss what I can do to help support her in school.” The teacher may be able to shed light on the times of day or activities that are tricky for your student. She may suggest homework and assignments you can help with at home, which would help your student feel more prepared in the classroom. There may even be extra help available in the classroom or after school.

Signs of a Deeper Issue: Supporting Struggling Learners
Despite everyone’s efforts, students may continue to struggle with schoolwork. A renewed focus on your child’s homework can often reveal difficulties with the material; weaknesses in learning and literacy skills may become more apparent. They may also start to share more detail about their classroom struggles.

How To Help Struggling Learners
For many struggling students, behavior problems often begin in the classroom when the workload becomes too hard or when they realize they aren’t able to read as well or as quickly as their peers. They know they can’t always do the assignments presented to them, so it becomes easier to find new and clever ways to avoid tasks. Gifted children can present behavior issues in school when expectations don’t align with performance. Students with a high IQ, for example, are often labeled “lazy” because it is assumed that they should be able to read and comprehend well. For these bright students, it’s especially tough to see how much easier reading is for their peers.

No parent wants to feel like their child is falling behind in school. For many, the first step in helping struggling learners succeed in school is addressing underlying learning challenges. If the foundational sensory-cognitive skills for reading are not in place, students may struggle to reach their learning potential.
Learning Challenges: Symbol Imagery and Concept Imagery
A cause of difficulty in establishing sight words and contextual fluency is difficulty in visualizing letters in words. This is called weak symbol imagery. A primary cause of language comprehension problems is difficulty creating an imagined gestalt. This is called weak concept imagery. This weakness in comprehension causes individuals to get only “parts” of information they read or hear, but not the whole.

Signs of weak symbol imagery can be easier to spot in struggling students (slow, labored reading, difficulty with spelling) than those of weak concept imagery (difficulty with following directions, answering open-ended questions, grasping humor, mental mapping). Students struggling with symbol imagery often have difficulty reading words but can comprehend, and may be labeled dyslexic. Weakness in comprehension can often present as low motivation or a short attention span in students struggling in school.

How to Help a Child Struggling with Reading
Finding the right intervention can make all the difference for children struggling with reading and comprehension. Individualized sensory-cognitive instruction can address the specific learning challenge of each child and help them find success in school.

Watch the video below to hear from a mom whose daughter was struggling at school and wasn’t able to read despite being extremely bright. She describes how Lindamood-Bell instruction changed their family’s life: “She took a final assessment at the end, and the results were just incredible. More than what I had hoped for.”

Learn more about how Lindamood-Bell instruction can turn this school year around for your struggling student. An accurate Learning Ability Evaluation is the first step in teaching individuals to learn to their potential. Click here to find a Learning Center near you.
Hubble Ultra Deep Field Image - Part D

No Princess is sending holographic help messages. No Han Solo is warming up a Millennium Falcon to jump into hyperdrive. We don't even have a Death Star waiting around the corner. But what we do have is evidence that astronomers have pushed the Hubble Space Telescope to its limits and have seen further back in time than ever before.

“We are looking back through 96% of the life of the universe, and in so doing, we have found just one galaxy, but it is a remarkable object. The universe was only 500 million years old at that time versus it now being thirteen thousand-seven hundred million years old,” said Garth Illingworth.

We know about the Hubble Ultra Deep Field, but we invite you to boldly go on...

While studying ultra-deep imaging data from the Hubble Space Telescope, an international group of astronomers has found what may be the most distant galaxy ever seen, about 13.2 billion light-years away.

“Two years ago, a powerful new camera was put on Hubble, a camera which works in the infrared, where we never had really good capability before, and we have now taken the deepest image of the universe ever using this camera in the infrared,” said Garth Illingworth, professor of astronomy and astrophysics at the University of California, Santa Cruz. “We’re getting back very close to the first galaxies, which we think formed around 200 to 300 million years after the Big Bang.”

The study pushed the limits of Hubble’s capabilities, extending its reach back to about 480 million years after the Big Bang, when the universe was just 4 percent of its current age. The dim object, called UDFj-39546284, is a compact galaxy of blue stars that existed 480 million years after the Big Bang. It is tiny: over one hundred such mini-galaxies would be needed to make up our Milky Way.
The farthest and one of the very earliest galaxies ever seen in the universe appears as a faint red blob in this ultra-deep-field exposure taken with NASA's Hubble Space Telescope. This is the deepest infrared image taken of the universe. Based on the object's color, astronomers believe it is 13.2 billion light-years away. (Credit: NASA, ESA, G. Illingworth (University of California, Santa Cruz), R. Bouwens (University of California, Santa Cruz, and Leiden University), and the HUDF09 Team)

Illingworth and UCSC astronomer Rychard Bouwens (now at Leiden University in the Netherlands) led the study, which will be published in the January 27 issue of Nature. Using infrared data gathered by Hubble’s Wide Field Camera 3 (WFC3), they were able to see dramatic changes in galaxies over a period from about 480 to 650 million years after the Big Bang. The rate of star birth in the universe increased by ten times during this 170-million-year period, Illingworth said. “This is an astonishing increase in such a short period, just 1 percent of the current age of the universe,” he said.

There were also striking changes in the numbers of galaxies detected. “Our previous searches had found 47 galaxies at somewhat later times when the universe was about 650 million years old. However, we could only find one galaxy candidate just 170 million years earlier,” Illingworth said. “The universe was changing very quickly in a short amount of time.”

The Hubble Ultra Deep Field WFC3/IR image. This region of the sky contains the deepest optical and near-infrared images ever taken of the universe and is useful for finding star-forming galaxies at redshifts 8 and 10 (650 and 500 million years after the Big Bang, respectively). At UCSC and Leiden, we are using these data to better understand the properties of the first galaxies.
Credit: Bouwens

According to Bouwens, these findings are consistent with the hierarchical picture of galaxy formation, in which galaxies grew and merged under the gravitational influence of dark matter. “We see a very rapid build-up of galaxies around this time,” he said. “For the first time now, we can make realistic statements about how the galaxy population changed during this period and provide meaningful constraints for models of galaxy formation.”

Astronomers gauge the distance of an object from its redshift, a measure of how much the expansion of space has stretched the light from an object to longer (“redder”) wavelengths. The newly detected galaxy has a likely redshift value (“z”) of 10.3, which corresponds to an object that emitted the light we now see 13.2 billion years ago, just 480 million years after the birth of the universe. “This result is on the edge of our capabilities, but we spent months doing tests to confirm it, so we now feel pretty confident,” Illingworth said.

The galaxy, a faint smudge of starlight in the Hubble images, is tiny compared to the massive galaxies seen in the local universe. Our own Milky Way, for example, is more than 100 times larger. The researchers also described three other galaxies with redshifts greater than 8.3.

The study involved a thorough search of data collected from deep imaging of the Hubble Ultra Deep Field (HUDF), a small patch of sky about one-tenth the size of the Moon. During two four-day stretches in summer 2009 and summer 2010, Hubble focused on one tiny spot in the HUDF for a total exposure of 87 hours with the WFC3 infrared camera.

“NASA continues to reach for new heights, and this latest Hubble discovery will deepen our understanding of the universe and benefit generations to come,” said NASA Administrator Charles Bolden, who was the pilot of the space shuttle mission that carried Hubble to orbit.
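To make the redshift numbers concrete, here is a small illustrative sketch (not from the article): light emitted at wavelength λ arrives stretched to (1 + z)λ, and the universe at emission was 1/(1 + z) of its present scale. The use of hydrogen's Lyman-alpha line (121.6 nm) as the example emitted wavelength is an assumption for illustration:

```python
def observed_wavelength_nm(emitted_nm, z):
    """Cosmological redshift stretches an emitted wavelength by a factor of (1 + z)."""
    return (1.0 + z) * emitted_nm

def scale_factor(z):
    """Relative size of the universe when the light was emitted: a = 1 / (1 + z)."""
    return 1.0 / (1.0 + z)

z = 10.3                # likely redshift of UDFj-39546284
lyman_alpha_nm = 121.6  # hydrogen Lyman-alpha line (ultraviolet), chosen for illustration

print(observed_wavelength_nm(lyman_alpha_nm, z))  # ~1374 nm: ultraviolet arrives as infrared
print(scale_factor(z))                            # ~0.088: universe was ~9% of its present scale
```

The first number shows why the detection needed WFC3's infrared channel: ultraviolet starlight from a z ≈ 10 galaxy reaches us stretched into the near-infrared.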
“We could only dream when we launched Hubble more than 20 years ago that it would have the ability to make these types of groundbreaking discoveries and rewrite textbooks.”

To go beyond redshift 10, astronomers will have to wait for Hubble’s successor, the James Webb Space Telescope (JWST), which NASA plans to launch later this decade. JWST will also be able to perform the spectroscopic measurements needed to confirm the reported galaxy at redshift 10. “It’s going to take JWST to do more work at higher redshifts. This study at least tells us that there are objects around at redshift 10 and that the first galaxies must have formed earlier than that,” Illingworth said.

“After 20 years of opening our eyes to the universe around us, Hubble continues to awe and surprise astronomers,” said Jon Morse, NASA’s Astrophysics Division director at the agency’s headquarters in Washington. “It now offers a tantalizing look at the very edge of the known universe -- a frontier NASA strives to explore.”

How far back will we go? If you sit around a campfire watching the embers climb skywards and discuss cosmology after an observing night with your astro friends, someone will ultimately bring up the topic of space/time curvature. If you put an X on a balloon and expand it - and trace round its expanse - you will eventually return to your mark. If we see our beginnings, will we also eventually see our end coming up over the horizon? Wow... Pass the marshmallows, please. We've got a lot to think about.

Reader Info: Illingworth’s team maintains the First Galaxies website, with information about the latest research on distant galaxies.
In addition to Bouwens and Illingworth, the coauthors of the Nature paper include Ivo Labbe of Carnegie Observatories; Pascal Oesch of UCSC and the Institute for Astronomy in Zurich; Michele Trenti of the University of Colorado; Marcella Carollo of the Institute for Astronomy; Pieter van Dokkum of Yale University; Marijn Franx of Leiden University; Massimo Stiavelli and Larry Bradley of the Space Telescope Science Institute; and Valentino Gonzalez and Daniel Magee of UC Santa Cruz. This research was supported by NASA and the Swiss National Science Foundation. Hubble Ultra Deep Field image and video courtesy of NASA/STScI.
4 Types Of House Wall Material

Houses are built from brick, wood, iron, stone, ceramic, glass, roof tile, cement and so on, from the foundation up to the roof. For the walls themselves, though, the materials fall into four categories: brick, wood, stone and glass, and most houses combine several of them.

Brick is the most common wall material. Most house types, from classic through modern architecture, use brick as the main wall material because it is durable, looks tidy, absorbs and releases heat well, and can be bought at an affordable price.

The second material is wood. Wood is another popular wall material, and building with it brings a house closer to nature. Houses with wood as the main material are common in rustic and suburban regions. Wood can be used for the walls alone, or as the main material for the whole structure: walls, floor and roof. Tree houses, tiny houses and rustic houses are typical examples. If you want the feeling of living in a green environment, building with wood is a good choice.

The third wall material is stone. Houses with stone walls can still be found, usually in rustic and suburban regions. Stone makes a very strong wall, which is its main advantage; its weakness is that it comes in irregular shapes, so building a flat wall from stone is difficult. In modern houses, stone is mostly used as an exterior element: natural stone is cut into regular shapes, then arranged and installed on the wall purely for decoration.

The last material is glass. Nowadays many modern houses and buildings use glass as the main wall material, which gives them a very modern look. This house type is the most expensive, because thick, large panes of glass cost more than brick, stone or wood.
The era between empire and communism is routinely portrayed as a catastrophic interlude in China's modern history. But in this book, Frank Dikötter shows that the first half of the twentieth century was characterized by unprecedented openness. He argues that from 1900 to 1949, all levels of Chinese society were seeking engagement with the rest of the world and that pursuit of openness was particularly evident in four areas: governance, including advances in liberties and the rule of law; greater freedom of movement within the country and outside it; the spirited exchange of ideas in the humanities and sciences; and thriving and open markets and the resulting sustained growth in the economy. Copub: Hong Kong University Press Frank Dikötter is Professor of Chinese Modern History at the School of Oriental and African Studies, University of London, and Chair of Humanities at the University of Hong Kong. He has published a series of innovative books, including The Discourse of Race in Modern China and Narcotic Culture: a History of Drugs and China. "In this succinct and vigorous book, Frank Dikötter presents a cornucopia of graphic examples to show that China in the first half of the twentieth century, far from being in a state of decay that called for revolutionary action, was in fact a vibrant and cosmopolitan society. In such a reading, the current Chinese leaders should not be seen as striving to do something bold and new; they are merely struggling to rebuild a network of global connections that Mao and others had systematically helped to destroy. This should be an ideal book to spark class discussion on modern China."—Jonathan Spence, author of The Search for Modern China and Return to Dragon Mountain "The always innovative Frank Dikötter infuses new life into an historical period left by most historians for dead—China's republican era from 1912 to 1949. 
In his persuasive recounting, this cosmopolitan, dynamic era has more to tell us about modern China's long-term trajectory than the authoritarian interlude that followed it."—Andrew J. Nathan, Class of 1919 Professor of Political Science, Columbia University
What is Economics?
The social science that studies the allocation of scarce resources for the production, distribution, and consumption of goods and services.

What is the Law of Diminishing Returns?
The law states that continuously increasing one input factor, while holding the other input factors fixed, eventually leads to a decrease in the per-unit output of the variable input factor.

What are the Factors of Production?
Land, Labor, Capital, Entrepreneurship, and Technology.

What is Supply?
The relationship between the various quantities of goods and services producers are willing and able to produce at various prices.

What is the Law of Supply?
Producers are willing and able to produce more goods and services at higher prices (Total Revenue = Price x Quantity).

Name the six (6) non-price determinants that cause shifts in Supply:
Technology, Factor Cost, Opportunity Cost, Taxes and Subsidies, Expectations, and the Number and Size of Producers in the Market.

What is Demand?
The relationship between the various quantities of goods and services consumers are willing and able to purchase at various prices.
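The Law of Diminishing Returns on these cards is easiest to see with numbers. A small sketch (the output figures are made up purely for illustration):

```python
# Total output as successive units of labor are added, with capital held fixed
# (hypothetical numbers for illustration).
labor = [0, 1, 2, 3, 4, 5]
total_output = [0, 10, 22, 30, 34, 36]

# Marginal product: the extra output contributed by each additional worker.
marginal_product = [total_output[i] - total_output[i - 1]
                    for i in range(1, len(total_output))]

print(marginal_product)  # [10, 12, 8, 4, 2] -- rises at first, then diminishes
```

The second worker adds more than the first (gains from specialization), but from the third worker onward each additional unit of labor adds less than the one before: diminishing returns to the variable factor.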
When you're home visiting the family, often times you'll find yourself updating a few computers that have fallen behind. While updating software isn't hard to do, you've probably run into a family member or two who have yet to learn how. This guide is for them. To help you out in your tech support role, we're offering easy-to-email guides to teach beginners the basics of using a computer. You can find all of the guides here. Today we're going to take a look at keeping system software and third-party applications up-to-date. You'll find the instructions below, but the same instructions are also available in video form above. System Software Updates First, let's look at updating system software. You always want to keep your system updated as much as possible as updates most often focus on bug fixes, so your system will run better, and additional security, so your computer doesn't end up with a virus or something like that. To update system software on a Mac, just follow these steps: - Click the Apple menu (up in the top left corner of your screen) and choose "Software Update." - Software Update will load and check for updates. When it finishes, it'll let you know if there are any updates to install. Click "Show Details" to see any updates Software Update wants to install, or just click the "Install" button to install them. The process is similar on Windows computers. To update your system software on Windows, just follow these steps: - Click the Windows icon in your task bar to open up the Start menu. (If you don't already know, this icon is in the bottom left corner of your screen.) - Click "All Programs." - Click, "Windows Update." - After Windows Update opens, click "Check for Updates" on the top left side of the window. - Once Windows finishes checking for updates, click the "Install" button. - When the updates have finished installing, restart your computer (if prompted). 
Software Update (Mac) and Windows Update (Windows) will periodically run all by themselves and ask you to update. Nonetheless, you may not notice this or ignore it from time to time, so it's good to check yourself once in a while. Note: If you're worried about messing up your computer, don't. It's very hard to make a mistake when updating your software nowadays, and Windows Update even creates a restore point for you in case an update goes south. If you're on a Mac and already backing up with Time Machine, you'll be able to restore as well. The chances of something going wrong are pretty slim, however, so as long as you don't turn off your machine during an update you have nothing to worry about. Third-Party Software Updates Third-party software describes any software created by a third party and did not come with your computer's operating system. This primarily includes any software you, yourself, have installed on your machine. Because third-party software is created by different people, the way you update it varies. Web browsers, such as Firefox and Google Chrome, update themselves. You don't have to do anything at all. Other software may also update itself, or notify you of an update so you can choose whether to install it or not. Most software will allow you to check for updates manually. The location varies, but you'll almost always find a "Check for Updates" option in one of the program's menus. Some software will not notify you of updates and you'll have to visit the software's web site in order to find out if a new version is available. If it is, just download the available update or the most recent version and install it like it's a new program. If it asks you to replace the previous version, it's okay to allow that. Finally, if you downloaded an application from the Mac App Store, simply open the Mac App Store, click the "Updates" tab, and install any available updates. Those are the basics of updating software. 
It's a good idea to set a day and time each week to check for new updates to make sure you don't forget. It only takes a few minutes and your computer will be better off for it. Emailable Tech Support is a tri-weekly series of easy-to-share guides for the less tech savvy people in your life. Got a beginner tech support question you constantly answer? Let us know at email@example.com. Remember, when you're just starting out computing, there's very little that's too basic to learn.
Rho GTPases are molecular switches that regulate many essential cellular processes, including actin dynamics, gene transcription, cell-cycle progression and cell adhesion. About 30 potential effector proteins have been identified that interact with members of the Rho family, but it is still unclear which of these are responsible for the diverse biological effects of Rho GTPases. This review will discuss how Rho GTPases physically interact with, and regulate the activity of, multiple effector proteins and how specific effector proteins contribute to cellular responses. To date most progress has been made in the cytoskeleton field, and several biochemical links have now been established between GTPases and the assembly of filamentous actin. The main focus of this review will be Rho, Rac and Cdc42, the three best characterized mammalian Rho GTPases, though the genetic analysis of Rho GTPases in lower eukaryotes is making increasingly important contributions to this field.

Review Article: Anne L. Bishop and Alan Hall; Rho GTPases and their effector proteins. Biochem J 1 June 2000; 348 (2): 241–255. doi: https://doi.org/10.1042/bj3480241
I know for the instrument checkride next month I will be asked about the different types of weather reporting. This is important to know, because as a pilot, you need to be able to not just interpret the weather, but also know what types of weather reports you need. At KCRQ (Carlsbad), there is an ATIS. At KOKB (Oceanside), there is an ASOS. F70 (French Valley) has an AWOS-3. So what’s an AWOS? AWOS stands for Automated Weather Observing System. It is a unit that measures and reports local weather at an airport to pilots. There are four basic levels of AWOS: - AWOS-A: Reports only the altimeter setting. - AWOS-1: Reports altimeter setting, wind data, temperature/dew point and density altitude. - AWOS-2: Reports the information provided by AWOS-1, plus the visibility. - AWOS-3: Reports the information provided by AWOS-2, plus cloud-ceiling data. For Part 121 or 135 operators, AWOS-3 is the only type of AWOS that’s acceptable without restriction. On the instrument checkride next month, I know the examiner will be asking me about the Wind & Temps Aloft forecast. This is issued 4 times daily for different altitudes and flight levels. The format is DIRECTION – SPEED – TEMPERATURE. If it says 9900, that means light and variable. Wind direction is from true north, according to aviationweather.gov. Things get a little tricky when the wind is 100 knots or greater. Just remember “Between 51 and 86.” When the wind speed is 100 knots or greater, the wind direction is coded as a number between 51 and 86. Subtract 50 from that number and add a zero (direction is coded in tens of degrees) – that’s your direction. Then add 100 to the second set of numbers – that is your wind speed. Let’s practice with “7519”: - Direction (75-50) Winds coming from 250° - Speed (19 + 100) 119 knots Above 24,000 feet, the temperature is assumed to be negative. If this forecast was issued for 34,000 feet and ended in 50, you would assume the temperature to be negative 50 degrees.
Let’s try another one, from tonight’s forecast: At 39,000 feet over SAN: “771357” - Direction= (77-50) winds coming from 270° - Speed = (13 + 100) 113 knots - Temperature = -57 degrees another one, just for fun. at 39,000 feet over BLH: “761358” - Direction= (76-50) winds coming from 260° - Speed= (13+100) 113 knots - Temperature -58 degrees All levels through 12,000 feet are true altitude (MSL). The levels 18,000 feet and above are pressure altitude.
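The decoding rules above fit in a few lines of code. A sketch (the function name is mine; it handles the common 6-character groups and simply assumes negative temperatures above 24,000 feet, ignoring the explicit +/- signs used in groups at lower levels):

```python
def decode_winds_aloft(group: str, altitude_ft: int) -> dict:
    """Decode a 6-character winds/temps aloft group such as '771357'."""
    dd = int(group[0:2])   # direction code (tens of degrees)
    ff = int(group[2:4])   # speed code (knots)
    tt = int(group[4:6])   # temperature code (degrees C)

    if dd == 99 and ff == 0:
        direction, speed = None, 0          # 9900: light and variable
    elif 51 <= dd <= 86:
        direction = (dd - 50) * 10          # "between 51 and 86": subtract 50...
        speed = ff + 100                    # ...and add 100 to the speed
    else:
        direction, speed = dd * 10, ff      # normal encoding

    temp_c = -tt if altitude_ft > 24000 else tt  # assumed negative up high
    return {"direction": direction, "speed": speed, "temp_c": temp_c}

print(decode_winds_aloft("771357", 39000))
# {'direction': 270, 'speed': 113, 'temp_c': -57} -- matches the SAN example
```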
A fictional character is a person who appears in a narrative. Characters may be based on real people, but they remain distinct from their models. To be memorable and interesting, a fictional character needs certain essential qualities; this piece looks at four of them, and then at the physical world fictional characters inhabit.

In the philosophy of fiction, a property set refers to the collection of properties that correspond to an object or group of objects. Fictional characters are not real objects: a fictional character might share every described property with a real person or thing and still not be identical to it, because the character has no concrete existence.

The question of whether a fictional name refers to a fictional character is central to the semantic debate about the nature of fiction. A realist would argue that fictional names refer to fictional entities. This theory is problematic in several ways. The first problem is that it assumes an identity relation between a fictional name and a fictional character, and it treats that relation as unproblematic. The view also presupposes that a fictional name is a representation of a fictional character.

Developing motivation for fictional characters is a crucial part of creating a realistic story. Without character motivation, the plot will not be as believable as it could be. When a character is motivated by a specific goal, the reader is more likely to relate to them, which makes the story more believable. Character motivation also plays a significant role in making characters likable.
There are many different ways to use motivations for fictional characters to enhance the story and make it more believable. Character motivations can be divided into positive and negative. The former is proactive, while the latter is reactive. The protagonist’s central goal is often a solution to a problem. The protagonist’s scene motivations are often dependent on the plot of the novel. These motivations push the protagonist to complete a certain scene. A writer may have two protagonists with the same backstory, but they might have different motivations for solving each problem. According to a new study, people form emotional attachments with fictional characters and make similar judgments about their personalities as they do about real people. The study, conducted at the University of Florida, investigates the concept of assumed similarity, or the tendency to associate similar traits with someone we don’t know. The research involved 56 fictional characters from the popular “Game of Thrones” book series and TV show. Participants were asked to rate the characters’ personality traits according to commonly studied factors. One resource for identifying fictional characters is the Myers-Briggs Personality Type Indicator. This test involves four different pairs of personality traits: extroversion, sensing, intuition, and openness. Each pair represents a different personality style. These four types are based on Carl Jung’s theory. The four types are different, and a fictional character may have a few of them. If a character shares many traits with a real person, they are likely to be more interesting than a fictional one. The physical world of a fictional character or entity lacks the same inherent existence as the real world. It does not exist according to the nonexistence datum, so no empirical evidence can be offered to support such a claim. It is also impossible for a fictional entity to be a physical object. 
Therefore, any observation of an entity or object in the fictional world must be based on perception, not on scientific observation. Therefore, we must resort to speculation. A fictional character can be influential for many reasons. For example, many people who were inspired to pursue careers in education were motivated by Robin Williams’ portrayal of Mr Keating in the film Dead Poets Society. Another character who inspired many to consider careers in the health care field was Ellen Pompeo, who became a nurse after reading Harper Lee’s novel Atticus Finch. But is there a relationship between the type of fictional character that a person likes and their personal beliefs? Interestingly, this relationship is stronger for fictional characters than for real people. When we identify with fictional characters, we are more likely to experience the same emotional and mental states as the characters we identify with. We may also draw on our vMPFC to evaluate the character at a later time. However, the opposite is true as well. A fictional character is often perceived as being less important than a real person, and a fictional character may have a negative impact on our perception of a real person. WikiProject Fictional characters aims to improve Wikipedia’s coverage of fictional characters. It supervises the creation of individual character articles, lists of characters, and general articles about characters. It also develops guidelines for articles about fictional characters. Its main goals are to improve coverage and provide sources that indicate a fictional character’s notability. You can learn more about this project and its goals on the WikiProject wiki.
A Melakarta raga has seven distinct notes in its Arohanam, ascending in pitch (kramasampurna), and the same notes in its Avarohanam, the descent.

Ragas form the basis of the melodic music of India, and some of them serve as parent, or melakarta, ragas. From the melakarta ragas, many derived or janya ragas are possible. A melakarta raga is characterized by having all seven swaras, or notes, in both ascending (arohana) and descending (avarohana) order; it shows a complete sequence of swaras called the sampoorna pattern. Except for Sa and Pa, which are fixed at definite frequencies, the other five swaras - Ri, Ga, Ma, Dha and Ni - each come in two varieties, the lower and higher, or komal and teevra. Thus the seven swaras are expanded to 12 notes. From these 12 notes alone, all the melakarta ragas are generated, and from the melakarta ragas thousands of janya ragas have evolved. Venkatamakhi, a gifted musicologist of the century, is credited with putting all the sampoorna ragas - the 72 main parent ragas - in a logical order, by placing swaras of successively higher frequency from one melakarta to the next. Thus the classification of the melakarta ragas was developed. This important work, "Chaturdandi Prakashika", was published in AD. The melakarta scheme is variously called Janaka ragas, Sampoorna ragas, Raganga ragas, Melakarta ragas or Mela ragas. The melakarta ragas are divided into two major groups for convenience.

MUKUND MELAKARTA RAGA CHART
The first half consists of the 36 melakarta ragas with shuddha madhyama (M1) and the second half contains the other 36 melakarta ragas with prati madhyama (M2). Since the time of Venkatamakhi's classification of Carnatic ragas into the melakarta scheme, attempts have been made by a few musicologists to put the scheme into a concise form. Mukund of Bangalore succeeded admirably in his attempt to put the melakarta ragas into a compact chart, which he devised while he was in his early teens. The chart has been appreciated by well-known musicologists such as Prof. Parthasarathy and other musicians of standing. Mukund is a reputed musicologist and a prolific composer. He has composed varnas, kritis, raga malikas, padams etc. in many languages, and he has composed more than one set of kritis in all the melakarta ragas. To date, he has a large number of compositions of various kinds to his credit, including several varnas and songs with jatis as dance items. The Mukund melakarta raga chart is presented below, and the explanations are given to emphasize the utility of the chart. The melakarta numbers are given in brackets along with the melakarta ragas. PDF version of chart.

To repeat again: the melakarta ragas 1 through 36 take shuddha madhyama (M1) and the melakarta ragas 37 through 72 have prati madhyama (M2). In the Mukund chart, the upper triangle in each square contains shuddha madhyama melakarta ragas, and the prati madhyama ragas are found in the lower triangles (chart example: Chakravakam (16) and Ramapriya). All melakarta ragas in the outermost rows (horizontal, Ri-Ga) and columns (vertical, Dha-Ni) give rise to the 40 vivadi melakarta ragas (apparently dissonant scales); that is, the melakarta ragas occurring in the border (chart example: Navaneetam (40) and Vagadeeshwari). The 16 squares in the center of the chart contain all the avivadi melakarta ragas.
(Chart example: Bhavapriya (44) and Harikambodi.) Antipodal melakarta ragas - the two melas with maximum differences, with all swaras being different except for the common Sa and Pa - can be found in opposite triangles of the two opposing squares. To use the chart, first locate the square in which the given swaras are marked; the triangle where the row and the column pertaining to those swaras meet indicates the melakarta raga. The chart is also useful for finding ragas differing in one swara only, as the example pairs above illustrate. The Mukund chart has been very useful to students of music, teachers and music scholars in making intelligent use of the melakarta ragas. It is an invaluable contribution to the theory of Carnatic music. The 12 notes of the melakarta ragas are given with notations for easy reference. Sa and Pa are fixed.
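The ordering Venkatamakhi imposed is purely arithmetic: the mela number alone determines the Ma variety, the Ri-Ga pair and the Dha-Ni pair. A sketch of that standard rule (the function name is mine):

```python
# The six permitted Ri-Ga combinations, in mela order, and likewise for Dha-Ni.
RI_GA = [("R1", "G1"), ("R1", "G2"), ("R1", "G3"),
         ("R2", "G2"), ("R2", "G3"), ("R3", "G3")]
DHA_NI = [("D1", "N1"), ("D1", "N2"), ("D1", "N3"),
          ("D2", "N2"), ("D2", "N3"), ("D3", "N3")]

def melakarta_scale(n: int) -> list:
    """Return the seven swaras of melakarta raga n (1..72)."""
    if not 1 <= n <= 72:
        raise ValueError("melakarta number must be 1..72")
    i = n - 1
    ma = "M1" if n <= 36 else "M2"        # melas 1-36: shuddha; 37-72: prati
    ri, ga = RI_GA[(i % 36) // 6]         # each Ri-Ga pair spans 6 consecutive melas
    dha, ni = DHA_NI[i % 6]               # Dha-Ni cycles within each block of 6
    return ["S", ri, ga, ma, "P", dha, ni]

# Mela 29 (Dheerasankarabharanam), the major-scale equivalent:
print(melakarta_scale(29))  # ['S', 'R2', 'G3', 'M1', 'P', 'D2', 'N3']
```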
Authenticated encryption of information is aimed at ensuring that messages cannot be read or changed during transmission. It is an aspect of security that will pose significant challenges in the next few years, especially in light of the rapid development of the internet of things. TU Graz’s Institute of Applied Information Processing and Communications has a research team specialising in cryptography. In 2014, the institute submitted its ASCON algorithm, which was developed in-house, for the high-profile, international Competition for Authenticated Encryption: Security, Applicability, and Robustness, also known as CAESAR. The algorithm was tested for five years and assessed in terms of its cryptanalytic and practical security. It set such high standards of security and efficiency that the high-calibre jury selected the TU Graz encryption procedure as its primary recommendation for what are known as lightweight applications. These applications are used mainly for systems that do not run on expensive, high-end desktop PCs, notebooks and smartphones, for example typical everyday “smart” devices and industrial logistics modules with slow processors, small memory and passive power supply. Confidential, authenticated data transmission The ASCON algorithm was specially designed for CPUs and chips with limited processing power. It can be implemented easily and efficiently, offers 128-bit security, and is ideally suited for use in effectively countering side-channel attacks. “This makes it particularly attractive for smart systems and industry 4.0,” explains Maria Eichlseder, who developed the procedure alongside Christoph Dobraunig (Radboud University Nijmegen), Florian Mendel (Infineon Technologies) and Martin Schläffer (Infineon Technologies). Next goal: a new encryption standard Since the CAESAR competition was launched in 2014, the algorithm has undergone numerous reviews, analyses and comparisons. 
In all, 57 algorithms were submitted, and six candidates made it into the final portfolio. The CAESAR organisers’ goal was not to select a single algorithm as the winner, but to make a first and second choice in each of three categories. The ASCON team is looking to follow up on its success by taking part in the Lightweight Cryptography Standardization Process, a competition organised by the National Institute of Standards and Technology (NIST). The US measurement science and standards body will use the competition to promote lightweight authenticated encryption standards. “Maybe ASCON can match the success of the cryptographic hash algorithm Grøstl, which TU Graz was involved in developing and made it into the top five in the NIST’s SHA-3 competition,” says Eichlseder of her hopes for ASCON. A team from TU Graz has also entered two submissions for the NIST’s current post-quantum cryptography project. This project is designed to single out signature, key exchange and encryption procedures that can withstand attacks from quantum computers. The two signature procedures that TU Graz played a part in developing – Picnic and SPHINCS+ were nominated as second-round candidates in February 2019.
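Authenticated encryption, as described at the start of this article, produces ciphertext plus an authentication tag, so any tampering is detected on decryption. The sketch below illustrates that contract with an encrypt-then-MAC construction built from Python's standard library. It is a toy for illustration only (the helper names are mine), not ASCON, and not something to use in production:

```python
import hashlib
import hmac

def _subkeys(key: bytes) -> tuple:
    # Separate keys for encryption and authentication (good hygiene even in a toy).
    return (hashlib.sha256(key + b"enc").digest(),
            hashlib.sha256(key + b"mac").digest())

def _keystream(enc_key: bytes, nonce: bytes, length: int) -> bytes:
    # Toy keystream: SHA-256 in counter mode. Illustration only, not a real cipher.
    blocks, counter = [], 0
    while sum(len(b) for b in blocks) < length:
        blocks.append(hashlib.sha256(enc_key + nonce + counter.to_bytes(8, "big")).digest())
        counter += 1
    return b"".join(blocks)[:length]

def seal(key: bytes, nonce: bytes, msg: bytes) -> bytes:
    """Return ciphertext || 32-byte tag: confidentiality plus authenticity."""
    enc_key, mac_key = _subkeys(key)
    ct = bytes(m ^ k for m, k in zip(msg, _keystream(enc_key, nonce, len(msg))))
    tag = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    return ct + tag

def open_sealed(key: bytes, nonce: bytes, sealed: bytes) -> bytes:
    """Verify the tag first; only decrypt if the message is authentic."""
    enc_key, mac_key = _subkeys(key)
    ct, tag = sealed[:-32], sealed[-32:]
    expect = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expect):
        raise ValueError("authentication failed: message was modified")
    return bytes(c ^ k for c, k in zip(ct, _keystream(enc_key, nonce, len(ct))))
```

A real AEAD such as ASCON offers the same interface - key, nonce and plaintext in; ciphertext and tag out - but derives both from a single lightweight permutation, which is what makes it cheap enough for the constrained devices discussed above.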
As the holidays approach, feelings of joy and merriment begin to seep into our hearts. Even if we’re more than ready to bid the year adieu, we can’t help but get sucked into the holiday spirit. It turns out, that spirit can also live on in an active war zone. In 1914, just a few months into World War I, nearly 100,000 troops from opposing sides engaged in a series of impromptu ceasefires, dubbed the Christmas Truce of 1914. Instead of mortar fire, there was caroling. No man’s land became filled with enemies exchanging gifts and swapping stories. The (exaggerated) accounts of football matches have become the stuff of legend. The Christmas Truce, albeit brief, lives on as a symbol of peace and camaraderie in the darkest of times. Singing From the Trenches On Christmas Eve, 1914, a British machine gunner named Bruce Bairnsfather was crouched in a muddy trench in Belgium – a “horrible clay cavity,” as he called it. Suddenly, he and many other British soldiers heard an extraordinary sound: singing from the German trenches. After some rival carolers joined in on the British side, along with some friendly shouting, enemy soldiers actually began climbing out of the trenches! On what was once a stage of death, soldiers became revelers, shaking hands, trading tobacco and wine, and enjoying one another’s company. The Christmas Truce also became a time to safely gather fallen comrades for proper burial. The many accounts from different soldiers paint a picture of spontaneous fellowship in the midst of a bloodbath that would soon rage on. Before the Christmas Truce of 1914 During the first two months of the Great War (no, not that Great War), French and British troops steadily pushed back the German attack that was tearing through Belgium and into France. In order for both sides to maintain their manpower and establish firm positions, they dug miles of trenches from the North Sea to the Swiss frontier.
In the weeks leading up to the truce, there had been many efforts made to establish peace by groups like the British women suffragettes, and even Pope Benedict XV. On December 7, 1914, the Pope pleaded with the warring governments to establish an official truce. The nations declined. With both sides firmly dug in, the harsh winter weather brought damp and muddy conditions that turned to a sudden hard frost. Morale plummeted across the Western Front and the thought of Christmas bringing anything other than more bloodshed was ludicrous. Until Christmas finally arrived, and with it, a moment of peace. When Christmas Came It remains a mystery just how widespread the Christmas Truce was. When Christmas came, there were still numerous accounts of fighting across Europe. In some instances, soldiers who attempted to fraternize with the enemy were shot by commanding officers, whose superiors had grown horrified by these increasingly peaceful attitudes. Then, there were the Russian Orthodox soldiers who celebrated Christmas on January 7, with fewer accounts of fraternization. One estimate suggests that the truce most likely extended across two-thirds of the British-held trench line through Belgium. While the higher-ups wanted war no matter the holiday, the Christmas Truce was an unplanned event appearing almost magically out of a collective desire to salvage some of the humanity that was being chipped away. But as history tells us, this magic did not last long. World War I claimed the lives of nearly 15 million people. The war forever altered the global political and military landscape, not to mention the scars torn across hearts and minds for generations to come.
<urn:uuid:eb1a45ab-0271-47ac-ad6e-8a39757a5f43>
CC-MAIN-2023-14
https://www.historyinmemes.com/2022/12/21/the-christmas-truce-of-1914/
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945368.6/warc/CC-MAIN-20230325161021-20230325191021-00305.warc.gz
en
0.964421
747
3.765625
4
Public Lands, Public Debates A Century of Controversy Publication Year: 2012 Owned in common, our national forests, monuments, parks, and preserves are funded through federal tax receipts, making these public lands national in scope and significance. Their controversial histories demonstrate their vulnerability to shifting tides of public opinion, alterations in fiscal support, and overlapping authorities for their management—including federal, state, and local mandates, as well as critical tribal prerogatives and military claims. Miller takes the Forest Service as a gauge of the broader debates in which Americans have engaged since the late nineteenth century. In nineteen essays, he examines critical moments of public and private negotiation to help explain the particular, and occasionally peculiar, tensions that have shaped the administration of public lands in the United States. “Watching democracy at work can be bewildering, even frustrating, but the only way individuals and organizations can sift through the often messy business of public deliberation is to deliberate...”—Char Miller, from the introduction Published by: Oregon State University Press Title Page, Copyright Table of Contents Download PDF (47.6 KB) Download PDF (39.1 KB) Introduction: In the Woods Download PDF (94.1 KB) In his foreword to Paul W. Gates’ massive tome, History of Public Land Law Development, Rep. Wayne Aspinall, head of the Public Land Law Review Commission (1965-1970), for which the book had been written, was trying to be something that his many critics doubted he could ever be – even handed. The Colorado Democrat offered this balancing caution to those ready to plunge into the 828-page document: “The members of the ...
Part I: Creative Forces Download PDF (24.0 KB) Le Coup D'Oeil Forestier: Shifting Views of Federal Forestry in America, 1870-1945 Download PDF (116.8 KB) Lucien Boppe, assistant director of L’Ecole nationale forestière in Nancy, France, made a “tremendous impression” on Gifford Pinchot. The young American student, the first to attend classes at the venerable French forestry school, was captivated by Boppe, a man of short and stocky stature with immense vitality, a teacher who had “a great contempt for mere professors,” for he had “learned in the woods what he taught in the lecture room.” ... Rough Terrain: Forest Management and Its Discontents Download PDF (175.8 KB) They came in the middle of the night, broke into Merrill Hall, site of the Center for Urban Horticulture on the campus of the University of Washington, and set incendiary devices within and around the office of researcher Terry Bradshaw; then they stole away before fiery blasts ripped through the building. The subsequent conflagration destroyed Bradshaw’s facility and gutted much of the rest of the complex, causing damage ... A Transformative Place: Grey Towers and the Evolution of American Conservationism Download PDF (102.6 KB) On a beautiful late September day, just two months before he was assassinated, President John F. Kennedy spoke from the front porch of Grey Towers, the Milford, Pennsylvania, estate of Gifford Pinchot, founding chief of the USDA Forest Service. His visit served two purposes. It kicked off the president’s five-day, eleven-state “conservation tour,” during which he would deliver a series of addresses on the environment to buttress his ... Thinking Like a Conservationist Download PDF (162.9 KB) Humpty Dumpty was as perplexing as anything Alice encountered when she melted through the looking glass.
Their conversation, although riddled with playful double entendres, was also immensely frustrating for the young girl, who did not always understand what the prickly and precariously perched character meant by the words he uttered. When he said, for instance, that he preferred “un-birthday presents” to birthday presents (for “there are three ... Part II: Policy Schemes Download PDF (37.2 KB) Landmark Decision: The Antiquities Act, Big-Stick Conservation, and the Modern State Download PDF (85.0 KB) Roy Neary could not help himself. At dinner, he played with his mashed potatoes. Had he been a child, no one would have much minded, but he was a father, and his wife and three children anxiously watched his mealtime antics. Ever since that fearsome night when the power was suddenly cut off, he had become ever-more reclusive and odd, so much so that at dinner the kids had scooted closer to their mother. From that remove they... Rewilding the East: The Weeks Act and the Expansion of Federal Forestry Download PDF (57.0 KB) The Weeks Act, signed into law by President William Howard Taft on March 1, 1911, has had a profound impact on the American landscape, not least in New England. Just how profound is clear in a slick, two-page advertisement that the New Hampshire Division of Travel and Tourism Development ran in Audubon Magazine (May/June 2010). Wrapped around a series of photographs that capture the state’s mountain vistas... Riding Herd on the Public Range Download PDF (53.9 KB) You have probably never heard of Pierre Grimaud. But whenever you pay to use one of the recreational-fee areas on the San Bernardino, Gifford Pinchot, or White Mountain national forests, you might want to thank him. The same is true the next time you get a permit to camp deep in the Bob Marshall or Gila wilderness areas. And if you have ever applied for a permit to run cattle or graze sheep within the folds of the Sierra, Wasatch...
Download PDF (59.3 KB) Devils Postpile National Monument, despite its wonderfully lurid name, is not much of a draw. Located near Mammoth Lakes in the eastern Sierra, it is small, a mere 798 acres. It can be buried under more than four hundred inches of snow a year, so its visiting season is a short four months. Even the pile to which its name refers – a columnar basalt structure, formed from a lava flow that may date back one hundred thousand years – is not ... Download PDF (55.0 KB) Mt. Gleason Fire Camp 16 sits atop a rocky ridge separating the North Fork of Mill Creek and the Gleason Canyon drainages, situated deep inside the Angeles National Forest. It is brutally rough country, with steep slopes falling away at 70° angles, and quite inaccessible. To reach the camp, you have to drive six miles out along a winding, narrow road off the Angeles Forest Highway. Its very remoteness made it a perfect site for a Nike Missile... Download PDF (48.9 KB) Summer 2010 was a busy time on the Angeles National Forest. The Forest Service and its contractors poured lots of time and energy into restoring the badly burned terrain in the aftermath of the 2009 Station fire. CalTrans and its crews labored to reconstruct the torched, eroded, and washed-out roads that had been damaged during the historic blaze that consumed 250 square miles. Although shut out from the scorched portions of the... Landscape Mosaic: Managing Fragmented Forests Download PDF (81.6 KB) The important ecological interconnections between public and private lands are widely recognized, most recently in the form of ecosystem management. But what is the historical and political relationship between national-forest management and private-land development? And how might the U.S. Forest Service respond to increased private-land development and landscape fragmentation? There are historical lessons that might guide policy makers...
The Once and Future Forest Service Download PDF (98.4 KB) The news from the Far North has not been good. In spring 2007, University of Alberta scientists reported that portions of the Canadian tundra were transforming into new forests of spruce and shrubs much more rapidly than once was imaginable. “The conventional thinking on treeline dynamics has been that advances are very slow because conditions are so harsh at these high latitudes and altitudes,” reported Dr. Ryan Danby, a member of the... Part III: Internal Tensions Download PDF (42.7 KB) Download PDF (49.8 KB) The postcard-sized broadside from Greenpeace slid under the hotel door at 4:00 a.m. “Sorry we couldn’t be with you in Arkansas this week,” the italic script reads, “but we are busy addressing the threats to our public lands at our first U.S. Global Forest Rescue Station in Oregon.” All conferees at the Forest Service-organized Healthy Forest Conference, held in Little Rock in early June 2004, and focused on the Bush Administration’s Healthy Forest... Download PDF (46.6 KB) What do Albuquerque and Atlanta have in common? Las Vegas and San Antonio? Water woes. Dire water woes. And their increasing thirst, directly tied to each city’s population boom and sprawling size, will not be easily slaked. That’s because they, and their metropolitan cousins across the West and the South, have been the beneficiaries (for lack of a better word) of a massive in-migration since 1970, and as a consequence have been quickly... Download PDF (57.4 KB) In 2006, there was some good news emanating from the public lands. The Bush administration’s controversial proposal to sell upwards of 300,000 acres of national forests and grasslands to underwrite the reauthorization of the Secure Rural Schools and Community Self-Determination Act of 2000 did not happen.... Download PDF (50.4 KB) No one stood up and shouted. None of the questions cut like razor-edged barbed wire. No reply was laced with acrimony.
Heck, even the few sharp exchanges were delivered with civility. Where was the discord and rancor? Where were the flared nostrils and bruised egos? What happened to the high-blown rhetoric and the low blows? Was this really a conference about the Forest Service and its land-management practices? Or had I, in my... Download PDF (51.6 KB) The U.S. Forest Service seems forever in trouble. Its many external critics have said as much since its formation in 1905, but a new charge in 2008 came from a credible, inside source – some of the agency’s twenty-nine thousand employees. As one of them bluntly responded in a controversial survey taken that year: “Are we a timber organization? Are we a fire organization? Are we recreation-based? Are we just cleaning toilets now? I ... Download PDF (54.2 KB) It can’t be a happy moment when a House subcommittee calls in the Government Accountability Office (GAO) to analyze a federal agency’s actions and status. So the U.S. Forest Service snapped to attention in February 2008 after the House Appropriations Subcommittee on Interior, Environment and Related Agencies asked the GAO to study the feasibility of transferring the Forest Service from the Department of Agriculture to ... The New Face of the Agency Download PDF (48.7 KB) There is a striking moment in the 2005 U.S. Forest Service-funded documentary, The Greatest Good. Toward its close, the film probes the uproar that accompanied the agency’s unilateral decision in the late 1960s to launch massive timber operations in the Bitterroot and Monongahela national forests. The “Oh My God” clearcuts that scarred previously green hillsides infuriated residents in Montana and West Virginia, sparking local... Part IV: Global Green Download PDF (37.3 KB) Download PDF (51.8 KB) In the New Town district of Quito, Ecuador, lives a rooster with a funny idea of dawn. He begins reveille at an ungodly 2:00 a.m., and then works his lungs for the next several hours.
While I realize his daily kikiriquís need not be triggered by a lightening sky, still his sense of timing left a lot to be desired. I couldn’t wait to get to the Amazon jungle for a little peace and quiet.... A Changing Climate Download PDF (66.5 KB) Frederico Carlos Hoehne, long-time director of the São Paulo State Botanical Institute, knew the ecological and social costs that came with the destruction of Brazil’s natural flora and fauna. As “forests and prairies were destroyed, we also exterminated insects, birds, and thousands of other animals that were our helpers, our friends,” he wrote in the 1930s. “And in such manner, we caused our own ruin.” There was hope, however: if he... Forestry Done Right Download PDF (68.2 KB) Young and energetic, Tasso Azevedo, the former director of the Brazilian Forest Service (BFS), loves to tell stories. Such as the pointed one he spun while we were hiking through a section of the Amazon rainforest, near Itacoatiara, 180 kilometers east of Manaus, capital of the Brazilian state of Amazonas. “Here is one of the cultural problems we face,” he told a group of international foresters attending Megaflorestais 2008, an informal ... Download PDF (74.0 KB) Download PDF (2.9 MB) Publication Year: 2012
<urn:uuid:e7565b83-7914-4dff-95e9-d9e6f5eaef74>
CC-MAIN-2013-48
http://muse.jhu.edu/books/9780870716607
s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386164021066/warc/CC-MAIN-20131204133341-00028-ip-10-33-133-15.ec2.internal.warc.gz
en
0.935906
2,841
2.546875
3
How do the kidneys work? The body takes nutrients from food and converts them to energy. After the body has taken the food that it needs, waste products are left behind in the bowel and in the blood. The kidneys and urinary system keep chemicals, such as potassium and sodium, and water in balance by removing a type of waste, called urea, from the blood. Urea is produced when foods containing protein, such as meat, poultry, and certain vegetables, are broken down in the body. Urea is carried in the bloodstream to the kidneys. Two kidneys, a pair of purplish-brown organs, are located below the ribs toward the middle of the back. Their function is to: - Remove liquid waste from the blood in the form of urine - Keep a stable balance of salts and other substances in the blood - Produce erythropoietin, a hormone that aids the formation of red blood cells The kidneys remove urea from the blood through tiny filtering units called nephrons. There are about one million nephrons in each kidney, located in the medulla and the cortex. Each nephron consists of a ball formed of small blood capillaries, called a glomerulus, and a small tube called a renal tubule. Urea, together with water and other waste substances, forms the urine as it passes through the nephrons and down the renal tubules of the kidney. Urine collects in the calyces and renal pelvis and moves into the ureter, where it flows down into the bladder. In addition to filtering waste from the blood and assisting in the balance of fluids and other substances in the body, the kidneys perform other vital functions. These functions include: - Production of hormones that help to regulate blood pressure and heart function - Conversion of vitamin D into a form that can be used by the body’s tissues What is nephrology? Nephrology is the branch of medicine concerned with the diagnosis and treatment of conditions related to the kidneys.
Other health professionals who treat kidney problems include primary care doctors, pediatricians, and urologists. What causes problems with the kidneys? In children, problems of the urinary system include acute and chronic kidney failure, urinary tract infections, obstructions along the urinary tract, and abnormalities present at birth. Diseases of the kidneys often produce temporary or permanent changes to the small functional structures and vessels inside the kidney. Frequent urinary tract infections can cause scarring to these structures leading to renal (kidney) failure. Some diseases that cause kidney damage include: - Hemolytic uremic syndrome - Polycystic kidney disease - Urinary tract infections Disorders of the genitourinary system in children are often detected by fetal ultrasound prior to birth. If not detected on fetal ultrasound, often children will develop a urinary tract infection that will prompt your child's doctor to perform special diagnostic tests that may detect an abnormality. Some diseases of the kidney do not reveal themselves until later in life or after a child has a bacterial infection or an immune disorder.
<urn:uuid:1d71274f-0282-47d0-8382-69c7b094561b>
CC-MAIN-2017-43
https://childrensnational.org/choose-childrens/conditions-and-treatments/kidney-diseases/chronic-kidney-disease
s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187824068.35/warc/CC-MAIN-20171020101632-20171020121632-00752.warc.gz
en
0.945992
640
4.03125
4
As snakes lack obvious external clues that indicate their gender, the best way to distinguish male and female ball pythons (Python regius) is to have an experienced keeper or veterinarian probe them. The process is minimally invasive, inexpensive and – for those experienced with the technique – relatively easy. Alternatively, if your ball python is very young, your veterinarian may be able to try to manually evert the specimen's reproductive organs, although this option does not always provide conclusive results. How to Find Out if Your Ball Python Is a Male or Female Male snakes have two intromittent organs called hemipenes (singular: hemipenis). These hemipenes reside inside the tail base when not in use. When mating, males evert one of the hemipenes and insert it into the female's vent, allowing for the transfer of sperm. Females have no hemipenes, although they have small pockets attached to the cloacal wall that are homologs of hemipenes. Probing is a technique in which a keeper or veterinarian gently inserts a smooth steel rod into a snake's vent to determine the snake's sex. When the probe passes into the vent of a male, it enters one of the two inverted hemipenes; when the probe passes into the vent of a female, it enters one of the hemipene homologs. Hemipenes are much longer than homologs are, thus allowing the probe to pass much deeper into the tail base of a male than of a female. By noting the depth to which a probe passes, you can infer the sex of the animal. Probes usually penetrate females to a depth equivalent to one to five scale rows, while probes pass about five to 16 scales deep in males. Manual eversion of a snake's hemipenes – often called "popping" – is another technique that can provide clues about a snake's sex. To evert a snake's hemipenes, an experienced keeper or your veterinarian can apply gentle, rolling pressure (like squeezing a tube of toothpaste) to the snake's tail base.
This usually causes a snake's hemipenes – if present – to pop out of the vent. However, popping is not an infallible technique; it only produces definitive results in the case of males. For a variety of reasons, the hemipenes of males fail to evert sometimes, which can lead to males being misidentified as females. The technique is most effective when performed on very young snakes, as mature males may be able to keep their hemipenes inside their body. Ball pythons – and most other pythons – bear two small clawlike appendages near their tail bases. Called spurs, these structures are the vestigial remnants of rear legs. Males use them to stimulate and position females during mating, so their spurs are normally larger than females'. However, exceptions are common, so spur size is not a reliable criterion for distinguishing males from females. Husbandry Is the Same Male and female ball pythons require similar husbandry. Females may grow a little faster, reach slightly larger sizes and have slightly larger heads than males do, but few other differences exist. Males tend to be slightly more common in the marketplace, as breeders often maintain two to four times as many females as males, thus making males more commonly available.
<urn:uuid:a73c43e2-6ef5-4278-bb89-e72555fabc16>
CC-MAIN-2020-05
https://www.cuteness.com/article/out-ball-python-male-female
s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250616186.38/warc/CC-MAIN-20200124070934-20200124095934-00022.warc.gz
en
0.933926
688
3.421875
3
Written By fania lubis on Sunday, 1 January 2017 | 12:38 Dessert is a course that concludes a main meal. The course usually includes sweet foods and beverages, such as dessert wine or liqueurs, but may include espresso, cheeses, nuts, or other savory items. In some parts of the world, such as much of central and western Africa, there is no tradition of a dessert course to conclude a meal. The phrase "dessert" can apply to many confections, such as cakes, tarts, cookies, biscuits, gelatins, pastries, ice creams, pies, puddings, custards, and sweet soups. Fruit is also commonly found in dessert courses because of its naturally occurring sweetness. Several cultures sweeten foods that are more commonly savory to create desserts. The word "dessert" originated from the French word desservir, meaning "to clear the table." Its first known use was in 1600, in a health education manual entitled Naturall and artificial Directions for Health, which was authored by William Vaughan. In his A History of Dessert (2013), Michael Krondl explains that it refers to the fact that dessert was served after the table had been cleared of other dishes. The term dates from the 14th century but attained its current meaning around the beginning of the twentieth century, when "service à la française" (setting a variety of dishes on the table at the same time) was replaced by "service à la russe" (presenting a meal in courses). Sweets were fed to the gods in ancient Mesopotamia, India, and other ancient civilizations. Dried fruit and honey were most likely the first sweeteners used in most of the world, but the spread of sugarcane around the world was essential to the development of dessert. The spread of sugarcane: Sugarcane was grown and processed in India before 500 BCE and was crystallized, making it easy to transport, by 500 CE.
Sugar and sugarcane were traded, making sugar available to Macedonia by 300 BCE and China by 600 CE. In South Asia, the Middle East and China, sugar has been a staple of cooking and desserts for over a thousand years. Sugarcane and sugar were little known and rare in Europe until the twelfth century or later, when the Crusades and then colonization spread their use. Europeans began to manufacture sugar in the Middle Ages, and more sweet desserts became available. Even then, sugar was so expensive that usually only the wealthy could indulge on special occasions. The first apple pie recipe was published in 1381. The first documented use of the term cupcake was in "Seventy-five Receipts for Pastry, Cakes, and Sweetmeats" in 1828, in Eliza Leslie's Receipts cookbook. The Industrial Revolution in America and Europe caused desserts (and food in general) to be mass-produced, processed, preserved, canned, and packaged. Frozen foods became very popular starting in the 1920s, when freezing technology emerged. These processed foods became a big part of diets in many industrialized nations. Many countries have desserts and foods distinctive to their nation or region. Cakes are sweet tender breads made with sugar and delicate flour. Cakes can vary from light, airy sponge cakes to dense cakes with less flour. Common flavourings include dried, candied or fresh fruit, nuts, cocoa or extracts. They may be filled with fruit preserves or dessert sauces (like pastry cream), iced with buttercream or other icings, and adorned with marzipan, piped borders, or candied fruit. Cake is often served as a celebratory dish on ceremonial occasions, for example weddings, anniversaries, and birthdays. Small-sized cakes have become popular, in the form of cupcakes and petits fours. Chocolate is a typically sweet, usually brown food preparation of Theobroma cacao seeds, roasted, ground, and often flavored.
Pure, unsweetened chocolate contains mostly cocoa solids and cocoa butter in varying proportions. Much of the chocolate consumed today is in the form of sweet chocolate, combining chocolate with sugar. Milk chocolate is sweet chocolate that additionally contains milk powder or condensed milk. White chocolate contains cocoa butter, sugar, and milk, but no cocoa solids. Dark chocolate is produced by adding fat and sugar to the cacao mixture, with no milk or much less milk than milk chocolate.
<urn:uuid:efcde457-0fc9-4d3b-8593-06c70fe8b67c>
CC-MAIN-2017-34
http://icravesweetandsalty.blogspot.com/2017/01/here-is-your-delicious-dessert-list-of.html
s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886102663.36/warc/CC-MAIN-20170816212248-20170816232248-00594.warc.gz
en
0.963677
954
3.34375
3
This section offers data and statistical reports on acute health conditions caused by pathogens and chronic conditions, reported by primary care providers and health care institutions. - Asthma Data - Cancer Data - Carbapenem Resistant Enterobacteriaceae (CRE) Surveillance - Chronic Disease Profiles - Chronic Hepatitis Surveillance - Communicable Disease Surveillance Data - Health of Washington State Report - Heart Disease, Stroke, and Diabetes - HIV/AIDS Data and HIV Factsheets and Surveillance - Newborn Screening Statistics - Oral Health - Sexually Transmitted Disease - Tuberculosis Data and Reports - West Nile Virus The department maintains data and statistical reports on diseases and conditions. Sources of data include reports by health care providers, hospitalization data, death records, studies, and community surveys. These data summarize trends in notifiable communicable diseases reported by local health jurisdictions to the department. Reported cases often represent only a fraction of the actual burden of disease. The accuracy of this information has limitations, due to: - Sick people who do not seek healthcare - Healthcare providers and others who do not always recognize, confirm or report notifiable conditions. A chronic disease persists over a long period of time or recurs. Some chronic diseases, such as chronic hepatitis and HIV, are caused by pathogens (germs). Others are caused by behaviors (some types of heart disease, cancer, etc.), the environment (some asthma and cancer), or genetics (some birth defects and cancer). Sometimes the cause of chronic conditions is not fully known, or there may be a combination of causes. Chronic diseases or conditions include: - Birth defects (and other special healthcare needs of children) - Chronic hepatitis - Heart Disease and Stroke An acute disease is a disease with a rapid onset and/or a short course. Many acute diseases are caused by pathogens (germs).
Acute diseases include: - Sexually Transmitted Diseases - Other communicable diseases (including: enteric/foodborne disease, vaccine preventable disease, zoonotic disease, acute hepatitis)
<urn:uuid:eb6e2e37-9d92-4f08-8683-33414dc8f13b>
CC-MAIN-2023-50
https://doh.wa.gov/data-and-statistical-reports/diseases-and-chronic-conditions
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100873.6/warc/CC-MAIN-20231209071722-20231209101722-00000.warc.gz
en
0.888106
429
2.890625
3
Australia to Build Huge Desalination Plant Southern Australia has been in the grip of serious drought; reservoirs are drying out, and water restrictions are in place. The government has planned a $4 billion project to provide more drinking water, including a huge desalination plant that is expected to be one of the world's largest. The project is planned to be sited in Wonthaggi, south-east of Melbourne, and the government expects that water bills could double to fund it. This, combined with news from the WWF warning that removing salt from sea water could worsen the problem, makes it a less than popular plan. Desalination is energy intensive and emits a lot of greenhouse gases. The WWF said that Australia, Spain and Saudi Arabia have made significant progress by limiting water use and recycling supplies. However, the water must come from somewhere, and desalination is a convenient solution. Although the energy use is a problematic side effect, one imagines that it would be entirely feasible to power the plant using solar cells. It would raise the cost of an already expensive project, but money must be spent in order to create a sustainable and appropriate infrastructure. ::ENN
- Permaculture means "permanent culture"; besides being a method to design human-scale systems, it provides a systemic way of viewing the world and the correlations between all its components.
- It has taught me that each location has its own nature and each person has their own nature; people influence places and places influence people. It means that, to establish what the work is and its rhythm, we must take the place and the people who live there as an organizing principle. Observing holistically, experiencing every detail. Understanding WHY you are there - and what your intention is.
AdaBoost is a type of algorithm that uses an ensemble learning approach to weight various inputs. It was designed by Yoav Freund and Robert Schapire in the mid-1990s and has since become something of a go-to method for boosting in machine learning. Experts describe AdaBoost as one of the best-performing weighted combinations of classifiers – though one that is sensitive to noisy data, which practitioners must account for in order to get good machine learning results. Some confusion results from the fact that AdaBoost can be used with multiple instances of the same classifier with different parameters – professionals might talk about AdaBoost "having only one classifier" and get confused about how the weighting occurs. AdaBoost also embodies a particular philosophy in machine learning – as an ensemble learning tool, it proceeds from the fundamental idea that many weak learners can get better results than one stronger learning entity. With AdaBoost, machine learning experts often craft systems that take in a number of inputs and combine them for an optimized result. Some take this idea further, talking about how AdaBoost can command "armies of decision stumps" – essentially less sophisticated learners employed in large numbers to crunch data – an approach often favored over using a single strong classifier.
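The idea above can be made concrete with a short sketch. This is a minimal, illustrative implementation of the classic AdaBoost loop with one-dimensional decision stumps as the weak learners; the function names and the toy data are invented for this example, not taken from any library:

```python
import math

def stump_predict(x, threshold, polarity):
    # A decision stump: classify by thresholding a single feature.
    return polarity if x >= threshold else -polarity

def train_adaboost(X, y, n_rounds=5):
    """Minimal AdaBoost on 1-D data; labels in y must be +1 or -1."""
    n = len(X)
    w = [1.0 / n] * n            # start with uniform sample weights
    ensemble = []                # list of (alpha, threshold, polarity)
    for _ in range(n_rounds):
        best = None
        # Exhaustively pick the stump with the lowest weighted error.
        for threshold in X:
            for polarity in (1, -1):
                err = sum(wi for xi, yi, wi in zip(X, y, w)
                          if stump_predict(xi, threshold, polarity) != yi)
                if best is None or err < best[0]:
                    best = (err, threshold, polarity)
        err, threshold, polarity = best
        err = max(err, 1e-10)    # avoid log(0) when a stump is perfect
        alpha = 0.5 * math.log((1 - err) / err)   # this stump's vote weight
        ensemble.append((alpha, threshold, polarity))
        # Re-weight: boost the samples this stump got wrong.
        w = [wi * math.exp(-alpha * yi * stump_predict(xi, threshold, polarity))
             for xi, yi, wi in zip(X, y, w)]
        total = sum(w)
        w = [wi / total for wi in w]
    return ensemble

def predict(ensemble, x):
    # Final prediction: sign of the weighted vote of all stumps.
    score = sum(a * stump_predict(x, t, p) for a, t, p in ensemble)
    return 1 if score >= 0 else -1
```

Each round trains one "weak" stump on re-weighted data and assigns it a vote weight (alpha) based on its error; the final classifier is the weighted majority vote, which is exactly the "many weak learners beating one strong one" idea described above.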
Bony Framework of Abdominopelvic Cavity

The skeletal framework that serves as the attachment site for the muscles that make up the abdominal and pelvic walls consists of the lower ribs, costal cartilages, five lumbar vertebrae, and bony pelvis. The costal cartilages of the fifth, sixth, and seventh ribs angle obliquely upward and medially to join the sternum superior and lateral to the xiphisternal junction. The terminal portion of each of the 8th, 9th, and 10th costal cartilages tapers to a point and is attached to the lower border of the costal cartilage above. The 11th and 12th costal cartilages are quite short, with pointed tips, neither of which attaches to the cartilage above it. The lower border of the 10th costal cartilage is commonly the most inferior part of the caudal margin of the thoracic cage. From the beginning of the 10th costal cartilage to the junction of the 7th costal cartilage with the sternum, a cartilaginous border is formed, which is frequently referred to as the “costal arch” (costal margin), although this term is perhaps more correctly used to refer to the arch formed by the right and left cartilaginous borders as they are connected by the lower end of the sternal body from which the variable xiphoid process of the sternum projects. The latter serves as a landmark for the level of the body of the 10th (or 11th) thoracic vertebra.

The five lumbar vertebrae present the parts described for a typical vertebral body (centrum) and vertebral (neural) arch, supporting the two transverse processes, the spinous process, and the superior and inferior articular processes. The bony pelvis is made up of the two hip bones, with the sacrum and coccyx wedged between them posteriorly. For descriptive purposes, the bony pelvis is divided by a plane passing through the sacral promontory and the crest of the pubis, into the major (false) pelvis above the plane and the minor (true) pelvis below this plane.
This plane lies roughly in the inlet of the true pelvis, which is bounded by the sacral promontory, crest of the pubis, anterior margin of the ala of the sacrum, the arcuate line of the ilium, and the pecten pubis, all of which could be considered as forming the linea terminalis. The hip bone (os coxae or innominate bone) is made up of the ilium, pubis, and ischium, which are separate bones in the young subject but fuse at the acetabulum in the adult. On the inner surface of the ilium, the arcuate line indicates the inferior border of the ala of the ilium, which ends superiorly in the palpable iliac crest, stretching from the anterior superior spine of the ilium to the posterior superior iliac spine. The crest also presents an external (lateral) lip, an internal (medial) lip, and an intermediate line and thickening on its lateral aspect a short distance posterior to the anterior superior spine, which is called the tubercle of the crest. The body of the pubis joins the pubic bone on the other side, by means of a fibrocartilaginous lamina, the symphysis pubis. The upper border of the body, which is thick, roughened, and turned anteroinferiorly, is called the crest, and at its lateral end is a prominence named the pubic tubercle. The superior ramus of the pubis, coursing superiorly and posterolaterally, enters into the formation of the acetabulum (acetabular portion, sometimes called the body) and presents a prominent pecten pubis, or pectineal line, which is continuous with the arcuate line of the ilium. The inferior ramus courses inferiorly and posterolaterally, to join the ramus of the ischium and complete the margins of the obturator foramen. The main portion of the ischium extends inferiorly and posteriorly from the acetabulum, to expand into the ischial tuberosity, which projects posteroinferiorly.
From the posterior border of the inner side of the lower part of the acetabular portion of the ischium, the ischial spine projects posteromedially between the greater and lesser sciatic notches. A ramus of the ischium courses anteriorly from the lower end of the main portion of the bone, to become continuous with the inferior ramus of the pubis, forming what is often referred to as the ischiopubic ramus.
Crafts which develop both mental power and skill of hand are steadily gaining in favor in this country. The system of apprenticeship, by which it was possible to learn a trade while serving under a master-craftsman, is past, and in its place a few institutions which realize its importance are establishing classes of workers. The Norwich Art School two years ago selected book-binding as the craft which best unites the head and the hand, and gives fullest scope to the art faculty of the student. Every facility has been provided for successfully carrying it on, and now at the end of two years' trial it can be pronounced successful. Next year the work will be continued with increased classes and equipment.

The Bindery is in the Slater Memorial Building, shown in the accompanying illustration, which contains, together with the Museum and Art School, the Peck Library. The periodicals and books from this library are bound by students who spend three-fourths of their time upon regular work going on in the Bindery. Thus by carrying out in a measure the apprenticeship system they receive the most practical training and pay a much lower rate of tuition than would otherwise be possible. In addition to the books bound for the library, the Norwich Free Academy, of which the Art School is a department, has a printing office from which books are issued from time to time. During the first year of the Bindery the Academy Press printed the "Journal of Madam Knight" in a limited edition of 210 copies. This year from the same Press has come the "Stone Records of Groton" in an edition of 300 volumes. These books have for the most part been bound in substantial boards with leather and cloth. A few of the volumes were bound in leather, hand-tooled. These books, together with those from the library, make a total of some 500 volumes. In addition the students have brought in a great variety of their own books to bind.
With this variety of work constantly going forward in the Bindery there is every opportunity here for becoming an expert binder. There are three courses of study offered in the Bindery, the students being divided as follows. Day students working five days in the week from 9 tot, giving three-fourths of their time to the school and the remainder to their own work; the tuition for this course is five dollars per month. Special students who bind their own hand-tooled work, paying fifteen dollars per month; and evening students working Tuesday and Friday evenings from 7 to 9 at three dollars for the term. All of the work is done under the instruction of Mr. Robert W. Adams, a skillful and experienced binder, who is in the room constantly guiding all the work going forward. In the library are to be found reference books upon the subject, together with specimen bindings, and in the design class of the Art School students will be assisted in selecting and carrying out designs for tooled covers. There is also an opportunity for a limited number of students to study printing in the Academy printing office. It is planned to arrange the courses of study so that the students may become practical craftsmen in the Art of putting together a book. Additional information in regard to the school will be furnished upon application to the Director of the Art School, Norwich, Conn.

July 1st, 1903.
From a profitable asset to an expensive liability

By the 1960s Northern Ireland's traditional industries, namely linen and shipbuilding, were in serious decline. In textiles, 31,000 jobs - more than 50% of the workforce - had disappeared since the war. In 1961 the largest Belfast spinning firm employing 1,700 workers closed down. In one fell swoop the great Belfast shipyard, Harland & Wolff, made 8,000 workers redundant, out of a workforce of 21,000, and over the next 12 months laid off an additional 2,000. The aircraft factory, Shorts, Belfast's then second-largest employer, was threatening to close down, after having cut its workforce by 20% between 1958 and 1961. By 1969, only 183,000 workers were still working in manufacturing in Northern Ireland, compared with 303,000 at the end of the war.

The narrow base of the Ulster economy, and the fact that private family-run firms were typical, right up until the seventies, limited the possibility of diversification and restricted investment. As a result it was far more vulnerable to outside competition. While the British economy was still expanding, and London ministers could boast of "full employment", quite the opposite was happening in Northern Ireland. There, unlike in the rest of the UK, state subsidies were not conditional on set employment targets. This encouraged mechanisation rather than job creation. In fact, Northern Ireland was already becoming a low-cost backyard for British and American multinationals such as Du Pont, Monsanto, Courtaulds and ICI, whose highly-automated artificial textile plants replaced the lost linen mills - but not the jobs which had disappeared.

The Northern Ireland working class did not take this lying down. Strikes and demonstrations met every new announcement of job cuts and the 1962 Mayday march in Belfast was the largest since 1919. Faced with threatening industrial unrest on a par with the 1930s, the Northern Ireland establishment demanded that the British government intervene.
It did, and paid to keep the Harland and Wolff shipyard open - then still the biggest shipyard in the world. Within just one decade, the publicly-subsidised sector (which included Shorts in addition to Harland & Wolff) expanded enormously - from 22.5% of the manufacturing workforce in 1961 to 44.9% in 1972! It was estimated that for each job created during the 60s in Northern Ireland, the cost to the Treasury was more than double that in Britain's most deprived areas - meaning that neither the local privileged nor the much acclaimed "outside investors" were prepared to put their own cash on the table to fund investment in the province.

This was in fact a paradoxical situation. Ever since partition, the cost to the Treasury of maintaining a state apparatus (and later a welfare state) in Northern Ireland had been justified by the profits this guaranteed to British companies. Yet, by the mid-60s, while the rest of the British economy was still expanding, that of Northern Ireland was shrinking and depending increasingly on the life support subsidy machine of the Exchequer. This was also the time when the British bourgeoisie was getting rid of its empire in order to free British capital from the political and financial burden of having to maintain colonial state machineries in the Third World. But here it was, faced with a rocketing bill to retain its control over a tiny market of just over one million people!

An increasingly parasitic economy

Yet the cost of sustaining Northern Ireland in the sixties was nothing compared with what was to come, after the social explosion at the end of that decade. From £100m in 1968 and £181m in 1972, the bill for the British state was to rocket to £1bn in 1980 and £2bn in 1990. Today, just 8 years later, it is £3.4bn! No wonder Patrick Mayhew, Major's Northern Ireland Secretary, was quoted by a German magazine - Die Zeit - in 1993, as saying, "Three billion pounds, for one and a half million people - we have no strategic interest.
We have no economic interest in staying there." Private sector output per head in Northern Ireland in the mid-eighties was 67% of that in Britain, but public sector output was 134% - at the height of Thatcher's drive to "roll back" the state and cut public expenditure! Of course this included defence - but on education and health the figure was 118%. In fact from the seventies onwards, the province was totally dependent on outside support for its existence. There was an enormous growth of governmental agencies and quangos which provided careers for a substantial layer of the middle class as a way of securing its support for British rule. At the same time there was another huge, but hidden subsidy. London was forced to exclude Northern Ireland when it came to such austerity measures as the sale of council housing, the poll tax, the Jobseekers' Allowance and health and education cuts - or at least postpone these measures - for fear of a possible social backlash. In fact, the Northern Ireland petty-bourgeois were the main beneficiaries of London's reluctant "largesse". However, to some extent at least, the Northern Ireland working class benefited as well. But it found itself in the paradoxical situation of having the highest level of health and education provision in the UK, while also having the highest level of unemployment and the lowest level of income! Of course the purpose of the British bourgeoisie in advancing these large subsidies over the years was not to make life easier for Northern Ireland's workers. As long as the British state retained political responsibility for Northern Ireland, it had no other choice than to foot the bill to sustain the economic viability of the province. This is why it never stopped its search for a negotiated settlement of one kind or another, even in the middle of the worst bombing campaigns. 
Not for the sake of "ending violence" and "securing peace" for a poverty-stricken population, but for the sake of relieving British capital of an increasing drain on the finances of its state - regardless of the consequences for the population of Northern Ireland.

The first failed attempt at North-South rapprochement

What was to become Britain's strategy for the next decades really took shape in the early sixties, under pressure of the economic recession in Northern Ireland. If the British state was to avoid taking responsibility for the North's increasing dereliction, it had to hand over the responsibility to somebody else. Self-government, which was already in place in some measure, was a convenient stepping stone towards this goal. But devolving powers to Belfast could only postpone the problem, providing a temporary screen, not a long-term solution. All the more so, as a substantial part of the Stormont government machinery was made of representatives of that section of the Northern middle-class which was increasingly threatened with economic bankruptcy - and therefore reluctant to implement London's policy. The only long term solution was a takeover of Northern Ireland by the Republic. A section, at least, of the privileged classes of the North was not averse to this - those whose fortunes had been built on services, finance, large scale commerce, as well as a whole layer of professionals. For them, a phased-in opening of the South was a welcome extension of their field of operation. As to the resistance which existed in the South to a resumption of closer ties with Britain, this could easily be overcome. After all, the 1949 withdrawal of the Republic from the Commonwealth and subsequent build-up of tariff barriers between the two countries had been primarily a reaction of defence against the drastic economic demands put by London on Dublin.
Should London now offer a better deal, which in this period was easy enough without putting the profits of British companies at risk, the Southern bourgeoisie would not hesitate to accept. In the South, Fianna Fail had begun to attract the votes of the business community and its new leader, Sean Lemass, who took over from de Valera in 1959, represented these "new deal" forces. In the North, economics had also been decisive. Terence O'Neill, the new leader of the Ulster Unionists, expressed the consensus among the "new deal" establishment - he followed the remit of the British government in attempting to secure an alternative lifeline for the province. Thus, for the first time since 1925, direct political links were made between North and South, with Lemass visiting O'Neill in Belfast in January 1965 to discuss areas of future co-operation. That the unionist electorate saw the need for this was shown in the election called in Northern Ireland in October the same year. O'Neill's reformist policies received widespread support, and he celebrated the result in an often-quoted speech where he said: "Co-operation between North and South is now publicly endorsed, and today, when a militant Protestant housewife fries an egg she may well be doing it on Catholic power generated in the South and distributed in the North as a result of that first O'Neill-Lemass meeting."

That same year, the Fianna Fail government negotiated a Free Trade Agreement with Britain to come into effect in 1966. This deal dismantled all tariff barriers between the two countries - in economic terms it reintegrated Ireland into the British sphere and in this sense, the barrier between the North and the South no longer had any real significance. The full integration of the two markets was clearly in the interests of British and other foreign companies, which were taking over an increasing share of capital holdings both North and South.
It was also in the interests of the local capitalists, North and South, with the prospect of increased profits generated by the resulting boost for Irish trade. However this process was stopped before it had time to come to fruition, by the social explosion which broke out in 1968, in the North. But it did create significant economic ties and cross-border trade which became permanent.

From an economic liability to a political threat

By the late 60s conditions had deteriorated severely for the entire Northern Ireland working class and very suddenly for a whole section of previously secure workers. Social discrimination, which was built into the system of local government, denied the vote to those who were not well-off enough to be rate-payers and sent the poorest families to the back of the housing queue. Those most affected tended to be catholic - who made up the largest part of the poorest - but also included poor protestant families without the right connections. This situation provoked the emergence of militant housing action and civil rights associations. A whole social layer moved into action - those from the poorest ghettoes - providing them with a voice and a political expression which so far they had never had. What started mildly enough as protests demanding "one man, one vote, one family, one house!" soon exploded into a huge social uprising which destabilised the whole province and threatened to spread to the South.

For the British government this was undoubtedly a shock, throwing them right off track as far as their aims for the province were concerned; they had not expected it and they were not prepared for it. They sent troops in and then lurched between a policy of brutal repression on the one hand and gestures of reform on the other, in an attempt to contain the revolt.
Never before, except in some of their former colonies in the Third World, had they been confronted with ghetto children throwing molotov cocktails at their police and whole areas, like the Bogside in Derry and large parts of West Belfast being transformed into no-go areas against their armed forces. In addition there was the potential, early on in this situation, for all the poor classes to unite against British rule, given the increasing unemployment and degradation of conditions also now faced by so many protestant workers. In the event, the British bourgeoisie won a respite, because this powerful explosion failed to find a class expression. Instead, on the catholic side of the sectarian divide, it was channelled towards the narrow nationalism of the past. On the protestant side, on the other hand, reactionary loyalist forces capitalised on the fear generated by new or worsening poverty by organising local gangs of vigilantes on the pretext of "defending" the community and helping the Crown forces to restore order. For Britain, the price for containing this rebellion was high. The events considerably reinforced the most backward section of the protestants - made up of mainly small businessmen, self-employed and shopkeepers, whose survival depended directly on the crutch provided to the economy by Britain and who could always be counted on to oppose any threat to sever this lifeline. Those protestants who had previously favoured a rapprochement with the South took fright as a result of the social unrest. A settlement, as previously planned, would now look as if it was a victory for the ghetto uprising. So, once again, they ran for shelter behind the Union. The other element in the high cost to Britain was the development of political terrorism. While this development was much less threatening than the social revolt, keeping it under control required a massive military and repressive machine. 
But in addition to the exorbitant economic cost of this machine, its deployment also sustained a smouldering anger in the poor ghettoes - and therefore the risk of new social explosions whose first target would be inevitably the British state. This permanent instability turned Northern Ireland into a serious political liability for the British bourgeoisie.

The Sunningdale rehearsal

The ghetto explosion made the search for a settlement more difficult, but also more urgent. The Stormont election in 1969 brought the collapse of the reforms O'Neill had championed. Barely half the Unionists who were elected now sided with him, and he almost lost his own seat to the demagogic bigot, Ian Paisley. As a result, he resigned, to be replaced first by James Chichester-Clark, and then by Brian Faulkner. But the Unionist tail was not wagging the British dog. O'Neill's successors took a very British official line against the uprising in the ghettoes - that of brutal repression. And in tandem with this went a reform policy. In fact given the summoning of British troops to the province, and the consequent tripping back and forth of British politicians at the height of the "Troubles", the supervision of a policy in British interests was tight. By the time Faulkner took over, in March 1971, "One man, One vote" was on the statute book, a Central Housing Commission had been set up, the notorious B Specials (an official protestant armed militia) had been disbanded and the RUC disarmed - all demands made by the civil rights movement. There was nothing surprising in this: a settlement remained more than ever a necessity for Britain, even more so now because of the political instability, but it had to be achieved on the basis of a relationship of forces favourable to British interests. Hence the faster the move towards a settlement, the tougher the repressive measures. This is why Faulkner brought in internment without trial in August 1971.
However this draconian measure produced an unexpected escalation of protest and resistance in the catholic communities who were targeted by it. It culminated in the shooting dead of 13 demonstrators by the British Parachute Regiment on Bloody Sunday in January 1972. The Provisional IRA, which had emerged on the back of the ghetto uprising, then retaliated with a bombing campaign which left 56 British soldiers dead in the following two months. By this time the catholic SDLP had resigned from Stormont under pressure of catholic public opinion, and the Unionists were regrouping in all sorts of factions and new parties with more and more reactionary agendas. Not to mention the former B-specials and other loyalist gangs who regrouped into new protestant paramilitary organisations.

The de facto collapse of the Stormont government prompted a return to direct rule by Westminster in March 1972. Immediately, the Tory government of Ted Heath proceeded to draw up plans for a new framework to pave the way to a settlement, while at the same time stepping up the repression even more. On July 30th, a military build-up started. This was to involve a total of 26,000 soldiers, plus tanks, bulldozers, saracens and helicopters - the biggest military expedition since Suez. Operation Motorman, as it was called, was ostensibly aimed at restoring law and order to the no-go areas of Derry and West Belfast. Such a show of military strength was unnecessary, however. The army knew that the IRA had neither the military means to counter such an operation nor the political will to mobilise the population of the ghettoes to do it. The purpose of this show of strength was pure propaganda, just as the IRA's purpose in detonating 26 car bombs in Belfast ten days before, had been propaganda.
Operation Motorman was aimed at demoralising the catholic ghettoes as well as demonstrating to loyalist supporters that the British state had the situation under control and would tolerate no more attempts at forcing its hand. In that sense it was also meant to be understood as a thinly veiled threat to the loyalist demagogues who had emerged as leaders of the reaction. Above all, however, this Operation was really meant to open the way for Heath's new attempt at a settlement. On September 20th, the SDLP took (too conveniently for this to have been just a co-incidence) the initiative of issuing a policy document, called "Towards a New Ireland". It called for joint British-Irish sovereignty over Northern Ireland. Five days later, a conference was held at Darlington in England to discuss the future of Northern Ireland. All the parties previously involved in Stormont were invited. Only the Northern Ireland Labour Party, the Ulster Unionist Party and the Alliance attended. The SDLP boycotted it because of internment and the Democratic Unionist Party because it branded it a "betrayal". Nothing came of it. But what mattered was the gesture. A new process had been initiated. The next move was the proposal of a new Assembly. This was not to be quite the same as the old one. It was to be a "power-sharing" body, elected by proportional representation, in order to allow minorities a fairer share of seats. Its size was increased from 52 seats to 78, to permit the functioning of an inter-party committee system. And most importantly, it was stated that: "following elections to the Northern Ireland Assembly, the Government will invite the Government of the Republic of Ireland and the leaders of the elected representatives of Northern Ireland opinion to participate with them in a conference to discuss the question of a Council of Ireland." 
This "Council of Ireland" was nothing but the expression of the common interests that had developed between sections of the bourgeoisie in Belfast, Dublin and London. As William Whitelaw stated in his Green Paper on the future for Northern Ireland in 1972: "It is, therefore, clearly desirable that any new arrangements for Northern Ireland should, whilst meeting the wishes of Northern Ireland and Great Britain, be so far as possible acceptable to and accepted by the Republic of Ireland, which from 1 January 1973 will share the rights and obligations of membership of the European Communities." Garret FitzGerald, one of the main ideologues of the "new deal" in Ireland, put the Republic's point of view: "Within a vast European Community the two parts of Ireland, sharing common interests in relation to such matters as agriculture and regional policy, must tend to draw together." In June 1973, elections to the Assembly took place with a record 72% turnout despite the IRA's call to boycott the ballot. The SDLP along with the Official Unionists of Faulkner had enough of a majority between them for an executive to be set up. Thus talks between the British government, the newly-elected Northern Irish Assembly and the Dublin governments took place in Sunningdale, in December 1973, on the setting up of a Council of Ireland. This was to be a two-tier affair, with a Council of Ministers and a Consultative Assembly. There would be equal representation from both parts of Ireland and all decisions would have to be unanimous. The Council's main function would be to facilitate economic and social co-operation. An "agreed communiqué" was issued, announcing that: "The Irish government fully accepted and solemnly declared that there could be no change in the status of Northern Ireland until a majority of the people of Northern Ireland desired a change in that state." But despite the obvious similarities this was not 1998. It was the end of 1973. 
And in the first months of 1974, the Ulster unionists who were against any dealing with the South had defeated Faulkner and his faction. Then, in the face of the miners' strike in Britain, the Conservative government called a snap general election. For the new Northern Ireland Assembly, the timing could not have been worse. The new United Ulster Unionist Council brought together all the anti-power-sharing Unionists to fight the general election as one bloc. They won 11 of the 12 seats in the British parliament - although, in and of itself, this did not actually derail the process in motion in Northern Ireland.

Sunningdale's plug pulled out

As the Northern Ireland executive was taking a vote to accept the Sunningdale agreement, on 14 May 1974, the Ulster Workers Council announced a general strike. The UWC was led by, among others, Glen Barr, an ex-shop steward and ex-power worker, who was also the spokesman of the loyalist UDA. The UDA, UVF and other smaller paramilitary groups, the leaders of loyalist skilled workers and their favoured politicians, such as Paisley and William Craig, all came together in the UWC. They were opposed to the Council of Ireland, and power-sharing, on the basis that these would undermine the future of protestant workers. Their central demand was for there to be new elections to the Assembly. However, the politicians were not too keen on being associated with the general strike proposed by the loyalist members of the UWC. In the end the UWC called a strike without the politicians' agreement. The strike lasted two weeks and brought the province to a complete standstill - thanks largely to the support of the power workers who shut down most of Northern Ireland's electricity supply. But in fact it was not the kind of strike even the workers involved were expecting. At Harland and Wolff's shipyard, eight thousand manual workers were called to a lunchtime meeting where they expected to hear rousing speeches.
Instead an unnamed voice simply announced that any cars still in the car park at 2 o'clock would be burnt! Rather than a strike - that is, the result of collective action by workers - it was a military operation in which the workers' labour power was switched off, more or less at gunpoint. It was the paramilitaries who ran the show on the ground, intimidating anyone who tried to go to work. They manned roadblocks, prevented transport from running, closed down shops and when catholic pubs in Ballymena didn't obey their orders they wrecked three of them and shot dead the two middle-aged brothers who ran a fourth pub. There was little popular support for this strike except in loyalist strongholds like Larne Harbour. But with the paramilitaries organising it by their usual methods, and the British army and RUC standing back while they did so, it was not surprising that it was successful. On the fourth day of the strike, massive car bombs were exploded by loyalists in the Republic - in Dublin and Monaghan town - killing 28 people. After just 14 days the new Assembly executive resigned. This "strike" had in fact been met with passivity by a large section of the working class, on both sides of the divide, who saw no point in challenging the UDA's and UVF's roadblocks. Despite this, the strike was presented as evidence of support for the loyalist agenda amongst vital sections of the working class, like power and dock workers. However it primarily exposed the political vacuum which existed in the ranks of the working class, where the existing working class organisations and political currents - and there were quite a few, including the trade unions - conveniently kept a low profile throughout this period. It also exposed a consistent feature in Northern Irish politics - the well-justified suspicion that workers felt towards politicians and governments.
Certainly few among these workers were prepared to confront the paramilitaries in order to defend another deal cobbled together by politicians and ministers. And for that, at least, they could hardly be blamed. As a consequence the Assembly fell, Faulkner resigned and Westminster once more resorted to direct rule. A Convention was set up to discuss new arrangements for a devolved government. However the splits in the Unionist majority, the formation of new paramilitary groups, and the increasing tension in the protestant community due to the on-going economic crisis, meant that the Unionist-dominated Convention failed to agree on how the province might govern itself. It was dissolved in March 1975.

The 1975 IRA truce

Despite the failure of the Sunningdale Agreement, and the re-imposition of direct rule, the British Labour Government did not give up its efforts to find some kind of settlement. For a whole year in 1975 a "truce" with the IRA was theoretically in place accompanied by on-going discussions with the Republicans. This wasn't the first time that the IRA and the British government had had discussions, of course. There had been secret talks in 1972 between Conservative Ministers led by William Whitelaw and the IRA top leadership, in the aftermath of Bloody Sunday, only one month after Whitelaw had turned down the IRA's offer of a meeting in Derry. While Whitelaw later dismissed this meeting as a non-event, in accordance with the IRA's demand he had nevertheless granted political status to paramilitary prisoners, though refusing to talk about withdrawal of British troops. The 1975 truce, however, took place in the context of the collapse of Sunningdale. Given that this collapse had been due to the ability of the loyalist paramilitaries to demonstrate their control over a decisive section of the protestant working class, the Republicans wished to show that they too had the ability to control a significant section of the catholic working class.
They offered a ceasefire on condition that they were given the chance to police their own areas. Joint incident centres were set up to monitor this in the catholic ghettoes. But in fact it did not really work out. What the "truce" exposed instead was that the leadership of the Provisional IRA was still far from being in a position to control the Republican movement as a whole. During the "truce" period, there was first a bloody feud between the old Official IRA - which had theoretically declared a ceasefire long before - and a split from within its ranks which was to result in the setting up of the INLA. Then later on, a second feud between the Provisionals and the Official IRA broke out over conflicting territorial claims. But more importantly, the authority of the Provisional leadership over its own ranks was shown to be lacking. There was a whole string of military operations by IRA units in the border area and even some in Belfast. And then, to crown it all, the truce was brought to a spectacular end by the Derry IRA, who, having always been opposed to the truce, decided to blow up the town's joint incident centre itself! Had the British government been inclined to deal with the IRA in seeking a settlement, the experience of this truce would have dissuaded it. Not being capable of controlling Republican ranks or even its own units, the IRA was clearly not in a position to discipline the catholic ghettoes. Yet, in a situation of intense conflict, where in addition, the loyalists' control over the protestant ghettoes was unchallenged, this would have been the only reason for the British state to consider the possibility of doing business with the Republicans.

Maintaining the status quo

The British government was left with little alternative but to wait for a more favourable moment to re-attempt a settlement.
But since the immediate problems were the instability and the cost of the British occupation, the government decided to try to reduce both. This policy was the so-called "normalisation" and "criminalisation". That is, instead of the army dealing with IRA suspects or staging pre-emptive strikes against paramilitaries, the local police force, the RUC, would start to take over. To this end, the RUC was increased by 1,200 to 6,500 and the army's main battalion was withdrawn. The disturbances would no longer be officially regarded as political, but criminal - and in line with this, political status was no longer granted to new prisoners. Of course what focused the government's mind on this kind of approach - much approved by the Unionist MPs in Westminster - was their loss of a majority in the House of Commons - a phenomenon which affected Major's government as well in the last two years of his office, and similarly resulted in a "putting on ice" of the question of North-South rapprochement, and an apparent hardening of military and policing strategy. In fact Michael Foot actually discussed increasing the number of seats in the House of Commons for Northern Ireland, on condition that the Unionists supported the Labour government. It was agreed that the number of constituencies would be increased from 12 to 17 - which was left to Thatcher to implement in the 1983 election. As a result of this opportunist agreement, however, Labour lost the support of the SDLP's Gerry Fitt. The United Ulster Unionist Council tried again in 1977 to use the protestant workers as a stage army in order to bid for a return to Stormont and Unionist rule. But this time the power workers refused to participate, despite severe intimidation by the UDA to force workers out on strike. As a result the UDA lost face and there was further polarisation and dissension within the protestant ghettoes.
Since this turned into a fiasco ending with the arrest of Ian Paisley, there was no question of the government making any concessions to such direct action. More importantly though, this showed the limits of the demagogic rhetoric used by Paisley and the loyalist paramilitaries. Their failure to impose their diktat on the protestant working class meant that another threat such as that raised by the 1974 UWC strike was no longer on the cards. Paisley's bigoted demagogy still mustered significant electoral support, but it had lost its real teeth - the support in the streets - and this was what counted. At some point - provided the status quo could be maintained - with the British army demonstrating enough repressive activity to deprive the loyalist gangs of their main recruiting arguments, unionist politics would return to the safe ground of the UUP itself, which would also usher back the prospect of power-sharing.

Thatcher's not-so-new tough approach

The Conservative government's first consultative document on Northern Ireland put out in November 1979 ruled out, for the first time for a British government, consideration of the so-called "Irish dimension", though it did state that direct rule was not a satisfactory basis for government in Northern Ireland. But this apparent change in the strategy adopted so far by British governments was purely tactical. Thatcher's problem was to bring the UUP back to the idea of a political settlement. Ruling out the "Irish dimension" was merely a ploy - to lure the UUP into talks. As this failed, further gestures were made. In March 1980, Northern Ireland Secretary Atkins ended the special status of all existing and future terrorist prisoners. And, on the eve of a meeting with the Irish PM Charles Haughey, in May, Thatcher made a statement saying, "The future of the constitutional affairs of Northern Ireland is a matter for the people of Northern Ireland, this government and this parliament and no-one else."
But despite this rhetoric, Thatcher was effectively initiating a new strategy aimed at exactly the same kind of settlement which had been embodied in the Sunningdale Agreement. Only this time, instead of trying to entice the Northern Irish politicians into a power-sharing agreement which would lead on, afterwards, to permanent links with the South, Thatcher chose to approach the problem the other way round. Permanent links would be created between London and Dublin over the heads of the Unionist politicians, and regardless of their complaints. Dublin would start sharing with London the responsibility of guaranteeing political stability in the North. At first this would be through co-operation on security matters aimed at weakening the IRA, thereby softening the suspicions of the Unionists towards the Republic. Then links would have to be built between southern civil servants and their northern counterparts through co-operation on secondary issues via ad-hoc committees. This, again, would be achieved with or without the approval of Northern politicians. On the other hand, those politicians who proved willing could easily be brought into the ad hoc bodies at any point. Strand by strand, a web of functional ties could thus be created between the state machineries of the North and South, so that by the time the Northern politicians were eventually brought to the negotiating table, the "Irish dimension" would be a fait accompli, and its fledgling apparatus already in working order. This was more or less the blueprint discussed by Thatcher and Haughey at a new summit in Dublin on 8 December 1980. And it was this strategy which was adopted in the 1985 Anglo-Irish Agreement at Hillsborough and paved the way for today's peace agreement. The most striking feature of this approach was its treatment of Unionist politicians. Thatcher stated in no uncertain terms that she would not tolerate what her predecessors had tolerated from them.
Thatcher was ruthless in her treatment of the IRA, but she was no less determined not to be dictated to by the Unionists' parochialism. Ironically, while Thatcher often reminded everyone that her party's full name was that of "Conservative and Unionist Party", she was also the first British prime minister to spell out to Northern Ireland's Unionists that they would not be allowed to stand in the way of the interests of British capital, no matter what.

The hunger strike and the aftermath

The process initiated by Thatcher was however suspended by new developments. On 1st March, 1981, a hunger strike was launched in the face of Thatcher's stance against the "prisoner of war" status claimed by the IRA prisoners. The first prisoner to strike, Bobby Sands, the leader of the H-block prisoners, also stood as the Sinn Fein candidate, with SDLP support, in a by-election for the Westminster seat of Fermanagh and South Tyrone and won the seat. For the first time in many years, a candidate standing on a clear Republican ticket, and what's more, an IRA prisoner, had been elected. Thereafter, despite the deaths of Sands and another nine Republican prisoners during the course of almost a whole year, and the obvious growing support in Northern Ireland for the Republicans, the Thatcher government remained publicly resolute against any concessions on political status. Two more hunger strikers were actually elected to the Southern Parliament in the interim. The hunger strikes marked a watershed for Sinn Fein who thereafter changed their official policy of abstention from Northern Ireland elections and Westminster elections on the strength of their obvious success. In fact, for the first time they probably thought they had a real chance of sidelining the SDLP. In June 1983, Sinn Fein obtained 13.4% of the vote compared to the SDLP's 17.9%.
However this was rather due to the fact that they had managed to mobilise behind them a previously silent, non-voting section of the electorate - the poor, from the catholic ghettoes. In fact the SDLP's vote had not gone down much from the 18.3% they had achieved in the previous general election. The emergence of Sinn Fein as a significant electoral force was seen as a major success for the Republicans and a setback for Thatcher. That it was a success for the Republicans is unquestionable. But at what a cost! The courage of the hunger strikers certainly commands nothing but respect and Thatcher's policy amounted to cold-blooded murder. But what can be said of the leadership of an organisation traditionally strict on discipline, which allows or even instructs its members to starve themselves to death - when the Republicans could have chosen alternative ways of carrying on their political struggle! As to being a setback to Thatcher, Sinn Fein's electoral success was undoubtedly so in the short-term. But in the longer term, the establishment of the Republicans as an electoral force opened up, from the point of view of the British government, new possibilities for a future political settlement. Indeed, whatever the official rhetoric about rejecting any deal with "terrorists", no political settlement could be reached over Northern Ireland without securing at some stage the support of the Republicans who had now demonstrated themselves to be the only force who could police the catholic ghettoes and get them to toe the line of such a settlement. In fact, despite the difficulties this involved, imposing the presence of Sinn Fein, on account of its new electoral support, at a negotiating table next to the other political currents, seemed a much easier task than having to impose the "terrorist" IRA itself.

Towards the Anglo-Irish Agreement

In any case, steps towards a resolution of the Irish stalemate resumed soon after the end of the hunger strikes.
In 1982, there was a fresh attempt at resuming the devolution approach which failed almost immediately. But that same year Lord Gowrie of the Northern Ireland Office signalled in no uncertain terms where Thatcher was heading by stating that: "Northern Ireland is extremely expensive on the British taxpayer...if people of Northern Ireland wished to join with the South of Ireland, no British government would resist it for twenty minutes". However at this point a hiatus in Anglo-Irish relations occurred because of the Falklands war - which the Irish Republic refused to endorse. With FitzGerald and Fine Gael back at the helm of the Irish Republic by March 1984, Thatcher and FitzGerald started again to discuss Anglo-Irish co-operation. This led almost exactly one year later to the signing of an Anglo-Irish Agreement between their two governments at Hillsborough Castle near Belfast. This agreement established an Inter-Governmental Conference to deal on a regular basis with political, security and legal matters - including the administration of justice. More importantly it stated: "if it should prove impossible to achieve and sustain devolution on a basis which secures widespread acceptance in Northern Ireland the conference shall be a framework within which the Irish government may, where the interests of the minority community are significantly or especially affected, put forward views on proposals for major legislation and on major policy issues, which are within the purview of the Northern Ireland departments." Thatcher had now put in place a framework which stood above the Northern politicians, being purely an agreement between the two governments of Ireland and Britain. This 1985 agreement therefore laid the basis for future negotiations in which all politicians, provided they agreed to give up their past overbidding, would be able, once more, to discuss a settlement - whether it be power-sharing under devolved government or any other arrangement.
It also gave the middle-class SDLP a chance to increase their political profile after a number of years of being squeezed out of the picture by Sinn Fein. At the same time, of course, it clearly risked a backlash from those in the Unionist camp who had for years staked their political careers on refusing to have anything to do with the Republic. Right on cue, Paisley prayed on the following Sunday for god to "take vengeance on this wicked, treacherous, lying woman...", Margaret Thatcher. To allay the fears of Unionist politicians, FitzGerald stressed the anti-republican objectives of the agreement in a phone-in: "We are determined to defeat the IRA, remove any possible basis they may have of support, North or South... It's towards that end that the Agreement has been signed." Sinn Fein's leader, Gerry Adams, seemed to go along with this, accusing the Agreement of aiming at "creating a climate in which this party can be isolated". At face value, given the security dimension of the agreement, this assessment seemed to be correct. Except that it paved a clear way out for Sinn Fein, a way which in fact they had already embarked on - that of a turn from the Armalite to the ballot box. Scarcely a year later Sinn Fein abandoned their abstentionist position towards the Southern parliament and stood in elections for the Dail, thereby strengthening their electoral strategy. At the same time, the introduction of the so-called shoot-to-kill policy showed how Thatcher intended to use repression to pressurise the Republicans into opting for the ballot box. There remained the question of the unionists' response. Immediately after the signing of the Anglo-Irish Agreement the Unionists had organised a 100,000 strong rally in Belfast denouncing it. The following January, 15 sitting Unionist MPs resigned their Westminster seats in protest, hoping to use the following by-elections as a mini-referendum on the agreement.
However this slightly backfired when the SDLP gained one seat at their expense. Protests by Harland & Wolff workers outside the first session of the Anglo-Irish Conference at Stormont ended in a riot with the RUC, and 38 police were injured. But in fact the Thatcher government continued with its policy despite the Unionists' opposition. They even allowed an enquiry into child abuse at the Kincora children's home (which implicated prominent Unionists) to go ahead, which added fuel to the fire of the Unionists' claims that they were being betrayed by the Thatcher government. By 1987 Haughey came back as prime minister in the South and predictably now endorsed the Anglo-Irish Agreement. In the North, opposition to it remained as strong as ever among loyalists. Thus the UDA, in line with their "anything is better than Dublin" policy, called for Unionist politicians to reach a power-sharing agreement with the SDLP. This did not prevent an escalation on both sides by the paramilitaries in what seemed an orgy of murder and sectarian overbidding. But all this was to no avail, since Anglo-Irish relations continued in regular meetings of the Anglo-Irish conference. In 1990, an All-Ireland Forum was set up under Northern Ireland secretary Peter Brooke, who insisted at the time that Britain "had no selfish, strategic or economic interest in remaining in Northern Ireland". This Forum was to create a permanent framework in which Northern and Southern politicians could discuss issues affecting the island as a whole. Surprisingly the talks got off to a reasonable start with even Paisley's DUP allowing itself to be lured into them. However after two years, when the talks were symbolically transferred from Belfast to Dublin, the DUP sent only observers and soon pulled out, followed by the UUP, which did not want to be outflanked by its rival, and the talks collapsed.
This led to a period in which the loyalist paramilitaries upped the ante, once again, in order to ensure that they would not be left out of any future settlement - as they had been the last time round. For good measure, the UDA, no doubt in the hope that it would be treated in the same way, proclaimed that they would not object to Sinn Fein being invited to all-party talks, provided there was a ceasefire. In fact by the following year, despite the IRA's bombing campaign on the British mainland, Sinn Fein was involved in secret talks with the Major government. By the winter of 1993, the leader of the SDLP, John Hume, was engaged as a public go-between, under the patronage of the Irish government. His job was to come up with proposals, agreed between himself and Sinn Fein's leadership, offering a ceasefire on condition that Sinn Fein be allowed to participate in talks towards a peace settlement. Eventually the remaining obstacles were lifted and the ceasefire was announced on 31st August 1994.

The 1994-1998 "peace process" and its outcome

It took almost another four years, a resumption of terrorist activity and a new ceasefire on the Republican side, many more random killings on the loyalist side, and a change of parliamentary majority in Westminster, before the "peace process" initiated by the IRA ceasefire in 1994 finally came to anything. As so often in the past, the main stumbling block was the resistance of Unionist politicians to allowing Sinn Fein and Dublin a space in the process. And if this process has finally had some results since Labour's advent to power, it is not thanks to Blair's exceptional diplomatic skills, as the spin-doctors claimed, but due to the fact that Unionist MPs could no longer use their votes in Westminster to exercise pressure on the British government. However, such politicking could probably have been overcome, had the Tory governments been determined to reach a conclusion earlier.
But the very length of the process was also part of their tactic. Paramilitary leaders - mainly those on the Republican side, but also, to a lesser extent, on the loyalist side - had to be given enough time to get their troops to line up behind a new policy of compromise. At the same time, a lengthy process ensured that the expectations raised by the original ceasefire among the population of Northern Ireland, particularly among its poorest layers, would be dampened by the time an agreement was reached. The last thing the governments involved wanted, or the politicians for that matter, was to spread the illusion that somehow the ghetto population would be allowed a say in the process. All colluded to ensure that this did not happen and that the settlement remained firmly in the hands of ministers and politicians. In the end, the protracted and convoluted process started by Thatcher back in 1980 produced the so-called "Good Friday agreement". This obviously raises the question: is it going to be yet another failed stage in the British state's attempt at extricating itself from the mess it has created in Northern Ireland? Or is it the beginning of the final stage in the search for a settlement? The details of this agreement have been profusely covered by the media and many parallels have been drawn with past attempted settlements, from the 1974 Sunningdale agreement to the 1985 Anglo-Irish agreement and the Framework Document issued jointly by London and Dublin in February 1995. Trimble's Unionists argued that the new agreement marked a significant retreat from all previous attempts. The SDLP argued that, on the contrary, the North-South dimension had never been so clearly spelt out. Sinn Fein, of course, had the easiest and most convincing argument - it was the first deal ever, in which they were included. But to a large extent the actual details of the agreement are irrelevant. 
Indeed this deal was primarily shaped by the concessions made at the last minute to the main protagonists in the negotiation. It was designed to enable all of them to boast a "victory" in front of their respective constituencies. And in fact, taken to the letter, the deal as it stands would probably be unworkable, either because it is too prone to widely diverging interpretations or because it is even contradictory in some of its aspects. In reality, rather than a deal aimed at being actually implemented, it is yet another framework for negotiation. The institutions that it sets up are more sophisticated than the rather informal ones used in the previous stage of the negotiations, but they are still meant to be transitional structures, with no power for the time being. In every sphere that matters, the governments, and above all the British government, retain total control of the situation. The main feature of this deal is that it is a consummate exercise in arm-twisting. The protagonists were allowed to limit their actual commitment to very little. But in return, they had to put up with Blair's take-it-or-leave-it attitude. They could choose to remain outside of the deal, but the deal would take place regardless. In that, Blair's approach is consistent with that adopted by Thatcher in the early 1980s, and subsequently by Major in the Framework Document - no-one, neither friend nor foe, will be allowed to stand in the way of the scheme that British capital has in store for Northern Ireland. And this scheme has not changed in the least over the past forty years. The future institutions outlined by the agreement may create what could be described as an "Anglo-Irish Union" - in other words, for the first time since partition, the establishment of apparatuses linking the Irish and British state machineries. This allowed the UUP leader Trimble to boast of having "won" a reinforcement of the Union with Britain.
And it is not the least irony in this agreement, that Sinn Fein should have signed up to a deal which strengthens London's political hold over Dublin! But whatever the noises made by the politicians, this "Anglo-Irish Union" is just a means to an end in the strategy of British imperialism - a means to ensure that the end will be effectively achieved. Of this, there can be no doubt. Regardless of the wording of the deal, the desires of the protagonists and the transitional forms that the process may take in the coming months and years, British capital intends to dump Northern Ireland into the orbit of the Irish Republic, once and for all. Moreover the determination of the British state to achieve this goal, and its sense of urgency, can only be much greater today than it has ever been before. British imperialism's relatively risky strategy in its rivalry with its European competitors over the shaping of the future euro zone, cannot tolerate political instability in its Irish backyard. What is at stake for British capital over the coming years is its share of financial and trading markets covering hundreds of millions, if not billions, of people. It will not allow what it sees as the parochial concerns of the 1.5 million inhabitants of Northern Ireland, or even the five million of Ireland as a whole for that matter, to stand in its way. And Blair can be trusted to treat ruthlessly those who do not comply, as his masters in the City would expect.

The decisive role of the balance of forces in the ghettoes

What has changed, also, particularly since the 1970s, and allows the British state to take a much bolder attitude, is the balance of forces in Northern Ireland itself. Even if Paisley's gesticulations still allow him to attract a sizeable chunk of the protestant vote, his demagogy is no longer a threat, at least not for the British government.
Nor can the survival of a rump of active paramilitary groups, on both sides of the sectarian divide, derail London's schemes - at least not as long as the relationship of forces in the poor ghettoes remains what it is today. Indeed, it was never the intrinsic strength of the unionist or nationalist currents which allowed them to block London's attempts to settle the "Northern Ireland question". It was primarily the potential for uncontrolled social explosions which existed in the working class ghettoes since the late 1960s - a potential which could have been triggered easily, and even unwittingly, by the demagogy of the politicians. Likewise, this explosive potential was reflected in the ghettoes by a smouldering frustration and anger which lured the youth towards the paramilitary groups, thereby allowing these groups to maintain their profile and hold on the ghettoes despite British repression. Thirty years on, however, this explosive potential has receded. Not that the objective conditions for a social explosion in the ghettoes are less today than they were thirty years ago. On the contrary, they are probably far greater, in that, despite the British subsidy, the economic crisis has continued to take a heavy toll in Northern Ireland, on both sides of the divide. And in that sense that threat is as present today as it ever was. But what is no longer there today compared to thirty years ago, is the dynamism and confidence of an entire layer of the poor population, who had just discovered its collective strength in street confrontations with the repressive machinery of the state. This dynamism and confidence could of course come back very quickly, should the ghetto population take to the streets again in defence of their own interests. 
But for the time being, what is left of the militant generation which came out of the explosion of the late 1960s, is a layer of ageing activists, whose outlook was shaped by the Republican and loyalist military machines and narrow nationalism. And in so far as they stick to the perspectives of the Republicans and loyalists, these activists see their role only as ensuring that their respective ghettoes will act as disciplined footsoldiers for their leaderships, not as encouraging and building on the conscious aspirations of the poor masses. These activists have been incapable of passing on to the younger generations the tradition of the mass movements of the 1960s, because the explosive nature of such movements is precisely what they now fear most. This present situation in the ghettoes is the main card in the hand that the British state is currently playing. If Blair feels confident enough to go beyond the old strategy of maintaining an uneasy status quo, if he is able, in addition, to force into his political settlement the main politicians and paramilitary forces on the basis of his own agenda, it is not due to any "special peacemaking skills". It is, of course, because it is more difficult today for the various protagonists to use the ghettoes as a lever in their rivalries and overbidding. But it is also because, more importantly, the British state no longer feels the pressure and social explosive potential of the poor masses.

Beyond the "peace agreement" - for a return to class politics

What the coming settlement has in store for the Northern Ireland working class is now clear beyond doubt. The timescale of the future changes will depend on many unknown factors, but not their general direction.
Selecting and shaping a reliable local state machinery capable of taking over the running of Northern Ireland in cooperation with the southern state - which is the real objective of the coming stage of the settlement process - will involve bribing the middle-class and petty-bourgeois layers who will be entrusted with the future political stability of the province, both those who were already part of the establishment and those who went into dissidence to get their own share of the cake. Someone will have to foot the bill for these bribes. In the short term, a combination of European funds and transitional British subsidy may do the trick, but for how long? And in any case, the settlement process will also involve creating future sources of income for the privileged classes, ready to bridge the gap when the flow of external subsidies dries up. Already politicians and ministers are lining up to hail the prospect of foreign investment and making moralistic speeches on the need for workers to be flexible and cut labour costs. Indeed, it is the working class of Northern Ireland who is expected to tighten its belt in order to cater for the comfortable lifestyle of the province's future elite. And what does this mean in the context of Northern Ireland, where private sector earnings are 20% below British average and unemployment far higher than anywhere in Britain? It can only mean a drastic slide in the standard of living of the working class as a whole, and particularly its poorest layers, toward the lowest levels of the European scale, if not further down toward the Third World. The Blair-Trimble-Adams "peace" can only mean more drastic exploitation for the working class, and the intensification of the class war waged by the capitalists against the working class. Nor does this "peace" necessarily mean the end of the sectarian divide. The built-in anti-discrimination, pro-Irish language and policing reform dimensions of the deal are no guarantees in this respect. 
Tokenism against a background of scarce jobs very often backfires on those whose interests are supposed to be protected. And no doubt, politicians will seek to make political capital out of the resulting frustrations, thereby whipping up once again sectarian tensions and prejudices. All the more so as the Northern Ireland Assembly itself will have a built-in sectarian dimension, by giving a greater say to those political currents claiming to represent one side of the divide or the other. This sectarian divide may have its roots in the long history of Britain's oppressive rule over Ireland, but the main factor which allows it still to be alive today is neither the presence of British troops in Northern Ireland, nor even the survival of antiquated religious bigotry. It is the degrading social conditions imposed on the majority of the working class. It is this chronic deprivation of the working class ghettoes which has allowed one section of the working class to be set up against the other - by convincing the catholic minority that the slightly better conditions enjoyed on average by protestant workers made them accomplices to the exploiters, and, at the same time, entrenching among the protestant majority the idea that their conditions were somehow threatened by catholic workers. Above all, against this background of deprivation, the decisive factor in perpetuating the sectarian divide is the way in which the political consciousness of workers is still being shaped, from a very early age, by political forces which are themselves remnants of the past, feeding on the sectarian divide - whether it be the Republican currents among catholics or the loyalist groups among protestants. Today the surviving predominance of these currents and ideas in the ranks of the working class of Northern Ireland is the main obstacle to its ability to build up the unity it needs to defend effectively its class interests against all exploiters, orange or green.
The tragedy of the working class of Northern Ireland, over the past thirty years, has been that its fighting capacity has been diverted from the defence of its class interests and consciously obscured by sectarian and nationalist illusions. In this respect, one positive aspect of the current political process, and probably the only one, will have been to expose the dead end of Irish nationalism and loyalism. One can only hope that the sinister irony of seeing yesterday's paramilitaries signing up to an agreement which strengthens the hand of their alleged "mortal" enemies, at the expense of the ghettoes on whose sacrifices these groups have built up their political clout, will open the eyes of their deceived supporters. Today may be the first decisive opportunity, since the social explosion of the late 1960s, for class-based politics to reclaim its rightful place in the ranks of the Northern Ireland working class - and for a wholly new fighting political tradition to be rebuilt, this time based on working class consciousness, organisation and democracy. In any case, this is the only road which can lead the Irish working class toward a future free from the anachronistic divisions and deadly antagonisms left over from Britain's colonial oppression, a future which it can only build by taking its place among the ranks of the international proletariat.
I honestly never thought much about how trauma impacts my students' learning. Sure, I knew that certain events in my students' lives impacted them, but I never truly understood the degree to which they impact student learning. I knew that little Eric may have it rough at home. And Caroline spent the weekend with her mom so her day will be rough today. But what trauma did they face? What effect will that trauma have on their ability to learn? I never thought of these things.

Trauma is a response to a negative external event or series of events which surpasses the child's ordinary coping skills. It comes in many forms and includes experiences such as maltreatment, witnessing violence, or the loss of a loved one. Traumatic experiences can impact brain development and behavior inside and outside of the classroom. It is estimated that one half to two-thirds of children experience trauma. The trauma doesn't have to be directed at the child and it doesn't have to be violent in nature. The child could witness an event, be threatened, become injured, or even experience losing a loved one.

Trauma can result in a lower GPA, higher rate of school absences, increased risk for drop-out, more suspensions and expulsions, and decreased reading ability. Single exposure to traumatic events may cause jumpiness, intrusive thoughts, interrupted sleep and nightmares, anger and moodiness, and/or social withdrawal—any of which can interfere with concentration and memory. Chronic exposure to traumatic events, especially during a child's early years, can: adversely affect attention, memory, and cognition; reduce a child's ability to focus, organize, and process information; interfere with effective problem solving and/or planning; and result in overwhelming feelings of frustration and anxiety.
- Physical symptoms like headaches and stomachaches
- Poor control of emotions
- Inconsistent academic performance
- Unpredictable and/or impulsive behavior
- Thinking others are violating their personal space, i.e., "What are you looking at?"
- Blowing up when being corrected or told what to do by an authority figure
- Fighting when criticized or teased by others

As educators, what can we do? We can work on enacting change within our schools. You may not know it, but trauma-informed approaches are already at work within medical professions and judicial systems all over the country and even the world. The heart of these approaches is a belief that students' actions are a direct result of their experiences. The question we should ask is not "what's wrong with you," but rather "what happened to you?" With sensitivity to students' past and current experiences of trauma, we can work to break the cycle of trauma, prevent re-traumatization, and engage a child in learning and finding success in school.

So what are some easy and practical ways you can practice trauma-informed interventions in your classroom?

- Give children choices. Often traumatic events involve loss of control and/or chaos, so you can help children feel safe by providing them with some choices or control when appropriate.
- Increase the level of support and encouragement given to the traumatized child.
- Set clear, firm limits for inappropriate behavior and develop logical—rather than punitive—consequences.
- Recognize that behavioral problems may be transient and related to trauma. Remember that even the most disruptive behaviors can be driven by trauma-related anxiety.
- Provide a safe place for the child to talk about what happened. Set aside a designated time and place for sharing to help the child know it is okay to talk about what happened.
- Give simple and realistic answers to the child's questions about traumatic events.
- Clarify distortions and misconceptions.
- If it isn't an appropriate time, be sure to give the child a time and place to talk and ask questions.
- Be sensitive to the cues in the environment that may cause a reaction in the traumatized child. For example, victims of natural storm-related disasters might react very badly to threatening weather or storm warnings.
- Children may increase problem behaviors near an anniversary of a traumatic event.
- Anticipate difficult times and provide additional support. Many kinds of situations may be reminders.
- If you are able to identify reminders, you can help by preparing the child for the situation. For instance, for the child who doesn't like being alone, provide a partner to accompany him or her to the restroom.
- Warn children if you will be doing something out of the ordinary, such as turning off the lights or making a sudden loud noise.
- Be aware of other children's reactions to the traumatized child and to the information they share. Protect the traumatized child from peers' curiosity and protect classmates from the details of a child's trauma.
- Understand that children cope by re-enacting trauma through play or through their interactions with others. Resist their efforts to draw you into a negative repetition of the trauma. For instance, some children will provoke teachers in order to replay abusive situations at home.
- Although not all children have religious beliefs, be attentive if the child experiences severe feelings of anger, guilt, shame, or punishment attributed to a higher power. Do not engage in theological discussion. Rather, refer the child to appropriate support.

What do you do in your own classroom to support your students who have been through a trauma? I'd love to know!
The Land of Zarahemla is located in the Grijalva River valley in the state of Chiapas in southern Mexico, Figures 1 and 2. The valley is surrounded on the south, west and north by mountainous wildernesses as described in Alma 22:27. The dark area in the center is an artificial lake created by construction of a hydroelectric dam at the mouth of the Angostura Canyon. Before the dam was built, the Grijalva River ran through the valley in a roughly northwesterly direction with numerous twists and turns (Archeological Exploration of the Upper Grijalva River, Chiapas, Mexico by Gareth W. Lowe in Papers of the New World Archeological Foundation), as shown in Figure 3. Its source consisted of three tributaries that originate in the southern wilderness, part of the narrow strip of wilderness mentioned in Alma 22:27. All three tributaries originate within a 45 mile radius and run from east to west at the source, Alma 22:27, see Figure 4.

Many of those who attempt to determine the geographic location of the Book of Mormon place undue attention on the "narrow neck," which is mentioned only three times in the Book of Mormon, with little geographic information that would identify it with a specific location on the American continents, as shown by the myriad of locations proposed for its identity. The River Sidon, on the other hand, is mentioned over 20 times and in at least four different geographic contexts. Each of these contexts contains geographic information which should make it possible to find a river in the Americas that can be uniquely identified with the River Sidon.

The description of the Nephite and Lamanite lands in Alma 22:27-34 identifies 3 specific geographic attributes relative to the River Sidon:

1. Its head, or source, is located in a narrow strip of wilderness.
2. The head runs from east to west.
3. The narrow strip of wilderness is located south of the Land of Zarahemla and runs from an east sea to a west sea from the east to the west.
If one accepts that the Book of Mormon is translated correctly from the plates given to Joseph Smith by the Angel Moroni, then the text of the book must be accepted as the most authoritative source for information relative to the geography of the Book of Mormon. Using the three dimensional satellite maps incorporated into the computer program "EARTHA Global Explorer DVD" by Delorme, a thorough search of the geography of America in 3D can be made. Such a search results in one and only one location that fits the geographic restraints imposed by the text of Alma 22:27 for the River Sidon. This is as described above for the Grijalva River, indicating that the Grijalva is the same river described as the Sidon in the Book of Mormon, as has been proposed by many proponents of Book of Mormon geographies.

A Correlation of the Sidon River and the Lands of Manti and Zarahemla with the Southern End of the Rio Grijalva (San Miguel), John L. Hilton and Janet F. Hilton, Provo, Utah: FARMS, 1992, pp. 142-162.

Although these authors correctly identify the head of the Grijalva river with the head of the river Sidon, they present their data so inconsistently that it is no wonder that no one else has taken them seriously. I found this reference recently, several years after I had developed my own conclusions, and was quite surprised that I had not seen it quoted in any of my reading. One glaring example of their inconsistency is their map of Book of Mormon lands, in which they show an east sea and a west sea located on a north-south axis according to the compass star in the figure and then use compass directions of east and west to describe the headwaters of the Grijalva river as it is correctly found on subsequent maps in their article. This same figure shows a narrow strip of wilderness on a north-south axis between the east and west seas which does not correlate with any range of mountains in or around the Grijalva river.
[Figure captions: satellite images courtesy of NASA; inset area; original course of the Grijalva River; sources of the Grijalva, based on Delorme's maps; The Land of Zarahemla]
Distribution of overwintering Calanus in the North Norwegian Sea.

During winter 2003 and 2004, zooplankton and hydrographic data were collected in the northern parts of the Norwegian Sea (68–72º N, 8–17º E) west of the Norwegian shelf break at depths down to 1800 m. The results cover both inter- and intra-annual changes of hydrography and distribution of Calanus spp. For the whole survey area, average seawater temperature down to 1000 m was higher in 2004 than in the same period in 2003. For the upper 500 m the difference was ca. 1º C. Calanus finmarchicus dominated at ca. 75% of the total copepod abundance. Typical abundance of C. finmarchicus in the survey area was 30 000–40 000 m⁻². C. hyperboreus was found deeper than C. finmarchicus, while other copepods were found at the depth of C. finmarchicus or shallower. From January to February 2004, the peak of abundance of C. finmarchicus and C. hyperboreus shifted approximately 300 m upwards, indicating that ascent from overwintering depth took place at a speed of 10 m d⁻¹ during this period. In general, high abundance of copepods was found adjacent to the shelf slope while oceanic areas had low and intermediate abundance. In the southern part of the survey area, the location of high and low copepod abundance shifted both between and within years. In the northern part of the survey area, where the shelf slope is less steep, copepods were present at intermediate and high abundance during all surveys.

Publisher: European Geosciences Union
Series: Ocean Science 2 (2006), pp. 87–96
The Human Genome Project (HGP), according to the National Human Genome Research Institute, was the international, collaborative research program formed to complete the mapping and understanding of all the genes of human beings. All our genes together are known as our "genome." Our hereditary material is the double helix of deoxyribonucleic acid (DNA), which contains all human genes. DNA, in turn, is made up of four chemical bases, pairs of which form the "rungs" of the twisted, ladder-shaped DNA molecules. All genes are made up of stretches of these four bases, arranged in different ways and in different lengths. During the HGP, researchers deciphered the human genome in three major ways: determining the order, or "sequence," of all the bases in our genome's DNA; making maps that show the locations of genes for major sections of all our chromosomes; and producing what are called "linkage maps" through which inherited traits (such as those for genetic disease) can be tracked over generations. The HGP revealed that there are approximately 25,000 human genes. The completed human sequence can now identify the location of each gene. The result of the HGP has given the world a resource of detailed information about the structure, organization, and function of the complete set of human genes. This information can be thought of as the basic set of inheritable "instructions" for the development and function of a human being. The International Human Genome Sequencing Consortium completed and published the full sequence in April 2003.
Leslie K. John – Marvin Bower associate professor of business administration at Harvard Business School: “…Technology has advanced far beyond the browser cookies and retargeting that allow ads to follow us around the internet. Smartphones now track our physical location and proximity to other people — and, as researchers recently discovered, can even do so when we turn off location services. We can disable the tracking on our web browsers, but our digital fingerprints can still be connected across devices, enabling our identities to be sleuthed out. Home assistants like Alexa listen to our conversations and, when activated, record what we’re saying. A growing range of everyday things — from Barbie dolls to medical devices — connect to the internet and transmit information about our movements, our behavior, our preferences, and even our health. A dominant web business model today is to amass as much data on individuals as possible and then use it or sell it — to target or persuade, reward or penalize. The internet has become a surveillance economy. What’s more, the rise of data science has made the information collected much more powerful, allowing companies to build remarkably detailed profiles of individuals. Machine learning and artificial intelligence can make eerily accurate predictions about people using seemingly random data. Companies can use data analysis to deduce someone’s political affiliation or sexuality or even who has had a one-night stand. As new technologies such as facial recognition software and home DNA testing are added to the tool kit, the surveillance done by businesses may soon surpass that of the 20th century’s most invasive security states. The obvious question is, How could consumers let this happen? As a behavioral scientist, I study how people sometimes act against their own interests. One issue is that “informed consent” — the principle companies use as permission to operate in this economy — is something of a charade. 
Most consumers are either unaware of the personal information they share online or, quite understandably, unable to determine the cost of sharing it — if not both…”
Topic review (newest first)

Follow this link and learn how to complete the square.

Ahhh... so because -c/a has "no value" or is just equal to 1, we can manipulate them how we want?

Well you could multiply by 7/7 or xyz/xyz or 4pq/4pq

I get that we're supposed to multiply -c/a by 4a/4a, but I'm not seeing where that 4a/4a is coming from.

Why not just multiply the top and the bottom of the fraction by 4 like Bob said?

Ah I'm sorry bob, but I don't think you're getting what I'm trying to say. I think I have a better way to explain it, check this out.

Ok, don't worry. I'll try to sort that out. It is an algebra misunderstanding. When I have trouble with algebra, I go back to some numbers and try the same thing. Let's say that c = 3, b = 12 and a = 5. Now, if you have two fractions to add together, you have to make the denominators the same. I want both /100. So multiply the fraction by 4 x 5. Now with letters

Ah, so let me try something here. You aren't multiplying by 4a but rather by 4a/4a, which is the same as multiplying by 1. I actually watched a video that explained it that way. I'm still confused on one thing though, I want to know where the 4a that we multiply -c/a comes from. That's what I'm trying to figure out. The 4a we multiply against -c/a can't just come from nowhere right? Nothing in math can just come from "thin air" right? I mean we can't just say "We're going to multiply -c/a by 4a just because", that 4a has to come from some process and that's what I want to know.

OK, so carrying on from there: Then put all over this denominator and re-arrange. Then you can square root everything. Only one +/- sign is needed in the final expression. Take the b/2a term across to the right hand side, all over the same denominator. Hopefully that sorts it out for you.

Alright let me start from the beginning.
What I do here first is I move the "loose" number over to the right. Now we have

ax² + bx = -c

Now I take the coefficient on the x² term and divide it through the entire equation.

x² + (b/a)x = -c/a

Now my method tells me that I take half of the middle term, square it and then add it to both sides. We end up with this.

x² + (b/a)x + b²/4a² = b²/4a² - c/a

This is where I got stuck. I don't know how to get the common denominator of 4a for -c/a. I hope I wrote everything out correctly as I was doing this through memory, and keeping track of exponents and what not can be a little tough when typing it out lol.

Thanks guys. I was learning to do this through a slightly different method however, and even though this method looks a bit shorter I'd prefer to stick with what I'm already familiar with. If you could explain how -c/a gets the common denominator of 4a in my version of the problem bob that'd be great.
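Since the whole thread hinges on where that 4a comes from, here is the standard completing-the-square derivation written out in full. The 4a/4a is simply the factor that puts -c/a over the common denominator 4a², the same way 3/5 becomes 60/100 after multiplying by 20/20 in the numeric example above (a = 5, so 4a = 20):

```latex
\begin{align*}
ax^2 + bx + c &= 0 \\
ax^2 + bx &= -c \\
x^2 + \frac{b}{a}x &= -\frac{c}{a} \\
x^2 + \frac{b}{a}x + \frac{b^2}{4a^2}
  &= \frac{b^2}{4a^2} - \frac{c}{a}
  && \text{add } \left(\tfrac{b}{2a}\right)^2 \text{ to both sides} \\
\left(x + \frac{b}{2a}\right)^2
  &= \frac{b^2}{4a^2} - \frac{c}{a}\cdot\frac{4a}{4a}
  && \text{multiply } \tfrac{c}{a} \text{ by } \tfrac{4a}{4a} = 1 \\
\left(x + \frac{b}{2a}\right)^2
  &= \frac{b^2 - 4ac}{4a^2} \\
x + \frac{b}{2a}
  &= \pm\frac{\sqrt{b^2 - 4ac}}{2a} \\
x &= \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}
\end{align*}
```

So the 4a isn't conjured out of thin air: once the left side is a perfect square with denominator 4a², the right side has to be written over 4a² as well, and 4a/4a is exactly the multiplier that gets -c/a there.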
Plot 3, Lot 4 Mount Pleasant Cemetery, Toronto When 15-year-old James Franceschini arrived in Canada from his hometown of Pescara, Italy in 1906, the young man spoke virtually no English and was totally penniless. Befriended by a Toronto city policeman who found Franceschini a place to sleep that first night in his adopted city, the youngster found himself a job the very next day. Franceschini had soon earned enough to buy a horse and wagon and eventually began his own small excavation company. Time passed, and the young man could soon afford to add a steam shovel to his equipment list. He suffered a major financial setback in 1916, but was able to recover and within a decade was the country’s largest road contractor. One of his many enterprises was called Dufferin Construction. In 1939, Canada went to war and Franceschini did his part by establishing the Dufferin Shipbuilding Company at the foot of Spadina Avenue. Here he contracted to build minesweepers for the government, but suddenly, and as it turned out, without proof, James Franceschini was arrested, fingerprinted and consigned to an internment camp as an enemy alien. Investigations subsequently proved Franceschini’s innocence, but a full year went by before he was granted a pardon. But due to government ineptitude (as a result of what was later proven to be blatant racism) Franceschini’s release was held up for another five months. Finally, a physician’s report on the deteriorating health of the 52-year-old Canadian citizen gained him his release. Once free, Franceschini purchased an estate in the Laurentians where he died on September 16, 1960. Mount Pleasant Cemetery: An Illustrated Guide Second Edition Revised and Expanded
When we speak about "THE CONFEDERATE FLAG" which flag are we referring to? To date there have been over 2,200 different Confederate Flags identified that were used by the Southern States during the War of Northern Aggression, and some are still in use today. The State Flag of Virginia was used during the War and remains in use now. Slight variations of the South Carolina, North Carolina and Texas Flags are all "Confederate Flags" and are still in use today. What about the flag of West Virginia? That State and Flag were born during the War (June 1863), in which over half of the state had southern sympathies and military units.

What does it mean when the Crescent Moon is backwards (upside down) on a variant of the South Carolina Flag? It indicates that the State is in a defensive mode.

For several years now the South's Heritage has been under attack. Certain organizations and individuals have seen fit to blame a piece of cloth for everything wrong in the United States. In the State of South Carolina the SC House and Senate decided in 1962 to erect the Confederate Battle Flag on the State House. It was to commemorate the 100th anniversary of the War Between the States. It was to fly from 1962 to 1965; however, the governing bodies never included the end date in the Bill. The NAACP called for its removal as early as 1994. Our Army of Northern Virginia Flag was removed from the State House in 2000 and moved to the monument on the State House grounds to appease the NAACP. The NAACP found this unsatisfactory and continued their protest.

KEEP HER FLYING – Heritage Not Hate

If she doesn't wave over the State House or on State House grounds, we as Southerners can still hoist her and fly her at our homes in honor of our ancestors.

Confederate Flag Designer William Porcher Miles on the Heraldry of the Battle Flag

Richmond, August 27, 1861.

Gen. G. T.
Beauregard, Fairfax Court house, Virginia: Dear General, I received your letter concerning the flag yesterday, and cordially concur in all that you say. Although I was chairman of the ‘Flag Committee,’ who reported the present flag, it was not my individual choice. I urged upon the committee a flag of this sort. [Design sketched.] This is very rough, the proportions are bad. [Design of Confederate battle-flag as it is.] The above is better. The ground red, the cross blue (edged with white), stars white. This was my favorite. The three colors of red, white, and blue were preserved in it. It avoided the religious objection about the cross (from the Jews and many Protestant sects), because it did not stand out so conspicuously as if the cross had been placed upright thus. [Design sketched.] Besides, in the form I proposed, the cross was more heraldic than ecclesiastical, it being the ‘saltire’ of heraldry, and significant of strength and progress (from the Latin salto, to leap). The stars ought always to be white, or argent, because they are then blazoned ‘proper’ (or natural color). Stars, too, show better on an azure field than any other. Blue stars on a white field would not be handsome or appropriate. The ‘white edge’ (as I term it) to the blue is partly a necessity to prevent what is called ‘false blazoning,’ or a solecism in heraldry, viz., blazoning color on color, or metal on metal. It would not do to put a blue cross, therefore, on a red field. Hence the white, being metal argent, is put on the red, and the blue put on the white. The introduction of white between the blue and red, adds also much to the brilliancy of the colors, and brings them out in strong relief. But I am boring you with my pet hobby in the matter of the flag. I wish sincerely that Congress would change the present one. Your reasons are conclusive in my mind. 
But I fear it is just as hard now as it was at Montgomery to tear people away entirely from the desire to appropriate some reminiscence of the ‘old flag.’ We are now so close to the end of the session that even if we could command votes (upon a fair hearing), I greatly fear we cannot get such a hearing. Some think the provisional Congress ought to leave the matter to the permanent. This might, then, be but a provisional flag. Yet, as you truly say, after a few more victories, ‘association’ will come to the aid of the present flag, and then it will be more difficult than ever to effect a change. I fear nothing can be done; but I will try. I will, as soon as I can, urge the matter of the badges. The President is too sick to be seen at present by any one. Very respectfully yours, Wm. Porcher Miles.

Transcribed by T. Lloyd Benson, Department of History, Furman University, from Peleg D. Harrison, The Stars and Stripes and other American Flags (Boston: Little, Brown and Company, 1908), 337-38. Old style block quotation marks removed.

The confusion caused by the similarity in the flags of the Union and the Confederacy was of great concern to Confederate General P.G.T. Beauregard after the first Battle of Manassas. He suggested that the Confederate National Flag be changed to something completely different, in order to avoid confusion in battles in the future. However, this idea was rejected by the Confederate government. Beauregard then suggested that there should be two flags: one, the National flag, and the second a battle flag, with the battle flag being completely different from the United States flag.

No Confederate flag was ever flown on a slave ship. English, Dutch, Portuguese, and New England States ships were used in the slave trade.

The first National flag design, known as the “Stars and Bars,” looked too much like the Union Flag and caused confusion in commanding armies in maneuvers.
The second one looked too much like a surrender flag when there was no breeze and it was hanging limply. A vertical red bar was added to the third and final version of the Confederate national flag.

This is the first Army of Northern Virginia Flag used by General Robert E. Lee as his Headquarters Flag. (Special Note: Gen. Robert E. Lee never owned slaves and released the slaves his future wife-to-be owned prior to the War.)

This is the Confederate Battle Flag for the Army of Northern Virginia. It was first used in December 1861 and remained in use until the end of the war.

This is the Confederate Battle Flag for the Army of Tennessee, and was the 2nd Confederate Naval Jack. It was in use from 1863 to 1865.

This is the Bonnie Blue Confederate Flag used at the beginning of the War Between the States, aka the Republic of West Florida Flag. The flag was first used by the Republic of West Florida, which broke away from Spanish West Florida in September 1810. It was used by Mississippi when she seceded from the Union in 1861.

This is the current South Carolina Flag, a variant of which was adopted January 28, 1861. That variant involved the Crescent facing the opposite direction. (In Defense)

“Big Red” (Spirit Flag) adopted by the Citadel in Charleston, SC on January 9, 1861. Used on Morris Island when the Cadets from the Citadel fired on Fort Sumter.

This is the South Carolina Sovereignty Flag. It was never recognized as an official flag of South Carolina, but flew briefly in December 1860 following South Carolina’s Secession.

This is the South Carolina Secession Flag. It was flown over the Charleston Custom House the day following South Carolina’s Secession. It spread to other cities in South Carolina, but had a brief life. It was subsequently flown on the C.S.S. Dixie.
Source: http://horryroughandreadyscamp1026.com/history/history-of-flags/the-flag/
The objective: The need for a reliable method of encryption has persisted throughout history; encryption applications range from military and intelligence uses to daily commercial activities. As technology has improved to allow for easier and better encryption and transmission, so has it allowed improvements in interception and message processing. Codes have become more advanced, progressing from simple character-replacement ciphers to today's algorithms of large pseudoprimes, exponents, and modular congruences. But the concept has remained simple; it is desirable to be able to send information from one point to another without anyone being able to understand it in the middle. Ideally, the encrypted information should contain no shadows of the original message, which could be identified by careful observation. That is, the ideal code would encrypt a message so that it would be indistinguishable from random noise during transmission. The aim of this project is to determine just how random the messages encrypted by various algorithms really are by comparing large empirical tests to an ideal, random set. The internal complexity, or randomness, of each message was tested using Shannon's measure of information entropy. A Chi-square test was then used to determine how close to the ideal of random noise the encrypted form comes. Data were encrypted using the DES, 3DES, and AES strong encryption methods. While all three encryption algorithms effectively randomized the set with respect to one-character strings, only AES performed well at higher orders of entropy and approximated the random condition well in all tests. 3DES outperformed DES on all tests. The results strongly indicate that AES is more secure than other algorithms tested. It is highly unlikely that any cryptanalytic attack could be developed for use against AES-encrypted messages which takes advantage of internal patterning. 
Also, though no such attack has yet been developed, it is likely that one exists for DES and 3DES systems. Additionally, results demonstrate that it is possible to develop a secure communication system using AES in which it would be impossible for an adversary eavesdropping on the communication channel to determine whether a message was being transmitted or simply random data. This project is designed to determine the effectiveness of various encryption algorithms at increasing the entropy of, or randomizing, sets of several internal complexities. Science Fair Project by Joshua A. Kroll
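As a sketch of the measures described above — not the author's actual code — single-byte Shannon entropy and a chi-square statistic against the uniform distribution can be computed as follows. The sample inputs are illustrative stand-ins for patterned plaintext and ideal ciphertext:

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """First-order Shannon entropy in bits per byte (8.0 for uniform random data)."""
    n = len(data)
    return sum((c / n) * math.log2(n / c) for c in Counter(data).values())

def chi_square_uniform(data: bytes) -> float:
    """Chi-square statistic of the byte histogram against a uniform expectation."""
    counts = Counter(data)
    expected = len(data) / 256
    return sum((counts.get(b, 0) - expected) ** 2 / expected for b in range(256))

patterned = b"AAAA" * 256        # highly redundant "plaintext"
uniform = bytes(range(256)) * 4  # perfectly flat byte histogram

print(shannon_entropy(patterned))   # 0.0 — no internal randomness
print(shannon_entropy(uniform))     # 8.0 — histogram indistinguishable from noise
print(chi_square_uniform(uniform))  # 0.0 — matches the uniform expectation exactly
```

Good ciphertext should drive the entropy toward 8 bits per byte and keep the chi-square statistic near its expected value for random data; the project extends this idea to higher-order (multi-character) strings.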
Source: http://www.sciencefairprojects.co.in/Software-Projects/Security-Through-Chaos.php
Attack of the Giant Black Wasps!

As the weather warms in Oakdale and the children flood the parks, you may notice a large black wasp around your home with a long, thin “thread-waist” between the thorax and abdomen. Though frightening to behold, they generally will not harm you unless you pursue them and grip them in your cupped hands. They’re Mud Daubers, and they rarely sting people.

Mud Daubers are solitary insects and get their name from the materials with which they build their nests. Mud Daubers collect chunks of mud which they use to construct nests for their young. Though the shape of their nests can vary from one species to the next (mostly varying from horizontal construction to vertical or organ pipes), they’re commonly made into small, cylindrical “tubes” where the female lays her eggs. She then deposits a paralyzed spider or other insect which the larvae will feed on once they emerge from their eggs. Mud Daubers will commonly create multiple nests side by side (4 inches long and 2 inches wide), resembling a pipe organ. This is why they are sometimes called “organ pipe wasps.”

Long Winter Rest

Mud Daubers spend the winter in the pupal stage, between larva and adult. During this time, they are enclosed in a cocoon, and emerge in the spring as adults. They typically feed on plant nectar and honeydew, as well as spiders and other insects. Since they are not aggressive and rarely sting humans, they tend not to be as much of a physical nuisance as other insects and wasps; however, their nests can be a bother, and many people are annoyed by their presence when trying to enjoy the porch or deck.

Rove’s Got the Answer

Rove Pest Control’s Mud Dauber Service is the best way to protect your home from Mud Daubers and other species of hornets and wasps. Oakdale certainly experiences the full extent of all four seasons, and Rove Mud Dauber Specialists are well trained to know how to approach control for any pest during any of the seasons.
Contact your Rove specialist today!
Source: https://www.rovepestcontrol.com/oakdale-pest-control/mud-daubers/
Do one of the following:

- Change the page-number format, such as 1, i, or a. Note: If your document contains multiple chapters or sections, you may want to restart page numbering with 1 for each section.
- Insert "Page X of Y" page numbers. If you want the page numbers at the bottom of the page, click Switch Between Header and Footer on the Header and Footer toolbar (toolbar: A bar with buttons and options that you use to carry out commands. To display a toolbar, press ALT and then SHIFT+F10.), and then, in the footer area, click where you want to place the page numbers.
- Change the font and size of page numbers. If you inserted page numbers by using the Page Numbers command on the Insert menu, make sure to select the page number inside its frame (frame: A container that you can resize and position anywhere on the page. To position text or graphics that contain comments, footnotes, endnotes, or certain fields, you must use a frame instead of a text box.). A cross-hatched frame border appears around the page number.
Source: http://office.microsoft.com/en-us/word-help/format-page-numbers-HP005230577.aspx
Light scattering by selected zooplankton from the Gulf of Aqaba.
Publication/Journal/Series: Journal of Experimental Biology
Publisher: The Company of Biologists Ltd
Light scattering by zooplankton was investigated as a major factor undermining transparency camouflage in these pelagic animals. Zooplankton of differing transparencies – including the hyperiid amphipod Anchylomera blossevillei, an unknown gammarid amphipod species, the brine shrimp Artemia salina, the euphausiid shrimp Euphausia diomedeae, the isopod Gnathia sp., the copepods Pontella karachiensis, Rhincalanus sp. and Sapphirina sp., the chaetognath Sagitta elegans and an enteropneust tornaria larva – were illuminated dorsally with white light (400–700 nm). Spectral measurements of direct transmittance as well as relative scattered radiances at angles of 30°, 90°, 150° and 180° from the light source were taken. The animals sampled had transparencies between 1.5% and 75%. For all species, the highest recorded relative scattered radiance was at 30°, with radiances reaching 38% of the incident radiance for the amphipod A. blossevillei. Scattering patterns were also found to be species-specific for most animals. Relative scattered radiances were used to estimate sighting distances at different depths. These calculations predict that all of the examined zooplankton are brighter than the background radiance when viewed horizontally, or from diagonally above or below at shallow depths. Thus, in contrast to greater depths, the best strategy for detecting transparent zooplankton in the epipelagic environment may be to search for them from above while looking diagonally downwards, looking horizontally or looking from below diagonally upwards. Looking directly upwards proved to be more beneficial than the other viewing angles only when the viewed animal was at depths greater than 40 m.
- Biology and Life Sciences
- sighting distance
- Lund Vision Group
- ISSN: 0022-0949
- ISSN: 1477-9145
Source: http://www.lu.se/lup/publication/760563
Electromagnetic Pulse (EMP) is a large, very-short-duration release of electromagnetic energy into the atmosphere. Most often these pulses are associated with the high-altitude detonation of nuclear weapons. As these waves of energy pass across electrical conductors, such as the wires that make up the national power grid, or even the conductors inside the tiny integrated circuits of modern computers, large electrical currents are induced. These currents are strong enough that the affected wires and circuits can simply burn out, not unlike an electrical fuse burning out when too much current flows through it. The result is that these electrical systems cease to work until they are repaired, and a repair of the entire national power grid has been estimated to require in excess of ten years. This does not even take into account all the other electrical and electronic devices that would also require repair or replacement (for example, the electronics inside modern fuel pumps). Developed nations would instantly be transported to a reality that would truly be surreal, and quite deadly.

The reality is that a weapons-caused EMP event is unlikely because it would almost certainly need to be delivered by missile, and very few nations have the capability to deliver a high-altitude atomic burst over the US. Any nation with that degree of sophistication would probably not want to risk the almost certain retaliation that would result. However, EMP pulses can also be generated by solar storms, and those storms do happen occasionally (see my previous posting about the ‘Carrington Event’ of 1859). As a result it is entirely possible that an EMP event will affect you within your lifetime, or certainly will affect your children within their lifetimes. Such an event would pose a serious threat to civilization throughout the world.
Our book, “When There is No FEMA – Survival for Normal People in (Very) Abnormal Times” – is the most comprehensive and detailed reference ever written on the topic of human survival. It can be previewed and ordered from our web site at http://nofema.com/.
Source: http://nofema.com/the-possibility-of-emp/
Household SHARPS Program
SAFE DISPOSAL OF HOME-GENERATED SHARPS
Beginning September 1, 2008, California law prohibits the disposal of home-generated sharps in the trash or recycling containers, and requires that all sharps waste be transported to a collection center in an approved container.
What are home-generated sharps? Sharps are defined as hypodermic needles, syringes, lancets, and any other medical devices used for self-injection or blood tests, which may have a sharp tip or end.
What is an approved sharps container? According to State law, approved sharps containers are rigid, leak free, puncture resistant, and sealed. Examples of acceptable containers include: coffee cans with lids, liquid detergent bottles with lids, plastic soda bottles with lids, plastic milk or juice containers with lids, and any containers designated for sharps disposal.
What are some problems associated with improper disposal of sharps? If sharps are not disposed of properly, they pose a very serious puncture hazard and can act as a vector for diseases such as hepatitis, HIV, AIDS, and tetanus. Sharps should not be thrown in the trash, placed in a recycling container, or flushed down the toilet. Used sharps left loose among waste can hurt sanitation workers during collection rounds, at sorting and recycling facilities, and at landfills. Children, adults, and even pets are also at risk for needle-stick injuries when sharps are disposed of improperly at home or in a public setting.
How does one dispose of their sharps properly? When purchasing needles or syringes from your local pharmacy, purchase mail-in storage containers or purchase an approved medical waste container (they are bright red, and have the large “biohazard” symbol displayed).
If you cannot purchase a container, you may make a homemade sharps container (see the question “What is an approved sharps container?” above). Approved containers may be transported to one of the following locations:
- A home-generated sharps consolidation point
- A medical facility
- Through a medical waste mail-back program
- A household hazardous waste collection facility
For more information, contact the Stanislaus County Department of Environmental Resources at 209-525-6700.
Source: http://www.stancounty.com/er/hazmat/household-sharps.shtm
Illinois’ long history of questionable election practices has become ingrained in American folklore. The state is known for phrases such as “vote early and vote often,” “graveyard precincts” and “ghost voting.” Following the 1960 presidential election, the Chicago Tribune concluded “once an election has been stolen in Cook County, it stays stolen.” But elections can be rigged without violating any laws. In fact, the most common method is embedded in the Illinois Constitution. That is the partisan gerrymandering of legislative districts. The Illinois Reform Commission, appointed by Governor Pat Quinn, had harsh words for Illinois’ system of drawing legislative districts, declaring that it “deprives Illinois voters of fair representation,” that it places Illinois voters in direct conflict with legislators, and “regardless of which party wins, the people of Illinois are the losers…” Even the framers of the 1970 Illinois Constitution have acknowledged that the system they devised has failed. Senate Joint Resolution Constitutional Amendment 104 and House Joint Resolution Constitutional Amendment 56 take an initiative sponsored by the League of Women Voters and other government reform groups and put it into legislation for General Assembly passage. The goal is to reform the system and end the partisan gerrymandering of Illinois. Illinois has a rare opportunity to end this legalized form of election fraud. But it will only happen if the public insists. An Oct. 2009 survey by the Paul Simon Public Policy Institute shows strong public support for major political and ethical reforms in Illinois, including gerrymandering reform. More than 71% of respondents disapprove of the current Illinois system for drawing legislative districts, with nearly 28% expressing strong disapproval.
Turning the process over to a neutral party had the support of nearly 73% of respondents. The term "Gerrymander" was first used in the Boston Gazette of March 26, 1812 to describe a district that the newspaper likened to the shape of a salamander. Gerrymandering comes from combining salamander and the name of Elbridge Gerry, the governor of Massachusetts from 1810 to 1812, who signed into law a redistricting plan that was designed to benefit his political party. Illogical...dysfunctional...legalized protection racket. Those are just some of the terms used by editorial writers from across Illinois in demanding gerrymandering reform.
Source: http://www.gerrymandering.senategop.net/
Fossil Fuel Divestment Makes Financial Sense
Source: Trent Arthur http://trentarthur.ca/fossil-fuel-divestment-makes-financial-sense/
It is morally wrong to make a profit off the fossil fuel industry. While violating indigenous and human rights all over the world, these companies are planning to extract 5 times more coal, oil, and gas than what scientists say we can afford to burn, and are blocking legislation that would set limits on greenhouse gas emissions. That being said, many students say the immorality of investing in these companies is an even trade-off for the money being made from them, and that investments in the industry are necessary to pay for staff salaries, student scholarships, and infrastructure improvements. This argument, based purely on finance, might seem valid, given the financial success of the fossil fuel industry. However, if we delve deeper into the issue, we can begin to see that there is much more to the story.
Surprisingly, fossil fuels make up, on average, only 5 percent of a college or university’s endowment, so divesting from those companies is not likely to have a significant impact on returns. In fact, the investment management firm Aperio Group conducted an academic study entitled “Do the Investment Math: Building a Carbon Free Portfolio”, which showed that the “theoretical return penalty” of divesting from fossil fuels was only 0.003 percent. There is even academic literature to show that fossil fuel-free portfolios outperform those with fossil fuel investments. A report, “Beyond Fossil Fuels: The Investment Case For Fossil Fuel Divestment”, from Impax Asset Management Group, tracked the past seven years of international equity markets. The results showed that if fossil fuel companies were removed from the MSCI World Index, then the resulting fossil fuel-free portfolio would have made 2.3% per year, while a portfolio with fossil fuel companies would achieve an annual net return of only 1.8% for the same period.
Another paper published by index provider MSCI Inc. found results that almost mirror those in the Impax report. In addition to investments, colleges and universities depend upon making money through tuition fees, as well as through fundraising efforts from alumni, corporate sponsorships, and research awards. The investment choices of an educational institution can have a great impact on both their fundraisers and future students. “After we divested we started receiving donations online,” said Stephen Mulkey, President of Unity College in Maine, which was the first school in the United States to divest its holdings in fossil fuels. “We’ve seen an uptick in our inquiries from students. I think that will transform into an improvement in enrollment.” Of course, money divested from fossil fuel stocks will be reinvested in something else. “You’re not divesting and then just forgoing those profits,” said Mulkey. “You divest from BP and invest in something else. You reanalyze your portfolio.” Options for schools looking to reinvest will increase in the future. “The speed at which this campaign has spread is causing ripples in the investment community,” commented Andy Behar, CEO of the shareholder advocacy group As You Sow. “We anticipate more ‘carbon-free’ investment options coming onto the market over the coming months for endowments, foundations, and other institutional investors who want to move investment dollars to build a clean energy future.” The best way for schools to reinvest would be to do so in their own campuses. Investing in solar panels, LEED-standard buildings, and efficient light bulbs, for example, would have significant environmental and long-term economic benefits. In 2010, George Washington University in Washington, DC invested $141,000 to upgrade the lighting in their academic centre. Since completion, the project has been generating $100,000 per year in savings. It paid itself off in less than 2 years. 
With a projected lifespan of at least 8 years, the original $141,000 investment will generate about $800,000 in total savings. A report published by Mark Orlowski, head of Sustainable Endowments Institute, showed that, on average, the annual return on investment for a thousand efficiency projects at campuses across the U.S. was just under 30 percent, much higher than any return rate on the stock market. The median payback was also shown to be just 3.5 years. “College trustees often think of a new lighting system as an expense, not an investment, but it’s not,” noted Orlowski. “If you invest a million dollars, and can expect to clear 2.8 million dollars over the next decade, then that’s the definition of fiduciary soundness.” The writing is on the wall for fossil fuel divestment. Divesting would not only be a good moral choice for Trent University, but a sound financial one as well. What are we waiting for? You can find out more information about the Fossil Free Trent campaign by visiting facebook.com/fossilfreetrent.
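The retrofit arithmetic above can be checked with a simple model (constant annual savings, no discounting); the figures below are the article's own GWU lighting numbers:

```python
def payback_years(upfront_cost: float, annual_savings: float) -> float:
    """Simple payback: years of savings needed to recover the upfront cost."""
    return upfront_cost / annual_savings

def gross_savings(annual_savings: float, lifespan_years: float) -> float:
    """Total savings generated over the project's lifespan (no discounting)."""
    return annual_savings * lifespan_years

print(payback_years(141_000, 100_000))  # 1.41 — "paid itself off in less than 2 years"
print(gross_savings(100_000, 8))        # 800000 — "about $800,000 in total savings"
```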
Source: https://sustainabletrent.org/2014/01/04/138/
Objective: Find things in common, talk to everyone in the group.
Group Size: Small, Medium, Large.
Materials: None; pen and paper helpful.

Divide the large group into smaller groups. Instruct each group to find 5-10 (depending on the time frame) things they all have in common. Challenge them to think creatively: places they have traveled, TV shows they have watched, some other experience. The first group to come up with the designated number of things wins — points for creativity too!
Source: http://www.teampedia.net/wiki/index.php?title=Common_Threads&oldid=4874
The term diabetic eye disease refers to a group of eye problems that people with diabetes may face as a complication of the disease, all of which can cause severe vision loss or even blindness.
Diabetic retinopathy is the most common diabetic eye disease, and a leading cause of blindness in adults. It is caused by changes in the blood vessels of the retina. In some people with diabetic retinopathy, blood vessels may swell and leak fluid. In other people, abnormal new blood vessels grow on the surface of the retina. The retina is the light-sensitive tissue at the back of the eye. A healthy retina is necessary for good vision.
If you have diabetic retinopathy, at first you may not notice changes to your vision. However, over time, diabetic retinopathy can get worse and cause vision loss. This condition usually affects both eyes.
Source: http://www.mcbrideandmccreesh.com/diabetic-eye-disease/
One of the hallmarks of eating disorders is a preoccupation with thinking about the body, weight, food and calories. Instead of engaging in activities that bring joy, like gratitude, people with eating disorders disengage from friends and family, and in many ways, life itself. One way to reengage is to practice gratitude – to think about the things we’re thankful for.
It may sound obvious. But as with other changes we all try to make, it’s not always easy to set aside negative or self-sabotaging thoughts and look for sources of light and happiness – whether it’s something as profound as the love of family and friends or as simple as having the ability to go for a walk on a sunny winter day.
Recovery from eating disorders can be a long process, with many ups and downs. On the difficult days, taking a few moments to really think about the things that you’re thankful for can lift the spirits and provide the strength to keep going – for people with eating disorders, and for all of us.
Source: https://www.rosewoodranch.com/a-message-about-gratitude-from-dena-our-vp-of-clinical-services/
MapReduce is a great approach to problem solving. It is very popular too, but MapReduce examples other than word-count are scarce on the web. This article describes MapReduce problem solving that goes beyond word-count.

Arduino uses asynchronous serial communication to send and receive data to and from other devices. Arduino Uno supports serial communication via the on-board UART port and Tx/Rx pins. Generally this transmission happens at 9600 bits per second, a rate termed the baud rate.
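To go one small step beyond word-count, here is a minimal single-process sketch of the map/shuffle/reduce pattern — maximum temperature per city. The records and city names are invented for illustration:

```python
from collections import defaultdict
from functools import reduce

# A MapReduce-style pipeline, run in a single process for clarity:
# compute the maximum temperature per city (not word-count).
records = [("Oslo", -3), ("Pune", 31), ("Oslo", 2), ("Pune", 29)]

# Map: emit (key, value) pairs — here the identity mapping.
mapped = [(city, temp) for city, temp in records]

# Shuffle: group all values by key.
groups = defaultdict(list)
for key, value in mapped:
    groups[key].append(value)

# Reduce: fold each key's values down to a single result.
result = {key: reduce(max, values) for key, values in groups.items()}
print(result)  # {'Oslo': 2, 'Pune': 31}
```

In a real MapReduce framework the map and reduce steps run in parallel across machines and the shuffle is handled by the framework, but the data flow is exactly this.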
Source: https://idevji.com/tag/world/
Tungsten heavy alloys (WHAs) are often used instead of pure tungsten because they retain most of tungsten's very high density while offering significantly better mechanical properties. Pure tungsten, often manufactured from powder which is sintered and forged to size, is relatively brittle and difficult to machine, which limits its applications. Tungsten heavy alloys are made by alloying tungsten with other metals, such as nickel, iron, copper and molybdenum, which increases ductility, toughness, and strength. The resulting density remains far higher than that of most other metals, which makes WHAs ideal for applications where high density and weight are required.
WHAs are primarily used in aerospace, military, and medical applications, as well as in sports equipment and industrial machinery. They are often used to make counterweights, radiation shielding, and other high-density components. These materials also have potential use in the field of nuclear medicine and radiation therapy as a radiation shield.
There are several types of tungsten heavy alloys which Admat supplies, each with unique properties and uses. Some examples include:
- W-Ni-Fe: This is the most common type of tungsten heavy alloy and is composed of tungsten, nickel, and iron. It has a high density and is used for applications such as radiation shielding and balancing weights.
- W-Ni-Cu: This alloy has a higher thermal conductivity than W-Ni-Fe and is used in applications that require good thermal conductivity, such as heat exchangers and electrical contacts.
- W-Ni-Mo: This alloy has excellent wear resistance and is used in applications that require wear resistance, such as bearings and seal rings.
It's important to note that the composition and properties of tungsten heavy alloys can be tailored to specific application needs by adjusting the ratio of the elements that are present.
Admat’s tungsten heavy alloys are used instead of pure tungsten because they combine near-tungsten density with improved mechanical properties: increased ductility, toughness, and strength.
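As a rough illustration of how composition sets density, a rule-of-mixtures estimate can be sketched. This is a simplification that ignores porosity and sintering effects, and the 90W-7Ni-3Fe composition below is a generic example composition, not an Admat product specification:

```python
# Approximate handbook densities in g/cm^3
DENSITIES = {"W": 19.25, "Ni": 8.91, "Fe": 7.87}

def mixture_density(mass_fractions: dict) -> float:
    """Inverse rule of mixtures over mass fractions: rho = 1 / sum(w_i / rho_i)."""
    return 1.0 / sum(w / DENSITIES[el] for el, w in mass_fractions.items())

# A common 90W-7Ni-3Fe heavy-alloy composition:
print(round(mixture_density({"W": 0.90, "Ni": 0.07, "Fe": 0.03}), 2))  # 17.12
```

The estimate lands near 17.1 g/cm³ — below pure tungsten's ~19.25 g/cm³ but far above steel — which is why adjusting the nickel/iron/copper fraction lets the density be tuned to the application.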
Source: https://www.admatinc.com/products/tungsten/heavy-alloy/
Image courtesy NOAA. Flood: Any high flow, overflow, or inundation by water which causes or threatens damage. Flash Flood: A rapid and extreme flow of high water into a normally dry area, or a rapid water level rise in a stream or creek above a predetermined flood level, beginning within six hours of the causative event (e.g., intense rainfall, dam failure, ice jam). However, the actual time threshold may vary in different parts of the country. Ongoing flooding can intensify to flash flooding in cases where intense rainfall results in a rapid surge of rising flood waters. (Source: National Weather Service) During the 20th Century, floods were the number one natural disaster to cause the loss of lives and property, according to the USGS. Flooding can occur in a number of situations, including heavy downpours in strong thunderstorms, or during the spring when snowpack is melting. Record flood events have occurred along the Mississippi and Missouri rivers in years such as 1927, 1973, 1993, and 2011. During the spring, snow melt from winter snows flows into rivers in the upper U.S., most of which converge into the Mississippi. After winters of extreme snowfall, rivers can swell far into their flood plains and wreak havoc on river towns. Flash flooding is a rapid and extreme flow of high water into a normally dry area, beginning within six hours of the causative event (e.g. a strong thunderstorm). Flooding Caused By Tropical Cyclones In tropical storms and hurricanes, wind speeds and surge are not the only danger—flooding and flash flooding have claimed the most lives in tropical cyclones from 1970 to 1999 (unlike the historic 2005 hurricane season, where storm surge claimed thousands of lives). Flash flooding will occur in creeks, streams, and urban areas within hours of torrential rain. These floods can reach heights of 30 feet or more. Streets can be turned into rivers, and underpasses become deadly. Deaths caused by the effects of tropical cyclones in the U.S. 
1970-1999 Image courtesy NOAA. Turn Around, Don't Drown The National Weather Service's "Turn Around, Don't Drown" program warns people of the danger of driving through flooded areas. The Centers for Disease Control estimates that over half of all flood-related drownings occur when a vehicle is driven into flood water. The second highest percentage of drownings is from people who walk into or near flood waters. People generally underestimate the force of moving flood water. Just two feet of water can move or lift a car, even a truck or SUV. Only six inches of water is necessary to sweep you off your feet. If flooding occurs, take the following precautions: - Move to higher ground and stay away from low-lying flood-prone areas - Do not allow children to play in flood waters, no matter how fun it might look - Never drive on a flooded road - Do not set up camps along streams or washes when there's a chance of rain or thunderstorms - Be extra cautious during nighttime flooding situations The National Weather Service issues the following flood-related advisories: Flood Watch A hazardous flood event could develop. The expectation of a flood event has increased. Usually this means that somewhere within the watch zone, a flood is expected. If you're in the watch area, you should pay attention to Weather Radio or local news in case a warning is issued. Flash Flood Warning Flash flood warnings, flood warnings, or flood advisories are issued when flooding is occurring or imminent.
<urn:uuid:a5d09306-9339-4e56-a035-31350d3a501a>
CC-MAIN-2017-43
http://rss.wunderground.com/resources/severe/flood.asp
s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187825147.83/warc/CC-MAIN-20171022060353-20171022080353-00252.warc.gz
en
0.92823
738
3.953125
4
Appliances that use natural gas for fuel, like your furnace, water heater or clothes dryer, rely on combustion to create heat. These appliances have traditionally utilized atmospheric combustion, drawing air from inside the home, often from the basement. The combustion exhaust gases are then vented out of the flue or chimney. With sealed combustion appliances, the supply and return air flow is tightly contained, so the appliance does not have to rely on the air inside the home to convert fuel into heat. The Advantages of Sealed Combustion The main advantage of sealed combustion is improved efficiency. To achieve an Annual Fuel Utilization Efficiency (AFUE) of 90 or higher, furnaces utilize sealed combustion. With sealed combustion, the furnace connects to the outdoor air through a supply and a return pipe. Because the air supplied to the furnace is outdoor air and the flue gases are exhausted back outside, efficiency is increased: the furnace is not heating indoor air only to vent it outside. Another advantage of sealed combustion is safety. Without an exposed flame, there is no risk of flammable materials near the appliance catching fire. Burning natural gas can also generate dangerous carbon monoxide (CO) gas, which is far less likely to enter the home through backdrafting when the combustion chamber is sealed.
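The AFUE figure above lends itself to a quick worked example. The sketch below is my own illustration (the function name and the BTU figures are invented, not from the article); AFUE is simply the percentage of the fuel's energy that ends up as useful heat.

```python
def useful_heat_btu(fuel_input_btu: float, afue_percent: float) -> float:
    """Useful heat delivered by a furnace, per the AFUE rating."""
    return fuel_input_btu * afue_percent / 100.0

# Burning 100,000 BTU of natural gas in two different furnaces:
print(useful_heat_btu(100_000, 80))  # older atmospheric unit -> 80000.0 BTU of heat
print(useful_heat_btu(100_000, 95))  # sealed-combustion unit -> 95000.0 BTU of heat
```

The 15,000 BTU difference is energy the lower-efficiency unit sends up the flue instead of into the house.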
<urn:uuid:b826f20e-3239-46a4-9fa3-4cb165c170c6>
CC-MAIN-2017-34
http://maitzhomeservices.com/about-maitz/around-the-home-blog/item/93-how-sealed-combustion-appliances-improve-efficiency-and-safety
s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886117911.49/warc/CC-MAIN-20170823074634-20170823094634-00015.warc.gz
en
0.921498
261
3.171875
3
By Ken Zurski Before any formal holiday existed, the idea to recognize the war dead with a day of commemoration can be attributed to dozens of communities that organized events adorning the grave sites of local soldiers killed in the Civil War. Holding prayer ceremonies at grave sites and placing flowers on graves was not an original concept, but beyond the church groups, large turnouts of people of all faiths and races, whether churchgoers or not, were gaining momentum and support for their act of kindness and reverence. Nearly every town in America had buried dead from the horror of the Civil War and nearly every town had a cemetery as a reminder of the terrible loss. Carbondale, Illinois, home of one of the earliest infantry regiments in that state, has a stone marker that recognizes it as the first site of a Decoration Day ceremony, although it too was held several years before the holiday was officially enacted. Their reasoning is valid thanks to the stirring words of a hometown General, John A. Logan, who would later be credited as the “Father of Memorial Day.” “Tell my wife, tell my sister, mother, that I died with my face to the enemy; that my country might live; that the principles of liberty and freedom might be enjoyed; and that they might be protected by the laws and Constitution.” But like Carbondale, other cities also claimed the distinction. Columbus, Mississippi, was one town that buried many. After the bloody Battle of Shiloh, many of the wounded and war dead were sent by train to the small Southern town just above the Tombigbee River. Thousands of soldiers on both sides of the battle were interred at the hopefully named Friendship Cemetery. In April of 1866, several Columbus women went to the cemetery and brought bouquets of garland, blossoms, lilies and roses to the site. Miss Matt Moreton was among the gatherers. Moreton was a recent widow. Her husband was a victim of the war.
One by one, she and the other women placed flowers on the graves of over a thousand Confederate souls. Miss Moreton, showing no partiality, did the same for the Federal soldiers’ grave sites as well. “This first act of floral reconciliation was discussed in praise and censure,” a local described. “[But] this sweet woman with whom God has blessed the earth – volunteered, of her own mind, to strew flowers upon the Federal’s graves too, not just upon the fallen Confederates.” The Mississippi Index praised the event: “We were glad to see that no distinction was made between our own dead and about forty Federal soldiers, who slept their last sleep by them. It proved the exalted, unselfish tone of the female character. Confederate and Federal—once enemies, now friends—receiving this tribute of respect.” The act prompted Francis Miles Finch to write a poem, famously titled The Blue and the Gray. …From the silence of sorrowful hours The desolate mourners go, Lovingly laden with flowers Alike for the friend and the foe; Under the sod and the dew, Waiting the judgement-day; Under the roses, the Blue, Under the lilies, the Gray. Moreton and three other local women were given credit for the gesture, and their story is remembered today in Columbus, where Memorial Day services are still carried out in the same manner. A century later, in 1966, thanks to a presidential proclamation signed by Lyndon B. Johnson, the New York town of Waterloo, built along the banks of the Cayuga-Seneca Canal, holds the official distinction of being the “birthplace of Memorial Day.” The effort was originally spearheaded by the governor of New York at the time, Nelson D. Rockefeller, who recognized Waterloo as the home of the first village-wide, annual observance of a day to honor the war dead. The local resolution was inspiring enough to be taken up by Congress, passed by the House and Senate, and sent to the President for approval.
Here’s Waterloo’s story: 100 years earlier, in the summer of 1866, Henry Welles, a druggist, suggested a day of social gathering not only to honor the living soldiers but remember the fallen ones as well. General John B. Murray supported the idea and instituted a plan. It was more like a funeral procession. Flags were flown at half-staff and black bunting was hung in respect as soldiers and townsfolk marched to three village cemeteries and placed flowers on the gravesites. The next year, in similar fashion, they did it again, and again the following year, and in each year since. Perhaps the largest and earliest pre-holiday ceremony was held in Charleston, South Carolina, in a large field known as the Race Course, where prized horses once ran. During the Civil War, the infield was used as a prisoner-of-war camp. Hundreds of mostly young men were either held there or awaited transfer to larger prison camps, like Belle Isle in Richmond or Andersonville in Georgia. Many never made it out of the Race Course, suffering from sicknesses like dysentery, which spread quickly in the inhumane conditions and tight quarters. Some 257 men perished and were quickly buried in a pasture nearby. In May of 1865, just weeks after the war ended, several Charleston residents went out to see the gravesites, just mounds of dirt really, and still fresh, noted one observer, “with the marks of the hoofs of cattle and horses and feet of men.” They decided to erect a fence and place a monument on the site. Then, on May 1, 1865, May Day, nearly 3,000 local schoolchildren and “double that number of grown-ups” went to the Washington Race Course with bouquets of roses and other “sweet smelling flowers.” James Redpath, known as “Uncle James,” a witness, remembered the event. “The children marched from the Race Course singing the John Brown Song and then, silently and reverently, and with heads uncovered, they entered the burial ground and covered the graves with flowers.
“It was the first free May Day gathering they ever enjoyed,” Redpath noted, referring to the “colored” children present and their parents, former slaves. Three years later, on May 5, 1868, General John A. Logan of the Union Veterans—the Grand Army of the Republic—established a day for all Americans to decorate with flowers the graves of war heroes. On May 30, 1868, just as Logan had ordered, the first Memorial Day service (then known as “Decoration Day”) took place at Arlington Cemetery.
<urn:uuid:e9f2270c-6a50-4fdf-8db0-7c1d3007902d>
CC-MAIN-2020-29
https://unrememberedhistory.com/tag/charleston/
s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655891654.18/warc/CC-MAIN-20200707044954-20200707074954-00006.warc.gz
en
0.967192
1,524
3.59375
4
Dinosaurs on the Move: Movable paper figures to cut, color, and assemble For generations, dinosaurs have entranced children. Dinosaurs are not only some of the most awesome creatures in the world, but they are also a mysterious part of our world's history and further proof of a Creator God. Like monsters, dinosaurs are scary, but unlike monsters, dinosaurs were real, which adds to their kid-appeal. Some children have even dreamed of having a dinosaur as a pet. Build 20 Movable Dinosaurs Now your child can make twenty movable dinosaur action figures with Dinosaurs on the Move, including both carnivores and herbivores—even flying dinosaurs. This ingenious learning tool reinforces your child's small-motor skills through coloring and cutting out easy-to-assemble dinosaurs in pre-colored and colorable versions. Includes Full-Color and Ready-to-be-Colored Versions! Based on authentic dinosaur fossils and skeletons, the drawings are printed on sturdy cardstock, with perforated pages for easy removal. Each dinosaur has a full-color version and a black-and-white version that can be colored as boldly as your child desires. Then cut out the pieces, punch holes, and fasten together your formidable dinosaur. No Evolutionary Statements! Dinosaurs on the Move also contains a Dinosauria, a mini dinosaur encyclopedia, with facts for each of the ten dinosaurs in the collection. These facts include fascinating tidbits about habitat, diet, length, weight, the location of fossil discoveries, and key features about each dinosaur, but none of that goofy "zillions of years ago" stuff. Art & Science Supplement If your family includes a future paleontologist, Dinosaurs on the Move is a perfect supplement to his arts or science curriculum. Dinosaurs on the Move does require the use of a 1/8" round hole punch and mini brads for attaching the parts together. 
Dinosaurs on the Move includes both full color and black-and-white versions of an Allosaurus, Ankylosaurus, Baryonyx, Brachiosaurus, Ouranosaurus, Parasaurolophus, Pteranodon, Stegosaurus, Triceratops, and a Tyrannosaurus Rex. "The originality and quality of this book is outstanding!" — Dr. Richard Moody, The Dinosaur Society "Dinosaurs on the Move combines the appeal of dinosaurs with key facts about them while encouraging creativity in the coloring and assembly of the moving figures. It's a unique addition to any dinosaur-lover's bookshelf, and one that I would have certainly loved myself as a child!" — Dr. Matthew Carrano, Museum Paleontologist, Washington D.C. "A fun book to engage the young mind!" — Dr. Kenneth Carpenter, Chief Preparator & Curator of Lower Vertebrate Paleontology, Denver Museum of Nature and Science "This educational and fun dinosaur activity book combines children's fascination with dinosaurs and their ability to be creative in coloring and assembling moving figures of these creatures from long ago." — Joseph J. Kchodl "PaleoJoe", Science Educator, Winner of the Katherine Palmer Award
<urn:uuid:2ea8d2e5-904b-485f-8f4d-3b9fb6b3613b>
CC-MAIN-2017-26
https://www.timberdoodle.com/Dinosaurs_on_the_Move_p/298-299.htm
s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128329344.98/warc/CC-MAIN-20170629135715-20170629155715-00120.warc.gz
en
0.895166
662
2.765625
3
Planet X Nibiru: is this her, or NASA etc. winding us up? Watch this as a dark star planet is found in the infrared telescopes, seemingly in the right positions, as scheduled. Watch and YOU decide, for those subscribers following Nibiru. Nibiru is a comet swarm, not a planet. Mathematical projections (from Ice Cores) indicate that it will pass through the inner solar system by the end of 2017. Russian sources suggest by 2015. ET alien sources suggest it will threaten Earth (and maybe for a few months thereafter). Some of these meteorites have already struck (Sep. to Nov. 2014). The meteors appear to be devoid of volatiles and may become noticeable only if they are close enough to be heated by the Sun’s heliosphere. The main threat could be detected Aug. 2015. This threat, coming DURING 2017 (with all its hype), has been hidden from us in plain sight. Should even a modest impact (or massive atmospheric explosions) occur, civilization could collapse. There could be global weather changes. Earth could even be returned to Ice Age conditions – and mass starvation would follow. Tsunamis, floods, gigantic mud slides and meteorite falls igniting fires over wide areas could occur. Continents could slightly move and magnetic poles could change magnitude and/or even re-align. World governments may be aware of this impact threat. International (UN-like) and world actions, unusual activity in the financial sector, energy sector and military (Martial Law) sector might be expected prior to this threat. Massive population relocation to hospitable climates could be required, provoking major wars for survival. Russia, a country among those most affected, could be preparing, but not alarming, its citizens through its news media by reporting these possibilities, but including the inaccurate ‘Nibiru is a planet’ concept (to lessen potentially alarming scientific verification of this threat).
<urn:uuid:f5bcf316-e5ab-44f1-9423-a37d215d34b5>
CC-MAIN-2017-39
https://fusionlacedillusions.com/index.php/2014/12/20/planet-x-nibiru-possibly-found-in-infrared-approaching-on-schedule-planet-x-update-12202014-video/
s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818696182.97/warc/CC-MAIN-20170926141625-20170926161625-00398.warc.gz
en
0.942673
394
2.609375
3
Why you might not care about it as much as in the past. The clock is typically on the motherboard. But regardless of where the clock resides, you need a processor capable of running at that speed. So, in a sense, that salesperson is right: if you want a faster CPU, buy a faster CPU. But there are several reasons CPU speed may not be as important as it once was. Does CPU speed matter? CPU speed is important, but it’s not as good a measure of overall system speed as it might have been in years past. Factoring into today’s machines are technologies like multi-core CPUs, solid state disk drives, and more. Understanding how these relate and when they become important can result in a machine more properly configured for your use. CPU and motherboard discussions bring out passionate opinions in some. There are strong (STRONG) feelings that this or that is most important, that this architecture is better than that architecture, and so on. If that’s you, this article isn’t for you. My intent here is to cover things at a higher, much more abstract layer — a layer that’s more practical for the less passionate “average consumer” who doesn’t want or need to understand all the different processor families, characteristics, and the like. They’re not trying to eke out every possible cycle of processor speed; they just want something that works well and works reliably. They just want to know what to pay attention to. Before we dive in, it’s probably worth defining a couple of things. MHz & GHz – MHz is shorthand for megahertz, meaning 1,000,000 times per second. GHz is shorthand for gigahertz, or 1 billion times per second. Clock – the clock speed of a CPU is at its most basic a measure of how many instructions it can perform per second. So a simple CPU running at 300 MHz, for example, can perform around 300 million operations per second. I have to say “simple” CPU, because of course current CPUs are anything but simple.
Some operations actually take more than one clock cycle, while others can be done in parallel, meaning that together they take less than one clock cycle each when combined. But the CPU clock speed has always been a rule of thumb that we’ve used for decades to measure at least at a conceptual level how fast our processors are running. In the past, the design outlined in your question wasn’t all that wrong. The clock rate of the processor (or Central Processing Unit) in the computer was a relatively good indicator of its overall speed. I recall the days when replacing a 333MHz machine with a 666MHz was an incredible difference worth getting excited over. One of the reasons we could so easily rely on that one measurement is that it represented more than just raw CPU computing power. That same clock often drove many other components on the motherboard besides the CPU. It was truly the “system” clock, and was a rough indicator of system speed. Clock rate seems to have levelled off for some time. For example, most of what I see is in the 3.2Ghz range, but it’s really all over the map. The desktop computer I’m running has a 3.5Ghz processor, while the server that’s running the Ask Leo! website is running at 2.5Ghz. First, all that number reflects is the CPU speed. It may not relate to as many other things on the motherboard as before. It’s not uncommon for other devices to have their own decoupled clock, which allows them to run at speeds more appropriate for whatever they’re doing. CPU speed remains important, but not as important. If you’re performing lots of CPU-intensive tasks, for example, then a fast CPU can make a lot of difference. It’s one of the reasons my desktop machine is at the higher end of the range. I selected it with video editing in mind, which is a very CPU-intensive task. Games are another situation where the speed of the CPU itself can matter quite a bit. But if you don’t regularly run CPU-intensive tasks, other things may come into play. 
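The clock-speed rule of thumb described above can be written down as a toy model. This is my own illustration (the function and numbers are invented, not from the article): operations per second is roughly the clock rate divided by the cycles each operation takes.

```python
def ops_per_sec(clock_hz: float, cycles_per_op: float = 1.0) -> float:
    """Naive throughput model: operations completed per second at a given clock."""
    return clock_hz / cycles_per_op

# A "simple" 300MHz CPU doing one-cycle operations:
print(ops_per_sec(300e6))                      # -> 300000000.0 (~300 million ops/sec)
# An operation that needs 3 cycles completes a third as often:
print(ops_per_sec(300e6, cycles_per_op=3))     # -> 100000000.0
# Superscalar execution (two ops retired per cycle) acts like half a cycle per op:
print(ops_per_sec(300e6, cycles_per_op=0.5))   # -> 600000000.0
```

Real CPUs mix all three cases at once, which is exactly why the single clock number stopped being a reliable yardstick.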
Generally more important than speed In my mind, two things have taken the place of pure CPU speed. First, CPUs are now multi-core. A core is essentially an independent computing unit. A dual-core machine is like having two CPUs, except they’re in a single package, so we refer to it as a single CPU with two cores. My desktop machine has a CPU with 16 cores. The Ask Leo! server’s CPU has 8. Higher-end servers can have hundreds. The reason cores matter so much is that we’re rarely asking the computer to do only one thing. Multiple cores allow the computer to literally do several things at once. The Ask Leo! server, for example, can actively respond to 8 different requests at exactly the same time.1 Software can also be written to take advantage of multiple cores. For example, my video editing software can take advantage of all the cores on my machine. So not only are they fast cores (3.5GHz) but there are 16 of them. (It might be tempting to say that’s the equivalent of 16 times 3.5GHz or a 56GHz processor, but the reality isn’t quite that simple. It is fast, though.) The single biggest improvement to computing speed over the last several years has not been an ever increasing CPU speed, but an increasing number of cores placed in the CPU. Generally more important than the CPU The other major improvement to the speed of our computers has nothing to do with the CPU. It’s the disk. Specifically, replacing traditional hard disks (HDDs) with solid state drives (SSDs) is a big boost. SSDs are significantly faster, particularly when reading information. The fact that the disk can make such a dramatic improvement in our perception of speed just confirms the fact that the CPU might not matter as much as we might think. Depending on what we’re doing, we might access the disk much more than we realize, and as a result, a faster disk will have the most dramatic impact. Nothing is absolute As with so many of my answers, there’s no single definitive solution. 
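Before weighing those trade-offs, the multi-core point is worth seeing in code. The sketch below is a hypothetical Python illustration (the job and function names are mine, not from the article): the same CPU-bound jobs run one after another on a single core, then fan out to a pool of worker processes that the operating system can schedule onto separate cores.

```python
import os
from concurrent.futures import ProcessPoolExecutor

def cpu_bound(n: int) -> int:
    # A toy CPU-intensive job: sum the integers below n.
    total = 0
    for i in range(n):
        total += i
    return total

def run_sequential(jobs):
    # One core works through the jobs back to back.
    return [cpu_bound(n) for n in jobs]

def run_parallel(jobs):
    # Each worker process can land on its own core.
    with ProcessPoolExecutor() as pool:
        return list(pool.map(cpu_bound, jobs))

if __name__ == "__main__":
    jobs = [100_000] * 4
    print("cores available:", os.cpu_count())
    # Same answers either way; with enough cores the parallel run finishes sooner.
    assert run_sequential(jobs) == run_parallel(jobs)
```

Wall-clock speedup only appears for work that can actually be split this way, which is why a 16-core 3.5GHz machine is not simply a "56GHz" machine.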
More often than not, it depends on how you use your computer. If you do computer tech support for a living, like to edit videos, run multiple virtual machines, keep four different browsers open at once (as well as a plethora of other applications), and you plan to keep your machine for a decade or more, then maximizing everything — CPU speed and cores, RAM, and SSDs — probably makes the most sense. If, on the other hand, you’re more normal than I am and spend most of your day surfing the internet and using chat and email, then most current CPUs with multiple cores (it’s difficult to find fewer than four these days) will do you just fine. I also recommend an SSD, but that too is pretty standard now as well. And if your needs are somewhere in between, you’ll want to trade all of those off against each other and your budget. I don’t obsess about CPU speed anymore, and neither should you, unless you know you have a specific need. Instead, understand and consider the various combinations of technologies in today’s computers and how they might combine to solve the problem you’re looking to solve. Hopefully, you’ve now got a high-level idea of the kinds of things that might matter. Footnotes & References 1: It’s actually much more than this because CPU resources aren’t the only thing involved. I did say this was an over-simplification, though.
<urn:uuid:33fe6961-6596-44ee-a128-1342fc7242e9>
CC-MAIN-2023-40
https://askleo.com/does-cpu-speed-matter-any-more/
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510697.51/warc/CC-MAIN-20230930145921-20230930175921-00729.warc.gz
en
0.951988
1,733
2.890625
3
Little Book of Churchill: In His Own Words (Hardcover) The iconic leader who 'mobilized the English language and sent it to battle'. Churchill's life and political career were certainly long and colorful: he travelled the world, fought in the Boer War, and oversaw the disastrous Gallipoli campaign during the First World War. But it was during the Second World War that this natural leader's qualities of grit, dogged determination and perseverance truly came to the fore as he led the nation to victory. Churchill's rallying speeches made him one of the world's greatest orators, while his acerbic wit was legendary. Readers will delight in finding the best of this illustrious Briton's words in one handy, pocket-sized volume. 'Never give in - never, never, never, never, in nothing great or small, large or petty, never give in except to convictions of honor and good sense.' Speech given at Harrow School for boys, London, October 1941. 'I am ready to meet my Maker. Whether my Maker is prepared for the great ordeal of meeting me is another matter.' Said on his 75th birthday, 30th November 1949.
<urn:uuid:96c54076-2107-4719-977e-5c5e315d1d43>
CC-MAIN-2023-23
https://www.left-bank.com/book/9781911610410
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224647810.28/warc/CC-MAIN-20230601110845-20230601140845-00059.warc.gz
en
0.958614
243
2.65625
3
Studies have shown that cold muscles are more prone to injury. Warm up with jumping jacks, stationary cycling, or running or walking in place for three to five minutes. Then slowly and gently stretch, holding each stretch for 30 seconds. Gentle stretching after physical activity prepares your body for the next time you exercise. It will make recovery from exercise easier. Avoid the “weekend warrior” syndrome. Try to get at least 30 minutes of moderate physical activity every day. Take sports lessons. Whether you are a beginner or have been playing a sport for a long time, proper form and instruction reduce the chance of developing an “overuse” injury like tendinitis or a stress fracture. Listen to Your Body As you age, you are not as flexible as you once were and cannot tolerate the same types of activities that you did years ago. Modify activity to accommodate your body’s needs. Use the 10 Percent Rule When changing your activity level, increase it in increments of no more than 10 percent per week. When strength training, use the 10 percent rule as your guide and increase your weights gradually. Source: American Orthopaedic Society for Sports Medicine and American Academy of Orthopaedic Surgeons.
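The 10 percent rule above is easy to turn into arithmetic. Here is a small sketch of my own (the function name and units are invented, not part of the source guidance) that projects activity levels week by week:

```python
def weekly_progression(start: float, weeks: int, rate: float = 0.10):
    """Activity levels (e.g. miles per week) when increasing by `rate` each week."""
    levels = [start]
    for _ in range(weeks):
        levels.append(round(levels[-1] * (1 + rate), 1))
    return levels

# Starting at 10 miles per week and adding 10 percent each week:
print(weekly_progression(10.0, 4))  # five weekly levels, each 10% above the last
```

Even at this maximum recommended pace, it takes over seven weeks to double the starting load, which is the point of the rule: gradual increases give the body time to adapt.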
<urn:uuid:7830e345-0157-4022-ad86-fb2a59bdef3f>
CC-MAIN-2013-48
http://www.boston.com/lifestyle/health/2012/10/07/sports-injury-prevention/sfKXRbFdGNT3rvtRQPGulM/story.html
s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386164997874/warc/CC-MAIN-20131204134957-00098-ip-10-33-133-15.ec2.internal.warc.gz
en
0.926464
257
2.96875
3
Definition of the noun Ameira longipes What does Ameira longipes mean as a name of something? Ameira longipes is a species of Ameira, described by Boeck in 1865. - synonym: species Ameira longipes - kingdom: Animalia - phylum: Arthropoda - class: Maxillopoda - order: Harpacticoida - family: Ameiridae - genus: Ameira - observational distribution: specimen and observational data from the Global Biodiversity Information Facility Network (not necessarily true occurrence density gradients)
<urn:uuid:d6ccf82f-75b4-48c8-adf1-a66b3e2f8eb9>
CC-MAIN-2017-30
http://www.omnilexica.com/?q=Ameira+longipes
s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549428300.23/warc/CC-MAIN-20170727142514-20170727162514-00421.warc.gz
en
0.732053
208
2.625
3
PE and Sports Premium How we used our Sports Premium funding (2015-2019) and planned future spend Following Year 6's swimming lessons in the Summer Term 2019, 83% of the children can swim a minimum of 25 metres. Meeting national curriculum requirements for swimming and water safety What percentage of your current Year 6 cohort swim competently, confidently and proficiently over a distance of at least 25 metres? What percentage of your current Year 6 cohort use a range of strokes effectively [for example, front crawl, backstroke and breaststroke]? What percentage of your current Year 6 cohort perform safe self-rescue in different water-based situations?
<urn:uuid:a65c9324-3719-42cb-9674-f7577de897fd>
CC-MAIN-2020-24
https://www.cookleysebright.co.uk/pe-and-sports-premium/
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590348513230.90/warc/CC-MAIN-20200606093706-20200606123706-00000.warc.gz
en
0.876117
135
2.5625
3
Previous Challenge Entry (Level 3 - Advanced) Topic: Writing (01/11/07) TITLE: Dream Writing By Beth Muehlhausen I’m not a writer – not even close – but I’ve been told writing can be likened to breathing. Writers are unique individuals who invite all the details and guts of life inside, the good and the bad, to be absorbed. That’s like breathing in. Then during the exhale phase it all tumbles out in the form of written words. The trick is to write to the emotional center of the whole mess so it effectively speaks to the human condition. Anyway, when my friend Louise said she had started a new writing project, I took notice. Louise seemed to be defined by creative brilliance. Surely her writing would be equally addictive. “I’m working on an autobiography,” Louise said, matter-of-factly. “Ooooh, awesome. Tell me – how’s it going? What’s the writing process like? Can I read it sometime?” Her electric blue eyes and petite nose squinted quizzically. “What exactly do you mean?” Writing defined Louise’s life; in her mind, there was no living without writing. I raised my chin and tossed my bobbed hair with an air of confidence. I need not apologize for this question. “I mean, tell me what it’s like – writing an autobiography.” Several seconds, maybe ten really long ones, dragged between us. Finally, she sighed. “It’s like … chasing down your heart … taking a bunch of biopsies from different eras … and then looking at each one under a microscope …” She paused to take a big breath. “… and then exposing the ...” The sentence hung, unfinished, in midair. “Yes? The … what?” Louise gazed into space over the top of her stylish plum-colored, rectangular glasses. “You silly! I’m not going to give away my last chapter.” I had to think on that one, and another long silence pressed me into a conversational corner. I literally took a step backward. “Well then,” I chuckled, “I won’t ask.
Let me know when … I mean if and when I can read your autobiography … especially the last chapter.” That night I dreamt one of those crazy off-the-wall dreams that could never, ever happen in a million years. I found myself engrossed in a bubble of introspection as I wrote a short version of my own autobiography. I sat cross-legged on the living room floor dressed in cut-off jean shorts and a tank top in front of the coffee table. Sunshine streamed through breeze-blown, lace curtains. The laptop lid stood at attention as my fingers flew across the keys and words tumbled easily across the screen, unbidden and free. As a young child, I noticed things other kids missed. I seemed particularly spellbound by small observations: patient snails carving wiggly paths in the sand; spiders surrounded by webs covered with precious sprinklings of silver dew. During my teen years something froze up inside. I felt numb and out of place; my parents and I wore masks to protect our fake facades. At school I didn’t fit with the “in” crowd. The friendly stars and moon in the inky night sky became my soul mates. Then I married, had children, and fell into “the norm” of the everyday call of duty. My identity was swallowed up in mountains of diapers and dirty dishes. On occasion, something simple - something like a stiff, cool wind in my face scented with the aroma of freshly mowed grass - called me to freedom. I longed for more, more, always more of what could be, what should be, what would be if only my heart could find rest. I was shocked and fearful when a serious, life-threatening cancer invaded my body. Pain and loss defined me. Was there a silver lining to that cloud? Questions assailed me from within. Was God real? Who was I? Why was I alive? Where was I going after this life? Seeds of faith planted years before finally took root and grew. Long-neglected rooms in my heart opened; protective skyscraper-sized inner walls tumbled down. Despair lifted as Jesus drew me to Himself. 
I woke with a start and stared outside into the night sky. Louise’s words repeated themselves in my mind: “… chasing down your heart … taking biopsies … looking at each one under a microscope … exposing the ...”

I finished her sentence out loud by whispering to the sighing tree branches outside my window: “… exposing the evidence of God’s work and His gift in every phase of life: HOPE!”
Over the past decade, social media platforms have transformed from being simply a means of connecting with friends and family to a vital source of information for users. With over 4 billion active social media users worldwide, social networks are rapidly becoming the new search engines.

Traditionally, search engines like Google, Bing, and Yahoo have been the go-to sources for people looking for information on the internet. However, with the meteoric rise of social media, users are beginning to turn to platforms like Facebook, Twitter, Instagram, TikTok, and LinkedIn to search for information, products, and services. Gen Z has been turning to platforms like TikTok for more than just entertainment. In fact, a recent study has shown that Gen Z prefers to use TikTok instead of Google for search.

A main reason for this shift is the way social media platforms are designed, offering personalized experiences with feeds and timelines tailored to individual users based on their interests and behaviors. This means when users search on a social network, they are presented with relevant results based on their preferences, search history, and connections.

Social networks have also made it easier for businesses to reach their target audiences. Companies can target specific audiences based on demographics, interests, and behaviors. This means that businesses can reach potential customers on social media that may not yet have found them through traditional search engines.

1. TikTok's algorithm offers a more personalized experience. The app's "For You" page displays content based on the user's interests and behaviors, making it easier to discover new information and topics. As users engage with content, the algorithm continues to further refine recommendations to create a more tailored experience.

2. Using social channels as search engines “ups” the trust factor.
Social media users are more likely to trust recommendations and reviews from their connections and social networks than from search engines. Users are also more likely to engage with content and ads on social networks than on search engines.

3. Social media platforms are now providing more robust search functionalities resulting in users easily finding what they are looking for. For example, Twitter's advanced search allows users to filter tweets by location, date, and keyword, while Facebook's search bar enables users to search for specific people, pages, groups, and posts.

Social networks are rapidly becoming the new search engines, providing users with personalized experiences, relevant content, and trustworthy recommendations. As social media platforms continue to evolve and offer more robust search functionalities, businesses and users alike will continue to turn to social networks as a primary source of information and discovery.
History of the Monument

April 19, 1783, marked America's true independence day: the moment when an official cessation of hostilities with the British Army came into effect. This event began the process that would end with official international recognition of the United States of America by the Treaty of Paris, signed later that year. The epicenter of this monumental event was the simple stone house on the banks of the Hudson at the village of Newburgh, occupied by General George Washington and his staff. If any place in America represents the final victory of the eight-year armed struggle for independence, it is Jonathan Hasbrouck's House, now known as Washington's Headquarters.

By the middle of the following century, the memory of this pivotal event in this unusual place was under threat. The generation of Americans who had fought for independence was passing, to be replaced by a new generation that remembered the conflict only through the tales of their elders. For these latter people, place became an issue irrevocably interwoven with history, since the locations where these heroic events took place increasingly represented the sole physical connection with the past. It was in this broader context of loss that a new era in the memory of the nation was born with a re-investment in the events of 1783, on the original ground.

In the 1840s, a group of concerned citizens banded together to save the Hasbrouck House, then under threat from the family's bankruptcy. The formation of the Newburgh Historical Society, with its mission to preserve the great sites and artifacts of America's formative struggle, marked a shift from an older philosophy of preservation, which emphasized evoking memories through paintings of old locations or saving pieces of key structures.
In advocating for New York state's purchase of the Hasbrouck House, the Newburgh Historical Society made the first move in the nation's history to preserve historic landscapes for the education and enjoyment of future generations. The results of this campaign produced the first publicly-owned historic site in the United States of America and established our tradition of house museums.

In October 1883, Newburgh once again took center stage with a week-long gala celebration of the Revolution's conclusion. Over a hundred thousand people descended on the city from around the world to take part in festivities that included parades, military demonstrations, and patriotic speeches. Out of these events came a new initiative that would mark a re-confirmation of the events of 1783 and would broadcast them to the world. Months earlier, in April, Abraham Lincoln's son, Secretary of War Robert Todd Lincoln, announced plans to erect a monument at Newburgh to commemorate "the events which took place there a century ago." Four years later, this monument would be unveiled as the Tower of Victory.

The autumn of 1883 marked the incorporation of the Newburgh Historical Society, which held its first meeting on Washington's Birthday in 1884. The members re-affirmed their commitment to the "discovery, collection and preservation" of the area's revolutionary history and took on a greater title as the Historical Society of Newburgh Bay and the Hudson Highlands.

One of the first items of business was to begin planning the centennial monument, originally envisioned as a statue of Washington that would "awaken increased interest and regard for the picturesque stone house now consecrated by so many memories of the past." By 1886, plans had expanded to enclose the statue in a stone tower that would "typify the rugged simplicity of the times and personages." Historical society members commissioned architects Maurice J.
Power and John Hemmenway Duncan, who would later become known for their work on the 1892 Columbian Exposition in Chicago, Grant's Tomb in Manhattan, and Prospect Park in Brooklyn, to design the tower. By the end of 1887, the monument was complete, broadcasting the site's significance to a new world-wide audience of visitors.

In 1950 a severe storm damaged the roof of the Tower of Victory and it was removed to prevent further damage to the base. For more than 65 years it has been closed to the public.

For the past five years, a volunteer committee through the Palisades Park Conservancy, chaired by Barney McHenry with help from Sue Smith and Matthew Shook, has become the latest to honor the site by advocating for the monument. The group has been raising awareness and funding for the restoration of the Tower of Victory through mailings, events and via social media. In 2014, local philanthropist Bill Kaplan pledged $100,000 and inspired many others to contribute. Donna Cornell and Jeffrey Werner helped connect the committee with donors, including discounted services to complete the landscaping. Wint Aldrich and Kevin Burke lent their preservation expertise to the project by shaping the mission of the fundraising campaign, and Denise Van Buren kept everyone on task. By the end of 2015, the committee was successful in raising $1.6 million - almost enough to complete the restoration!

As a member of the committee, I attended an update meeting at Washington's HQ on January 14th. We were informed that bids for a contractor went out in December and returned with a $1.9 million price tag. With a 50% matching grant from NYS Office of Parks, Recreation and Historic Preservation, the committee is now tasked with raising the remaining $150,000. If we can't source the funds by the end of January, it'll mean that the project bids expire and the project will be set back a few months.
Feel free to contact Matthew Shook email@example.com if you have any ideas to help the project meet its goals. I hope that I'll be able to update you all soon with a more specific timeline.

"I wasn't made for the great light that devours; a dim lamp was all I had been given, and patience without end to shine it on the empty shadows."
Katharine Jack is a primate behavioral ecologist whose research examines male reproductive strategies and hormonal correlates of male dominance rank and life history status. Jack has studied a number of different primate species throughout her career, though the bulk of her research focuses on a population of white-faced capuchin monkeys (Cebus imitator) in the Santa Rosa sector of the Área de Conservación Guanacaste, Costa Rica. The Santa Rosa primate project began in 1983 and is one of the longest-running research projects focusing on wild primates.

Jack began her research at the site in 1997, joining Dr. Linda Fedigan (University of Calgary) as a co-director in 2004 (they were joined by Dr. Amanda Melin, University of Calgary, in 2011). Via Jack’s collaborations with the Santa Rosa research team and a number of experts in the areas of primate genetics and endocrinology, her research makes use of long-term demographic, life history, behavioral, and biological data (including microsatellites, major histocompatibility genes, and hormones). In addition, since beginning her studies in Santa Rosa, she has been intimately involved in the ongoing study of the long-term population trends of the capuchin and howler monkeys in the park. Jack’s team has been conducting park-wide censuses of these two primates since 1983, in order to track the effects of forest protection, forest regeneration, and climate change on primate populations.

During her retreat, Jack worked with colleagues Erin Riley and Stacey Tecot on a manuscript advancing a nature-based approach to the study of primates. They also drafted an outline and writing plan for a book based on this manuscript; working title: “Being a primate: A nature-based approach to the study of primate behavior and adaptations”.
Called the “Chief Architect of the Constitution,” he wrote many of the Federalist Papers, which helped convince the States to ratify the Constitution. He also introduced the First Amendment in the first session of Congress. This was James Madison, born MARCH 16, 1751.

During the War of 1812, Madison proclaimed two National Days of Prayer, in 1812 and 1813. When the British marched on Washington, D.C., citizens evacuated the city, along with President and Dolly Madison. On August 25, 1814, as the British burned the White House, Capitol and public buildings, dark clouds began to roll in. A tornado sent debris flying, blew off roofs and knocked chimneys over on top of British troops. Two cannons were lifted off the ground and dropped yards away.

A British historian wrote: More British soldiers were killed by this stroke of nature than from all the firearms the American troops had mustered.

British forces fled in confusion and rains extinguished the fires. Madison then proclaimed a National Day of Public Humiliation, Fasting & Prayer to Almighty God on November 16, 1814. Two weeks after the War ended, Madison proclaimed a National Day of Thanksgiving & Devout Acknowledgment to Almighty God, on March 4, 1815.

The Moral Liberal contributing editor, William J. Federer, is the bestselling author of “Backfired: A Nation Born for Religious Tolerance no Longer Tolerates Religion,” and numerous other books. A frequent radio and television guest, his daily American Minute is broadcast nationally via radio, television, and Internet.
↪️ Using Services from Removelinebreaks.net

↪️ Tricks Find & Replace in Microsoft Office Word

- Paste text from PDF into Word.
- Select the text you want to fix, or press Ctrl + A to select all text.
- Open Find & Replace, also reachable through the shortcut Ctrl + H.
- In the Find what column, type ^p, Word's code for a paragraph mark (if your text uses manual line breaks instead, use ^l).
- In the Replace with column, just type the space key once. This means a line break will be replaced by a space.
- Then click Replace All, and all line breaks will be removed and replaced by spaces.
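For text that never passes through Word, the same ^p-to-space substitution can be scripted. Here is a minimal Python sketch (the helper name `remove_line_breaks` is our own, not from any library) that mirrors the Find & Replace trick and also collapses any doubled spaces the substitution leaves behind:

```python
import re

def remove_line_breaks(text: str) -> str:
    """Mirror Word's Find & Replace trick: swap each line break for a space."""
    # Normalize Windows (\r\n) and old-Mac (\r) line endings to plain \n first.
    text = text.replace("\r\n", "\n").replace("\r", "\n")
    # The actual ^p -> space replacement.
    text = text.replace("\n", " ")
    # Collapse any runs of spaces the substitution produced, and trim the ends.
    return re.sub(r" {2,}", " ", text).strip()

pdf_paste = "This sentence was\nbroken across\nthree lines."
print(remove_line_breaks(pdf_paste))  # This sentence was broken across three lines.
```

Unlike the Word recipe, this version also handles Windows-style \r\n endings, which PDF copy-paste often produces.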
FOIA at 50: How American Views of Transparency Have Changed

By Lily Rothman, TIME, July 21, 2016

The Freedom of Information Act was passed a half-century ago this month. A lot has changed since then.

It started in 1953, when California Congressman John Emerson Moss Jr. was denied a request to access information from the U.S. Civil Service Commission. His quest to establish a “right to know”—something that is not contained in the Constitution, and was coined in the mid-20th century—led directly to the passage, 50 years ago this month, of the Freedom of Information Act (FOIA). The bill passed unanimously in the House and, despite the numerous exceptions already enshrined within it, changed the role of secrecy in American government by shifting the burden of justification from those who would uncover to those who would shield.

The change wasn’t only on paper: the lifespan of FOIA has been contemporaneous with a major shift in the way Americans think about transparency, as shown by historical opinion polls compiled by the Roper Center for Public Opinion Research in honor of the law’s anniversary.
Back pain is a symptom of back injury that can arise from many causes. The pain can range from a dull, annoying ache to absolute agony. Ironically, the severity of back pain is often unrelated to the extent of physical damage. Muscle spasm from a simple back strain can cause excruciating back pain that can make it difficult to walk or even stand, whereas a large herniated disc or completely degenerated disc can be completely painless.

Lower, or lumbar, back pain is the second most common illness-related reason given for a missed workday, and work-related back injury is the number one occupational hazard. Low back pain, generally as a result of degenerative disc disease, is the most prevalent cause of disability in people under age 45.

The causes of back pain can be very complex, and there are many structures in the lower back that can cause back pain. The following conditions can cause pain:

Many cases of back pain are caused by stresses on the muscles and ligaments that support the spine. Sedentary jobs and lifestyles may create a vulnerability to this type of stress or damage. Obesity, which increases both the weight on the spine and the pressure on the discs, is another factor. Strenuous sports such as football and gymnastics can also cause damage resulting in back pain.

People with one or more of the following indications have a higher risk of developing back pain:

Many types of lower back pain have no known anatomical cause, but the pain is still real and needs to be treated. However, lower back pain can usually be linked to a general cause (such as muscle strain) or a specific and diagnosable condition (such as degenerative disc disease or a herniated disc). To qualify for disability benefits because of back pain, you must have a specific diagnosis for what's causing the pain.
If the doctor hasn't found any abnormal physical results after doing x-rays, MRIs, and lab tests, yet you still suffer from disabling pain, Social Security will not be able to consider you for disability benefits because of your back pain. However, Social Security will consider pain caused by a properly documented mental disorder, such as somatoform pain disorder. For more information, see our articles on how Social Security treats chronic pain and how Social Security evaluates common back problems.