ound as the IQ sinks” is an amazing example of chutzpah. In a column meant for readers in Los Alamos, certainly a mathematically aware county, he tells us that today’s kids’ aptitudes are being degraded by not doing enough long division exercises.
He harkens back to a better time when students learned better how to do “real” calculations. Maybe some students need to do more rote exercises and can skip Euclid’s proof that the square root of two is irrational. (I change my mind. That proof is just too beautiful.) But we need to teach the students with the highest aptitude for science, technology, engineering and math too. Concepts-based math is just right for these students.
Sure, we all need to learn some actual methods, but in my view the old-school way, teaching students to do endless long divisions like 1/7 = 0.142857142857…, is not the whole story. I wish I had learned earlier the art of estimation — how to calculate on your feet an answer that is good enough, and how to know how good is good enough.
These are quick methods used in science and engineering but also for doubling recipes, calculating tips, estimating your gas mileage — and knowing that 29.787 MPG is not a better answer than 30 MPG.
I would have appreciated a column on this subject that interviewed parents and teachers, both here and elsewhere in the state, reporting on their responses and suggesting improvements based on those responses.
|
Protected areas have long been recognised as an important tool for protecting biodiversity, and are now increasingly recognised for their role in protecting carbon; recent global analyses by UNEP-WCMC (2008) have shown that approximately 15% of the global carbon stock is currently found within protected areas. Carbon emissions from deforestation account for an estimated 20% of global carbon emissions (IPCC 2007).
In collaboration with the University of Copenhagen and the University of Queensland, we aim to measure protected area effectiveness, at a regional and national scale, using a landscape modelling approach. As part of this project we are working to update and analyse the Protected Areas Management Effectiveness (PAME) database, which is linked to the
World Database on Protected Areas (WDPA), and holds information on the management and governance of over 6000 protected areas worldwide.
|
“Knowledge is power” is probably one of the most famous sayings in the modern world. We humans love to think, reason and interpret. As the book The 48 Laws of Power puts it: “Humans are machines of interpretation and explanation; they have to know what you are thinking.” This is the fundamental principle behind the fourth law. The more you talk, the more information you give away, and thus the more your power decreases. By talking, we reveal our intentions, biases, quirks and personality — basically, we reveal who we are. When this information reaches the ears of other people, it can be used to stop your plans, it can be used against you, or it can be used to gain your favor.
On the other hand, when we say less than necessary, we put others on the defensive. The silence makes them uncomfortable and, as a result, they try to fill the void with comments and talk. In the process they reveal something about themselves, which you can later use against them.
When King Louis XIV held court, he would say very little. Often all he would say was “I shall see”; then, without telling anyone, he would implement whatever he had decided. Because his officials did not know what he wanted, they were unable to tailor their positions to the king’s wishes, and his silence made them uncomfortable. As a result, they talked far more in an attempt to convince him of their stand, exposing their own beliefs in the process. King Louis XIV would then use their words against them.
Words once uttered cannot be taken back, so choose your words carefully.
|
In addition to causing damage to your home, a rodent infestation can actually make your family sick. In fact, rats and mice can transmit more than two dozen different diseases to humans. Transmission can occur through direct contact with rodents, their droppings, urine or saliva, or indirectly through contaminated food, infected fleas, ticks or mites.
If you suspect you have rats or mice in your home, call a professional! Handling rodents, infested insulation or any other contaminated materials is a potential biohazard and can lead to these common rodent-related diseases.
4 Reasons to Rid Your Home of Mice & Rats
Salmonella is a food-borne bacterium that affects the digestive tract. Humans can contract salmonella if they consume food that has been contaminated by rodent droppings, or food that has been prepared on a contaminated surface. Salmonella symptoms include abdominal cramps, diarrhea and fever lasting 4-7 days. Most people recover without treatment; however, young children, elderly adults and those with compromised immune systems may require hospitalization.
Hantavirus is found in the droppings, urine and saliva of infected deer mice and other rodents, and when inhaled by humans it can cause a rare but potentially fatal lung disease known as Hantavirus Pulmonary Syndrome. Symptoms are similar to those of the flu, and include fatigue, fever, chills, muscle aches, nausea, vomiting and diarrhea, as well as a dry cough, difficulty breathing, dizziness and headaches. There is no vaccine or cure; however, treatment may involve intensive care, oxygen therapy and mechanical ventilation.
While more prevalent in the Western United States than here in New York and Vermont, the bubonic plague bacterium is most commonly transmitted to humans through the bite of an infected flea. Responsible for the deaths of millions in the Middle Ages, plague is now treatable with common antibiotics. However, left untreated it can cause serious illness and death. Symptoms include the sudden onset of fever, headache, chills and weakness, with one or more swollen, tender and painful lymph nodes.
Lymphocytic choriomeningitis, or LCM, is a rodent-borne virus primarily carried by house mice. Although rarely fatal, it can cause serious neurological problems, including encephalitis and meningitis. Humans typically contract the LCM virus after coming into contact with the urine, droppings, saliva or nesting materials of infected rodents. Early symptoms may present like the flu, with neurological symptoms appearing in the second phase of illness. Common treatments involve anti-inflammatory drugs and hospitalization, depending on the severity of symptoms.
Prevent Rodent-Related Diseases
Don’t risk your family’s health. If you live in Northern New York or Western Vermont, and you suspect you have rodents in your home, contact Nature’s Way Pest Control for a free inspection. Our professional pest control experts will inspect your home, set traps and remove any potential biohazards. We can also perform attic cleanouts, install special pest control insulation and prevent mice and rats from entering with our professional Pest Block services.
|
This study aimed to investigate students' opinions on using blogs for writing and giving comments among classmates outside class. Results from a questionnaire show that students felt that giving comments on blogs helped them improve their critical thinking, problem solving and communication skills. Some students felt uncomfortable giving comments when others knew who they were. However, most of the participants preferred online discussion to face-to-face discussion, as they could express their ideas in the online discussion. More than 90% agreed that blogging supports collaborative learning, both in knowledge development and group sharing. Many students agreed that free-writing was too difficult, while nearly all the students indicated that blogs could help them learn how to write: they learned and used new vocabulary and improved their writing skills. Although half of the students agreed that the blog assignments were too time-consuming, they felt the assignments encouraged learner independence.
Sumonta Damronglaohapan, Rajamangala University of Technology Srivijaya, Thailand
Stream: Web-based Learning
This paper is part of the ACTC2015 Conference Proceedings.
|
In this edition of our Dominican English Dictionary, we bring you a new Dominican word. As always, you will find several examples with which we try to explain, as best we can, these words used daily by Dominicans.
Dominicans are a creative people, something which is especially evident in the way we talk – often using words that are not listed in the dictionary.
Now we introduce you to a very common Dominican word, used especially when you want to express that a person, animal or thing is damaged or in bad shape …
“Deguabinao” means “in bad shape” or “tired”. It is often used when an object is about to become useless.
For a better understanding, next we present some of the synonyms this word has:
- Lack of strength
- Worn out
And the list continues…
Estoy deguabinao/ I’m really tired.
No te sientes en ese mueble que está deguabinao/ Don’t sit on that couch because it is broken.
Juan cayó deguabinao de la escalera/ Juan took a nasty fall down the stairs.
|
Brunsma, David L. (2004). The School Uniform Movement and What It Tells Us about American Education. Scarecrow Press. [ISBN: 157886125X]. Held by the University of Melbourne Library, call number: UniM ERC 371.51 BRUN.
Scientific School Uniform Research
The scientific research on uniforms is just starting to come in. The following discusses a paper from The Journal of Education Research (Volume 92, Number 1, Sept./Oct. 1998, pp. 53-62) by David L. Brunsma from the University of Alabama and Kerry A. Rockquemore of Notre Dame:
Effects of Student Uniforms on Attendance, Behavior Problems, Substance Abuse, and Academic Achievement
This study showed that uniforms did not lead to an improvement in attendance, behavior, drug use, or academic achievement.
Here's the abstract from their study:
Mandatory uniform policies have been the focus of recent discourse on public school reform. Proponents of such reform measures emphasize the benefits of student uniforms on specific behavioral and academic outcomes. Tenth grade data from The National Educational Longitudinal Study of 1988 was used to test empirically the claims made by uniform advocates. The findings indicate that student uniforms have no direct effect on substance use, behavioral problems, or attendance. Contrary to current discourse, the authors found a negative effect of uniforms on student academic achievement. Uniform policies may indirectly affect school environments and student outcomes by providing a visible and public symbol of commitment to school improvement and reform.
Brunsma and Rockquemore wanted to investigate the extraordinary claims being made about how wonderful school uniforms are, particularly those coming from Long Beach, California. It was being claimed that mandatory uniform policies were resulting in massive decreases (50 to 100 percent) in crime and disciplinary problems.
It is typically assumed, as exemplified in Long Beach, that uniforms are the sole factor causing direct change in numerous behavioral and academic outcomes. Those pronouncements by uniform proponents have raised strident objections and created a political climate in which public school uniform policies have become highly contested. The ongoing public discourse is not only entrenched in controversy but also largely fueled by conjecture and anecdotal evidence. Hence, it now seems critical that empirical analysis should be conducted to inform the school uniform debate. In this study, we investigated the relationship between uniforms and several outcomes that represent the core elements of uniform proponent's claims. Specifically, we examined how a uniform affects attendance, behavior problems, substance abuse, and academic achievement. We believe that a thorough analysis of the arguments proposed by uniform advocates will add critical insight to the ongoing debate on the effects of school uniform policies. (Brunsma and Rockquemore, 1998, pg. 54)
The authors point out that if uniforms work, they should see some of the following trends in schools with uniforms:
1. Student uniforms decrease substance use (drugs).
2. Student uniforms decrease behavioral problems.
3. Student uniforms increase attendance.
4. Student uniforms increase academic achievement.
They suspected that when other variables affecting these four items were accounted for, it would be shown that uniforms were not the cause for improvement.
How They Did Their Study
They used data from the National Educational Longitudinal Study of 1988 (NELS:88) and three follow-up studies. These studies tracked a national sample of eighth graders (in 1988) from a wide variety of public and private schools and followed their academic careers through college. Some of the data collected in the studies included uniform policies, student background (economic and minority status), peer group (attitudes towards school and drug use), school achievement, and behavioral characteristics (how often each student got into trouble, fights, suspensions, etc.). The authors concentrated on data from the students' 10th-grade year.
Some of the independent variables they considered were sex, race, economic status, public or private school, academic or vocational "tracking", rural or urban district, peer proschool attitudes, academic preparedness, the student's own proschool attitudes, and, most importantly, whether or not the students wore uniforms. The researchers wanted to determine if there was a tie between these variables and desirable behavior by the students. The areas in which they were looking for improvement as a result of these variables included reduced absenteeism, fewer behavioral problems, reduced illegal drug use, and improved standardized test scores; the researchers considered this second group to be the dependent variables. The goal of their study was to determine if there was any relationship between the independent variables (particularly uniforms) and the dependent variables.
The authors took all of the data for these variables from the NELS:88 study and performed a regression analysis to see if any of the independent variables were predictors of any of the dependent variables. If there was a strong tie in the data between any two variables ( uniforms and absenteeism, for example), it would show up in the study as a correlation coefficient close to 1 or -1. A correlation coefficient near 0 indicates no relationship between the two variables. So, if wearing uniforms had a large effect on behavior, we would expect to see a correlation coefficient of say 0.5 between uniforms and measures of good behavior. If we see a very low correlation coefficient between these two, then we know that wearing uniforms has no real effect on behavior.
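To make the correlation mechanics concrete, here is a minimal sketch of how a coefficient near 0 is computed; the numbers below are invented for illustration and are not the NELS:88 data or the authors' actual regression:

```python
import statistics

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Toy data: 1 = wears a uniform, 0 = does not; the test scores are made up.
uniform = [1, 1, 1, 1, 0, 0, 0, 0]
scores  = [52, 61, 48, 57, 50, 62, 47, 58]

r = pearson_r(uniform, scores)
# Here r is close to 0: uniform status tells us almost nothing about scores,
# which is the pattern the study found for most outcomes.
print(round(r, 3))
```

A coefficient near 1 or -1 would instead mean that knowing one variable lets you predict the other well, which is what uniform advocates' claims would imply.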
The only positive result for uniforms that the study showed was a very slight relationship between uniforms and standardized achievement scores. The correlation coefficient was 0.05, indicating a very slight possible relationship between the two variables, but showing that uniforms are a very poor predictor of standardized test scores and that the relationship is much weaker than has been indicated in the uniform debate. Notice that 0.05 is much closer to 0 than to 1. Other than this one weak, possible relationship, uniforms struck out. In the authors' own words:
Student uniform use was not significantly correlated with any of the school commitment variables such as absenteeism, behavior, or substance use (drugs). In addition, students wearing uniforms did not appear to have any significantly different academic preparedness, proschool attitudes, or peer group structures with proschool attitudes than other students. Moreover, the negative correlations between the attitudinal variables and the various outcomes of interest are significant; hence, the predictive analysis provides more substantive results.
In other words, the authors saw no relationship between wearing uniforms and the desirable behavior (reduced absenteeism, reduced drug usage, improved behavior). They did, however, see a strong relationship between academic preparedness, proschool attitudes, and peers having proschool attitudes and the desirable behaviors. Furthermore, they saw no relationship between wearing uniforms and the variables that do predict good behavior (academic preparedness, proschool attitudes, and peers having proschool attitudes).
Based upon this analysis, the authors were forced to reject the ideas that uniforms improved attendance rates, decreased behavioral problems, decreased drug use, or improved academic achievement. The authors did find that proschool attitudes from students and their peers and good academic preparedness did predict the desired behavior. They saw that wearing uniforms did not lead to improvements in proschool attitudes or increased academic preparation.
Brunsma, D.L. and Rockquemore, K.A. (1998). Effects of Student Uniforms on Attendance, Behavior Problems, Substance Abuse, and Academic Achievement. The Journal of Education Research, 92(1), 53-62.
|
Dealing with a medical condition can be one of life’s greatest challenges. But health problems are exacerbated further when patients don’t have access to consistent, quality care. Such is the case for roughly 65 million Americans living in healthcare deserts today.
What Are Healthcare Deserts?
Healthcare deserts are defined as locations in which a population is more than 60 minutes away from the closest acute-care hospital. And with 65 million Americans already living under these conditions, the issue is only expected to get more challenging as 100,000 doctors are due to leave the industry by 2025.
With these factors set for a collision course that could greatly impair the healthcare industry’s ability to provide the level of care needed for today’s patients, telehealth appears to be the answer to resolving this looming issue.
The Benefits of Telehealth Services in Rural Areas
By introducing telehealth services, patients will be empowered by gaining a direct line of communication with their medical team from the comfort of their own home. This proves particularly effective in cases where a specialist, not a primary care physician, is required.
This opinion was recently espoused by the Children’s Health Fund, which claims that 15 million American youths reside in areas with fewer than one health professional for every 3,500 residents. The group argued that both time and transportation are significant barriers for children in need of specialty care—even those with insurance. Those in poor, rural areas are even more at risk of receiving inadequate medical care. The introduction of telehealth services in rural areas would enable families to connect with doctors from around the country, including the types of specialists that generally flock to urban centers.
Who Could Benefit from Telehealth Services?
The introduction of telehealth services will ensure a higher quality of care for these 15 million children, but will the same impact be felt for an aging adult population?
The answer is a resounding yes. While children may be able to rely on their families to manage the logistics of getting to and from a doctor, an impaired adult may be on their own. Additionally, it is the Baby Boomer population driving a surge in demand for healthcare at either their own home or from within the comfortable confines of an assisted living facility.
Despite the numerous benefits of telehealth services, the aforementioned Children’s Health Fund whitepaper referred to several challenges that still act as impediments to wider telehealth adoption:
· Legal and licensing barriers preventing telehealth from crossing state lines
· Lack of access to Internet or smart phones
· Inability of healthcare systems and small practices to afford telehealth technologies
· Lack of a system reimbursement for consulting specialists and primary care teams
· Quality assurance and regulatory concerns
However, though these challenges are very real, the right telehealth services provider can offer a comprehensive and systematic method for building a program that addresses these concerns while simultaneously increasing your ability to improve patient outcomes. Discover how an effective telehealth program can improve your healthcare organization and benefit those in rural areas.
|
Honoured in Lewes for centuries, St Pancras has been one of Rome’s own favourite saints ever since he was martyred there in 304 during the Diocletian persecution. Christian Roman soldiers adopted the youthful saint as a patron; little else is known about him. He may have been the patron of Pope St Gregory the Great’s English mission, led by St Augustine, who dedicated his first English church, in Canterbury, to St Pancras in 597.
Lewes quickly became the third most significant Pancras site in England, after Canterbury and London. In the 1070s, a Saxon church of St Pancras became the nucleus of the important Cluniac Priory of St Pancras, which was dissolved in 1537.
When the 1829 Catholic Emancipation Act allowed Catholics to worship openly, Mass was said at 10 Priory Crescent, overlooking the priory ruins. When a church was built in 1870, St Pancras was the obvious choice for patron.
The Priest’s house was built at the same time as the church of 1870. Two cottages on the corner of Irelands Lane, on the site of the present forecourt, were used as a parish school.
These buildings and their furnishings were all paid for by the parents of the first parish priest Fr Hubert Wood. Mrs Wood also made several fine vestments, some of which are still in use today.
In 1930 St Pancras Catholic Primary School was built in De Montfort Road, allowing redevelopment in Irelands Lane. At that time the first church was in a poor state of repair and had to be pulled down. The present church, built in 1939, was designed by Edward Walters and cost £7,500. The stained glass windows in the nave and chancel are from the first church, and there is also a commemorative tablet to be found in the Lady Chapel. Other stained glass windows have been added more recently, with St Philip Howard commemorating the fiftieth anniversary of the church in 1989 and a modern design marking the new millennium.
A number of re-arrangements to the interior of the church have occurred over the years, the most recent development being the addition of the Cluny Annexe, consisting of the Wood Room and Challoner Hall. This build replaces the previous Canon O'Donnell Hall, sited a few hundred yards up the road on the island formed by Spital Road and Western Road.
|
Catsear Treatment Guide
Catsear, also known as flatweed and false dandelion, is a low-lying perennial weed often found in lawns, pastures, and nature strips. The weed tolerates drought and low-nutrient soils well; if catsear is overtaking a lawn, this often indicates poor soil health.
Catsear is often mistaken for true dandelion (Taraxacum officinale) due to its remarkably similar appearance. It forms a flat basal rosette with rough leaves covered in stiff hairs, and has leafless flower stems bearing up to 7 bright yellow flowers on each branched stem. Catsear can be distinguished from dandelion by its rough, hairy leaves and multi-branched stems with several flowers, whereas dandelions have unbranched, hollow flower stems, each with a single flower.
How to Kill Catsear
To treat 200 m² of turf, mix 130 mL of Cutlass M Herbicide into 5-8 L of water and apply to actively growing weeds and moist soil.
- Do not mow turf for two days before or after application
- Only spray actively growing weeds
- Ensure all weeds are thoroughly wet from application
- Avoid fertilising within two weeks of spraying
- Do not re-apply to Buffalo Grass within 12 months
Always read product label prior to use.
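The label rate above scales linearly with the area being treated, so the dose for other lawn sizes is simple arithmetic. This is a hypothetical helper, assuming only the 130 mL per 200 m² rate quoted in this guide:

```python
LABEL_RATE_ML = 130.0   # mL of Cutlass M Herbicide at the label rate
LABEL_AREA_M2 = 200.0   # area in square metres the label rate covers

def herbicide_dose_ml(area_m2: float) -> float:
    """Scale the label rate linearly to the lawn area being treated."""
    return LABEL_RATE_ML * area_m2 / LABEL_AREA_M2

# A 100 m² lawn needs half the label dose.
print(herbicide_dose_ml(100))  # → 65.0
```

The water volume scales the same way, so a 100 m² lawn would take roughly 2.5-4 L of water; always defer to the product label if it states otherwise.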
|
Quality control is the component of quality management that ensures products and services meet requirements. It is a working method that promotes the measurement of the quality characteristics of a unit, compares them with the established standards, and analyses the differences between the results obtained and the desired results in order to make decisions that will correct any deviations. Technical specifications define the type of controls that must be performed to ensure the construction works are carried out correctly. They cover not only products and materials, but also the execution and completion of the works.
One way of managing quality is based on the inspection or verification of finished products. The aim is to filter products before they reach the client, so that products that do not comply with requirements are discarded or repaired. This reception control is usually carried out by people who were not involved in the production activities, which means that costs can be high and that preventative actions and improvement plans may not be effective. It is a final control, situated between producer and client, and although it has the advantage of being impartial, it has many drawbacks: information flows slowly, and the inspectors are neither familiar with the conditions of production nor responsible for production quality. When tests are destructive, the decision to accept or reject a full batch must be made on the basis of the quality of a random sample. This type of statistical control provides less information and carries sampling risks. Nevertheless, it is cheaper, requires fewer inspectors, and speeds up decision-making, while the rejection of an entire batch encourages suppliers to improve their quality. This kind of control can also identify the causes of variation and thus establish procedures for their systematic elimination.
Statistical control can be applied to the final product (acceptance control) or during the production process (process control). Statistical controls at reception establish sampling plans with clearly defined acceptance or rejection criteria, and complete batches are tested by random sampling. Sampling control can be based on inspection by attributes in accordance with the ISO standards. A construction company must reduce the costs of poor quality as much as possible and ensure that the output of its processes complies with the client's requirements. Both internal and external controls can be carried out. For example, the concrete received by the contractor can be checked by an independent entity; the execution of steelworks can be controlled by the project manager (on behalf of the client); or the construction company can establish an internal control for the execution of the building work.
Quality assurance is a set of planned and systematic actions to ensure that products and services comply with specified requirements. It involves not only checking the final quality of products to avoid defects, as in quality control, but also checking product quality in a planned way at all production stages. It is the development of work and product design procedures to prevent errors from occurring in the first place, based on planning supported by quality manuals and tools. Once agreement has been reached on the requirements of a quality management system, it is possible to define a series of generic standards applicable to any type of organisation. The international standards, generically known as the ISO 9000 series, are the most widespread and generally accepted in developed countries. The series consists of four basic interdependent standards supported by guides, technical reports and technical specifications.
Companies can only be certified against the requirements of the ISO 9001 standard; it is the standard used to certify the performance of a quality management system. However, if the aim is to improve effectiveness, the objectives of the ISO 9004 standard are broader in scope. The principles underlying the management of quality in these standards are: customer focus, leadership, involvement of people, process approach, system approach to management, continual improvement, factual approach to decision-making, and mutually beneficial supplier relationships. The ISO 9001 standard specifies requirements for a quality management system where an organisation needs to demonstrate its ability to consistently provide products that meet the requirements of customers and applicable regulations. These requirements address the quality management system, management responsibility, resource management, product realisation, and measurement, analysis and improvement.
When a quality system is applied to a product as complex and unique as construction, a specific quality plan must be drawn up by applying the company's global system to the particular project. The plan must be drafted by the contractor before the start of the construction works and reviewed throughout its execution. The quality plan applies to the materials, work units and services that have been specifically chosen by the construction company in order to comply with the quality requirements stipulated in the contract. The quality plan is drawn up for the construction works when a preventative approach is needed to guarantee construction quality, even though there may also be a quality manual, in accordance with the ISO 9001 standard requirements.
|
Editor's note: Naina Subberwal Batra is the chairperson and CEO at the Asian Venture Philanthropy Network, a Singapore-based non-profit that has offices in 13 countries around the world, including Vietnam. AVPN has over 487 members from 32 countries and undertakes field building activities in Asia while providing a range of networking and learning services to support its members and followers to create meaningful social impact.
We are drowning in plastics. Unless we change our behavior, there will be more plastic by weight than fish in the ocean by 2050. Solving this crisis requires us to do more than just banning plastic straws. We need a paradigm shift. We must adopt deep structural changes to our plastic production and consumption patterns in order to move away from the extractive linear model of ‘take, make, use and dispose’ towards a ‘closed-loop’ circular economy – an economy that is intentionally restorative.
Over the past year, governments in Southeast Asia have been in the hot seat over plastic pollution. Indonesia, the Philippines, Thailand and Vietnam are some of the world’s top plastic polluters. Together with China, they account for up to 60 percent of the plastic waste leaking into our oceans. In our joint report with ECCA Family Trust – released last week – we identify the main challenges as a lack of infrastructure and financing, poor public awareness, poor execution of recycling policies, illegal dumping, as well as unplanned industrial development.
The good news is that governments at both the national and city levels are stepping up their efforts to reduce plastic pollution. Vietnam accounts for six percent of global marine plastic garbage flows or some 0.28-0.73 million metric tons annually. Last week, Prime Minister Nguyen Xuan Phuc approved the National Action Plan on Ocean Waste Management by 2030. This plan aims to reduce plastic debris discharged into the ocean by 75 percent and end the usage of disposable plastic products and fishing gear in coastal tourist resorts within the next 10 years. At the city level, the Chairman of the People’s Committee of Phu Quoc District has signed up to the WWF Plastic Smart Cities initiative announced last month and launching in February next year. WWF is working with district officials to develop an action plan to fight plastic pollution, establish circular economy projects, and test innovative solutions. These efforts are laudable but we cannot rely on governments alone. To solve the problem, we need coordinated multi-stakeholder support for a circular economy.
Consumer-goods companies have been struggling to rethink their plastic packaging but an investment fund in Singapore may drive change. Last week, investment management firm Circulate Capital closed its debut Circulate Capital Ocean Fund with a total capital commitment of US$106 million from PepsiCo, Danone, Unilever, and The Coca-Cola Company, among others. The fund will make debt and equity investments across the entire plastics value chain – from alternative materials to waste management infrastructure to advanced recycling technology. It seeks to demonstrate that investments in turning plastic from waste into a resource can provide attractive financial returns.
More and more closed-loop circular economy initiatives are emerging amongst local communities and entrepreneurs. In Vietnam, for example, Evergreen Labs develops and supports businesses to address pressing environmental and social challenges, including waste management. One of these is ReForm, a scalable social franchise model. ReForm uses the existing infrastructure of garbage collection centers and transforms them into small production facilities. The centers are equipped with low-cost machinery that enables workers to produce new tradable products. In addition to transforming low-grade plastic that might not otherwise be collected or reutilized, it improves the income of waste pickers, generates new jobs, and creates a local circular economy. This example represents a low-cost solution that, when scaled and connected to other efforts, could potentially move the needle on plastic waste.
These questions of scale and coordination are important ones. And this is where networks like AVPN demonstrate their true value. We help to remove barriers to scale by providing platforms and mechanisms that connect investors and capacity-builders with social enterprises and non-profits who require both financial and non-financial resources. The AVPN Southeast Asia Summit, happening in Bali, Indonesia in February next year, is one such platform that helps social investors build their awareness of new opportunities and find local collaborators. Similarly, our Climate Action Platform (CAP) brings together the widest possible range of stakeholders to collectively develop the scale, support and investment to tackle climate change. Projects like those by Evergreen Labs are exactly the kind of opportunities that the CAP presents to investors.
I believe there are sizeable and valuable opportunities to create a circular economy within the plastics value chain in Southeast Asia. But at present efforts are too fragmented and uncoordinated to have impact at scale. The Ellen MacArthur Foundation estimates that 95 percent of the material value of plastic packaging – valued at between US$80 and $120 billion annually – is lost to the global economy after a brief initial use. If we – investors, businesses, governments, and consumers – can move the plastics industry into a positive spiral of value capture, we will do an enormous service to both our oceans and our economy. A world in which plastic never becomes waste is not beyond the realms of possibility. But for it to become a reality, we need to work together.
|
The 4th Duke of Portland began his education in Ealing, at the school run by Dr Samuel Goodenough (later Bishop of Carlisle). He later went on to attend Westminster School and then Oxford University, though he spent only a brief amount of time at the latter after his father, the 3rd Duke, decided to send him to complete his education at The Hague.
William was an active politician. In 1790 he became M.P. for Petersfield, and in 1791 he was elected for Buckinghamshire - a seat which he then held for five successive parliaments without ever being opposed. Between 1794 and 1842 he held the position of Lord Lieutenant of Buckinghamshire. In 1806 he rejected an offer from Lord Grenville to enter the House of Lords through the family barony of Ogle, because he differed with the administration on a number of issues. Shortly afterwards, a new government was formed headed by his father, the 3rd Duke, and William was appointed a junior Lord of the Treasury, but he held the post for only a few months.
In terms of his political beliefs, the 4th Duke became more and more liberal as time went on, and after his succession to the dukedom developed a close association with George Canning. When Canning formed a government in 1827, the 4th Duke accepted the office of Lord Privy Seal. He later held the post of Lord President in Viscount Goderich's short-lived administration. He had no real desire for political office, however, and after his period as Lord President ended he took little further part in national political life. He did, though, take an active interest in political affairs through the activities of his son, Lord George Bentinck.
From his position outside the government, the 4th Duke continued to follow politics closely. He was a supporter of the Reform Bill and a prominent supporter of agricultural protection. As a result, he used his political influence to oppose the Earl of Lincoln in the bitter battle for the South Nottinghamshire constituency in 1846.
The 4th Duke was heavily involved in the management of the family estates, and, after finding them burdened with debts when he inherited, was highly successful in reversing the financial situation. He was particularly interested in farming methods and techniques and undertook several drainage schemes, gaining a reputation as an agricultural improver. He was also central to the development of Troon Harbour in Ayrshire, Scotland, and to the construction of the railway which linked to it.
Other interests included the study of shipbuilding and naval design. He arranged a number of trials with the Admiralty, in which his own and other private yachts competed with some of the fastest ships in the navy. The duke was also a keen devotee of horse racing. He was a tenant of the Jockey Club at Newmarket, and was responsible for many improvements there, including the turf, the gallops and the building of the Portland stand.
In 1795 he married Henrietta Scott (d 1844) of Balcomie, Fife, by whom he had 9 children:
- William Henry (1796-1824), Marquess of Titchfield
- William John (1800-1879), Marquess of Titchfield from 1824 and later 5th Duke of Portland
- [William] George Frederick (1802-1848), politician
- [William] Henry (1804-1870), M.P.
- Margaret Harriet (1798-1882)
- Caroline (d 1828)
- Charlotte (1806-1889), m John Evelyn Denison, later Viscount Ossington in 1827
- Lucy (1808-1899), m Charles Augustus, Baron Howard de Walden in 1828
- Mary (1810-1874), m Sir William Topham in 1874
- The 4th Duke's papers are part of the Portland (Welbeck) Collection held in Manuscripts and Special Collections and include extensive personal, political and estate correspondence
- The Portland (London) Collection, also held in Manuscripts and Special Collections, contains papers relating to the estate business of the 4th Duke
- The Portland Estate Papers held at Nottinghamshire Archives also contain items relating to the 4th Duke's properties
- Details of collections held elsewhere are available through the National Register of Archives.
Though there are no published biographies exclusively dedicated to the 4th Duke of Portland, his biographical details feature in the following:
- Turberville, A.S., A History of Welbeck Abbey and its Owners, Volume 2, Chapters 15, 16 and 17 (London, 1938)
|
Footprints on the Moon
This is a special encore presentation of the Quantum episode which was made to celebrate the 20th anniversary of the first Moon landing.
First broadcast in 1989, this definitive account wonderfully captures the spirit of the times and the excitement behind the first mission to land on the Moon.
At 12:56pm AEST on July 21, 1969 Neil Armstrong became the first human to walk on the Moon.
It was the culmination of over 10 years of intense competition between the United States and the Soviet Union that used up massive resources and money and saw the sacrifice of human lives in the race for space supremacy.
So what was the fascination with going to the Moon? And why spend so much effort on reaching it, only to seemingly abandon interest after three and a half years of manned lunar landings?
For the answers we invite you to step back in time and relive the race to get the first man on the Moon.
What are your memories of the Moon landing?
Do you remember where you were when you first saw the amazing images beamed back from the Moon? Even if you weren't alive at the time, what significance does the Moon landing hold for you today? And should we be going back to the Moon?
Share your memories on our message board.
|
Stem canker disease of soybean is caused by the fungus, Diaporthe phaseolorum var. caulivora, and at present there are no soybean varieties known to be highly resistant to this disease. Stem canker takes its name from the resemblance of the discolored area of an infected stem to a canker. As the infected area on a stem enlarges, the stem is girdled and the portion of the plant above the girdled area is killed. Stem canker seriously affects soybeans in the north central region of the United States and has been reported to cause heavy losses (Athow and Caldwell, 1954; and Dunleavy, 1954, 1955). A description of the disease and the causal organism has been published by Welch and Gilman (1948) and Athow and Caldwell (1954). Differences in varietal susceptibility have been reported by Hildebrand (1953a) and Beeson and Probst (1955).
Proceedings of the Iowa Academy of Science
© Copyright 1956 by the Iowa Academy of Science, Inc.
Dunleavy, John M.
"A Method for Determining Stem Canker Resistance in Soybean,"
Proceedings of the Iowa Academy of Science, 63(1), 274-279.
Available at: https://scholarworks.uni.edu/pias/vol63/iss1/22
|
The Little Robber Girl
The Boy Who Cried Wolf
AMERICAN INDIAN STORIES
Animal Sketches And Stories
Blondine Bonne Biche and Beau Minon
BRER RABBIT and HIS NEIGHBORS
CHINESE MOTHER-GOOSE RHYMES
FABLES FOR CHILDREN
FABLES FROM INDIA
FATHER PLAYS AND MOTHER PLAYS
FIRST STORIES FOR VERY LITTLE FOLK
For Classes II. and III.
For Classes IV. and V.
For Kindergarten and Class I.
FUN FOR VERY LITTLE FOLK
Good Little Henry
JAPANESE AND OTHER ORIENTAL TALES
Jean De La Fontaine
King Alexander's Adventures
KINGS AND WARRIORS
LAND AND WATER FAIRIES
Lessons From Nature
LITTLE STORIES that GROW BIG
MODERN FAIRY TALES
MOTHER GOOSE CONTINUED
MOTHER GOOSE JINGLES
MOTHER GOOSE SONGS AND STORIES
Myths And Legends
NEGLECT THE FIRE
ON POPULAR EDUCATION
PLACES AND FAMILIES
Poems Of Nature
RESURRECTION DAY (EASTER)
RHYMES CONCERNING "MOTHER"
RIDING SONGS for FATHER'S KNEE
ROMANCES OF THE MIDDLE AGES
SAINT VALENTINE'S DAY
Selections From The Bible
SLEEPY-TIME SONGS AND STORIES
Some Children's Poets
Songs Of Life
STORIES BY FAVORITE AMERICAN WRITERS
STORIES FOR CHILDREN
STORIES for LITTLE BOYS
STORIES FROM BOTANY
STORIES FROM GREAT BRITAIN
STORIES FROM IRELAND
STORIES FROM PHYSICS
STORIES FROM SCANDINAVIA
STORIES FROM ZOOLOGY
STORIES for LITTLE GIRLS
THE DAYS OF THE WEEK
The King Of The Golden River; Or, The Black Brothers
The Little Grey Mouse
THE OLD FAIRY TALES
The Princess Rosette
THE THREE HERMITS
THE TWO OLD MEN
UNCLES AND AUNTS AND OTHER RELATIVES
VERSES ABOUT FAIRIES
WHAT MEN LIVE BY
WHERE LOVE IS, THERE GOD IS ALSO
The Dragon's Tail
from Fairy Tales From The German Forests
I wonder if the girls and boys who read these stories, have heard of the
charming and romantic town of Eisenach? I suppose not, for it is a
curious fact that few English people visit the place, though very many
Americans go there. Americans are well known to have a special interest
in old places with historical associations, because they have nothing of
the sort in America; moreover many of them are Germans by birth, and
have heard stories of the Wartburg, that beautiful old castle, which
from the summit of a hill, surrounded by woods, overlooks the town of Eisenach.
The Wartburg is quaintly built with dear little turrets and gables, and
high towers, a long curving wall with dark beams like the peasant
cottages, and windows looking out into the forest. It belongs at present
to the Grand Duke of Sachsen-Weimar-Eisenach.
Every stone and corner of the Wartburg is connected with some old story.
For instance there is the hall with the raised dais at one end and
beautiful pillars supporting the roof where minnesingers of old times
used to hold their great "musical festivals" as we should say nowadays.
There was keen competition for the prizes that were offered in reward
for the best music and songs.
In the castle are also the rooms of St Elizabeth, that sweet saint who
was so good to the poor, and who suffered so terribly herself in parting
from her husband and children.
Then there is the lion on the roof who could tell a fine tale if he
chose; the great banqueting hall and the little chapel.
On the top of the tower is a beautiful cross that is lit up at night by
electric light and can be seen from a great distance in the country
round. This is of course a modern addition.
But the most interesting room in the castle is that where Dr Martin
Luther spent his time translating the Bible. A reward had been offered
to anyone who should kill this arch-heretic; so his friends brought him
disguised as a knight to the Wartburg, and very few people knew of his hiding-place.
As you look through the latticed windows of that little room, the
exquisite blue and purple hills of the Thueringen-Wald stretch away in
the distance, and no human habitation is to be seen. There too you may
see the famous spot on the wall where Luther threw the inkpot at the
devil. To be correct you can see the hole where the ink-stain used to
be; for visitors have cut away every trace of the ink, and even portions
of the old wooden bedstead. There is the writing-desk with the
translation of the Bible, and the remarkable footstool that consisted of
the bone of a mammoth.
Those were the days in which a man risked his life for his faith; but
they were the days also, we must remember, of witchcraft and magic.
One other story of the Wartburg I must narrate in order to give you some
idea of the interest that still surrounds the place, and influences the
children who grow up there. It was in the days of the old Emperor
Barbarossa. The sister of the Emperor, whose name was Jutta, was married to the
Landgraf Ludwig of Thueringen, and they lived at the Wartburg.
One day when Barbarossa came to visit them, he observed that the castle
had no outer walls round it, as was usual in those days.
"What a pity," he said, "that such a fine castle should be unprotected
by walls and ramparts, it ought to be more strongly fortified."
"Oh," said Landgraf Ludwig, "if that is all the castle needs, it can
soon have them."
"How soon?" said the Emperor, mockingly.
"In the space of three days," answered his brother-in-law.
"That could only be possible with the aid of the devil," said
Barbarossa, "otherwise it could not be done."
"Wait and see for yourself," said the Landgraf.
On the third day of his visit, Ludwig said to the Emperor: "Would you
care to see the walls? They are finished now."
Barbarossa crossed himself several times, and prepared for some fearful
manifestation of black magic; but what was his surprise to see a living
wall round the castle of stout peasants and burghers, ready armed, with
weapons in their hands; the banners of well-known knights and lords
waved their pennants in the wind where battlements should have been.
The Emperor was much astonished, and called out: "Many thanks,
brother-in-law, for your lesson; stronger walls I have never seen, nor
better fitted together."
"Rough stones they may some of them be," said the Landgraf, "yet I can
rely on them, as you see."
Now as you may imagine, the children who grow up in this town must have
their heads full of these tales, and many poets and artists have been
inspired by the beauties of Eisenach. The natural surroundings of the
town are so wonderful, that they also provide rich material for the imagination.
Helmut was a boy who lived in Eisenach. He was eight years old, and went
to a day school. He lived outside the town, not far from the entrance to
the forest. He was a pale, fair-haired little boy, and did not look the
tremendous hero he fancied himself in his dreams; not even when he
buckled on helmet, breast-plate and sword, and marched out into the
street to take his part in the warfare that went on constantly there,
between the boys of this neighbourhood, and the boys who belonged to
another part of the town.
Now the Dragon's Gorge is a most marvellous place; it is surrounded on
all sides by thick forests, and you come on it suddenly when walking in
the woods. It is a group of huge green rocks like cliffs that stand
picturesquely piled close together, towering up to the sky. There is
only a very narrow pathway between them.
Helmut had often been there with his father and mother or with other
boys. After heavy rain or thawing snow it became impassable; at the best
of times it was advisable for a lady not to put on her Sunday hat,
especially if it were large and had feathers; for the rocks are
constantly dripping with water. The great boulders are covered with
green moss or tiny ferns; and in the spring time, wood sorrel grows on
them in great patches, the under side of the leaves tinged an exquisite
violet or pink colour. The entrance to the Dragon's Gorge is through
these rocks; they narrow and almost meet overhead, obscuring the sky,
till it seems as if one were walking under the sea. Two persons cannot
walk side by side here. In some parts, indeed, one can only just squeeze
through; the way winds in and out in the most curious manner; there are
little side passages too, that you could hardly get into at all.
In some places you can hear the water roaring under your feet; then the
rocks end abruptly and you come out into the forest again, and hear the
birds singing and see the little brook dancing along by the side of the
way. Altogether it is the most fascinating, wet and delightful walk that
you could imagine.
Helmut had long been planning an expedition to these rocks in company
with other boy friends, in order to slay the dragon. He dreamt of it day
and night, until he brought home a bad mark for "attention" in his
school report. He told his mother about it; she laughed and said he
might leave the poor old fellow alone; there were plenty of dragons to
slay at home, self-will, disobedience, inattention, and so on! She made
a momentary impression on the little boy, who always wanted to be good
but found it difficult at times, curiously enough, to carry out his good resolutions.
He looked thoughtful and answered: "Of course, mother, I know; but this
time I want to slay a 'really and truly' dragon, may I? Will you let me
go with the other boys, it would be such fun?"
The Dragon's Gorge was not far off, and mother did not think that Helmut
could do himself any harm, except by getting wet and dirty, and that he
might do as well in the garden at home.
"If you put on your old suit and your thick boots, I think you may go.
Keep with the other boys and promise me not to get lost!"
"Oh, I say, won't it be fine fun! I'll run off and tell the other
fellows. Hurrah!" and Helmut ran off into the street. Soon four heads
were to be seen close together making plans for the next day.
"We'll start quite early at six o'clock," they said, "and take our
second breakfast with us." (In Germany eleven o'clock lunch is called
second breakfast.) However it was seven o'clock a.m. before the boys
had had their first breakfast, and met outside the house.
How mother and father laughed to see the little fellows, all dressed in
the most warlike costumes like miniature soldiers, armed with guns and swords.
Mother was a little anxious and hoped they would come to no harm; but
she liked her boy to be independent, and knew how happy children are if
left to play their pretence games alone. She watched the four set off at
a swinging march down the street. Soon they had recruits, for it was a
holiday, and there were plenty of boys about.
Helmut was commanding officer; the boys shouldered their guns, or
presented arms as he directed. They passed the pond and followed the
stream through the woods, until they came to the Dragon's Gorge, where
the rocks rise up suddenly high and imposing looking. Here they could
only proceed in single file. Helmut headed the band feeling as
courageous as in his dreams; his head swam with elation. Huge walls
towered above them; the rocks dropped water on their heads. As yet they
had seen or heard nothing of the dragon. Yet as they held their breath
to listen, they could hear something roaring under their feet.
"Don't you tell me that that is only water," said Helmut, "A little
brook can't make such a row as that--that's the dragon."
The other boys laughed, they were sceptical as to the dragon, and were
only pretending, whereas Helmut was in earnest.
"I'm hungry," said one boy, "supposing we find a dry place and have our lunch."
They came to where the path wound out again into the open air, and sat
down on some stones, which could hardly be described as dry. Here they
ate bread and sausage, oranges and bananas.
"Give me the orange peel, you fellows. Mother hates us to throw it
about; it makes the place so untidy." So saying Helmut pushed his orange
peel right into a crevice of the rock and covered it with old leaves.
But the other boys laughed at him, and chucked theirs into the little
stream, which made Helmut very angry.
"I won't be your officer any more, if you do not do as I say," he said,
and they began to quarrel.
"We're not going to fight your old dragon, we're going home again to
play football, that will be far better fun," said the boys who had
joined as recruits, and they went off home, till only Helmut's chums
were left. They were glad enough to get rid of the other boys.
"We have more chance of seeing the dragon without those stupid fellows."
They finished their lunch, shouldered their guns again, and entered the
second gorge, which is even more picturesque and narrow than the first.
Suddenly Helmut espied something round, and slimy, and long lying on
the path before him like a blind worm, but much thicker than blind worms
generally are. He became fearfully excited, "Come along you fellows,
hurry up," he said, "I do believe it is the dragon's tail!"
They came up close behind him and looked over his shoulders; the gorge
was so narrow here that they could not pass one another.
"Good gracious!" they said, "whatever shall we do now?"
They all felt frightened at the idea of a real dragon, but they stood to
their guns like men, all but the youngest, Adolf, who wanted to run away
home; but the others would not let him.
"Helmut catch hold of it, quick now," whispered Werner and Wolf, the
other two boys.
Helmut stretched out his hand courageously; perhaps it was only a huge,
blind worm after all; but as he tried to catch it, the thing slipped
swiftly away. They all followed it, running as fast as they could
through the narrow gorge, bumping themselves against the walls,
scratching themselves and tearing their clothes, but all the time Helmut
never let that tail (if it was a tail) out of his sight.
"If we had some salt to put on it," said he, "we might catch it like a bird."
"It would be a fine thing to present to a museum," said Wolf.
Well, that thing led them a fine dance. It would stop short, and then
when they thought they had got it, it started off again, until they were
all puffing and blowing.
"We've got to catch it somehow," said Helmut, who thought the chase fine
sport. At that moment the gorge opened out again into the woods, and the
tail gave them the slip; for it disappeared in a crevice of the rock
where there was no room for a boy to follow it.
"It was a blind worm you see," said Werner.
Presently, however, they heard a noise as of thunder, and looking down
the path they saw a head glaring at them out of the rocks, undeniably a
dragon's head, with a huge jaw, red tongue, and rows of jagged teeth.
The boys stared aghast: they were in for an adventure this time, and no
mistake. Slowly the dragon raised himself out of the rocks, so that they
saw his whole scaly length, like a huge crocodile. Then he began to move
along the path away from them. He moved quite slowly now, so there was
no difficulty in keeping up with him; but his tail was so slimy and
slippery that they could not keep hold of it; moreover it wriggled
dreadfully whenever they tried to seize it. But Helmut had inherited
the cool courage of the Wartburg knights, and he was not going to be
overcome by difficulties.
With a wild Indian whoop he sprang on the dragon's back, and all the
other boys followed his example, except little Adolf who was timid and
began to set up a howl for his mother, I'm sorry to say. No sooner were
the boys on his back than the dragon set off at a fine trot up and down
the Dragon's Gorge; they had to hold on tight and to duck whenever the
rock projected overhead, or when they went sharply round a corner.
"Hurrah," cried Helmut waving a flag, "this is better than a motor ride.
Isn't he a jolly old fellow?"
At this remark the jolly old fellow stopped dead and began to snort out
fire and smoke, that made the boys cough and choke.
"Now stop that, will you!" said Helmut imperatively, "or we shall have
to slay you after all, that's what we came out for you know." He pointed
his gun at the head of the dragon as he spoke like a real hero.
The dragon began to tremble, and though they could only see his profile,
they thought he turned pale.
"Where's that other little boy?" he asked in a hollow voice. "If you
will give him to me for my dinner, I will spare you all."
Helmut laughed scornfully, "Thanks, old fellow," he said--"you're very
kind, I'm sure Adolf would be much obliged to you. I expect he's run
home to his mother long ago; he's a bit of a funk, we shan't take him
with us another time."
"He looked so sweet and juicy and tender," said the dragon sighing, "I
never get a child for dinner nowadays! Woe is me," he sniffed.
"You are an old cannibal," said the boys horrified, and mistaking the
meaning of the word cannibal. "Hurry up now and give us another ride,
it's first-rate fun this!"
The dragon groaned and seemed disinclined to stir, but the boys kicked
him with their heels, and there was nothing for it but to gee-up.
After he had been up and down several times, and the boys' clothes were
nearly torn to pieces, he suddenly turned into a great crevice in the
rocks that led down into a dark passage, and the boys felt really
frightened for the first time. Daylight has a wonderfully bracing effect
on the nerves.
In a moment, however, a few rays of sunshine penetrated the black
darkness, and they saw that they were in a small cave. The next thing
they experienced was that the dragon shook himself violently, and the
small boys fell off his back like apples from a tree on to the wet and
sloppy floor. They picked themselves up again in a second, and there
they saw the dragon before them, panting after his exertions and filling
the cavern with a poisonous-smelling smoke. Helmut and Wolf and Werner
stood near the cracks which did the duty of windows, and held their
pistols pointed at him. Luckily he was too stupid to know that they were
only toy guns, and when they fired them off crack-crack, they soon
discovered that he was in a terrible fright.
"What have I done to you, young sirs?" he gasped out. "What have I done
to you, that you should want to shoot me? Yet shoot me! yes, destroy me
if you will and end my miserable existence!" He began to groan until the
cavern reverberated with his cries.
"What's the matter now, old chappie?" said Helmut, who, observing the
weakness of the enemy, had regained his courage.
"I am an anachronism," said the dragon, "don't you know what that
is?--well, I am one born out of my age. I am a survival of anything but
the fittest. You are the masters now, you miserable floppy-looking
race of mankind. You can shoot me, you can blow me up with dynamite,
you can poison me, you can stuff me--Oh, oh--you can put me into a cage
in the Zoological Gardens, you have flying dragons in the sky who could
drop on me suddenly and crush me. You have the power. We great creatures
of bygone ages have only been able to creep into the rocks and caves to
hide from your superior cleverness and your wily machinations. We must
perish while you go on like the brook for ever." So saying he began to
shed great tears, that dropped on the floor splash, splash, like the
water from the rocks.
The boys felt embarrassed: this was not their idea of manly conduct, and
considerably lowered their opinion of dragons in general.
"Do not betray me, young sirs," went on the dragon in a pathetic and
weepy voice, "I have managed so far to lie here concealed though
multitudes of people have passed this way and never perceived me."
"I tell you what," said Helmut touched by the dragon's evident terror,
"let's make friends with him, boys; he's given us a nice ride for
nothing; we will present him with the flag of truce."
Turning to the dragon he said: "Allow us to give you a banana and a roll
in token of our friendship and esteem."
"O," said the dragon brightening up, "I like bananas. People often throw
the skins away here. I prefer them to orange peel. I live on such
things, you must know, the cast-off refuse of humanity," he said,
becoming tragic again.
They presented him with the banana, and he ate it skin and all, it
seemed to give him an appetite. He appeared to recover his spirits, and
the boys thought it would be better to look for the way out. The cavern
seemed quite smooth and round, except for the cracks through which the
daylight came; they could not discover the passage by which they had
entered. The dragon's eyes were beginning to look bloodthirsty;
remembrances of his former strength shot across his dulled brains. He
could crush and eat these little boys after all and nobody would be the
wiser. Little boys tasted nicer than bananas even.
Meanwhile Wolf and Werner had stuck their flags through the holes in the
rocks, so that they were visible from the outside.
Now little Adolf had gone straight home, and had told awful tales of the
games the others were up to, and he conducted the four mothers to the
Dragon's Gorge where they wandered up and down looking for their boys.
Adolf observed the flags sticking up on the rocks, and drew attention to
them. The Dragon's Gorge resounded with the cries of "Helmut! Wolf! Werner!"
The dragon heard the voices as well; his evil intentions died away; the
chronic fear of discovery came upon him again. He grew paler and paler;
clouds of smoke came from his nostrils, until he became invisible. At
the same moment Helmut groping against the wall that lay in shadow,
found the opening of the passage through which they had come. Through
this the three boys now crawled, hardly daring to breathe, for fear of
exciting the dragon again. Soon a gleam of light at the other end told
of their deliverance. Their tender mothers fell on their necks, and
scolded them at the same time. Truly, never did boys look dirtier or more untidy.
"We feel positively ashamed to go home with you," their mothers said to them.
"Well, for once I was jolly glad you did come, mother," said Helmut.
"That treacherous old dragon wanted to turn on us after all; he might
have devoured us, if you had not turned up in the nick of time. Not that
I believe that he really would have done anything of the sort, he was
a coward you know, and when we levelled our guns at him he was awfully
frightened. Still he might have found out that our guns were not
properly loaded, and then it would have been unpleasant."
Mother smiled; she did not seem to take the story quite so seriously as he did.
"We had a gorgeous ride on his back, mother dear; would you like to see
him? You have only to lie down flat and squeeze yourself through that
crack in the rocks till you come to his cave."
"No thank you," said mother, "I think I can do without seeing your
"Oh, we have forgotten our flags!" called out Wolf and Werner, "wait a
minute for us," and they climbed up over the rocks and rescued the
flags. "He's still in there," they whispered to Helmut in a mysterious
"Mother," said Helmut that evening when she came to wish him good night,
"do you know, if you stand up to a dragon like a man, and are not afraid
of him, he is not so difficult to vanquish after all."
"I'm glad you think so," said mother, "'Volo cum Deo'--there is a Latin
proverb for you; it means, that with God's help, will-power is the chief
thing necessary; this even dragons know. Thus a little boy can conquer
even greater dragons than the monsters vast of ages past."
"Hum!" said Helmut musingly, "mother, dear, I was a real hero to-day, I
think you would have been proud of me; but I must confess between
ourselves, that the old dragon was a bit of a fool!"
|
The Imitation Game may have been virtually shut out of this year’s Oscars, winning only Best Adapted Screenplay.
But for many, the mere existence of The Imitation Game was far more important than any awards it might receive.
It was a recognition that gay people can not only contribute to society but that one of them, Alan Turing, almost singlehandedly saved 14 million British lives and shortened World War II for the British by at least two years.
Turing, it will be recalled, was the mathematician who invented a machine that cracked the Nazi regime’s “unbreakable” coding device, Enigma, reports XBiz.
But as the movie (and book which preceded it) points out, despite Turing’s brilliance, the fact that he was homosexual led him to be targeted by police, banned from further government work, arrested and chemically castrated to dampen his “urges” — and he eventually committed suicide in 1954 by drinking cyanide.
A bit belatedly, the British government issued an apology for its treatment of Turing in 2009, and Turing himself was eventually pardoned by Queen Elizabeth II under what’s called the “Royal Prerogative of Mercy” — but what about the tens of thousands of other gay men (and women) who were arrested and convicted of “gross indecency” (as gay sex was criminalised in those days)?
The law was taken off the books in 2003, and hadn’t been used for 36 years before that, but Turing’s great-niece, Rachel Barnes, and her extended family estimate that more than 49,000 other Brits suffered the same discrimination and prosecution as Turing did — and they’ve collected over 600,000 names on a petition to urge the government to pardon those victims as well.
Barnes, together with Nevile Hunt, Turing’s great-nephew, and Thomas Barnes, his great-great-nephew, delivered their petition to the Prime Minister’s office at No. 10 Downing Street.
“Alan’s treatment by the government was absolutely horrific,” Barnes wrote in an article published yesterday on the Independent.co.uk website.
“Having chosen the latter, both his mental and physical health deteriorated. He ended up with severe depression and was unable to continue his work. In the end, he committed suicide, at the age of 41.
“But Alan’s legacy still hasn’t been properly fulfilled,” she continued. “He wasn’t the only gay man to receive such shocking treatment by the government. An estimated 49,000 other men were also charged with gross indecency, and have never been pardoned by the government. And 15,000 of these men are still alive. If the Government believes in justice, then they must receive the same pardon as Alan within their lifetimes. It would make such a huge difference to them and their families.”
The organization YouGov polled Britons on the subject and found that two out of three voters favored pardoning those who had been convicted of “gross indecency.”
However, 21 percent — mostly conservatives and UKIP voters — opposed the move.
|
Use these engaging real-world scenarios to help infuse your mathematics program with a problem-based approach to the knowledge and skills required by the Common Core State Standards for Mathematics.
This collection of tasks addresses all of the Common Core State Standards categories for high school mathematics:
- Number and Quantity
- Statistics and Probability
Each Problem-Based Task is set in a meaningful context to engage student interest and reinforce the relevance of mathematics. Each is tightly aligned to one or more specific standards from the High School CCSS for Math III.
|
Brain swelling (brain oedema, intracranial pressure)
Brain swelling is also known as cerebral oedema.
It can be caused by an accident or trauma, or by certain medical conditions. As the brain swells, pressure builds inside the skull.
This pressure can prevent blood from flowing to your brain, which deprives your brain of the oxygen it needs to function. Swelling can also block other fluids from leaving your brain, making the swelling even worse. Swelling in the brain can be life threatening.
What causes brain swelling?
Injury, infections, tumours, other health problems, and even high altitudes can all cause brain swelling to occur. The following list explains different ways the brain can swell:
- Traumatic brain injury (TBI): In TBI, also called a head injury, a sudden event damages the brain. Both the physical contact itself and the quick acceleration and deceleration of the head can cause the injury. The most common causes of TBI include falls, vehicle crashes, being hit with or crashing into an object, and assaults. The initial injury can cause brain tissue to swell. In addition, broken pieces of bone can rupture blood vessels in any part of the head. The body's response to the injury may also increase swelling. Too much swelling may prevent fluids from leaving the brain.
- Ischaemic strokes: Ischaemic stroke is the most common type of stroke and is caused by a blood clot or blockage in or near the brain. The brain is unable to receive the blood and oxygen it needs to function. As a result, brain cells start to die. As the body responds, swelling occurs.
- Brain (intracerebral) haemorrhages and strokes: Haemorrhage refers to blood leaking from a blood vessel. Haemorrhagic strokes are the most common type of brain haemorrhage. They occur when blood vessels anywhere in the brain rupture. As blood leaks and the body responds, pressure builds inside the brain. High blood pressure is thought to be the most frequent cause of this kind of stroke. Haemorrhages in the brain can also be due to head injury, certain medications, and malformations present from birth.
- Infections: Illness caused by an infectious organism such as a virus or bacterium can lead to brain swelling. Examples of these illnesses include:
  - Meningitis: This is an infection in which the covering of the brain becomes inflamed. It can be caused by bacteria, viruses, other organisms, and some medications.
  - Encephalitis: This is an infection in which the brain itself becomes inflamed. It is most often caused by a group of viruses and is often spread through insect bites. A similar condition is called encephalopathy, which can be due to Reye's syndrome, for example.
  - Toxoplasmosis: This infection is caused by a parasite. Toxoplasmosis most often affects foetuses, young infants and people with damaged immune systems.
  - Subdural empyema: Subdural empyema refers to an area of the brain becoming abscessed or filled with pus, usually after another illness such as meningitis or a sinus infection. The infection can spread quickly, causing swelling and blocking other fluid from leaving the brain.
- Tumours: Growths in the brain can cause swelling in several ways. As a tumour develops, it can press against other areas of the brain. Tumours in some parts of the brain may block cerebrospinal fluid from flowing out of the brain. New blood vessels growing in and near the tumour can also lead to swelling.
- High altitudes: Although researchers don't know the exact causes, brain swelling is more likely to occur at altitudes above 1,500 metres. This type of brain oedema is usually associated with severe acute mountain sickness (AMS) or high-altitude cerebral oedema (HACE).
|
- New research links older people’s use of hearing aids with lower risks of dementia, depression, anxiety, and dangerous falls
- Hearing loss can lead to social isolation, which increases the likelihood of developing cognitive difficulties and mood disorders
- There are hearing aids available that are almost invisible, and some devices are very reasonably priced
How Hearing Aids Reduce Dementia and Anxiety
As we grow older, we begin to experience subtle changes in our bodies. It’s not like one day we don’t need glasses to read the newspaper or a menu and the next we do. For most signs of aging, the changes are gradual. That’s why when it comes to a gradual decline in our hearing, it may be easier to deny to ourselves that it has taken place. But it is important to face reality in this case and do something about it because new research shows that using hearing aids can help us prevent other serious conditions that might arise in our senior years.
The study, which took place at the University of Michigan in Ann Arbor, found that older people have a lower risk of developing dementia, depression, anxiety, and injuries from falls when they begin wearing a hearing aid.[1] These results are based on an investigation that analyzed data from a Medicare HMO for 114,862 people 66 and older from 2008 through 2016. This type of insurance differs from traditional Medicare plans in that it partially covers the cost of hearing aids.
Each subject’s medical records were reviewed for one year before their hearing loss diagnosis and for three years afterward to ensure a focus on only new cases of dementia, depression, anxiety, and fall injuries and to exclude any pre-existing cases. In that three-year span after receiving their hearing loss diagnosis, the participants who had gotten hearing aids were shown to have an 18 percent lower risk of dementia, a 13 percent lower risk of injuries due to falls, and an 11 percent lower risk of depression or anxiety.
Loss of Hearing Takes a Significant Toll
Why would hearing loss potentially contribute to seemingly unrelated problems such as dementia, depression, and falls? In the case of dementia, many individuals who can’t hear properly often start becoming more withdrawn and isolated. As the brain receives less stimulation from social connectivity, nerve impulses slow down and memory declines.
As for depression and anxiety, once again it seems that the issue largely stems from the withdrawal from social settings that many older people with hearing loss impose on themselves. The more time they spend alone, the greater their depression and/or anxiety grows. A 2014 study at the National Institutes of Health in Bethesda, Maryland found a link between hearing loss and higher rates of moderate to severe depression.[2] And the risk of falls, too, increases with hearing loss because of the inner ear’s essential role in maintaining balance. But the problem is worsened by the fact that only about 12 percent of older people actually get hearing aids when they are diagnosed with hearing loss.
Preventing and Managing Hearing Loss
By the age of 65, approximately 33 percent of Americans experience some level of hearing loss. If your hearing is still fully intact—and you know this through an audiological evaluation—that’s great, but you need to do what you can to safeguard it. One of the major causes of hearing loss is exposure to loud noises and it can occur not only from expected sources like the high decibel levels at a concert, but also everyday situations like work (for example, in construction), listening to music too loudly through headphones, or even regularly walking around a city with heavy traffic. Buy yourself earplugs to prevent loud sounds all around you from doing damage to delicate nerve cells within the ears.
If you have already begun to experience hearing loss, it is not enough to get a diagnosis. Consider getting fitted for a hearing aid. There are a wide range of models available and many are very discreet. If your insurance does not cover the cost and it is out of your price range, look into an over-the-counter sound amplification device. These, although not as discreet, can be very effective and help prevent the dementia, mood disorders, and falls all associated with hearing loss.
References
1. Mahmoudi, Elham; et al. “Can Hearing Aids Delay Time to Diagnosis of Dementia, Depression, or Falls in Older Adults?” Journal of the American Geriatrics Society. 4 September 2019. Accessed 8 September 2019. https://onlinelibrary.wiley.com/doi/abs/10.1111/jgs.16109.
2. Li, Chuan-Ming; et al. “Hearing Impairment Associated With Depression in US Adults, National Health and Nutrition Examination Survey 2005-2010.” JAMA Otolaryngology-Head & Neck Surgery. April 2014. Accessed 9 September 2019. https://jamanetwork.com/journals/jamaotolaryngology/fullarticle/1835392.
|
Consistent electricity is one of the major issues facing India; in 2005 GMR Energy started developing a 330 megawatt hydroelectric dam and power plant in Uttarakhand, just outside the town of Srinigar to help satisfy the growing demand for power.
Over time, concrete that is not properly waterproofed can deteriorate and fail, which can result in devastating floods. The local town of Srinigar is all too familiar with the devastation of floods. In 1894, over 280 million cubic meters (10,000 million cubic feet) of water from the breached Gohna Lake completely swept away the original town of Srinigar.
Due to the extreme risk potential in building and waterproofing the dam, the team at Alakanada brought Kryton on-board in 2010 to waterproof key areas of the dam. Kryton’s system can add decades to the life of concrete structures and unlike membrane systems that can deteriorate over time, the Kryton system becomes a part of the concrete matrix, waterproofing from the inside out. This gives the system added reliability over others.
Kryton is working with the on-site ready-mix plant to optimize the mix design. In addition, Kryton is helping the on-site construction team to test and apply Kryton’s surface-applied waterproofing system, Krystol T1 & T2 Waterproofing System, and its internal system, Krystol Internal Membrane (KIM), to the dam face, canal and two tunnels.
The Krystol T1 & T2 Waterproofing System was applied to the upstream section of the dam face, which measures 128 m by 340 m (420 ft by 1100 ft). Going forward, the 1.8 km (1.12 mile) long canal leading to the penstocks will be waterproofed entirely using KIM. The Krystol T1 & T2 Waterproofing System will be applied to the two spillway tunnels that are each one kilometer (0.62 miles) long. In all, approximately 1 million kilograms (1,000 tonnes) of Kryton’s waterproofing systems will be used to treat tens of thousands of square meters (hundreds of thousands of square feet) of concrete and protect the dam and India’s electricity demands for years to come.
|
Speed down a Hill
Date: 4/17/96 at 10:43:27
From: Spring Lake Park H.S.
Subject: physics

How fast is an object going at the bottom of a 14.9 m hill, if it starts at the top going 3.0 m/s?
Date: 4/18/96 at 19:32:08
From: Doctor Ken
Subject: Re: physics

Hello! In order to do this problem, you have to assume a few things:
- you have to assume that friction isn't going to be much of a factor
- similarly, wind resistance isn't going to play a big part.

Also, here's a tip: since they didn't say how steep the hill was, it probably doesn't make any difference, right? So you might as well use the easiest hill you know of, namely the hill that goes straight down (no hill at all). Does that make the problem easier for you? Remember, the downward acceleration due to gravity of any object near the Earth's surface is about 9.8 meters per second squared. Write back if you still need more help on this.

-Doctor Ken, The Math Forum
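Doctor Ken's hint amounts to conservation of energy: only the height of the hill matters, not its shape. A minimal sketch of the calculation in Python (assuming, as the answer does, that friction and air resistance are negligible):

```python
import math

def speed_at_bottom(v0, h, g=9.8):
    """Final speed from energy conservation:
    (1/2) v^2 = (1/2) v0^2 + g*h, so v = sqrt(v0^2 + 2*g*h)."""
    return math.sqrt(v0**2 + 2 * g * h)

# The problem's numbers: starting at 3.0 m/s atop a 14.9 m hill.
v = speed_at_bottom(3.0, 14.9)
print(round(v, 1))  # about 17.4 m/s
```

The slope of the hill never enters the formula, which is exactly why the "hill that goes straight down" shortcut works.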
© 1994- The Math Forum at NCTM. All rights reserved.
|
Vanishing Third World Emigrants?
NBER Working Paper No. 14785
This paper documents a stylized fact not well appreciated in the literature. The Third World has been undergoing an emigration life cycle since the 1960s, and, except for Africa, emigration rates have been level or even declining since a peak in the late 1980s and the early 1990s. The current economic crisis will serve only to accelerate those trends. The paper estimates the economic and demographic fundamentals driving these Third World emigration life cycles to the United States since 1970 -- the income gap between the US and the sending country, the education gap between the US and the sending country, the poverty trap, the size of the cohort at risk, and migrant stock dynamics. It then projects the life cycle up to 2024. The projections imply that pressure on Third World emigration over the next two decades will not increase. It also suggests that future US immigrants will be more African and less Hispanic than in the past.
Document Object Identifier (DOI): 10.3386/w14785
|
What is your definition of health? To you is it just your personal health or does it go beyond what happens in your body? How important is health to you? We’re here to help you understand and grasp the definition of health in a way that sticks with you and promotes change!
What does health actually mean? According to the World Health Organization, health is “the state of complete physical, mental and social well-being and not merely the absence of disease or infirmity” (Businessdictionary.com). Why all three of those worlds of health should be a priority is where we come in to stress the importance. Let’s face it, we all define health differently, and our complete health falls on all different places of the spectrum of priorities. But to make an impact on becoming healthier as a whole, there must be a general consensus on a basic understanding of health. Health is not limited to just our bodies; it is a multifaceted status, and all facets have to be balanced to increase positive growth.
Because health is a combination of multiple areas, making a difference as a healthier community has to start individually! A healthier YOU gets the ball rolling on a healthier community. Dedication, commitment, and motivation: they all have to start somewhere, so why not make it start with you? If you take on the challenge to invest in your own (complete) health, you will begin influencing your family, which is another important level of health itself. Then, with your family on board, they will begin to influence those around them every day too. See a pattern here?
If we’re all influencing each other to be healthier, then our largest world of health in terms of scale, the community, will be healthier as well. Community health is the science of protecting and improving the health of communities through education, promotion of healthy lifestyles, and research for disease and injury prevention. That is the boring textbook definition, right? At the end of the day, it simply means that community health is maintaining the good overall health of the community in which you live. To achieve a healthier community, though, we have to start at the base… that’s you! You and your health are the key factors in building a healthier community.
Within our coalition, Activate Buffalo County, we are trying to put that bug in everyone’s ear about being the healthiest version of you. If we can get you to put your whole world of health at the top of your priority list, others outside our coalition will soon follow! In the end, we will reach a healthier community together!
Need help getting started? Stay tuned and keep your eyes peeled for helpful tips about maintaining a healthy lifestyle!
|
day. Here in Britain we don’t directly see these Atlantic crests and troughs
– we are set back from the Atlantic proper, separated from it by a few
hundred miles of paddling pool called the continental shelf. Each time
one of the crests whooshes by in the Atlantic proper, it sends a crest up
our paddling pool. Similarly each Atlantic trough sends a trough up the
paddling pool. Consecutive crests and troughs are separated by six hours.
Or to be more precise, by six and a quarter hours, since the time between
moon-rises is about 25, not 24 hours.
The speed at which the crests and troughs travel varies with the depth
of the paddling pool. The shallower the paddling pool gets, the slower the
crests and troughs travel and the larger they get. Out in the ocean, the
tides are just a foot or two in height. Arriving in European estuaries, the
tidal range is often as big as four metres. In the northern hemisphere, the
Coriolis force (a force, associated with the rotation of the earth, that acts
only on moving objects) makes all tidal crests and troughs tend to hug the
right-hand bank as they go. For example, the tides in the English channel
are bigger on the French side. Similarly, the crests and troughs entering
the North Sea around the Orkneys hug the British side, travelling down
to the Thames Estuary then turning left at the Netherlands to pay their
respects to Denmark.
Tidal energy is sometimes called lunar energy, since it’s mainly thanks
to the moon that the water sloshes around so. Much of the tidal energy,
however, is really coming from the rotational energy of the spinning earth.
The earth is very gradually slowing down.
So, how can we put tidal energy to use, and how much power could be extracted?
When you think of tidal power, you might think of an artificial pool next
to the sea, with a water-wheel that is turned as the pool fills or empties
(figures 14.2 and 14.3). Chapter G shows how to estimate the power available from such tide-pools. Assuming a range of 4 m, a typical range in
many European estuaries, the maximum power of an artificial tide-pool
that’s filled rapidly at high tide and emptied rapidly at low tide, generating
power from both flow directions, is about 3 W/m2. This is the same as
the power per unit area of an offshore wind farm. And we already know
how big offshore wind farms need to be to make a difference. They need
to be country-sized. So similarly, to make tide-pools capable of producing
power comparable to Britain’s total consumption, we’d need the total area
of the tide-pools to be similar to the area of Britain.
Amazingly, Britain is already supplied with a natural tide-pool of just
the required dimensions. This tide-pool is known as the North Sea (figure
14.5). If we simply insert generators in appropriate spots, significant
power can be extracted. The generators might look like underwater windmills.
| Tidal range | Power density |
|---|---|
| 2 m | 1 W/m2 |
| 4 m | 3 W/m2 |
| 6 m | 7 W/m2 |
| 8 m | 13 W/m2 |
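These power densities can be reproduced with a back-of-envelope model. The formula below is an assumption reconstructed to match the figures, not quoted from the text: a pool of tidal range h releases potential energy ρgh²/2 per unit area on each emptying and again on each filling (two of each per 12.4-hour tidal period), and the 90% conversion efficiency is a figure chosen here so that the rounded results agree with the table.

```python
# Sketch: power per unit area of an ideal tide-pool generating on
# both the flood and the ebb. Formula and efficiency are assumptions.
RHO = 1000        # density of water, kg/m^3
G = 9.81          # gravitational acceleration, m/s^2
T = 12.4 * 3600   # tidal period in seconds (one high and one low tide)

def power_density(h, efficiency=0.9):
    """Average power in W/m^2 for tidal range h in metres.

    Energy per period per unit area: rho*g*h^2/2 on emptying plus
    rho*g*h^2/2 on filling = rho*g*h^2, delivered over one period T.
    """
    return efficiency * RHO * G * h**2 / T

for h in [2, 4, 6, 8]:
    print(f"{h} m: {power_density(h):.1f} W/m^2")
```

Rounding the results gives 1, 3, 7, and 13 W/m2 for ranges of 2, 4, 6, and 8 m, consistent with the table; note the quadratic growth with range.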
|
A recent radio show broadcast that research from the CDC stated that kids in America are eating too much pizza, which is not a healthy food. I was a bit puzzled, so I wanted to get a little more information on the actual research.
The CDC report is actually on sodium intakes in children and adolescents in the US. Like adults, children and adolescents are consuming more sodium than they need. And even in children, this can lead to increased blood pressure.
Why do we care? First, we don’t want to start kids off with health problems, like high blood pressure. This will only increase the likelihood of these problems as adults. Second, sodium intake is a taste preference. As children are developing their tastes and dietary preferences, we want to give them a healthy palate. Reducing intake when young will hopefully help prevent them from over consuming as adults.
So where does pizza come in? Pizza is the number one contributor of sodium to children’s and adolescents’ diets. Bread, poultry, cold cuts, and sandwiches round out the top five. Notably, these are foods that are naturally high in sodium. This isn’t about teaching kids not to salt their food. It is about teaching them to watch their consumption of foods naturally high in sodium.
So can your kid eat pizza? Of course! But beware of the amount of cheese and cured meats in your toppings. Stick with less cheese, freshly cooked meats, veggies, and homemade sauce if possible. All of these allow you greater control of the sodium going in. Here are a couple of my favorites for pizza:
What are your favorite adaptations to make pizza more healthy? Share them in the comments!
|
1031 The Church gives the name Purgatory to this final purification of the elect, which is entirely different from the punishment of the damned. The Church formulated her doctrine of faith on Purgatory especially at the Councils of Florence and Trent. The tradition of the Church, by reference to certain texts of Scripture, speaks of a cleansing fire:
- As for certain lesser faults, we must believe that, before the Final Judgment, there is a purifying fire. He who is truth says that whoever utters blasphemy against the Holy Spirit will be pardoned neither in this age nor in the age to come. From this sentence we understand that certain offenses can be forgiven in this age, but certain others in the age to come.
|
The Longan is a tropical tree native to southern China. The tree is very sensitive to frost. It is also found in Indonesia and Southeast Asia. It is also called guiyuan (桂圆) in Chinese, lengkeng in Indonesia, mata kucing in Malaysia, and quả nhãn in Vietnamese. The longan (“dragon eyes”) is so named because of the fruit’s resemblance to an eyeball when it is shelled (the black seed shows through the translucent flesh like a pupil/iris).
The fruit is edible, and is often used in East Asian soups, snacks, desserts, and sweet-and-sour foods. They are round with a thin, brown-coloured inedible shell. The flesh of the fruit, which surrounds a big, black seed, is translucent white, soft, and juicy.
When I first arrived in Saigon, my grandpa’s younger brother Ong Ti stopped by my office to say hello. He arrived with a smile and huge bag of nhãn as a welcome gift. The nhãn were very dusty upon arrival, but I washed them thoroughly in water, removed them from their stems, and refrigerated them uncovered in bowls. For the following three weeks, The Astronomer and I were able to snack on nhãn to our hearts’ content. After years of eating canned nhãn coated in heavy syrup, it was a welcomed treat to finally taste the real thing.
|
Texan launches effort to correct entrenched myths about the holiday
Donald Norman-Cox, a 64-year-old resident of Denton, Texas, has a message for the nation regarding Juneteenth: “Tell it right or stop talking.” Since the mid-2000s, Mr. Norman-Cox has sporadically informed college and community groups that parts of the Juneteenth explanation are flagrantly wrong. This year, his message has muscle.
“Every explanation I’ve heard since childhood made little sense,” Norman-Cox said. But like many others, he never bothered to search for facts. “I wondered how news of the proclamation could travel to Europe faster than it floated across the states. How did news reach what is now New Mexico without going through Texas? When did other states free their slaves?”
Those quandaries and more are addressed in Norman-Cox’s new book Juneteenth 101. The book debunks several widely held myths about Juneteenth, including its primary tenet: that news of the Proclamation didn’t reach Texas for two and a half years. “You hear that everywhere, but it’s wrong,” Norman-Cox said. “Delayed emancipation was not caused by not-knowing. The culprit was lack of enforcement.”
Cox admits to holding a near life-long hope that others had researched their explanations. “I challenged nothing,” he laughs, “until one question refused to be ignored.” That question was why do Texans commemorate both Watchnight and Juneteenth?
Watchnight was the night slaves held vigils to watch for freedom, courtesy of the Emancipation Proclamation. Juneteenth occurred supposedly because no one knew the Proclamation existed. Cox said, “Those opposing explanations coexist peacefully only in the minds of the oblivious.”
While digging for clarification, Norman-Cox discovered some Texans knew about the Emancipation Proclamation before it was issued. “On September 15, 1862, a newspaper in tiny Clarksville, Texas reported Lincoln was about to issue ‘a proclamation of general universal emancipation’. Nine days later, Lincoln issued his preliminary proclamation. What little guys knew, the big ones did, too.”
That and other discoveries are packed in a new book titled Juneteenth 101.
Norman-Cox calls his findings, “Earth shaking, but nothing new.” He said, “Professional historians – which I’m not – have known these facts probably since emancipation became a topic worthy of scholarly examination. This book translates existing academic discourse into street speak … to help Big Mama and Ray-Ray ‘nem not be wrong.”
Juneteenth 101 claims incorrect explanations oversimplify the complex and chaotic way slavery ended. Believing that slavery continued because “they” didn’t know misidentifies who “they” were.
“‘They’ refers to slave owners,” Norman-Cox contends. “What slaves knew was irrelevant. Their walking off the job is called running away, not emancipation.”
According to Norman-Cox, Juneteenth falsehoods are pervasive. Even Congress incorrectly refers to Juneteenth as “the day slavery ended in the United States”. Juneteenth 101 identifies 31 congressional resolutions that include or were defended by that statement. “As if six months later, the Thirteenth Amendment did nothing,” Cox added.
To replace that inaccuracy the book offers this explanation, “Juneteenth celebrates the end of slavery; not the day slavery ended.”
Juneteenth 101 is a 104-page book published by Arising Together Publishing; available exclusively at amazonbooks.com for $13.
# # #
Contact:Donald J. Norman-Cox
|
Write a 3 page essay on A True, Authentic Hero in Tartuffe and Candide.
In my teens, I used to consider the Undertaker my hero because of his charisma and ability to conquer the ring as a wrestler. I greatly enjoyed watching him wrestle people in the ring, as he seldom lost to his opponents. Nevertheless, he was never perfect, just like any other human being. After growing up, I realized that there are greater heroes. This is because, despite the Undertaker being a great wrestler, he is not actually worthy of adoration today. The scenario, I believe, is the same for Fernando Alonso, who was once an admired racer but who is currently experiencing a lot of challenges.
In my opinion, a true hero is an individual who does what is right and good without expecting a reward, despite the outcome. He or she is also one who is brave and never gives up the fight to help others. I concur with Abraham Lincoln’s views on who a true hero is. Lincoln stated that a true hero is one who destroys the ego illusion, which upholds the legitimacy and usefulness of killing and revenge. Lincoln argued that an authentic hero is one who dismantles the assistance of ego in retaliation, while real heroes are those who are creative enough to fight and transform the enemy (Richo 272). At times one may become a hero by winning a war against his or her enemies. Nevertheless, I do not consider one a hero merely for winning a war. A true hero forgives the enemy. Nelson Mandela is one of my long-time true and authentic heroes. He fought tirelessly for the rights of his people and was even jailed for several years, only later to become the President of South Africa. In his tenure as president, he did not punish his adversaries; instead, he chose to forgive them and led his people in the direction of truth and reconciliation. He is, therefore, a true hero in my view.
There are quite a number of characters in Tartuffe and Candide that I consider heroes.
|
Elections are a fundamental aspect of democracy. They are the mechanism by which citizens choose their representatives and hold them accountable. However, the conduct of elections is not left entirely to chance. Political laws regulate the process and ensure that it is fair and transparent. In this article, we will explore the relationship between elections and political laws, and how they work together to ensure a democratic process.
The Purpose of Political Laws
Political laws serve several purposes. They provide a framework for the conduct of elections, defining the rules and procedures that must be followed. They also set out the responsibilities of electoral officials, political parties, and candidates, and ensure that they operate within the bounds of the law. Additionally, political laws seek to ensure that elections are free and fair, protecting the integrity of the democratic process.
Key Principles of Political Laws
The key principles of political laws include transparency, accountability, and fairness. Transparency means that the electoral process should be open and accessible to all citizens. This includes the process of voter registration, the casting and counting of ballots, and the reporting of results. Accountability means that electoral officials and candidates must be held responsible for their actions. Fairness means that all parties and candidates should have an equal opportunity to participate in the electoral process, and that the rules should be applied equally to all.
Types of Political Laws
There are several types of political laws that regulate the electoral process. These include electoral laws, campaign finance laws, and political party laws. Electoral laws define the rules and procedures for the conduct of elections, including voter registration, voting procedures, and the counting of ballots. Campaign finance laws regulate the financing of political campaigns, including the sources of funding, the amounts that can be spent, and the disclosure of financial information. Political party laws govern the establishment and operation of political parties, including the registration of parties, the nomination of candidates, and the conduct of party activities.
Challenges to Political Laws
Despite their importance, political laws face several challenges. These include attempts to circumvent the rules, such as voter suppression, election fraud, and campaign finance violations. In some cases, the rules themselves may be inadequate or outdated, failing to address emerging issues such as online campaigning and disinformation. Furthermore, political laws may be subject to legal challenges, with opponents arguing that they infringe upon constitutional rights.
Elections are a fundamental aspect of democracy, and political laws are critical to ensuring their fairness and integrity. By providing a framework for the conduct of elections and regulating the behavior of electoral officials, political parties, and candidates, political laws help to ensure that citizens have a say in how they are governed. While challenges remain, including attempts to circumvent the rules and legal challenges, political laws remain an essential safeguard of democracy.
|
The largest-ever study of how climate sensitivity manifests itself in the seasonal behaviour of UK plants and animals warns that some species may struggle to survive.
LONDON, 7 July, 2016 – In a warming world, in which spring arrives ever earlier and rainfall patterns shift, some species may not be able to cope with change.
A major scientific survey warns that plants may flower before insects are ready to pollinate them, and the birds that time their nest cycles to the season for insects might find their prey in short supply.
Scientists in the UK have just published the largest ever study of ecosystems and the changes in the seasons. They report in the journal Nature that a consortium of researchers from 18 organisations combed the literature to identify more than 10,000 data sets, containing 370,000 observations of seasonal events in the natural world.
Their search spanned the years 1960 to 2012, and involved 812 plant and animal species from the seashore, the rivers and lakes, and dry land.
The study embraced three vital levels of the food chain: the primary-producer plants and algae that turn sunlight and carbon dioxide into food; the primary-consumer birds and insects that eat the plants and their seeds; and the secondary-consumer birds, fish and mammals that dine on the insects.
They also looked at national temperature change and rainfall data over the period. And their conclusion is that plants and animals differ in their response to climate change, and that creatures at different levels of the food chain differ yet again.
And they forecast that, overall, the primary consumers will have shifted their seasonal timing more than twice as much as the primary producers and the secondary consumers. By 2050, the primary consumers will be at work an average of 6.2 days earlier, whereas the other two groups will have moved forward more than 2.5 days, but less than three.
“This is the largest study of the climatic sensitivity of UK plant and animal seasonal behaviour to date,” says the lead author, Stephen Thackeray, a lake ecologist from the UK Centre for Ecology and Hydrology.
“Our results show the potential for climate change to disrupt the relationships between plants and animals, and now it is crucially important that we try to understand the consequences of these changes.”
In nature, timing is everything, and the study rests on a low-key, quietly observed science called phenology: the record of when things happen in the natural world.
Foresters, gardeners, growers and ornithologists have been recording the dates of such things as first bud, first leaf, first nesting behaviour, first butterfly emergence since the 19th century, largely as a labour of love.
But it became clear two decades ago that such observations had begun to confirm the reality of climate change even more forcefully than slight shifts in the mercury of the thermometer. The evidence was in the form of milder winters, longer growing seasons, and earlier foliage in particular.
Winners and losers
The long-term consequences of global warming remain uncertain. Researchers have recorded slower growth in the giants of the forest, shifts in the distribution of plants, and changes in mountain meadow wildflowers. There is evidence that creatures can adapt, but there must be losers as well.
Right now, nobody can predict who the winners and losers will be. But disturbance to ecosystems must have consequences, and the latest research is just that: an attempt to frame the big picture for further observation. And it rests on a century of naturalist record-keeping.
One of the report’s authors, Deborah Hemming, a climate risk analyst at the UK Met Office, says: “We are lucky in the UK to have a long history of people fascinated with observing and recording events in nature.
“By quantifying the relationships between these phenological records and climate data across the UK, we identify many phenological events that are extremely sensitive to climate variations.
“These provide ideal early indicators, or sentinels, for monitoring and responding to the impacts of climate variability and change on nature.” – Climate News Network
|
The TRIGA Control Rod Drive for research reactors is an electrically driven linear drive mechanism to position and control movement of the reactor control rods and control blades.
The Control Rod Drive Mechanism is an electric-stepping, motor-actuated linear drive equipped with a magnetic coupler and a position feedback potentiometer. A five-phase stepping motor drives a pinion gear and a 10-turn potentiometer via a chain and pulley gear mechanism. The pinion gear engages a rack attached to the draw tube, which houses an electromagnet at its lower end. The electromagnet in turn engages an iron armature attached to a connecting rod assembly, which terminates at its lower end in the control rod itself. The electromagnet, armature and the upper portion of the connecting rod are all housed in a tubular barrel that extends below the water line in the reactor pool. The upper portion of this barrel is ventilated to permit unrestricted rod movement in water, whereas the lower portion has graded vent holes to restrict movement and provide a damping action when the electromagnet is deenergized and the control rod is released from the drive.
A series of microswitches on the drive assembly controls the up-down movement of the control rod and rod drive. Rod up/down motion is controlled from the control console. In the event of a reactor scram, the magnet is deenergized and the armature will be released. The control rod will then drop, reinserting the neutron poison into the reactor core.
The control rod drive speed is adjustable over a wide range, and its design can accommodate control rods in lengths of 15, 24, 30 or 36 inches.
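The motion limits and scram behaviour described above can be summarized as a small state machine. The following Python sketch is purely illustrative: the class and method names, the step count, and the instantaneous drop on scram are assumptions for exposition, not actual TRIGA design parameters.

```python
class ControlRodDrive:
    """Toy model of the rod-drive logic described above (illustrative only)."""

    def __init__(self, travel_steps=1000):
        self.travel_steps = travel_steps   # full rod travel, in motor steps (assumed)
        self.position = 0                  # 0 = fully inserted (down limit)
        self.magnet_energized = True       # electromagnet couples rod to drive

    def step_up(self, n=1):
        # Upper microswitch: motion stops at the up limit.
        if self.magnet_energized:
            self.position = min(self.position + n, self.travel_steps)

    def step_down(self, n=1):
        # Lower microswitch: motion stops at the down limit.
        if self.magnet_energized:
            self.position = max(self.position - n, 0)

    def scram(self):
        # Deenergizing the magnet releases the armature; the decoupled rod
        # falls back into the core (damped by the barrel's graded vent holes).
        self.magnet_energized = False
        self.position = 0


drive = ControlRodDrive()
drive.step_up(250)     # withdraw the rod 250 steps
drive.scram()          # magnet released: rod drops to full insertion
print(drive.position)  # 0
```

Once scrammed, the model ignores further step commands, mirroring the fact that a deenergized magnet no longer couples the drive to the rod.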
|
Kitia Harris is a single mother raising her eight-year-old daughter in Detroit. Recently, she picked up a minor traffic ticket for “impeding traffic” totaling $276 in court fines and fees. Living off just $1,200 a month in disability payments—not enough to cover rent, utilities, food, clothing, and other basic needs—she was unable to pay her traffic fines.
Because she cannot afford her outstanding court debt, Michigan suspended her license.
Kitia has never committed a crime, and for many years she worked hard in low-wage jobs to support herself and her daughter. In 2014, she was diagnosed with interstitial cystitis, a painful condition with no cure that prevents her from working.
Without a driver’s license, everything is more expensive. Kitia’s disability requires regular medical treatments. Now, instead of driving herself to her appointments, she must pay others to drive her. And because Detroit has the worst public transportation system of any major city in the country, she must also pay for rides for daily tasks like grocery shopping, or picking up her daughter. By forcing her to pay more just to get around, Michigan has trapped her in a cycle of poverty.
This is not fair, and it’s not justice.
Like Kitia, hundreds of thousands of Michiganders have lost their driver’s licenses simply because they are poor. In 2010 alone, Michigan suspended 397,826 licenses for failure to pay court debt or failure to appear.
These residents have not been judged too dangerous to drive; they are not a threat behind the wheel; they have not caused serious injuries while driving. In the vast majority of cases, their only “crime” is that they are too poor to pay.
Michigan’s model creates two different justice systems based on wealth status. For the rich, a minor infraction (like changing lanes without a turn signal) would result in a fine of maybe $135. For those who are poor and unable to pay, the same infraction could eventually lead to a license suspension. This suspension scheme violates our commonly held standards of justice: States should not dole out punishment simply based on wealth status.
But perhaps more importantly: Michigan’s scheme is terrible public policy.
These suspension laws trap productive residents in a cycle of poverty. That is crushing for Kitia and her daughter, and it is especially bad for Michigan. As a state famous for its poorly managed fiscal situation, Michigan should help its residents pay back their court debt. Instead, the state is making it much harder for them to do so.
On May 4, Equal Justice Under Law filed a class-action lawsuit against the state of Michigan for this wealth-based suspension scheme. Our lawsuit seeks to return licenses to the hundreds of thousands of drivers who have had their licenses suspended solely for the inability to pay court debt, and it asks the state to cease poverty-based suspensions in the future. We are not asking Michigan to change the way it treats drivers who are truly a threat on the road. Nothing we’re asking would allow a driver to commit reckless driving offenses.
We’re only asking that the state stop punishing people for being poor.
We are also asking that Michigan consider alternatives that many other states successfully employ. There should be an ability-to-pay hearing before any license is suspended. If someone is unable to pay due to poverty status, they should be given alternatives, like community service or payment plans. Some states offer payment plans as low as $5 per month.
Some supporters of Michigan’s suspension law claim that those who cannot afford to pay traffic tickets should drive more carefully. But this argument is exactly the kind of unequal justice we must fight against. Our justice system should not be premised on the notion that the rich get to buy their way out of trouble while the poor live under a sword of Damocles for not using a turn signal.
Others say that it’s unfair for poor people to get out of fines just because they’re unable to pay. What I ask of those folks is empathy. For many people—including Kitia Harris—poverty is not a choice. Kitia was raised without a mother or father, spending the majority of her childhood in foster care.
Now 25, she has never had a reliable, supportive adult in her life. She has lived her life in poverty. Calling it “unfair” that Kitia keep her driver’s license even though she cannot pay her court debt misses the fact that Kitia is doing everything in her power to make ends meet.
If she could pay her court debt, she would.
Instead of punishing someone who cannot pay their court debt, Michigan—and every other state—would be better off if people like Kitia were helped to break the cycle of poverty and repay the debt they owe.
Rather than making life harder and more expensive for Kitia, Michigan could provide her with the tools she needs to get back on her feet. Especially in a place like Detroit, which offers no meaningful public transportation option, Kitia needs a way to get around.
She needs empathy from us, and justice from our justice system.
Phil Telfeyan is founding director of Equal Justice Under Law, a Washington, DC-based nonprofit that challenges “wealth-based discrimination.” He served as a trial attorney in the Civil Rights Division of the United States Department of Justice for five years, where he specialized in employment discrimination and immigrants’ rights. He welcomes comments from readers.
|
Many people think that cotton is a “natural” fabric and therefore chemical-free. That couldn’t be further from the truth. Conventional cotton farming is a chemically intensive process that puts great stress on the environment; in fact, many consider it to be the world’s most toxic crop!
A Pesticide-Free Crop Is Healthy for Farmers and Wildlife
Let’s show our love for Mother Earth and protect animals big and small. Conventional cotton exposes birds, insects and other wildlife to harsh chemicals. Farmers are also exposed to these highly toxic chemicals including Aldicarb, Parathion, and Methamidophos, creating serious health issues for farmers and their families.
Often cotton farming is conducted in developing countries where education about proper chemical and pesticide handling is limited, further exacerbating the problem. This chemical exposure can lead to illness, resulting in significant economic and social stress to the community.
Chemical-free Soil Keeps Farmland Healthy for The Future
Soils that are bombarded with chemicals and pesticides aren’t healthy and viable for the long haul. Organically farmed cotton keeps the soil healthy, resilient, and biodiverse so the farmland can continue to produce crops for a long time to come.
Non-GMO Seeds Give Farmers Control of Their Own Destiny
When farmers rely on Genetically Modified Organisms for seeds, they become beholden to large and powerful corporations that don’t necessarily operate with the farmer’s best interest in mind. Farmers become reliant on these patented seeds. According to Farm Aid:
Farmers who buy GMO seeds must pay licensing fees and sign contracts that dictate how they can grow the crop – and even allow seed companies to inspect their farms. GMO seeds are expensive, and farmers must buy them each year or else be liable for patent infringement. And while contamination can happen through no fault of their own, farmers have been sued for “seed piracy” when unauthorized GMO crops show up in their fields.
Farmers who plant organic and non-GMO cotton are in control of their own farm and can protect their livelihood as they see fit.
Regional Water Supply and Waterways Aren’t Tainted by Fertilizers
When pesticides and herbicides are used for cotton farming, the chemicals eventually make their way to the water supply via leaching or runoff. While prudent planting methods can reduce the impact of fertilizer and pesticide, there is still significant environmental impact when these are used.
Organic Cotton Uses Less Water Than Its Conventional Counterparts
Textile Exchange commissioned a peer-reviewed study comparing conventional and organic cotton. The study showed that “organic production of cotton for an average sized t-shirt resulted in a savings of 1,982 gallons of water compared to the results of chemically grown cotton.”
Growing Organic Means Less Earth-Harming Greenhouse Gas Emissions
Conventional cotton farming uses nitrogen-based fertilizers, whose use correlates directly with nitrous oxide emissions. Organic agriculture can help to tackle climate change by reducing greenhouse gas emissions.
Organic Cotton Is Healthier for You Since There Is No Toxic Residue and Skin Irritation
Who wants to expose their skin to harsh chemicals? Not us! Using organic cotton for napkins, bedding, and clothing reduces your chances of chemical exposure. While the negative health outcomes linked to using and wearing conventional cotton have not been clearly established, we say why take the risk?
Shop our organic cotton products here.
|
L.H. v. Hamilton Cty Dept. of Educ. – August, 2018
L.H. is a 15-year-old boy with Down syndrome and by all accounts is a personable and kind boy who is enthusiastic to learn. To accommodate L.H.’s intellectual disability, the IEP team, comprising his parents and school staff, prepared an IEP with goals and objectives. As L.H. prepared to begin third grade, the school district unilaterally moved him from his mainstreamed classroom with non-disabled children to a segregated classroom for children with disabilities in a different school.
The self-contained classroom used an online software program that was neither peer reviewed nor tied to the state’s general education standards. The program did not provide report cards or track educational progress under state standards.
L.H.’s parents rejected the school’s IEP and filed an IDEA Due Process Complaint. The hearing officer ruled against the parents. The parents appealed, and ended up in the Sixth Circuit Court of Appeals.
The Sixth Circuit Court of Appeals opinion affirmed the district court decision finding that the school district violated the IDEA when it demanded that a second-grade student with Down syndrome be moved from his general education classroom in his neighborhood school to a segregated special education classroom comprised solely of children with disabilities at another school. Rejecting the school district’s argument that L.H. could receive more “meaningful educational benefit” from placement in the special education classroom at the separate school, the Sixth Circuit reiterated that the LRE is a “separate and different” measure from that of “substantive educational benefits” and that, “in some cases, a placement which may be considered better for academic reasons may not be appropriate because of the failure to provide for mainstreaming.”
In a stern condemnation of the school district’s actions violating L.H.’s right to be educated in the LRE, the Sixth Circuit stated that the school district’s approach “is the type of approach that the IDEA was designed to remedy, not encourage or protect.” The Sixth Circuit further explained that “these actions at Normal Park (home zoned school) do not demonstrate a failure of mainstreaming as a concept, but a failure of L.H.’s teachers and the other staff to properly engage in the process of mainstreaming L.H. rather than isolating him and removing him when the situation became challenging.”
This decision is a timely victory on behalf of children who strive to be educated in their general education classrooms. The Oberti and Holland decisions remain “good law,” although they are more than 20 years old. L.H. reminds us that the least restrictive environment (LRE) mandate is alive and well!
|
More reasons for efficiency
Resource efficiency improves the quality of life. We can see better with efficient lighting systems, keep food fresher in efficient refrigerators, produce better goods in efficient factories, travel more safely and comfortably in efficient vehicles, feel better in efficient buildings and be better nourished by efficiently grown crops.
Everything must go somewhere. Wasted resources pollute the air, water and land. Efficiency combats waste and thus reduces pollution. Resource efficiency can greatly contribute to solving such big problems as acid rain and climatic change, deforestation, loss of soil fertility and congested streets. Energy efficiency coupled with productive, sustainable farming and forestry practices could make about 90 per cent of today's environmental problems virtually disappear.
Besides, resource efficiency is usually profitable; it stops wasteful expenditure on resources that are being turned into pollutants, and then, on cleaning them up. Since it has the potential of being profitable, much of it can be implemented largely in the marketplace, driven by individual choice and business competition, rather than requiring governments to tell everyone how to live.
Competition for resources causes or worsens international conflicts. Efficiency stretches resources to meet more needs, and reduces unhealthy resource dependencies that fuel political instability.
Finally, wasting resources is the other face of a distorted economy that splits society into those who have work and those who do not. Either way, human energy and talent are being tragically misspent. A major cause for this waste of people is the wrong thrust of technological progress. We are making ever fewer people more 'productive' while using up more resources, and effectively marginalizing a third of the world's workforce.
Efficiency is a concept as old as the human species. Human progress in all societies has been mostly defined by new ways to do more with less. Much of engineering and business is about using resources more productively. But most of the technological efforts of the last 150 years have been devoted to increasing labour productivity, even if that required a more generous use of natural resources.
In recent times, however, resource efficiency has been the subject of a conceptual and practical revolution. Ten years ago, the main news about how to do more with less had to do mainly with the speed of technological improvement. Since the oil crisis of the 1970s, electrical energy efficiency has doubled, thus (theoretically) enabling a 60 per cent reduction in costs. Similar progress continues today, less through new technologies than through better understanding of how to choose and combine the existing ones.
Progress in making resource efficiency bigger and cheaper is thus positively dramatic. It is, superficially, somewhat comparable to the computer and consumer electronics revolution, where everything is continually becoming smaller, faster, better and cheaper.
But the fuel, energy and materials resource experts typically do not think that way. The prejudice remains widespread that saving more energy will always cost more. The belief is that beyond the familiar zone of 'diminishing returns', there is a wall behind which further saving is prohibitively expensive. This was found to be true both for resource savings and for pollution control, and it fitted in nicely with economic theory.
Today, not only are there new technologies, but there are also new ways (theoretically available) of linking them together so that big savings can become cheaper than small savings. That is, greatly improved resource efficiency often costs less, yet works better than the older, less effective ways of conserving resources. When a series of linked efficiency technologies is implemented in concert, in the right sequence, manner and proportions, there is a new economic benefit from the whole that did not exist with the separate technologies.
So why are we all not efficient already? These ideas, though uncomplicated, are all sufficiently new to not have gained enough popular understanding yet. Even fewer people apply them. Practice is still held in the vice-like grip of convention.
Even with proper incentives, it is not easy to apply these new ideas about saving resources. Making big savings cheaper than small savings requires leapfrogging, not incrementalism. Advanced resource productivity requires integration, not reductionism; this bucks this century's trend toward narrow specialisation and disintegration.
These essentially cultural barriers to modern resource efficiency are just the tip of a very large iceberg of underlying problems. The idea of saving resources faces a daunting array of practical obstacles that actively prevent people and businesses from choosing the best buys first.
These include convention: the often insurmountable costs of replacing conventional personnel with individuals who know better. This 'human factor' may actually be the biggest obstacle and the biggest part of what economists usually call 'transaction costs', the costs of overcoming inertia.
Other inertia or transaction costs relate to massive financial interest in preserving existing structures; customer ignorance about resource efficiency; discriminatory financial criteria; split incentives (such as landlords and tenants, or home and equipment builders and their buyers); greater ease and convenience of organising and financing one huge project than a million little ones; obsolete regulations that specifically discourage or outlaw efficiency; and the almost universal practice of regulating electric, gas, water and other utilities so they are rewarded for increasing the use and sometimes even penalised for increasing the efficiency of resources.
Despite the exciting opportunities offered by the efficiency revolution, we should be wary of the potential of efficiency to reinforce undesirable patterns: more efficient cars may allow for vastly expanding fleets; saved water may permit further sprawl into deserts; resource efficiency in general may support unmitigated population increase for an extended time span. The economic boost achieved from saving resources could thus erode the benefits of the whole exercise if not channeled into a different pattern of development that strengthens the substitution of people for physical resources.
|
ANECDOTAL TALES OF THE SOVIET ERA
This chapter gathers anecdotes that relate to the Soviet period but, unlike the oral jokes in Chapters 1 through 8, meet the traditional definition of an anecdote rather than the one applied to oral jokes during the Soviet period. These are short stories describing real events and real people; in that respect, the items in this chapter are closer to the old anecdotes of Chapter 10.
9.1 During the seventy-year period of communist rule, the Party bureaucracy prescribed every small detail of the people's behavior. In particular, Party hacks oversaw all scientific activities at research institutions and universities. The official ideology supposedly attached a very high value to scientific progress, and theoretical science was held in high esteem. On the other hand, the Party always expected scientists to find ways to put their discoveries to practical use. It was the Party that decided which science was useful and which had to be condemned as a bourgeois diversion from patriotic work.
The interference of the Party apparatchiks cost Soviet science very dearly. One example is as follows. There was a very prestigious research institute in Leningrad, the so-called Physico-Technical Institute of the Academy of Sciences, whose director was academician Abram Ioffe, a prominent physicist and former favorite collaborator of Roentgen, the discoverer of X-rays. In this institute, a group of young, talented scientists worked in the area of solid-state physics. At that time (in the middle of the thirties) important progress in atomic physics was taking shape in the West. Several of those young physicists, including Igor Kurchatov, came to Ioffe with a suggestion to establish a research lab in the area of atomic physics.
Ioffe, who recognized the scientific value of such research, was fully supportive. He knew, though, that it would be very difficult to convince the ignorant Party bosses of Leningrad that such a research lab would serve the cause of building socialism as atomic physics seemed at that time to be a purely scientific exploration without prospects of a practical use in the near future. Ioffe decided to organize the atomic research lab for which several rooms were to be assigned. However, this lab would not exist officially. The doors of that lab would be locked at all times, and the sign on the door would read "Stockroom."
And so it went. Several enthusiasts of atomic research worked behind the locked doors, and when an inspection commission of Party officials arrived at the institute, they were led past the doors bearing the deceptive sign to other areas where the necessary pokazukha (a show put on for inspectors) was properly maintained.
In 1945, the American scientists exploded an atomic bomb. The Party bosses panicked. They summoned Ioffe and asked him what could be done to catch up with the USA. And this time, Ioffe had an answer, thanks to the several years of clandestine research conducted under the very noses of the stupid Party apparatchiks.
9.2 The famous physicist Petr Kapitsa worked in the twenties in Cambridge, Great Britain, where he became a favorite collaborator of Rutherford. With Rutherford's cooperation and assistance, a special low-temperature physics lab was built for Kapitsa in Cambridge. Kapitsa married an English woman. Every summer he used to go for a visit to his native Russia.
In 1934, the dictator of the USSR, Stalin, ordered that Kapitsa be held in Russia. Suddenly, having come for a vacation, Kapitsa found himself trapped there.
The Soviet authorities offered him the directorship of a specially established research institute to be named Institute of Physical Problems. Kapitsa refused to accept this position, as he still hoped the authorities would change their mind and let him go back to his lab in Cambridge. But Stalin was not known for yielding to a mere physicist.
In the meantime, the Soviet authorities managed to convince Rutherford that Kapitsa had decided to stay in Russia of his own will. The gullible Nobel prize winner could not imagine that a deception on such a scale could be undertaken. He ordered many pieces of unique scientific equipment from Kapitsa's lab to be disassembled and sent to Russia.
In Moscow, extensive edifices for the Institute of Physical Problems were under construction. For a while Kapitsa refused even to look at them. Finally, the authorities managed to persuade him to take a look, on the pretext that he could help avoid errors in the institute's design. Kapitsa went to survey the construction site and found a host of things to be changed. That is how the famous experimentalist finally acquiesced in his fate.
The Institute of Physical Problems grew to become one of the largest and most prestigious scientific establishments on the globe. Kapitsa's famous weekly scientific seminar, on a par with Landau's seminar at the Lebedev Institute of Physics, always attracted scores of attendees.
In 1945, American physicists exploded the first atomic bomb. Stalin ordered that a Soviet atomic bomb be made in the shortest time possible, at any cost. Of course, the treason committed by Klaus Fuchs and the efforts of other Soviet spies in the West greatly facilitated the development of atomic weapons in the USSR. Still, such development required a major effort by skilled experts in atomic physics.
Stalin's dreaded security chief, Beria, was put in charge of creating the atomic bomb. The first man Beria summoned was Petr Kapitsa.
History has preserved no record of what occurred when the world-renowned scientist met the mass murderer Beria. What is known is that Kapitsa refused to work on the atomic bomb. Had he been an ordinary citizen, a KGB executioner's bullet would have put an end to his life's endeavors. But Kapitsa was too well known all over the world. He was only dismissed from his position and exiled to a country house which he was forbidden to leave.
At the gate of his modest country house Kapitsa posted a sign that read "Hut of Physical Problems." For over ten years, the best Russian experimentalist was held in near isolation, effectively under house arrest.
In 1953, Stalin died. Winds of change blew over the country. One of the first signs of the thaw was Kapitsa's release from exile and his reinstatement as director of the Institute of Physical Problems. In the first week after his return to the institute, a meeting of Kapitsa's seminar was announced. Scores of physicists flooded the institute. The famous physicist Landau gave the floor to Kapitsa, who was met with a standing ovation.
9.3 After Kapitsa refused to participate in the atomic bomb project, Beria summoned Ioffe. The patriarch of Soviet physics, who at that time was well over seventy, reportedly said that he personally was not cognizant enough of atomic physics to head the atomic bomb project. He could, though, recommend some younger scientists well qualified to do the job. He named Israil Kikoin of Sverdlovsk and Moscow and Igor Kurchatov of Leningrad. Since Kikoin, the discoverer of the so-called photo-galvanomagnetic effect, was a Jew, Kurchatov, an ethnic Russian, was the obvious choice. Kurchatov was appointed head of the entire atomic research effort. He was given a free hand in choosing men and women to work on different parts of the project. Neither Party membership nor ethnic origin was to play any role in choosing the creators of Russia's atomic power, only talent and scientific expertise.
After Kurchatov had completed gathering specialists for the project, there was such a high percentage of Jews among them that, in the atomic project's slang, the secret city officially named Arzamas 16, where the atomic bomb was to be constructed, was jokingly referred to as Jerusalem 2. (Another nickname for the secret city was Problema.)
In particular, as head of the unit charged with the actual construction of the bomb, Kurchatov chose a Jew by the name of Yuly Khariton.
Everybody on Khariton's team knew that in case of failure Beria would not hesitate to shoot anybody who could be blamed for it. Fortunately for Khariton and his colleagues, they succeeded. In 1949, the Soviet atomic bomb was ready.
When Beria and Stalin received the reports of the success, a shower of awards followed. Khariton, who was forty-five at the time, was summoned to the Kremlin, where he was awarded the gold star of Hero of Socialist Labor, the highest decoration in the country. He left the Kremlin and returned to Jerusalem 2, aka Problema, only to be at once summoned back to the Kremlin. There he was handed a Stalin Prize, which amounted to a huge sum by Soviet standards. He went back to Arzamas 16, only to be summoned back to the Kremlin again, where he was given one more gift: a big car usually reserved for the highest Party bosses. Altogether, Khariton was summoned to the Kremlin seven times, and each time one more award was given to him.
Unlike Sakharov, who after participating in the work on the hydrogen bomb gradually turned into a dedicated fighter against atomic weapons and for human rights, Khariton remained a silent creator of a multitude of dreadful nuclear bombs, largely unknown to Soviet citizens.
9.4 The famous theoretical physicist and Nobel prize winner Lev Landau was known for his sharp tongue and condescending attitude toward people below his intellectual level. Many physicists strove to present their work at Landau's seminar. If Landau, whom his numerous pupils usually called Dau, liked the work, he would allow the applicant to give a talk at his seminar. Otherwise he would destroy the hapless applicant's hopes with a derisive comment. Once, upon leafing through a submitted paper, he wrote, "This work contains many things which are new and interesting. Unfortunately, everything that is new is not interesting, and everything that is interesting is not new."
There was in the USSR a physicist by the name of Pines, an expert in X-ray applications. Once Professor Pines submitted a paper to Landau in which he offered a proof that certain bodies, if stretched, would, contrary to expectations, also expand (rather than shrink) sideways. Even though such a phenomenon had never been observed, it would not contradict any known laws of physics. When Landau saw the topic of Pines' paper, he dismissed the arguments out of hand, without bothering to read them thoroughly, and wrote on Pines' manuscript, "Pines, if you swap e and i in your name, that will be the only physical body that expands sideways when stretched." (The famous theoretician was wrong: several years later, bodies that expand sideways when stretched were discovered experimentally.)
9.5 The famous theoretical physicist Yakov Frenkel was known for his quick mind and cornucopia of ideas. Whenever a new experimental fact was discovered, nobody in the Physico-Technical Institute in Leningrad would offer a theoretical interpretation of the discovery sooner than Frenkel.
Once Frenkel was walking very fast, as was his habit, down the stairs, while a colleague of his, an experimentalist, was climbing the stairs toward him. Frenkel stopped and asked the experimentalist about the results of the latter's experiment. The man answered, "You know, A turned out to be larger than B."
"Of course!" Frenkel said. 'It could've been expected. It can be easily explained." And right on the spot, Frenkel delivered a quite sophisticated theoretical explanation of the colleague's result. When Frenkel finished, the colleague said, "Wait a moment! What did I say? Was it that A was larger than B? Sorry, what I meant was that A turned out to be less than B."
"Ah!" Frenkel said. "Of course! It is even easier to explain." And right on the spot he delivered a theoretical explanation as to why A must be less than B.
9.6 It is commonly believed that even a very prolific scientist can publish, in the course of 30 to 40 years of scientific research, no more than 300 to 400 papers, or about 10 publications per year on average. Of course, most scientists have published far less than that maximum. In this respect it is interesting to note that, while rank-and-file scientists in the USSR only rarely had even ten publications per year to their credit, one (or often less than one) per year being much more common, there were some personalities who managed to put their names into the bylines of an enormous number of papers. What is most striking is that all those super-prolific authors happened to be directors of research institutes, who supposedly had to devote most of their time to administrative activities rather than to research. Of course, in none of these papers and books was the director the sole author; he always had a few co-authors from among his subordinates.
For example, the director of the Institute of Chemistry of Natural Compounds, one Yu. P. Ovchinnikov, in the course of the 15 years he was director, accumulated to his credit over 300 papers, among them a few books!
The most striking example was perhaps the president of the Academy of Sciences of the USSR, A. N. Nesmeyanov. His name appeared in the bylines of over 1200 publications, including a number of books. This means that for forty years in a row Nesmeyanov managed to publish a paper or a book every 12 days, without interruption even for a vacation.
That was what it meant to be a director, or, even better, a president of the Academy!
On the other side of the issue, the names of scientists who fell from the Party's favor, as a rule, disappeared from the title pages of the books they had written. One example is the classic book "Theory of Oscillations" by A. Andronov, A. Vitt, and S. Khaikin. Its first edition appeared in the thirties under all three names. Then A. Vitt was arrested and disappeared into the Gulag. The second and several subsequent editions were printed with only two authors' names. Many years later, in the fifties, Vitt was posthumously rehabilitated. The edition of 1956 again appeared with the names of all three authors.
9.7 In 1954, the Academy of Sciences of the USSR elected a forty-four-year-old geophysicist, Mikhail Budyko, as a new member. Soon afterwards he was appointed director of the Central Geophysics Lab in Leningrad. The career of the relatively young but highly qualified scientist seemed to be on the rise. Suddenly, the president of the Academy of Sciences, Mstislav Keldysh, received a letter from the First Secretary of the Leningrad Regional Party Committee, Romanov, in which that Party hack angrily accused Budyko of not cooperating with the Party organs, the most incriminating accusation for a Soviet official of any status. Of course, this spelled the end of Budyko's career. Very soon, Budyko was dismissed from his director's position and replaced by the Party secretary of the geophysics lab, who was not known for any scientific achievements.
Some curious people managed to find out that Romanov's letter to Keldysh was prompted by accusatory information supplied by the very Party secretary who replaced Budyko. What was Budyko's actual misbehavior? It was awful indeed: the impudent lab director had dared to hire several Jews! This was done against a secret order issued by Romanov that completely prohibited hiring Jews except in extraordinary circumstances, each of which required Romanov's personal decision.
9.8 The poet Konstantin Simonov (his real name was Kirill Simonov) became very popular shortly before World War 2, when his poem about evanescent love titled "Five Pages" was printed. During the war, Simonov's poems brought him even greater fame. It would not be an exaggeration to state that in 1945 Simonov was by far the most popular and famous Russian poet. No wonder that when, in January of 1945, Moscow University invited Simonov to read his poems and the poet accepted the invitation, it was considered an extraordinary event.
The big hall in the Moscow University building on Mokhovaya Street was overflowing with people, mostly students. People sat in the aisles, in many places two people managed to squeeze into one seat, and, generally, as the Russian proverb goes, there was no room for an apple to fall. Simonov was met with a standing ovation. To let him read as many poems as possible, after each piece the people clapped three times in unison and then at once fell silent.
After Simonov had read many of his poems, some of them immensely popular and some newer ones not yet known, he answered questions. One of the questions was, "Do you think your poetry will remain in the annals of Russian literature?"
Simonov shrugged and answered with a quote from Pushkin, "We are destined to nurture good impulses...."
The next question was, "Who among the contemporary poets, do you think, will remain in the annals of Russian poetry?"
Simonov hesitated for a while, as if afraid to utter some thought, and then, apparently having overcome his fear, said, "It is hard to predict the future. But there is one name which beyond doubt will remain forever, as long as our Russian language exists...." He stopped, as the audience held its breath, waiting for the judgment of the popular poet. After a short silence Simonov exhaled and said, almost in a whisper, "Pasternak."
The audience gasped. The name of Pasternak was dangerous to mention other than in accusatory terms. Pasternak, a virtuoso of Russian poetry, a philosophical poet of great sophistication, highly valued by a few connoisseurs, was little known to the average Russian and, moreover, had many times been denounced in the official press for his alleged indifference to the problems of socialist society. The common derogatory term for Pasternak was "internal emigrant." To hear such an accolade for Pasternak from Simonov, whom many in the audience that day considered the best contemporary Russian poet, was enough to boggle the listeners' minds.
Time has shown that Simonov's statement was prophetic. Today, over forty years later, only a few in Russia remember Simonov's poetry, while Pasternak's star rises ever higher with time.
9.9 One of the most important Russian poets of the first half of the 20th century was Osip Mandelshtam, whom many critics place next to Pasternak. Like Pasternak's, Mandelshtam's poems offer an extraordinary challenge to a translator into other languages. In most of his masterpieces, many of which had a multilayered and sometimes semi-mystical meaning, Mandelshtam did not touch on any political issues of his time.
In the early thirties, a short poem attributed to Mandelshtam was disseminated orally; it contained an explicit accusation that Stalin had murdered the Russian peasantry. Mandelshtam was arrested and died in exile in December of 1938.
For many years, no works of Mandelshtam were published, and even mentioning his name in print was prohibited. After Stalin's death in 1953, some writers and literary personalities tried to get permission to publish a collection of Mandelshtam's works. Year after year, they failed. In 1966, the writer Ilya Erenburg, who for many years enjoyed the status of an officially recognized and privileged writer but secretly harbored anti-Soviet feelings, was asked why, despite the thaw, the collection of Mandelshtam's works could not be printed, even though these poems did not contain anything explicitly anti-Soviet. Erenburg smiled and said, "Because they do not know what to kick out. You see, none of these poems contains anything that would clearly be material to kick out. If they could find something to kick out, then the rest would be printed."
A short collection of Mandelshtam's poems was printed only in the seventies.
9.10 The Chief Designer of Soviet spacecraft and rockets, Sergei Pavlovich Korolev, was a collaborator of the pioneer of Russian rocket technology A. Zander. In 1937, Korolev was jailed together with many other prominent scientists and engineers. His prosecutor told him, "For our country, your fireworks and all this pyrotechnical stuff are not only unneeded but downright dangerous. Why didn't you busy yourself with proper work, like designing airplanes? Your rockets, were they not meant for an assassination of our leaders?"
For a while, Korolev was kept in the dreadful Gulag camps of the Kolyma region. Then German engineers (the best known among them Wernher von Braun) developed the rocket weapons they used against England. The 'wise leaders' of the Party panicked and ordered that an effort to develop and build Soviet rockets be organized at once. Korolev was located in a Kolyma camp, brought to Moscow, and appointed Chief Designer of a secret rocket institute, all while remaining a prisoner.
His favorite adage was "They're going to snuff us out without an obituary."
When Korolev's rocket was successfully tested, he was finally released from prison and subjected to a shower of awards and decorations. Korolev, for all his new status as a highly honored member of the most privileged elite, never forgot his old prison friends. He still often repeated the same sentence, "They'll snuff us out without an obituary."
Despite all his awards, his name was never mentioned in the press; he was referred to simply as the Chief Designer. When Korolev died, an obituary was printed (contrary to his favorite saying), but it did not mention that the late Sergei Korolev was actually the mysterious Chief Designer.
9.11 The creator of many models of Soviet aircraft, Andrei Tupolev, was known for his extraordinary technical intuition. One of the stories about his uncanny ability to make an instant judgment in complicated matters of aircraft design went as follows. Tupolev was walking on the airfield. From a hangar several hundred yards away, an aircraft was rolled out, a new experimental model developed by one of the leading aircraft designers. Without turning his head toward the machine, Tupolev said, "It'll not fly." Of course, the engineers from the institute that had designed the aircraft dismissed Tupolev's remark as a bad joke. But the machine did not fly! When Tupolev was asked how he knew, he shrugged and said, "I just felt it."
On another occasion, in his own aircraft design bureau, Tupolev was shown drawings of a fuselage for a new plane. Glancing at the drawing, Tupolev pointed at a spot in the figure and said, "It'll break down here." The model was built and broke down during its first test exactly where Tupolev had indicated.
In 1938, the brilliant creator of aircraft was arrested and, like many other prominent experts in aviation and other fields of military-related technology, forced to work on the development of military airplanes within the walls of a special prison (usually called "sharaga" or "sharashka" in Russian slang). Of course, the arrest of all those members of the technical and scientific elite was never officially admitted, but the NKVD (as the predecessor of the KGB was known) nevertheless disseminated rumors that Tupolev was the real designer of the German fighter plane Messerschmitt ME 110, whose drawings he had allegedly sold to Hitler! This malicious lie seemed to be substantiated by the accidental similarity between the shapes of the twin wingtips of Tupolev's Tu-2 and those of the Messerschmitt ME 110.
Shortly before Hitler started the war with the USSR, several German-made aircraft were brought to the Chkalovskaya testing airfield, and the jailed designers, including Tupolev, were shown the German machines. After viewing the aircraft, the designers discussed the machines' features with a few generals of the Soviet Air Force. At some moment, Tupolev said, "Finally, I have been honored to see the ME 110. I saw my machine." Everybody in the room fell silent, for everyone knew that the renowned designer, the pride of Russian aviation engineering, had been falsely and maliciously accused of treason by the scoundrels who ruled the hapless country.
Later in his life, Tupolev was given every possible award and decoration and was honored as the grandmaster of Russian aviation design.
9.12 Kazakhstan is a country of immense size but sparsely populated. Its indigenous people, the Kazakhs, constitute only about 30% of the country's population. For centuries, all eight tribes of the Kazakh people had been nomads, horsemen, and sheep herders. They had their own nomadic culture and traditions, passed on orally and by example from generation to generation; but unlike their neighbors, the Uzbeks and Tadjiks, who can boast a thousand-year history, literature, and a highly developed agricultural tradition, the Kazakhs had no cities, no alphabet of their own, and no written history. Perhaps this was the reason the Kazakhs became the most russified of all the Turkic ethnic groups in the former USSR. Many young Kazakhs do not speak their native language but speak Russian without an accent. Those Kazakhs who moved to cities have largely abandoned the ways of life of their nomadic ancestors.
This apparent russification has not, though, eliminated the national pride of the Kazakhs and their desire to be masters of their land. One of the by-products of that natural and respectable desire was the effort by the newly created educated class of Kazakhs to invent a history for themselves. With all due respect to the national feelings of the Kazakhs, who certainly have every right to their land and to their national pride, some features of that effort can evoke smiles.
One such event took place in the early seventies.
Somewhere in the vast Kazakh steppes, local tribesmen came across a huge rock covered with inscriptions. They reported their find to the authorities. The local linguists tried unsuccessfully to decipher the inscriptions. The fifteen-thousand-pound rock was hauled to the capital city of Kazakhstan, the beautiful Almaty (in the Russian rendering, Alma-Ata).
At the Institute of Linguistics of the Kazakhstan Academy of Sciences, the inscriptions were read by the director of that institute, S. Kesenbaev, and two of his assistants, Professors R. Musabaev and A. Kaydarov. These three experts announced that the inscriptions dated from between the 4th and 6th centuries BC and described the hunting feats of a prince named Bekar-Tegin. Hence, the institute's commission concluded, as early as 2500 years ago the Kazakhs had a written language, had their princes, and therefore had their own state. A lavishly illustrated volume with the commission's findings was prepared under the auspices of the Academy of Sciences.
Before the volume left the printing plant, an unexpected correction had to be made. The decipherers of the inscriptions decided that their findings deserved a special film for the wide dissemination of news about their important historical discovery. Cinematographers from the Almaty movie studio were invited. When the movie men saw the rock, they at once recognized it as their own creation. A few years earlier, in 1969, they had been shooting a movie named Kyz-Zhibek, and for that movie they had carved an inscription on the rock. They had invented odd characters for the fake text, characters which had no meaning, as they belonged to no known language. No Prince Bekar-Tegin or any other prince was mentioned in the text, which was just a chain of meaningless signs.
9.13 Colonel O. Baroyan of the KGB for many years conducted his KGB work in several countries, such as Ethiopia and Iran, under the guise of a medical expert. After he was expelled from the World Health Organization, he switched to medical research in the USSR. In this endeavor he was supported and promoted by the prominent scientist Lev Zilber. Zilber, who had reached a high position in Soviet science despite being a Jew, was a real scientist of high caliber. Some of his colleagues wondered why such a prominent scientist promoted a KGB man who seemed to have no talent for medical research. Indeed, Zilber assisted Baroyan in getting the Doctor of Sciences degree, and then in being elected a corresponding member of the Academy of Medical Sciences.
The secret of that odd cooperation was really very simple. Zilber, a Jew who had been jailed three times for imaginary crimes, knew too well that his position remained extremely vulnerable despite his scientific achievements. By making himself useful to Baroyan, Zilber had in the latter a kind of bodyguard protecting the scientist from the whims of the security organs. When asked why he promoted such scum as Baroyan, Zilber used to smile and say, "Even scum may have its use."
By the time Zilber died in 1966, Baroyan had already established himself firmly in the world of science as a powerful bureaucrat. His behavior can be exemplified by the following story. In 1974, one of the researchers at the institute Baroyan directed, Dr. T. Kryukova, had the misfortune to contradict Baroyan on some matter of secondary importance. Baroyan immediately kicked her out of her job. Then the arrogant bureaucrat with KGB connections said publicly, "I'll let her back to her job if she submits a written request for reinstatement today and hands it to me while standing on her knees in my office. If she does it tomorrow, she will have to crawl on her knees, with her written request in hand, from the entrance hall to my office."
9.14 In the late forties, the Communist Party leadership started an unprecedented campaign of "patriotic struggle for revealing the Russian priority" in all areas of science. Endless showers of books, movies, plays, and newspaper articles hammered into the minds of the benighted population the idea that every invention and discovery had been made by Russians. The steam engine, for example, was invented not by James Watt but by Ivan Polzunov. The first aircraft was built not by the Wright brothers but by Mozhaysky. Radio was invented not by Marconi but by Popov, etc., etc.
Of course, 'revealing' the Russian priority in the past would not be good enough unless new discoveries and inventions continued to appear day in and day out. Many crooks and unscrupulous careerists realized that the time was propitious for the advancement of their careers. All that was required was to claim some great scientific achievement, and the Party would do the rest (unless the inventor's name was Jewish).
One of the most ludicrous "discoveries" was made by a veterinary doctor by the name of G. Boshian. This obscure vet was working at a provincial institute of veterinary research when he claimed to have revolutionized the science of viruses. According to Boshian's ideas, microbes could convert into viruses and vice versa. Everything microbiologists had known since the times of Pasteur was claimed to be completely wrong. Viruses, claimed Boshian, could also become crystals and then reconvert into microbes.
Were it not for the campaign for the Priority, Boshian's nonsensical drivel would only have invoked ironic smiles. What happened was quite different. The director of the Institute of Veterinary Research, one Leonov, figured out that Boshian's opuses could be used as a tool for his own career. Leonov went to the Minister of Agriculture, Benedictov, and told that Party honcho that in his institute one Boshian had made a historic biological discovery. This was a godsend to the Politburo.
Soon a book authored by Boshian appeared in bookstores under the title "On the Nature of Viruses and Microbes." Boshian was given scientific ranks, titles, and numerous awards. Leonov also got his awards, including the highest scientific degree. Boshian's book, while describing the results of experiments that allegedly led to his monumental discovery, did not provide any details of those experiments.
For several years, a commission created by the Academy of Medical Sciences to verify Boshian's experiments was denied access to his laboratory under the pretext that his research was highly classified (as was 70% of all scientific research in the USSR).
In the fifties, Boshian's name disappeared from bookstores and newspaper articles as unexpectedly as it had appeared, without any explanation as to what had happened to the revolutionary discovery. This inglorious demise of a great scientific achievement was brought about by an accidental lapse in the Party's vigilance. Some Party hack permitted a team headed by academician Timakov to access Boshian's lab, under conditions of strict secrecy.
When Timakov's team looked at Boshian's samples under a microscope, they saw nothing but dirt! Timakov's team submitted the conclusion that all of Boshian's theories amounted to a collection of absolutely meaningless statements by an illiterate maniac.
Boshian's discovery was by no means the only one of that sort. About the same time, another great 'discovery' in biology, highly acclaimed by the sycophantic media, was made by the biologist O. Lepeshinskaya, who claimed to have found the so-called 'living matter' that constituted the core of every living organism. In the same years, the illiterate charlatan Lysenko, who enjoyed limitless Party support, decimated biological science in the USSR by sending hundreds of biologists who dared to disagree with his preposterous theories to prisons, from which many of them never returned.
There is in Moscow a prominent mathematician, Professor Nalimov, the author of an excellent book, "The Science About Science." The following maxim has been attributed to him: "Under our conditions, secrecy is the unique tool for concealing the illiteracy of scientists."
9.15 The stories about charlatans who exploited the situation created by the Communist Party's whims, which determined what should be considered a useful, Marxism-compatible science, may serve as a background to a few stories with opposite denouements.
In the sixties, many physics departments at universities all over the country received strange mailings. An obscure man from the city of Saratov claimed in a cover letter that he had developed a new theory of a well-known physical phenomenon, the so-called total internal reflection of light. This effect is described in every physics textbook and is nowadays widely used in fiber optics (so-called light guides). Its essence is that when a light beam falls from within a transparent body onto its surface at more than a certain angle, it is fully reflected back into the body.
Actually, the full theory of this effect, based on Maxwell's theory of electromagnetic waves, shows that a certain fraction of the light does escape the body, but does not propagate away from it.
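As a brief aside for readers who want the textbook detail behind these two paragraphs (this is standard optics, not part of Krasin's mailing), the effect can be stated compactly. By Snell's law, light passing from a medium of refractive index n1 into a less dense medium n2 < n1 is totally reflected when the angle of incidence exceeds the critical angle:

```latex
% Snell's law at the interface:
n_1 \sin\theta_1 = n_2 \sin\theta_2 .
% Total internal reflection occurs for \theta_1 > \theta_c, where
\theta_c = \arcsin\!\left(\frac{n_2}{n_1}\right), \qquad n_2 < n_1 .
% Beyond \theta_c, Maxwell's equations give an evanescent wave in the
% rarer medium whose amplitude decays exponentially with distance z
% from the surface, E(z) \propto e^{-z/d}: some field penetrates the
% surface, but no energy propagates away from it.
```

For example, for glass with n1 ≈ 1.5 in air (n2 = 1), the critical angle is about 42°, which is why a 45° glass prism reflects light totally.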
The man from Saratov (let us call him Krasin; his real name was slightly different, but it will not be revealed here for the sake of his family) explained that he had offered his article to several scientific journals, but all of them had rejected his submission despite the great significance of his discovery. Therefore, Krasin explained, he was compelled to resort to mailing his article directly to many scientists in the hope that some of them would appreciate his work and assist in its recognition.
In the mailed envelope the addressees found a sheaf of pages typed on flimsy paper (usually called 'cigarette paper' in Russian). On these pages, Krasin began the explanation of his theory. The text ended in the middle of a sentence, just before the essence of the theory was to be revealed. Krasin offered to mail the rest of the paper for a price of several rubles.
This unheard-of method of disseminating scientific information evoked many smiles. The man seemed to be an ignoramus with delusions of grandeur. Since the theory he supposedly had developed remained unknown, the physicists assumed that at worst the theory was just hogwash, or at best a version of Maxwell's analysis of the effect, apparently not familiar to the self-deluded man.
Even though it is unknown whether Krasin ever received any response to his mailings, it is probably safe to assume that there was none.
About two years later, newspapers printed a story about a murder. The murdered man was a renowned physicist, the director of the Institute of Optics of the Academy of Sciences in Moscow. The murderer was Krasin.
Desperate after futile efforts to convince scientists of the validity of his theory, and confident that his discovery was important and that he was the victim of a plot to keep his theory suppressed, the disgruntled discoverer decided to attract attention to his work by an unconventional method. He appeared at the Institute of Optics, holding a cardboard tube of the type usually used to carry rolled-up drawings, and requested to see the Institute's director. After several unsuccessful attempts, he was finally received by the unsuspecting director.
The details of what happened in the director's office are unknown. The essence of what occurred there was that Krasin pulled from his tube a rifle with a sawed-off barrel and shot the director.
Finally he got the fame he dreamed of.
9.16 As the Soviet economy was supposedly based on a scientifically developed plan, the collection of statistical data in the USSR acquired a special importance. The Central Statistical Authority in Moscow reigned over an army of regional, district, city, and local offices where thousands of men and women, holding such titles as Economy Engineer or Statistical Accountant, were busy collecting statistical information, tabulating it, summarizing it, and forwarding it to the higher levels of the system for further summarizing and analysis. The results of their diligence would then be fed into the Planning System to establish manufacturing goals and quotas. This system, by virtue of its supposedly scientific character, was considered inherently superior to whatever could be employed by the capitalist world with its chaotic relationship between manufacturers and consumers. The following example illustrates how this monumental system of statistical activities worked in reality.
In the early sixties, a new employee joined the staff of the statistical bureau of Kazakhstan located in the city of Alma-Ata, the capital of Kazakhstan.
Her first assignment was to collect data on the number of horses in every region of Kazakhstan. She had a few weeks to complete this job.
As the deadline for submitting the collected information to Moscow was approaching, the bureau's director asked the new worker when her report would be completed. The woman said that she was almost finished, and was waiting for only three more regions to supply the required information.
"Only three?" the director said. "We don't want to wait for them. Just see how many horses should be added to those reported by the rest of the regions in order to meet the goals of the plan for the entire republic. Then don't forget that, to please Moscow, the plan always has to be fulfilled by at least a hundred and ten or a hundred and fifteen percent. Just add those ten percent to the figure you calculate, and whatever number you get, include in the report one third of that number for each of the three regions that are late in reporting."
"But what if they later report different figures?"
"Don't worry. First of all, the regions' reports are prepared using the same technique. The regions collect data from the districts, and some districts will always be late in submitting their responses. So the actual number of horses is anyone's guess anyway. Second, who cares, as long as Moscow is pleased?"
9.17 A group of writers from Moscow went to visit Azerbaijan. Before they started their tour of the republic, the First Secretary of the Communist Party of Azerbaijan, Bagirov, summoned the writers to his office.
A well-known poet from Moscow, Pavel Antokolsky, happened to walk into Bagirov's office when the Party satrap had already started speaking. Bagirov looked at Antokolsky and said, "Who is that?"
"Poet Antokolsky," the latecomer mumbled.
"Sit down!" Bagirov barked.
Antokolsky hurriedly took a seat.
"Stand up!" Bagirov roared.
Antokolsky rapidly jumped up.
"Sit down!" Bagirov commanded again.
Antokolsky sat down.
And then Bagirov resumed his pep talk.
Later, Antokolsky explained why he so docilely obeyed Bagirov's capricious orders: "How could I not obey? I'm just a rank-and-file Party member, and he is the First Secretary of a republican Central Committee!"
|
The ancient city of Hama, in northern Syria, has a long history of violence: it has weathered the marches of Romans and Byzantines, the ravages of Turks and Mongols and the brutality of the Crusades. But none of these invaders had the tanks, heavy artillery and air power deployed by the Assad family on their own people.
Hama is currently in the grips of a bloody offensive by state security forces loyal to President Bashar Assad, seeking to root out dissidents who had wrested almost total control of the city in recent weeks from the Assad regime. Days of shooting and shelling, often through the muzzles of tanks, have led to nearly 150 civilian deaths, according to human rights groups.
Hama, of course, is no stranger to such government brutality. The largely Sunni city came under attack in 1982 when Bashar’s father Hafez decided to stomp out a budding Islamist insurgency, led by the Muslim Brotherhood, that had apparently taken root in the city. The Assads and many key members of their regime are Alawites, an obscure minority Muslim sect that has had control over the levers of power in the Ba’athist state for decades. Like in 1982, the Assad regime now also blames Syria’s instability on Islamist elements and “criminal gangs.” Here’s what TIME reported about the Hama massacre in 1982:
The fighting apparently began when security forces searched throughout Hama to uncover hideouts of the outlawed Muslim Brotherhood, a radical Islamic organization violently opposed to Assad’s secularist policies. Members of the Brotherhood reacted by attacking the homes of Baath Party officials and the police station. Describing the incident over Damascus Radio, Baath officials said the rebels, “driven like mad dogs by their black hatred, pounced on our comrades while sleeping in their homes and killed whomever they could of women and children, mutilating the bodies of the martyrs in the streets.”
When the rebels issued a dramatic call to arms over the loudspeakers atop the city’s minarets, the government responded in force. The old quarter was sealed off, helicopter gunships attacked insurgents from outlying villages rushing to aid the rebels, and heavy artillery was wheeled up. In the end, the vicious fighting was house to house. The government said it had discovered an arms cache containing 1,000 machine guns. Some observers believe that the arms were supplied by opponents of the Assad regime in Jordan, Iraq and Lebanon, and were being stockpiled in preparation for a major challenge to Assad’s rule.
The Brotherhood does not have a large following of its own in Syria, but has been directing an increasingly fierce terrorist campaign. Religious friction continues to smolder. Although the country is predominantly Sunni Muslim, Assad’s minority Alawite sect dominates the government and armed forces. Assad has also been challenged by elements in his own military, most recently in January, when some 150 officers in elite air force and armored units were arrested on charges of plotting a coup. Still, Western diplomats in the Middle East believe that Assad remains in command. There were no signs last week that the trouble in Hama was spreading elsewhere.∙
The scale of the massacres now does not rival that of three decades ago — between 10,000 and 20,000 civilians died in Hama in 1982, and much of the city’s historic old quarter was leveled — but, unlike then, the government offensives are lashing out across the country, far from Hama, in cities and towns whose names are becoming infamous for the bloodshed on their streets: Dara’a, Jisr al-Shoghour, Latakia, Homs and others.
|
‘People joke about burping cows without realizing how big the source really is,’ Stanford University scientist Rob Jackson says
Methane emissions have reached the highest levels on record across the globe, research finds.
Studies suggest that increases are being driven primarily by growth of emissions from coal mining, oil and natural gas production, cattle and sheep ranching and landfills.
The findings were outlined in two papers, published on Tuesday in Earth System Science Data and Environmental Research Letters, by researchers with the Global Carbon Project, an initiative led by Stanford University scientist Rob Jackson, according to a press statement.
Between 2000 and 2017, levels of the potent greenhouse gas barrelled up toward pathways that climate models suggest will lead to 3-4℃ of warming before the end of this century.
This is the temperature threshold at which scientists warn that natural disasters, including wildfires, droughts, and floods, and social disruptions such as famines and mass migrations become more likely.
In 2017, the latest year for which complete global methane data sets are available, Earth’s atmosphere absorbed nearly 600 million tons of the colourless, odourless gas, which is 28 times more powerful than carbon dioxide at trapping heat over a 100-year span.
More than half of all methane emissions now come from human activities. Annual methane emissions are up 9%, or 50 million tons per year, from the early 2000s, when methane concentrations in the atmosphere were relatively stable.
In terms of warming potential, adding this much extra methane to the atmosphere since 2000 is equivalent to putting 350 million more cars on the world’s roads or doubling the total emissions of Germany or France.
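The car-equivalence comparison above can be sanity-checked with back-of-the-envelope arithmetic. This is a minimal sketch: the 50 Mt/yr and 28x figures come from the article, while the per-car emission rate of roughly 4 tons of CO2 per year is an assumed round number, not a figure from the article.

```python
# Sanity check of the article's "350 million cars" comparison.
extra_ch4_mt_per_year = 50        # Mt CH4/yr added since the early 2000s (from article)
gwp_100yr = 28                    # CH4 warming potency vs CO2, 100-year span (from article)
co2e_mt = extra_ch4_mt_per_year * gwp_100yr   # CO2-equivalent, Mt/yr

car_t_co2_per_year = 4.0          # ASSUMPTION: an average car emits ~4 t CO2/yr
cars_millions = co2e_mt * 1e6 / car_t_co2_per_year / 1e6

print(co2e_mt)                    # 1400 (Mt CO2-equivalent per year)
print(cars_millions)              # 350.0 (million cars)
```

With that assumed per-car figure, the arithmetic lands exactly on the 350 million cars cited in the article.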
“We still haven’t turned the corner on methane,” said Jackson, a professor of Earth system science in Stanford’s School of Earth, Energy, and Environmental Sciences.
Globally, fossil fuel sources and cows are twin engines powering methane’s upward climb. “Emissions from cattle and other ruminants are almost as large as those from the fossil fuel industry for methane,” Jackson said. “People joke about burping cows without realizing how big the source really is.”
Throughout the study period, agriculture accounted for roughly two-thirds of all methane emissions related to human activities; fossil fuels contributed most of the remaining third. However, those two sources have contributed in roughly equal measure to the increases seen since the early 2000s.
Methane emissions from agriculture rose to 227 million tons of methane in 2017, up nearly 11% from the 2000-2006 average. Methane from fossil fuel production and use reached 108 million tons in 2017, up nearly 15% from the earlier period.
Amid the coronavirus pandemic, carbon emissions plummeted as manufacturing and transportation ground to a halt. “There’s no chance that methane emissions dropped as much as carbon dioxide emissions because of the virus,” Jackson said. “We’re still heating our homes and buildings, and agriculture keeps growing.”
Methane emissions rose most sharply in Africa and the Middle East, China, and South Asia and Oceania, which includes Australia and many Pacific islands.
Each of these three regions increased emissions by an estimated 10 to 15 million tons per year during the study period.
The United States followed close behind, increasing methane emissions by 4.5 million tons, mostly due to more natural gas drilling, distribution and consumption.
“Natural gas use is rising quickly here in the US and globally,” Jackson said. “It’s offsetting coal in the electricity sector and reducing carbon dioxide emissions, but increasing methane emissions in that sector.”
The US and Canada are also producing more natural gas. “As a result, we’re emitting more methane from oil and gas wells and leaky pipelines,” said Jackson, who is also a senior fellow at Stanford’s Woods Institute for the Environment and Precourt Institute for Energy.
Europe stands out as the only region where methane emissions have decreased over the last two decades, in part by tamping down emissions from chemical manufacturing and growing food more efficiently.
“Policies and better management have reduced emissions from landfills, manure, and other sources here in Europe. People are also eating less beef and more poultry and fish,” said Marielle Saunois of the Université de Versailles Saint-Quentin in France, lead author of the paper in Earth System Science Data.
Tropical and temperate regions have seen the biggest jump in methane emissions. Boreal and polar systems have played a lesser role. Despite fears that melting in the Arctic may unlock a burst of methane from thawing permafrost, the researchers found no evidence for increasing methane emissions in the Arctic – at least through 2017.
Human-driven emissions are in many ways easier to pin down than those from natural sources. “We have a surprisingly difficult time identifying where methane is emitted in the tropics and elsewhere because of daily to seasonal changes in how waterlogged soils are,” said Jackson, who also leads a group at Stanford working to map wetlands and waterlogged soils worldwide using satellites, flux towers and other tools.
According to Jackson and colleagues, curbing methane emissions will require reducing fossil fuel use and controlling fugitive emissions such as leaks from pipelines and wells, as well as changes to the way we feed cattle, grow rice and eat.
“We’ll need to eat less meat and reduce emissions associated with cattle and rice farming,” Jackson said, “and replace oil and natural gas in our cars and homes.”
Feed supplements such as algae may help to reduce methane burps from cows, and rice farming can transition away from permanent water-logging that maximizes methane production in low-oxygen environments.
Aircraft, drones, and satellites show promise for monitoring methane from oil and gas wells. Jackson said: “I’m optimistic that, in the next five years, we’ll make real progress in that area.”
|
In general terms, geothermal energy is thermal energy (the energy that determines the temperature of matter) generated and stored in the Earth. The geothermal energy of the Earth's crust originates from the original formation of the planet and from radioactive decay of minerals, resulting in continual production of geothermal energy below the earth's surface. The geothermal gradient, which is the difference in temperature between the core of the planet and its surface, drives a continuous conduction of thermal energy in the form of heat from the core to the surface.
In terms of alternative energy, geothermal energy is the energy that is harnessed from the Earth's internal heat and used for practical purposes, such as heating buildings or generating electricity. It also refers to the technology for converting geothermal energy into usable energy. The term geothermal power is used synonymously for the conversion of the Earth's internal heat into a useful form of energy, or more specifically for the generation of electricity from this thermal energy (geothermal electricity).
The four basic means for capturing geothermal energy for practical use are geothermal power plants (dry steam, flash steam, binary cycle), geothermal heat pumps, direct use, and enhanced geothermal systems.
Geothermal provides a huge, reliable, renewable resource, unaffected by changing weather conditions. It reduces reliance on fossil fuels and their inherent price unpredictability, and when managed with sensitivity to the site capacity, it is sustainable. Furthermore, technological advances have dramatically expanded the range and size of viable resources.
However, geothermal also faces challenges: it requires significant capital investment and a significant amount of time to build plants. Placement of geothermal plants is limited to regions with accessible deposits of high-temperature ground water, and construction of power plants can adversely affect land stability. Geothermal power plants can also lead to undesirable emissions: plants emit low levels of carbon dioxide, nitric oxide, sulfur, and methane, and hot water from geothermal sources may hold in solution trace amounts of toxic elements, such as mercury, boron, and arsenic.
The Earth's geothermal energy comes from the heat from the original formation of the planet (about 20%) and from the thermal energy continually generated by the radioactive decay of minerals (80%) (Turcotte and Schubert 2002). The major heat-producing isotopes in Earth are potassium-40, uranium-238, uranium-235, and thorium-232 (Sanders 2003; UCS 2014).
The Earth's internal thermal energy flows to the surface by conduction at a rate of 44.2 terawatts (TW) (Pollack et al. 1993), and is replenished by radioactive decay of minerals at a rate of 30 TW (Ryback 2007). These power rates are more than double humanity’s current energy consumption from all primary sources, but most of this energy flow is not recoverable. In addition to the internal heat flows, the top layer of the surface to a depth of 10 meters (33 ft) is heated by solar energy during the summer, and releases that energy and cools during the winter.
Outside of the seasonal variations, the geothermal gradient of temperatures through the crust is 25–30 °C (77–86 °F) per kilometer of depth in most of the world. The conductive heat flux averages 0.1 MW/km2. These values are much higher near tectonic plate boundaries where the crust is thinner. They may be further augmented by fluid circulation, either through magma conduits, hot springs, hydrothermal circulation or a combination of these.
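A quick sketch of the figures above: the temperature gain with depth implied by the stated gradient, and the global-average conductive flux implied by the 44.2 TW total flow. The surface temperature and well depth used here are illustrative assumptions, not values from the text.

```python
# Temperature at depth from the stated geothermal gradient.
gradient_c_per_km = 27.5          # midpoint of the 25-30 degC/km range (from article)
depth_km = 3.0                    # ASSUMPTION: a typical existing geothermal well depth
surface_temp_c = 15.0             # ASSUMPTION: rough mean surface temperature
temp_at_depth = surface_temp_c + gradient_c_per_km * depth_km
print(temp_at_depth)              # 97.5 (degC at 3 km)

# Global average conductive heat flux implied by the 44.2 TW total flow.
total_flow_tw = 44.2              # from article (Pollack et al. 1993)
earth_area_km2 = 5.1e8            # Earth's total surface area
avg_flux_mw_per_km2 = total_flow_tw * 1e6 / earth_area_km2
print(round(avg_flux_mw_per_km2, 3))   # 0.087, consistent with the ~0.1 MW/km2 average
```

Dividing the total conductive flow over the whole surface reproduces the roughly 0.1 MW/km2 average flux cited above; fluxes near plate boundaries are much higher.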
Geothermal energy is considered "sustainable energy" and a "renewable energy resource" because the thermal energy is constantly replenished and the extraction by people is small relative to total content (Ryback 2007). Although the planet is slowly cooling, human extraction taps a minute fraction of the natural outflow, often without accelerating it.
The Earth's geothermal resources are theoretically more than adequate to supply humanity's energy needs, but only a very small fraction may be profitably exploited. Estimates of exploitable worldwide geothermal energy resources vary considerably. According to a 1999 study, it was thought that this might amount to between 65 and 138 GW of electrical generation capacity "using enhanced technology" (GEA 2006). This study did not assess the potential with significantly new technologies (GEA 2006). Other estimates range from 35 to 2000 GW of electrical generation capacity, with a further potential for 140 EJ/year of direct use (Fridleifsson et al. 2008).
If heat recovered by ground source heat pumps is included, the non-electric generating capacity of geothermal energy is estimated at more than 100 GW (gigawatts of thermal power) and is used commercially in over 70 countries (MIT 2006). A 2006 report by MIT that took into account the use of Enhanced Geothermal Systems (EGS) concluded that it would be affordable to generate 100 GWe (gigawatts of electricity) or more by 2050, just in the United States, for a maximum investment of 1 billion US dollars in research and development over 15 years (MIT 2006). The MIT report calculated the world's total EGS resources to be over 13 YJ, of which over 200 ZJ would be extractable, with the potential to increase this to over 2 YJ with technology improvements—sufficient to provide all the world's energy needs for several millennia. The total heat content of the Earth is 13,000,000 YJ (Fridleifsson et al. 2008).
Within about 10,000 meters (33,000 feet) of the Earth's surface there is considered to be about 50,000 times the amount of energy in geothermal energy resources as in all the world's oil and natural gas resources (UCS 2009).
The world's biggest geothermal energy resources are in China; the second-largest are in Hungary. Relative to its size (about the area of Illinois), Hungary has the richest such resources per square mile. The world's biggest producer of electricity from geothermal sources is the Philippines. Other important countries are Nicaragua, Iceland, and New Zealand.
The adjective geothermal originates from the Greek roots γη (ge), meaning earth, and θερμος (thermos), meaning hot.
Geothermal energy/power is produced by tapping into the thermal energy created and stored within the earth. The four basic categories for capturing geothermal energy for practical use are geothermal power plants (dry steam, flash steam, binary cycle), geothermal heat pumps, direct use, and enhanced geothermal systems.
Geothermal energy is used commercially in over 70 countries (MIT 2006). In 2004, 200 petajoules (56 TWh) of electricity was generated from geothermal resources, and an additional 270 petajoules (75 TWh) of geothermal energy was used directly, mostly for space heating. In 2007, the world had a global capacity for 10 GW of electricity generation and an additional 28 GW of direct heating, including extraction by geothermal heat pumps (Fridleifsson et al. 2008). Heat pumps are small and widely distributed, so estimates of their total capacity are uncertain and range up to 100 GW (MIT 2006).
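The petajoule and terawatt-hour figures above are two statements of the same quantities, since 1 TWh = 3.6 PJ (one watt-hour is 3,600 joules). A minimal conversion check:

```python
# Convert petajoules to terawatt-hours: 1 TWh = 3.6e15 J = 3.6 PJ.
def pj_to_twh(pj):
    return pj / 3.6

# The 2004 figures cited above:
print(round(pj_to_twh(200)))   # 56 (TWh of geothermal electricity)
print(round(pj_to_twh(270)))   # 75 (TWh of direct geothermal heat)
```

Both cited TWh values match their petajoule counterparts under this conversion.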
Estimates of the potential for electricity generation from geothermal energy vary sixfold, from 0.035 to 2 TW, depending on the scale of investments (Fridleifsson 2008). Upper estimates of geothermal resources assume enhanced geothermal wells as deep as 10 kilometers (6 mi), whereas existing geothermal wells are rarely more than 3 kilometers (2 mi) deep (Fridleifsson 2008). Wells of this depth are now common in the petroleum industry.
In the United States, according to the Geothermal Energy Association's 2013 Annual GEA Industry Update, total installed U.S. geothermal capacity was estimated at 3,386 MW and the installed geothermal capacity grew by 5%, or 147.05 MW, since the previous annual survey in March 2012 (GEA 2013). This report noted that geothermal power plants were operating in eight states (Alaska, California, Hawaii, Idaho, Nevada, Oregon, Utah and Wyoming), and geothermal development was taking place in 6 more (Arizona, Colorado, North Dakota, New Mexico, Texas and Washington) (GEA 2013).
In the United States, as noted above, most geothermal power plants are located in the western states (EIA 2011). California produces the most electricity from geothermal (EIA 2011), with installed capacity estimated to 2,732.2 MW in 2012, while the USA’s second leading geothermal state, Nevada, reached 517.5 MW (GEA 2013). There are a number of geothermal plants concentrated in south central California, on the southeast side of the Salton Sea, near the cities of Niland and Calipatria, California. The Basin and Range geologic province in Nevada, southeastern Oregon, southwestern Idaho, Arizona, and western Utah is now an area of rapid geothermal development.
The type of source determines which method can be used to capture geothermal energy for electricity production or other practical use. Flash plants are the most common way to generate electricity from liquid-dominated reservoirs (LDRs). LDRs are more common at temperatures greater than 200 °C (392 °F) and are found near young volcanoes surrounding the Pacific Ocean and in rift zones and hot spots. Pumps are generally not required; the flow is driven instead by the water flashing to steam. Lower-temperature LDRs (120–200 °C) require pumping. They are common in extensional terrains, where heating takes place via deep circulation along faults, such as in the Western United States and Turkey. Lower-temperature sources produce the energy equivalent of 100 million barrels of oil per year. Sources with temperatures of 30–150 °C are used without conversion to electricity, for such purposes as district heating, greenhouses, fisheries, mineral recovery, industrial process heating, and bathing, in 75 countries. Heat pumps extract energy from shallow sources at 10–20 °C for use in space heating and cooling. Home heating is the fastest-growing means of exploiting geothermal energy, with a global annual growth rate of 30% in 2005 (Lund et al. 2005) and 20% in 2012 (Moore and Simmons 2013).
Heating is cost-effective at many more sites than electricity generation. At natural hot springs or geysers, water can be piped directly into radiators. In hot, dry ground, earth tubes or downhole heat exchangers can collect the heat. However, even in areas where the ground is colder than room temperature, heat can often be extracted with a geothermal heat pump more cost-effectively and cleanly than by conventional furnaces. These devices draw on much shallower and colder resources than traditional geothermal techniques. They frequently combine functions, including air conditioning, seasonal thermal energy storage, solar energy collection, and electric heating. Heat pumps can be used for space heating essentially anywhere.
Geothermal power plants use the heat from deep inside the Earth to pump hot water or hot steam to the surface to power generators. Such power plants drill their own wells into the rock to effectively capture the hot water or steam.
Such plants are often placed in areas with plenty of geysers, active or geologically young volcanoes, or natural hot springs, because these are areas where the Earth is particularly hot at a reasonable distance from the surface. The water in such regions can also be more than 200°C (392°F) just below the surface.
There are three different designs for geothermal power plants: dry steam, flash steam, and binary cycle. All of these bring hot water or steam from the ground, use it to power generators, and then inject the condensed steam and remaining geothermal fluid back into the ground to pick up more heat and prolong the heat source. The design selected for generating power from geothermal energy depends on the temperature, depth, and quality of the water and steam in the area. If the hot water is high enough in temperature, a flash system can be used. If it comes out as steam, it can be used directly to power the turbine with the dry steam design. If it is not high enough in temperature, then the binary cycle can be used: the water passes through a heat exchanger to heat a second liquid that boils at a lower temperature than water and can be converted to steam to power the turbine (UCS 2009).
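The selection logic described above can be sketched as a small decision function. The thresholds used here (235 °C for dry steam, 182 °C for flash) are the figures given for those plant types elsewhere in this article; the function itself is purely illustrative.

```python
# Illustrative sketch of geothermal plant-design selection by resource
# temperature and phase; thresholds taken from this article's figures.
def select_plant_design(resource_temp_c, phase):
    """Return the plant design suited to a geothermal resource."""
    if phase == "steam" and resource_temp_c >= 235:
        return "dry steam"       # steam drives the turbine directly
    if phase == "water" and resource_temp_c >= 182:
        return "flash steam"     # pressure drop flashes hot water to steam
    return "binary cycle"        # heat exchanger boils a lower-boiling fluid

print(select_plant_design(250, "steam"))   # dry steam
print(select_plant_design(200, "water"))   # flash steam
print(select_plant_design(150, "water"))   # binary cycle
```

The binary cycle serves as the fallback because it can exploit the coolest (and most common) reservoirs.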
A dry steam power plant uses hot steam, typically above 235°C (455°F), to power its turbines directly. This is the oldest type of power plant and is still in use today. It is the simplest design: steam goes directly through the turbine to power the generators, is then condensed into water in a cooling tower/condenser, and is then returned to the ground.
The largest dry steam field in the world is The Geysers, 72 miles (116 km) north of San Francisco. The area was well known for hot springs, but actually does not have geysers, and the heat used is steam, not hot water. The Geysers began operation in 1960, and by 1990, 26 power plants had been built in the area, with a capacity of more than 2,000 MW. However, the steam resource has been declining since 1988, due to the technology used and the rapid development of the area. The Geysers still had a net operating capacity of 725 MW by 2009, and the rocks underground remain hot (UCS 2009).
Flash steam power plants use hot water above 182°C (360°F) from geothermal reservoirs and add a flash tank to the dry steam design. As the water is pumped from the reservoir to the power plant, the drop in pressure in the flash tank causes it to vaporize ("flash") into steam, which then flows past the turbine, powering the electric generators. Any water not flashed into steam is injected back into the reservoir for reuse, as is the water captured from the steam after it has driven the turbines.
As noted above, flash steam plants are the most common way to generate electricity from liquid-dominated reservoirs (LDRs), which are often found near young volcanoes surrounding the Pacific Ocean and in rift zones and hot spots.
The third design, the binary cycle system or binary system, adds a heat exchanger to the design in order to use hot water that is cooler than that of the flash steam plants. The hot fluid from geothermal reservoirs is passed through a heat exchanger, which transfers heat to a separate pipe containing a fluid with a much lower boiling point, and thus more easily converted to steam. These fluids, usually isobutane or isopentane, running through a closed loop, are vaporized to produce the steam that powers the turbine. The water from the ground is used only to transfer its heat to the second fluid and is returned to the ground.
The advantage to binary cycle power plants is their lower cost and increased efficiency. These plants also do not emit any excess gas and are able to utilize lower temperature reservoirs, which are much more common. Most geothermal power plants planned for construction are binary cycle.
A geothermal heat pump (GHP) can be used to extract heat from the ground to provide heating and cooling for buildings. Geothermal heat pumps are also known as ground-source heat pumps, GeoExchange heat pumps, earth-coupled heat pumps, and water-source heat pumps (USDOE 2012). These systems take advantage of the fact that a few feet below the Earth's surface, the temperature of the ground remains relatively constant and thus warmer than the air in cold weather and colder than the air in warm weather. Using water or refrigerant, the pumps utilize pipes buried underground to move heat from the ground to the building during cold weather and from the building to the ground during warm weather. Some systems combine an air-source heat pump with a geothermal heat pump.
Heat pumps can range from simple systems, involving a tube that runs from the outside air, under the ground, and into a house's ventilation system, to more complex systems involving compressors and pumps to maximize heat transfer (UCS 2009). Enough heat can be extracted from shallow ground anywhere in the world to provide home heating, but industrial applications need the higher temperatures of deep resources.
GHPs can be much more efficient than electric heating and cooling, and are particularly energy-efficient in regions with temperature extremes. As of 2009, there were more than 600,000 geothermal heat pumps in use in homes and other buildings in the United States, with new installations at about 60,000 per year. The United States Department of Energy estimated that the pumps can save a typical home hundreds of dollars in energy costs per year. However, GHPs have high up-front costs, and installation can be difficult, as it involves digging up areas around the building (UCS 2009).
Four basic designs are typically utilized for geothermal heat pump systems: horizontal closed-loop systems, vertical closed-loop systems, pond/lake closed-loop systems, and the open-loop option. There are variants of these systems, as well as hybrid systems that use different geothermal resources (USDOE 2012).
In general, closed loop systems typically circulate an antifreeze solution through a closed loop buried in the ground or immersed in water. Heat is transferred between the refrigerant in the heat pump and the antifreeze solution in the closed loop via a heat exchanger. The possible configurations for the loop are horizontal, vertical, and pond/lake. One variant, direct exchange, does not use a heat exchanger but instead pumps the refrigerant directly through tubing buried in the ground (USDOE 2012).
Open-loop systems utilize surface body water or well water as the heat-exchange fluid and circulate it directly through the GHP system. After the water circulates through the system, it is returned to the ground through the well, a recharge well, or surface discharge. This requires a sufficient supply of relatively clean water (USDOE 2012).
Some areas have geothermal resources that can be used directly for heating purposes. For example, hot spring water is used to heat greenhouses, spas, fish farms, and so forth (UCS 2009).
Iceland is the world leader in direct applications. More than fifty percent of its energy comes from geothermal resources (UCS 2009), and some 93% of its homes are heated with geothermal energy, saving Iceland over $100 million annually in avoided oil imports (Pahl 2007). Reykjavík, Iceland, has the world's biggest district heating system, bringing in hot water from 25 kilometers away (UCS 2009). Once known as the most polluted city in the world, it is now one of the cleanest (Pahl 2007).
In the United States, Boise, Idaho and Klamath Falls, Oregon have used geothermal water to heat buildings and homes for more than a century (UCS 2009).
Although geothermal heat is everywhere below the Earth's surface, only about ten percent of the land surface area has conditions where water circulates near enough to the surface to be easily captured (UCS 2009). Enhanced geothermal systems allow heat to be captured even in these dry locations. The technique also is effective at locations where the natural supply of water producing steam from hot underground magma deposits has been exhausted.
Enhanced geothermal systems (EGS) actively inject water into wells to be heated and pumped back out. The water is injected under high pressure to expand existing rock fissures to enable the water to freely flow in and out. The technique was adapted from oil and gas extraction techniques. However, the geologic formations are deeper and no toxic chemicals are used, reducing the possibility of environmental damage. Drillers can employ directional drilling to expand the size of the reservoir (Moore and Simmons 2013).
The key characteristic of an EGS is that it reaches at least 10 km down into hard rock. Drilling at this depth is now routine for the oil industry (Exxon announced an 11 km hole at the Chayvo field, Sakhalin). At a typical EGS site two holes would be bored and the deep rock between them fractured. Water would be pumped down one and steam would come up the other. The technological challenges are to drill wider bores and to break rock over larger volumes. Apart from the energy used to make the bores, the process releases no greenhouse gases.
The world's total EGS resources have been estimated to be over 13,000 ZJ, of which over 200 ZJ would be extractable, with the potential to increase this to over 2,000 ZJ with technology improvements—sufficient to provide all the world's energy needs for 30,000 years (MIT 2006).
The International Geothermal Association (IGA) reported in 2010 that 10,715 megawatts (MW) of geothermal power in 24 countries was online and was expected to generate 67,246 GWh of electricity in 2010 (Holm et al. 2010). This represents a 20% increase in online capacity since 2005.
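As a quick sanity check, the IGA figures can be combined to give the implied worldwide average capacity factor. A sketch in Python; the 8,760 hours per year is the standard conversion, not a figure from the report:

```python
# Implied worldwide figures from the IGA's 2010 report.
installed_mw = 10_715   # MW of geothermal capacity online in 2010
expected_gwh = 67_246   # GWh expected to be generated in 2010

# Capacity factor = actual generation / (capacity * hours in a year).
max_possible_gwh = installed_mw * 8_760 / 1_000   # MW·h -> GWh
capacity_factor = expected_gwh / max_possible_gwh
print(f"Implied capacity factor: {capacity_factor:.0%}")   # ≈ 72%

# A 20% increase since 2005 implies the earlier capacity was roughly:
print(f"Implied 2005 capacity: {installed_mw / 1.20:,.0f} MW")   # ≈ 8,929 MW
```

The roughly 72% implied capacity factor is consistent with the high utilization geothermal plants are known for.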
In 2010, the United States led the world in geothermal electricity production with 3,086 MW of installed capacity from 77 power plants (Holm et al. 2010). The largest group of geothermal power plants in the world is located at The Geysers. The Philippines is the second highest producer, with 1,904 MW of capacity online in 2010; geothermal power makes up approximately 27% of Philippine electricity generation (Holm et al. 2010).
Geothermal power is generated in over 20 countries around the world including Iceland (producing over 26 percent of its electricity from geothermal sources in 2006), the United States, Italy, France, New Zealand, Mexico, Nicaragua, Costa Rica, Russia, the Philippines (production capacity of 1,931 MW, second to the United States; 27 percent of its electricity), Indonesia, the People's Republic of China, and Japan. Canada's government (which officially notes some 30,000 earth-heat installations for providing space heating to Canadian residential and commercial buildings) reports a test geothermal-electrical site in the Meager Mountain–Pebble Creek area of British Columbia, where a 100 MW facility could be developed.
In the United States, geothermal is one of the renewable energy resources used to produce electricity, but its growth is slower than that of wind and solar development. A November 2011 report noted that geothermal produced just 0.4% of the electricity from all sectors nationally during the first 8 months of that year, with 10,898 million kilowatt-hours (kWh) produced during that time. However, about 5% of the electricity generated in California was produced from geothermal, and significant additional geothermal resources remain that could be utilized (EIA 2011).
Geothermal energy is typically used to generate electricity via a well drilled into an underground reservoir of water that can be as hot as 371 degrees Celsius (700 degrees Fahrenheit). Geothermal electric plants were traditionally built exclusively on the edges of tectonic plates, where high temperature geothermal resources are available near the surface. The development of binary cycle power plants and improvements in drilling and extraction technology enable enhanced geothermal systems over a much greater geographical range (MIT 2006).
The thermal efficiency of geothermal electric plants is low, around 10–23%, because geothermal fluids do not reach the high temperatures of steam from boilers. The laws of thermodynamics limit the efficiency of heat engines in extracting useful energy. Exhaust heat is wasted, unless it can be used directly and locally, for example in greenhouses, timber mills, and district heating. System efficiency does not materially affect operational costs as it would for plants that use fuel, but it does affect return on the capital used to build the plant. In order to produce more energy than the pumps consume, electricity generation requires relatively hot fields and specialized heat cycles. Because geothermal power does not rely on variable sources of energy, unlike, for example, wind or solar, its capacity factor can be quite large—up to 96% has been demonstrated (Lund 2003).
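The efficiency ceiling described here follows from the Carnot limit on heat engines. A small sketch; the 150 °C and 550 °C values are illustrative assumptions for a geothermal fluid and a fossil boiler, not figures from this text:

```python
# Carnot upper bound on any heat engine: eta = 1 - T_cold / T_hot,
# with temperatures in kelvin. The 150 °C and 550 °C source
# temperatures below are illustrative assumptions.
def carnot_efficiency(t_hot_c: float, t_cold_c: float) -> float:
    t_hot, t_cold = t_hot_c + 273.15, t_cold_c + 273.15
    return 1 - t_cold / t_hot

print(f"150 °C geothermal fluid: {carnot_efficiency(150, 25):.0%}")   # ≈ 30%
print(f"550 °C boiler steam:     {carnot_efficiency(550, 25):.0%}")   # ≈ 64%
```

Real plants achieve well under these theoretical bounds, so the 10–23% range for low-temperature geothermal resources is expected.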
Hot springs have been used for bathing at least since Paleolithic times (Cataldi 1992). The oldest known spa is a stone pool on China's Lisan mountain built in the Qin Dynasty in the 3rd century B.C.E., at the same site where the Huaqing Chi palace was later built. In the first century C.E., Romans conquered Aquae Sulis, now Bath, Somerset, England, and used the hot springs there to feed public baths and underfloor heating. The admission fees for these baths probably represent the first commercial use of geothermal power.
The world's oldest geothermal district heating system in Chaudes-Aigues, France, has been operating since the 14th century (Lund 2007). The earliest industrial exploitation began in 1827 with the use of geyser steam to extract boric acid from volcanic mud in Larderello, Italy.
In 1892, America's first district heating system in Boise, Idaho was powered directly by geothermal energy, and was copied in Klamath Falls, Oregon in 1900. A deep geothermal well was used to heat greenhouses in Boise in 1926, and geysers were used to heat greenhouses in Iceland and Tuscany at about the same time (Dickson and Fanelli 2004). Charlie Lieb developed the first downhole heat exchanger in 1930 to heat his house. Steam and hot water from geysers began heating homes in Iceland starting in 1943.
In the 20th century, demand for electricity led to the consideration of geothermal power as a generating source. Prince Piero Ginori Conti tested the first geothermal power generator on 4 July 1904, at the same Larderello dry steam field where geothermal acid extraction began. It successfully lit four light bulbs (Tiwari and Ghosal 2005). Later, in 1911, the world's first commercial geothermal power plant was built there. It was the world's only industrial producer of geothermal electricity until New Zealand built a plant in 1958. In 2012, it produced some 594 megawatts (Moore and Simmons 2013).
Lord Kelvin invented the heat pump in 1852, and Heinrich Zoelly had patented the idea of using it to draw heat from the ground in 1912 (Zogg 2008). But it was not until the late 1940s that the geothermal heat pump was successfully implemented. The earliest one was probably Robert C. Webber's home-made 2.2 kW direct-exchange system, but sources disagree as to the exact timeline of his invention (Zogg 2008). J. Donald Kroeker designed the first commercial geothermal heat pump to heat the Commonwealth Building (Portland, Oregon) and demonstrated it in 1946 (Kroeker and Chewning 1948). Professor Carl Nielsen of Ohio State University built the first residential open loop version in his home in 1948 (Gannon 1978). The technology became popular in Sweden as a result of the 1973 oil crisis, and has been growing slowly in worldwide acceptance since then. The 1979 development of polybutylene pipe greatly augmented the heat pump’s economic viability (Bloomquist 1999).
The binary cycle power plant was first demonstrated in 1967 in the USSR and later introduced to the US in 1981 (Lund 2004). This technology allows the generation of electricity from much lower temperature resources than previously. In 2006, a binary cycle plant in Chena Hot Springs, Alaska, came on-line, producing electricity from a record low fluid temperature of 57 °C (135 °F) (Erkan et al. 2008).
Geothermal energy offers a huge, reliable, renewable resource. It is sustainable when managed with sensitivity to the site capacity; for example, the hot water extracted in the geothermal process can be re-injected into the ground to produce more steam. It also is a source that is unaffected by changing weather conditions. Furthermore, technological advances have dramatically expanded the range and size of viable resources, especially for applications such as home heating, opening a potential for widespread exploitation. Geothermal wells do release greenhouse gases trapped deep within the earth, but these emissions are much lower per energy unit than those of fossil fuels.
From an economic view, geothermal energy is price competitive in some areas. It also reduces reliance on fossil fuels and their inherent price unpredictability; geothermal power requires little fuel, except for purposes like pumps. Given enough excess capacity, geothermal energy can also be sold to outside sources such as neighboring countries or private businesses that require energy. It also offers a degree of scalability: a large geothermal plant can power entire cities while smaller power plants can supply more remote sites such as rural villages.
Geothermal has minimal land and freshwater requirements. Geothermal plants use 3.5 square kilometers (1.4 sq mi) per gigawatt of electrical production (not capacity) versus 32 square kilometers (12 sq mi) and 12 square kilometers (4.6 sq mi) for coal facilities and wind farms respectively (Lund 2007). They use 20 liters of freshwater per MW·h versus over 1000 liters per MW·h for nuclear, coal, or oil (Lund 2007).
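These footprint figures are easier to compare side by side. A short sketch restating the Lund (2007) numbers:

```python
# Land use per gigawatt of electrical production (Lund 2007).
land_km2_per_gw = {"geothermal": 3.5, "wind": 12.0, "coal": 32.0}

for source, km2 in sorted(land_km2_per_gw.items(), key=lambda kv: kv[1]):
    ratio = km2 / land_km2_per_gw["geothermal"]
    print(f"{source:>10}: {km2:5.1f} km² per GW ({ratio:.1f}x geothermal)")

# Freshwater: ~20 L/MW·h for geothermal vs >1000 L/MW·h for nuclear,
# coal, or oil plants.
print(f"Water use ratio: >{1000 / 20:.0f}x for conventional thermal plants")
```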
Several entities, such as the National Renewable Energy Laboratory and Sandia National Laboratories, conduct research toward the goal of establishing a proven science around geothermal energy. The International Centre for Geothermal Research (IGC), a German geosciences research organization, is largely focused on geothermal energy development research.
However, use of geothermal energy also faces several challenges. For one, geothermal plants generally are site-specific, limited to regions with accessible deposits of high temperature ground water. Capital costs also are significant: drilling and exploration for deep resources are very expensive, with drilling accounting for over half the costs, and exploration of deep resources entails significant risks. Completing a geothermal plant takes significant time (four to eight years), longer than for wind or solar installations, and there is a lack of transmission lines (EIA 2011).
There also are several environmental concerns behind geothermal energy.
For one, there can be negative impacts on surrounding lands. Construction of the power plants can adversely affect land stability in the surrounding region, and land subsidence can become a problem as older wells begin to cool down. Also, increased seismic activity can occur because of well drilling. Subsidence has occurred in the Wairakei field in New Zealand (Lund 2007). In Staufen im Breisgau, Germany, tectonic uplift occurred instead, due to a previously isolated anhydrite layer coming in contact with water and turning into gypsum, doubling its volume. Enhanced geothermal systems can trigger earthquakes as part of hydraulic fracturing. The project in Basel, Switzerland was suspended because more than 10,000 seismic events measuring up to 3.4 on the Richter scale occurred over the first 6 days of water injection (Deichmann et al. 2007).
Geothermal power plants can also lead to undesirable emissions. Dry steam and flash steam power plants emit low levels of carbon dioxide, nitric oxide, and sulfur, at roughly 5 percent of the levels emitted by fossil fuel power plants. Fluids drawn from the deep earth carry a mixture of gases, notably carbon dioxide (CO2), hydrogen sulfide (H2S), methane (CH4) and ammonia (NH3). These pollutants contribute to acid rain and noxious smells if released, and include some important greenhouse gases. Existing geothermal electric plants emit an average of 122 kilograms (270 lb) of CO2 per megawatt-hour (MW·h) of electricity, a small fraction of the emission intensity of conventional fossil fuel plants (Bertani and Thain 2002). Plants that experience high levels of acids and volatile chemicals are usually equipped with emission-control systems to reduce the exhaust.
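To put the 122 kg CO2 per MW·h figure in perspective, it can be compared against typical fossil-plant intensities. The fossil figures below are common literature values assumed here for illustration; only the geothermal number comes from this article:

```python
# Geothermal emission intensity from the text; fossil values are typical
# literature figures assumed here for comparison only.
geothermal_kg_per_mwh = 122   # Bertani and Thain (2002)
assumed_fossil = {"coal": 1000, "combined-cycle gas": 450}

for fuel, kg in assumed_fossil.items():
    fraction = geothermal_kg_per_mwh / kg
    print(f"Geothermal emits ~{fraction:.0%} of {fuel}'s CO2 per MW·h")
```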
In addition to dissolved gases, hot water from geothermal sources may hold in solution trace amounts of toxic elements such as mercury, arsenic, boron, and antimony (Bargali et al. 1997). These chemicals precipitate as the water cools, and can cause environmental damage if released. The modern practice of injecting cooled geothermal fluids back into the Earth to stimulate production has the side benefit of reducing this environmental risk.
Direct geothermal heating systems contain pumps and compressors, which may consume energy from a polluting source. This parasitic load is normally a fraction of the heat output, so it is always less polluting than electric heating. However, if the electricity is produced by burning fossil fuels, then the net emissions of geothermal heating may be comparable to directly burning the fuel for heat. For example, a geothermal heat pump powered by electricity from a combined cycle natural gas plant would produce about as much pollution as a natural gas condensing furnace of the same size (Hanova and Dowlatabai 2007). Therefore the environmental value of direct geothermal heating applications is highly dependent on the emissions intensity of the neighboring electric grid.
New World Encyclopedia writers and editors rewrote and completed the Wikipedia article in accordance with New World Encyclopedia standards. This article abides by terms of the Creative Commons CC-by-sa 3.0 License (CC-by-sa), which may be used and disseminated with proper attribution. Credit is due under the terms of this license that can reference both the New World Encyclopedia contributors and the selfless volunteer contributors of the Wikimedia Foundation. To cite this article click here for a list of acceptable citing formats. The history of earlier contributions by wikipedians is accessible to researchers here:
Note: Some restrictions may apply to use of individual images which are separately licensed.
|
A Guide to Blanching and Freezing Fresh Produce
See our new Guide to Freezing Fresh Produce! Freezing is the most convenient way to make the most of summer’s bounty and avoid waste. Here’s how to blanch, handy tips to make the most of freezer space, and the easiest crops to put on ice.
Summer’s bountiful harvests are very rewarding, but can sometimes feel a little overwhelming. Why not put aside some of this bounty to enjoy later on in the year? In this article (with video), we’ll show you how to blanch and freeze your vegetables and fruits.
How to Freeze Fresh Produce
There are lots of ways to preserve your fruits and vegetables, but when everything seemingly needs harvesting at the same time, you can’t beat the convenience of the freezer! Here’s the process—from preparing produce, to blanching, and finally, freezing for long-term storage.
Preparing Vegetables for the Freezer
Almost anything can be frozen, with the exception of salads and vegetables like cucumbers with a very high water content. Only ever freeze produce that’s in good condition and that you wouldn’t mind eating fresh. No stringy beans or woody roots, please!
Harvest as close to freezing the produce as you can to lock in freshness at its peak. Process picked fruits and vegetables in batches, so you can get them into the freezer as fast as possible.
Beans, peas and other vegetables with a high sugar content like sweet corn and young carrots are staples of the freezer. Cut the ends off of beans, then chop larger beans in half. Whole cobs of corn take up a lot of room, so if space is a concern, remove the corn kernels from the cob beforehand. A clean and tidy way to do this is to carefully pop out the first row of kernels with a knife before simply pushing out each successive row of kernels using little more than your fingertips. This method preserves the entire kernel, minimizing waste.
Before being frozen, most vegetables will need to be blanched (fruits and berries do not). Blanching is the process of scalding vegetables in boiling water before freezing them, which is done for a few reasons:
- to stop enzymes from changing the color and taste of the produce
- to sterilize the surface of the produce, removing dirt and organisms
- to stop the loss of nutrients in the produce
Blanching isn’t difficult, but it’s important to blanch vegetables for the correct amount of time. Otherwise, their quality may be affected. To see how long to blanch your veggies for, consult this list of blanching times from the National Center for Home Food Preservation.
How to Blanch Vegetables: Step by Step
Here’s how to blanch your vegetables:
- Bring a large saucepan of lightly salted water to a rolling boil. At the same time, prepare a bowl of ice water to drop the vegetables in after scalding.
- Plunge small batches of your vegetables into the water so it quickly returns to a boil.
- Once the water returns to a boil, the blanching begins.
- Consult this list to see how long your veggies should be blanched. Blanch small vegetables like peas for just one minute, beans for around two minutes, and sliced whole vegetables like carrots for three to four minutes.
- Remove the blanched vegetables from the boiling water and drop them into a bowl of ice-cold water. This stops the cooking process.
- Pat your blanched vegetables dry. They’re now ready to pack and freeze in freezer bags or containers!
Once your produce has been properly blanched, it’s time to store it.
Freezing in Portions
Freeze produce in meal-sized portions so you only ever defrost exactly what you need. Pack into freezer bags labeled with the contents and date so you’ll be able to see what you’ve got at a glance. If you want vegetables to retain their shape, space them out onto trays lined with parchment or greaseproof paper then freeze for about an hour before you pack them up.
Keen on cutting plastic use? Then reuse, for example, old bread bags and label with stickers.
See how to freeze these popular crops:
- How to freeze corn
- How to freeze blueberries
- How to freeze spinach and other greens
- How to freeze herbs (and dry herbs)
- How to freeze zucchini
How to Prevent Freezer Burn
A quick word about freezer burn, which is when produce reacts with air to compromise its appearance and taste. To avoid this, we need to remove as much of the air from our freezer bags as possible. One way to do this is to squeeze out the air before sealing the bag closed, but I prefer the straw method. Pop a straw into the bag, seal up the bag around it, suck out the excess air, then quickly remove the straw and finish sealing. Job done!
How to Freeze Fruit
Unlike vegetables, berries and currants do not require blanching and can simply be frozen whole. Space them out onto trays first so they freeze separately. Then pack them away into portion-sized packs. If the fruit is intended for later pureeing or use in smoothies, you can skip straight to packing it.
Fruits for cooked desserts can be thoroughly coated in sugar before freezing, which helps retain the fruit’s firmness. Or add a splash of water and a little sugar to your fruit then cook it down into a ready-to-go puree for the freezer.
Tomatoes turn to mush once they’re defrosted, so process them into sauces before freezing, which should also save on valuable space.
More Ways to Save Freezer Space
Bags of sauces and purees can be made to stack by first freezing the liquid in a rigid container. Once frozen solid, remove the block from the container and transfer it to a freezer bag for long-term storage. Or pop freezer bags filled with sauce into a box, then remove the bags from the box as soon as the contents begin to solidify.
Of course, Tupperware containers or old take-out boxes make efficient use of space when they’re filled up – just be sure to leave a slight gap at the top to allow the contents to expand as they freeze.
Freezing Fresh Herbs
Fresh herbs are always welcome, so make time to preserve some of summer’s excess. Begin by washing then very finely chopping or mincing freshly picked leaves. Now transfer your chopped herbs into ice cube trays. Pack them in as tight as you can then pour on water to cover. Freeze them solid and then, to save space, pop them out of the trays to pack into labeled bags. You’ll now have a fresh hit of herbs on hand for whenever you need it.
Freezing is the best way to preserve the original flavor and freshness of your produce, and it’s also the simplest. What are you freezing this summer, and do you have a favorite freezer-ready recipe? Let us know in the comments section below.
Introduction to Preserving
Making Quick Pickles
Making Quick Jams: Refrigerator or Freezer Jam
How to Can Pickles
How to Can Jam and Jelly
Salting and Brining
|
Our first president to be impeached was Andrew Johnson in 1868. Eleven articles of impeachment were brought against Johnson, mainly for violating the Tenure of Office Act by removing the Secretary of War without congressional approval.
Thomas Nast drew the following cartoon, published in Harper’s Weekly on March 21, 1868, showing a little Andrew Johnson crushed by a large copy of the U.S. Constitution. (The Senate failed to convict Johnson and he remained in office.)
Image credit: The Library of Congress.
|
What are surgical wounds healing by secondary intention?
These are surgical wounds which are left open to heal through the growth of new tissue, rather than being closed in the usual way with stitches or other methods which bring the wound edges together. This is usually done when there is a high risk of infection or a large amount of tissue has been lost from the wound. Wounds which are often treated in this way include chronic wounds in the cleft between the buttocks (pilonidal sinuses) and some types of abscesses.
Why use antibiotics and antiseptics to treat surgical wounds healing by secondary intention?
One reason for allowing a wound to heal by secondary intention after surgery is that the risk of infection in that wound is thought to be high. If a wound has already become infected, then antibiotics or antiseptics are used to kill or slow the growth of the micro-organisms causing the infection and prevent it from getting worse or spreading. This may also help the wound to heal. Even where wounds are not clearly infected, they usually have populations of micro-organisms present. It is thought that they may heal better if these populations are reduced by antibacterial agents. However, the relationship between infection and micro-organism populations in wounds and wound healing is not very clear.
What we found
In November 2015 we searched for as many studies as possible that both had a randomised controlled design and looked at the use of an antibiotic or antiseptic in participants with surgical wounds healing by secondary intention. We found 11 studies which included a total of 886 participants. These all looked at different comparisons. Several different types of wounds were included. Studies looked at wounds after diabetic foot amputation, pilonidal sinus surgery, treatment of various types of abscess, surgery for haemorrhoids, complications after caesarean section and healing of openings created by operations such as colostomy.
Most studies compared a range of different types of antibacterial treatments to treatments without antibacterial activity, but four compared different antibacterial treatments. Although some of the trials suggested that one treatment may be better than another, this evidence was limited by the size of the studies and the ways they were carried out and reported. All of the studies had low numbers of participants and in some cases these numbers were very small. Many of the studies did not report important information about how they were carried out, so it was difficult to tell whether the results presented were likely to be true. More, better quality, research is needed to find out the effects of antimicrobial treatments on surgical wounds which are healing by secondary intention.
Assessed as up to date November 2015.
There is no robust evidence on the relative effectiveness of any antiseptic/antibiotic/anti-bacterial preparation evaluated to date for use on SWHSI. Where some evidence for possible treatment effects was reported, it stemmed from single studies with small participant numbers and was classed as moderate or low quality evidence. This means it is likely or very likely that further research will have an important impact on our confidence in the estimate of effect, and may change this estimate.
Following surgery, incisions are usually closed by fixing the edges together with sutures (stitches), staples, adhesives (glue) or clips. This process helps the cut edges heal together and is called 'healing by primary intention'. However, a minority of surgical wounds are not closed in this way. Where the risk of infection is high or there has been significant loss of tissue, wounds may be left open to heal by the growth of new tissue rather than by primary closure; this is known as 'healing by secondary intention'. There is a risk of infection in open wounds, which may impact on wound healing, and antiseptic or antibiotic treatments may be used with the aim of preventing or treating such infections. This review is one of a suite of Cochrane reviews investigating the evidence on antiseptics and antibiotics in different types of wounds. It aims to present current evidence related to the use of antiseptics and antibiotics for surgical wounds healing by secondary intention (SWHSI).
To assess the effects of systemic and topical antibiotics, and topical antiseptics for the treatment of surgical wounds healing by secondary intention.
In November 2015 we searched: The Cochrane Wounds Specialised Register; The Cochrane Central Register of Controlled Trials (CENTRAL) (The Cochrane Library); Ovid MEDLINE; Ovid MEDLINE (In-Process & Other Non-Indexed Citations); Ovid EMBASE and EBSCO CINAHL. We also searched three clinical trials registries and the references of included studies and relevant systematic reviews. There were no restrictions with respect to language, date of publication or study setting.
Randomised controlled trials which enrolled adults with a surgical wound healing by secondary intention and assessed treatment with an antiseptic or antibiotic treatment. Studies enrolling people with skin graft donor sites were not included, neither were studies of wounds with a non-surgical origin which had subsequently undergone sharp or surgical debridement or other surgical treatments or wounds within the oral or aural cavities.
Two review authors independently performed study selection, risk of bias assessment and data extraction.
Eleven studies with a total of 886 participants were included in the review. These evaluated a range of comparisons in a range of surgical wounds healing by secondary intention. In general studies were small and some did not present data or analyses that could be easily interpreted or related to clinical outcomes. These factors reduced the quality of the evidence.
Two comparisons compared different iodine preparations with no antiseptic treatment and found no clear evidence of effects for these treatments. The outcome data available were limited, and what evidence there was, was of low quality.
One study compared a zinc oxide mesh dressing with a plain mesh dressing. There was no clear evidence of a difference in time to wound healing between groups. There was some evidence of a difference in measures used to assess wound infection (wound with foul smell and number of participants prescribed antibiotics) which favoured the zinc oxide group. This was low quality evidence.
One study reported that sucralfate cream increased the likelihood of healing open wounds following haemorrhoidectomy compared to a petrolatum cream (RR: 1.50, 95% CI 1.13 to 1.99) over a three week period. This evidence was graded as being of moderate quality. The study also reported lower wound pain scores in the sucralfate group.
There was a reduction in time to healing of open wounds following haemorrhoidectomy when treated with Triclosan post-operatively compared with a standard sodium hypochlorite solution (mean difference -1.70 days, 95% CI -3.41 to 0.01). This was classed as low quality evidence.
There was moderate quality evidence that more open wounds resulting from excision of pyomyositis abscesses healed when treated with a honey-soaked gauze compared with an EUSOL-soaked gauze over three weeks' follow-up (RR: 1.58, 95% CI 1.03 to 2.42). There was also some evidence of a reduction in the mean length of hospital stay in the honey group. Evidence was taken from one small study that only had 43 participants.
There was moderate quality evidence that more Dermacym®-treated post-operative foot wounds in people with diabetes healed compared to those treated with iodine (RR 0.61, 95% CI 0.40 to 0.93). Again estimates came from one small study with 40 participants.
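The risk ratios and confidence intervals reported above are all derived from 2×2 outcome tables. A sketch of the standard calculation, using hypothetical counts that do not correspond to any study in this review:

```python
import math

# Standard risk-ratio calculation from a 2x2 outcome table. The counts
# below are hypothetical and illustrate only the method.
healed_t, total_t = 17, 22   # treatment arm: healed / enrolled
healed_c, total_c = 11, 21   # control arm: healed / enrolled

rr = (healed_t / total_t) / (healed_c / total_c)
# Standard error of log(RR) for the 95% CI on the log scale.
se_log_rr = math.sqrt(1/healed_t - 1/total_t + 1/healed_c - 1/total_c)
ci_low = math.exp(math.log(rr) - 1.96 * se_log_rr)
ci_high = math.exp(math.log(rr) + 1.96 * se_log_rr)
print(f"RR {rr:.2f}, 95% CI {ci_low:.2f} to {ci_high:.2f}")
# -> RR 1.48, 95% CI 0.93 to 2.35
```

A confidence interval crossing 1, as in this hypothetical example, indicates no clear evidence of a treatment effect.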
|
BY ITSELF, silicon is not an effective conductor. It does not leave its electrons free for smooth flow of information. To make it less rigid, impurities are introduced into a pure silicon chip. The impurities free an electron or two and make the element a better conductor. This is called doping--a process essential to creating the heart of every electronic device.
Concentrations as small as one atom of boron, arsenic or phosphorus (impurities) per 100 million atoms of silicon have been effective so far. But with the world increasingly devising chips smaller than a strand of human hair, the definition of doping had to change, too. This was the challenge James Tour and his team of researchers from Rice University accepted. The study, published in the July issue of the Journal of the American Chemical Society, suggests that attaching a single layer of molecules onto the surface of the silicon chip, rather than mixing them in, serves the same purpose as doping but works better at the nanolevel. For this, the team bathed the nanosilicon chip in the dopant solution, much as one develops a photographic film. "We call it silicon with afterburners," said Tour, who teaches chemistry at Rice University.
"The nanochips have very little volume and you have to deal with them accordingly," he explained. Dopants mixed with silicon in the usual way destroy its homogeneity and hamper conductivity.
Years of research into replacing silicon with a better semiconductor has yielded little. "So we decided to complement silicon, rather than supplant it," said Tour. "This research gives the Intels and the Samsungs of the world another tool to try, and I guarantee you they'll be trying it," he added.
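The doping concentration quoted earlier can be translated into an absolute density. A sketch assuming the standard textbook figure of about 5 × 10^22 silicon atoms per cubic centimeter, which the article does not state:

```python
# Converting "one impurity atom per 100 million silicon atoms" into an
# absolute dopant density. The silicon atom density (~5e22 per cm^3) is
# a standard textbook value assumed here.
SILICON_ATOMS_PER_CM3 = 5.0e22

dopants_per_cm3 = SILICON_ATOMS_PER_CM3 / 100_000_000
print(f"Dopant density: {dopants_per_cm3:.1e} atoms per cm^3")   # 5.0e+14
```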
|
Just when you thought the world had enough to worry about with climate change, income inequality, and corruption, coronavirus dropped a whole new set of problems on our doorstep. There have been more than 395,000 confirmed cases and over 17,800 deaths thus far. To make matters worse, many experts believe that these numbers drastically underestimate the spread of coronavirus.
According to preliminary estimates, the death rate of coronavirus appears to be somewhere between 1 and 3 percent, which is significantly higher than the death rate of influenza (0.1 percent). With these frightening statistics in mind, one important question remains: should we be panicking about coronavirus?
Should we panic about coronavirus?
The answer to this question is complicated. There’s no doubt that the coronavirus is a serious epidemic that will have far-reaching effects. So, as a society, we should absolutely worry about coronavirus.
Without a global initiative to prevent the spread of coronavirus, tens of thousands of people could die. Governments must work together to limit exposure through regulated travel, quarantine, and human coronavirus vaccine research. Coronavirus transmission has already happened quickly, with cases appearing in most major countries. At this point, preventing further cases is the number one priority.
While a global response is necessary, panic about the coronavirus could cause unnecessary harm. For example, many Asian citizens around the world have already faced harsh discrimination in the wake of the coronavirus. Additionally, travel bans could prevent unaffected individuals from escaping high-risk areas. As a result, any plan for dealing with the coronavirus must be reasoned and well-researched.
Should YOU panic about coronavirus?
While governments and health organizations must worry about the coronavirus, individuals should be a little more measured in their response. After all, there are about 7.7 billion people on the planet and, at the time these risk estimates were made, roughly 95,000 confirmed cases. That puts the chances of catching coronavirus at about 1 in 80,000. However, your risk of catching the virus could go up or down depending on where you live, how much you travel, and certain sanitary habits (washing hands, practicing respiratory hygiene, etc.).
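The back-of-envelope odds in the paragraph above can be reproduced directly (the figures are the article's snapshot at the time of writing, not current data):

```python
# Rough reproduction of the article's "1 in 80,000" estimate.
# Snapshot figures from the article, not current case counts.
world_population = 7.7e9
confirmed_cases = 95_000

odds = world_population / confirmed_cases
print(f"About 1 in {odds:,.0f}")  # roughly 1 in 81,000
```

Note that this is a crude population-wide average; actual individual risk varies enormously with location and behaviour, as the article goes on to explain.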
Moreover, even if you were to catch the coronavirus, your chances of recovering are very high, especially if you have access to proper medical treatment. That said, there are certain factors that can cause the coronavirus to become fatal. For example, if you live in a remote region with unsanitary conditions or inadequate medical access, your chances of survival will greatly decrease. Additionally, the elderly and people with weak immune systems are considered “high-risk” patients.
So the question remains: should YOU panic about the coronavirus? To summarize, you should really only worry about the coronavirus if you or a loved one fit into two or more of the following categories:
- Elderly (65 or older)
- Weak immune system
- Inadequate access to healthcare
- Proximity to high-risk regions or communities
- Frequent exposure to sick individuals
- Inability to access clean water or engage in good sanitary practices
If you or a loved one do NOT fall into two or more of the categories above, you really shouldn’t panic about the coronavirus. It is a serious medical condition that requires a serious response, but sitting around worrying about your personal safety is unnecessary if the chances of catching coronavirus are very, very small.
If you’d like to learn more about coronavirus treatment, preventative measures, and the ongoing global response, consult the World Health Organization Coronavirus Info Page.
|
Developing and etching
Learn how to develop and etch your circuit board by dipping it in two kinds of liquid. First you dip it in sodium metasilicate pentahydrate, then in ferric chloride. Be aware that both liquids are dangerous to your eyes and skin, so always wear protective glasses.
Step 1: Dip the board in sodium metasilicate pentahydrate
First, dip your copper board in sodium metasilicate pentahydrate to develop the areas of the surface that were not exposed to UV rays. Sodium hydroxide has the same effect if you cannot find sodium metasilicate pentahydrate. Gently agitate the liquid from side to side and back and forth until you see your PCB design drawn out on the copper board.
Step 2: Place the board in water
Do not touch the board with bare hands; use tweezers to move it from one liquid to another. Now that the drawing of the circuit is developed, you need to wash the sodium metasilicate pentahydrate from the surface. Dip the board in water, agitate it a bit, and finally take it out of the water with the tweezers.
Step 3: Etch the board with ferric chloride
Place a bowl of hot water under the bowl of ferric chloride to keep it at about 45 °C. Place your copper board in the ferric chloride, again using tweezers. This solution dissolves the copper layer wherever no PCB trace is drawn. Gently agitate the liquid until no copper remains around the PCB traces; you should see the plastic layer slowly appearing beneath the copper. It is essential to watch for the plastic layer, because this step can take anywhere from a few minutes to two hours.
Step 4: Wash the etched board with water
When etching is finished, take the board out with the tweezers and dip it in water. Rinse it, and when you lift the board out of the water you should see your copper PCB circuit just as you designed it.
|
Automatic code reuse
>> System makes modifications necessary to transplant code from one program into another
BY LARRY HARDESTY, MIT NEWS
CAMBRIDGE, Mass. — Researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have developed a new system that allows programmers to transplant code from one program into another. The programmer can select the code from one program and an insertion point in a second program, and the system will automatically make modifications necessary — such as changing variable names — to integrate the code into its new context.
Crucially, the system is able to translate between “data representations” used by the donor and recipient programs. An image-processing program, for instance, needs to be able to handle files in a range of formats, such as jpeg, tiff, or png. But internally, it will represent all such images using a single standardized scheme. Different programs, however, may use different internal schemes. The CSAIL researchers’ system automatically maps the donor program’s scheme onto that of the recipient, to import code seamlessly.
The researchers presented the new system, dubbed CodeCarbonCopy, at the Association for Computing Machinery’s Symposium on the Foundations of Software Engineering.
“CodeCarbonCopy enables one of the holy grails of software engineering: automatic code reuse,” says Stelios Sidiroglou-Douskos, a research scientist at CSAIL and first author on the paper. “It’s another step toward automating the human away from the development cycle. Our view is that perhaps we have written most of the software that we’ll ever need — we now just need to reuse it.”
The researchers conducted eight experiments in which they used CodeCarbonCopy to transplant code between six popular open-source image-processing programs. Seven of the eight transplants were successful, with the recipient program properly executing the new functionality.
Joining Sidiroglou-Douskos on the paper are Martin Rinard, a professor of electrical engineering and computer science; Fan Long, an MIT graduate student in electrical engineering and computer science; and Eric Lahtinen and Anthony Eden, who were contract programmers at MIT when the work was done.
With CodeCarbonCopy, the first step in transplanting code from one program to another is to feed both of them the same input file. The system then compares how the two programs process the file.
If, for instance, the donor program performs a series of operations on a particular piece of data and loads the result into a variable named “mem_clip->width,” and the recipient performs the same operations on the same piece of data and loads the result into a variable named “picture.width,” the system will infer that the variables are playing the same roles in their respective programs.
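A minimal sketch of this value-based matching idea, assuming we already have traces of the values each program computed from the same input file (this is illustrative only, not CodeCarbonCopy's actual implementation):

```python
# Illustrative sketch: infer which donor variable corresponds to which
# recipient variable by comparing the values each program computed from
# the same input file. Not CodeCarbonCopy's real algorithm.
def match_variables(donor_values, recipient_values):
    """Map each donor variable name to a recipient name holding the same value."""
    matches = {}
    for d_name, d_val in donor_values.items():
        for r_name, r_val in recipient_values.items():
            if d_val == r_val:
                matches[d_name] = r_name
                break
    return matches

# Hypothetical traces from processing the same image in both programs:
donor = {"mem_clip->width": 640, "mem_clip->height": 480, "mem_clip->bpp": 24}
recipient = {"picture.width": 640, "picture.height": 480, "picture.depth": 24}

print(match_variables(donor, recipient))
# {'mem_clip->width': 'picture.width', ...}
```

In practice the system compares whole sequences of operations and values, which disambiguates variables that happen to share a single value.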
Once it has identified correspondences between variables, CodeCarbonCopy presents them to the user. It also presents all the variables in the donor for which it could not find matches in the recipient, together with those variables’ initial definitions. Frequently, those variables are playing some role in the donor that’s irrelevant to the recipient. The user can flag those variables as unnecessary, and CodeCarbonCopy will automatically excise any operations that make use of them from the transplanted code.
To map the data representations from one program onto those of the other, CodeCarbonCopy looks at the precise values that both programs store in memory. Every pixel in a digital image, for instance, is governed by three color values: red, green, and blue. Some programs, however, store those triplets of values in the order red, green, blue, and others store them in the order blue, green, red.
If CodeCarbonCopy finds a systematic relationship between the values stored by one program and those stored by the other, it generates a set of operations for translating between representations.
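A toy illustration of inferring such a systematic reordering, using the RGB-versus-BGR example above. It assumes the sampled channel values are distinct; this is a sketch of the idea, not the paper's algorithm:

```python
# Illustrative sketch: detect a systematic reordering between two programs'
# in-memory pixel layouts and build a function that translates between them.
def infer_permutation(sample_a, sample_b):
    """Return permutation p such that sample_a[p[i]] == sample_b[i].

    Assumes the sampled values are distinct (otherwise .index is ambiguous)."""
    return [sample_a.index(v) for v in sample_b]

def make_translator(perm):
    """Build a function applying the inferred reordering to any triplet."""
    return lambda triplet: [triplet[i] for i in perm]

# Hypothetical pixel stored as (red, green, blue) in the donor
# and (blue, green, red) in the recipient:
donor_pixel = [200, 120, 30]      # R, G, B
recipient_pixel = [30, 120, 200]  # B, G, R

perm = infer_permutation(donor_pixel, recipient_pixel)  # [2, 1, 0]
translate = make_translator(perm)
print(translate([10, 20, 30]))  # [30, 20, 10]
```

Once such a translator exists, transplanted donor code can keep using its native representation, with conversions inserted at the boundary.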
CodeCarbonCopy works well with file formats, such as images, whose data is rigidly organized, and with programs, such as image processors, that store data representations in arrays, which are essentially rows of identically sized memory units. In ongoing work, the researchers are looking to generalize their approach to file formats that permit more flexible data organization and programs that use data structures other than arrays, such as trees or linked lists.
|
- The process of removing carbon from the atmosphere and depositing it in a reservoir. When carried out deliberately, this may also be referred to as carbon dioxide removal, which is a form of geoengineering.
- The process of carbon capture and storage, where carbon dioxide is removed from flue gases, such as on power stations, before being stored in underground reservoirs.
- Natural biogeochemical cycling of carbon between the atmosphere and reservoirs, such as by chemical weathering of rocks.
Carbon sequestration describes long-term storage of carbon dioxide or other forms of carbon to either mitigate or defer global warming and avoid dangerous climate change. It has been proposed as a way to slow the atmospheric and marine accumulation of greenhouse gases, which are released by burning fossil fuels. Carbon dioxide is naturally captured from the atmosphere through biological, chemical or physical processes. Some anthropogenic sequestration techniques exploit these natural processes, while some use entirely artificial processes.
Carbon dioxide may be captured as a pure by-product in processes related to petroleum refining or from flue gases from power generation. CO2 sequestration includes the storage part of carbon capture and storage, which refers to large-scale, permanent artificial capture and sequestration of industrially produced CO2 using subsurface saline aquifers, reservoirs, ocean water, aging oil fields, or other carbon sinks.
read more here
- Underground Carbon Sequestration Can Cause Earthquakes Too, Study Shows (huffingtonpost.com)
- Study Shows Carbon Sequestration Can Cause Quakes (climatecentral.org)
- China Forestry Carbon Sequestration Industry Indepth Research and Investment Strategy Report, 2013-2017 (forwardchina.wordpress.com)
|
How did eighteenth century readers use their miscellanies? Titles like The Evening Songster or Laugh and be Fat suggest that many were performed informally, in the home, with friends or family. To give a glimpse of that world of informal music-making, here are some clips of a house concert, performed by Alva, with a small audience who sang, laughed and drank their way through a selection of popular ballads and songs. The programme was based on a 1788 miscellany called The Yorkshire Garland, which is a typical eighteenth-century hotchpotch: gambling, racing, a pastoral idyll (set in north Yorkshire), doomed love, and deathbed confession. The miscellany itself can be viewed below.
This ballad, sung to the tune of 'Fair lady, lay your costly robes aside' is based on the true story of John Bolton, tried on 17 March 1775 for the murder of his pregnant apprentice, Elizabeth Rainbow. Bolton was sentenced to execution, but strangled himself on the morning before he was due to be hanged. Like many of the 'true life' popular criminal biographies of the period, the song combines the sensational and grisly detail of the crime with a moralising condemnation of the act.
York and Yorkshire were famous for horse-racing, and nowhere more so than Gatherley Moor, near Richmond. Then, as now, racing was all about the wager.
Yarm, a small town on the south bank of the River Tees, not only boasts BBC Breakfast's 'Best High Street' of 2007, but it is also, according to this song, verdant, peaceful, and affluent – the seat of the goddess Minerva. The chorus 'content, independent, serene and at ease' is sung here by the audience.
|
ST Speeds to a More Wireless World with 70W Fast Wireless Charging Chipset
In a world demanding fewer wires, ST’s new wireless power receiver chip promises to deliver fast and efficient wireless charging to mobile and wearable devices.
The concept of wireless power transfer dates back to the late 1800s and the invention of wireless data transfer in telecommunications. However, only in the past decade has it found a proper use in charging the growing number of gadgets and devices that have become part of everyday life.
A high-level timeline of wireless power transfer. Image used courtesy of IEEE and Mi et al
Today's wireless chargers can bring power to anything from earbuds and phones to electric cars and even buses. They do this via electromagnetic induction, which allows for power transfer between two nearby coils, one situated inside a base station and one embedded in the device being charged.
For devices with a smaller footprint, these coils and their wires have to be smaller and thinner, and thus work over shorter distances, usually less than a quarter of an inch.
By using bigger coils in some applications, designers can create a wireless power transfer system that could work at distances of a couple of feet. The nature of electromagnetism also allows for this transfer through certain materials, which is why wireless base stations can be embedded into desks for phones or streets for vehicles.
As the world pushes for technology to become even more wireless, companies like STMicroelectronics (ST) are focusing on wireless charging and power transfer to ease electronic designs.
This article will look into a recent release from ST; however, before diving into it, it's essential to look into the design considerations at play in creating a wireless power system.
Wireless Power System Design
When designing wireless power transfer systems, multiple factors have to be taken into consideration. These variables range from the device's application to its form factor and even the materials that it's made out of.
With this in mind, engineers have a few technology standards for developing these systems while picking the right materials and designing the adequate transmitter and receiver coils for optimal efficiency.
One of the greatest challenges when building a wireless charging system is charging time. This is largely due to the generally low efficiency between the sender coil's power input and the receiver coil's power output during wireless power transfer.
Hoping to solve this challenge, ST has released a new chip for faster wireless charging.
Ditching the Wires: ST's Wireless Chipset
STMicroelectronics claims to have overcome this challenge by introducing its latest power receiver chip, which brings wireless charge times closer to those of wired charging.
ST's latest wireless-power chipset: STWLC98. Image used courtesy of STMicroelectronics
The chip in question is called the STWLC98 integrated wireless power receiver and is intended for use with mobile devices such as phones and wearables.
According to ST, this chip can deliver up to 70W of power, and when implemented into a modern smartphone design, it can fully charge the device in just under 30 minutes. This chip is also fully compliant with the Qi EPP 1.3 industry standard for wireless charging used by many smartphone manufacturers, making it easy for implementation into current designs.
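As a rough sanity check of that claim, a back-of-envelope estimate is shown below. The battery capacity and average charging efficiency are assumed figures for illustration, not ST's numbers:

```python
# Rough sanity check of the "full charge in under 30 minutes" claim.
# Battery capacity and average power fraction are assumptions, not ST data.
battery_wh = 18.5          # ~5000 mAh at 3.7 V, a typical flagship phone battery
peak_power_w = 70          # peak delivery figure quoted for the STWLC98
avg_power_fraction = 0.5   # charging tapers near full; assume ~50% of peak on average

hours = battery_wh / (peak_power_w * avg_power_fraction)
print(f"Estimated full charge: {hours * 60:.0f} minutes")  # ~32 minutes
```

Under these assumptions the estimate lands in the same ballpark as ST's stated charge time, which suggests the claim is plausible for a typical phone battery.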
An application block diagram for the STWLC98. Image used courtesy of STMicroelectronics [downloadable product brief]
The chip itself comes with a 32-bit, 64 MHz Arm Cortex-M3 core with 16 KB of FTP, 16 KB of RAM, and 80 KB of ROM. It also features built-in over-voltage, over-current, and thermal protection, as well as built-in power management, which enables energy-saving and ultra-low-power standby modes for increased efficiency.
The STWLC98 is designed to work in conjunction with ST's STWBC2-HP, a wireless power transmitter chip used to develop Qi-certified wireless power applications. The STWBC2-HP contains an embedded 32-bit Arm Cortex-M0+ microcontroller running at 64 MHz, or, more specifically, an STM32G071 microcontroller.
These two chips can also work with the STSAFE-A110 secure element chip, storing Qi certificates and providing encrypted authentication.
Though this new release seems to pack a punch when it comes to wireless charging, let's take a look at the benefits of moving towards wireless charging as a whole.
Moving Towards a Wireless World
ST's wireless innovation has the potential to be used in multiple different existing applications while also opening the door for innovation and new alternative uses. Additionally, it could allow for revisiting some old applications where long charge times aren't suitable for practical use.
Some examples of this would be electric bikes and scooters, drones, portable power tools, laptops, and Bluetooth speakers where a long charge time would be inconvenient and not functional.
Of course, smartphones and wearables have the most to gain by using this specific chip. However, investing and designing new wireless power transfer technology, in general, could benefit other areas like medical devices, both wearable and embedded. These devices would gain a lot with quick wireless charging since using wires and ports that can easily be damaged or broken might be a problem for patients with certain conditions.
Ultimately, as manufacturers have adopted these types of technologies before, the STWLC98 chip (and other similar chips and variants), if it can deliver everything that it promises, could become the norm for future wireless devices.
|
We Who Defy Hate Curriculum
Most faith traditions speak to the idea of creating a more just and loving world. However, learning how to live toward that goal in peaceful collaboration with each other is often hard and complicated.
We Who Defy Hate is a curriculum designed to support people of different faith traditions who want to find places of common ground and solidarity in the service of social justice and action. It is a companion discussion series for the PBS documentary, Defying the Nazis: The Sharps' War.
The film features an American Unitarian minister and his wife, Waitstill and Martha Sharp, who saved scores of lives across Europe during WWII. When most Americans were turning a blind eye to the growing social injustice and totalitarian threat in Europe, the American Unitarian Association was committed to saving as many people as possible.
The We Who Defy Hate curriculum was developed by Dr. Jenice View, Social Justice Educator, in a Curriculum Incubator at the Fahs Collaborative, through generous funding from Artemis Joukowsky III and the Unitarian Universalist Congregation at Shelter Rock, Manhasset, NY.
Complete the form to download this resource.
|
Thursday, November 15, 2012
Educating Parents on the Prevention of Bullying Sponsored by the INTEGRIS Hispanic Initiative
OKLAHOMA CITY – Bullying can be a common experience for many children and adolescents. Surveys indicate as many as half of all children are bullied at some time during their school years, and at least 10 percent are bullied on a regular basis.
Join Deputy Belen Rodriguez, public relations coordinator for Hispanic affairs, to learn more about the growing epidemic of bullying. Rodriguez will provide parents with up-to-date information on the importance of talking to your children about bullying, warning signs of bullying, how to identify victims and bullies as well as resources to help others stop bullying around them.
The presentation, sponsored by the INTEGRIS Hispanic initiative, will be conducted in Spanish and held from 6:30 to 7:30 p.m. on Tuesday, Nov. 20, in the Medical Office Building at INTEGRIS Southwest Medical Center, 4200 S. Douglas Ave, Suite B-10, Oklahoma City, OK 73109.
For more information or to register for the class, please call the INTEGRIS HealthLine at 405-951-2277; press #2 for Spanish.
|
How Popular is the Lottery?
A lottery is a game of chance in which tokens are sold for the purpose of winning a prize, often sponsored by a government. A person who wins a lottery must be a legal citizen of the country in which they play, and the winnings are taxed; winners are obligated to pay the appropriate withholding taxes. The word lottery is believed to derive from the Middle Dutch loterie or the Old English hlot, meaning “lot.”
Early state-sanctioned lotteries were modeled on traditional raffles and were popular in Europe during the 1500s. In colonial America, lotteries were used to finance a variety of public works projects, including paving streets, building wharves, and even constructing Harvard and Yale buildings.
Today’s lotteries are based on the same principle as those of the past, with a draw of numbers that award prizes to those who match them. There are also a number of variations on this theme, with the most common being instant games such as scratch-off tickets. These games use a computer to generate the random numbers that people mark on their playslips. The computer then matches each player’s selections with those of the other players and announces the winner.
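The draw-and-match principle described above can be sketched in a few lines. The 6-of-49 format here is a generic illustration; actual lottery rules vary by game:

```python
# Minimal sketch of the draw-and-match principle: a random draw of numbers,
# with prizes awarded by how many of a player's picks match.
# Generic 6-of-49 format chosen for illustration only.
import random

def draw(pool=49, picks=6, seed=None):
    """Draw `picks` distinct winning numbers from 1..pool."""
    rng = random.Random(seed)
    return set(rng.sample(range(1, pool + 1), picks))

def matches(ticket, winning):
    """Count how many of the player's numbers appear in the winning set."""
    return len(set(ticket) & winning)

winning = draw(seed=42)
ticket = [3, 7, 19, 23, 31, 44]
print(f"Numbers matched: {matches(ticket, winning)}")
```

Instant scratch-off games work on the same underlying idea, except the "draw" is fixed at the moment the ticket is printed.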
Despite these innovations, the basic dynamic remains unchanged: the lottery expands its popularity rapidly after it first appears and then levels off or declines as people get bored with waiting for the results. This is why lotteries constantly introduce new games to keep the public interested.
A key factor in a lottery’s ability to attract and retain public support is its reputation as benefiting some specific form of public good, such as education. This argument is particularly powerful in times of economic stress, when the prospect of tax increases or cuts to public programs may raise public anxiety. However, the popularity of a lottery does not appear to depend on a state’s actual fiscal condition; lotteries have won broad approval even when governments are in relatively strong financial shape.
The popularity of the lottery also depends on the way it is marketed. Lottery advertisements are often designed to portray the experience of scratching a ticket as a fun activity, making it seem like a harmless pastime rather than a potentially addictive habit. This messaging obscures the regressivity of the lottery and helps to deflect criticism that it is an expensive way for a government to spend its resources.
While some lottery players are able to control their spending habits, others are not. For those who have difficulty controlling their urges, a good strategy is to purchase lottery tickets only when they have extra money to spare. This can be a great way to build an emergency fund or to pay off credit card debt. Moreover, if you do win the lottery, be sure to budget for the additional tax burden, which can take up to half of your winnings. It is also a good idea to purchase a separate insurance policy that covers the potential loss of your winnings.
|
The JWSC is proud to announce that the water provided to its customers meets or exceeds all environmental requirements of the state and federal governments.
Download our Water Quality Report for Service Provided from January, 2017 to December, 2017: BGJWSC_All_Systems_Water_Quality_Report_2017
Water is a basic need for life. Without it, life could not exist. Most people depend on public water supplies for their drinking water. Even though only a small portion of the water sold is used for drinking and cooking, all the water produced and distributed must be of high quality, good enough for human consumption. We all depend on our water supply to be protected from contaminants that could threaten our health.
The water for the JWSC system comes from deep underground. The source is a porous limestone structure called the Upper Floridan Aquifer. There are eight wells drilled into the part of the limestone that is carrying the water. The wells range in depth from 750 to 1050 feet, with half of them between 800 to 850 feet.
Water with high levels of chloride, moving upward from deeper in the earth, is threatening water quality in the Brunswick peninsula. Ancient brine has been pulled upwards and has contaminated a large portion of the aquifer in the Brunswick area.
The Georgia Environmental Protection Division (EPD) of the Department of Natural Resources has performed a Ground Water Source Vulnerability Risk Assessment. The report, dated June 16, 1999, is available in the Office of the Director.
Some people may be more vulnerable to contaminants in drinking water than the general population. Immunocompromised persons, such as persons with cancer undergoing chemotherapy, persons who have undergone organ transplants, persons with HIV/AIDS or other immune system disorders, some elderly, and infants, can be particularly at risk from infections. These people should seek advice about drinking water from their health care providers. EPA/CDC guidelines on appropriate means to lessen the risk of infection by Cryptosporidium and other microbial contaminants are available from the Safe Drinking Water Hotline (1-800-426-4791).
Contaminants that may be present in source water include:
- Microbial contaminants, such as viruses and bacteria, which may come from sewage treatment plants, septic systems, agricultural livestock operations and wildlife.
- Inorganic contaminants, such as salts and metals, which can be naturally occurring or result from urban storm water runoff, industrial or domestic wastewater discharges, oil and gas production, mining and farming.
- Pesticides and herbicides, which may come from a variety of sources such as agriculture, urban storm water runoff, and residential uses.
- Organic chemical contaminants, including synthetic and volatile organic chemicals, which are byproducts of industrial processes and petroleum production, and can also come from gas stations, urban storm water runoff and septic systems.
- Radioactive contaminants, which can be naturally occurring or be the result of oil and gas production and mining activities.
In order to ensure that tap water is safe to drink, the United States Environmental Protection Agency (EPA) prescribes regulations which limit the amount of certain contaminants in water provided by public water systems. Food and Drug Administration regulations establish limits for contaminants in bottled water, which must provide the same protection for public health.
The JWSC provides the water quality report below annually with details about contaminants, their recorded levels within each system and the related limits. Drinking water, including bottled water, may reasonably be expected to contain at least small amounts of some contaminants. The presence of contaminants does not necessarily indicate that water poses a health risk. More information about contaminants and potential health effects can be obtained by calling the EPA’s Safe Drinking Water Hotline (1-800-426-4791).
Lead in Drinking Water
If present, elevated levels of lead can cause serious health problems, especially for pregnant women and young children. Lead in drinking water comes primarily from materials and components associated with service lines and home plumbing. The JWSC is responsible for providing high quality drinking water, but cannot control the variety of materials used in plumbing components. When your water has been sitting for several hours, you can minimize the potential for lead exposure by flushing your tap for 30 seconds to 2 minutes before using water for drinking or cooking. If you are concerned about lead in your water, you may wish to have your water tested. Information on lead in drinking water, testing methods, and steps you can take to minimize exposure is available from the Safe Drinking Water Hotline or at http://www.epa.gov/safewater/lead.
If you have any questions about this water quality report or would like additional information about the water system, please contact the Office of the Director at our headquarters at 1703 Gloucester St in Brunswick.
|
Amazon Store - Video Games
Video games are frowned upon by parents as time-wasters, and worse, some education experts think that these games corrupt the brain. Playing violent video games is easily blamed by the media and some experts as the reason why some young people become violent or commit extreme anti-social behavior. But many scientists and psychologists find that video games can actually have many benefits, the main one being that they make kids smarter. Video games may actually teach kids high-level thinking skills that they will need in the future.
Some of the mental skills enhanced by video games include:
- Following instructions,
- Problem solving and logic,
- Hand-eye coordination, fine motor and spatial skills,
- Planning, resource management and logistics,
- Multitasking, simultaneous tracking of many shifting variables and managing multiple objectives.
Here you can find the kinds of video games for kids that stimulate their brain cells and keep them active. A PlayStation lets them live in a virtual world. The one thing to keep in mind is that there is a limit to everything.
|
The recent ratification of the Integrated Coastal Zone Management Protocol by Morocco is a step forward towards more sustainable use of the Mediterranean coast. The Ministry of Foreign Affairs and Co-operation of Spain, in its capacity as Depositary of the Convention for the Protection of the Marine Environment and the Coastal Region of the Mediterranean (Barcelona Convention), informs that on 21 September 2012, the Kingdom of Morocco deposited the Instrument of Ratification of the Protocol on Integrated Coastal Zone Management in the Mediterranean (ICZM).
The Protocol on Integrated Coastal Zone Management in the Mediterranean, the first legally binding instrument of its type in the world, was adopted by the Conference held in Madrid, Spain on 21 January 2008. It entered into force on 24 March 2011. To date, it has been ratified by Albania, the EU, France, Montenegro, Morocco, Slovenia, Spain and Syria.
The Paris Declaration of February 2012 urges the Mediterranean countries to ratify the ICZM Protocol and implement the ICZM Action Plan as rapidly as possible. It also calls upon them to recognize the need to improve coherence between the different levels of coastal governance, supplemented by optimal national frameworks for ICZM, and to liaise with other relevant regional and global plans and programmes, in particular through maritime spatial planning, in order to strengthen and optimize the achievement of the overarching goals of the Barcelona Convention.
|
Ever feel the need to crush things before you recycle them? Well, it’s often the right instinct, as it conserves space and is more efficient for pick-up by recyclers. Plastic bottles, for instance, are generally best crushed (and left with caps on) when thrown in the recycling bin.
Aluminium cans, however, are a rare exception, as Popular Science noted last week. It turns out that if cans are crushed or flattened, you’re often making the jobs of recycling facilities that much harder.
According to Matt Meenan, the senior director of public affairs at the Aluminium Association, when crushed cans enter the recycling stream, they become more difficult to sort out and can contaminate other recyclable materials.
A flattened soda can can be sorted as “paper,” for instance, thus contaminating the paper recyclables. “Crushed aluminium cans may fall through the spaces of the sorting equipment and either be lost entirely or improperly sorted,” he added.
This does, however, come with the caveat that it may depend on your recycling infrastructure. Recycling programs operate on one of two methods: single-stream recycling (the kind where you throw all your recyclables in a single bin at the curb) or multi- or dual-stream recycling (where you separate them into different bins at the curb).
If you’re in a city with multi-stream recycling, you’re in luck. Whether crushed or not, you’re fine to recycle them in any way you want.
So what should you do with your cans? First, determine what kind of curbside recycling program operates in your neighbourhood. It’s pretty easy: if you’re throwing your glass, paper, and plastic together in one bin at the curb, you’re participating in a single-stream program.
And if it isn’t obvious already, it’s as simple as making sure you don’t attempt to crush or flatten cans as you throw them into the recycling bin. Again, if you’re single-stream or just unsure, err on the side of caution and leave them uncrushed. A dent is probably fine (they don’t need to be pristine), but resist the urge to crush.
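The advice above boils down to a two-variable decision rule: what the item is made of, and what kind of stream your program runs. A minimal sketch (the function name and string labels are ours, invented for illustration, not from any recycling authority):

```python
def should_crush(material: str, stream: str) -> bool:
    """Return True if it is safe to crush the item before recycling.

    material: "plastic" or "aluminium"
    stream:   "single" (everything in one curbside bin) or "multi" (separated bins)
    """
    if material == "plastic":
        return True  # plastic bottles are generally best crushed (caps on)
    if material == "aluminium":
        # crushed cans confuse single-stream sorters, so leave them intact there
        return stream == "multi"
    raise ValueError(f"unknown material: {material}")
```

When in doubt, the function errs the same way the article does: an aluminium can in a single-stream program stays uncrushed.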
|
Water is essential to human life. In fact, it is essential to all forms of life known to humankind, as there are no known species that can survive without it. Though marine biologists are unsure just how many kinds of creatures reside in our planet’s five oceans, it is estimated that about one-quarter of all the Earth’s species do. Nor should we forget how very important the oceans are to our civilization—for thousands of years, braving their waters has been one of the boldest feats a human being could accomplish, one that often led to amazing discoveries and a general increase in our knowledge of the planet we inhabit. For all of these reasons and many, many more, Mother Ocean Day is a long-overdue celebration of our oceans in all of their majesty and peril.
The History of Mother Ocean Day
Mother Ocean Day is a relatively new celebration, first introduced in 2013. It was conceived by the South Florida Kayak Fishing Club, which has since sought the approval of the City of Miami to declare the day official. The point, of course, is to take a day to celebrate the beauty and wonder of the ocean, and it is no surprise that inhabitants of Florida were the ones to come forward with this idea, as Florida is famous for its particularly gorgeous white sand beaches and clear, aquamarine waters.
How to Celebrate Mother Ocean Day
There are many things people can do on Mother Ocean Day; what’s important is to pay homage to this incredible force of nature and enjoy what it has to offer to the full. Taking to the waves, whether on a boat or a surfboard, is one way to enjoy the day. Snorkeling and diving are both unforgettable ways to get to know the ocean better by taking a look at some of the plants, fish and other creatures living in it. If you prefer to stay on dry land, a picnic on the beach enjoying the calm, soothing sound of the waves could be the perfect way for you to appreciate the ocean. Just remember to clean up afterwards! And for those who wish to celebrate the day from the comfort of their own home, eating a meal made from foods of the ocean, such as fish and shellfish, could be a deliciously appropriate way to observe the occasion. For example, have you ever tried langoustines? Langoustines are an excellent alternative to lobsters: they are much cheaper, but have a similar flavor that some chefs even find superior because of its delicate sweetness. They are also surprisingly easy to prepare—all you really need is some salty water to briefly boil them and some garlic butter to brush over them. If you love your barbecue, langoustines can also be barbecued and then dipped in a simple dijon mustard sauce. Originally, langoustines were eaten in Europe, but they have recently become popular in North America as well, so if you have never tried them, this day is the perfect time!
However, regardless of whether it’s Mother Ocean Day or not, we should always respect the oceans and the beaches leading into them by never polluting them in any way, so future generations can enjoy them as much as we do today.
|
The word “badlands,” to geologists, means a region that has been badly eroded by water and wind.
The result is an area of deep gullies and small, steep hills. The soil is poor or nonexistent, and few plants grow there.
Badlands National Park in southwestern South Dakota is such a place. It features acres of sharp buttes, pinnacles, and spires of eroded rock.
The primary vegetation is grasses; in fact, Badlands is the largest protected mixed-grass prairie in the United States.
Because floods and winds have swept away so much of the soil and rock in badlands areas, dinosaur and other kinds of fossils often show up. Badlands National Park is no exception; it is home to the richest Oligocene-epoch fossil bed in the world.
Fossil remains of ancient horses, sheep, rhinoceroses, and pigs have been found there.
|
The Dodd-Frank Act versus the Rule of Law
In response to the 2008 financial collapse, Congress passed the Dodd-Frank Wall Street Reform and Consumer Protection Act. Dodd-Frank increased regulation of banks, stockbrokers, insurers and other financial institutions that are “too big — or interconnected — to fail” and that could require a government bailout to prevent a banking system collapse.
The Act created a board — the Financial Stability Oversight Council — composed mostly of the heads of various federal financial regulatory agencies, including some newly created agencies [see the table]. The Council has the responsibility to identify institutions whose failure might create systemic distress — and the discretion to impose “prudential” regulations on them different from the regulations imposed on other financial institutions.
Regulation is discretionary when the requirements imposed on privately owned institutions vary from firm to firm in ways that are difficult to explain or anticipate, particularly by the affected firms. In the extreme form, regulations could be imposed on specific firms, regardless of the type of business they conduct, or whether they are chartered by individual states or the federal government.
Discretionary regulation is contrary to the rule of law, but is consistent with the “rule of men” — in this case, experts in financial regulation. However, by its very nature, discretionary regulation cannot reliably produce the results its advocates desire; furthermore, such regulation will increase financial instability rather than reduce it.
Discretionary Regulation in Theory. Economists who support Dodd-Frank usually cite network theories that claim the financial system is inherently unstable because all its institutions are interconnected and interdependent. These network theories say that a shock in one region or sector of the economy, such as the collapse of a major bank or insurance company, can spread to the whole — a so-called contagion. Advocates of such theories believe regulators can reduce instability by constraining the investment portfolios of financial institutions in order to reduce the “domino effect” — that if one institution fails they will all fail. Some economists, including the executive director of financial stability for the Bank of England, advocate “shaping the network topology,” which comes close to saying regulators should pick portfolios for the banks.
Following the 2008 financial crisis, economist Janet Yellen recommended supervision of so-called systemic institutions, “defined by key characteristics, such as size, leverage, reliance on short-term funding, importance as sources of credit or liquidity, and interconnectedness in the financial system — not by the kinds of charters they have.”
Such regulations could vary the amount and composition of a firm’s capital (risk-based capital requirements), the ratio of its debts to its capital stock (leverage limits), and how much cash it is required to keep on hand (liquidity requirements). Yellen’s recommendations were to a great extent enacted in Dodd-Frank.
Discretionary Regulation in Practice. Discretionary regulation is not new. For example, the Sherman Antitrust Act of 1890 outlawed the unreasonable restraint of trade, a vague term that has never been satisfactorily defined. Such vagueness invites discretion in interpretation and enforcement.
Although discretionary regulation is not new, the element of discretion created by Dodd-Frank is so great that the regulators themselves seem perplexed. For example, Dodd-Frank added a new section to the Bank Holding Act (BHC) of 1956 requiring a group of regulatory agencies to formulate a Volcker rule — a proposal by former Federal Reserve Board Chairman Paul Volcker to limit proprietary trading and conflicts of interest between financial institutions and their clients. In the rule proposed in November 2011, the agencies note “the delineation of what constitutes a prohibited or permitted activity under section 13 of the BHC Act often involves subtle distinctions that are difficult both to describe comprehensively within regulation and to evaluate in practice.” Thus, the very regulators empowered to execute the Act report that it is not merely hard to understand, but opaque.
By one count, Dodd-Frank calls for regulators to formulate 533 rules, all of them more or less open ended and unspecified. However, recommendations have been made regarding what such regulations should do, including:
- Levying taxes (“fees”) on institutions in proportion to the risks they pose to the financial system.
- Imposing higher liquidity requirements on the most connected banks in the network.
- Establishing central clearing of financial transactions —simplifying but centralizing the network.
- Limiting bank size or activities, and controlling the composition of bank portfolios.
The discretionary regulations called for in Dodd-Frank make it difficult for people to anticipate the legal consequences of their actions. Thus, the Act violates one of the principal requirements of the rule of law. As Harvard law scholar Richard H. Fallon, Jr., explains, “People must be able to understand the law and comply with it.”
Rule by Experts. The Financial Stability Oversight Council is a body of experts. Such experts are imagined to exist and operate, somehow, above the system, and uninfluenced by it. However, it is more appropriate to treat experts as ordinary humans. As humans, experts may try (perhaps in vain) to maximize utility — that is, enact their own preferences. Moreover, their cognition is limited and erring; they are not smarter than the rest of us. And, importantly, incentives do influence the errors the experts make.
Maximizing Utility. If experts seek to maximize utility, they may not be impartial. They could serve other ends, such as larger budgets for their agencies. Even conscientious regulators could have risk preferences different from those of the public, and as a result they might over- or under-price risk. The motives of policy makers need not be selfish to be dangerous.
Limited and Erring Cognition. Dodd-Frank gives regulators the responsibility to assess the risks to the financial system posed by financial institutions. There is no competitive market for risk assessment. Indeed, the goal is to monopolize risk assessment. But absent market signals of greater and lesser risk, authorities cannot assign reliable risk charges to the capital portfolios of different institutions. The very complexity used as a justification for centralized risk assessment may make it impossible for a central body to reliably assess risk.
Influence of Incentives. Finally, incentives skew the errors experts will tend to make, even honest errors. Indeed, all observations are biased by the expectations and motives of the observers. Regulatory errors will tend to serve the biases of the regulators. It is quite possible that collectivized risk assessment may be beset with systemic biases. Without a market to test those assessments, such biases could go uncorrected, and could grow over time.
Conclusion. By replacing competitive evaluations of risk in the marketplace with centralized risk pricing, Dodd-Frank ensures that financial institutions, investors and depositors will have less information about the risks they face. And by further displacing the rule of law with discretionary regulation, the Act ensures that there will be less certainty about the future. As a result, it increases the instability of financial institutions, rather than reducing it.
Roger Koppl is a professor of economics and finance in the Silberman College of Business and director of the Institute for Forensic Science Administration at Fairleigh Dickinson University, and a senior fellow with the National Center for Policy Analysis.
|
Removing the toxic and odorous emissions of ammonia from the industrial production of fertilizer is a costly and energy-intensive process. Now, researchers in Bangladesh have turned to microbes and inexpensive wood charcoal to create a biofilter that can extract the noxious gas from vented gases and so reduce pollution levels from factories in the developing world.
Writing in the International Journal of Environment and Pollution, Jahir Bin Alam, A. Hasan and A.H. Pathan of the Department of Civil and Environmental Engineering, at Shahjalal University of Science and Technology, in Sylhet, explain that biofiltration using soil or compost has been used to treat waste gases for the last two decades. There are simple filters for reducing odors and more sophisticated units for removing specific chemicals, such as hydrogen sulfide, from industrial sources.
Among the many advantages is the fact that biofiltration is an environmentally friendly technology, resulting in the complete degradation by oxidation of toxic pollutants to water and carbon dioxide without generating a residual waste stream. It also uses very little energy. Biofilters are widely used in the developed world, but they are far less common in the developing world, which is industrializing rapidly without necessarily considering pollution control.
The Shahjalal team has now built a prototype biofilter for ammonia extraction based on wood charcoal in which the ammonia-oxidizing (nitrifying) microbe Nitrosomonas europaea has been grown. This microbe derives all its energy for metabolism, growth, and reproduction from ammonia, which it absorbs and oxidizes to nitrite. The microbe is commonly found in soil, sewage, freshwater, and on buildings and monuments in polluted cities.
The team found that their prototype biofilter could function at an ammonia concentration of 100 to 500 milligrams per liter of gas and remove the ammonia from this gas stream almost completely. Approximately 93% removal of ammonia gas was seen within seven days.
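The removal figure is just an inlet/outlet ratio. A quick sketch of the arithmetic (the 500 → 35 mg/L pair is an illustrative example consistent with the reported ~93%, not data from the paper):

```python
def removal_efficiency(c_in: float, c_out: float) -> float:
    """Percentage of ammonia removed, from inlet and outlet concentrations (mg/L)."""
    return 100.0 * (c_in - c_out) / c_in

# An inlet of 500 mg/L reduced to 35 mg/L corresponds to 93% removal.
print(removal_efficiency(500, 35))  # → 93.0
```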
|
The Future of College Admissions & Student Portfolios
Every day of school may seem repetitive, but each day is only a step in a student’s educational journey. The story behind every student is not always told, or even shared. Every essay, athletic game, music performance, art show, and accomplishment makes up the core of every student’s educational career. When a student applies to college, he or she can submit only a glimpse of their achievements in the application. Colleges believe they choose the students that best match the school, but this assumption is based on very little information. If a student could produce a portfolio filled with all of their accomplishments, strengths, and ideals, then colleges would have to reconsider their standards. A portfolio connected through the Internet, the school, and a world of professionals would change the way students embark on not only their educational journey, but also the journey for the rest of their lives.
Imagine if a student wanted to get into AP English his junior year. If that student received a B or even a C+ his freshman year then his chances are greatly diminished. What if it were possible to work on an essay past its due date, maybe even a year later? If a student could improve his essays by the end of sophomore year, teachers could evaluate his or her portfolio and best judge if AP or Honors is the right choice. This could be applied to projects, lab reports, and many different assignments. Students would also be able to work collaboratively with teachers who could provide assistance on assignments. The portfolio would allow the editing, sharing, and syncing of documents so that a bad grade or bad paper could be improved little by little over time.
Teachers could use this structure to build their curriculums based on individual students. If a teacher posted three projects that must be completed by a certain deadline then each student could complete their assignments and upload it according to their own schedules. The freedom to work according to an individual’s own rate would allow the students to plan when to complete certain assignments and when to get ahead. A requirement could be set for each academic class, which must be met by the end of the year. When students complete their years in high school then they could progress knowing that the requirements that they have met will showcase what they truly have done.
The virtual portfolio would be a place where students store their lives, not only by themselves, but also through the efforts of peers, teachers, and those who wish to recognize student accomplishments. When a team wins a varsity game then everyone wants to know the score. People tend to not share or remember how well individual players performed. If a coach could post scores immediately after a game and share photos or even individual success then a student could display the gradual increase or highlights of his performance. When a student performs in a concert, displays his or her art, attends a science competition, or is running an activity or club then those moments need to be archived. People could link snippets of performed songs, pictures of events, videos of a debate competition or even a successful club meeting in which students are individually tagged. These tags would be immediately organized into the student’s portfolio and show what is being done in and outside of school. If professionals viewed a student portfolio, they could offer a chance to professionally record a track or invite students to conferences in fields that they excel in. A virtual portfolio would not be enhanced by only the owner, but also by collaborative efforts of the community that wants to see the excellence in everyone.
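At its heart, the portfolio described above is a collection of tagged artifacts that can be queried by category. A minimal data-structure sketch in Python (all class names, fields, and the example student are hypothetical, invented for illustration):

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Artifact:
    """One tagged accomplishment: an essay, game highlight, recording, etc."""
    title: str
    kind: str        # e.g. "essay", "athletics", "music", "club"
    posted: date
    tagged_by: str   # student, teacher, coach, or peer who added it

@dataclass
class Portfolio:
    student: str
    artifacts: list[Artifact] = field(default_factory=list)

    def add(self, artifact: Artifact) -> None:
        """Anyone in the community can file an artifact into the portfolio."""
        self.artifacts.append(artifact)

    def by_kind(self, kind: str) -> list[Artifact]:
        """Everything filed under one category, e.g. for an AP placement review."""
        return [a for a in self.artifacts if a.kind == kind]
```

A teacher reviewing AP English placement would then call `portfolio.by_kind("essay")` and see every revision, not just the freshman-year grade.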
As college slowly creeps out from behind the corner, students worry whether they have done enough to impress their college choices. Many colleges and universities look at the activities that students participate in, while all look at how well students do in their education. If a portfolio could organize everything into an easy download to share, then students could be assured that they have left no stone unturned. A complete representation of a student would be a mouse click away for admissions counselors.
If Facebook can represent the social life of an individual and networks such as LinkedIn showcase industrial expertise then what can accurately depict the educational journey that every single student embarks on? This virtual portfolio would revolutionize the way we look at education, excellence, and ourselves.
|
To anyone unfamiliar with the Observer, “Texas Left” might seem an oxymoron, or some grotesque creature hunted to near-extinction in the Piney Woods. Was it last sighted jaywalking across Guadalupe Street when hit by a Hummer? Maybe we can read it as the beginning of a phrase: Texas left … the Union. According to Carl H. Moneyhon, one of 14 scholars contributing to The Texas Left, the state turned left in the aftermath of the Civil War when Union loyalists deposed the secessionists who had seized their property and killed their companions. During Reconstruction, the only sustained period in which the right did not spread its wing across Texas, radical Republicans pushed for universal suffrage and public education. By the 1870s, the white oligarchy was securely back in power.
In his essay for this academic collection, Greg Cantrell provides an inspiring reminder that Texas populism has not always manifested itself in fractious crowds of creationists, birthers, tax rebels, right-to-lifers, gun-worshippers and immigrant-bashers. Founded in 1877, the Farmers Alliance fought exploitative pricing through collective action and agricultural cooperatives. The People’s Party, founded in 1891, advocated a graduated income tax, women’s suffrage, secret ballots and direct election of senators. So alien was the Reagan doctrine, “government is the problem,” that the party favored public ownership of railroads. “I have never been frightened by that scarecrow, strong government,” declared Texas Populist Charles Jenkins in 1894. “I believe in a government strong enough to protect the lives, liberty and property of its citizens.”
After gubernatorial candidate Jerome Kearby captured 44 percent of the vote in 1896, the People’s Party faded away. Within a decade, the Socialist Party was drawing more votes than the Republicans, but not enough to keep the governor’s mansion free from a succession of reactionary residents. Despite successful struggles for safety codes, minimum wages, accident compensation and child-labor laws, George Norris Green and Michael R. Botson Jr. contend that 1920 “turned out to be labor’s high-water mark.” Anti-union animosity soon turned Texas into a “right-to-work” state. Jim Crow continued flapping more than a decade after Brown v. Board of Education, and Latinos still lack opportunities commensurate with their numbers, though Texas Left contributor Patrick Cox believes that “Texas evinced a more moderate—or at least more diverse—identity than the rest of the South.”
Texas Left includes the women’s movement, but nothing about gays and lesbians, American Indians or environmental activists. It discusses the NAACP and LULAC, but omits the ACLU. Molly Ivins, Barbara Jordan and Ralph Yarborough receive at least passing mention, but other icons of the left like Sissy Farenthold and John Henry Faulk are missing. A comprehensive account of the Texas left would demand many more pages. That’s small consolation to the beleaguered left in Texas today.
Contributing writer Steven G. Kellman teaches at the University of Texas at San Antonio.
|
Ratch"et (?), n. [Properly a diminutive from the same word as rack: cf. F. rochet. See 2d Ratch, Rack the instrument.]
A pawl, click, or detent, for holding or propelling a ratchet wheel, or ratch, etc.
A mechanism composed of a ratchet wheel, or ratch, and pawl. See Ratchet wheel, below, and 2d Ratch.
Ratchet brace Mech., a boring brace, having a ratchet wheel and pawl for rotating the tool by back and forth movements of the brace handle. -- Ratchet drill, a portable machine for working a drill by hand, consisting of a hand lever carrying at one end a drill holder which is revolved by means of a ratchet wheel and pawl, by swinging the lever back and forth. -- Ratchet wheel Mach., a circular wheel having teeth, usually angular, with which a reciprocating pawl engages to turn the wheel forward, or a stationary pawl to hold it from turning backward.
[Illustration: ratchet wheel and ratchet drill]
⇒ In the cut, the moving pawl c slides over the teeth in one direction, but in returning, draws the wheel with it, while the pawl d prevents it from turning in the contrary direction.
© Webster 1913.
|
Upon completion of the course, the student should be able to:
Describe the use of unconsolidated soils in building and construction technology, and describe basic concepts in soil science
Describe and classify the bedrock and its structure in terms of its technical characteristics for construction on and in bedrock. Describe the prevalence and use of rock and aggregate material, including processes
Describe the stress field in bedrock and tunnelling and reinforcement methods
Describe soil behaviour during compression and shear
Describe the main features of geotechnical design.
Engineering geology: the occurrence and use of Quaternary strata as construction and aggregate material; the structure of the Swedish bedrock, rock types and the technical properties of rock when building in bedrock and using aggregates; rock reinforcement methods. Soil science: soil composition and classification; the soil's structural composition; basic mechanics. Rock mechanics: the rock stress state; dimensioning of reinforcement; calculations in rock mechanics. Soil behaviour: stress conditions in soil; compression and shear of soil; movement in soil. Soil mechanics: consolidation settlement, bearing capacity, earth pressure, slope stability; field and laboratory methods in soil mechanics; soil stabilisation methods; geotechnical investigation and design.
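One of the listed topics, consolidation settlement, lends itself to a one-line calculation. A sketch of the standard primary-consolidation formula for a normally consolidated clay layer (the numerical inputs in the test are invented for illustration, not course data):

```python
import math

def consolidation_settlement(Cc: float, e0: float, H: float,
                             sigma0: float, delta_sigma: float) -> float:
    """Primary consolidation settlement of a normally consolidated clay layer.

    Cc: compression index, e0: initial void ratio, H: layer thickness (m),
    sigma0: initial effective stress (kPa), delta_sigma: stress increase (kPa).
    Returns settlement in metres: s = Cc/(1+e0) * H * log10((sigma0+dsigma)/sigma0).
    """
    return Cc / (1 + e0) * H * math.log10((sigma0 + delta_sigma) / sigma0)
```

For example, Cc = 0.3, e0 = 0.8, a 4 m layer, and an effective stress doubling from 100 to 200 kPa give roughly 0.20 m of settlement.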
Lectures, exercises, laboratory work and excursions.
Laboratory reports and excursion report (3 credits), exercises (4 credits), written exam (3 credits).
|
Hendra's portrayal of women is undoubtedly among the most beautiful female portraiture in the history of Indonesian modern art. Indeed, it is difficult to find parallels for these images in the Indonesian history of portraiture, as they are rendered in the artist's most unique and individualistic style. The artist's muse is not a specific person; rather, he drew inspiration from the many women he knew of in the community. She could be a vendor on the side of the street, a neighbour, or simply someone passing by of whom Hendra caught a glimpse and thus transposed into an eternal icon on the canvas.
Woman was a reflection of the artist's cosmos. Hendra's obsession with all things 'Indonesian' finds a perfect expression with this subject. If she was breastfeeding a baby with her strong, masculine feet rooted firmly to the ground, she was a symbol of the artist's beloved motherland, the young republic of Indonesia. If she was depicted in glorious colours, dressed in the finest traditional batik, she would be the symbol of the great Javanese culture that was close to the artist's heart. If she was placed in a grandiose landscape with which she could almost merge as one entity with her curvaceous body which Hendra had intended it to be reminiscent of the dramatic landscape, she would be the embodiment of all things that are beautiful and Indonesian. Hers was the privileged body on which the light fell to perfection.
Many factors contributed to the exceptional quality of Woman with Jackfruit. The composition is typical of Hendra's classic portraiture, where the subject, with her elongated and slender frame, is donned in a batik dress of exquisite intricacy, her side profile enhanced. The palette is exceptionally rich, made up of colours that are deeply saturated and yet luminous and vibrant. The paint was applied with such spontaneity and vigor that it still looks fresh, as if Hendra had completed the painting only moments ago.
Her body was rendered with the same undulating forms that had characterized much of Hendra's work since the early days, but whereas the forms had so often seemed predatory before, by this stage of the artist's career (the 1960s onwards) they had evolved to be slower, softer, more welcoming and more organic.
It seems that with Woman with Jackfruit, Hendra was at his whimsical best. As a colourist, he set red and green shades to complement one another with a vividness in accordance with his decorative tendency, which he took further in this work by painting the enhanced details of the dark curls that crown the subject, giving her a bejewelled effect and leaving the onlooker in awe.
|
More information about Frilled Lizards
Special anatomical and physiological characteristics
The ability of the Frilled Lizard to spread its frill is very interesting. The frill is extended when the lizard opens its mouth widely. The frill is composed of rods made of cartilage. These rods are connected to the tongue and jaw muscles. The frill is brightly colored and is only extended when the lizard is frightened.
Comments about the lizards of the Fort Worth Zoo:
The lizards are between the ages of 5 and 20 years old. The Fort Worth Zoo received the Frilled Lizards from the US Fish and Wildlife Service, which confiscated the lizards at the airport, where they were about to be exported. I noticed that the male Frilled Lizards are very hostile towards each other. However, the baby lizards (male and female) get along with each other very well. Also, I noticed that the lizards hardly ever show their frills at the zoo.
Juvenile Frilled Lizards at the FWZ Herpetarium Nursery
Sources and Links
Reptiles and Amphibians at the Fort Worth Zoo
|
Death With Dignity States 2020
Physician-assisted dying is a controversial topic in the United States. Some people believe that a terminally ill person should have the right to end their life if they choose without the interference of government or without religious beliefs coming into play. In some U.S. states, the government agrees with this and has implemented death with dignity laws.
There are safeguards in place that go along with these laws to prevent misuse. For example, two physicians must confirm the patient’s diagnosis and prognosis. Waiting periods are also required before the prescription will be filled. Perhaps most importantly, the decision falls on the patient, and doctors must confirm the patient’s mental competency before moving forward with prescribing the medication.
In the United States, states that have death with dignity laws make it legal for adults with terminal illnesses to receive prescription medication that assists in their death. This is also known as physician-assisted suicide.
Even states that have legalized this form of death have several conditions that patients must meet. Physician-assisted suicide is only allowed for mentally competent adults who have been given 6 months or less to live because of a terminal illness. In other words, death must be inevitable in order for a person to receive the medication that ends the patient’s pain and suffering.
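The statutory conditions just described can be read as a simple conjunction of tests. A purely illustrative sketch of that logic (parameter names are ours; this is a summary of the article's description, not legal guidance):

```python
def eligible_for_aid_in_dying(age: int, mentally_competent: bool,
                              prognosis_months: float,
                              physicians_confirming: int) -> bool:
    """All conditions must hold: adult, mentally competent, terminal
    prognosis of 6 months or less, confirmed by two physicians."""
    return bool(age >= 18
                and mentally_competent
                and prognosis_months <= 6
                and physicians_confirming >= 2)
```

If any one test fails, the patient does not qualify under the laws described here.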
This has been a hotly debated topic across the nation. Some believe that it’s a person’s right to choose and the government shouldn’t have a say in how someone ends their own life. On the flip-side, some religious leaders advocate against death with dignity laws, believing that suicide is a sin.
The majority of the states in the U.S. have made physician-assisted suicide illegal. In fact, there are just eight states plus Washington, D.C., that have passed death with dignity laws. Many of these laws have been passed only within the last couple of years.
The states that have death with dignity laws are Oregon, Washington, Vermont, California, Colorado, Hawaii, Maine and New Jersey, plus the District of Columbia.
The state of Oregon was the first state in the U.S. to pass death with dignity laws through the Death With Dignity Act of 1994 and 1997. The next laws weren’t passed until over 10 years later, under Washington’s Death With Dignity Act of 2008.
In Vermont, physician-assisted suicide was legalized under the Patient Choice and Control at the End of Life Act, which was passed in 2013. In 2015, California passed the End of Life Option Act, although the laws didn’t go into effect until the next year. In 2016, Colorado and the District of Columbia passed laws – the End of Life Options Act and the D.C. Death with Dignity Act, respectively.
The most recent laws were passed in 2018 and 2019. Hawaii passed its Our Care, Our Choice Act in 2018, Maine passed its own Death with Dignity Act in 2019, and New Jersey passed the Aid in Dying for the Terminally Ill Act in 2019.
|
Functional Behavioral Assessment - 14712
Part of PRO-ED Series on Autism Spectrum Disorders, 2E
Additional Book Details
Functional Behavior Assessment describes a process to develop successful, function-based intervention plans and design maximally effective management programs for children and youth with autism spectrum disorders (ASD). Functional behavioral assessment (FBA) is an information gathering process to identify environmental variables that produce challenging behaviors. This book presents the foundation of FBA, the steps in the FBA process, data collection procedures to complete an FBA, and guidelines for developing function-based interventions.
2. Assumptions of FBA
3. The FBA Process: An Overview
4. Collecting FBA Data
5. Function-Based Interventions
Sold By: PRO-ED Publishing, Inc.
Number of Pages: 76
|
Native to (or naturalized in) Oregon:
- Evergreen shrub, 3-6 ft (0.9-1.8 m) tall; forms large clumps and thickets as branches often root when in contact with the soil. Trunks mahogany colored and exfoliating, but branches are covered with pubescence, giving them a yellow-green to grayish appearance. The branches are reddish, similar to those of Arctostaphylos patula (Green Manzanita). Leaves alternate, simple, erect, 2-4.5 cm long, elliptical to lanceolate, with a wedge-shaped base, pointed tip, and entire margins; leathery; both surfaces are pale bluish-green due to a fine covering of hairs. Flowers white and pinkish, about 6 mm, urn-shaped, waxy, in dense clusters. Fruit 6 mm across, chestnut brown to terra cotta, shiny, flattened spheres.
- Sun, tolerates alkaline soils, sand, clay.
- Hardy to USDA Zone 5. Native range from Texas west to Utah, New Mexico, Arizona, and California, and south in Mexico to Oaxaca.
|
Wisconsin once had a 'model' voting rights program for people with disabilities. Officials have let it decline.
Despite the clamor to turn out Wisconsin voters in 2020, some voters might be stopped at the doors of their polling places.
Auditors have flagged hundreds of violations at Wisconsin polls that make it harder or impossible for voters with disabilities to vote in person. A Journal Sentinel review of audits found officials are missing required action plans to fix most of these issues from the last two years.
Though Wisconsin once had a robust program for monitoring accessibility problems at polls — one that was lauded as a best practice by a presidential commission in 2014 — state officials have let it wane. Since the recognition, officials have missed audits, been slow to follow up on accessibility violations and provided fewer supplies to help polling places become more accessible.
“This dramatic decrease in the audit program is troubling as these audits provide critical information on the accessibility of polling places around the state,” said Denise Jess, executive director of the Wisconsin Council of the Blind and Visually Impaired.
Jess serves on an advisory committee for the Wisconsin Elections Commission, which runs the accessibility program. She and other disability rights advocates on the committee want to see the commission do more to address problems that shut out voters with disabilities.
People with disabilities are about as likely to register to vote as people without disabilities, but they are less likely to get their votes cast, according to a Rutgers University report.
In Wisconsin, an estimated 55% of people with disabilities voted in November 2018, compared to an estimated 66% of people without disabilities, the Rutgers report found. Wisconsin’s gap is about twice as wide as the national gap and has gotten worse since 2014.
State officials have long recognized accessibility problems at polls. In 1999, state lawmakers reviewing election processes passed a statute: They wanted reports every two years about any impediments faced by elderly voters and voters with disabilities.
Fifteen years later, President Barack Obama received a report from his commission on election administration that called Wisconsin’s monitoring program a model for other states. At the time, the program was run by the state’s now-defunct Government Accountability Board.
“(Other states’) jurisdictions can learn a lot from the state of Wisconsin, which… has one of the most thorough election data-gathering programs,” the report stated.
Unlike other states, which checked polling places in advance of elections, monitors in Wisconsin showed up unannounced on election days to get a better read of the “actual voter experience.” This allowed them to catch different problems, like sites where accessible entrances were locked on election day or assistive technology wasn’t set up.
At that time, Wisconsin had recently released its report on audits done over the previous two years. Auditors had visited 1,614 polling places in 66 of the state’s 72 counties. They found more than 10,000 violations of the Americans with Disabilities Act.
In the following report, covering 2014 and 2015, GAB oversaw 808 audits. In 2016, there were 366.
In 2017, after GAB folded and the program fell to the Wisconsin Elections Commission, there were zero audits and the agency failed to write a report for lawmakers. WEC officials said this was partly due to staffing issues during the transition.
Though the program was revived in 2018 and 2019, auditors visited just 78 sites over the two years.
The decline is troubling to Jennifer Neugart of the Wisconsin Board for People with Developmental Disabilities, who also serves on WEC’s advisory committee.
“With that drastic of a cut, there is no real way to know what people with disabilities are experiencing at the polls or if accessibility issues are getting better or worse,” Neugart said.
Of the sites checked in the past two years, only four had no violations. Monitors found 345 problems. There were heavy doors without automatic openers, doorbells or greeters to help people without the ability to open them. There were steps without ramps, elevators or proper lifts for wheelchairs.
Many polling places hadn’t set up assistive voting equipment necessary for those with vision impairments and other voters. More had failed to post proper signage to identify accessible parking, routes and voting instructions.
Of the problems listed in the new report, over a third were considered likely to prevent a voter with a disability from entering a polling place and independently casting a private ballot.
It’s unclear what was done about these problems. State officials didn't provide clerks with their audits from the past two years until September.
Clerks were told to respond with action plans to address each problem. With prewritten options provided by WEC, clerks were to check boxes next to their chosen approaches.
As of Dec. 14, state officials were only able to provide 13 plans in response to a Journal Sentinel request. They were missing 61.
Reid Magney, WEC’s public information officer, said the agency would send follow-up emails and make phone calls in the coming weeks.
Neugart, part of Wisconsin's Disability Vote Coalition, said it's important for WEC to ensure clerks follow through on changes after audits.
"Most of the time, accessibility problems can be addressed quite easily but municipalities need guidance and resources to make those changes," she said.
Neugart also said accessibility problems are likely to affect a growing number of people as Wisconsin's population ages.
Magney gave multiple reasons for the decline in the auditing program. He said it’s partly because of greater post-2016 pressure on staff time to tighten election security and prevent hacking; the agency focused on implementing and training nearly 2,000 election workers on information technology security measures.
The accessibility program also took a hit with the loss of federal funds under the Help America Vote Act, which previously allowed WEC to hire monitors and provide equipment to help polling places become more accessible.
Wisconsin had been receiving about $200,000 annually for accessibility programming under that act from 2003 through 2011. Each payout came with a five-year expiration date, and WEC spent the last of the money in 2016.
For 2019, WEC requested and received $48,000 in state funding to allow the program to continue. Still lacking funds to hire auditors, the agency primarily relied on staff from the advocacy group, Disability Rights Wisconsin, to conduct the audits at no cost to the state.
Magney said WEC is planning to hire auditors in 2020 but said the plan is still under development. When asked how many audits WEC aims to do in 2020, Magney said staff members are still working to develop a goal and that the agency "looks forward to recharging."
Voting rights advocates want to see the program expand. Disability Rights Wisconsin has recommended WEC push for more state funding for accessibility oversight.
"The report is an important resource for policymakers, advocates and to everyone who wants to increase the accessibility of our electoral system," said Barbara Beckert, who represents Disability Rights Wisconsin on WEC’s advisory committee.
Beckert said the program also needs resources to follow up with polling places after audits to ensure clerks fix problematic conditions.
Additionally, advocates say many polling places need new accessible voting machines, as the old ones fall out of date. Jess said it would be helpful to have a state-funded grant program for municipalities to update this equipment — the way they’ve updated IT systems for security.
WEC does offer some supplies for polling places to improve accessibility, though the amount distributed has declined.
Recently, WEC replenished a supply of accessible parking and entrance signs and tools to guide people with visual impairments through the voting process. Staff also added new supplies to their stock including wireless doorbells. Any community can request supplies from WEC at no cost, regardless of whether the community is audited through the program.
Voters can also request poll workers bring a ballot to their car or entrance of the polling place — known as curbside voting. Polling places should post signs with a number to call. In Milwaukee, it's (414) 286-3963.
The Wisconsin Disability Vote Coalition offers captioned videos about registering to vote, becoming a poll worker and other voting rights topics at disabilityvote.org/videos.
Rory Linnane reports on public health and works to make information accessible so readers can improve their lives and hold officials accountable. Contact Rory at (414) 801-1525 or email@example.com. Follow her on Twitter at @RoryLinnane.
|
Solar energy is radiant light and heat from the sun, harnessed using a range of ever-evolving technologies such as solar heating, solar photovoltaics, solar thermal electricity, solar architecture and artificial photosynthesis.
Solar technologies are broadly characterized as either passive or active depending on the way they capture, convert and distribute solar energy. Active solar techniques include the use of photovoltaic panels and solar thermal collectors to harness the energy. Passive solar techniques include orienting a building to the Sun, selecting materials with favorable thermal mass or light-dispersing properties, and designing spaces that naturally circulate air. Solar technology is also used to power air conditioners and to heat room space. Smartclima works in renewable energy, mainly on solar hybrid products, and aims to supply good solar and wind energy products. Our main products are solar photovoltaic panels, solar mounting bracket systems, power inverters, solar storage batteries, solar thermodynamic panels, solar PVT panels, solar air heaters and solar-powered air conditioners.
|
As an extension to the previous lesson, a series of portraits from the Museum of Fine Arts, Houston's Bayou Bend Collection can be used to compare the portraits reflecting the culture of viceregal Mexico with the culture of the same time period in North America.
1. What criteria did we develop?
The same criteria used to analyze the Franz Mayer portraits can be used to analyze these portraits, and the answers can be compared to those from the previous lesson.
2. How do these criteria apply to other portraits of women from the same time period but a different geographic area?
The teacher shows the three portraits below, either from prints or from the links below, and asks the students to work in small groups or individually to answer the questions in the criteria.
3. How can you describe the culture and history surrounding these portraits?
Students can share the comparisons about these portraits in a variety of ways.
- Students can present their summary to the entire class or to each other in small groups.
- Students can provide their answers in a short writing assignment. Depending on the age of the students and their writing abilities, the assignment might be as simple as one that summarizes what the portrait reveals or as complex as writing a short story or poem about the subject of the portrait.
- Students can write a headline and a caption for the portrait as if it is in a newspaper. These headlines can be placed under the picture if displayed on a bulletin board.
- Students can take the role of the subject of the portrait and perform a short skit as that character.
The portraits painted by John Singleton Copley in the Bayou Bend Collection are from the same time period as the two portraits from Lesson 3 (1757 to 1782).
|
In 1909 Fritz Haber established the conditions under which nitrogen, N2(g), and hydrogen, H2(g), would combine using
- medium temperature (~500 °C)
- very high pressure (~250 atmospheres, ~25,500 kPa)
- a catalyst (a porous iron catalyst prepared by reducing magnetite, Fe3O4).
Osmium is a much better catalyst for the reaction but is very expensive.
This process produces an ammonia, NH3(g), yield of approximately 10-20%.
The Haber synthesis was developed into an industrial process by Carl Bosch.
The reaction between nitrogen gas and hydrogen gas to produce ammonia gas is an exothermic equilibrium reaction, releasing 92.4 kJ/mol of energy at 298 K (25 °C).
N2(g) + 3H2(g) ⇌ 2NH3(g)    ΔH = -92.4 kJ mol⁻¹    (heat, pressure, catalyst)
The reverse reaction absorbs the same 92.4 kJ mol⁻¹.
By Le Chatelier's Principle:
- increasing the pressure causes the equilibrium position to move to the right, resulting in a higher yield of ammonia, since there are more gas molecules on the left-hand side of the equation (4 in total) than on the right-hand side (2). Increasing the pressure means the system adjusts to reduce the effect of the change, that is, to reduce the pressure by having fewer gas molecules.
- decreasing the temperature causes the equilibrium position to move to the right, resulting in a higher yield of ammonia, since the reaction is exothermic (releases heat). Reducing the temperature means the system will adjust to minimise the effect of the change, that is, it will produce more heat, since energy is a product of the reaction, and will therefore produce more ammonia gas as well.
However, the rate of the reaction at lower temperatures is extremely slow, so a higher temperature must be used to speed up the reaction which results in a lower yield of ammonia.
The equilibrium expression for this reaction is:
Kc = [NH3]² / ([N2][H2]³)
As the temperature increases, the equilibrium constant decreases, as does the yield of ammonia: Kc falls from 6.4 × 10² through 4.4 × 10⁻¹, 4.3 × 10⁻³ and 1.6 × 10⁻⁴ down to 1.5 × 10⁻⁵ as the temperature rises.
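The falling trend in Kc can be sanity-checked with the integrated van 't Hoff equation, ln(K₂/K₁) = −(ΔH/R)(1/T₂ − 1/T₁). The short Python sketch below is illustrative only: it assumes ΔH is constant over the temperature range (only approximately true), and the two example temperatures are chosen for illustration.

```python
import math

# Van 't Hoff estimate of how the equilibrium constant changes with
# temperature for the exothermic Haber reaction (ΔH ≈ -92.4 kJ/mol).
# Assumes ΔH is independent of temperature, which is only approximate.

R = 8.314          # gas constant, J mol⁻¹ K⁻¹
DELTA_H = -92.4e3  # reaction enthalpy, J mol⁻¹

def k_ratio(t1_k: float, t2_k: float) -> float:
    """Return K(T2)/K(T1) from the integrated van 't Hoff equation."""
    return math.exp(-DELTA_H / R * (1.0 / t2_k - 1.0 / t1_k))

# Compare 298 K (25 °C) with 673 K (400 °C): for an exothermic
# reaction the ratio comes out far below 1, i.e. K shrinks on heating.
ratio = k_ratio(298.0, 673.0)
print(f"K(400 °C) / K(25 °C) ≈ {ratio:.2e}")
```

The computed ratio is much less than 1, consistent with the direction of the trend in the table above, though the exact values differ because ΔH is not truly constant.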
- A catalyst such as an iron catalyst is used to speed up the reaction by lowering the activation energy so that the N2 bonds and H2 bonds can be more readily broken.
- Increased temperature means more reactant molecules have sufficient energy to overcome the energy barrier to reacting (activation energy) so the reaction is faster at higher temperatures (but the yield of ammonia is lower as discussed above).
A temperature range of 400-500 °C is a compromise designed to achieve an acceptable yield of ammonia (10-20%) within an acceptable time period.
At 200 °C and pressures above 750 atm there is an almost 100% conversion of reactants to the ammonia product.
Since there are difficulties associated with containing larger amounts of materials at this high pressure, lower pressures of around 200 atm are used industrially.
By using a pressure of around 200 atm and a temperature of about 500 °C, the yield of ammonia is 10-20%, while costs and safety concerns in the building and during operation of the plant are minimised.
During industrial production of ammonia, the reaction never reaches equilibrium as the gas mixture leaving the reactor is cooled to liquefy and remove the ammonia. The remaining mixture of reactant gases are recycled through the reactor. The heat released by the reaction is removed and used to heat the incoming gas mixture.
Uses of Ammonia
- ammonium sulfate, (NH4)2SO4
- ammonium phosphate, (NH4)3PO4
- ammonium nitrate, NH4NO3
- urea, (NH2)2CO, also used in the production of barbiturates (sedatives), is made by the reaction of ammonia with carbon dioxide
- nitric acid, HNO3, which is used in making explosives such as TNT (2,4,6-trinitrotoluene), nitroglycerine, which is also used as a vasodilator (a substance that dilates blood vessels), and PETN (pentaerythritol tetranitrate).
- sodium hydrogen carbonate (sodium bicarbonate), NaHCO3
- sodium carbonate, Na2CO3
- hydrogen cyanide (hydrocyanic acid), HCN
- hydrazine, N2H4 (used in rocket propulsion systems)
- ammonium nitrate, NH4NO3
- Fibres and plastics: nylon, -[(CH2)4-CO-NH-(CH2)6-NH-CO]-, and other polyamides
- Refrigeration: used for making ice, large-scale refrigeration plants, and air-conditioning units in buildings and plants
- Pharmaceuticals: used in the manufacture of drugs such as sulfonamides, which inhibit the growth and multiplication of bacteria that require p-aminobenzoic acid (PABA) for the biosynthesis of folic acids; anti-malarials; and vitamins such as the B vitamins nicotinamide (niacinamide) and thiamine
- Pulp and paper: ammonium hydrogen sulfite, NH4HSO3, enables some hardwoods to be used
- Mining and metallurgy: used in nitriding (bright annealing) steel, and in zinc and nickel extraction
- Cleaning: ammonia in solution is used as a cleaning agent, such as in 'cloudy ammonia'
A Brief History
At the beginning of the 20th century there was a shortage of naturally occurring, nitrogen-rich fertilisers, such as Chile saltpetre, which prompted the German chemist Fritz Haber, and others, to look for ways of combining the nitrogen in the air with hydrogen to form ammonia, a convenient starting point in the manufacture of fertilisers. This process was also of interest to the German chemical industry as Germany was preparing for World War I and nitrogen compounds were needed for explosives.
The hydrogen for the ammonia synthesis was made by the water-gas process (a Carl Bosch invention), which involves blowing steam through a bed of red-hot coke. The nitrogen was obtained by the distillation of liquid air, itself produced by cooling and compressing air.
These days, the hydrogen is produced by reforming light petroleum fractions or natural gas (methane, CH4) by adding steam:
CH4(g) + H2O(g) → CO(g) + 3H2(g)
Enough steam is used to react with about 45% of the methane (CH4), the rest of the methane is reacted with air:
2CH4(g) + O2(g) + 4N2(g) → 2CO(g) + 4H2(g) + 4N2(g)
All the carbon monoxide (CO) in the mixture is oxidised to CO2 using steam and an iron oxide catalyst:
CO(g) + H2O(g) → H2(g) + CO2(g)    (iron oxide catalyst)
The carbon dioxide (CO2) is removed using a suitable base so that only the nitrogen gas (N2) and hydrogen gas (H2) remain and are used in the production of ammonia (NH3).
In ammonia production the hydrogen and nitrogen are mixed together in a ratio of 3:1 by volume and compressed to around 200 times atmospheric pressure.
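The 3:1 mixing ratio translates directly into partial pressures via Dalton's law (partial pressure = mole fraction × total pressure). A minimal sketch of that arithmetic, using the ~200 atm figure from the text (illustrative only, not plant data):

```python
# Partial pressures of the hydrogen/nitrogen feed via Dalton's law.
# The 3:1 ratio and ~200 atm total pressure come from the text above;
# real plant conditions vary.

def partial_pressures(total_atm: float, parts_h2: int = 3, parts_n2: int = 1):
    """Split a total pressure between H2 and N2 mixed by volume ratio."""
    total_parts = parts_h2 + parts_n2
    return (total_atm * parts_h2 / total_parts,
            total_atm * parts_n2 / total_parts)

p_h2, p_n2 = partial_pressures(200.0)
print(f"H2: {p_h2} atm, N2: {p_n2} atm")  # H2: 150.0 atm, N2: 50.0 atm
```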
|
A vote of the directors named a site including portions of Golden Gate Park, Lincoln Park, the Presidio, and Harbor View. Before 100,000 people President Taft broke ground for the Exposition in the Stadium of Golden Gate Park. But it was not long before the choice settled finally on Harbor View alone.
The work began with the organization of the architectural staff. The following architects accepted places on the commission: McKim, Mead and White, Henry Bacon, and Thomas Hastings of New York; Robert Farquhar of Los Angeles; and Louis Christian Mullgardt, George W. Kelham, Willis Polk, William B. Faville, Clarence R. Ward, and Arthur Brown of San Francisco. To their number was later added Bernard R. Maybeck of San Francisco, who designed the Palace of Fine Arts, while Edward H. Bennett, an associate of Burnham, of Chicago, made the final ground plan of the Exposition group. When San Francisco had been before Congress asking national endorsement for the Exposition here, the plans which were then presented, and o
|
Scaffolding in PBL
According to The Glossary of Education Reform, scaffolding refers to a variety of instructional techniques used to move students progressively toward stronger understanding and, ultimately, greater independence in the learning process. Scaffolding in PBL is important because it meets students' learning needs so that they can be successful. In PBL, scaffolding is a form of learning support. It is planned for the entire project in advance, which ensures that the project is aligned from beginning to end. When scaffolding, it is important to make sure that students are clear on what the learning outcomes and targets are. It is also important to connect scaffolding to a learner's background knowledge and to the prerequisite skills they will need.
In creating this project, we developed a student learning guide. This guide is a great tool for planning scaffolds, and it relies mostly on backward planning: you start with the learning outcomes you want learners to achieve and build the project from there, including the products, the knowledge and skills students will need to create them, checkpoints, formative assessments, and instructional strategies and resources. This gives learners the knowledge they need to be successful.
Scaffolding is important because it meets students where they are and supports their learning. It is designed to get students to the next level of their learning and to help them become independent learners and problem solvers. The way I will address this in my project is by providing clear direction, anticipating problems students may encounter, and letting them know what they must do to overcome them. I will clearly state the purpose of the project and help students understand why a classroom website is an important communication tool. I will make sure learners stay on task by checking in on them regularly, while allowing them to make their own decisions about which avenues to explore. I will also incorporate an assessment plan that includes both summative and formative assessments, such as a rubric and a checklist.
Blog. (n.d.). Retrieved June 04, 2016, from http://bie.org/blog/gold_standard_pbl_scaffold_student_learning
Scaffolding Definition. (2013). Retrieved June 05, 2016, from http://edglossary.org/scaffolding/
|
I wonder who it was defined man as a rational animal.
There is nothing absurd or impracticable in the idea of a league or alliance between independent nations for certain defined purposes precisely stated in a treaty regulating all the details of time, place, circumstance, and quantity; leaving nothing to future discretion; and depending for its execution on the good faith of the parties.
The faculties of the mind itself have never yet been distinguished and defined, with satisfactory precision, by all the efforts of the most acute and metaphysical philosophers.
Suddenly, the brown tiles shone, the mosses glittered, fantastic shadows danced upon the meadows and beneath the trees; fading colors revived; striking contrasts developed, the foliage of the trees and shrubs defined itself more clearly in the light.
They were passing through a tangled forest when the boy's sharp eyes discovered from the lower branches through which he was traveling an old but well-marked spoor--a spoor that set his heart to leaping--the spoor of man, of white men, for among the prints of naked feet were the well defined outlines of European made boots.
There was still a chink of light above the sill, a warm, mild glow behind the window; the roof of the cottage and some of the banks and hazels were defined in denser darkness against the sky; but all else was formless, breathless, and noiseless like the pit.
I heard once of an American who so defined faith, `that faculty which enables us to believe things which we know to be untrue.
The broken place in the iron plates was so perfectly defined that it could not have been more neatly done by a punch.
Montgomery and Moreau were too peculiar and individual to keep my general impressions of humanity well defined.
It would mean that I was positively defined, it would mean that there was something to say about me.
Though indeed some persons may have this further [1276a] doubt, whether a citizen can be a citizen when he is illegally made; as if an illegal citizen, and one who is no citizen at all, were in the same predicament: but since we see some persons govern unjustly, whom yet we admit to govern, though not justly, and the definition of a citizen is one who exercises certain offices, for such a one we have defined a citizen to be, it is evident, that a citizen illegally created yet continues to be a citizen, but whether justly or unjustly so belongs to the former inquiry.
Among the opinions and voices in this immense, restless, brilliant, and proud sphere, Prince Andrew noticed the following sharply defined subdivisions of tendencies and parties:
|
Smart snack swaps can give you better nutrition, increased energy and more protection against disease.
An easy way to protect your health, keep your weight under control and get more essential vitamins, minerals and nutrients into your diet is to make some simple and tasty snack swaps. You won’t have to sacrifice delicious, but you’ll be doing your body a favour and will find that you feel better and have more energy with the right healthy snack choices.
Protect against heart disease and some forms of cancer by eating watermelon instead of honeydew melon. The red colour in watermelon is from lycopene, a powerful antioxidant. Watermelon makes a sweet snack with no guilt — the high water content makes it very low in calories!
Go for black or dark grapes instead of the green grapes. Both are loaded with high levels of vitamin C but the dark varieties have a higher level of antioxidants and offer increased protection against heart disease.
Little sandwiches made with lean protein topped with veggies and lettuce make a filling snack, but substitute watercress for the iceberg lettuce and you’ll get 12 times the amount of vitamin C and three times more iron!
A variety of fresh veggies to dip in hummus or yoghurt is a smart and satisfying afternoon snack for both kids and adults. When choosing fresh peppers, opt for the red instead of green and add more vitamin C to your snack, along with 10 times the amount of beta carotene. Switching from regular to tenderstem broccoli adds the antioxidant known as glucosinolate, which has been shown in lab tests to help fight cancer.
Sweet potato chips
Potato snackers — and who doesn’t sometimes crave chips? — should ditch the white variety in favour of yummy sweet potatoes. Slice them thin and bake them with a light mist of healthy olive oil for a delicious chip that aids your skin with a bigger helping of beta carotene. If you snack on cereal, switch from corn flakes to a healthier bran cereal for 13 times the dietary fibre. It’s filling and a big boost to your digestive system!
Sometimes nothing but chocolate will do for a snack! When you have the urge to indulge, switch from milk chocolate to dark chocolate for a bigger dose of flavonoids. Choosing a dark chocolate with 70 percent cocoa solids saves 131 calories and still lets you enjoy a creamy taste with the benefit of heart-healthy antioxidants — and the essential fats help bust wrinkles!
Make your own energy mix as a substitute for packaged brands that can have high levels of sugar and preservatives. Apples, plums, pears and apricots mixed with 170 grams of hazelnuts taste better and give you an afternoon energy boost with protein and an appealing taste combo that’s salty and sweet.
|
Nawa Jigtar was working in the village of Ghat, in Nepal, when the sound of crashing sent him rushing out of his home. He emerged to see his herd of cattle being swept away by a wall of water.
Jigtar and his fellow villagers were able to scramble to safety. They were lucky: 'If it had come at night, none of us would have survived.'
Ghat was destroyed when a lake, high in the Himalayas, burst its banks. Swollen with glacier meltwaters, its walls of rock and ice had suddenly disintegrated. Several million cubic metres of water crashed down the mountain.
When Ghat was destroyed, in 1985, such incidents were rare - but not any more. Last week, scientists revealed that there has been a tenfold jump in such catastrophes in the past two decades, the result of global warming. Himalayan glacier lakes are filling up with more and more melted ice and 24 of them are now poised to burst their banks in Bhutan, with a similar number at risk in Nepal.
But that is just the beginning, a report in Nature said last week. Future disasters around the Himalayas will include 'floods, droughts, land erosion, biodiversity loss and changes in rainfall and the monsoon'.
The roof of the world is changing, as can be seen by Nepal's Khumbu glacier, where Hillary and Tenzing began their 1953 Everest expedition. It has retreated three miles since their ascent. Almost 95 per cent of Himalayan glaciers are also shrinking - and that kind of ice loss has profound implications, not just for Nepal and Bhutan, but for surrounding nations, including China, India and Pakistan.
Eventually, the Himalayan glaciers will shrink so much their meltwaters will dry up, say scientists. Catastrophes like Ghat will die out. At the same time, rivers fed by these melted glaciers - such as the Indus, Yellow River and Mekong - will turn to trickles. Drinking and irrigation water will disappear. Hundreds of millions of people will be affected.
'There is a short-term danger of too much water coming out the Himalayas and a greater long-term danger of there not being enough,' said Dr Phil Porter, of the University of Hertfordshire. 'Either way, it is easy to pinpoint the cause: global warming.'
According to Nature, temperatures in the region have increased by more than 1C recently and are set to rise by a further 1.2C by 2050, and by 3C by the end of the century. This heating has already caused 24 of Bhutan's glacial lakes to reach 'potentially dangerous' status, according to government officials. Nepal is similarly affected.
'A glacier lake catastrophe happened once in a decade 50 years ago,' said UK geologist John Reynolds, whose company advises Nepal. 'Five years ago, they were happening every three years. By 2010, a glacial lake catastrophe will happen every year.'
An example of the impact is provided by Luggye Tsho, in Bhutan, which burst its banks in 1994, sweeping 10 million cubic metres of water down the mountain. It struck Punakha, 50 miles away, killing 21 people.
Now a nearby lake, below the Thorthormi glacier, is in imminent danger of bursting. That could release 50 million cubic metres of water, a flood reaching to northern India 150 miles downstream.
'Mountains were once considered indomitable, unchanging and impregnable,' said Klaus Töpfer, of the United Nations Environment Programme. 'We are learning they are as vulnerable to environmental threats as oceans, grasslands and forests.'
Not only villages are under threat: Nepal has built an array of hydro-electric plants and is now selling electricity to India and other countries. But these could be destroyed in coming years, warned Reynolds. 'A similar lake burst near Machu Picchu in Peru recently destroyed an entire hydro-electric plant. The same thing is waiting to happen in Nepal.'
Even worse, when Nepal's glaciers melt, there could be no water to drive the plants. 'The region faces losing its most dependable source of fresh water,' said Mike Hambrey, of the University of Wales.
A Greenpeace report last month suggested that the region is already experiencing serious loss of vegetation. In the long term, starvation is a real threat.
'The man in the street in Britain still isn't sure about the dangers posed by global warming,' said Porter. 'But people living in the Himalayas know about it now. They are having to deal with its consequences every day.'
· Additional reporting: Amelia Gentleman and Felix Lowe
|
In 1845, the famous American writer Edgar Allan Poe published his masterpiece “The Raven”.
You can read the poem here. A brief extract can give you an idea of the style:
“Prophet!” said I, “thing of evil! — prophet still, if bird or devil! —
Whether Tempter sent, or whether tempest tossed thee here ashore,
Desolate yet all undaunted, on this desert land enchanted —
On this home by Horror haunted — tell me truly, I implore —
Is there — is there balm in Gilead? — tell me — tell me, I implore!”
Quoth the Raven “Nevermore.”
The poem's narrator is a lover lamenting the loss of his love, Lenore. It's one of the most famous poems ever written.
In 1846, Poe published a follow-up essay, “The Philosophy of Composition”. In this text, Poe tells us how he, in a completely cold and analytical fashion, created the poem.
We tend to imagine a poet as a sentimental man who gazes at the flowers in the garden while remembering tender or sad episodes of his love life. In “The Philosophy of Composition”, Poe presents himself much more like a marketer who is defining the attributes of his product to cause maximum effect on its target audience.
Let's look at how he defines his product (or poem) prior to writing it, thinking of the effect he wants to cause.
Target Market: Poe clearly defines who he is targeting:
Let us dismiss, as irrelevant to the poem, per se, the circumstance- or say the necessity- which, in the first place, gave rise to the intention of composing a poem that should suit at once the popular and the critical taste.
The length: the poem must be short enough to be read in one sitting and long enough to produce the desired effect:
for it is clear that the brevity must be in direct ratio of the intensity of the intended effect- this, with one proviso- that a certain degree of duration is absolutely requisite for the production of any effect at all
…Holding in view these considerations, as well as that degree of excitement which I deemed not above the popular, while not below the critical taste, I reached at once what I conceived the proper length for my intended poem- a length of about one hundred lines. It is, in fact, a hundred and eight.
The province or domain: after some careful thought, he decides that it will be about beauty.
Now I designate Beauty as the province of the poem, merely because it is an obvious rule of Art that effects should be made to spring from direct causes- that objects should be attained through means best adapted for their attainment- no one as yet having been weak enough to deny that the peculiar elevation alluded to is most readily attained in the poem.
The tone of the poem: it must be Melancholy…
Regarding, then, Beauty as my province, my next question referred to the tone of its highest manifestation- and all experience has shown that this tone is one of sadness. Beauty of whatever kind in its supreme development invariably excites the sensitive soul to tears. Melancholy is thus the most legitimate of all the poetical tones.
With a similar line of thought, Poe decides that he will build the poem around a single-word refrain: “Nevermore”. He then sets out to find “a pretext for the continuous use of the one word nevermore”.
In observing the difficulty which I had at once found in inventing a sufficiently plausible reason for its continuous repetition, I did not fail to perceive that this difficulty arose solely from the preassumption that the word was to be so continuously or monotonously spoken by a human being- I did not fail to perceive, in short, that the difficulty lay in the reconciliation of this monotony with the exercise of reason on the part of the creature repeating the word. Here, then, immediately arose the idea of a non-reasoning creature capable of speech, and very naturally, a parrot, in the first instance, suggested itself, but was superseded forthwith by a Raven as equally capable of speech, and infinitely more in keeping with the intended tone.
Well, here we are with “nevermore”, the Raven and a lot of the product attributes defined. Now, with a last effort, Poe finally selects the central theme of the poem:
I had now gone so far as the conception of a Raven, the bird of ill-omen, monotonously repeating the one word “Nevermore” at the conclusion of each stanza in a poem of melancholy tone, and in length about one hundred lines. Now, never losing sight of the object- supremeness or perfection at all points, I asked myself- “Of all melancholy topics what, according to the universal understanding of mankind, is the most melancholy?” Death, was the obvious reply. “And when,” I said, “is this most melancholy of topics most poetical?” From what I have already explained at some length the answer here also is obvious- “When it most closely allies itself to Beauty: the death then of a beautiful woman is unquestionably the most poetical topic in the world, and equally is it beyond doubt that the lips best suited for such topic are those of a bereaved lover.”
The essay continues with many similar ideas and deductions, but the approach is the same. I invite you to read it; it's interesting.
As you can read here, it is uncertain if Poe really followed the method he describes in “The Philosophy of Composition”. Anyway, as always with Poe, his idea makes one think.
When you decide to make a product, what is your style? Do you let your personal tastes and emotions take control? Do you “fall in love with the idea”? Or do you use reasoning to select a domain and coldly define a product in Poe's style?
Just a follow-up about those two iPhone game developers who generously provide their sales info.
MicroCars for iPhone downloads and sales are going down very fast. I would like to understand exactly how this happens. What factors (blogs, IM, news, anything) made the app reach high rankings in the store, and what made it go down afterwards?
iLightning is hardly getting a sale a day. I bet Jabavu just forgot about it.
Victor Hugo, French poet, playwright, novelist, essayist, visual artist, statesman, human rights activist and exponent of the Romantic movement in France.
Victor Hugo has two impressive quotes about the way ideas gain strength and things change.
In his novel “The Hunchback of Notre Dame”, he said “this will kill that“. He talks about the new thing, idea or trend killing the old one.
In another famous quote, Victor Hugo says “You can resist an invading army; you cannot resist an idea whose time has come“.
The killing is normally announced very early, and those destined to die tend to ignore it. Let's look at two recent examples:
In the first days of the internet (the 1990s), the death of newspapers was announced. Nineteen years later, most of them still exist, but the agony of the medium itself is near. Well-managed newspaper companies have made a successful evolution into internet-based media and other better-looking, more profitable businesses.
The recorded music industry, along its complete chain, is being defeated by digital music sharing. Again, some companies are reconverting, but it's not a simple task.
In both examples, the advent of a new technology changes the basis of an industry. In both cases, as soon as the specific use of the technology is announced, the death of the players as they are known is announced also.
Those two examples aren't isolated. Information technology, with its innumerable applications, can change the way most human activities develop. Its impact, enormous as it has been, is still in an embryonic state compared to what is coming.
We need no Jules Verne, no Merlin, no magically endowed fortune teller to tell us about some things that will happen soon (in the next 20 or 30 years). All these trends have started, but they will consolidate and replace the older ways:
The knowledge-related services market turns global.
IT fulfills its basic promise of automating repetitive tasks, and enormous numbers of people who labor on “human information processing” lose their jobs. Read: “clerks” of any type, plus lots of lawyers and accountants.
Powerful standards for modeling and storing information enable real data sharing between software artifacts. This reduces or annihilates the change-based cost model that rules IT today.
Pure software development gets cheaper, cheaper, cheaper. New tools make it easier to develop, and lots of developers (damned talented ones too) from poor countries enter the world developer pool.
Software artifacts become mass consumer goods, and every type of software product gets cheaper, cheaper, cheaper.
The costs of managing private IT infrastructure drive forward the cloud and related concepts. As all the information lives on the same media, data sharing as a general principle becomes easier.
That’s what we must call the immediate wave. Things that will happen fast and for sure. There’s a second wave also that is far more frightening, but also will happen. I’m a fan of Isaac Asimov. Asimov liked computers. His dreams were about omnipotent and omniscient machines that ruled the word. MULTIVAC as he called his big computer, was an enormous subterranean computer that occupied a complete city. Now it’s obvious that Isaac was wrong, but I still believe some of his dreams will come true. I believe that in a non distant future (less than 100 years, I bet you a beer!) we will have computers smarter than a human being. Computers that will be better computer designers than us. That’s from my viewpoint the real discontinuity: the creation of an artificial intelligence that can make better versions of itself being able to reach an “infinite intelligence”.
The necessary foundations are not here yet. We lack the paradigms and building blocks to build this first superhuman computer. But lots of brilliant brains are working on it, and now we have the power of super-communication.
Victor Hugo shows us that predicting the future has an easy part and a difficult one:
it’s easy to know the trends i.e. where are we moving to.
it’s difficult to know when the time of each trend or idea will come.
As you will remember from my previous post, MicroCars, a nice, well-written and complex game, was close to the sales of iLightning, a small gadget, two weeks after launch.
Well, one month and 10 days after its launch, MicroCars has sold 1035 units, more than iLightning performance in 6 months.
What happened? Simple. Martin launched a “Lite” version of the game, and that free version got damned popular in the store… As I'm writing, MicroCars Lite is in 16th place in the category Games->Racing (top free).
Let’s see a simple graph of Martin’s sales. Can you guess when the Lite version was launched?
As you might have guessed, the Lite version was launched near August 5th. The effect is impressive. Before that, sales averaged 12 a day, and the graph shows a downward trend. After the launch of the Lite version, the daily sales average jumped to 43 and, at least to the naked eye, the trend is upward.
We are talking about an exceptionally well-written game here. The App Store holds nearly 65,000 apps, most of which must be games. For a game to reach the first places in a category, it has to be good.
Anyway, we have a clear moral here: for an iPhone app, the Lite version is a must.
I’m just starting in the shrink wrap software world, so I read a lot of forums and blogs, specially BOS forum. I learn a lot of things, and have a great time
One thing that amazes me is the rigid position of the guys there regarding price. They all stick to the “price signals quality” way of thinking and seldom consider other points of view.
Many of them are also afraid of special sales; they think that after you run a sale it's difficult to return your price to its previous tag.
In a recent article on Coding Horror, Jeff Atwood presented the case of Valve, a game producer. They tested the reaction to their holiday sales, with impressive results:
The massive Steam holiday sale was also a big win for Valve and its partners. The following holiday sales data was released, showing the sales breakdown organized by price reduction:
10% sale = 35% increase in sales (real dollars, not units shipped)
25% sale = 245% increase in sales
50% sale = 320% increase in sales
75% sale = 1470% increase in sales
Not all markets are the same, and not all products are the same. Sales must be carefully designed, planned and communicated to avoid hurting the product's image. But with all that taken into account, we must remember that price is a powerful lever to play with!
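Since Valve reported those increases in gross dollars rather than units, the implied jump in unit volume is even larger than the headline numbers. A quick back-of-the-envelope sketch (plain Python, figures taken from the list above):

```python
# Valve's reported holiday-sale increases are in gross revenue dollars,
# so the implied unit-volume multiplier is the revenue multiplier
# divided by the discounted-price fraction.
sale_data = {  # discount fraction -> reported revenue increase
    0.10: 0.35,
    0.25: 2.45,
    0.50: 3.20,
    0.75: 14.70,
}

for discount, increase in sorted(sale_data.items()):
    revenue_mult = 1 + increase                 # vs. pre-sale baseline revenue
    units_mult = revenue_mult / (1 - discount)  # implied unit-volume multiplier
    print(f"{discount:.0%} off: revenue x{revenue_mult:.2f}, units x{units_mult:.1f}")
```

Notably, even the modest 10% discount more than pays for itself: revenue ends up 1.35x the baseline, so the extra volume more than offsets the price cut, and the 75% sale implies unit volume over 60x the baseline.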
As this July 15th article says, the App Store has served 1.5 billion downloads… and there are 65,000 applications.
“The App Store is like nothing the industry has ever seen before in both scale and quality,” said Jobs. Well, at least the scale part is already clear. Something is going on here: a lot of people in obscure corners of the world (greetings from Chile here) are getting their hands dirty with Objective-C and Cocoa to produce enormous quantities of applications.
Lots of the 65,000 apps are no doubt very small gadgets, mostly for a moment of fun. One of those is iLightning by Jabavu Adams:
The game is very good. Labour of Love. Even the sound is nice. Hundreds of hours must have gone into it.
Well, MicroCars' sales in its first two weeks were 228 units; in its first two weeks, iLightning sold 216.
iLightning started in Feb 09 while MicroCars started in July. In the meantime, the number of apps in the store grew by 3x. One must think that, as the number of apps grows, it becomes more difficult to get noticed.
Hmm, I need good stats here, like the change in the universe of iPhones + iPod Touches…
Anyway, I would like to know how the sales of MicroCars move. My gut feeling is that simple, one-moment gadget apps like iLightning will see a decrease in sales, while apps with more stable appeal will get reviews and word of mouth, and will see growing sales.
Since January I’ve thinking of my first iPhone application… It took a lot of time to sign in the iPhone developer program. As I am in Chile, and te Apple store doesn’t operate here I had to send a fax and the fax at Apple was busy for three or four months… (sad as it sounds).
Now I’m signed, I have my new iMac and my iPod Touch will get here on Saturday. I’m learning objective-C, Cocoa and getting to know the iPhone development environment. So far so good…
But I need an idea to start. Here’s the history of my ideas…
First one: a medication reminder. Sounds good, don't you think? Sorry. Two problems:
There are already apps for this:
http://dontforgetthepill.com/ (for birth control pill, nice pink interface and discrete warnings!)
The iPhone OS does’nt allow the setting of alarms of timers and your app cannot run in the background, so the user is forced to have your app open in order to get the alarms…
OK, it seemed a good idea. Then I thought about something a little stupid, but well, maybe that's the way to go… My new idea: an iPhone insult generator. Original, isn't it? Well, it's not; there are at least 10 such apps. There even exists a “Shakespearean Insult Generator”. Greetings, thou bootless…
Third idea: a wheel that makes decisions for you!! Cool, don't you think? You write as many options as you want, spin the wheel, and chance decides.
Why program it? It's easier to buy it on the App Store…
A lot of product ideas come to my mind. That's a blessing/curse I share with a lot of people in the uISV community. A good thing is to get your ideas tested against reality as soon as possible, and in any case before you invest any considerable amount of time and work in them.
Though obviously it’s an old idea, I like the term Andy Brice coined “Lazy Instantiation Marketing“. The idea is to create a minimal website with some registration features and see how it goes with the people…
This weekend I was haunted by the idea of making a website where people can register, pay and get SMS messages reminding them to take their medications. Following the “Lazy…” idea, I registered a domain and created a simple website. My idea was to use AdWords to get some people to the site and test reactions, the percentage who register, etc.
The “Lazy…” concept worked even before I started. When I got to AdWords and tried to register my ad, I learnt that you must have an “Online Pharmacy ID” to create any campaign with “pills” or “medication” in it. So the “Lazy…” concept saved me a lot of work. Now I know that the idea has an important drawback, and that I will have to think of a different way to promote it if I finally decide to go on. And I hadn't yet written a single line of code!
|
More than half a century ago, on July 10, 1959, American glaciologist and explorer Paul T. Walker was working in a remote region of the Canadian Arctic, the Los Angeles Times reports. In a quirky stroke of genius, Walker left a handwritten note to any scientists who might come behind him, and he stuck the message in a bottle under a pile of rocks.
“To Whom it May Concern: This and a similar cairn 21.3 feet to the west were set on July 10, 1959,” the note states. “The distance from this cairn to the glacier edge about 4 ft. from the rock floor is 168.3 feet.”
Walker hoped that anyone who found the note might take new measurements and send them to his lab at Ohio State University, the Times reports. Sadly, Walker suffered a stroke mere weeks after he left the note and died a few months later. But 54 years later, his scientific mission lives on thanks to researchers who uncovered the message in a bottle.
Dr. Warwick F. Vincent, director of the Center for Northern Studies at Laval University in Quebec City, revealed the find earlier in December and said reading the famous names of Walker and his colleague Albert Crary gave him goosebumps.
Vincent, a biologist, and his colleague Denis Sarrazin found the note over the summer in a very remote area near the edge of a glacier, he told GrindTV Outdoor.
“It’s a story about climate change, but it is also a story about the incredibly brave and strong men who worked in this extreme high Arctic environment in the 1950s—back before GPS and sat phone technology,” Vincent told the outlet. “This is the most remote part of North America, and the coldest coastal zone (average temperature -18C). This also makes the evidence of substantial glacial retreat of great interest.”
The pair carried out Walker's wish, measuring the distance to the glacier in question, just as Walker had done decades earlier. Vincent and Sarrazin's measurement revealed that the glacier had retreated 233 feet since 1959.
While the effect of global warming on glaciers is not a new concept, Vincent told The Huffington Post the discovery is still significant.
“The substantial retreat of the glacier, based on our measurement on 18 July 2013 relative to that by Paul Walker on 10 July 1959, is not especially surprising given that glacier retreat as a consequence of global warming has been well documented at many places around the world,” Vincent explained via email. “But northern Ellesmere Island has one of the world’s coldest coastal climates, with average air temperatures that are similar to coastal Antarctica (where I have worked previously), and glacial melting might be less expected in such a place of extreme cold.”
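For a sense of pace, the two dated measurements quoted above imply a steady average rate of retreat. A quick sketch in Python (the dates and the 233-foot figure are taken from the article; the per-year rate is my own back-of-the-envelope division):

```python
from datetime import date

# Walker's measurement (10 July 1959) vs. Vincent & Sarrazin's (18 July 2013):
# the glacier edge moved back 233 feet over the intervening ~54 years.
retreat_ft = 233
years = (date(2013, 7, 18) - date(1959, 7, 10)).days / 365.25
rate_ft_per_year = retreat_ft / years
print(f"about {rate_ft_per_year:.1f} ft of retreat per year over {years:.0f} years")
```

That works out to roughly four feet of retreat per year, on average, in one of the coldest coastal climates on Earth.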
Given the fact that there is still such noticeable loss in glacial ice, this special message should be heeded by scientists around the world, Vincent added.
“Paul Walker’s message from the past is a wake-up call to how fast our global climate is already changing, and it signals much larger changes in the future that may affect us all,” he said.
Full Story Via Weird News – The Huffington Post
|
A typical home water heater can provide between 30 and 60 gallons of clean drinking water during a disaster. Hurricanes, floods, earthquakes, and other power outages may deprive you of many things, but clean drinking water should not be one of them. To reclaim some clean drinking water from your water heater, and to tap your inner MacGyver, this is what you'll need to do.
Getting Drinking Water from Your Water Heater
1. Turn off the electricity or gas to the water heater. Turn off the circuit breaker for electric water heaters or close the gas valve for natural gas and propane types. If the power is still on when the tank is empty, your tank will almost certainly sustain significant damage. Most electric water heaters in residential applications are 208/240 volts, and supplied by a double-pole circuit breaker or two fuses rated at 30 amps.
- Some gas valves have a thermostatic control knob facing forward. The "Off - Pilot - On" gas supply knob is located on the top, between the red interlock button and the black "push-button" ignitor. Simply rotate the top knob from the "On" to the "Off" position to stop the flow of gas to the burner.
- Some electric-reliant heaters have double-pole 30 amp circuit breakers. Switch the circuit breaker from the "On" position to the "Off." Once off, there is no danger of damaging the heating elements.
2. Preserve the cleanliness of the water in the tank by closing the supply valve to the tank. When water service is restored, the water department will be pumping water that could be contaminated. This will be fine to use for flushing toilets, but not for drinking or cooking.
- Determine whether you're dealing with a ball valve or a gate valve. Unlike a traditional gate valve's handle, which needs to be turned completely several times in order to shut off, a ball valve handle is rotated just a 1/4 turn between full on and off positions.
- If older, traditional gate valves were installed instead, bear in mind that the color of the handle does not guarantee an association with the temperature of the water in the pipe.
3. Find the valve at the bottom of the tank for draining. This is where your clean drinking water will come from. Many water heater valves have a connector for hooking up a garden hose to the drain valve. A short 3 foot (0.9 m) length of garden hose will make the collection of the water easier. A washing machine's supply hose is the perfect length and is available in many homes. Connect the hose and open the valve briefly to flush any debris that may have collected in the valve. Make sure the drain, hose, and container are clean before using them.
- Threads are usually provided to connect an ordinary garden hose (or washer supply hose). Some gate valves do not have a traditional handle, but rather a slot at the end of the stem where a handle would normally attach. The slot allows for operation with a screwdriver, or coin. Work this valve gently, as these valves are seldom used more than once or twice per year under normal service conditions, and could be damaged if forced.
4. Turn on the hot water from any tap in the house. In order for water to be drained from the tank, you must allow air to get into it. This is easy to do by opening any hot water tap in the building, such as the kitchen or bathroom sink. When either faucet is open, a sucking sound may be heard whenever water is drawn from the water heater's drain valve; this is normal.
5. Remove any sediment that has collected at the bottom of the water heater. Water heaters are notorious for trapping sediments. The heavier-than-water sediment sinks and collects at the bottom of the tank because hot water is drawn from the top of the tank, rather than the bottom. If you have sediment in the drinking water, let it stand for a period of time to allow it to settle to the bottom of the container.
- Typical mineral sediment that has settled in the hot water is usually harmless, but if your heater has an aluminum anode, there may be a lot of jelly-like aluminum corrosion by-product on the tank bottom.
- Many people mistakenly believe that the tank is made of glass (or another inert substance). It is not. The inside of the tank will likely be lined with glass to prevent corrosion, since corrosion is the leading cause of water heater failure. There is no danger in cooking with or consuming water that has been stored in a water heater.
Other Practical Considerations
1. Although water from a water heater is considered safe to drink, consider purifying or filtering it before drinking. Although it's probably fine to drink water from the heater during an emergency, it's best to be on the safe side. You can purify water by boiling it or using iodine or bleach in very small quantities. You can filter water in an emergency by layering filtering agents on top of each other.
2. Seriously consider replacing the original valve on the water heater with a ball-valve drain assembly. Factory valves do not have a straight path and have small orifices. In hard-water areas, those can easily clog with sediment buildup and then no water will flow from the tank.
3. In an emergency, consider other options for potable water. If you can't access your water heater in an emergency, for whatever reason, don't panic. You should have plenty of other options. Consider these sources of potable water:
- Possible indoor sources of water:
- Liquid from canned fruit and vegetables
- Water from the toilet tank (not the toilet bowl), unless it has been chemically treated with toilet cleaners
- Water from melted ice cubes
- Possible outdoor sources of water:
- Water from a rainwater collection system.
- Water from rivers, streams, springs, and other moving bodies of water
- Water from ponds, dams, and lakes
How do I get potable water? The most important part is to boil the water or use very small amounts of bleach to kill any microbes. If you also want to filter out sediment, try this trick: cut a plastic bottle in two. Turn the top of the bottle upside down, and place a cotton ball in the opening. Pour water into the top of the bottle. The cotton ball will filter the water, giving you an emergency water supply. For more details, see this article on how to purify drinking water.
- It is a good idea to flush some water from the bottom of the tank once or twice a year. Sediment can collect on the bottom of the tank. Draining some water under pressure will clean out the sediment.
- A "tankless" water heater will not provide this source of drinking water. Tankless systems provide heated water from a coiled pipe located in a furnace. Water that is passed through the coiled pipe is rapidly heated and available for immediate use. There is no storage of the heated water - hence the term "tankless".
- Before disaster hits, mark which valve is for the water supply. Run some hot water from any sink. Go back to the hot water tank and feel the two pipes attached to it. The supply line will be the colder one. Somehow mark the valve as "supply". This will be the one to close in an emergency so that contaminated water will not go into the tank as you drain the clean drinking water that is stored in it.
- Always have at least several gallons of drinking water on hand. Increase this amount in anticipation of severe weather. Replace water that has been stored for more than a year or so with clean, fresh water.
- Allow the tank to fill before restoring power to the water heater. Open the supply valve and wait for the water to run out of the open hot water faucet.
- Turn off the power supply to the tank first. Even if there is a power failure you must unplug, turn off the circuit breaker, or close the gas valve first.
- Be sure the water inside the water heater is not soft water. It can contain excess sodium (the harder your water supply is, the more sodium is used to soften it), which is not recommended for those with certain health concerns (such as high blood pressure, cardiovascular or kidney disease). If you don't have a water softener...you're good to use the water inside the heater like normal!
- Be sure that the water has had time to cool before opening any valves on the water heater!
- If you live in an apartment contact the management first.
Things You'll Need
- Flashlight to find the circuit breaker, plug, and valves if it is dark
- A short water hose to drain the water from the tank. The supply hose for a washing machine is perfect.
- A screwdriver or coin, to operate the drainage valve
- A shallow pan that fits under the valve to collect the water. If you have a short hose, you can use cooking pots, a clean bucket, empty plastic gallon jugs, or water bottles.
|
Changing face of Britain
FOR many years, Caribbean immigrants in Britain packed grounds like Lord's and Old Trafford to support the West Indies cricket team. They stood in terraces and cheered star footballers such as Cyrille Regis.
But being black in the United Kingdom has changed considerably in the last 25 years. Gemma Feare is part of that transformation.
The 21-year-old Feare is the reigning Miss Jamaica UK. Unlike West Indians who moved to the UK in droves during the 1950s, she was raised in a much more tolerant society, something she recently spoke to the Jamaica Observer about.
"Things are definitely easier for my generation, when my father moved to England there was a lot of racism against people of colour. There's still underlying tones (of prejudice) but there are so many positives now, like institutions prepared to help persons of different ethnic backgrounds in business or entertainment," she said.
A flood of immigrants from Asia, Africa and the Caribbean over the past four decades has made the UK the world's largest melting pot. The capital, London, is the hub for much of that diversity.
Feare was born in the East Midlands, an area in England known for its large West Indian community. Her father is from Westmoreland while her mother was born in London but also has roots in that west Jamaica parish.
The slender Feare has visited Jamaica regularly since infancy through trips initiated by her parents to help build awareness of her Caribbean heritage.
She credits those 'fact-finding' visits for helping her develop an appreciation of Jamaican culture, a feeling that is not particularly widespread among Britons with a Caribbean background.
"Some of my friends are interested in their Caribbean roots but not a lot, but that also goes for many people in the UK with an immigrant background," Feare explained. "But I'm absolutely Caribbean, I think it's important everybody has knowledge of their heritage."
Starting in the late 1940s, there was an influx of Caribbean nationals to the UK. Most went to shore up that country's economy which took a battering during World War II (1939-45).
Gemma Feare is in Jamaica for Reggae Month activities.
— Howard Campbell
|
Facts, Identification & Control
Termites are divided into three distinct groups: subterranean, drywood, and dampwood. Since the worker termites in these groups look more or less the same, the appearance of the reproductive caste (alates, or swarmers) and the soldiers is important for identification. Drywood swarmers shed their wings very quickly after swarming, so almost all dead drywood swarmers lack attached wings. This is a good way to distinguish drywood termite swarms from subterranean termite swarms, since dead subterranean swarmers are found both with and without attached wings. Swarmers can be up to 12 mm long.
Nymphs pass through four to seven instars before reaching adulthood; sexual forms eventually swarm to form new colonies.
It is estimated that termites cause over a billion dollars in damage to United States homes each year. Unlike fires, hurricanes and tornadoes, termite damage is seldom covered by homeowner insurance policies. The dangers of termite infestation are also underpublicized, leading most homeowners to believe that no preventive measures are necessary.
However, annual inspections are an effective means of preventing major damage to your home. There are two major families of termite present in North America: subterranean and drywood termites. Both groups feed on cellulose material, including books, dried plants and furniture, as well as structural wood. While subterranean termites burrow underground, drywood termites do not need soil. After a colony of drywood termites has gained entrance to a home, they are capable of dispersing widely throughout many rooms and floors.
Although drywood termites are far less common than subterranean termites and are found primarily in coastal, southern states and the Southwestern states, drywood termite damage is substantial. Drywood termite infestations are identifiable by piles of fecal pellets. These fecal pellets are often first noticed in places like windowsills. If you find piles of tiny pellets in your home, it could be a sign of a drywood termite infestation. A trained pest control professional can provide a thorough inspection.
Signs of Drywood Termite Infestation
When a drywood termite colony is mature, swarms of winged male and female reproductive insects are produced. These reproductive termites fly out of their colony to create new colonies after mating. Warm temperatures and heavy rains instigate swarms.
Drywood termites extract as much water as possible from their feces to conserve it. The result is very distinct fecal pellets called frass. The pellets are hexagonal and of uniform size, about 1 mm long, and the termites kick them out of their tunnels. Mounds of these pellets indicate activity. It is important to note, however, that pellets from a dead colony can remain almost indefinitely and may mislead a homeowner into thinking there is current activity. Contact a termite control professional to confirm current activity.
|
Secondhand Smoke in Pregnancy Seems to Harm Baby, Too
WEDNESDAY, Sept. 19 -- Expectant mothers are often told they shouldn't smoke, but a new study reports that even secondhand smoke has a negative effect on the brain development of newborns.
Pregnant women who smoke or inhale secondhand smoke put their children at risk for learning difficulties, attention-deficit/hyperactivity disorder and obesity, the researchers from Spain said. The investigators also found that babies who have been exposed to nicotine have impaired physiological, sensory, motor and attention responses in the first two to three days of life.
For the study, scientists from the Behaviour Evaluation and Measurement Research Center of the Rovira i Virgili University examined 282 healthy babies 48 to 72 hours after they were born to assess their behavior and responses.
Of the mothers involved in the study, 22 percent smoked during their pregnancy and nearly 6 percent were exposed to secondhand smoke. Among those who smoked, 12.4 percent had no more than five cigarettes per day, 6.7 percent had between six and 10 cigarettes daily and 2.8 percent smoked 10 to 15 cigarettes, the investigators found.
The study findings indicated that the babies born to women who smoked or were exposed to secondhand smoke were less able to block stimuli that could alter their central nervous system.
The research also revealed that babies of women who inhaled secondhand smoke had poor motor development. In addition, the newborns of mothers who smoked during pregnancy were less able to regulate their physiological, sensory, motor and attention responses.
"Newborns who have had intrauterine exposure to nicotine, whether in an active or passive way, show signs of being more affected in terms of their neurobehavioral development," stated the study's lead author, Josefa Canals Sans, in a Spanish Foundation for Science and Technology news release.
The study authors specifically advised that women should be warned about the effects of secondhand smoke on fetal and infant development.
The study was published recently in Early Human Development.
While the study found an association between maternal smoke exposure and infant brain development, it did not prove a cause-and-effect relationship.
The U.S. Environmental Protection Agency has more about the health effects of secondhand smoke.
Posted: September 2012
|
Just Transition: Exactly What’s in It for Workers
In recent years the goal of Just Transition has received a lot of attention in Canada and internationally. Yet there are surprisingly few ground-level examples of it being done well. Often when industries are closed down, the consequences for workers and their communities are an after-thought.
Coal is a major energy source used around the world. It is also recognized as a main source of greenhouse gas, and action to shift away from coal-fired electricity production and towards cleaner energy is beginning to happen globally, but is not happening fast enough. Many experts believe that coal generation needs to be phased out as soon as technically possible, regardless of coal reserves, and that any use of fossil fuels including natural gas (aka methane) to generate electricity will have to end globally by 2040.
Just Transition: Exactly What’s in It for Workers involves four case studies on the phase-out of coal in Canada and Australia, each with a short narrative of exactly what was offered to workers when the transition occurred. LEC has developed the Seven R’s to evaluate each case and will offer a brief comparative analysis of what was achieved by the social dialogue that happened during the transition process.
Watch this space in early 2019 for the full case studies.
The Seven R’s:
- Re-Deployment of Workers within same employers’ other operations
- Re-Employment in local area jobs (often on employees’ own initiative)
- Re-Training for a new profession (sometimes within same company)
- Re Location allowances (including real estate price adjustments)
- Retirement (early retirement bridging)
- Redundancy payments
- Re-Investment in affected communities to make up for loss of tax base
This project is part of Adapting Canadian Work and Workplaces to Respond to Climate Change, or ACW, a 7-year research program based at York University and led by Dr. Carla Lipsig-Mummé. The research program addresses how Canadian work and workplaces can contribute to slowing global warming. http://adaptingcanadianwork.ca/
ACW is funded through the Social Sciences and Humanities Research Council (SSHRC), the Government of Canada’s lead funding agency for research in the humanities and social sciences, mandated to build knowledge of sustainability as it involves Canadians in an evolving local, regional and world context.
For more information please contact Rick Ciccarelli: email@example.com
|
Why is machine learning important?
All these things mean that it is possible to quickly and automatically produce models that can analyze larger, more complex data and deliver faster, more accurate results - even on a very large scale. And by building precise models, an organization has a better chance of identifying profitable opportunities - or avoiding unknown risks.
Who uses it?
Most industries that work with large amounts of data have recognized the value of machine learning technology. By obtaining insights from this data - often in real time - organizations can work more efficiently or gain an advantage over their competitors.
Algorithms are the engines that drive machine learning. In general, two main types of machine learning algorithms are currently used: supervised learning and unsupervised learning. The difference between them is defined by how each one learns about the data to make predictions.
Supervised Machine Learning:
Supervised machine learning algorithms are the most used. With this model, a data scientist acts as a guide and teaches the algorithm the conclusions to be made. Like a child who learns to identify fruits by memorizing them with an image book, in supervised learning, the algorithm is trained by a set of data that is already labeled and has a predefined result. Examples of supervised machine learning include algorithms such as linear and logistic regression, multi-class classification and support vector machines.
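The labeled-data idea can be made concrete with a toy sketch in plain Python. The fruit weights and labels below are invented, and the nearest-centroid "model" is a deliberately simple stand-in for the richer algorithms named above (logistic regression, support vector machines):

```python
# Toy supervised-learning sketch: a nearest-centroid classifier
# "taught" with labeled examples, echoing the image-book analogy.
# The fruit weights (grams) and labels are invented for illustration.

def train(examples):
    # examples: (feature_value, label) pairs with known answers.
    sums, counts = {}, {}
    for x, label in examples:
        sums[label] = sums.get(label, 0.0) + x
        counts[label] = counts.get(label, 0) + 1
    # The "model" is just the mean feature value per label.
    return {label: sums[label] / counts[label] for label in sums}

def predict(model, x):
    # Pick the label whose learned centroid is closest to the input.
    return min(model, key=lambda label: abs(x - model[label]))

labeled = [(110, "apple"), (120, "apple"), (6, "grape"), (8, "grape")]
model = train(labeled)        # model: {"apple": 115.0, "grape": 7.0}
print(predict(model, 100))    # a 100 g fruit is classed as "apple"
```

The predefined labels are what make this supervised: the algorithm never has to discover the categories, only the boundary between them.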
Unsupervised Machine Learning:
Unsupervised machine learning uses a more independent approach, in which a computer learns to identify complex processes and patterns without a human being providing close and constant guidance. Unsupervised machine learning involves data-based training that has no labels or a specific defined outcome.
To continue with the analogy of child education, unsupervised machine learning is similar to a child who learns to identify fruits by observing colors and patterns, instead of memorizing names with the help of a teacher. The child would look for similarities between the images and separate them into groups, assigning each group its own new label. Examples of unsupervised machine learning algorithms include k-means clustering, principal and independent component analysis, and association rules.
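The grouping-without-labels idea can be sketched with a minimal pure-Python 1-D k-means; the data points and cluster count here are made up for illustration:

```python
# Minimal 1-D k-means (pure Python): the algorithm groups unlabeled
# points with no predefined outcome, which is the essence of
# unsupervised learning. Data and cluster count are invented.

def kmeans_1d(points, k, iters=20):
    # Start centroids at the k smallest distinct values (naive init).
    centroids = sorted(set(points))[:k]
    for _ in range(iters):
        # Assignment step: each point joins its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centroids[i]))
            clusters[nearest].append(p)
        # Update step: move each centroid to its cluster's mean.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

data = [1.0, 1.2, 0.8, 9.8, 10.1, 10.4]     # two obvious groups
centroids, clusters = kmeans_1d(data, k=2)  # centroids settle near 1.0 and 10.1
```

No one tells the algorithm what the two groups mean; it discovers them from the structure of the data alone, just as the child sorts fruit by appearance.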
Choice Of An Approach:
What is the best approach for your needs? The choice of a supervised or unsupervised machine learning algorithm generally depends on factors related to the structure and volume of your data, and the use case to which you want to apply it. Machine learning has flourished in a wide range of industries, offering assistance in a variety of business objectives and use cases that include:
- Customer Lifetime Value
- Anomaly Detection
- Dynamic Pricing
- Predictive Maintenance
- Image Classification
- Recommendation Engines
How Does It Work?
To get the most value from machine learning, you have to know how to match the best algorithms with the right tools and processes. SAS combines a rich and refined heritage in statistics and data mining with new architectural advances to ensure that your models are processed as quickly as possible - even in large business environments.
|
Standard Costing and Variance Analysis Case Study:
- Case A: Effect of assumed standard levels
- Case B: Factory overhead variance analysis
Effect of Assumed Standard Levels:
Harden Company has experienced increased production costs. The primary area of concern identified by management is direct labor. The company is considering adopting a standard cost system to help control labor and other costs. Useful historical data are not available because detailed production records have not been maintained.
To establish labor standards, Harden Company has retained an engineering consulting firm. After a complete study of the work process, the consultants recommended a labor standard of one unit of production every 30 minutes, or 16 units per day for each worker. The consultants further advised that Harden’s wage rates were below the prevailing rate of $ per hour.
Harden’s production vice-president thought that this labor standard was too tight, and from experience with the labor force, believed that a labor standard of 40 minutes per unit or 12 units per day for each worker would be more reasonable.
The president of Harden Company believed the standard should be set at a high level to motivate the workers and to provide adequate information for control and reasonable cost comparison. After much discussion, management decided to use a dual standard. The labor standard of one unit every 30 minutes, recommended by the consulting firm, would be employed in the plant as a motivation device, while a cost standard of 40 minutes per unit would be used in reporting. Management also concluded that the workers would not be informed of the cost standard used for reporting purposes. The production vice-president conducted several sessions prior to implementation in the plant, informing the workers of the new standard cost system and answering questions. The new standards were not related to incentive pay but were introduced when wages were increased to $7 per hour.
The standard cost system was implemented on January 1, 19–. At the end of six months of operation, these statistics on labor performance were presented to executive management:
|Direct labor hours
|Variance based on labor standard (one unit each 30 minutes)
|Variance based on cost standard (one unit each 40 minutes)
*U = Unfavorable; F = Favorable
Materials quality, labor mix, and plant facilities and conditions have not changed to any great extent during the six month period.
- A discussion of the impact of different types of standards on motivations, and specifically the likely effect on motivation of adopting the labor standard recommended for Harden Company by the engineering firm.
- An evaluation of Harden Company’s decision to employ dual standards in its standard cost system.
- Standards are often classified into three types – theoretical (tight), normal (reasonable), or expected actual (loose). Standards which are too loose or too tight will generally have a negative impact on workers’ motivation. If too loose, workers will tend to set their goals at this low rate, reducing productivity below what is obtainable; if too tight, workers will realize that it is impossible to attain the standard, become frustrated, and will not attempt to meet it. An attainable or reasonable standard which can be achieved under normal working conditions is likely to contribute to the workers’ motivation to achieve the designated level of activity.
If executive management imposes standards, workers and plant management will tend to react negatively because they feel threatened. If workers and plant management participate in setting the standard, they can more readily identify with it and it could become one of their personal goals.
In Harden’s case, it appears that the standard was imposed on the workers by management. In addition, management used an ideal standard to measure performance. Both of these actions appear to have had a negative impact on output over the first six months.
- Harden made a poor decision to use dual standards. If the workers learn of the dual standards, the company’s entire measurement system may become suspect and credibility will be lost. Company morale could suffer because the workers would not know for sure how the company evaluates their performance. As a result, disregard for the present and any future cost control system may develop.
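To see how the dual standard produces two contradictory readings of the same performance, consider a hedged sketch in Python. Only the 30- and 40-minute standards and the $7 wage rate come from the case; the six-month actual hours and units are invented for illustration:

```python
# Hypothetical illustration of Harden's dual-standard gap. Only the
# two standards (30 vs 40 minutes per unit) and the $7 wage rate come
# from the case; the actual hours and units below are invented.

WAGE_RATE = 7.00      # dollars per hour (standard rate)
LABOR_STD = 30 / 60   # hours per unit, engineering (plant) standard
COST_STD = 40 / 60    # hours per unit, reporting (cost) standard

def efficiency_variance(actual_hours, units, std_hours_per_unit, rate):
    # (actual hours - standard hours allowed) x standard rate;
    # positive = unfavorable, negative = favorable.
    return (actual_hours - units * std_hours_per_unit) * rate

actual_hours, units = 1000, 1700   # assumed figures for illustration
tight = efficiency_variance(actual_hours, units, LABOR_STD, WAGE_RATE)
loose = efficiency_variance(actual_hours, units, COST_STD, WAGE_RATE)
# The same output is $1,050 unfavorable against the tight plant
# standard yet favorable against the loose reporting standard.
```

The sign flip is the whole problem: workers judged against the tight standard see persistent unfavorable variances, while management's reports show favorable ones, which is exactly the credibility risk noted above.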
Factory Overhead Variance Analysis:
Strayer Company uses a standard cost system and budgets the following sales and costs for 19–
|Total production cost at standard
The 19– budgeted sales level was the normal capacity level used in calculating the factory overhead predetermined standard cost rate per direct labor hour.
At the end of 19–, Strayer Company reported production and sales of 19,200 units. Total factory overhead incurred was exactly equal to budgeted factory overhead for the year and there was under-applied total factory overhead of $2,000 at December 31. Factory overhead is applied to the work in process inventory on the basis of standard direct labor hours allowed for units produced. Although there was a favorable labor efficiency variance, there was neither a labor rate variance nor materials variances for the year.
Require: An explanation of the under-applied factory overhead of $2,000, being as specific as the data permit and indicating the overhead variances affected. Strayer uses a three variance method to analyze the total factory overhead.
Under-applied factory overhead will arise when actual factory overhead incurred is larger than the standard amount of factory overhead applied to work in process. The standard amount of factory overhead applied to work in process is based on actual rather than on budgeted units of output.
Based on the information given, the sum of the factory overhead spending, efficiency, and idle capacity variances resulted in an unfavorable total factory overhead variance of $2,000.
The factory overhead efficiency variance must be favorable because it is computed on the same basis as the direct labor efficiency variance which was given as favorable.
Strayer would have an unfavorable idle capacity variance because the actual activity level for the year was less than the capacity level used in calculating the standard cost rate for factory overhead.
As to the factory overhead spending variance, the balance would be unfavorable because actual costs would have had to exceed the budgeted cost of the actual units produced since the budget allowance for production of 19,200 units must be less than for 20,000 units and the actual costs were exactly equal to the budget allowance for 20,000 units. The magnitude of the spending variance is indeterminate from the information given.
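The three-variance structure discussed above can be sketched as follows. The dollar figures are invented, not Strayer's data, and the exact definitions of the spending, efficiency, and idle capacity pieces vary by textbook, so this is one common formulation; the identity that the three pieces sum to the total under-applied overhead holds regardless:

```python
# One common formulation of the three-variance overhead analysis.
# All dollar figures are invented; the checked identity
# (spending + efficiency + idle capacity = total under-applied)
# holds however the individual pieces are labeled.

def three_way(actual_oh, budget_at_actual_hours,
              budget_at_std_hours, applied_oh):
    spending = actual_oh - budget_at_actual_hours        # price effect
    efficiency = budget_at_actual_hours - budget_at_std_hours
    idle_capacity = budget_at_std_hours - applied_oh     # volume effect
    total = actual_oh - applied_oh                       # under-applied
    assert spending + efficiency + idle_capacity == total
    return spending, efficiency, idle_capacity, total

# Invented numbers mirroring the case's pattern: $2,000 under-applied
# overall, with a favorable (negative) efficiency component.
spending, efficiency, idle, total = three_way(
    actual_oh=42000, budget_at_actual_hours=41200,
    budget_at_std_hours=41500, applied_oh=40000)
```

With these assumed inputs the unfavorable spending and idle capacity variances outweigh the favorable efficiency variance, reproducing the qualitative pattern the case describes for Strayer.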
You may also be interested in other articles from “standard costing and variance analysis” chapter
- Standard Costs and Management By Exception
- Setting Standard Costs – Ideal Versus Practical Standards
- Direct Materials Price and Quantity Standards
- Direct Materials Price Variance
- Direct Materials Quantity Variance
- Direct Labor Rate and Efficiency Standards
- Direct Labor Rate/Price Variance
- Direct Labor Efficiency | Usage | Quantity Variance
- Manufacturing Overhead Standards
- Overall or net factory overhead variance.
- Controllable variance
- Volume variance
- Spending variance
- Idle capacity variance
- Efficiency variance
- Spending variance
- Variable efficiency variance
- Fixed efficiency variance
- Idle capacity variance
- Mix and Yield Variance – Definition and Explanation
- Materials Mix and Yield Variance
- Labor Yield Variance
- Factory Overhead Yield variance
- Variance Analysis and Management By Exception
- Managerial importance and usefulness of variance analysis
- Advantages and Disadvantages of Standard Costing System
- Standard Costing Discussion Questions and Answers
- Standard Costing and Variance Analysis Formulas
- Standard Costing and Variance Analysis Problems and Solution
- Standard Costing and Variance Analysis Case Study
Other Related Accounting Articles:
|
By Jessica Warner on Wed, 12/19/2018 - 11:09
At the Smithsonian Institution’s (SI) Digitization Program Office (DPO), our mission is to partner with others to increase the quantity, quality, and impact of digitized collections. This blog post is the first in a series highlighting the outstanding work of our partners from around the Smithsonian who help us do just that.
Since 2015 the DPO’s Mass Digitization Program has teamed up with staff in the Smithsonian’s National Museum of Natural History’s (NMNH) Botany Department to image over 2 million (out of a total 5 million) botanical specimens held in the United States National Herbarium so they can be made available to researchers around the world. And this is where our colleagues in the Smithsonian’s Office of Research Computing (RC) and the Data Science Lab come in!
RC works with Smithsonian researchers as a collaborator, partner and advocate. One of the primary objectives of RC is to use Data Science to help researchers develop innovative approaches to move their research forward, and make it more widely accessible to others. This was evident in a recent collaboration, where Adam Metallo of the DPO worked closely with colleagues in the Data Science Lab (Rebecca Dikow and Paul Frandsen) and NMNH (Laurence Dorr, Sylvia Orli, and Eric Schuettpelz) to explore how applying deep learning principles to such a massive quantity of digital assets could reveal hidden information and invite new questions to consider.
The first deep learning experiment included an investigation of whether computer vision could be used to detect mercury staining on the botanical specimens. The team trained a convolutional neural network that was 91% accurate in detecting mercury staining (Schuettpelz et al., 2017). The team’s current work, led by Alex White, a post-doctoral fellow, is looking at extending the initial model to distinguish among genera of ferns and fern-allies.
By having access to a vast data set of digitized botanical specimens, the RC’s Data Science team has been able to use deep learning technology to uncover significant information about them in a whole new way.
Smithsonian’s Office of the Chief Information Officer | Research Computing group consists of:
Beth Stern – Director
Data Science Lab:
Dr. Rebecca Dikow - Research Data Scientist
Mike Trizna – Data Scientist
Mirian Tsuchiya – Post-doctoral Fellow (Genomics)
Alex White - Post-doctoral Fellow (Machine Learning)
Keri Thompson – Research Data Management
Dan Davis – Technical Manager
Adam Soroka – Senior Solutions Architect
The work of the DPO is enabled and enriched by our enterprising co-workers, whether we’re working with SI museum, archive or library staff who select and prepare collections for digital capture, or with IT professionals who ensure that digitized assets move smoothly and safely through our data ecosystem, or with educators who use digital assets to bring collections into classrooms around the world.
|
Codon adaptation is codon usage bias that results from selective pressure to increase the translation efficiency of a gene. Codon adaptation has been studied across a wide range of genomes and some early analyses of plastids have shown evidence for codon adaptation in a limited set of highly expressed plastid genes. Here we study codon usage bias across all fully sequenced plastid genomes which includes representatives of the Rhodophyta, Alveolata, Cryptophyta, Euglenozoa, Glaucocystophyceae, Rhizaria, Stramenopiles and numerous lineages within the Viridiplantae, including Chlorophyta and Embryophyta. We show evidence that codon adaptation occurs in all genomes except for two, Theileria parva and Heicosporidium sp., both of which have highly reduced gene contents and no photosynthesis genes. We also show evidence that selection for codon adaptation increases the representation of the same set of codons, which we refer to as the adaptive codons, across this wide range of taxa, which is probably due to common features descended from the initial endosymbiont. We use various measures to estimate the relative strength of selection in the different lineages and show that it appears to be fairly strong in certain Stramenopiles and Chlorophyta lineages but relatively weak in many members of the Rhodophyta, Euglenozoa and Embryophyta. Given these results we propose that codon adaptation in plastids is widespread and displays the same general features as adaptation in eubacterial genomes.
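As a rough illustration of what measuring codon usage bias involves (not the study's actual measures, which are more sophisticated), one can tally synonymous-codon frequencies in a coding sequence; the toy sequence and the AAA/AAG lysine family below are invented:

```python
# Rough sketch of measuring codon usage bias: tally codons in an
# in-frame coding sequence and compute each synonymous codon's share
# of its family. The toy sequence and the AAA/AAG (lysine) family
# are invented examples, not data from the study.

from collections import Counter

def codon_counts(cds):
    # Split a coding sequence into consecutive triplets and tally them.
    assert len(cds) % 3 == 0, "sequence must be in frame"
    return Counter(cds[i:i + 3] for i in range(0, len(cds), 3))

def relative_usage(counts, synonyms):
    # Fraction of a synonymous family contributed by each codon; a
    # strong skew toward one codon is a signature of codon adaptation.
    total = sum(counts[c] for c in synonyms)
    return {c: counts[c] / total for c in synonyms}

cds = "AAAAAGAAAAAAAAGAAA"   # toy gene: 4x AAA, 2x AAG
usage = relative_usage(codon_counts(cds), ["AAA", "AAG"])
# usage["AAA"] is about 0.67: this gene leans on AAA for lysine
```

Aggregating such skews across many genes, and comparing them with expression levels, is the kind of evidence used to argue that a preferred set of adaptive codons is shared across plastid lineages.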
|
Donkeys are different from horses and have slightly different health care considerations. However, their basic care needs of good diet, regular hoof care, dental care, and vaccinations are still the same.
Donkeys require fewer calories to maintain weight than a pony of the same size. Feed your donkey good-quality feeding hay, and restrict intake of rich feeds such as haylage or grass. Donkeys are prone to obesity, which is especially dangerous for several reasons. Donkeys are at high risk of mobilizing their excess fat for energy in times of stress, which can lead to fatty liver and failure of the liver and other organs. A stressful situation can be as simple as dental disease or a paddock move, or as complicated as another disease process, so preventative measures are most effective. A simple blood test available through your veterinarian can help diagnose any problems with your donkey or act as a wellness screen.
Donkeys have slightly different feet than horses. Their hooves are more upright, and they are also more prone to seedy toe (separation of the layers of the hoof). It is important to pick your donkeys’ feet out daily and look for early signs of seedy toe such as a bad smell, or mud/black material packing in abnormal crevices on the bottom of their feet. Seedy toe can cause lameness in donkeys, and can lead to chronic changes and hoof abscesses within the hoof if it progresses. Laminitis is also another common cause of lameness in donkeys, especially if they are overweight.
In general, donkeys are more stoic than horses and are more subtle in exhibiting signs of pain. It is important to know your donkey and its normal behaviours well. Take note of any changes in behaviour such as dullness, reluctance to move, or decreased appetite, and call your vet if you notice any of these signs.
It is also important to manage parasite or worm burdens in your donkey. A faecal egg count through your vet can help determine an individualized deworming plan for your donkey. Good management such as regular poo picking is also helpful in minimizing parasite burdens. If you have any horses paddocked together with or rotating through paddocks with donkeys, there is an additional parasite to be aware of. The donkey lungworm does not cause clinical signs in donkeys but can cause coughing, difficulty breathing, weight loss, and other signs in horses. Lungworm can be identified on faecal egg counts.
If you have any questions about your donkey, feel free to give us a call at 09 238 2471. The Donkey Sanctuary is also a great resource for more information on donkeys.
|
There are a lot of ways to define science. The broadest might characterize it as a systematic process for uncovering facts or explanations about the way the world works. From there, individual scientists sometimes differ over the exact features that distinguish science from other enterprises, but they all tend to accept the basic proposition that it is an empirical enterprise. The degree of agreement between theory and observation is what ultimately decides whether a scientific idea offers a good or bad explanation of natural phenomena.
This is why science is often depicted as a naturalistic enterprise. That is true, but there are different strains of naturalism, and it’s worth taking a moment to distinguish them. First, there is metaphysical or ontological naturalism. This is the view that the universe consists entirely of matter or other measurable stuff, governed exclusively by natural forces. This stands in contrast to methodological naturalism. Advocates of methodological naturalism grant that the universe may be filled with or influenced by supernatural or immaterial forces, but stipulate that those are irrelevant to science.
Both ideas have their weaknesses. In The Big Picture, Sean Carroll introduces the concept of poetic naturalism as a way of getting around them. Poetic naturalism (PN) grants breathing room for concepts that don’t necessarily relate to the steely, unforgiving rudiments of physical reality. It is traditional naturalism’s less conservative, more ecumenical progeny. PN grants room for higher-order concepts like consciousness and protons in a world populated by more fundamental stuff. It even allows room for the “supernatural”, so long as it produces some measurable effect and offers some explanatory merit.
There are certainly some physicists who are considerably less generous when it comes to the reality of emergent phenomena like biological evolution and human consciousness. Though Carroll adopts a more tolerant pose, it’s by no means revolutionary. Old school naturalism was never married to the idea that the only things science can meaningfully address are esoteric subatomic particles like quarks and gluons. It recognizes that everything observable is made from those things, but doesn’t automatically suggest that the only substantive way to talk about the workings of reality is in terms of fundamental physics. That is a perspective that seems exclusive to provincially minded physicists.
In this, Carroll is starting from a strange place. He is introducing a new concept in order to account for the way most skeptically minded critical thinkers – including most scientists – already think of the world. His reasons for doing so are clear. Viewed from the realm of ordinary experience – or even, for that matter, sciences like psychology or biology – the picture of the universe that comes from studying fundamental physics is extraordinarily weird. Basic concepts like time and causality begin to look less and less essential to the way things work. The physical world of ordinary experience is mostly empty space permeated by fields, sprinkled with tiny particles that simultaneously occupy every possible state.
That’s all very bizarre. On the scales most people are used to, the ordered flow of time and the causal connections between events are conceptually indispensable. But in the world of quantum physics, it seems like they are superfluous. Our best theories seem to work perfectly well without them. In this sense, they only emerge as an artifact of our particular frame of reference: relatively large, slow moving creatures inhabiting a certain spot in the universe.
According to Carroll, anything beyond the world of the subatomic – the infinitesimal, fuzzy world of point particles and force fields – is an emergent feature that is somehow less real than the elementary stuff of which it is made. Nowhere does Carroll say this outright. Instead, it is implicit as the motivating core of poetic naturalism. It is a philosophy of science that Carroll has invented as a way to avoid saying flowers and cells and eyeballs are less real than neutrinos and electromagnetism.
The basic argument at the heart of his notion of poetic naturalism is that the truth or veracity of a scientific idea is inextricably linked to its usefulness. It is a reworking of the old instrumentalist doctrine that it doesn’t matter so much whether or not a theory is true in any axiomatic or Platonic sense. That is, science needn’t hang its hat on whether or not it is about things that actually exist “out there” in the universe. The important thing is that it reliably yields accurate predictions.
Carroll’s innovation is to essentially turn instrumentalism on its head. Back in the pioneering days of quantum physics, in the first decades of the 20th century, scientists (and philosophers) struggled to reconcile the probabilistic theories they were uncovering with the apparently deterministic world in which they lived. Unmistakably, it is puzzling. The quantum world is one of imperfect knowledge, where your ability to know one feature of a particle in great detail (say its velocity) actually impinges on the precision with which you can measure its position. Systems are described according to wave-functions, where their state is understood as an evolving probability distribution. A particle has good odds of being here and having these properties, poorer odds of being there and having those properties, and so forth. Prior to observation, they occupy a “superposition” of all possible states. This is the world of Schrodinger’s infamously dead and alive cat. It’s almost unsettlingly counter-intuitive. It also happens to be true, as illustrated by the famous double-slit experiment.
The deep peculiarity of the quantum world led to a purely utilitarian interpretation of the relationship between the emerging physics and experimental results. Though the relevant mathematics describes the behavior of subatomic particles in terms of collapsing wave functions, physicists like Niels Bohr adopted the position that one could remain agnostic about the concrete physical nature of both the particles and their wave-like behavior. All that matters is that describing them in that way yields predictions that are upheld by experiment. This isn’t a view that treats wave functions and point particles as convenient fictions. Instead, it simply says the precise physical nature of the objects of study doesn’t matter. The important thing is that the physics yields robust predictions.
This kind of agnosticism doesn’t appeal to Carroll. In his view, the suite of subatomic particles and fields described by modern quantum mechanics are the real deal. Quarks and gluons and neutrinos and photons are the raw building blocks of reality. Everything else is emergent. Carroll’s poetic naturalism is an inverted instrumentalism. Reality is built of interacting particles and fields. Higher order theories, like Darwinian evolution and plate tectonics, are just useful approximations. But since PE grants that a given theory’s truth hinges on its usefulness within a given domain of application, the fact that higher level theories aren’t tethered by direct reference to quantum phenomena isn’t a problem. Because Darwinism and plate tectonics work on the relevant scales, they can be thought of as just as true as quantum field theory within the appropriate domains. According to Carroll’s PE, the elegant mathematics of fundamental physics is simply a more fine-grained description of the fundamental workings of reality.
To a degree, this particular flavor of naturalism is a sympathetic view. But there is also a point at which PE’s reliance on an instrumentalist account of science begins to rob it of some of its resilience. Good scientific ideas are indeed those we find most useful. Doubtlessly, this is a critical part of how science works. Yet it is also far from the whole story.
For one thing, PE remains rather sparse on the issue of what does and does not count as useful; Carroll never spells out his conception of theoretical utility. The most obvious and objective way to evaluate a theory’s usefulness is to test how accurately the predictions it generates match observation, and throughout most of The Big Picture, Carroll’s version of scientific usefulness seems to boil down to that correspondence between prediction and observation. That’s a pragmatic – if somewhat tepid – view. However, there are points where it is clear Carroll is invoking a picture of utility that might lead us into unnecessarily murky waters.
If all Carroll means by utility is a capacity to generate reliable predictions, it would be hard to quibble. It is conceivable that we might arrive at algorithms that allow us to make good predictions about the behavior of systems by pure chance or brute trial-and-error, all the while remaining entirely agnostic about the underlying processes. In some sense, this is always the case. For most people, the most familiar scientific theories are those that deal with large objects moving at relatively low speeds over relatively small distances. It makes sense to talk in terms of solid fingers impacting a solid keyboard. At that scale, talking instead of tiny particles and fields and empty space is unwieldy. It doesn’t buy us much in the way of understanding. There isn’t a clear connection between what is going on at subatomic scales and the kinds of explanations that work in the realm of evolving biological systems or dynamic geological processes. This is the case for a large swath of science, from neurology to planetary astronomy. Scientists build robust, powerful explanations for how the world works and remain agnostic about the precise mechanisms that link the behavior of the big with the behavior of the very small.
Inasmuch as utility is a measure of veracity, it is also true that higher order theories are true in a more concrete ontological sense. Genes and minerals are not just reified constructs, invented for the purpose of making predictions about how organisms change over time or how different temperatures and pressures yield different kinds of crystalline structures. They aren’t posited with a wink and a nod to a deeper understanding that what is really going on is explained in terms of particle physics. They are things that actually exist, out there in the world. Evolutionary biology and geology are good theories because they make reliable predictions and consistently avoid falsification. And they make those predictions because they present accurate models of how the world actually works. This is a considerable step beyond instrumentalism. There is no obvious contradiction between thinking of a cell as a fundamental component of a biological system and thinking of a cell as something that emerges from the interactions of subatomic particles. Subatomic particles are real. And so are all the higher order structures they comprise, from protons and planets to tadpoles and trees.
As a criticism of poetic naturalism, this might look like a trivial complaint rattling around in a big bag of pedantry. Carroll isn’t going to dispute that there really are such things as nucleotide sequences, neurons, and metamorphic rocks. But PE begins to break down when it strays from the cold, decisive reaches of traditional science. In science, utility has long been understood as an important – if partial – measure of truth. But there comes a point in The Big Picture where Carroll makes moves to substantially broaden the definition of utility. It ceases to be a circumscribed instrument for talking about how much fidelity there is between theory and observation and becomes dangerously tied up in subjective preferences.
Consider a definition of instrumentalism where usefulness is defined much more widely than might be captured by agreement between prediction and observation. An idea’s utility can be more broadly construed as a measure of how well it works to achieve or justify an end. For instance, some wealthy billionaires have the aim of maintaining their wealth and accruing more. In this, they may find libertarian economic systems enormously useful. Does that make the underlying principles true? Under a broad enough definition of utility, it obviously does. The fact that extreme libertarian philosophies don’t offer good solutions to problems of third-party enforcement or public goods dilemmas is hardly a problem. Likewise, the fact that they make thoroughly erroneous assumptions about the nature of economic systems and human behavior is irrelevant.
As a groundwork for any kind of epistemology (that is, a theory of knowledge and how to go about gaining it) this seems garishly ridiculous. Carroll, I suspect, would instantly object. Yet he puts precisely this kind of lily-livered instrumentalism to use in his defense of compatibilist free will. For the unfamiliar, compatibilism is the stance that conscious agents can exercise a narrow range of agency within otherwise entirely deterministic systems. It accepts that we are built of organs that are built of cells that are built of molecules that are built of atoms that are built of subatomic particles and fields. Likewise, it accepts that our minds emerge from the meat in our skulls, itself built of physical ingredients all the way down to the subatomic realm. But it posits that consciousness, as an emergent product of otherwise physical, deterministic systems, somehow exerts some amount of sovereignty over the natural world. To put it a little more simply, in the world of compatibilism, particles bump into particles, building more and more intricate and sophisticated structures, until they pass a threshold of complexity beyond which they produce a system sufficiently complicated that it can escape that causal chain.
There are plenty of people who find the idea of strict determinism unpalatable, for a lot of different reasons. Agency is a fundamental component of humanity’s self-conception. It is tied up in religious notions of sin and damnation and salvation. Similarly, it exerts considerable social force when it comes to legal notions of punishment and justice. It is intrinsic to the embarrassment we feel after making a mistake and the triumph we feel after accomplishing a goal. Whatever science might say, it certainly feels like we make choices. Free will is one of the founding precepts of commonplace ideas about what it means to be human.
The problem is, our best scientific understanding of the natural world leaves less and less room for it. The more we understand about how humans work as biological systems, the less space there is for the notion that we are boundlessly willful agents. Consequently, the idea of so-called libertarian free will – that human preferences and decisions are an entirely unconstrained, top-down affair – has basically been consigned to the philosophical dustbin. Very few people who take the results of science seriously believe that humans can do whatever they want, and those who do go through some brutally torturous intellectual gymnastics to get there. Science teaches us about the world as it actually is, not as we want it to be. In the domain of consciousness and identity and free will, it teaches us this: We are our brains and, to a lesser but still meaningful extent, our bodies, and those systems are governed by the same rules as everything else in the universe.
Now, for the sake of intellectual honesty, it is worth pointing out that science has not ruled out the possibility of free will. It has merely (if one can describe such a monumental reordering of the human worldview so flippantly) shrunk the domain in which free will can operate. A few centuries ago, it was a free range affair, able to roam wherever philosophers and theologians cared to take it. Now, it lives in an increasingly cramped paddock. The scope for specifying what free will is and what it is capable of is constantly shrinking. One can posit free will, but the move itself is extraneous and costly. The only work it might do is explain why we perceive ourselves as volitional agents. That could be satisfying on an existential level, but it’s a little strange, considering the bulk of science consists of rigorous attempts to prevent our perceptions about what the world is like from fooling us about how the world actually works. Simultaneously, arguing for free will introduces the burden of explaining why the human brain is exempt from the rules that govern all other matter. As the neurobiologist Robert Sapolsky put it, free will has become a kind of psychological god-of-the-gaps argument. It doesn’t carry any explanatory weight, but interested parties can still find room to invoke it if they wish.
In Carroll’s case, the argument for free will rests on the grounds that the concept remains useful, particularly when it comes to issues of responsibility and punishment (or reward). He doesn’t seem to care whether or not that description is compatible with more fundamental descriptions of reality, be they quantum mechanical, molecular, neurobiological, or evolutionary. Only that it maintains a domain of utility.
Conceivably, one could define wider domains in which less circumspect claims are thought to be useful. Many religious sects find the idea of libertarian free will a useful component of cosmologies of eternal reward and suffering. That reasoning is, of course, painfully circular. Humans are imbued with free will as a means to earn reward or punishment. And in the absence of free will, the concepts of eternal punishment and reward become ethically unjustifiable nightmares – the workings of a capricious hand in a cruel universe.
Carroll is an atheist, so these kinds of cosmic, supernatural reward schemes don’t appeal to him. But in his defense of free will, he makes just this kind of argument. Our criminal justice system is built on penalties and punishments. If we don’t have free will, the argument goes, that system doesn’t make much sense. Well, yes and no. The idea that our actions are determined outside the realm of conscious, willful influence is perfectly compatible with the idea that behavior is sensitive to environmental inputs. In this framework, punishment can serve two ends: discouraging repeat behavior and keeping dangerous people away from the rest of us. A system sensitive to external influence can still be entirely deterministic.
However, the idea that punishment is an end unto itself or that we should endorse the idea of free will as a means of justifying the existing system of criminal justice doesn’t hold a lot of water. Modern criminal justice systems – particularly in the United States – use the concept of punishment to perverse and unjustifiable ends. We shouldn’t use free will as a prop to stabilize that old and barbaric edifice. Rather, we should use our growing understanding that behavior isn’t subject to very much volitional control as an impetus for reform. In Carroll’s view, we ought to hold onto a constrained version of free will as a means of justifying the status quo. The more enlightened (and scientifically consistent) perspective is that we ought to use our understanding of what actually shapes human behavior to build a more humane and effective criminal justice system.
Curiously, Carroll even goes so far as to hitch the idea of free will to the uncertainty of future actions. Under most conditions, we can’t predict human behavior with very much precision. Therefore, he argues, it must be subject to some sort of top-down control. Following that line of thinking, he says we will have progressively less free will as our understanding of human neurobiology grows and our capacity to forecast human behavior improves. This is a step beyond instrumentalism. It straddles an uncomfortable boundary with the kinds of idealist fantasies he had earlier rejected in dealing with “quantum consciousness” and the hazy spiritualist belief that consciousness precedes existence. Here, he is not just saying that we should hold onto the idea of free will because it serves beneficent societal purposes. He is actually saying that the existential state of free will is caught up in how well we understand the human brain.
Reality is what it is, regardless of whether or not humans understand it. And that is the fatal flaw of poetic naturalism. By binding his epistemology to an excessively permissive breed of instrumentalism, Carroll is suggesting that there are cases in which truth is anchored to human reasoning. That misses a more nuanced point. Our ability to uncover truth is inextricably tied to the power of human reasoning. What is and is not true is not. Either free will exists or it doesn’t. How useful we find the concept is irrelevant.
Carroll’s use of PE to defend a constrained version of free will is damning, primarily because it is easy to dismantle. But the flaw that cripples the poetic naturalist’s conception of free will also cripples his conception of everything else. It doesn’t make sense to say that everything that can’t be reduced to particle physics is only real insofar as humans find it useful, because it binds all of reality to human understanding. The fact that our understanding of vision has yet to be reduced to the equations of quantum field theory doesn’t mean that eyeballs and retinas and optic nerves don’t really exist on a fundamental level. It only means that there are gaps in our understanding. Hopefully, they will one day be filled. But it is also entirely possible that they won’t. We may never be able to explain certain higher order phenomena in terms of fundamental physics. That doesn’t mean we need to invent a new version of naturalism to account for them or that we can use the gaps in our knowledge as an excuse to entertain any brand of wishful thinking we find convenient.
|
A Short History of Applause
Posted Dec 15, 2013
Last week, the Korean Central News Agency released a 2,700-word document that announced the execution of Jang Song Thaek, the 67-year-old uncle of supreme leader Kim Jong Un. A womanizer who led “a dissolute and depraved life,” a “despicable human scum” who was worse than a dog, he’d “perpetrated anti-party, counter-revolutionary factional acts in a bid to overthrow the leadership of our party and state.” His arrogant and insolent malfeasances included “unwillingly standing up from his seat and half-heartedly clapping” when the supreme leader showed up.
Supreme leaders are often quick to take offense; and lesser mortals tend to give themselves away when they don't cheer loud enough. As late as the 18th century on the Lower Mississippi, Natchez overlords had the power of life and death over their subjects, and were never approached without being howled at. “They do the same when they retire, and they retire walking backward,” remembered the Jesuit missionary Pierre de Charlevoix. In 19th-century Fiji, the Methodist missionary Joseph Waterhouse reported that betters were warmly addressed: “Clapping of hands is usual after a person of rank has partaken of refreshment, smoked a cigar, or sneezed;” because after all, “Fijians have been slain for disrespectful approach to chiefs.” And in the 20th century, Aleksandr Solzhenitsyn wrote about a party conference held in honor of Josef Stalin, with “stormy applause rising to an ovation” by an exhausted audience that was afraid to be quiet. After 11 minutes, the director of a local paper factory had the nerve to sit down; he was arrested that night. “Don’t ever be the first to stop applauding,” were his interrogator’s words.
2057 years ago, just before spring began, Julius Caesar was put to death by members of the Roman senate. They were offended that he’d accepted a list of excessive honors—a life dictatorship, a perpetual censorship, a Father of his Country tag, and a golden throne. But they hated him most of all because, when they presented him with his honors list, he didn’t bother to get up. As the civil servant Suetonius Tranquillus elaborated: “According to some accounts he would have risen had not Cornelius Balbus prevented him; according to others, he made no such move and grimaced angrily at Gaius Trebatius who suggested this courtesy.” So more than 60 conspirators banded together against him, and 23 daggers found their way into his flesh. That blow was struck for the republic.
But eventually, the empire struck back. Caesar’s principal conspirators, Gaius Cassius and Marcus Brutus, were slaughtered on a field at Philippi by Caesar’s heir. Every one of his successors eviscerated the senate; and the last member of his dynasty, Nero Claudius Caesar Augustus Germanicus, was appreciated for it. As Suetonius told the story: "So captivated was he by the rhythmic applause of a crowd of Alexandrians from a fleet which had just put in, that he sent to Alexandria for more. He also chose some young knights, and more than 5000 sturdy ordinary youths, whom he divided into large groups to learn the Alexandrian method of applause." They were known as his "Bricks," or "Rooftiles," or "Bees."
Colonies of the honeybee, Apis mellifera, produce about 10 queens. But the first of those queens to emerge searches out the others and kills them, or is killed herself. Afterwards, the survivor’s tens of thousands of sterile workers clean cells, feed brood, store nectar, forage for pollen, and defend the hive–sacrificing their viscera, and their lives, with their barbed stings. But the queen–who grows up to twice their length, and lives up to 50 times as long–specializes as an egg-laying machine. To whom deference is due.
Seeley, Thomas. 1995. The Wisdom of the Hive. Cambridge: Harvard University Press.
Betzig, Laura. 2010. The end of the republic. In P. M. Kappeler and J. Silk (eds.), Mind the Gap: Primate Behavior and Human Universals, pp. 153-168. Berlin: Springer.
|
A homeopathic drug extracted from a plant native to the US, and used as a traditional medicine there in the 19th century, promises a cure for dengue, says a study by the state-run King Institute of Preventive Medicine.
The King Institute team headed by a Chennai-based homoeopath administered the drug extracted from Eupatorium perfoliatum to 50 patients with secondary dengue and found all of them recovered. "The platelet counts came under control for almost all patients and blood tests showed marked improvement," said King Institute director Dr P Gunasekaran. The study, led by Dr N R Jayakumar of Madan Homoeo Clinic, was presented at an international symposium on 'Challenges and strategies in the prevention and management of viral infections' at the Central Leather Research Institute.
Jayakumar said it wasn't a new idea to administer the drug to patients with dengue. Earlier the drug was given to patients in Delhi and Sri Lanka
during epidemics. In June, the drug was administered to dengue patients at the Government Rajaji Hospital in Madurai. "We wanted to scientifically prove the drug is efficient. The patients were given two doses a day. The platelet count of all the patients improved. The good thing about this drug is that it can also be given alongside allopathic medicines," Dr Jayakumar said.
"In allopathic medicine, there is no drug for this disease. The only treatment is IV fluids to replace body fluids. Most patients we chose for the study had platelet count less than 10,000. We prevented death and blood transfusion in all the 50 patients who took this drug," said Dr Gunasekaran.
Dengue virus is spread by the Aedes mosquito. The symptoms include fever, headache, body pain and rashes. Some patients develop life-threatening dengue hemorrhagic fever, resulting in bleeding, low levels of blood platelets and blood plasma leakage.
|
A newly developed portable MRI device can identify intracranial hemorrhages, offering life-saving information and helping doctors quickly make life-or-death determinations, particularly in areas or scenarios where access to sophisticated brain imaging is not readily available.
This device can help save lives, especially in rural hospitals or developing countries.
The device, known as the Portable Point-of-Care MRI system, can be wheeled down a hospital hallway, costs a fraction of traditional MRI technologies, and can be used almost anywhere by medical technicians with even minimal training.
Scientists at Yale University examined the efficacy of the device. They compared the results of portable MRI scans of 144 patients at Yale-New Haven Hospital with results obtained from traditional neuroimaging scans. Specifically, the portable MRI was used to scan brain injury patients at the bedside.
Neuroradiologists interpreting images acquired by the portable MRI device correctly identified 80% of intracerebral hemorrhages.
This is the first study to validate the appearance and clinical implications of brain hemorrhage on images from a portable MRI device.
- Mazurek, M.H., Cahn, B.A., Yuen, M.M. et al. Portable, bedside, low-field magnetic resonance imaging to evaluate intracerebral hemorrhage. Nat Commun 12, 5119 (2021). DOI: 10.1038/s41467-021-25441-6
|
Caffeine is found in a variety of foods and drinks, such as coffee, tea, cola and chocolate. Most of us do not give much thought to our morning cup of coffee. In fact, many of us feel that it completes our mornings and that without it we wouldn’t be able to get through those hectic days. But how does our daily dose of caffeine affect our bodies?
Caffeine works in our body as a stimulant. It works to speed up our central nervous system. The fact is that caffeine is the most widely used drug in the world. While it occurs naturally in things like coffee, it is also added to a variety of medications for medicinal purposes.
Caffeine acts as a stimulus to the brain: it postpones fatigue (which is why we reach for coffee when we haven’t had much sleep) and it can alter your mood. Caffeine has also been known to enhance the performance of tasks that require endurance.
Caffeine in small amounts carries little risk for an otherwise healthy adult. However, for people who are predisposed to anxiety and/or panic attacks, caffeine can cause flare-ups of symptoms, as well as negative physical effects on the body.
If you are susceptible to anxiety or panic attacks, caffeine can work against you. This is because caffeine sets off your “fight or flight” reaction. This is your body’s natural reaction to situations where your brain perceives that you are in immediate danger. It is a basic human survival mechanism. While this is a useful survival tool, it will likely not do you much good through your day at work.
For those who do not suffer from anxiety or panic disorder, this reaction can lead to increased alertness, productivity, and endurance. But to someone who does grapple with anxiety or is predisposed to anxiety and panic attacks, this can cause a flare-up of symptoms such as nervousness, increased heart rate, and a feeling of impending “doom”. Caffeine can also cause muscle twitches, negative mood fluctuations, sweating, agitation and irritability for those who are dealing with anxiety.
Caffeine in moderation poses little risk. But for those who struggle with anxiety and panic disorder, decreasing their caffeine intake can help alleviate the symptoms. Understanding how your body responds to caffeine is the first step. Keep track of your caffeine intake and watch your body’s reaction as you slowly decrease the amount. Take note of your feelings as your caffeine intake comes down. Maybe you really don’t need that morning coffee after all.
|
Nickel phosphide (Ni2P) is a gray crystalline solid, also known as dinickel phosphide. It dissolves in aqua regia, but is insoluble in cold water, dilute acids and dilute alkalis. Nickel phosphate is the raw material from which it is produced.
|CAS No.: 12035-64-2||EINECS No.: 234-828-0||Molecular Formula: Ni2P||Molecular Weight: 148.36|
|Melting Point: 1112℃||Density: 6.31 g/cm³|
Nickel phosphide is mainly used in chemical (electroless) nickel plating.
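As a quick sanity check, the molecular weight listed in the table above follows directly from the molecular formula Ni2P. The sketch below is illustrative, not part of the original datasheet; the atomic masses are assumed from the IUPAC standard atomic weights.

```python
# Reproduce the tabulated molecular weight of Ni2P from standard
# atomic masses in g/mol (values assumed from the IUPAC table).
ATOMIC_MASS = {"Ni": 58.6934, "P": 30.973762}

def molar_mass(formula):
    """Sum atomic masses weighted by atom counts, e.g. {"Ni": 2, "P": 1}."""
    return sum(ATOMIC_MASS[element] * count for element, count in formula.items())

ni2p = molar_mass({"Ni": 2, "P": 1})
print(round(ni2p, 2))  # 148.36, matching the molecular weight in the table
```

The same helper can check any other formula built from elements in the mass table.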
|
Ribavirin is given in a mist form along with oxygen. The mist can be delivered through a large, clear plastic hood placed over the head. Older children usually receive the medicine through an oxygen tent over the bed or through a face mask. Treatment usually lasts 3 to 5 days.
How It Works
Ribavirin prevents the respiratory syncytial virus from reproducing.
Why It Is Used
Ribavirin is rarely used. But it may be considered as treatment for people at high risk for bronchiolitis or pneumonia, which can develop as a complication of RSV.
How Well It Works
In some children, ribavirin may:
- Shorten an RSV illness.
- Reduce the severity of or decrease the serious problems of lower respiratory infection and complications of RSV infection.
Ribavirin may reduce the spread of RSV.
Side effects can occur with this medicine. See Drug Reference for a full list of side effects. (Drug Reference is not available in all systems.)
What To Think About
Ribavirin may make RSV infection and complications more severe.
It's unknown if this medicine has long-term effects on a person or on the person's subsequent children.
Medicine is one of the many tools your doctor has to treat a health problem. If your child takes medicine as your doctor suggests, it will improve your child's health and may prevent future problems. If your child doesn't take the medicines properly, his or her health (and perhaps life) may be at risk.
There are many reasons why people have trouble taking their medicine. But in most cases, there is something you can do. For suggestions on how to work around common problems, see the topic Taking Medicines as Prescribed.
Advice for women
If you are pregnant, avoid close contact with a child who is getting ribavirin. If you are planning to get pregnant soon and want to be around your child during this treatment, talk to your doctor about how you can prevent pregnancy until the treatment is complete. It's not known if a fetus may develop birth defects if exposed to this medicine.
Follow-up care is a key part of your child's treatment and safety. Be sure to make and go to all appointments, and call your doctor if your child is having problems. It's also a good idea to know your child's test results and keep a list of the medicines your child takes.
Complete the new medication information form (PDF) to help you understand this medication.
Primary Medical Reviewer: Susan C. Kim, MD - Pediatrics
Specialist Medical Reviewer: John Pope, MD - Pediatrics
Current as of: June 25, 2012
|
Learn about this topic in these articles:
...consist of 19 rugged islands and scores of islets and rocks situated about 600 miles (900 km) west of the mainland. The largest island, Isabela (Albemarle), rises to 5,541 feet (1,689 metres) at Mount Azul, the archipelago’s highest point. The second largest island is Santa Cruz.
...craters, and cliffs. The largest of the islands, Isabela (Albemarle), is approximately 82 miles (132 km) long and constitutes more than half of the total land area of the archipelago; it contains Mount Azul, at 5,541 feet (1,689 metres) the highest point of the Galapagos Islands. The second largest island is Santa Cruz.
|